This morning, I had planned to write something about the historical figure behind St. Nicolaus (Santa Claus for his friends) who in Germany fills children's shoes with sweets and small presents on the night of December 6th. On my way to IUB, I had heard a radio program about him: He lived in the fourth century in what is now Turkey, was a bishop, and provided three sisters, who were so poor that they would have had to prostitute themselves, with balls of gold so they could marry. Some 700 years after his death, some knights brought his body to Bari in Italy to save it from the Arabs, and then parts of his body were distributed all over Europe. The character of this saint also changed a lot over the centuries, from being the saint of millers to the saint of drinkers (apparently, the Russian word for getting drunk is derived from his name) to the saint of children.

But this is not what I am going to talk about.

Rather, I would like to point out this news: Heise is a German publishing house whose computer journals are, not only in my mind, by far the best here. They also run a news ticker, comparable I would say to Slashdot, which hosts a discussion forum. Now a court (Landgericht Hamburg) has ordered the Heise publishing house to make sure that there is no illegal content in the forum (and not merely to delete entries once it is pointed out to them that they are illegal). Otherwise, any lawyer could serve them with a costly cease-and-desist notice ('Abmahnung').

The court ruled in the case of a posting providing a script to run a simple denial-of-service attack against the server of a company that was discussed in the original article. The court decided that Heise must make sure that no such illegal content is distributed via their website. Heise will challenge this ruling at the next higher level.

But if it prevails, it means that in Germany anybody providing any possibility for users to leave comments is potentially threatened with fines, no matter whether it is a forum, a guest book, or the comment section of a blog: You could simply post an anonymous comment with some illegal content and then sue the provider of the website for publishing it. This would be the end of any unmoderated discussion on the German part of the internet. Just another case where a court shows complete ignorance of how the internet works.

So, comment, as long as I still let you!

(note: I had written this yesterday, but due to a problem with blogspot.com I could not post it until today)

## Wednesday, December 07, 2005

## Thursday, November 17, 2005

### What is not a duality

A couple of days ago, Sergey pointed me to a paper, Background independent duals of the harmonic oscillator by Viqar Husain. The abstract promises to show that there is a duality between a class of topological, and thus background independent, theories and the harmonic oscillator. Sounds interesting. So, what's going on? This four-and-a-half page paper starts out with one page discussing the general philosophy: how important GR's lesson to look for background independence is, and how great dualities are. The holy grail would be to find a background independent theory that has some classical, long wavelength limit in which it looks like a metric theory. For dualities, the author mentions the Ising/Thirring model duality and of course AdS/CFT. The latter already describes a metric theory in terms of an ordinary field theory, but the AdS theory is not background independent: it is an expansion around AdS, and one has to maintain the AdS symmetries at least asymptotically. So he looks for something different.

So what constitutes a duality? Roughly speaking, it means that there is a single theory (defined in an operational sense: the theory is the collection of what one could measure) that has at least two different looking descriptions. For example, there is one theory that can either be described as type IIB strings on an AdS5xS5 background or as N=4 strongly coupled large N gauge theory. Husain gives a more precise definition when he claims:

Two [...] theories [...] are equivalent at the quantum level. "Equivalent" means that there is a precise correspondence between operators and quantum states in the dual theories, and a relation between their coupling constants, at least in some limits.

Then he goes on to show that there is a one-to-one map between the observables of some topological theories and the observables of the harmonic oscillator. Unfortunately, such a map is not enough for a duality in the usual sense. Otherwise, all quantum mechanical theories with a finite number of degrees of freedom would be dual to each other: All have equivalent Hilbert spaces, and thus operators acting on one Hilbert space can also be interpreted as operators acting on the other. But this is only kinematics. What distinguishes the harmonic oscillator from, say, the hydrogen atom is the dynamics: They have different Hamiltonians. By the above argument, the oscillator Hamiltonian also acts in the hydrogen atom Hilbert space, but it does not generate the dynamics there.

So what does Husain do concretely? He focusses on BF theory on space-times of the globally hyperbolic form R x Sigma for some compact Euclidean 3-manifold Sigma. There are two fields, a 2-form B and an (abelian, for simplicity) 1-form A with field strength F=dA. The Lagrangian is just B wedge F. This theory does not need a metric and is therefore topological.

Classically, the equations of motion are dB=0 and F=0. For quantization, Husain performs a canonical analysis. From now on, indices a,b,c run over 1,2,3. He finds that epsilon_abc B_bc is the canonical momentum for A_a and that there are first class constraints setting F_ab=0 and the spatial dB=0.

Observables come in two classes, O1(gamma) and O2(S), where gamma is a closed path in Sigma and S is a closed 2-surface in Sigma. O1(gamma) is given by the integral of A over gamma, while O2(S) is the integral of B over S. Because of the constraints, these observables are invariant under deformations of S and gamma and thus only depend on the homology classes of gamma and S. So one can think of O1 as living in H^1(Sigma) and O2 as living in H^2(Sigma).

Next, one computes the Poisson brackets of the observables and finds that two O1's or two O2's Poisson commute while {O1(gamma),O2(S)} is given in terms of the intersection number of gamma and S.

As the theory is diffeomorphism invariant, the Hamiltonian vanishes and the dynamics are trivial.

Basically, that's all one could (or should) say about this theory. However, Husain goes on: First, he specialises to Sigma = S1 x S2. This means that (up to equivalence) there is only one non-trivial gamma (winding around the S1) and one S (wrapping the S2). Their intersection number is 1. Thus, in the quantum theory, O1(gamma) and O2(S) form a canonical pair of operators with the same commutation relations as x and p. Another example is Sigma = T3, where H^1 = H^2 = R^3, so this is like 3d quantum mechanics.

Husain chooses to form combinations of these operators like the creation and annihilation operators of the harmonic oscillator. According to the above definition of "duality", this constitutes a duality between the BF-theory and the harmonic oscillator: We have found a one-to-one map between the algebras of observables.

What he misses is that there is a similar one-to-one map to any other quantum mechanical system: One could directly identify x and p and use that for any composite observables (for example, for the particle in an arbitrarily complicated potential). Alternatively, one could take any orthonormal basis e1, e2, ... of a (separable) Hilbert space and define ladder operators a^+ mapping e(i) to e(i+1) and a acting in the opposite direction. Big deal. This map lifts to a map from all operators acting on that Hilbert space to the observables of the BF-theory. So, by the above definition of "duality", all systems with a finite number of degrees of freedom are dual to each other.
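The ladder-operator construction is easy to make explicit on a truncated basis. Here is a small numpy sketch (the truncation to n basis vectors is my own device for illustration, not something in the paper): the weighted shift a^+ e_i = sqrt(i+1) e_(i+1) and its adjoint reproduce the oscillator commutator [a, a^+] = 1 on all but the last basis state, where the truncation necessarily fails.

```python
import numpy as np

n = 12  # truncate the separable Hilbert space to basis vectors e_0 ... e_(n-1)

# Annihilation operator: a e_i = sqrt(i) e_(i-1); its transpose is the
# weighted shift a^+ e_i = sqrt(i+1) e_(i+1) from the text.
a = np.diag(np.sqrt(np.arange(1, n)), k=1)
a_dag = a.T

comm = a @ a_dag - a_dag @ a

# [a, a^+] equals the identity on the first n-1 basis states ...
print(np.allclose(comm[:-1, :-1], np.eye(n - 1)))  # True
# ... and only the last diagonal entry shows the truncation artifact, -(n-1):
print(comm[-1, -1])
```

Any orthonormal basis of any separable Hilbert space carries such a pair, which is exactly why this kind of map carries no dynamical information.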

What is missing, of course (and I should not hesitate to say that Husain realises this), is that this is only kinematical. A system is not only given by its algebra of observables but also by its dynamics or time evolution or Hamiltonian: One has to single out one of the operators in the algebra as the Hamiltonian of the system. (Leaving issues of convergence aside, strictly one only needs time evolution as an automorphism of the algebra and can later ask if there is actually an operator that generates it. This is important in the story of the LQG string, but not here.)

For BF-theory, this operator is H_BF=0, while for the harmonic oscillator it is H_o = a^+ a + 1/2. So the dynamics of the two theories have no relation at all. Still, Husain makes a big deal out of this by claiming that the harmonic oscillator Hamiltonian is dual to the occupation number operator in the topological theory. So what? The occupation number operator is just another operator with no special meaning in that system. But even more, he stresses the significance of the 1/2: The occupation number operator doesn't have it, and if for some (unclear) reason one took that operator as a generator of something, there would not be any zero point energy. And this might be relevant for the cosmological constant problem.
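The relation between the two operators is nothing but a rigid shift of the spectrum by 1/2, which a few lines of numpy make plain (again on my own toy truncation to n levels):

```python
import numpy as np

n = 12
a = np.diag(np.sqrt(np.arange(1, n)), k=1)  # truncated annihilation operator
N = a.T @ a                                 # occupation number operator a^+ a
H = N + 0.5 * np.eye(n)                     # oscillator Hamiltonian a^+ a + 1/2

# On the truncated space, N is exactly diag(0, 1, ..., n-1), so the two
# spectra differ by the constant zero point energy 1/2 and by nothing else.
print(np.linalg.eigvalsh(N)[:3])  # first eigenvalues 0, 1, 2
print(np.linalg.eigvalsh(H)[:3])  # shifted to 0.5, 1.5, 2.5
```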

What is that supposed to mean? There is one (as it happens, background independent) theory that has a Hamiltonian. But if one takes a different, random operator as the Hamiltonian, that one has its smallest eigenvalue at 0. What does that have to say about the cosmological constant? Maybe one should tell these people that there are other dualities that identify not only the structure of the observable algebra but the dynamics as well. But, dear reader, be warned that in the near future we will read or hear that background independent theories have solved the cosmological constant problem.

Let me end with a question that I would really like to understand (and probably there is a textbook answer to it): If one quantises a system the way we have done it for the LQG string, one does the following: One singles out special observables, say x and p (or their exponentials), and promotes them to elements of the abstract quantum algebra (the Weyl algebra in the free case). Then there are automorphisms of the classical algebra that get promoted to automorphisms of the quantum algebra in a straightforward way. For the string, those were the diffeomorphisms, but take simply the time evolution. Then one uses the GNS construction to construct a Hilbert space and tries to find operators in that Hilbert space that implement those automorphisms: Let a_t be the automorphism of the algebra sending observable O to a_t(O), and p the representation map that sends algebra elements to operators on the Hilbert space. Then one looks for unitary operators U(t) (or their hermitian generators) such that

p( a_t(O) ) = U(t)^-1 p(O) U(t)

In the case of time evolution, this yields the quantum Hamilton operator.

However, there is an ambiguity in the above procedure: If U(t) fulfils the above requirement, so does e^(i phi(t)) U(t) for any real phase phi(t). Usually there is an additional requirement, as t comes from a group (R in the case of time translations, but Diff(S^1) in the case of the string), and one could require that U(t1) U(t2) = U(t1 + t2), where + is the group law. This does not leave much room for the t-dependence of phi(t). In fact, in general it is not possible to find phi(t) such that this relation is always satisfied. In that case we have an anomaly, and this is exactly the way the central charge appears in the LQG string case.

Assume now that there is no anomaly. Then it is still possible to shift phi by a constant times t (in the case of a one dimensional group of automorphisms, read: time translations). This does not affect any of the relations about the implementation of the automorphisms a_t or the group representation property. But in terms of the Hamiltonian, this is nothing but a shift of the zero point of energy. So it seems to me that none of the physics is affected by this. The only way to change this is to turn on gravity, because the metric couples to this shift in the form of a cosmological constant.
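For the harmonic oscillator this is a two-line check. In the numpy sketch below (my own illustration; the truncation to n levels and the concrete numbers t and c are arbitrary), e^(i c t) U(t) implements exactly the same automorphism as U(t), while its generator is the Hamiltonian shifted by the constant c:

```python
import numpy as np

n, t, c = 20, 0.7, 0.3
E = np.arange(n) + 0.5                  # oscillator spectrum E_k = k + 1/2
a = np.diag(np.sqrt(np.arange(1, n)), k=1)
X = (a + a.T) / np.sqrt(2)              # position operator in the Fock basis

U = np.diag(np.exp(-1j * E * t))        # U(t) = exp(-i H t), H diagonal here
V = np.exp(1j * c * t) * U              # rephased candidate implementer

# Both unitaries implement the same automorphism X -> U^-1 X U ...
same = np.allclose(V.conj().T @ X @ V, U.conj().T @ X @ U)
print(same)  # True

# ... but V is generated by the shifted Hamiltonian H - c:
shifted = np.allclose(V, np.diag(np.exp(-1j * (E - c) * t)))
print(shifted)  # True
```

The overall phase drops out of every conjugation, so no expectation value of any observable can detect the shift of the zero point.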

Am I right? That would mean that any non-gravitational theory cannot say anything about zero point energies because they are only observable in gravity. So if you are studying any theory that does not contain gravity, you cannot make any sensible statements about zero point energies or the cosmological constant.

## Tuesday, November 15, 2005

### Fixing radios

As everybody knows, a gauge invariance is the only thing that needs to be fixed if it is not broken. So broken radios should be fixed as well. Here is a nice article about how this problem would be approached by an experimental biologist, which somebody posted to a noticeboard here at IUB. You could read some analogies to string theory into it, but you could also just read it for fun.

## Thursday, November 03, 2005

### Sudoku types

You have surely seen these sudoku puzzles in newspapers: In the original version, it is a 9x9 grid with some numbers already inserted. The problem is to fill the grid with the numbers 1 to 9 such that in each row, each column, and each 3x3 block every digit appears exactly once.

In the past I was mildly interested in them, I had done perhaps five or six over several weeks, mostly the ones in Die Zeit. But the last couple of days I was back in the UK where this is really a big thing. And our host clearly is an addict with books all over the house. So I myself did a couple more of them. And indeed, there is something to it.

But what I wanted to point out is that I found several types of ways to approach these puzzles. This starts with "I don't care about puzzles, especially if they are about numbers". This is an excellent attitude because it saves you lots of time. However, sudokus are about permutations of nine things, and it just happens that these are usually numbers; this is inessential to the problem. A similar approach was taken by a famous Cambridge physicist who expressed that he found "solving systems of linear equations" not too entertaining. Well, either he has a much deeper understanding of sudokus than me, or he has not really looked at a single one to see that linear equations are probably of no help at all.

But the main distinction (and it probably tells about your degree of geekiness) is in my mind: How many sudokus do you solve before you write a program that does it? If the answer is 0, you are really lazy. You could object that if you enjoy solving puzzles, why would you delegate that fun to your computer, but this just shows that you have never felt the joy of programming. Here is my go at it.
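For the curious, the core of such a program fits in a few lines. Here is a minimal backtracking sketch in Python (my own toy version, not the program linked above), run on a widely circulated example puzzle:

```python
def ok(grid, r, c, d):
    """May digit d be placed at row r, column c without a clash?"""
    if any(grid[r][j] == d for j in range(9)):
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill the first empty cell (marked 0) with each legal digit and recurse."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if ok(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # dead end: undo and try the next digit
                return False  # no digit fits here: backtrack
    return True  # no empty cell left: grid is solved

puzzle = [[5,3,0, 0,7,0, 0,0,0],
          [6,0,0, 1,9,5, 0,0,0],
          [0,9,8, 0,0,0, 0,6,0],
          [8,0,0, 0,6,0, 0,0,3],
          [4,0,0, 8,0,3, 0,0,1],
          [7,0,0, 0,2,0, 0,0,6],
          [0,6,0, 0,0,0, 2,8,0],
          [0,0,0, 4,1,9, 0,0,5],
          [0,0,0, 0,8,0, 0,7,9]]

solve(puzzle)  # fills the grid in place
```

Of course this brute-force search knows nothing of the logical deductions a human solver enjoys, which is rather the point of the distinction above.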

## Wednesday, October 26, 2005

### Spacetime dynamics and RG flow

A couple of days ago there appeared a paper by Freedman, Headrick, and Lawrence that I find highly original. It does not just follow up on a number of other papers but actually answers a question that has been lurking around for quite a while but had not really been addressed so far (at least as far as I am aware). I had asked myself the question before but attributed it to my lack of understanding of the field and never worried enough to try to work it out myself. At least these gentlemen have, and they produced this beautiful paper.

It is set in the context of tachyon condensation (and this is of course where all this K-Theory stuff is located): You imagine setting up some arrangement of branes and (even more important as far as this paper is concerned, since it is about closed strings) some spatial manifold (if you want, with second fundamental form, that is, the momentum conjugate to the spatial metric) with all the fields you like in terms of string theory, and you ask what happens.

In general, your setup will be unstable. There could be forces, or you could be in some unstable equilibrium. The result is that typically your space-time goes BOOOOOOOOOOM as you have Planck scale energy densities all around, but eventually the dust (i.e. gravitational and other radiation) settles and you ask: What will I find?

The K-Theory approach to this is to compute all the conserved charges before turning on the dynamics and then to predict that you will end up in the lowest energy state with the same values for all the charges (here one might worry that we are in a gravitational theory, which does not really have a local energy density but only different expansion rates, but let's not do that tonight). Then K-Theory (rather than, for example, de Rham or some other cohomology) is the correct theory of charges.

The disadvantage of this approach is that it is potentially very crude and just knowing a couple of charges might not tell you a lot.

You can also try to approach the problem from the worldsheet perspective. There you start out with a CFT and perturb it by a relevant operator. This kicks off a renormalisation group flow and you will end up in some other CFT describing the IR fixed point. General lore tells you that this IR RG fixed point describes your space-time after the boom. The c-theorem tells you that the central charge decreases during the flow but of course you want a critical string theory before and after and this is compensated by the dilaton getting the appropriate slope.

The paper addresses this lore and checks whether it is true. The first concern is of course that proper space-time dynamics is expected to be given (classically) by some ordinary field equation in some effective theory, with typically two time derivatives and time reversal symmetry, where the beta functions play the role of a force. In contrast, RG flow is a first order differential equation where the beta functions point in the direction of the flow. And (not only because of the c-theorem) there is a preferred direction of time (downhill from UV to IR).

As shown in the paper, this general scheme is in fact true. And since we have to include the dilaton anyway, it also gets its equation of motion and (like the Hubble term in Friedmann-Robertson-Walker cosmology) provides a damping term for the space-time fields. So, at least for large damping, the space-time theory is also effectively first order, but at small (or negative, which is possible and of course needed for time reversal) damping the dynamics is of a different character.

What the two descriptions agree on is the set of possible end-points of this tachyon condensation, but in general the dynamics is different and because second order equations can overshoot at minima, the proper space-time dynamics can end up in a different minimum than predicted by RG flow.
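The overshooting phenomenon is easy to see in a toy model. The following Python sketch (entirely my own illustration, not from the paper) compares first order gradient flow, the analogue of RG flow, with damped second order dynamics in a one dimensional potential with two unequal minima; the first order flow can never cross the barrier, while the inertial trajectory can, depending on the damping:

```python
import numpy as np

def V_prime(x):
    # V(x) = x^4 - x^2 + 0.2 x: two unequal minima, near x ~ 0.65 and x ~ -0.75
    return 4 * x**3 - 2 * x + 0.2

dt, x0 = 1e-3, 1.5

# First order "RG-like" flow dx/dt = -V'(x): strictly downhill, so it is
# captured by the nearest minimum and can never cross the barrier.
x = x0
for _ in range(50_000):
    x -= dt * V_prime(x)
gf_end = x

# Second order dynamics x'' = -V'(x) - gamma x' (semi-implicit Euler):
# inertia can carry the trajectory over the barrier before the damping
# term has eaten up the energy.
x, v, gamma = x0, 0.0, 0.5
for _ in range(200_000):
    v += dt * (-V_prime(x) - gamma * v)
    x += dt * v
damped_end = x

print(gf_end)      # close to the nearest minimum, ~0.65
print(damped_end)  # one of the two minima; which one depends on gamma
```

At large gamma the velocity term becomes negligible and the second order equation degenerates into the first order flow, matching the large-damping statement above.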

All this (with all details and nice calculations) is in the paper and I can strongly recommend reading it!

## Monday, October 24, 2005

### Hamburg summary

So, finally, I am back on the train to Bremen and can write up my summary of the opening colloquium of the centre for mathematical physics in Hamburg.

As I reported earlier, this was a conference with an exceptionally well selected program. Not all talks were exactly on topics that I think about day and night, but with very few exceptions the speakers had something interesting to say and found good ways to present it. Well done, organisers! I hope your centre will be as successful as this colloquium!

The first physics talk on Thursday was Nikita Nekrasov, who talked about Berkovits' pure spinor approach. As you might know, this is an attempt to combine the advantages of the Green-Schwarz and the Ramond-Neveu-Schwarz formalism for superstrings and gives a covariant formulation with manifest supersymmetry in the target (amongst other things, Lubos has talked about this before). This is done by including not only the X and theta coordinates of the target superspace but also an additional spinor lambda which obeys the "pure spinor" constraints lambda gamma^i lambda = 0 for all i. You can convince yourself that this equation describes the cone over SO(10)/U(5). This space has a conical singularity at the origin and Nikita asked whether this can really give a consistent quantization. In particular, the beta-gamma ghosts for the spinors have to be well defined not only in a patch but globally.

Nikita argued (showing quite explicitly how Cech cohomology arises) that this requires the first two Chern classes to vanish. He first showed how not to, and then how to properly resolve the singularity of the cone, and concluded that in the end the pure spinor quantization is in fact consistent. However (and unfortunately my notes are cryptic at that point) he mentioned that there are still open problems when you try to use this approach for worldsheets of genus larger than two. Thus, even in this approach there might still be technical difficulties in defining string amplitudes beyond two loops.

The next speaker was Roberto Longo. He is one of the big shots in the algebraic approach to quantum field theory and he talked about 2D conformal theories. As you know, the algebraists start from a mathematical definition of a quantum field theory (a Haag-Kastler net, which is a refinement of the Wightman axioms) and then deduce general theorems with proofs (of mathematical standard) valid for large classes of QFTs. The problem however is to give examples of theories that can be shown to obey their definition. Free fields do, but are a bit boring after a while. And perturbative descriptions in terms of Feynman rules are no good unless the expansion can be shown to converge (which is probably not even true). You could use the lattice regularization to get a handle on gauge theories, but there you have to show (and this hasn't been done despite decades of attempts in the constructive field theory community) Lorentz invariance, positivity of the spectrum and locality, all after the continuum limit has been taken. So you have a nice framework but you are not sure which theories it applies to (although there is little doubt that asymptotically free gauge theories should be in that class). Now Longo reviewed how you can cast the usual language of 2d CFTs into their language and thus obtain additional, interacting examples. He displayed several theorems that, however, sounded vaguely familiar to people with some background in the BPZ approach to CFTs.

The last speaker of Thursday was Nikolai Reshetikhin. He started out with a combinatorial problem of certain graphs with two coloured vertices, transformed that into a dimer model and ended up getting a discrete version of a Dirac operator on graphs (in the sense that the adjacency matrix can give the Laplacian). He also mentioned a related version of the Boson-Fermion correspondence and a relation to the quantum foam of Vafa and collaborators, but again my notes are too sparse to be of any more use there.

Friday morning started with Philippe Di Francesco. He started out with a combinatorial problem again: Count 4-valent planar graphs with two external edges. He transformed this into rooted ternary trees with black and white leaves, with always one more black than white leaf. This could be solved by a recursion relation for the generating function. The next question was how many edges (in the original graph) have to be traversed to get from the outside to the face carrying the other external edge. Again there was a (now slightly more involved) generating function, which he again solved, and he showed that the solution can be thought of as a one-soliton solution in terms of a tau function.
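
Out of curiosity, here is a brute-force count of the tree problem as I understood it (this is my literal reading of "one more black than white leaf", so the numbers may well differ from what the generating function in the talk actually counts):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def trees(b, w):
    """Count rooted ordered trees whose internal nodes have exactly
    three children and whose leaves are coloured: b black, w white."""
    if b + w == 1:
        return 1  # a single coloured leaf
    total = 0
    # distribute the leaves among three ordered subtrees, each non-empty
    for b1 in range(b + 1):
        for w1 in range(w + 1):
            if b1 + w1 == 0:
                continue
            for b2 in range(b - b1 + 1):
                for w2 in range(w - w1 + 1):
                    b3, w3 = b - b1 - b2, w - w1 - w2
                    if b2 + w2 == 0 or b3 + w3 == 0:
                        continue
                    total += trees(b1, w1) * trees(b2, w2) * trees(b3, w3)
    return total

# the relevant family: exactly one more black than white leaf
print([trees(w + 1, w) for w in range(4)])
```

The first few values (1, 3, 30, ...) can be checked by hand: with three leaves, the root's three children must be single leaves and the one white leaf can sit in three positions.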

After that, he talked about the six-vertex model and treated it with similar means, showed a beautiful movie of how the transfer matrix acts and suddenly was right in the middle of Perron-Frobenius eigenvectors, Temperley-Lieb algebras and the Yang-Baxter equation. Amazing!

Then came Tudor Ratiu who gave quite a dramatic presentation but I have to admit I did not get much out of it. It was on doing the symplectic treatment of symmetries in the infinite dimensional case and how to deal with the functional analysis issues coming up there (in general what would be a Hamiltonian vector field is not a vector field etc.)

John Cardy discussed Stochastic Loewner Evolution: Take as an example the 2D Ising model on a hexagonal lattice and instead of the spins view the phase boundary as your fundamental observable. Then you can ask about its statistics and again in the continuum limit this should be given in terms of a conformal field theory. He focussed on a phase boundary that runs from boundary to boundary. The trick is to parametrise it by t and consider it only up to a certain t1. If the domain before was the disk it is now a disk with a path that wiggles from the boundary somewhere into the interior. By the uniformisation theorem there is a function that maps the complement of the path again onto the unit disk, call it g_t1. Instead of looking at the propagation of the path you can ask how g_t1 varies if you change t1. Cardy derived a differential equation for g_t1 and argued that all the information about the CFT is encoded in the solution to this equation with the appropriate boundary conditions.
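
For concreteness, here is a minimal numerical sketch of the chordal (upper half plane) version of this equation, dg_t/dt = 2/(g_t - W_t) with Brownian driving W_t = sqrt(kappa) B_t; the disk picture from the talk is related by a conformal map, and the swallowing cutoff below is purely my own numerical hack:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, dt, steps = 2.0, 1e-4, 20_000
# Brownian driving function W_t = sqrt(kappa) B_t of SLE_kappa
W = np.concatenate(
    [[0.0], np.cumsum(np.sqrt(kappa * dt) * rng.standard_normal(steps))]
)

# sample points of the upper half plane, evolved by dg/dt = 2/(g - W_t);
# points caught by the growing curve approach W_t and are taken out
z = np.array([x + 1j * y for x in np.linspace(-2.0, 2.0, 9) for y in (0.5, 1.0)])
alive = np.ones(len(z), dtype=bool)
for n in range(steps):
    g = z[alive] + dt * 2.0 / (z[alive] - W[n])  # Euler step of the Loewner ODE
    z[alive] = g
    alive[alive] = np.abs(g - W[n + 1]) > 0.1    # crude "swallowed" cutoff

print(f"{int(alive.sum())} of {len(z)} points remain outside the hull at t = {steps * dt}")
```

Under the flow the imaginary part of every surviving point decreases monotonically (points get "combed down" to the real axis), which is a quick sanity check on the integration.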

The afternoon was started by Robbert Dijkgraaf. He reviewed the connection of black hole entropy (including quantum corrections as computed by Cardoso, de Wit and Mohaupt) and wave functions in topological string theory. He did not give many details (which was good given the broad audience), but one thing I had not yet heard about is how to understand why the entropy (after the Legendre transform to electric charges and chemical potential that Vafa and friends discovered to simplify the CdWM result) has to be treated like a wave function while the topological string partition function appears like a probability. Dijkgraaf proposed that the fact that Omega, the holomorphic volume form, varies over a SLAG in the complex structure moduli space could be a key to understanding this, as a Lagrangian submanifold is exactly where a wave function lives after quantization (it only depends on positions and not on momenta!). Furthermore, he displayed the diagram for open-closed string duality that can be viewed either as a loop of an open string stretched between two D-branes or as the D-branes exchanging a closed string at tree level. He interpreted this as an index theorem: The open string loop looks like Tr((-1)^F D-slash) with the trace coming from the loop, while the closed string side is given by the integral over ch(E1) ch(E2) A-roof(R), where E1 and E2 are bundles on the D-branes. He argued that the right hand side looks like a scalar product with the Chern classes as wave functions and the A-roof genus as measure. He went on discussing YM on a 2-torus and free fermions (via the eigenvalues of the holonomy). Those are non-relativistic and thus have two Fermi surfaces, one for each sign of the square root in the dispersion relation. Perturbations are then about creating holes in these Fermi surfaces, and 1/N (N being interpreted as the difference between the two surfaces) effects appear when a hole makes it through the energy band to the other Fermi surface.
This again can be computed via a recursion relation, and Dijkgraaf ended by interpreting it as being about a grand canonical ensemble of multiple black holes rather than a single one.

Then came Bob Wald who reviewed thirty years of quantum field theory on curved backgrounds. If you leave Minkowski space you have to give up many things that are quite helpful in the flat space approach: Poincare invariance, a preferred vacuum state, the notion of particles (as irreps of the Poincare group), a Fourier transform to momentum space, Wick rotation, the S-matrix. Wald gave an overview of how people have learnt to deal with these difficulties and which more general concepts replace the flat space ones. In the morning, the lecture room was quite cool and more and more people put on their coats. In the afternoon, in contrast, the heating worked properly, however at the expense of higher levels of carbon dioxide that in my case overcame the effects of lots of coffee from the coffee breaks. So about this lecture I cannot tell you any more.

Last speaker before the banquet was Sasha Zamolodchikov. He again admitted to mainly living in two dimensions and discussed the behaviour of the mass gap and the free energy close to criticality. Those are dominated by the most relevant operator perturbing the CFT and are usually well understood. He however wanted to understand the sub-leading contributions and gave a very general argument (which I am unfortunately unable to reproduce) for why the expectation value of the L_(-2) L-bar_(-1) descendant of the vacuum (which is responsible for these sub-leading effects) is given by the energy density.

The last day started out (half an hour later than on Friday, as I only found out by being the only one in the lecture hall) with Martin Zirnbauer. As he mentioned, many different systems (atomic nuclei, disordered metallic grains, chaotic billiards, microwaves in a cavity, acoustic modes of vibration of solids, quarks in non-abelian gauge theory (?) and the zeros of the Riemann zeta function) show similar spectral behaviour: When you plot the histogram of energy differences between levels you do not get a Poisson distribution, as you would if the energy levels were just random, but a curve that starts off with a power law and later decays exponentially. There are three different power laws and the universality classes are represented by Gaussian matrix models with either hermitian, real symmetric or quaternion self-dual matrices. This has been well known for decades. Zirnbauer now argued that you get 11 classes if you allow for super-matrices. He mentioned a theorem of his showing that any Hamiltonian quadratic in fermionic creation and annihilation operators is in one of those classes (although I did not understand the relevance of this result for what he discussed before). He went on and claimed (again not convincingly to me) that the physics of the earlier systems would be described by a non-linear sigma model with these 11 supermatrix spaces as targets. He called all this supersymmetry, but to me it sounded as if at best this was about systems with both bosons and fermions. In the discussion he had to admit that although he has supergroups, the Hamiltonian is not an element of these and thus the crucial relation H={Q,Q} that gives us all the nice properties of really supersymmetric theories does not hold in his case.
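
The level repulsion behind these power laws is easy to see numerically; here is a quick sketch (my own, standard random matrix fare, not from the talk) for the hermitian (GUE) class:

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_spacings(n=100, samples=100):
    """Nearest-neighbour eigenvalue spacings of random hermitian (GUE)
    matrices, taken from the bulk of the spectrum and normalised to
    mean spacing 1 within each sample."""
    out = []
    for _ in range(samples):
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        h = (a + a.conj().T) / 2             # random hermitian matrix
        ev = np.linalg.eigvalsh(h)           # sorted real eigenvalues
        s = np.diff(ev[n // 4 : 3 * n // 4]) # keep the bulk only
        out.append(s / s.mean())
    return np.concatenate(out)

s = gue_spacings()
# level repulsion: the spacing histogram starts off like a power law
# (roughly s^2 in the hermitian class), so tiny spacings are rare ...
print("fraction of spacings below 0.1:", float((s < 0.1).mean()))
# ... while uncorrelated (Poisson) levels would give about 0.095
print("Poisson comparison:", 1 - np.exp(-0.1))
```

For truly random (Poisson) levels about one spacing in ten would fall below a tenth of the mean spacing; for the matrix model the fraction comes out two orders of magnitude smaller, which is the repulsion the power law encodes.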

Then came Matthias Staudacher who gave a nice presentation of integrability properties in the AdS/CFT correspondence in particular in spin chains and rotating strings. Most of this we have heard already several times but new to me was the detailed description of how the (generalised) Bethe ansatz arises. As you know, the results about spin-chains and strings do not agree anymore at the three loop level. This is surprising as they agreed up to two loops but on the other hand you are doing different expansions in the two cases so this does not mean that the AdS/CFT correspondence is in trouble. This is pretty much like the situation in M(atrix)-model vs. supergravity. There are certain amplitudes that work (probably those protected by susy) and certain more complicated ones that do not. Matthias summarised this by making the statement "Who cares about the correspondence if you have integrability?"

The conference was rounded off by Nigel Hitchin who gave an overview of generalised geometry. Most of this is beautifully explained in Gualtieri's thesis, but there are a few points to note: Hitchin only talked about generalised metrics (given in terms of generalisations of the graph of g in TM+T^*M); he did not mention generalised complex structures (except in the Q&A period). He showed how to write the Levi-Civita connection (well, with torsion given by +/- H) in terms of the Lie- and the Courant-bracket and the generalised metric (actually g+/-B) given in terms of maximal rank isotropic subbundles. What was new to me was how to carry out generalised Hamiltonian reduction of a group action (which he said was related to gauged WZW-models): The important step is to lift the Hamiltonian vector field X to X + xi_a, where a labels the coordinate patch under consideration. It is important that under changes of coordinates xi changes as xi_b - xi_a = i_X dA_ab, where A_ab is the 1-form that translates between the two B-fields B_a and B_b. Then one can define L_X (Y+eta_a) = Lie_X (Y+eta_a) - i_Y dxi_a in terms of the Lie derivative Lie. This is globally defined as it works across patches. Now if you have a symmetry, take K to be the bundle of its Hamiltonian vector fields and K-perp its orthogonal bundle (containing K). Then what you want is the bundle E-bar = (K-perp / K)/G. You have the exact sequence 0->T*(M/G)->E-bar->T(M/G)->0 with non-degenerate inner product and the Courant bracket descends nicely, but it is not naturally a direct sum. Furthermore, you can define the 'moment form' c = i_X B_a - xi_a which makes sense globally. We have dc = i_X H and on the quotient g(X,Y) = i_Y c. Note that even when dB=0 on M before, we can have H non-vanishing in cohomology on M/G because the horizontal vector bundle can have a curvature F, and in fact downstairs one computes H=cF. Again, as always in this generalised geometry context, I find this extremely beautiful!

Update: After arriving at IUB, I see that Urs has reported from Nikita's talk.

Update: Giuseppe Policastro has pointed out a couple of typos that I corrected.

## Friday, October 21, 2005

### No news is good news

You might have noticed that I haven't reported from the Hamburg opening colloquium yet. First of all, this is due to the fact that there is no WLAN in the lecture hall (or at least none that I can log into) and second (and that really is the good news) the talks are so interesting that I am far too busy listening and taking notes to turn on my laptop. The organisers really have done a great job in selecting not only prominent speakers but at the same time people who know how to give good talks. Thanks a million!

However, you my dear readers will have to wait until Monday for me to give you a conference summary.

## Wednesday, October 19, 2005

### More conference reporting

I just found that the weekly quality paper "Die Zeit" has an interview with Smolin on the occasion of Loops '05. Probably no need to learn German for this, nothing new: String theory doesn't predict anything because there are 10^500 string theories (they lost the ^ somewhere), Peter W. can tell you more about this, stringy people have lost contact with experiment, LQG people do better because they predict a violation of the relativistic dispersion relation for light (is this due to the 3+1 split of their canonical formalism?) and Einstein would have been suppressed today because he was an independent thinker and not part of the (quantum mechanics) mainstream.

I was told, "Frankfurter Allgemeine Sonntagszeitung" also had a report on Loops '05. On their webpage, the article costs 1.50 Euros and I am reluctant to pay this. Maybe one of my readers has a copy and can post/fax it to me?

Tomorrow, I will be going to Hamburg where for three days they are celebrating the opening of the centre for mathematical physics. This is a joint effort of people from the physics (Louis, Fredenhagen sen., Samtleben, Kuckert) and math (Schweigert) departments of Hamburg university and the DESY theory group (Schomerus, Teschner). This is only one hour away and I am really looking forward to having a stringy critical mass coming together in northern Germany. Speakers of the opening colloquium include Dijkgraaf (hopefully he will make it this time), Hitchin, Zamolodchikov, Nekrassov, Cardy and others.

If there is some reasonable network connection, there will be some more live blogging. Urs is now a postdoc in Christoph Schweigert's group, so I assume he will be online as well.

## Monday, October 17, 2005

### Classical limit of mathematics

The most interesting periodic event at IUB is the mathematics colloquium, as the physicists don't manage to get enough people together for a regular series. Today, we had G. Litvinov who introduced us to idempotent mathematics. The idea is to build upon the group homomorphism x -> h ln(x), for some positive number h, which maps the positive reals with multiplication to the reals with addition.

Via this map, ordinary addition in R corresponds to "multiplication" of the preimages, and we can also transport the usual addition of positive reals to a deformed "addition" on R. The interesting thing is what becomes of this when we take the "classical limit" h -> 0: Then "addition" is nothing but the maximum, and this "addition" is idempotent: a "+" a = a.

This is an example of an idempotent semiring, and in fact it is the generic one: Besides idempotency, it satisfies many of the usual laws: associativity, the distributive law, commutativity. Thus you can carry over much of the usual stuff you can do with fields to this extreme limit. Other examples of this structure are Boolean algebras or compact convex sets, where "multiplication" is the usual sum of sets and "addition" is the convex hull (obviously, the above example is a special case). Another example is the semiring of polynomials with non-negative coefficients, and for these the degree turns out to be a homomorphism! The obvious generalization of the integral is the supremum, and the Fourier transform becomes the Legendre transform (you have to work out what the characters of the addition are!).
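As a concrete illustration (my own sketch, not from the talk), you can check numerically that the deformed addition pulled back through x -> h ln(x) really tends to the maximum as h -> 0, and that the limiting (max, +) structure satisfies the semiring laws:

```python
import math

def oplus(a, b, h):
    """The 'addition' on R pulled back through x -> h*ln(x):
    a (+)_h b = h*ln(exp(a/h) + exp(b/h)), computed stably by
    factoring out the maximum."""
    m = max(a, b)
    return m + h * math.log(math.exp((a - m) / h) + math.exp((b - m) / h))

# As h -> 0 the deformed sum tends to max(a, b), while ordinary
# multiplication of positive reals always maps to ordinary addition on R.
for h in (1.0, 0.1, 0.001):
    print(h, oplus(2.0, 3.0, h))      # approaches 3.0 = max(2, 3)

# The limiting (max, +) structure is an idempotent semiring:
trop_add = max                        # "addition" = max, idempotent

def trop_mul(a, b):                   # "multiplication" = ordinary +
    return a + b

assert trop_add(5, 5) == 5            # idempotency: a "+" a = a
assert trop_mul(2, trop_add(3, 4)) == trop_add(trop_mul(2, 3), trop_mul(2, 4))  # distributivity
```

Note how for h = 1 the deformed sum 2 (+) 3 is still noticeably bigger than 3, but the correction shrinks like h ln 2 at worst, so the maximum takes over very quickly.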

This theory has many applications; it seems especially powerful for optimization problems. But you can also apply this limiting procedure to algebraic varieties, which are then turned into Newton polytopes.

I enjoyed this seminar especially because it made clear that many constructions can be thought of as extreme limits of even more common, linear constructions.

But now for something completely different: When I came back to my computer, I had received the following email:


Dear Mr. Helling

I would greatly appreciate your response.

Please what is interrelation mutually

fractal attractor of the black hole condensation,

Bott spectrum of the homotopy groups

and moduli space of the nonassociative geometry?

Thank you very much obliged.

[Sender]

I have no idea what he is talking about but maybe one of my readers has. I googled for a passage from the question and found that exactly the same question has also been posted in the comment sections of the Coffee Table and Lubos's blog.

## Thursday, October 13, 2005

### How to read blogs and how not to write blogs

I usually do not have articles here saying basically "check out these articles in other blogs, I liked them". Basically this is because I think that if you, dear reader, have found this blog, you will be able to find others that you like as well, so there is no need for me to point you around the blogosphere. And, honestly, I don't think too many people read this blog anyway. I don't have access to the server's log files and I do not have a counter (I must say, I hate counters because they often delay the loading of a page a lot). But it happens more and more often that I meet somebody in person and she/he tells me that she/he has read this or that on atdotde. So in the end, I might not be writing for the big bit bucket.

My reporting on Loops '05 was picked up in other places, so that might have brought even more readers to my little place. I even got an email from a student in China telling me that he cannot read atdotde anymore (as well as, for example, Lubos' Reference Frame). Unfortunately, I had to tell him that this was probably due to the fact that his government decided to block blogger.com from the Chinese part of the net because blogs are way too subversive.

So as a little service for those of my readers who do not already know, here is a hint on how to read blogs: Of course, if you have some minutes of boredom, you can type your friends' names into google and surf to their blogs every now and then. That is fine. Maybe at some point you want to find out what's new in all those blogs. So you go through your bookmarks (favourites in MS speak) and check if you've seen everything that you find there.

But that is cyber stone age! What you want is a "news aggregator". This is a little program that does this for you periodically and informs you about the new articles it found. You just have to tell it where to look. This comes in the form of a URL called the "RSS feed". Often you find little icons in the sidebar of a blog that link to that URL. In others, like this one, you have to guess. For all blogs on blogger.com it is of the form URL_of_blog/atom.xml, so for atdotde it is http://atdotde.blogger.com/atom.xml. You have to tell your news aggregator about this URL. In the simplest form, this is just your web browser. Firefox calls it "live bookmarks": You open the "manage bookmarks" window and select "new live bookmark" from the menu. I use an aggregator called Liferea, which even opens a little window once it finds something new, but there are many others.
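The core of what an aggregator does fits in a few lines. Here is a minimal sketch; the feed entries below are made up for illustration, but the XML layout is the Atom format that blogger.com serves:

```python
import xml.etree.ElementTree as ET

# A hand-made sample Atom feed (the entry titles are invented,
# only the format matches what an aggregator would fetch from
# URL_of_blog/atom.xml).
SAMPLE_ATOM = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>some blog</title>
  <entry><title>Classical limit of mathematics</title>
         <updated>2005-10-17T12:00:00Z</updated></entry>
  <entry><title>How to read blogs</title>
         <updated>2005-10-13T09:00:00Z</updated></entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

def new_entries(xml_text, seen_titles):
    """Return the entry titles the aggregator has not shown yet."""
    feed = ET.fromstring(xml_text)
    titles = [e.findtext("atom:title", namespaces=NS)
              for e in feed.findall("atom:entry", NS)]
    return [t for t in titles if t not in seen_titles]

# An aggregator polls the feed URL periodically and remembers what
# it has already shown; here we pretend one entry was seen before.
print(new_entries(SAMPLE_ATOM, {"How to read blogs"}))
```

A real aggregator like Liferea does essentially this, plus fetching the URL on a timer and keeping the "seen" set on disk.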

Coming back to the theme of the beginning, I will for once tell you which blogs I monitor (in no particular order):

Amelies Welt (in German) I know Amelie from a mailing list and like the variety of topics she writes about.

BildBlog They set straight the 'news' from the biggest German tabloid. And it's funny.

Bitch, PhD Academic and feminist blogger. I learned a lot.

String Coffee Table You can chat about strings while sipping a coffee.

Musings The first blog of a physicist I came across. Jacques still sets standards.

Die Schreibmaschine Anna is a friend from Cambridge, currently not very active because...

Broken Ankle Diary A few weeks ago she broke her ankle.

Lubos Motl's Reference Frame Strong opinions on physics, global warming, and politics.

hep-th The arxiv now also comes in this form but still I prefer to read it in the classic way.

Jochen Weller One of the Quantum Diaries. Jochen was in Cambridge while I was there.

Preposterous Universe Sean's blog is dead now because he is part of...

Cosmic Variance Currently my best loved blog. Physics and everything else.

Not Even Wrong Peter Woit has one point of criticism of string theory that he keeps repeating. But he is a very reasonable guy.

Daily ACK I met Al on a dive trip in the English Channel. Some astronomy and some Apple and Perl news.

Physics Comments Sounded like a good idea but not really working at least in the hep-th area.

Have fun!

Now I should send trackback pings. This is such a pain with blogger.com...

Ah, I nearly forgot: This article about how academic blogs can hurt your job hunting scares me a lot! (I admit, I found it on Cosmic Variance.)


## Tuesday, October 11, 2005

### IUB is noble

Coming back from Loops '05, I find a note in my mailbox that the International University Bremen now has an Ig Nobel laureate amongst its faculty: V. Benno Meyer-Rochow has received the prize in fluid dynamics for his work on the pressure produced when penguins poo.

### More news on the others

**9:30**

The careful reader will have noticed that yesterday my blogging got sparser and sparser. This was probably due to increasing boredom/annoyance on my part. Often, I thought the organisers should have applied the charter of sci.physics.research that forbids contributions that are so vague and speculative that they are not even wrong. I could not stand it anymore and had to leave the room when the speaker (I am not going to say who it was) claimed that "the big bang is just a phase transition".

Today, I give it a new shot. And the plenary talks are promising. Currently, John Baez has been giving a nice overview of various incarnations of spin foam models (he listed Feynman diagrams, lattice gauge theory and topological strings among them, although I am under the impression that on the last point he is misguided, as topological strings in fact take into account the complex/holomorphic structure of the background). However, starting from the question "what kind of matter do we have to include to have a nice continuum limit?", he digressed via a Witten anecdote (who, when asked whether he thinks LQG is right, said that he hoped not, because he hoped (in the 90s) that there is only one unique matter content, i.e. strings, consistent with quantised gravity) to making fun of string theorists, asking them to do their homework and check the 10^whatever vacua in the landscape.

The next speaker will be Dijkgraaf, who hopefully will do a better job than Theissen did yesterday in presenting that stringy people have interesting, deep things to say about physics.

Unfortunately, electromagnetism lectures back in Bremen require me to run off at 11:00 and catch the train, so I will not be able to follow the rest of the conference closely.

**9:51**

Baez got back on track with a nice discussion of how Lorentzian triangulations fit into the scheme of things and what role the prescribed time slicing might have for large scale physics (introducing further terms than Lambda and R in the LEEA). He also showed what is supposed to be a spin foam version of it.

**9:56**

Oh no. They have grad students and young postdocs as chairpersons. Bianca Dittrich just announced "Our next speaker is Robbert Dijkgraaf" and nothing happened. It seems Dijkgraaf didn't make it here on the early morning plane. Now I can fulfil the anonymous reader's wish and report on the presentation of Laurent Friedel, the next speaker.

**10:20**

Before I power down my laptop: Friedel looks at the effects of quantum gravity on low energy particle actions. In order to do that, he couples matter to the Ponzano-Regge model and then will probably try to integrate out the gravitational degrees of freedom.

## Monday, October 10, 2005

### The Others

I sneaked into the Loops '05 conference at the AEI at Potsdam. So I will be able to give you live blogging for today and tomorrow. After some remarks by Nicolai and Thiemann and the usual impedance mismatch between laptops and projectors, Carlo Rovelli has started the first talk. He is on slide 2, and still reviews recent and not so recent developments of LQG.

**9:55** Rovelli talked about his paper on the graviton propagator. If you like, he wants to recover Newton's law from his model. The obvious problem of course is that any propagator g(x,y) cannot depend on x or y if everything is diffeomorphism invariant (at least in these people's logic). So he had to include a dependence on a box around the 'detector' and introduce the metric on the box as boundary values. He seems to get out of this problem by in fact using a relational notion, as you would of course have to in any interesting background independent theory (measure not with respect to coordinates but with respect to physical rulers). Then there was a technical part which I didn't quite get, and in the end he had something like g(x,y)=1/|x-y|^2 on his slide. This could be interesting. I will download the paper and read it on the train.

**11:15** Next is Smolin. Again computer problems, this time causing an unscheduled coffee break. Smolin started out talking about problems of background independent approaches, including unification and the nature of anomalies. Then, however, he decided to focus on another one: How does macroscopic causality arise? He doesn't really know, but looked at some simple models where macro causality is usually destroyed by some non-local edges (like in a small world network). Surprisingly, he claims, these non-local connections do not change macroscopic physics (critical behaviour) a lot, and thus they are not really detectable.

Even more, these non-local "defects" could, according to Smolin, play the role of matter. Then he showed another model where instead of a spin network, the physics is in twisted braided ribbon graphs. There, he called some configurations "quarks" and assigned the usual quantum numbers and ribbon transformations for C, P and T. Then it got even better: the next slides mentioned the problem of small power in low l modes in the CMB ("scales larger than 1/Lambda"), the Pioneer anomaly and the Tully-Fisher relation that is the empirical observation behind MOND. I have no idea what his theory has to do with all these fancy open problems. Stefan Theissen next to me makes interesting noises of astonishment.

**12:15** Next speaker is John Barrett. This talk sounds very solid. He presents a 3+0 dimensional model which to me looks much like a variant of a spin network (a graph with spin labels and certain weight factors for vertices, links, and tetrahedra). He can do Feynman graph like calculations in this model. Further plus: A native speaker of British English.

**13:00** Last speaker of the forenoon is Stefan Theissen. He tries to explain to the LQG crowd how gravity arises from string theory. Many left before he started, and so far he has only presented string theory as one could have done already 20 years ago: Einstein's equations as a consistency requirement for the sigma model, and scattering amplitudes producing the vertices of the Einstein-Hilbert action. Solid but not really exciting.

**14:55** In the afternoon, there are parallel sessions. I chose the "seminar room". Here, Markopoulou presents her idea that dynamics in some (quantum gravity?) theory has formal similarities to quantum information processing. In some Ising type model she looks at the block spin transformation and reformulates the fact that low energy fields only talk to the block spins and not to the high frequency fields. With some fancy mathematical machinery, she relates this to error correction, where the high frequency fields play the role of noise.

**15:25** Next is Olaf Dreyer. Very strange. He proposes that quantum mechanics should be deterministic and non-linear. Most of what he says are philosophical statements (and I do not agree with all of them, by far), but what seems to be at the core of it is that he does not want macroscopic states that are superpositions of elementary states. I thought that was solved by decoherence long ago...

**15:35** At least Rovelli asks: "[long pause] Maybe I didn't understand it. You make very general statements. But where is the physics?"

**15:56** The next speaker is Wang, who expands a bit on what Smolin said in the morning. It's really about Small World Networks (TM). If you have such a network with gauge flux along the edges, then a non-local random link in fact looks locally like a charged particle. This is just like in Wheeler's geometrodynamics. The bulk of the talk is about the Ising model on a lattice with a small number of additional random links. The upshot is that the critical temperature and the heat capacity, as well as the correlations at criticality, do not depend much on the existence of the additional random links.

**16:25** Martinetti reminds us that time evolution might have a connection with temperature. Concretely, he wants to take the Tomita-Takesaki unitary evolution as time evolution and build a KMS state out of it. There is a version of the Unruh effect in the language of KMS states, and Martinetti works out the correction to the Unruh temperature coming from the fact that the observer might have a finite life time. This correction turns out to be so small that, by uncertainty, one would have to measure longer than the life time to detect the difference in temperature.

**18:15** I stopped reporting on the afternoon talks as I did not get much out of them. Currently, Rüdiger Vaas, a science journalist, is the last speaker of the day. He at least admits that his talk is on philosophy rather than physics. His topic is the philosophical foundations of big bang physics.

## Tuesday, September 20, 2005

### Faster than light or not

I don't know about the rest of the world, but here in Germany Prof. Günter Nimtz is (in)famous for his experiments that he claims show that quantum mechanical tunneling happens instantaneously rather than according to Einstein causality. In the past he got a lot of publicity for that, and according to Heise online he now at least has a new press release.

All these experiments are similar: First of all, he is not doing any quantum mechanical experiments but uses the fact that the Schrödinger equation and the wave equation share similarities. And as we know, in vacuum Maxwell's equations imply the wave equation, so he uses (classical) microwaves, as they are much easier to produce than the matter waves of quantum mechanics.

So what he does is send a pulse of these microwaves through a region where "classically" the waves are forbidden, meaning that they do not oscillate but decay exponentially. Typically this is a waveguide with a diameter smaller than the wavelength.

Then he measures what comes out at the other side of the waveguide. This is another microwave pulse, which is of course much weaker and so needs to be amplified. Then he measures the time difference between the maximum of the weaker pulse and the maximum of the full pulse when the obstruction is removed. What he finds is that the weak pulse has its maximum earlier than the unobstructed pulse, and he interprets this as the pulse having travelled through the obstruction at a speed greater than the speed of light.

Anybody with a decent education will of course immediately object that the microwaves propagate (even in the waveguide) according to Maxwell's equations, which have special relativity built in. Thus, unless you show that Maxwell's equations no longer hold (which Nimtz of course does not claim), you will never be able to violate Einstein causality.

For people who are less susceptible to such formal arguments, I have written a little program that demonstrates what is going on. The result of this program is this little movie.

The program simulates the free 2+1 dimensional scalar field (of course again obeying the wave equation) with Dirichlet boundary conditions in a box that resembles the waveguide: At first, the field is zero everywhere in the strip-like domain. Then the field on the upper boundary starts to oscillate with a sine wave, and indeed the field propagates into the strip. The frequency is chosen such that the wave can in fact propagate in the strip.

(These are frames 10, 100, and 130 of the movie; further down are 170, 210, and 290.) About in the middle, the strip narrows like in the waveguide. You can see that the blob of field in fact enters the narrower region but dies down pretty quickly. In order to see anything in the display, I amplify the field in the lower half of the picture by a factor of 1000 (like Nimtz does). After the obstruction ends, the field again propagates as in the upper part.

What this movie definitely shows is that the front of the wave (and this is what you would use to transmit any information) travels everywhere at the same speed (that of light). All that happens is that the narrow bit acts like a high pass filter: What comes out undisturbed is in fact just the first bit of the pulse, which more or less by accident has the same shape as a scaled down version of the original pulse. So if you are comparing the timing of the maxima, you are comparing different things.

Rather, the proper thing to compare would be the times when the field first rises above a certain level, one that is actually reached by the weakened pulse. Then you would find that the speed of propagation is the same, independent of whether the obstruction is there or not.
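A one-dimensional toy version of such a simulation makes the causality point explicit. This is a sketch of my own, not the actual program behind the movie, and all parameters are made up: the narrow waveguide section is modelled by a mass term m^2*u, so that for driving frequencies below m the wave cannot propagate there and decays exponentially, just like in the evanescent region.

```python
import numpy as np

# Leapfrog integration of u_tt = c^2 u_xx - m^2(x) u on a 1d lattice.
c, dx, dt = 1.0, 1.0, 0.5            # CFL number c*dt/dx = 0.5 < 1: stable
N, steps = 400, 350
m2 = np.zeros(N)
m2[150:250] = 4.0                    # the "obstruction": evanescent for omega < 2
omega = 1.0                          # driving frequency, below the barrier cutoff

u_old = np.zeros(N)
u = np.zeros(N)
for n in range(steps):
    lap = np.zeros(N)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]   # discrete second derivative
    u_new = 2.0 * u - u_old + (c * dt / dx) ** 2 * lap - dt ** 2 * m2 * u
    u_new[0] = np.sin(omega * (n + 1) * dt)      # oscillating boundary (the source)
    u_new[-1] = 0.0                              # Dirichlet wall at the far end
    u_old, u = u, u_new

# After 350 steps the front has travelled at most c*t = 175 cells, barrier
# or no barrier; inside the barrier the field decays exponentially, and
# beyond the front the field is (numerically) negligible.
```

Comparing the maxima of the tiny transmitted signal with the free pulse is what produces the apparent "advance"; comparing the first time the field crosses a fixed threshold shows the front never beats c.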

Update: Links updated DAMTP-->IUB

All these experiments are similar: First of all, he is not doing any quantum mechanical experiments but uses the fact that the Schrödinger equation and the wave equation share similarities. And as we know, in vacuum, Maxwell's equations imply the wave eqaution, so he uses (classical) microwaves as they are much easier to produce than matter waves of quantum mechanics.

So what he does is to send a pulse these microwaves through a region where "classically" the waves are forbidden meaning that they do not oscillate but decay exponentially. Typically this is a waveguide with diameter smaller than the wavelength.

Then he measures what comes out at the other side of the wave guide. This is another pulse of microwave which is of course much weaker so needs to be amplified. Then he measures the time difference between the maximum of the weaker pulse and the maximum of the full pulse when the obstruction is removed. What he finds is that the weak pulse has its maximum earlier than the unobstructed pulse and he interprets that as that the pulse has travelled through the obstruction at a speed greater than the speed of light.

Anybody with a decent education will of course immediately object that the microwaves propagate (even in the waveguide) according to Maxwell's equations which have special relativity build in. Thus, unless you show that Maxwell's equations do not hold anymore (which Nimtz of course does not claim) you will never be able to violate Einstein causality.

For people who are less susceptible to such formal arguments, I have written a little programm that demonstrates what is going on. The result of this programm is this little movie.

The programm simulates the free 2+1 dimensional scalar field (of course again obeying the wave equation) with Dirichlet boundary conditions in a certain box that is similar to the waveguide: At first, the field is zero everywhere in the strip-like domain. Then the field on the upper boundary starts to oscillate with a sine wave and indeed the field propagates into the strip. The frequency is chosen such that that wave can in fact propagate in the strip.

(These are frames 10, 100, and 130 of the movie, further down are 170, 210, and 290.) About in the middle the strip narrows like in the waveguide. You can see the blob of field in fact enters the narrower region but dies down pretty quickly. In order to see anything, in the display (like for Nimtz) in the lower half of the picture I amplify the field by a factor of 1000. After the obstruction ends, the field again propagates as in the upper bit.

What this movie definitely shows is that the front of the wave (and this is what you would use to transmit any information) everywhere travels at the same speed (that if light). All what happens is that the narrow bit acts like a high pass filter: What comes out undisturbed is in fact just the first bit of the pulse that more or less by accident has the same shape as a scaled down version of the original pulse. So if you are comparing the timing of the maxima you are comparing different things.

Rather, the proper thing to compare would be the time when the field first gets above a certain level, one that is actually reached by the weakened pulse. Then you would find that the speed of propagation is the same, independent of whether the obstruction is there or not.
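
For the curious, the same point can be made in 1+1 dimensions, where a below-cutoff section of waveguide acts like a mass term in the wave equation (the field becomes evanescent there). Here is a minimal sketch of that setup; the grid sizes, the drive frequency, and the thresholds are all my own made-up choices, not Nimtz's parameters:

```python
import numpy as np

# Minimal 1+1 dimensional stand-in for the waveguide movie.  Inside the
# "obstruction" the wave equation picks up a mass term, which is what a
# below-cutoff section of waveguide does: the field becomes evanescent
# there.  All parameters are hypothetical.

def run(obstructed):
    nx, nt, dt = 600, 1400, 0.5            # grid cells, steps, time step
    m2 = np.zeros(nx)                      # m(x)^2 profile
    if obstructed:
        m2[150:160] = 0.25                 # mass m = 0.5 over 10 cells
    u_old = np.zeros(nx)
    u = np.zeros(nx)
    record = []                            # |field| at detector cell 300
    for n in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        u_new = 2.0 * u - u_old + dt**2 * (lap - m2 * u)
        t = (n + 1) * dt                   # drive the left boundary with
        u_new[0] = (np.sin(0.2 * t) * np.sin(np.pi * t / 400)**2
                    if t < 400 else 0.0)   # a smooth, sub-cutoff pulse
        u_new[-1] = 0.0                    # Dirichlet at the far end
        u_old, u = u, u_new
        record.append(abs(u[300]))
    return np.array(record)

free, obst = run(False), run(True)
front_free = int(np.argmax(free > 1e-6))   # first arrival of any signal
front_obst = int(np.argmax(obst > 1e-6))
print(front_free, front_obst)              # the front is never early
print(int(np.argmax(free)), int(np.argmax(obst)))  # maxima can disagree
print(obst.max() / free.max())             # strong attenuation
```

The front (the first time any signal exceeds a fixed tiny threshold) never arrives earlier with the obstruction in place; only the position of the transmitted maximum is free to shift, which is all the "superluminal" measurement amounts to.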

Update: Links updated DAMTP-->IUB

## Friday, September 16, 2005

### Negative votes and conflicting criteria

Yesterday, Matthijs Bogaards and Dierk Schleicher ran a session on the electoral system for the upcoming general election we are going to have on Sunday in Germany. I had thought I knew how it works, but I was proven wrong. I was already aware that there is something like Arrow's impossibility theorem, which states that there is a certain list of criteria your electoral system is supposed to fulfill but which cannot all hold at the same time for any implementation. What typically happens are cyclic preferences (there is a majority for A over B, one for B over C, and one for C over A), but I thought all this was mostly academic and did not apply to real elections. I was proven wrong, and there is a real chance that a paradoxical situation is coming up.

Before explaining the actual problem, I should explain some of the background. The system in Germany is quite complicated because it tries to accommodate a number of principles: First, after the war, the British made sure the system contains some component of constituency vote: Each local constituency (electoral district for you Americans) should send one candidate to parliament who is in principle directly responsible to the voters in that district, so voters have something like "their representative". Second, proportional vote, that is, the number of seats for a party should reflect the percentage of votes for that party in the popular vote. Third, Germany is a federal republic, so the sixteen federal states should each send their own representatives. Finally, there are some practical considerations, like the number of seats in parliament should be roughly 600 and you shouldn't need a PhD in math and political science to understand your ballot.

So this is how it works. Actually, it's slightly more complicated, but that shall not bother us here. And I am not going into the problem of how to deal with rounding errors (you can of course only have integer numbers of seats), which brings its own paradoxes with it. What I am going to cover is how to deal with the fact that the number of seats has to be non-negative:

The ballot has two columns: In the first, you vote for a candidate from your constituency (who is nominated by a party). In the second, you vote for a party for the proportional vote. Each voter makes one cross in each column, one for a candidate from the constituency and one for a party in the proportional vote. There are half as many constituencies as there are seats in parliament, and these are filled immediately according to the majority vote in the first column.

The second step is to count the votes in the second column. If a party neither gets more than five percent of those nor wins three or more constituencies, its votes are dropped. The rest is used to work out how many of the total of 600 seats each of the parties gets.

Now comes the federal component: Let's consider party A and assume the popular vote says they should get 100 seats. We have to determine how these 100 seats are distributed among the federal states. This is again done proportionally: Party A in federal state (i) gets the percentage of the 100 seats that reflects the percentage of the votes for party A from state (i) out of the total votes for party A in all of Germany. Let's say this is 10. Further assume that A has won 6 constituencies in federal state (i). Then, in addition to these 6 candidates from the constituencies, the top four candidates from party A's list for state (i) are sent to Berlin.

So far, everything is great: Each constituency has "their representative" and the total number of seats for each party is proportional to its share of the popular vote.

Still, there is a problem: The two votes in the two columns are independent. And as the constituencies are determined by majority vote, except in a few special cases (Berlin Kreuzberg, where I used to live before moving to Cambridge, being the one with the only constituency winner from the Green party), it does not make much sense to vote for a constituency candidate who is not nominated by one of the two big parties. Any other vote would likely be irrelevant, and effectively your only choice is between the candidates of the SPD and the CDU.

Because of this, it can (and in fact often does for the two big parties) happen that a party wins more constituencies in a federal state than it is entitled to for that state according to the popular vote. In that case (because there are no negative numbers of candidates from the list to balance this), the rule is that all the constituency winners go to parliament and none from the list of that party. The parliament is enlarged for these "excess mandates". So that party gets more seats than its proportion of the popular vote.

This obviously violates the principle of proportional elections, but it gets worse: If that happens in a federal state for party A, you can hurt this party by voting for it: Take the same numbers as above but assume A has won 11 constituencies in (i). If there are no further excess mandates, in the end A gets 101 seats in the enlarged parliament of 601 seats. Now, assume A gets an additional proportional vote. It is not impossible that this does not increase A's total share of 100 seats for all of Germany but does increase the proportional share of A's candidates in federal state (i) from 10 to 11. This does not change anything for the representatives from (i); still the 11 constituency candidates go to Berlin, but there is no excess mandate anymore. Thus, overall, A sends only 100 representatives to a parliament of 600, one less than without the additional vote!
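
To make the arithmetic concrete, here is a toy sketch in Python. The vote counts and the simple rounding rule are hypothetical simplifications of the real allocation procedure, but they reproduce the paradox:

```python
# Toy model of excess mandates and negative vote weight.  All numbers
# are hypothetical; the real allocation uses a more refined rounding
# procedure, but the paradox is the same.

def party_seats(national_seats, state_votes, constituency_wins):
    """Distribute a party's nationally determined seats over the states
    in proportion to its state-level votes; each state keeps at least
    its constituency winners (there are no negative list seats)."""
    total = sum(state_votes.values())
    seats = 0
    for state, votes in state_votes.items():
        entitlement = round(national_seats * votes / total)
        seats += max(entitlement, constituency_wins.get(state, 0))
    return seats

# Party A: 100 seats nationally, 11 constituency wins in state "i",
# which is entitled to only 10 of them -- one excess mandate.
votes = {"i": 100_000, "rest": 900_000}
wins = {"i": 11}
before = party_seats(100, votes, wins)

# Extra second votes in "i" push its entitlement from 10 to 11 while
# (by assumption) A's national total stays at 100 seats.
votes["i"] += 11_000
after = party_seats(100, votes, wins)

print(before, after)  # 101 100 -- the extra votes cost party A a seat
```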

As a result, in that situation the vote for A has a negative weight: It decreases A's share of the parliament. Usually, this is not so much of a problem, because the weight of a vote depends on what other people have voted (which you do not know when you fill out your ballot), and chances are much higher that your vote has positive weight. So it is still safe to vote for your favourite party.

However, this year there is one constituency in Dresden in the federal state of Saxony where one of the candidates died two weeks before election day. To ensure equal chances in campaigning, the election in that constituency has been postponed for two weeks. This means voters there will know the result from the rest of the country. Now, Saxony is known to be quite conservative, so it is not unlikely that the CDU will have excess mandates there. And this might just yield the above situation: Voters from Dresden might hurt the CDU by voting for them in the popular vote, and they would know if that were the case. It would still be democratic in a sense; it's just that if voters there prefer CDU or FDP they should vote for the FDP, and if they prefer SPD or the Greens they should vote for the CDU. Still, it's not clear if you can explain that to voters in less than two weeks... I find this quite scary, especially since all polls predict this election to be extremely close and two very different outcomes are within one standard deviation.

If you are interested in alternative voting systems, Wikipedia is a good starting point. There are many different ones, and because of the above-mentioned theorem they all have at least one drawback.

Yesterday, there was also a brief discussion of whether one should have a system that allows fewer or more of the small parties in parliament. There are of course the usual arguments of stability versus better representation of minorities. But there is another argument against a stable two-party system that is not mentioned often: It is due to the fact that parties can actually change their policies to please more voters. If you assume political orientation is well represented by a one-dimensional scale (usually called left-right), then the situation of ice cream salesmen on a beach could occur: There is a beach of 4km with two competing people selling ice cream. Where will they stand? For the customers it would be best if they each stood 1km from the two ends of the beach, so nobody would have to walk more than 1km to buy an ice cream and the average walking distance is half a km. However, this is an unstable situation, as there is an incentive for each salesman to move further to the middle of the beach to increase the number of customers to which he is closer than his competitor.

So, in the end, both will meet in the middle of the beach, and customers have to walk up to 2km with an average distance of 1km. And if that happens with two parties in the political spectrum, they will end up with indistinguishable political programs, and as a voter you don't have a real choice anymore. You could argue that this has already taken place in the USA or Switzerland (there for other reasons), but that would be unfair to the Democrats.

I should have had many more entries here about politics and the election, like my role models on the other side of the Atlantic. I don't know why these never materialised (virtualised?). So, I have to be brief: If you can vote on Sunday, think of where the different parties actually have different plans (concrete ones, rather than abstract "less unemployment" or "more sunshine") and what the current government has done and whether you would like to keep it that way (I just mention the war in Iraq and foreign policy, nuclear power, organic food as a mass market, immigration policy, tax on waste of energy, gay marriage, student fees, reform of academic jobs, renewable energy); your vote should be obvious. Mine is.

**Update:** The election is over and everybody is even more confused than before. As the obvious choices for coalitions do not have a majority, one has to look at the several colourful alternatives, and the next few weeks will show us which of the several impossibilities will actually happen. What will definitely happen is that in Dresden votes for the CDU will have negative weight (linked page in German, with an Excel sheet for your own speculations). So, Dresdeners, vote for the CDU if you want to hurt them (and you cannot convince 90% of the inhabitants to vote for the SPD).

## Wednesday, September 14, 2005

### Natural scales

When I talk to non-specialists and mention that the Planck scale is where quantum gravity is likely to become relevant, sometimes people get suspicious about this type of argument. If I have time, I explain that to probe smaller length details I would need so much centre-of-mass energy that I would create a black hole and thus still could not resolve them. However, if I have less time, I just say: Look, it's relativistic, gravity and quantum, so it's likely that c, G and h play a role. Turn those into a length scale and there is the Planck scale.
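
If you want to check the quick argument numerically: the unique length you can build from h-bar, G and c is sqrt(h-bar G / c^3). A short sketch using CODATA values:

```python
import math

# The unique length built from hbar, G and c (CODATA values).
hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m / s

l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.3e} m")   # ~1.6e-35 m, the Planck length
```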

If they do not believe this gives a good estimate, I ask them to guess the size of an atom: Atoms are quantum objects, so h is likely to appear; the binding is electromagnetic, so e (in SI units in the combination e^2/4 pi epsilon_0) has to play a role; and it comes out of the dynamics of electrons, so m, the electron mass, is likely to feature. Turn this into a length and you get the Bohr radius.

Of course, like all short arguments, this one has a flaw: there is a dimensionless quantity around that could spoil dimensional arguments: alpha, the fine-structure constant. So you also need to say that the atom is non-relativistic, so c is not allowed to appear.

You could similarly ask for a scale that is independent of the electric charge, and there it is: Multiply the Bohr radius by alpha and you get the electron Compton wavelength h/mc.

You could as well ask for a classical scale, which should be independent of h: Just multiply by another power of alpha and you get the classical electron radius e^2/4 pi epsilon_0 m c^2. At the moment, however, I cannot think of a real physical problem where this is the characteristic scale (NB alpha is roughly 1/137, so each scale is two orders of magnitude smaller than the previous one).
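
The whole ladder of scales, each one factor of alpha below the previous, is easy to verify numerically. A short sketch (using the reduced Compton wavelength h-bar/mc for convenience):

```python
import math

# Three electron length scales, each a factor alpha below the previous.
hbar = 1.054571817e-34       # J s
c = 2.99792458e8             # m / s
m = 9.1093837015e-31         # electron mass, kg
e = 1.602176634e-19          # C
eps0 = 8.8541878128e-12      # F / m

k = e**2 / (4 * math.pi * eps0)   # the combination e^2 / 4 pi epsilon_0
a0 = hbar**2 / (m * k)            # Bohr radius: no c allowed
lam = hbar / (m * c)              # (reduced) Compton wavelength: no e
re = k / (m * c**2)               # classical electron radius: no hbar
alpha = k / (hbar * c)            # fine-structure constant

print(a0, lam, re)                # ~5.3e-11, 3.9e-13, 2.8e-15 m
print(lam / a0, re / lam, alpha)  # all three ratios are ~1/137
```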

Update: Searching Google for "classical electron radius" points to ScienceWorld and Wikipedia, both calling it the "Compton radius". Still, there is a difference of an alpha between the Compton wavelength and the Compton radius.

## Thursday, September 08, 2005

### hep-th/9203227

Reading through the arXiv's old news items, I became aware of hep-th/9203227, whose abstract reads:

> \Paper: 9203227
> From: harvey@witten.uchicago.edu (J. B. Harvey)
> Date: Wed 1 Apr 1992 00:25 CST 1992
>
> A solvable string theory in four dimensions,
> by J. Harvey, G. Moore, N. Seiberg, and A. Strominger, 30 pp
>
> \We construct a new class of exactly solvable string theories by generalizing the heterotic construction to connect a left-moving non-compact Lorentzian coset algebra with a right-moving supersymmetric Euclidean coset algebra. These theories have no spacetime supersymmetry, and a generalized set of anomaly constraints allows only a model with four spacetime dimensions, low energy gauge groups SU(3) and spontaneously broken SU(2)xU(1), and three families of quarks and leptons. The model has a complex dilaton whose radial mode is automatically eaten in a Higgs-like solution to the cosmological constant problem, while its angular mode survives to solve the strong CP problem at low energy. By adroit use of the theory of parabolic cylinder functions, we calculate the mass spectrum of this model to all orders in the string loop expansion. The results are within 5% of measured values, with the discrepancy attributable to experimental error. We predict a top quark mass of $176 \pm 5$ GeV, and no physical Higgs particle in the spectrum.
> \

It's quite old and there are some technical problems downloading it.

## Tuesday, September 06, 2005

### Local pancake and axis of evil

I do not read the astro-ph archive on a daily basis (nor any astro-* or *-ph archive), but I use liferea to stay up to date with a number of blogs. This news aggregator shows a small window with the headline when a new entry appears in the blogs that I told it to monitor. This morning, it showed an entry from Physics Comments with the title "Local pancake defeats axis of evil". My first reaction was that this must be a hoax; there could not be a paper with that title.

But the paper is genuine. I read the four pages over lunch, and it looks quite interesting: When you look at the WMAP power spectrum (or COBE's, for that matter) you realize that for very low l there is much less power than expected from the popular models. Actually, the plot starts with l=2 because l=0 is the 2.73K uniform background and l=1 is the dipole, the component that is attributed to the Doppler shift due to the motion of us (the sun) relative to the cosmic rest frame.

What I did not know is that the l=2 and l=3 modes have a preferred direction and that these directions actually agree (although not with the dipole direction; they are perpendicular to it). This fact was realised by Copi, Huterer, Starkman, and Schwarz (as I am reminded). I am not entirely sure what this means on a technical level, but it could be something like "when this direction is chosen as the z-direction, most power is concentrated in the m=0 component". This could be either due to a systematic error or a statistical coincidence, but Vale cites that such a coincidence is unlikely at the 99.9% confidence level.

This axis encoded in the l=2 and l=3 modes has been termed "axis of evil" by Land and "varying speed of light and I write a book and offend everybody at my old university" Magueijo. In the new paper, Vale offers an explanation for this preferred direction:

His idea is that gravitational lensing can mix the modes, and what appears to be l=2 and l=3 is actually the dipole that is mixed into these higher l by the lensing. To first order, this effect is given by

A = T + grad(T).grad(Psi)

(Jacques is right, I need TeX formulas here!) where T is the true temperature field, A is the apparent temperature field, and Psi is a potential that summarizes the lensing. All these fields are functions of theta and phi, the coordinates on the celestial sphere.

He then goes on and uses a spherical mass distribution of twice the mass of the Great Attractor, 30 Mpc away from us, to work out Psi and eventually A. The point is that the l=1 mode of A is two orders of magnitude stronger than l=2 and l=3, so a small mixing could be sufficient.

What I would like to add here is how to obtain some analytical expressions: As always, we expand everything in spherical harmonics. Then

A_lm = T_lm + integral( Y_lm T_l'm' Psi_l"m" grad(Y_l'm').grad(Y_l"m") )

I am too lazy, but with the help of MathWorld's pages on spherical harmonics and spherical coordinates you should be able to work out the derivatives and the integral analytically. By choosing coordinates aligned with the dipole, you can assume that in the correction term only the l'=1, m'=0 term contributes.

Finally, the integral of three Y's is given by an expression in Wigner 3j-symbols, and those are non-zero only if the rules for the addition of angular momentum hold. Everybody less lazy than myself should be able to work out which A_lm are affected by which modes of Psi_l"m", and it should be simple to invert this formula to find the modes of Psi if you assume that all power in A comes from T_10. In particular, Psi_l"m" should only influence modes with l and m not too different from l" and m". By looking at the coefficients, maybe one is even able to see that only the dipole component of Psi has a strong influence, and this only on l=2 and l=3 and only for special values of m.
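
To see which modes can talk to each other, one does not even need the full coefficients; the selection rules alone already do most of the work. A little sketch (ignoring the gradient factors, which change the coefficients but not these rules):

```python
def gaunt_nonzero(l1, l2, l3, m1, m2, m3):
    """Selection rules for the integral of three spherical harmonics
    Y_l1m1 Y_l2m2 Y_l3m3 over the sphere: the m's sum to zero, the l's
    obey the triangle inequality, and the total l is even (parity)."""
    return (m1 + m2 + m3 == 0
            and abs(m1) <= l1 and abs(m2) <= l2 and abs(m3) <= l3
            and abs(l1 - l2) <= l3 <= l1 + l2
            and (l1 + l2 + l3) % 2 == 0)

# Which lensing modes Psi_{l'' m''} can feed an observed mode A_{l m},
# if all the power comes from the dipole T_{1 0}?  (The conjugate Y in
# the projection flips the sign of m, hence the -m below.)
for l in range(2, 5):
    sources = [(l2, m) for l2 in range(8) for m in range(-l2, l2 + 1)
               if gaunt_nonzero(l, 1, l2, -m, 0, m)]
    print(l, sorted(set(l2 for l2, _ in sources)))  # only l'' = l -/+ 1
```

The printout shows that with the dipole as the source, a lensing mode Psi_l"m" can only feed l = l" plus or minus one, which is the "not too different" statement above made precise.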

This would then be an explanation of this axis of evil.

Still, the big problem not only remains but gets worse: The observed power in l=2 and l=3 is too small to fit the model, and it gets even smaller if one subtracts the leaked dipole.

Using the trackback mechanism at the arXiv, I found that there is also a discussion of this paper going on at CosmoCoffee. There, people point out that what counts is the motion of the lens relative to the background (Vale is replying to comments there as well).

It seems to me that this should be viewed as a two-stage process: First there is the motion of the lens, and this is what converts power in l=1 (due to the lens's motion) into l=2 and l=3. Then there is our motion, but that affects only l=1 and not l=2 and l=3 anymore. Is that right? In the end, it's our motion that turns l=0 into l=1.

But the paper is genuine. I read the four pages over lunch and it looks quite interesting: When you look at the WMAP power spectrum (or COBE for that matters) you realize that for very low l there is much less power than expected from the popular models. Actually, the plot starts with l=2 because l=0 is the 2.73K uniform background and l=1 is the dipole or vector that is attributed to the Doppler shift due the motion of us (the sun) relative to the cosmic rest frame.

What I did not know is that the l=2 and l=3 have a prefered direction and they actually agree (althogh not with the dipole direction, they are perpendicular to it). This fact was realised by Copi, Huterer, Starkman, and Schwarz as I am reminded). I am not entirely sure what this means on a technical level but it could be something like "when this direction is chosen as the z-direction, most power is concentrated in the m=0 component". This could be either due to a systematic error or a statistical coincidence but Vale cites that this is unlikely with 99.9% confidence.

This axis encoded in the l=2 and l=3 modes has been termed "axis of evil" by Land and "varying speed of light and I write a book and offend everybody at my old university" Magueijo. The in the new paper, Vale offers an explanation for this preferred direction:

His idea is that gravitational lensing can mix the modes and what appears to be l=2 and l=3 is actually the dipole that is mixed into these higher l by the lensing. To first order, this is effect is given by

A = T + grad(T).grad(Psi)

(Jacques is right, I need TeX formulas here!) where T is the true temperature field, A is the apprarant temperature field and Psi is a potential that summarizes the lensing. All these fields are function of psi and phi, the coordinates on the celestial sphere.

He then goes on an uses some spherical mass distribution of twice the mass of the Great attractor 30Mpc away from us to work out Psi and eventually A. The point is that the l=1 mode of A is two orders of magnitude stronger than l=2 and l=3 so small mixing could be sufficient.

What i would like to add here is how to obtain some analytical expressions: As always, we expand everything in spherical harmonics. Then

A_lm = T_lm + integral( Y_lm T_l'm' Psi_l"m" grad(Y_l'm').grad(Y_l"m") )

I am too lazy, but with the help of Mathworlds page on spherical harmonics and spherical coordinates you should be able to work out the derivatives and the integral analytically. By choosing coordinates aligned with the dipole, you can assume that in the correction term only the l'=1, m'=0 term contribute.

Finally, the integral of three Y's is given by an expression in Wigner 3j-symbols, and those are non-zero only if the rules for addition of angular momentum hold. Everybody less lazy than myself should be able to work out which A_lm are affected by which modes of Psi_l"m", and it should be simple to invert this formula to find the modes of Psi if you assume all power in A comes from T_10. In particular, Psi_l"m" should only influence modes with l and m not too different from l" and m". By looking at the coefficients, maybe one is even able to see that only the dipole component of Psi has a strong influence, and this only on l=2 and l=3 and only for special values of m.
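Those selection rules can be checked directly; here is a small sketch using SymPy's `wigner_3j` (the particular mode combinations are just illustrative choices):

```python
# Selection rules for the integral of three spherical harmonics:
# it is proportional to Wigner 3j symbols, which vanish unless the
# l's satisfy the triangle inequality and, for m1=m2=m3=0,
# l1 + l2 + l3 is even.
from sympy.physics.wigner import wigner_3j

# a dipole (l'=1) mixed by a dipole potential (l"=1) can feed the quadrupole:
print(wigner_3j(2, 1, 1, 0, 0, 0))  # non-zero
# but not via l"=2 with all m=0, since 2+1+2 is odd:
print(wigner_3j(2, 1, 2, 0, 0, 0))  # 0
```

So with coordinates aligned to the dipole, only a few Psi modes can leak T_10 into l=2 and l=3.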

This would then be an explanation of this axis of evil.

Still, the big problem not only remains but gets worse: The observed power in l=2 and l=3 is too small to fit the model, and it gets even smaller if one subtracts the leaked dipole.

Using the trackback mechanism at the arXiv, I found that there is also a discussion of this paper going on at CosmoCoffee. There, people point out that what counts is the motion of the lens relative to the background (Vale is replying to comments there as well).

It seems to me that this should be viewed as a two-stage process: First there is the motion of the lens, and this is what converts power in l=1 (due to the lens's motion) to l=2 and l=3. Then there is our motion, but that affects only l=1 and not l=2 and l=3 anymore. Is that right? In the end, it's our motion that turns l=0 into l=1.

## Monday, August 29, 2005

### My not so humble opinions on text books

Over at Cosmic Variance, Clifford has asked for opinions on your favourite text book. I, joining in only as commentator number 85, would like to repost my $.02 worth here:

Number one are by far the Feynman Lectures (Vol. I and II). From these I learned how physicists think.

When it comes to a GR text, it's clearly MTW. And yes, a tensor is a machine with slots (egg crates etc.) and not something with indices that transforms in a particular way (as Weinberg wants to make you believe). [I have to admit, I haven't really looked at Sean's book yet.]

Both of these are 100% pure fun to read, but admittedly neither of them can probably be used to accompany a lecture course. This is why many people who were busy with their courses never properly used them. Due to biographical oddities (three months of spare time between high school and uni for Feynman, one year of compulsory [German] community/civil service after two years of uni) I actually read these books from beginning to end. I doubt that many other people can claim this. But it's worth it!

In addition, our library had a German/English bilingual edition of the Feynman Lectures. So besides physics I could also learn physicists' English (which is different from the literature English I learned in high school).

Some other books: Jackson makes a great mouse pad and I have used it to this end for years. Plus it contains everything you ever wanted to know about boundary value problems. But clearly it’s not fun to read.

The writeup of Witten's lectures in the IAS physics lectures for mathematicians contains lots of interesting insights.

And there is (admittedly not a physics text) "Concrete Mathematics" by Graham, Knuth (the Knuth of TeX) and Patashnik. This is a math book for computer scientists. It's my standard reference for formulas containing binomials, for generating functions, sums, recurrence relations, and asymptotic expressions. And (probably thanks to Knuth) it's full of jokes and fun observations. And there are hundreds of problems of a variety of difficulties, rated from "warmups" to "research problems". Therefore it's also a source for Great Wakering.

Finally, there are the books by Landau and Lifshitz. Since my first mechanics course, I have had a strong disliking for them, probably based on my own poor judgement. When I first opened vol. 1, I was confused (admittedly I shouldn't have been) by the fact that they use brackets for the vector product rather than \times like everybody else. OK, OK, it's a Lie bracket, but still, it makes formulas ugly. And then there is the infamous appendix on what a great guy Landau has been.

Marco Zagermann, who was in my year, can still recite the highlights from this appendix: about the logarithmic scale for physicists and how Landau promoted himself on this scale later in his life, and how he only read the abstracts of papers in the Physical Review and then either judged the papers as pathological or rederived their results just from the abstract for himself. And there are more pearls like this.

### LogVyper and summer holiday

After posting my notes on Lee Smolin's paper on the coffee table, I left for two weeks of summer holidays, which I spent in Berlin. The plan was to catch up with all the amenities of a major capital that you just don't get around to on ordinary weekends. We were quite successful with this goal, even spent two days on Usedom (an island in the Baltic Sea), and due to the weather got up to date with the movies (Sin City (---), LA Crash (++), Melinda and Melinda (+), Collateral (open air, o), The Island (+), and something on DVD (--) which I had already forgotten).

Update: A. reminds me the DVD was Spanglish and I thought it was (o).

The other thing I did (besides trying to read Thomas Mann's Magic Mountain; I only got to page 120) is that I got a new dive computer (a Suunto Vyper) and wrote GNU/Linux software to download and print dive profiles recorded with it: LogVyper/. It's under the GNU General Public License. Have a look and please give feedback!

## Tuesday, July 26, 2005

### Bottom line on patents

Now that the EU parliament has stopped the legislation on software patents, it seems time to summarize what we have learned:

The whole problem arises because it is much easier to copy information than to produce it by other means. On the other hand, what's great about information is that you still have it if you give it to somebody else (this is the idea behind open source).

So, there are patents in the first place because you do not want to disfavour companies that do expensive research and development relative to companies that just save these costs by copying the results of this R&D. The state provides patent facilities because R&D is in its interest.

The owner of the patent, on the other hand, should not use it to block progress and competition in the field. He should therefore sell licenses to the patent at prices that reflect the R&D costs. Otherwise patent law would promote large companies and monopolies, as these are more likely to be able to afford the administrative overhead of filing a patent.

Therefore in an ideal world the patent holder should be forced to sell licenses for a fair price that is at most some specific fraction of the realistic costs of the R&D that led to the patent (and **not** the commercial value of the products derived from the patent). Furthermore, the fraction could depend geometrically on the number of licenses sold so far, such that the 100th license to an idea is cheaper than the first and so on (with the idea that from license fees you could at most asymptotically gain a fixed multiple of your R&D investment).

This system would still promote R&D while stopping companies from exploiting their patents. Furthermore it would prevent trivial patents, as those require hardly any R&D and are therefore cheap (probably, you should not be able to patent an idea for which the R&D costs were not significantly higher than the administrative costs of obtaining the patent).
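The geometrically decreasing fee schedule can be sketched numerically; the fraction, R&D cost and decay ratio below are made-up illustration values, not part of the proposal:

```python
# Sketch of the proposed license pricing: the k-th license costs a
# geometrically decreasing fraction of the R&D cost C, so total revenue
# from all licenses is bounded by f*C/(1-r), a fixed multiple of the
# R&D investment.  f, C and r are hypothetical illustration values.

def license_price(k, rnd_cost, fraction=0.5, ratio=0.9):
    """Price of the k-th license (k = 0 for the first one sold)."""
    return fraction * rnd_cost * ratio**k

C = 1_000_000  # assumed R&D cost
prices = [license_price(k, C) for k in range(100)]
print(prices[0])          # first license: 500000.0
print(round(prices[99]))  # the 100th license is much cheaper
cap = 0.5 * C / (1 - 0.9)
print(cap)                # asymptotic revenue cap: five times the R&D cost
```

The cap is what makes the scheme self-limiting: no matter how popular the idea, the holder's total license income stays a fixed multiple of what the R&D actually cost.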

Unfortunately, in the real world it is hard to measure the costs of the R&D that are necessary to come up with a certain idea.

## Friday, July 22, 2005

### Geek girl

Ever worked out your geek code? Consider yourself a geek? Maybe you should reconsider: check out Jeri Ellsworth, especially the Stanford lecture!

## Thursday, July 14, 2005

### PISA Math

Spiegel Online has example problems from the Programme for International Student Assessment (PISA), the international study comparing the problem-solving abilities of pupils in different countries, in which Germany featured in the lower ranks amongst other developing countries.

Today, the individual results for the German federal states were published; Bavaria comes out first and the city states (Hamburg, Berlin, Bremen; curiously all places where I was at the uni for some time...) came out last.

However, this might not only be due to superior schools in conservative, rural federal states but also due to selection: If you are the child of a blue collar worker, your chances of attending high school are six times higher in the city states than in Bavaria. Plus, bad results in these tests can be traced to socially challenged areas with high unemployment and a high percentage of immigrants. And those just happen to be more common in the cities than in rural areas.

But what I really wanted to talk about is the first math problem (the link is in German). My translation is:

**Walking**: The picture shows the footprints of a walking man. The step length P corresponds to the distance between the rearmost points of two successive footprints. For men, the formula n/P = 140 expresses the approximate relation between n and P, where

n = number of steps per minute

P = step length in meters

This is followed by two questions, one asking to solve for P in an example, the other asking for the walker's speed in m/s and km/h given P.

OK, these questions can be solved with simple algebra. But here is what worries me: n/P = 140 looks really suspicious! Not that the units are lacking (so it's like an astronomer's formula where you are told that you have to insert luminosity in magnitudes, distances in parsec and velocities in quarter nautical miles per week), but if they had inserted units, they would see that 140 is actually 140 steps^2/min/m. What is this? They are suggesting that people who make longer steps have a higher frequency of steps?!? C'mon, this is at least slightly against my intuition.
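For concreteness, here is the arithmetic behind the speed question; the step length P = 0.80 m is an assumed example value, not taken from the problem sheet:

```python
# Speed implied by the PISA formula n / P = 140
# (n = steps per minute, P = step length in metres).
# P = 0.80 m is a made-up example value.

P = 0.80                   # step length in metres (assumed)
n = 140 * P                # steps per minute, from the formula
speed_m_per_min = n * P    # metres walked per minute
speed_mps = speed_m_per_min / 60
speed_kmh = speed_mps * 3.6
print(n, speed_mps, speed_kmh)  # note that the speed grows like P**2
```

The P**2 dependence is exactly the oddity complained about above: the formula forces faster cadence on people with longer strides.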

But this is a math rather than a natural sciences question, so as long as there are numbers, it's probably ok...

PS: The English version is on page 9 of this pdf.

## Thursday, June 23, 2005

### Sailing on Radiation

Due to summer temperatures, I am not quite able to do proper work, so I waste my time in the blogosphere. Alasdair Allen is an astronomer whom I know from diving in the English Channel. In his blog, he reports on the recent fuss about Cosmos-1 not reaching its orbit.

Cosmos-1 was a satellite that was supposed to use huge mirror sails to catch solar radiation for propulsion. Al also mentions a paper claiming that this whole principle cannot work, together with a rebuttal.

In physics 101, we've all seen the light mill that demonstrates that photons bouncing off the reflecting sides of the panels transfer their momentum to the wheel. So this shows that you can use radiation to move things.
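The naive momentum-transfer estimate for a perfectly reflecting sail is easy to write down; the solar flux is the standard near-Earth value, but the sail area is an assumed round number (the post does not quote Cosmos-1's actual area):

```python
# Naive radiation-pressure thrust on a perfectly reflecting sail:
# each photon bounces straight back, transferring twice its momentum,
# so F = 2 * flux * area / c.  The sail area is an assumed value.

c = 2.998e8     # speed of light, m/s
flux = 1361.0   # solar constant near Earth, W/m^2
area = 600.0    # assumed sail area, m^2
force = 2 * flux * area / c
print(force)    # a few millinewtons
```

Millinewtons on a light spacecraft add up over weeks, which is the whole appeal of solar sailing; the question below is whether thermodynamics spoils this picture.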

Well, does it?

Gold argues that the second law of thermodynamics is in the way of using this effectively. So what's going on? His point is that once the radiation field and the mirrors are in thermal equilibrium, the mirror emits photons to both sides and there is no net flux of momentum. On general grounds, you should not be able to extract mechanical energy from heat in a world where everything has the same temperature.

The reason that the light mill works is really that the mill is much colder than the radiation. So it seems to me that the real question (if Gold is right, which I tend to think, but as I said above, it's hot and I cannot really convince myself that at equilibrium the emission and absorption of photons on both sides balance) is how long it takes for the sails to heat up. If you want to achieve a significant amount of acceleration they should be very light, which on the other hand means their absolute heat capacity is small.

At least, the rebuttal (it's written by an engineer of the project) is so vague that I don't think he really understood Gold's argument. But it seems that some physics in the earlier stages of the flight was ill understood, as Cosmos-1 did not make it to orbit...

## Monday, June 20, 2005

### The future of publishing

Back in Bremen, and after finishing my tax declaration for 2004, great wakering provided me with an essay by Michael Peskin on the future of scientific publication. Most of it contains the widely accepted arguments about how the arXiv has revolutionized high energy physics, but one aspect was new to me: He proposes that the refereeing process has to be organized by professionals and that roughly 30 percent of the cost of an article in PRL comes from this stage of the publishing process. He foresees that this service will always need to be paid for, but his business model sounds interesting: As page charges don't work, libraries should pay a sum (depending on the size of the institution but not on the number of papers) to these publishers, which then accept papers from authors affiliated with those institutions for refereeing.

This would still require a brave move to get this going, but it would have to come from the libraries. And libraries are well aware of the current crisis in the business (PRL is incredibly cheap (2950 US$) compared to NPB, which costs 15460 US$ per year for institutions).

Once we are in the process of reforming publishing, I think we should also adopt an idea that I learnt from Vijay Balasubramanian: If a paper gets accepted, the name of the referee should also be made public. This would still protect a referee who rejects a paper, but it would make the referee accountable and more responsible for accepting any nonsense.

## Friday, June 17, 2005

### Still more phenomenology

Internet connectivity is worse than ever, so Jan Plefka and I had to resort to an internet cafe at lunch to get online. So I will just give a brief report of what has happened since my last report.

First there was Gordon Kane, who urged everybody to think about how to extract physical data from the numbers that are going to come out of the LHC. He claimed that one should not expect (easily obtained) useful numbers on susy except the fact that it exists. In particular, it will be nearly impossible to deduce Lagrangian parameters (masses etc.) for the susy particles, as there are not enough independent observables at the LHC to completely determine them.

Still, he points out that it will be important to be trained to understand the message that our experimental friends tell us. To this end, there will be the LHC Olympics, where Monte Carlo data of the type that will come out of the experiment will be provided, with some interesting physics beyond the standard model hidden in it, and there will be a competition to figure out what's going on.

Today, Dvali was the first speaker. He presented his model that amounts to an IR modification of gravity (of mass-term type) that is just beyond current observational limits from solar system observations and that would allow for a fit of the cosmological data without dark energy. One realization of that modification would be a 5D brane world scenario with a 5D Einstein-Hilbert action and a 4D EH action for the pull-back of the metric.

Finally, there was Paul Langacker, who explained why it is hard to get seesaw-type neutrinos from heterotic Z_3 orbifolds. As everybody knows, in the usual scenario neutrino masses arise from physics around some high energy (GUT, Planck?) scale. Therefore neutrino physics might be the most direct source of information on ultra high energy physics, and one should seriously try to obtain it from string constructions. According to Langacker, this has so far not been possible (intersecting brane worlds typically preserve lepton number and are thus incompatible with Majorana masses, and he showed that none of the models in the class he studied had a usable neutrino sector).
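The seesaw relation behind this can be checked with back-of-the-envelope numbers; both scales below are assumed, illustrative values:

```python
# Seesaw estimate: a light neutrino mass m_nu ~ m_D**2 / M arises when
# a Dirac mass m_D mixes with a heavy Majorana scale M.  This is why
# tiny neutrino masses point at very high-scale physics.
# Both input scales are assumed, illustrative values.

m_D = 100.0   # Dirac mass around the electroweak scale, in GeV (assumed)
M = 1.0e14    # heavy Majorana scale, in GeV (assumed)
m_nu_GeV = m_D**2 / M
m_nu_eV = m_nu_GeV * 1.0e9
print(m_nu_eV)  # roughly 0.1 eV
```

A sub-eV neutrino mass from a Majorana scale near the GUT range is exactly why the seesaw makes neutrinos such a direct probe of ultra high energy physics.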

## Thursday, June 16, 2005

### More Phenomenology

Now, we are at day three of the string phenomenology conference and it gets better by the day: Yesterday, the overall theme was flux vacua and brane constructions. These classes of models have the great advantage over heterotic constructions, for example, that they are much more concrete (fewer spectral sequences involved) and thus a simple mind like myself has fewer problems understanding them.

Unfortunately, at the same rate as the talks become more interesting (at least to me; I have to admit that I do not get too excited when people present the 100th semi-realistic model that might even have fewer phenomenological shortcomings than the 99th that was presented at last year's conference), the internet connectivity gets worse and worse: In principle, there is a WLAN in the lecture hall and the lobby, and it is protected by a VPN. However, the signal strength is so low that the connection gets lost every other minute, resulting in the VPN client also losing its authentication. As a result, I now type this into my emacs and hope to later cut and paste it into the forms at blogger.com.

Today's session started with two presentations that I am sure many people are not completely convinced by, but they at least had great entertainment value: Mike Douglas reviewed his (and collaborators') counting of vacua and Dimopoulos presented Split Supersymmetry.

Split Supersymmetry is the idea that the scale of susy breaking is much higher than the weak scale (and the hierarchy is to be explained by some other mechanism) but the fermionic superpartners still have masses around (or slightly above) 100 GeV. This preserves the MSSM's good properties for gauge unification and provides dark matter candidates, but removes all possible problems coming with relatively light scalars (CP, FCNC, proton decay). However, it might also lack good motivation (Update: I was told that keeping this amount of susy prevents the Higgs mass from becoming too large. This is consistent with upper bounds coming from loop corrections etc). But as I learned, at the weak scale there are only four coupling constants that all come from tan(beta), so they should run and unify at the susy scale.

But the most spectacular prediction would be that the LHC would produce gluinos at a rate of about one per second, and as they decay through the heavy scalars they might well have a lifetime of several seconds. As they are colour octets, they either bind to q q-bar or to qqq and thus form R-mesons and R-baryons. These (at least if charged, which a significant fraction would be) would get stuck inside the detector (for example in the muon chambers) and decay later into jets that would be easy to observe and that do not come from the interaction area of the detector. So, stay tuned for a few more years.

Talking of interesting accelerator physics beyond the standard model, Gianguido Dall'Agata urges me to spread a rumour that some US accelerator (he doesn't remember which) sees evidence for a Z' that is a sign of another SU(2) group (coupling to right-handed fermions?) that is broken at a much higher scale than the usual SU(2)-left. He doesn't remember any more details, but he promised to dig them up. So again, stay tuned.

Finally, I come to at least one reader at Columbia's favourite topic, The Landscape(tm). Mike gave a review talk that evolved from a talk he has already given a number of times, so there was not much news. I haven't really followed this topic over the last couple of months, so I was updated on a number of aspects, and one of them I find worth discussing. I have to admit it is not really new, but at least to me it got a new twist.

It is the question of which a priori assumptions you are willing to make. Obviously you want to exclude vacua with N=2 susy, as they come with exact moduli spaces. That is, there is a continuum of such vacua, and these would dominate any finite number, however large it (or better: its exponent) might be. Once you accept that you have to make some assumption to exclude some "unphysical" vacua, you are free to exclude further: It is common in this business to assume four non-compact dimensions and to put an upper bound on the size of the compact ones (or a lower bound on KK masses) for empirical reasons. Furthermore, one could immediately exclude models that have, for example, unwanted ("exotic") chiral matter. To me (being no expert in these counting matters), intuition from intersecting branes and their T-duals, magnetized branes, suggests that this restriction would help to get rid of really many vacua, and in the end you might end up with a relatively small number of remaining ones.

Philosophically speaking, by accepting a priori assumptions (aka empirical observations) one gives up the idea of a theory of everything, a theory that predicts every observation you make. Be it the amount of susy, the number of generations, the mass of the electron (in Planck units), the spectrum of the CMB, the number of planets in the solar system, the colour of my car. But (as I have argued earlier) a hope for such a TOE would have been very optimistic anyway. This would be a theory that has only one single solution to its equations of motion (if that classical concept applies). Obviously, this is a much stricter requirement than to ask for a theory without parameters (a property I would expect from a more realistic TOE). All numerical parameters would actually be vevs of some scalar fields that are determined by the dynamics and might even be changing, or at least varying between different solutions.

So, we will have to make a priori assumptions. Does this render the theory unpredictive? Of course not! At least if we can make more observations than the data we had to assume. For example, we could ask for all string vacua with standard model gauge group, four large dimensions, susy breaking at around 1 TeV and maybe an electron mass of 511 keV and some weak coupling constant. Then maybe we end up with an ensemble of N vacua (hopefully a small number). Then we could go ahead (if we were really good calculators) and check which of these is realized, and from that moment on we would make predictions. So it would be a predictive theory, even if the number of vacua would be infinite if we dropped any of our a priori assumptions.

Still, for the obvious reasons, we would never be able to prove that we have the correct theory and that there could not be any other, but this is just because physics is an empirical science and not math.

I think, so far it is hard to disagree with what I have said (although you might not share some of my hopes/assumptions). It becomes really controversial if one starts to draw statistical conclusions from the distribution of vacua, as in the end we only live in a single one. This becomes especially dangerous when combined with the a priori assumptions: These are of course most effective when they go against

the statistics as then they rule out a larger fraction of vacua. It is

tempting to promote any statement which goes against the statistics

into an a priori assumption and celebrate any statement that is in

line with the weight of the distribution. Try for yourself with the

statement "SUSY is broken at a low scale". This all leaves aside the

problem that so far nobody has had a divine message about the

probability distribution between the 10^300 vacua and why it should be

flat.

The talks are getting better by the day: Yesterday, the overall theme was flux vacua and brane constructions. These classes of models have the great advantage over heterotic constructions, for example, that they are much more concrete (fewer spectral sequences involved), and thus a simple mind like myself has fewer problems understanding them.

Unfortunately, at the same rate as the talks become more interesting (at least to me; I have to admit that I do not get too excited when people present the 100th semi-realistic model that might even have fewer phenomenological shortcomings than the 99th that was presented at last year's conference), the internet connectivity gets worse and worse: In principle, there is a WLAN in the lecture hall and the lobby, and it is protected by a VPN. However, the signal strength is so low that the connection gets lost every other minute, resulting in the VPN client also losing its authentication. As a result, I now type this into my emacs and hope to later cut and paste it into the forms at blogger.com.

Today's session started with two presentations that, even if I am sure many people are not completely convinced by them, at least had great entertainment value: Mike Douglas reviewed his (and collaborators') counting of vacua and Dimopoulos presented Split Supersymmetry.

Split Supersymmetry is the idea that the scale of susy breaking is much higher than the weak scale (and the hierarchy is to be explained by some other mechanism), but the fermionic superpartners still have masses around (or slightly above) 100GeV. This preserves the MSSM's good properties for gauge unification and provides dark matter candidates, but removes all possible problems coming with relatively light scalars (CP, FCNC, proton decay). However, it might also lack good motivation (Update: I was told that keeping this amount of susy prevents the Higgs mass from becoming too large. This is consistent with upper bounds coming from loop corrections etc.). But as I learned, at the weak scale there are only four coupling constants, which all derive from tan(beta), so they should run and unify at the susy scale.
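The unification part of the argument can be sketched numerically with the standard one-loop running of the inverse gauge couplings. This is my own illustration, not from the talk: the inputs are rounded measured values at MZ and the textbook SM/MSSM beta coefficients, not the modified intermediate spectrum of Split Supersymmetry.

```python
import math

# One-loop running of the inverse gauge couplings:
#   alpha_i^{-1}(mu) = alpha_i^{-1}(mu0) - (b_i / (2*pi)) * ln(mu / mu0)
# Illustrative numbers only: approximate couplings at MZ in GUT
# normalisation and the standard one-loop beta coefficients (SM below
# the susy scale, MSSM above it).

MZ = 91.19  # GeV
alpha_inv_MZ = [59.0, 29.6, 8.5]   # approx. U(1)_Y (GUT norm.), SU(2), SU(3)
b_SM   = [41 / 10, -19 / 6, -7]
b_MSSM = [33 / 5, 1, -3]

def run(alpha_inv, b, mu_low, mu_high):
    """Run the inverse couplings from mu_low up to mu_high at one loop."""
    t = math.log(mu_high / mu_low)
    return [a - bi / (2 * math.pi) * t for a, bi in zip(alpha_inv, b)]

# SM running up to an assumed susy scale of 1 TeV, then MSSM running
# up to around the GUT scale:
at_tev = run(alpha_inv_MZ, b_SM, MZ, 1e3)
at_gut = run(at_tev, b_MSSM, 1e3, 2e16)
print([round(a, 1) for a in at_gut])
```

With these inputs the three inverse couplings come out within about one unit of each other near 2x10^16 GeV, which is the usual MSSM unification picture this scenario wants to keep.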

But the most spectacular prediction would be that the LHC would produce gluinos at a rate of about one per second, and as they decay through the heavy scalars, they might well have a lifetime of several seconds. As they are colour octets, they bind either to q q-bar or to qqq and thus form R-mesons and R-baryons. These (at least if charged, which a significant fraction would be) would get stuck inside the detector (for example in the muon chambers) and decay later into jets that would be easy to observe and would not come from the interaction region of the detector. So, stay tuned for a few more years.
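To see where lifetimes of seconds can come from, here is a back-of-the-envelope estimate of my own: the gluino decays through virtual heavy scalars, so dimensionally its width scales like m_gluino^5 / m_S^4, with all couplings and phase-space factors of order one dropped and the scalar mass chosen purely for illustration.

```python
# Naive dimensional estimate of the gluino decay width through
# heavy-scalar exchange: Gamma ~ m_gluino^5 / m_scalar^4 (order-one
# couplings and phase-space factors dropped). The scalar mass is a
# free parameter of Split Supersymmetry; the values below are
# illustrative, not taken from the talk.

HBAR = 6.582e-25  # GeV * s

def gluino_lifetime(m_gluino, m_scalar):
    """Order-of-magnitude gluino lifetime in seconds (masses in GeV)."""
    gamma = m_gluino**5 / m_scalar**4  # width in GeV
    return HBAR / gamma

# A 1 TeV gluino with scalars at 10^10 GeV lives for a few seconds,
# long enough for a stopped R-hadron to decay well after its bunch
# crossing; scalars at 10^9 GeV already bring it below a millisecond.
print(gluino_lifetime(1e3, 1e10))
print(gluino_lifetime(1e3, 1e9))
```

The steep fourth-power dependence on the scalar mass is why the observed lifetime, if any, would be a direct probe of the otherwise inaccessible heavy scale.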

Talking of interesting accelerator physics beyond the standard model, Gianguido Dall'Agata urges me to spread a rumour that some US accelerator (he doesn't remember which) sees evidence for a Z' that is a sign of another SU(2) group (coupling to right-handed fermions?) that is broken at a much higher scale than the usual SU(2)-left. He doesn't remember any more details, but he promised to dig them up. So again, stay tuned.

Finally, I come to the favourite topic of at least one reader at Columbia, The Landscape(tm). Mike gave a review talk that evolved from a talk that he has already given a number of times, so there was not much news. I haven't really followed this topic over the last couple of months, so I was updated on a number of aspects, and one of them I find worth discussing. I have to admit it is not really new, but at least to me it got a new twist.

It is the question of which a priori assumptions you are willing to make. Obviously you want to exclude vacua with N=2 susy, as they come with exact moduli spaces. That is, there is a continuum of such vacua, and these would dominate any finite number, however large it (or better: its exponent) might be. Once you accept that you have to make some assumption to exclude some "unphysical" vacua, you are free to exclude further: It is common in this business to assume four non-compact dimensions and to put an upper bound on the size of the compact ones (or a lower bound on KK masses) for empirical reasons. Furthermore, one could immediately exclude models that, for example, contain unwanted ("exotic") chiral matter. To me (being no expert in these counting matters), intuition from intersecting branes and their T-duals, magnetized branes, suggests that this restriction would help to get rid of really many vacua, and in the end you might end up with a relatively small number of remaining ones.

Philosophically speaking, by accepting a priori assumptions (aka empirical observations) one gives up the idea of a theory of everything, a theory that predicts every observation you make: be it the amount of susy, the number of generations, the mass of the electron (in Planck units), the spectrum of the CMB, the number of planets in the solar system, or the colour of my car. But (as I have argued earlier) a hope for such a TOE would have been very optimistic anyway. This would be a theory that has only one single solution to its equations of motion (if that classical concept applies). Obviously, this is a much stricter requirement than to ask for a theory without parameters (a property I would expect from a more realistic TOE). All numerical parameters would actually be vevs of some scalar fields that are determined by the dynamics and might even be changing, or at least varying between different solutions.

So, we will have to make a priori assumptions. Does this render the theory unpredictive? Of course not! At least not if we can make more observations than the data we had to assume. For example, we could ask for all string vacua with standard model gauge group, four large dimensions, susy breaking at around 1TeV, and maybe an electron mass of 511keV and some weak coupling constant. Then maybe we end up with an ensemble of N vacua (hopefully a small number). Then we could go ahead (if we were really good calculators) and check which of these is realized, and from that moment on we would make predictions. So it would be a predictive theory, even if the number of vacua would be infinite if we dropped any of our a priori assumptions.
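This selection procedure can be caricatured in a few lines of toy code. Everything here is made up, the properties, their possible values, and their frequencies; the point is only the mechanism: each independent empirical cut keeps a fixed fraction of the ensemble, so a handful of cuts can shrink a huge number of candidates dramatically.

```python
import random

# Toy ensemble of "vacua" with a few invented discrete properties.
# No resemblance to real string vacua is intended.
random.seed(0)

def random_vacuum():
    return {
        "gauge_group": random.choice(["SM", "other1", "other2", "other3"]),
        "large_dims": random.choice([3, 4, 5, 6]),
        "susy_scale_TeV": random.choice([1, 10, 100, 1000]),
    }

ensemble = [random_vacuum() for _ in range(100_000)]

# Apply the a priori assumptions as cuts: keep only vacua matching
# what we have already observed (or assumed).
selected = [v for v in ensemble
            if v["gauge_group"] == "SM"
            and v["large_dims"] == 4
            and v["susy_scale_TeV"] == 1]

# Each cut keeps about 1/4 of the vacua, so roughly 1/64 of the
# ensemble survives all three. Predictions would then come from
# whichever of the surviving candidates is actually realised.
print(len(ensemble), "->", len(selected))
```

Of course, in the real problem neither the ensemble nor the cuts are this clean, and the fractions removed by each cut are exactly what nobody knows how to compute; that uncertainty is where the statistical controversy below comes in.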

Still, for the obvious reasons, we would never be able to prove that

we have the correct theory and that there could not be any other, but

this is just because physics is an empirical science and not math.

I think that so far it is hard to disagree with what I have said (although you might not share some of my hopes/assumptions). It becomes really controversial if one starts to draw statistical conclusions from the distribution of vacua, as in the end we live in only a single one. This becomes especially dangerous when combined with the a priori assumptions: These are of course most effective when they go against the statistics, as then they rule out a larger fraction of vacua. It is tempting to promote any statement which goes against the statistics into an a priori assumption and to celebrate any statement that is in line with the weight of the distribution. Try it for yourself with the statement "SUSY is broken at a low scale". This all leaves aside the problem that so far nobody has had a divine message about the probability distribution among the 10^300 vacua and why it should be flat.
