Bram Boroson, Master of Subtle Ways and Straight (bram) wrote,
Spin networks: some continuous physics background (to be revised)

Happy birthday, Sir Isaac Newton!!! You were a devious jealous joyless little prick but you gave us the tools to understand and create the modern world!

Some parts will be review, some will swoosh over heads:



I'm going to keep as best I can regular 9-5 hours in the office, working on first my Chandra data and then on the Her X-1 FUSE data for my poster. Though everyone else is mostly away on vacation, I'm a researcher damnit, and this is research time. OTOH, will I be able to resist the warm California weather?

Spin networks and loop quantum gravity: an approach to unifying the two major physics theories, quantum mechanics and general relativity--two theories which right now appear to flatly contradict each other. Superstring theory's where most people in high energy physics think the future is, but I'm intrigued by spin networks.

First off, what I'm reading to educate myself. John Baez Paper 1 is my main source these days. There's also a previous paper John Baez Paper 0. And just now I'm reading on the web this "living review" by Carlo Rovelli.

Ok, before I go show people like pbrane how little I comprehend of this stuff, I thought I'd go over some review material and prerequisites.

"Superspace": one point that I was slow to pick up on is that much of this work in quantum gravity is about 3-dimensional spatial slices of 4-dimensional spacetime. At first, when people tried to quantize gravity, they thought general relativity was a theory about 4-dimensional spacetime, and that they should treat it like that. Now people think that mucks up the quantum mechanics. When you look at a 3-dimensional spatial slice, not only can that space have intrinsic curvature, but it can also have extrinsic curvature--curvature that comes from how the slice is bent within the higher-dimensional spacetime, the way a rolled-up sheet of paper is intrinsically flat but still curves through the space around it.

Spin networks themselves are vertices connected by edges--in other words, graphs. In Penrose's original formulation, 3 edges met at every vertex. Associated with each edge was a positive integer representing the amount of spin, in units of an electron's spin (hbar/2). The numbers on the edges that met at a vertex were related by a "triangle inequality." This spin network can also be viewed as a "strand network", which consists of n lines for each edge with number n. The overall network has a probability associated with it. Penrose showed how to calculate the probability from the strand network. At each vertex, a strand coming from one edge can go off in any number of ways, joining either of the other two edges and crossing over other strands. In other words, there are a number of different strand networks compatible with each spin network. There are combinatorial rules taking into account all these different strand networks and how the strands cross--these let you calculate the probabilities that when you combine two objects with given spin, you get out certain values of spin as the end result.
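Penrose's vertex rule is simple enough to put in code. Here's a toy sketch (my own function, not from any of the papers above) of the admissibility check at a trivalent vertex, labeling each edge by the positive integer n = 2j, i.e. the spin in units of hbar/2:

```python
def admissible(a, b, c):
    """Can three edges with labels a, b, c (each twice the spin) meet at a
    trivalent vertex? Requires the triangle inequality plus a parity
    condition (the total number of strands must pair off evenly)."""
    return (abs(a - b) <= c <= a + b) and (a + b + c) % 2 == 0

# two spin-1/2 edges (label 1) can combine to spin 0 or spin 1:
print(admissible(1, 1, 0))  # True
print(admissible(1, 1, 2))  # True
print(admissible(1, 1, 1))  # False: parity fails
print(admissible(1, 1, 4))  # False: triangle inequality fails
```

The parity condition is what lets the n strands from each edge pair off against the other two edges' strands with none left over.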

In modern spin networks, the graph can be oriented. There can be more than 3 edges at each vertex. Instead of integers at each edge there are representations of the gauge group, and at each vertex there is an "intertwining" operator that relates the edges. I need to understand this better, and how it relates to Wilson loops.

Penrose's original motivation for spin networks was very different from their current use. Penrose was influenced by Mach's principle (a more philosophically oriented relativity principle that says only relative measurements are real), as well as a desire to make physics discrete. Say two quantum systems have spin. How do their spin directions point? From a Mach's principle point of view, that has no absolute meaning. Instead, put the two systems together into a composite system, and see how they mesh. Here's where I previously summarized Penrose's approach.

Modern use of spin networks: the modern use of spin networks (from the likes of Rovelli, Smolin, and Markopoulou-Kalamara) arises from the fact that when you look at loop quantum gravity theories, the ways of talking about spacetime are highly redundant. Take away those redundancies (choice of coordinate system or "diffeomorphism", and gauge choice--to be described below) and you're left with only a bare-bones description of the world in terms of connected graphs. The achievement of loop quantum gravity is describing the "kinematic states" of the gravitational field. Although once you have a gravitational field you can change coordinates up the wazoo, the idea is that there are only certain bare-bones "templates" possible for the gravitational field. And if the theory is right, there is a definite "spectrum" of possible areas and volumes in our world. (Roughly, the area of a surface in Planck units is proportional to the number of spin network edges that cross it, while the volume of a region is proportional to the number of vertices it encloses. For some reason it's a little harder for loop quantum gravity to talk about the lengths of lines than about areas and volumes.)
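To make the "spectrum of areas" concrete: the standard loop quantum gravity result (due to Rovelli and Smolin) is that a surface punctured by spin-network edges carrying spins j_i has area A = 8*pi*gamma * sum_i sqrt(j_i*(j_i+1)) in Planck units, where gamma is a free parameter of the theory (the Immirzi parameter). A toy calculator, with my own names and gamma left as an input:

```python
import math

def area_quantum(spins, gamma=1.0):
    """Area (in Planck units) of a surface punctured by spin-network edges
    carrying spins j_i: A = 8*pi*gamma * sum_i sqrt(j_i*(j_i+1))."""
    return 8 * math.pi * gamma * sum(math.sqrt(j * (j + 1)) for j in spins)

# the smallest nonzero area: a single spin-1/2 puncture
print(area_quantum([0.5]))
# more punctures, bigger area--a discrete spectrum, not a continuum
print(area_quantum([0.5, 1, 1.5]))
```

The point is that only these discrete values are possible: area grows by adding punctures (or raising spins), not continuously.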

Wheeler-DeWitt equation: Just as in good old Schrodinger wave-function quantum mechanics the wave function tells you the probability of a particle being somewhere, the gravitational wave function tells you the probability of the spacetime gravitational field having a certain configuration. This is a little problematic. In QM, you "normalize" the wave function by requiring that there is a probability of 1 for the particle to be anywhere at all. So you integrate over all spatial coordinates. But to certify that there is a probability of 1 for any possible gravitational field, you have to do an integration where your "points" are entire configurations of the gravitational field (i.e. you integrate over an infinite-dimensional configuration space), and I believe this causes some headaches.

The Wheeler-DeWitt equation is for quantum gravity what the Schrodinger equation is for normal quantum mechanics. Want to see it? It's easy:

H Psi = 0

That's it. But oy, what the fuck does it mean? See here, H is the "Hamiltonian". For comparison, here's the Schrodinger equation, suitably covered up to look as simple as it really is:

H Psi = dPsi/dt

(that's a partial derivative d)

So the Schrodinger equation tells us how the wave function changes over time, but the Wheeler-DeWitt equation just annihilates the wave function! (Remember this is a wave function not of probabilities in space, but probabilities OF spaces!) This also causes a "problem of time" in quantum mechanics.

The reason that the Wheeler-DeWitt equation is so odd is that it's diffeomorphism invariant--a squelchy-sounding phrase meaning it don't matter what coordinate system you use. (I'll think of a better way of putting it.)

Metric and Connection: In Einstein's General Relativity, spacetime is curved. This causes much consternation among the uninitiated. What the fuck? How can spacetime be curved? Well, stop trying to think about it intuitively, and think by analogy and pattern. On curved surfaces, for one, the Pythagorean theorem doesn't hold. Heck, even if you just use coordinate systems other than the normal rectangular Cartesian coordinates, the Pythagorean theorem doesn't hold. In polar coordinates, for a small change dr and small change dtheta, the squared distance between two points is ds^2 = dr^2 + r^2 dtheta^2. So the distance depends on the actual coordinate value (r) as well as the change in coordinates. This is to be contrasted with the Pythagorean theorem, which says ds^2 = dx^2 + dy^2. And then say you had non-orthogonal coordinates (x' and y' axes along any 2 non-parallel lines in your x-y plane)--you'd find that the squared distance between two points would contain terms like dx dy. So in general it turns out that to find the distance between two closely separated points, you use something called a "metric":

ds^2 = g_ij dx^i dx^j

i and j are superscripts on x^i and x^j; they are not powers. I use superscripts and subscripts because if you're rigorous you distinguish between "covariant" and "contravariant" quantities.

On the right hand side, we are summing over i and j (whenever an index is repeated in a product, we sum over that index--this is called the Einstein summation convention, and Einstein joked that it was his great contribution to pure mathematics!)

So g_ij is a function of position (as in the polar coordinate case.)
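A quick numerical check of the line element, using numpy's einsum, whose subscript string is literally the summation convention: repeated indices get summed. (The function name and numbers are my own toy choices.)

```python
import numpy as np

def ds2(g, dx):
    """Squared line element ds^2 = g_ij dx^i dx^j, with the repeated
    indices i and j summed automatically by einsum."""
    return float(np.einsum('ij,i,j->', g, dx, dx))

# polar coordinates at r = 2: g = diag(1, r^2)
r = 2.0
g_polar = np.diag([1.0, r**2])
step = np.array([0.1, 0.05])      # (dr, dtheta)
print(ds2(g_polar, step))         # dr^2 + r^2 dtheta^2 = 0.1^2 + 4*0.05^2 = 0.02
```

Note how the same step (0.1, 0.05) would give a different ds^2 at a different r--the metric depends on position.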

Note that you can always make a curved surface look flat if you look closely enough. On the surface of a unit sphere you have the metric ds^2 = dphi^2 + sin^2(phi) dtheta^2, and the g_theta,theta term contains the variable factor sin^2(phi), but over a small enough patch this looks like flat space, and the metric g can be made close to just having 1 on its diagonal and 0 elsewhere.

Alone, g doesn't tell you whether space--or spacetime--is curved or not. (Einstein's contribution, egged on by Minkowski, was to see that you can add time to the coordinates, with a - sign in its contribution to g, because the constancy of the speed of light translates to the 4-d spacetime interval being invariant.) Polar coordinates gave us a funky-looking g (it depended on the position r), as did the truly curved coordinates on the surface of a sphere.

So Einstein created another tensor. (Yes, g_ij is a tensor, a souped-up vector that has two indices and transforms in a special way when you change coordinates--that's one way to think about it without getting into dual spaces.) This one is capitalized, G_ij, and takes into account second derivatives of g_ij with position. Actually, Riemann had already created a tensor to determine whether space is curved (0 when flat, nonzero when curved), but the Einstein tensor can be 0 even when space isn't flat, because spacetime can be curved where there is no matter (for example, gravity waves--or more simply, you could be standing at some distance from a mass that curves your space), and Albert wanted to set G_ij equal to the matter-energy density tensor (good move.)

The connection is another way to think about space(time) curvature. Imagine you have some vector field, defined at every point in space, and you want to understand its directional derivative. Well, if you move in the direction of coordinate number i, component j of your vector could be changing, but is it changing because the vector field is changing or because the coordinates themselves are changing?

Imagine you're near the north pole. You point south. Now you move in a circle about the pole, and the vector field still points south--but it's rotated a bit because the direction south is itself rotating.

So in general relativity, the connection has 3 indices: it tells you how the unit vector in the i direction changes in the j direction when you move along direction k. The connection can be computed from the metric or can be taken as basic in its own right.

Essentially it tells you how to compare things when you go from place to place, when the conventions change from place to place.
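Here's a toy numerical version of the north-pole story: parallel-transport a vector once around a circle of constant colatitude phi on the unit sphere, using the connection coefficients of the metric ds^2 = dphi^2 + sin^2(phi) dtheta^2, and measure how much it has rotated when it comes back. (The crude integration scheme and names are my own; the classic answer is a deficit angle of 2*pi*(1 - cos(phi)).)

```python
import math

def holonomy_angle(colatitude, steps=20000):
    """Carry the vector (1, 0) once around a circle of constant colatitude
    phi on the unit sphere by parallel transport, using the connection
    coefficients of ds^2 = dphi^2 + sin^2(phi) dtheta^2, and return the
    net angle it has rotated by when it returns."""
    phi = colatitude
    v_phi, v_theta = 1.0, 0.0            # vector components in the (phi, theta) basis
    dtheta = 2 * math.pi / steps
    for _ in range(steps):
        # parallel transport: the connection supplies the compensating terms
        dv_phi = math.sin(phi) * math.cos(phi) * v_theta * dtheta
        dv_theta = -(math.cos(phi) / math.sin(phi)) * v_phi * dtheta
        v_phi, v_theta = v_phi + dv_phi, v_theta + dv_theta
    # angle relative to the start, measured in an orthonormal frame
    return math.atan2(math.sin(phi) * v_theta, v_phi)

phi = 0.5
print(holonomy_angle(phi), 2 * math.pi * (1 - math.cos(phi)))  # should agree
```

Near the pole (small phi) the deficit 2*pi*(1 - cos(phi)) goes like pi*phi^2: the vector comes back almost unrotated, which is the flat-space limit.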

New variables: Abhay Ashtekar found a way of talking about general relativity in terms of new variables, I believe emphasizing the connection over the metric. What was so bad about the old variables? (This is an area where I need more autodidactism.) When you try to quantize general relativity--well, you do it "simple" first, just the gravitational field without matter (for example, empty space with gravity waves.) The equations of general relativity then just become

G_ij = 0

Now, when one of the indices is 0 (the time index), you get equations involving only first time derivatives of the metric g. So one way of writing general relativity is to think of some of these equations as specifying "initial conditions" (just like in ordinary Newtonian mechanics you'd specify the starting velocities and the laws o' physics would tell you the subsequent evolution) and the rest as encoding the dynamics.

Though I guess those initial conditions are more complicated, and that causes trouble. These "initial conditions" are more like full-fledged equations linking the velocity and position--as if in Newtonian mechanics, you could only start off in certain states of motion. In fact, the Wheeler-DeWitt equation is the quantum version of one of those equations.

I admit one of the things I'm kind of confused by is under what circumstances the 4-d spacetime is being broken into 3 spatial and one time coordinate...

Also, when you quantize GR, you need a "position" (the metric) and a "conjugate momentum" which turns out to be a nonpolynomial mess which frustrated quantization in the old variables...

Gauges: In quantum mechanics, probability density is defined in terms of the Schrodinger wave function Psi, which gives a complex number for every point in space. The squared norm of that complex number (its absolute value squared) is the probability density (probability per unit volume). So if only the norm matters, why use complex numbers at all?

Well, the difference in the phase (how much of the complex number is real and how much imaginary) from point to point affects the time evolution of the wave function (this is given by Schrodinger's equation.)

But now let's say we throw out the baby with the bathwater and see what happens. Let's say we change the phase everywhere and at all times and in a completely arbitrary (though continuous, let's stay reasonable) fashion. In order to deal with this, you have to add in a compensating factor whenever you take a derivative, whenever you compare the phases of nearby points. This compensating factor is the gradient of the phase--that is, (dtheta/dx,dtheta/dy,dtheta/dz,dtheta/dt), where d is partial derivative.

Now it turns out that electromagnetism can be defined in terms of a 4-vector potential A. The normal electric potential is one component, and the other 3 components form a vector that gives you the magnetic field when you take the curl (for the uninitiated, the curl is a funky kind of derivative of a vector field that gives you another vector field.)

Just as you can make an arbitrary choice of what the electric potential is--it's only changes that have physical significance--so too you can add the 4-gradient of any field to the A field and get the same physical result.

So there are two redundancies: phase and A field. Surprise: it turns out that the compensating field you need to add to allow arbitrary phase is also the 4-gradient of a field.

So there is no "right" value for the phase field, any more than there is for the A field alone--only together do they make physical predictions.

And you are really forced into considering A fields once you allow the phase to be locally arbitrary. So electromagnetism is a consequence of gauge invariance.
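You can check all this numerically in one dimension. A sketch (the sample wave function, gauge field, and phase below are my own arbitrary choices, with the charge set to 1): shift the phase by an arbitrary theta(x), shift A by dtheta/dx, and the covariant derivative D = d/dx - iA of the wave function picks up exactly the overall phase factor--so anything built from D Psi makes the same physical predictions:

```python
import cmath, math

# sample wave function, gauge field, and gauge parameter (all illustrative)
psi = lambda x: cmath.exp(1j * x**2)
A = lambda x: 0.3 * x
theta = lambda x: math.sin(x)
dtheta = lambda x: math.cos(x)

h = 1e-6
def deriv(f, x):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

def D(f, a, x):
    """Covariant derivative (d/dx - i*a) applied to f at x."""
    return deriv(f, x) - 1j * a(x) * f(x)

# gauge transform: psi -> e^{i theta} psi, and A -> A + dtheta/dx to compensate
psi2 = lambda x: cmath.exp(1j * theta(x)) * psi(x)
A2 = lambda x: A(x) + dtheta(x)

x0 = 0.8
lhs = D(psi2, A2, x0)
rhs = cmath.exp(1j * theta(x0)) * D(psi, A, x0)
print(abs(lhs - rhs))   # ~0: the shifted A exactly compensates the new phase
```

Without the shift in A, the extra i*(dtheta/dx) term from differentiating the phase would stick around and change the physics--that's the sense in which you're "forced into" the A field.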

Now, the electromagnetic 4-potential A is really analogous to the gravitational connection. (Hm, I'm hungry and stupid at the moment--what is the symmetry group of the standard GR connection?)

GR: to look at derivatives from point to point, you need this "connection" which tells you how the coordinate convention is changing

EM: to look at derivatives from point to point, you need the 4-potential to tell you how the phase convention is changing

Lie groups: Pronounced "lee". These are continuous groups--a simple example is spinning a dial. You can spin by any angle, you can compound two spins to get the equivalent of a 3rd, there's an "identity" spin (not spinning at all), and there's always an "inverse" spin (in the opposite direction.) This Lie group is called SO(2) or U(1): you can get a rotation in a plane either by multiplying vectors by Special Orthogonal (length-preserving, transpose=inverse, determinant 1) 2x2 matrices, or by multiplying complex numbers (which can be represented in a plane with one axis real and the other imaginary) by Unitary matrices (conjugate transpose=inverse)--well, a 1x1 unitary matrix is just a complex number of norm 1! And this complex number is the arbitrary phase of the wave function.
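The two descriptions really are the same group, which is easy to check numerically (a sketch; the function name and sample angles are mine):

```python
import numpy as np

def rot(angle):
    """2x2 rotation matrix: an element of SO(2)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

a, b = 0.7, 1.1
# compounding two spins gives the spin by the summed angle (the group law)
assert np.allclose(rot(a) @ rot(b), rot(a + b))
# the identity spin, and inverse = transpose (orthogonality)
assert np.allclose(rot(0.0), np.eye(2))
assert np.allclose(rot(a).T @ rot(a), np.eye(2))
# the very same group law as multiplying unit complex numbers (U(1))
assert np.isclose(np.exp(1j * a) * np.exp(1j * b), np.exp(1j * (a + b)))
```

Multiplying by e^{i*angle} rotates the complex plane exactly the way rot(angle) rotates the x-y plane.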

So the gauge symmetry of electromagnetism--saying that you can rotate the phase of the wave function at every point in space--is a U(1) symmetry.

But people (Yang and Mills) asked: what about other symmetries? There's 3-d rotation, for example, which can be represented by orthogonal 3x3 matrices (SO(3)) or by complex 2x2 matrices (SU(2)); the "Pauli" matrices can be thought of as "generators", but I'm a little hazy and lazy on group representations--also on when to stick the S in front of the name--it means Special, meaning has determinant of 1, right?
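For SU(2), the Pauli matrices do generate the group: since (n.sigma)^2 = 1 for a unit vector n, the matrix exponential exp(-i*(angle/2)*n.sigma) closes into a cosine and a sine, and the result is unitary with determinant 1 (so yes: S means Special means determinant 1). A sketch--conventions like the factor of 1/2 and the overall sign vary by author:

```python
import numpy as np

# the three Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(angle, axis):
    """Rotation by `angle` about `axis`, as a 2x2 SU(2) matrix:
    exp(-i*(angle/2)*n.sigma) = cos(angle/2)*I - i*sin(angle/2)*(n.sigma)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    ndots = sum(ni * si for ni, si in zip(n, sigma))
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * ndots

U = su2(0.9, [1, 1, 0])
assert np.allclose(U.conj().T @ U, np.eye(2))   # Unitary: conj. transpose = inverse
assert np.isclose(np.linalg.det(U), 1.0)        # Special: determinant 1
```

Note the angle/2: a rotation by 2*pi gives U = -I, which is why SU(2) double-covers the ordinary rotations SO(3).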

So you play the same game: at each point in space in order for this "internal coordinate" to take on arbitrary values, you have to have compensating "connection" fields--once you allow for those fields, you get dynamics.

Yang-Mills theory was eventually turned into a theory of the weak nuclear force.

I think one of the innovations of the loop quantum gravity approach is to make GR more similar to Yang-Mills theory. It uses an SU(2) connection, as well as a "square root of the metric" which is almost, but not entirely, non-analogous to an electric field:

g g^ij = E^i E^j

Here, g is the determinant of the g_ij matrix. AND this is only for the 3-d "spatial" part of the metric. Because how do you specify a matrix like g^ij in terms of a vector like E^i? Well, the vector's complex. In 3-d, then, you have 3x2=6 independent real quantities. For the 3x3 matrix, you'd have 9 elements, but it has to be symmetric, so that leaves 6 independent elements there too!

The theory's been changed a few times, and it's a little confusing as to what's real and what's complex--I think the connection A can be made real.

The curvature within the 4-d spacetime is taken into account by the connection, which is also 3d but contains a term that depends on the curvature.

Wilson loops: Wilson loops are the key to the link between ordinary continuous physics and spin networks. Ok, imagine you do a line integral around a closed loop, and what you integrate is the connection A. This is going to be gauge invariant: you can choose different gauge-equivalent connections A, but you'll get the same value for the loop integral. Why? Well, I understand this easiest in the case of electromagnetism. There, choosing a new gauge means choosing a new local rotation (phase of the wave function.) Then the gradient of this theta (a function of the space coordinates) can be added to A. Now, integrating the gradient of some function around a closed loop will give you zero! (The same idea behind the fact that any force derived from a potential function is conservative, i.e. the line integral of the gradient of the potential is zero over a closed path.)
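The "gradient integrates to zero around a loop" point is easy to verify numerically. A sketch in 2-d (the fields below are my own examples): a pure-gauge piece, the gradient of some theta, drops out of the loop integral, while a connection with genuine curl (a constant magnetic field) contributes the enclosed flux:

```python
import numpy as np

def loop_integral(field, n=20000):
    """Line integral of a planar vector field around the unit circle."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x, y = np.cos(t), np.sin(t)
    dx, dy = -np.sin(t) * (2 * np.pi / n), np.cos(t) * (2 * np.pi / n)
    fx, fy = field(x, y)
    return float(np.sum(fx * dx + fy * dy))

# pure gauge: the gradient of theta(x, y) = x**2 * y
grad_theta = lambda x, y: (2 * x * y, x**2)
print(loop_integral(grad_theta))                 # ~0: pure gauge drops out

# a connection with curl 2 (a constant "magnetic field"): A = (-y, x)
print(loop_integral(lambda x, y: (-y, x)))       # ~2*pi: the enclosed flux
```

For a non-abelian group like SU(2) the analogous gauge-invariant object needs the path-ordered exponential of the line integral, with a trace taken at the end--which is the Wilson loop proper, and I believe the answer to the question below.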

(Actually I'm a little unclear on why the Wilson loop is the exponential of this line integral--and then you take the trace, which I suppose is needed when you have a higher-dimensional symmetry than U(1)?)