Interstellar travel

The Alpha Centauri system consists of two stars and at least one planet. It takes light about 4 years to go there from our Sun. Distances are not drawn to scale (otherwise you wouldn’t see anything).

The last two decades have seen significant progress in our understanding of the universe. While previously we knew only the planets and moons of our own solar system, we are now aware of many planets orbiting distant stars. At last count, there are more than 800 of these extrasolar planets (see http://exoplanet.eu/). Even in the star system closest to our sun, the alpha Centauri system, an extrasolar planet candidate has recently been detected.

What would it take to visit another star? This question has been discussed seriously for several decades now, even before any extrasolar planet had been discovered. The point of this blog post will be to recall how hard interstellar travel is (answer: it is really, really hard).

The challenge is best illustrated by taking the speed of the fastest spacecraft flying today: that is the Voyager 1 space probe, flying away from our sun at about 17 kilometers per second. At this speed, it would take Voyager about 70,000 years to reach alpha Centauri, if it were flying in that direction. It is safe to assume that no civilization is that patient (or that long-lived), waiting through several ice ages to hear back from a space probe launched tens of thousands of years ago. Thus, faster space probes are called for, preferably with a mission duration of only a few decades to reach the target star.
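
A quick back-of-the-envelope check of that travel time (a rough sketch; it ignores the fact that Voyager 1 is not actually headed toward alpha Centauri and that its speed changes slowly over time):

```python
# Travel time to alpha Centauri at Voyager 1's speed.
light_year_km = 9.46e12              # kilometers in one light year
distance_km = 4.0 * light_year_km    # about 4 light years
voyager_speed = 17.0                 # km/s, approximate speed of Voyager 1 relative to the Sun

travel_time_years = distance_km / voyager_speed / (3600 * 24 * 365.25)
print(f"Travel time at Voyager speed: about {travel_time_years:,.0f} years")   # roughly 70,000 years
```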

Why travel to another star?

First, however, we should ask: why would one want to travel there? The short answer is that trying to explore another planet without going there is quite challenging.

Of course, as the detection of extrasolar planets shows, one can at least obtain some information about the planet's orbit and its mass just by looking at the star. It is even possible to figure out a bit about the planet's atmosphere, by doing spectroscopy on the small amount of starlight that is reflected from the planet. In other words, one tries to see the colors of the planet's atmosphere. This has been achieved recently for a planet orbiting in a star system 130 light years from earth (see http://www.eso.org/public/news/eso1002/). The spectrum that can be teased out with this method is quite rough, since it is extremely challenging to see the planet's spectrum right next to the much brighter star. In any case, this is a promising way to learn more about planetary atmospheres. In the best possible scenario, changes in the atmosphere might then hint at life processes taking place on the planet.

However, measuring the spectrum in this way only gives an overall view of the planet's color. It does not reveal the shape of the planet's surface (the clouds, oceans, continents etc.). In order to take a snapshot of a planet's surface with good resolution, one would need to build gigantic telescopes (or telescope arrays). We can illustrate this by taking alpha Centauri as an example, our closest neighboring star system at a distance of only about 4 light years. If we wanted to resolve, say, 1 km on the surface of the planet, we would need a telescope with a diameter the size of the earth! At least, this is the result of a rough estimate based on the standard formula for the resolving power of a telescope.
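
That rough estimate can be sketched in a few lines, using the standard diffraction-limit formula (Rayleigh criterion) and visible light; the exact wavelength only changes the answer by a factor of order one.

```python
# Telescope diameter needed to resolve a 1 km feature at the distance of alpha Centauri,
# using the Rayleigh criterion: angular resolution = 1.22 * wavelength / diameter.
wavelength = 500e-9          # m, visible light (assumed)
distance = 4.0 * 9.46e15     # m, about 4 light years
feature_size = 1000.0        # m, the surface detail we want to resolve

required_angle = feature_size / distance       # radians
diameter = 1.22 * wavelength / required_angle
print(f"Required mirror or array diameter: about {diameter / 1e3:,.0f} km")   # ~20,000 km, of order the Earth's size
```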

Therefore, it seems one would need to travel there, even if only to take a look and send back some pictures to earth. More ambitious projects would then involve sending a robotic probe down to the surface of the planet (just like the Curiosity rover currently exploring Mars), or even sending a team of astronauts. However, since interstellar travel is really hard, as we will see, we will be content in our estimates with the most modest approach, i.e. an unmanned space probe whose goal is to take some close-up pictures. That would probably mean a spacecraft of about a ton (1000 kg), since that is the size of probes like the Mars Global Surveyor or Voyager 1. Possibly that mass could be reduced, but even if the imaging and processing system were only a few tens of kilograms, one still needs a power source and a radio dish for communicating back to earth.

The options

The Voyager spacecraft was carried up by a rocket and then, in addition, used the gravitational pull of the large planets Jupiter and Saturn to reach its present speed. However, as we have seen above, 17 kilometers per second is just not fast enough. It is a thousand times too slow.

If you want to reach alpha Centauri within 40 years, you need to travel at 10 percent of the speed of light, since alpha Centauri is 4 light years away. Which concepts are out there that provide acceleration to speeds of this kind?

Ordinary rocket fuel is not good enough. What could conceivably work are concepts based on nuclear propulsion. Project Orion, a study from the 1950s, suggested igniting nuclear bombs at the rear end of a spacecraft; each explosion would push the craft forward against a plate. In this way, a spacecraft of about 100,000 tons could reach alpha Centauri in about 100 years. Later studies (like Project Daedalus) envisaged nuclear fusion of small pellets in a reaction chamber. Again, the design called for a spacecraft on the order of 50,000 tons. Since the helium-3 required for the fusion pellets is very scarce on earth, it would have to be mined from Jupiter.

If you think these designs sound crazy, you are not alone. Launching such a massive spaceship, filled to the brim with nuclear bombs, from the surface of the earth is probably not going to happen. And constructing these gigantic ships in space, even though safer, would probably require resources beyond what seems feasible.

All of these nuclear-powered designs are really large spaceships, because they have to carry along a large amount of fuel. The scientific payload would be only a very small fraction of the total mass.

Carrying along the fuel is obviously a nuisance, since a lot of energy is used up for accelerating the fuel and not the payload. This can be avoided in schemes where the power is generated on the home planet and “beamed” to the spaceship. That is the concept behind light sails.

Light sails

One of the less obvious properties of light is that it can exert forces, so-called radiation forces. These forces are very feeble. For example, direct sunlight hitting a human body generates a push that is equivalent to the weight of a few grains of sand. That is why you will notice the heat and the brightness, but not the force. Nevertheless, the force can be made larger by increasing the surface area or by increasing the light intensity. And that is the concept behind light sails: unfold a large reflecting sail and wait for the radiation pressure force to accelerate it. Even though the accelerations are still modest, they are good enough if you can afford to be patient. A constant small acceleration acting continuously over hundreds of days can bring you to considerable speeds.
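
Here is a quick check of the "grains of sand" claim (a rough sketch with assumed round numbers for the sunlit area of a body, and taking the light to be absorbed rather than reflected):

```python
# Radiation force of direct sunlight on a human body, compared to small weights.
solar_intensity = 1000.0   # W/m^2, direct sunlight at the ground (round number)
body_area = 0.7            # m^2 of sunlit cross-section (assumed)
c = 3.0e8                  # m/s

force = solar_intensity * body_area / c      # absorbed light; a perfect mirror would feel twice this
equivalent_mass_mg = force / 9.81 * 1e6      # mass whose weight equals this force, in milligrams
print(f"Force: about {force * 1e6:.1f} micronewtons, "
      f"the weight of roughly {equivalent_mass_mg:.2f} mg (a grain or two of fine sand)")
```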

The first proposals for light sails in space seem to have originated from the space flight pioneers Konstantin Tsiolkovsky and Friedrich Zander during the 1920s. The radiation pressure force had been predicted theoretically in the 19th century by James Clerk Maxwell, starting from his equations of electromagnetism, although very early speculations in this direction date back even to Johannes Kepler around 1610. The force had been demonstrated experimentally around 1900 by Lebedev in Moscow and by Nichols and Hull at Dartmouth College in the U.S.

For voyages within the solar system, one could use the light emanating from the Sun. In that case, the craft would be termed a solar sail.

First demonstrations of solar sails

The first attempts to demonstrate solar sails failed, but the failures were not related to the sails themselves. In 2005, the Cosmos 1 mission was launched by the Planetary Society, with additional funding from Cosmos Studios. It was launched aboard a converted intercontinental ballistic missile from a Russian submarine in the Barents Sea. Unfortunately, the rocket failed and the mission was lost. The same fate befell NanoSail-D, which was launched by NASA in 2008 but again was lost due to rocket failure.

In 2010, the Japanese space agency JAXA demonstrated the first solar sail that also uses solar panels to power onboard systems. This successful project is named IKAROS. It demonstrated propulsion by the radiation pressure force, and after half a year it passed by Venus, taking some pictures. IKAROS is still sailing on. The square-shaped IKAROS sail measures 20 m along the diagonal, and it is made of a very thin plastic membrane, about 10 times thinner than a human hair. The sail is stabilized by spinning: the centrifugal force pulls the sail outward from the center, so it does not crumple.

The overall radiation pressure force on IKAROS is still tiny: only about a millinewton, which (at a mass of 315 kg) translates into an acceleration more than a million times smaller than the gravitational acceleration "g" on Earth. Nevertheless, in 100 days, such a tiny acceleration would already propel the craft over a distance of about 100,000 km. It should be noted that the motion of IKAROS towards Venus was due to the initial velocity given to the craft, not the radiation pressure force (which, as this example demonstrates, would have been far too small to reach Venus within half a year).
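
A quick check of these IKAROS numbers (a rough sketch, taking the force to be exactly one millinewton and the craft to start from rest):

```python
# Acceleration of IKAROS from radiation pressure, and the distance gained in 100 days.
force = 1.0e-3     # N, about a millinewton
mass = 315.0       # kg
g = 9.81           # m/s^2

acceleration = force / mass
print(f"Acceleration: {acceleration:.1e} m/s^2, about {g / acceleration:,.0f} times smaller than g")

t = 100 * 24 * 3600.0                  # 100 days in seconds
distance = 0.5 * acceleration * t**2   # starting from rest
print(f"Distance gained in 100 days: about {distance / 1e3:,.0f} km")
```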

Artist’s depiction of the Japanese IKAROS solar sail (Artist: Andrzej Mirecki, on Wikimedia Commons, from the IKAROS Wikipedia page)

NASA successfully flew a smaller mission, NanoSail-D2, at the end of 2010. The Planetary Society is currently building LightSail-1 as a more advanced successor to Cosmos 1.

Laser sails

For interstellar travel, however, the sunlight quickly becomes too dim as the sail recedes from the sun. In that case one needs to focus light onto the sail, such that the radiation power received by the sail does not diminish as it moves away. This could be done either via gigantic mirrors focussing a beam of sunlight, or by a large array of lasers. Laser sails were analyzed in the 1980s by the physicist and science-fiction writer Robert L. Forward, and subsequently by others.

What are the challenges faced by laser sails?

In brief: in order to get a sufficient acceleration, one wants a large beam power and a very thin, lightweight material. However, the beam will tend to heat the sail, so the material should be able to withstand high temperatures. In addition, the spacecraft will fly through the dust and gas of interstellar space, which rips holes in the sail and heats the material further.

In the following, we are going to go through the most important points, illustrating them with estimates.

The power

Since the radiation pressure force is so feeble, a lot of light power is needed. Of course, all of this depends on the mass that has to be accelerated. Suppose for the moment a very modest mass, of only 100 kg.

In addition, the power needed will depend on the acceleration we aim for.

How large an acceleration would we need for a successful decades-long trip to alpha Centauri? It turns out that the standard gravitational acceleration on earth (1 g) would be more than enough: if a spacecraft is accelerated at 1 g for about 35 days, it will already have reached 10% of the speed of light. Since the whole mission takes a few decades, we can easily be more modest and require only, say, 10% of g. Then it would take about a year to reach 10% of the speed of light. That acceleration amounts to increasing the speed by 1 meter per second every second.

So here is the question: how much light power do you need to accelerate 100 kg at a rate of 1 meter per second every second?

The number turns out to be: 15 gigawatts!

And that is assuming the optimal situation, where the light gets completely reflected, so as to provide the maximum force.

How large is 15 gigawatts? This amounts to the total electric power consumption of a country like Sweden (see the Wikipedia article on power consumption).

Still, there is some leeway here: we can also make do with an acceleration phase that lasts a decade, at one percent of g. Then the power is reduced to a tenth, i.e. 1.5 gigawatts. This is roughly the power provided by a nuclear power plant, or by direct sunlight hitting an area of slightly more than a square kilometer (if all of that power could be used).
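
To see where the 15 and 1.5 gigawatt figures come from, here is a minimal sketch of the estimate. It assumes a perfectly reflecting sail (force = 2 × power / c) and, for the moment, ignores the mass of the sail itself; the 100 kg payload and the two acceleration values are the ones used above.

```python
# Light power needed to accelerate a payload via radiation pressure (perfect reflection assumed).
# For a perfectly reflecting sail, force = 2 * power / c, so power = force * c / 2.
c = 3.0e8       # m/s
mass = 100.0    # kg, the modest payload assumed above

for accel, label in [(1.0, "0.1 g (about 1 m/s^2)"), (0.1, "0.01 g")]:
    force = mass * accel
    power = force * c / 2.0
    years_to_tenth_c = 0.1 * c / accel / (3600 * 24 * 365.25)
    print(f"{label}: power {power / 1e9:.1f} GW, reaching 10% of c in about {years_to_tenth_c:.1f} years")
```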

The area and the mass

How large would one want to make the sail? In principle, it could be quite small, if all that light power were focussed on a small area.

However, as we will see, the heating of the structure is a serious concern, and so it is better to dilute the light power over a larger area. As a reasonable approach, let's assume that the light intensity (power per area) should be like that of direct sunlight hitting the earth, i.e. 1 kilowatt per square meter. In that case, the 1.5 gigawatts would have to be distributed over an area of somewhat more than a square kilometer. So the sail would be roughly a kilometer on each side.

The area is important, since it also determines the total mass of the sail. In order to figure out the mass, we also need to know the thickness and the density of the sail. The current solar sails mentioned above each have a thickness of a few micrometers (millionths of a meter), thinner than a human hair.

Even if we just assume 1 micrometer thickness, a square kilometer sail would already have a total mass of 1000 kg (at the density of water). This shows that our modest payload mass of 100 kg is no longer relevant. It is rather the sail mass itself that needs to be accelerated.

Once the sail mass is larger than the payload mass, we should rather ask what acceleration a given light intensity provides (e.g. 1 kW per square meter, as assumed above). If the light intensity is fixed, the acceleration becomes independent of the total sail area: doubling the area doubles the force, but also the mass.
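
Here is a small sketch of that point, using the numbers assumed above (1 kW per square meter, a 1-micrometer sail at the density of water, perfect reflection, payload neglected). The area drops out of the acceleration; with these particular assumptions the acceleration comes out at a small fraction of a percent of g, so reaching higher accelerations requires a more intense beam or a lighter (thinner or less dense) sail.

```python
# At fixed intensity, the sail area drops out: force and mass both scale with area.
# Numbers below are the ones assumed in the text (perfect reflection, payload neglected).
c = 3.0e8            # m/s
intensity = 1000.0   # W/m^2, direct-sunlight level
thickness = 1e-6     # m, a 1-micrometer sail
density = 1000.0     # kg/m^3, roughly the density of water

areal_density = density * thickness                  # kg per square meter of sail
acceleration = 2 * intensity / (c * areal_density)   # independent of the total area
print(f"Sail of 1 km^2: mass {areal_density * 1e6:.0f} kg, "
      f"acceleration {acceleration * 1e3:.1f} mm/s^2 ({acceleration / 9.81 * 100:.2f}% of g)")
```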

Typical proposals for laser-sail missions to reach 10% of the speed of light assume sail areas of a few square kilometers, total masses on the order of a ton (sail and payload), total power in the gigawatt range, and accelerations on the order of a few percent of g.

Focussing the beam

The light beam, originating from our own solar system, has to be focussed onto a sail a few kilometers across, over a distance measured in light years. Basic laws of wave optics dictate that any light beam, even one produced by a laser, will spread as it propagates (diffraction). To keep this spread as small as possible, the beam has to be focussed by a large lens or produced by a large array of lasers. Estimates show that one would need a lens measuring thousands of kilometers to focus the beam over a distance of a light year! Thus, any such system would have to fly in space, which again makes power production more difficult.

The requirements on the size of the lens can be relaxed a bit by having the acceleration operate only during a smaller fraction of the trip. However, even if the beam is switched on only during the first 0.1 light years of travel (as opposed to the full 4 light years), a thousand kilometers is still the order of magnitude required for the size of the lens.
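
A rough sketch of the lens estimate (order-of-magnitude only; factors like 1.22 are dropped). It assumes near-infrared laser light and the roughly kilometer-sized sail from the estimate above.

```python
# Diffraction sets the transmitter size: the spot at distance L is roughly
# wavelength * L / D, so keeping it within the sail diameter d needs D ~ wavelength * L / d.
wavelength = 1e-6       # m, near-infrared laser light (assumed)
sail_diameter = 1e3     # m, the roughly kilometer-sized sail estimated above
light_year = 9.46e15    # m

for distance_ly in (0.1, 1.0):
    D = wavelength * distance_ly * light_year / sail_diameter
    print(f"Beam kept focused out to {distance_ly} light years: lens/array of ~{D / 1e3:,.0f} km")
```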

The heat

Suppose that power generation were no problem. Suppose you could have cheap access to a power source equivalent to the power consumption of a country like Germany (60 GW) or the US (400 GW). What would be the limit to the acceleration you can achieve? It turns out that powers of this magnitude would not even be needed, since at some point it is not the total power that provides the limiting factor.

The problem is that once you fix the material density and the thickness, you can increase the acceleration only by increasing the intensity, i.e. the light power impinging on a square meter. However, at least a small fraction of that power will be absorbed, and it will heat up the sail. The problem is familiar to anyone who has left their car in direct sunlight, which makes the metal surface very hot. In space, an equilibrium is established between the power being absorbed and the power being re-radiated from the sail as thermal radiation. Typical materials considered for light sails, like aluminum, have melting points on the order of several hundred to a thousand degrees centigrade.

Ideally, the material would reflect most of the light it receives from the beam, absorbing very little. The little heat it does absorb should be re-radiated very efficiently at other wavelengths. Tailoring the optical properties of a sail in this way is possible in principle. However, it usually also means the thickness of the sail has to be increased, e.g. to incorporate different layers of material with a thickness matching the wavelength of light (leading to sails of some micrometers thickness).
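
To see how the heating constrains the intensity, here is a minimal sketch of the thermal balance. The absorptivity and emissivity values are made-up illustrative numbers; only their ratio and the beam intensity matter for the equilibrium temperature.

```python
# Equilibrium temperature of the sail: absorbed beam power = thermal re-radiation.
# Per square meter: absorptivity * intensity = 2 * emissivity * sigma * T^4
# (factor 2 because the sail radiates from both faces). Coefficients are assumed.
sigma = 5.67e-8       # W m^-2 K^-4, Stefan-Boltzmann constant
absorptivity = 0.1    # fraction of the beam power absorbed (assumed)
emissivity = 0.1      # effective thermal emissivity (assumed)

for intensity in (1e3, 1e4, 1e5):   # W/m^2: sunlight level, 10x, 100x
    T = (absorptivity * intensity / (2 * emissivity * sigma)) ** 0.25
    print(f"Intensity {intensity:>9,.0f} W/m^2 -> equilibrium temperature ~{T:,.0f} K")
# With these coefficients, 100x sunlight already approaches the melting point of aluminum (~930 K).
```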

In some of the current proposals of laser sails for interstellar travel, it is this heating effect that limits the admissible light intensity and therefore the acceleration.

Several different materials are being considered, among them dielectrics (rather thick, but with very good reflectivity and little absorption) and metals like aluminum. In addition, one may replace optical light beams by microwave beams, which can be generated more efficiently and reflected by a thin mesh. The downside of using microwaves is that their wavelength is ten thousand times larger, so the size of the lens grows correspondingly.

The dust

As the sail flies through space, it will encounter gas atoms and dust particles. Admittedly, matter in interstellar space is spread very thin (that is why there is almost no friction to begin with!). Nevertheless, the interstellar medium is not completely devoid of matter. Somewhat fortunately for light sails, our sun (and its nearest stars) sits inside a low-density region, the so-called "Local Bubble". In the few light years around our sun (in the Local Interstellar Cloud), the density of hydrogen atoms is about 1 atom per ten cubic centimeters, vastly smaller than the density of air. That is more than a thousand times fewer atoms per cubic centimeter than in the best man-made vacuum. In addition, there are grains of dust, with sizes on the order of a micrometer.

It is quite simple to estimate how many atoms will hit the surface of the sail: just take a single atom on the sail's surface. As the sail flies through space, this atom will run into some of the hydrogen atoms of the interstellar medium. How many? That depends on the length of the trip (a few light years) and the density of hydrogen atoms. All told, for a typical atomic radius of 1 Angstrom (0.1 nanometers), our surface atom will encounter about 100 hydrogen atoms. That means, roughly: if all of those atoms were to stick to the surface, they would pile up 100 layers thick, which would be on the order of 10 nanometers.
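
Here is the same estimate spelled out (a rough sketch; it treats the surface atom as a disk of 1 Angstrom radius and uses the Local Interstellar Cloud density quoted above).

```python
import math

# Number of interstellar hydrogen atoms that each atom on the sail's surface runs into.
n_hydrogen = 0.1              # atoms per cm^3 (about 1 atom per ten cubic centimeters)
trip_length = 4.0 * 9.46e17   # cm, roughly 4 light years
atom_radius = 1e-8            # cm, about 1 Angstrom

column = n_hydrogen * trip_length            # atoms swept up per cm^2 of sail
cross_section = math.pi * atom_radius**2     # effective area of a single surface atom
hits_per_surface_atom = column * cross_section
print(f"Column density: {column:.1e} atoms/cm^2, about {hits_per_surface_atom:.0f} hits per surface atom")
```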

That in itself does not sound dramatic. The crucial point becomes clear only when one takes into account the speed at which the hydrogen atoms and other particles are bombarding the sail: 10 percent of the speed of light, since the sail is zipping through space at that speed! Being hit by a shower of projectiles traveling at 10 percent of the speed of light does not bode well for the integrity of the sail.

It turns out that the speed itself may actually help. This is because an atom zipping by at 10% of the speed of light has only very little time to interact with the atoms in the sail. For two atoms colliding at this speed, it is better not to view an atom as a solid, albeit fuzzy, object of about 1 Angstrom radius. Rather, each atom consists of a point-like nucleus and a few point-like electrons. When two such atoms zip through each other, it is very unlikely that any of those particles (electrons and nuclei) come very close to each other. They will exert Coulomb forces (on the order of a nanonewton), but since they are flying by so fast, the forces do not have much time to transfer energy. In this regard, faster atoms really do less damage. Nevertheless, there is some energy transfer, and the biggest part of it is due to incoming interstellar atoms kicking the electrons inside the sail. In a quick-and-dirty estimate (based on some data for proton bombardment of silicon targets), the typical numbers here are tens or hundreds of keV of energy transferred to a sail of 1 micrometer thickness during one passage of an interstellar atom. This produces heating (and ionization, and some X-ray radiation).

Powerful computer simulations are nowadays being used to study such processes in detail (see a 2012 Lawrence Livermore Lab study on energetic protons traveling through aluminum).

You can find a general (non-technical) discussion of this crucial problem for laser sails on the "Centauri Dreams" blog. The overall conclusion there seems to be optimistic, but the story also does not seem to be completely settled.

The problem could be reduced somewhat by having the acceleration operate only for a shorter time, after which the sail is no longer needed. However, then the acceleration needs to be higher, with correspondingly larger intensity and worse heating issues. In any case, the scientific payload needs to be protected all the way, even if the sail could be jettisoned early.

The flyby

Once the space probe has reached the target system, things happen very fast. At 10% of the speed of light, the probe would cover the distance between the Earth and the Sun in a mere 80 minutes, and the distance between the Earth and the Moon in about ten seconds. That means there is precious little time to take the pictures and do all the measurements for which one has been waiting several decades. Presumably the probe would first snap pictures and then take its time to radio the results back to Earth, where they would arrive 4 years later.

While the probe is flying through the target star system, it is also in much greater danger of running into dust grains and gas atoms, and the scientific instruments need to be protected against that. If the probe were to hit the planet, that would be catastrophic: even a 1000 kg probe traveling at 10% of the speed of light would set free the energy of about ten large hydrogen bombs.
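
Two quick checks of the numbers in the last two paragraphs (a rough sketch; the "hydrogen bomb" comparison assumes very large, 10-megaton-class devices):

```python
# Flyby timescales at 10% of the speed of light, and the probe's kinetic energy.
c = 3.0e8                      # m/s
v = 0.1 * c
astronomical_unit = 1.496e11   # m, Earth-Sun distance
earth_moon = 3.84e8            # m, Earth-Moon distance

print(f"Earth-Sun distance crossed in about {astronomical_unit / v / 60:.0f} minutes")
print(f"Earth-Moon distance crossed in about {earth_moon / v:.0f} seconds")

mass = 1000.0             # kg probe
kinetic_energy = 0.5 * mass * v**2
megaton_tnt = 4.184e15    # joules per megaton of TNT
print(f"Kinetic energy: {kinetic_energy:.1e} J, about {kinetic_energy / megaton_tnt:.0f} megatons of TNT")
```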

Robert Forward has proposed an ingenious way to actually slow down the probe: it involves two sails, one of which is jettisoned and afterwards serves as a freely floating reflector, sending the light beam back onto the side of the remaining sail that faces away from the Earth. This approach, however, requires even larger sails and far greater resources.

Conclusion

Interstellar travel is really, really hard if you are not very patient. However, with laser sails it is no longer a purely outlandish fantasy. In addition, concrete steps towards testing the concepts are being taken right now, with modest solar sails deployed or planned by the Japanese space agency JAXA, by NASA, and by the Planetary Society.

Further information

Historical photos and videos at CERN

Here’s a great resource for anyone interested in the history of physics, especially quantum physics and particle physics. At the CERN document server,

http://cdsweb.cern.ch

you can search for keywords in their large collection of freely accessible photos and videos. Just look into the column where it says “Narrow by collection” and then click on Videos or Photos in the “Multimedia and Outreach” section, before entering your search keyword.

As an example of what you can find there, here's a beautiful 1985 TV documentary on the Einstein-Podolsky-Rosen paradox of fundamental quantum physics, featuring interviews with the likes of John Bell (who came up with inequalities to distinguish between Bohr's and Einstein's views on the matter) and Alain Aspect (who used these inequalities to decide the question experimentally):

http://cdsweb.cern.ch/record/1064498?ln=en

Community Peer Review (a great new idea)

I have just come across a great site. It is for commenting on scientific papers that appear on the arXiv preprint server.

http://communitypeerreview.blogspot.de/

This is something I have been discussing in general terms with many colleagues over the past few months. It would be really great (if it takes off) to combine the collective knowledge of all the readers of a paper. It is like collecting all the comments that are made at conferences or over lunch. Of course, if the arXiv were to incorporate this functionality, it would really take off. This is a nonlinear process: the more comments are generated, the more will follow. Since all scientists in the relevant fields have arXiv accounts under their full names, flame wars and polemics can largely be avoided. The discussion would at least be at the level of the typical question-and-answer session after a conference talk. And many little questions (and confusion created by definitions, typos, etc.) could be addressed by the authors right away.

Dreaming ahead, in the long run this might even turn into the prevalent model of publishing (without the need for extra peer-reviewed journals). Comments would take the place of peer reviews. To this end, one could also allow anonymous comments, which would be sent to other registered users for moderation (to prevent polemics and flame wars). And then there is the whole issue of ranking the level of experience of commenters (as on physics.stackexchange.com).

Virtually all of the colleagues I've talked to agree that they would much prefer their paper being refereed and ranked by many of their knowledgeable peers from the community, rather than by two somewhat randomly picked referees (I know it's pretty hard for editors to pick good referees, there are so many topics; I am an editor myself). What makes matters worse currently is that the referee decision, with all its random factors, is ultimately a binary black/white decision that determines whether your paper gets much attention (as in Nature or Physical Review Letters) or much less attention if it is rejected there.

Here are some additional thoughts that came up in discussions with colleagues and upon further reflection:

  • You could choose a subset of your colleagues, and then display the arXiv papers that they have commented on (or rated favourably). In the long term, this could effectively generate community-run topical journals. These would even be generated dynamically and tailored exactly to your needs, since it is you who selects the relevant subset of colleagues. In this way these colleagues become something like editors of a journal addressed to you (and you, in the same way, are effectively selecting articles for others).
  • A simple "like/dislike" or plain rating system would probably be rather counterproductive and likely subject to manipulation. But if the system is more fine-grained, both in terms of rating various aspects of the paper and in terms of keeping track of who did the rating (and how competent they are judged to be by others!), it could become useful. Imagine you find that a certain paper is rated very favourably by the 'general, uneducated public' (those distant to the topic at hand), but very unfavourably by the experts. That would tell you something.
  • Unlike the journals, whose publication decision tries to rank the paper immediately, the comment system could actually push a paper to the forefront even after some years. That would be when at least a few experts realize it is important (or some experiments confirm the predictions) and this is reflected in a few high-profile favourable comments (by the experts), which may trigger further interest and discussion. This is certainly much more reliable than the initial decision by a journal.
  • I have heard the following argument in favour of judging a paper in the traditional peer-review/selective-journal style: it gives unknown authors at least a better chance to garner some attention. However, it is fair to say that even the traditional peer-review system has some bias towards the more established authors. Speaking in my role as a referee, I have to confess that if I am skeptical about some part of a paper, I may be more inclined to accept it if it comes from a well-known group with a good track record. So I do not think that this argument really speaks against community peer review. On the contrary, one might hope that if enough interest is generated in the course of a discussion, the more established experts will take a look (to see what the fuss is all about), and if they then come to a positive conclusion and post it publicly, this will boost the paper very much.

Some further links:

Paul Ginsparg’s discussion from 1997 about the role of the arXiv

Paul Ginsparg’s discussion from 2002

Paul Ginsparg’s most recent discussion of the arXiv, from 2011

A more recent discussion on another blog: http://physicsnapkins.wordpress.com/2012/01/16/occupy_scientific_journals/

Summer break

I am off for vacations, and then a workshop in South Korea. In the meantime, for your enjoyment I offer a picture of fluctuating fields. I think “Primordial” might be a good title. Fluctuating fields play some role in a significant part of what my group and I are working on, so someday I will come back to this topic more seriously.

The most important ideas in physics

Here is just a brief list of what I believe to be the most important general ideas in physics (not specific theories, but concepts). For many of those ideas, it took centuries to realize how important they are. Usually the reason is that they are less important in the macroscopic world, i.e. in the phenomena that we observe around us in everyday life.

  • Atoms and particles: Matter is made out of small things that move around, are stable (usually) and do not suddenly jump from here to there. This concept (of stuff made out of particles) seems plausible when looking at grains of sand, but less plausible when looking at water or air which seem perfectly continuous.

  • Everything is moving: On the microscopic scale, things are never at rest and are moving around all the time (either due to thermal or quantum fluctuations). This concept runs contrary to everyday experience, where things tend to stop moving due to friction.

  • Conservation laws: You cannot just create motion out of nothing or stop it completely without converting its energy to some other form. Energy, charge, momentum etc. are conserved under appropriate circumstances. Again, due to friction, it is hard to be accustomed to energy conservation from observing everyday life. On the other hand, one can at least see that energy is never created out of nothing.

  • Oscillations and resonances: When things are arranged in a stable configuration, they have the tendency to oscillate around that configuration at some specific frequencies. The vibrations of a guitar string, of a molecule, or of the electron cloud in an atom are just a few of many examples. Everyday life offers examples like a pendulum clock or musical instruments, but they do not fully reveal just how crucially important this concept is on the microscopic scale.

  • Wave fields and interference: Oscillations can propagate through space, in which case they are called waves. Water waves, matter waves, electromagnetic waves, sound waves and others are all described by similar mathematical equations. The most commonly observed everyday example is water waves, but again, waves are not nearly as prevalent in everyday life as in the microscopic world. Waves show the most important phenomenon of interference whenever they overlap (as seen in the picture).

  • Periodic structures: Atoms arrange into crystals (periodic in space), oscillations are periodic in time, and wave fields can have plane waves that are periodic both in space and time. Periodic structures do exist in the macroscopic world (caterpillars, sunflowers, etc.), but most of what we observe around us is not really periodic.

  • Symmetry and spontaneous symmetry breaking: When all points in space are equal or all directions are equally good, this has important consequences for the resulting motion (or the structures that form). Sometimes however, a structure may form that is not as symmetric as it could be. A ferromagnet picks a certain direction in space, which is then called spontaneous symmetry breaking. We are well used to symmetry in nature. It is harder to understand the concept of spontaneous symmetry breaking because often symmetry is broken just externally: the external forces (like gravity pointing downward) may spoil the perfect symmetry of the situation.

  • Time evolution and causality: The present state of the world determines what happens next. If we were given complete information, we could calculate what will happen in the next short time interval, and then the interval after that, and so on to infinity. In classical physics, this led to the concept of a “clockwork” universe where in principle everything could be predicted precisely if we were just given the current positions and velocities of all the particles. Nowadays we know that due to chaotic motion, this notion will fail even in classical physics unless we know everything with infinite precision. In quantum mechanics, the type of information we need and the predictions we can make are different, with only statistical predictions possible in principle. Still, mathematically the concept is always the same, and encoded into time-evolution equations (whether classical or quantum). This also means in general: causes come before effects. This concept (of causality, and time evolution depending on the present state) seems very natural given our everyday experiences.

  • “Actio equals reactio” and “no instantaneous action at a distance”: Effects produce a counter-effect, so the particle doing the pushing will also be pushed back. At the same time, no influence between two particles can act instantaneously. If one of the particles starts to move now, the other particle will not feel an effect (or a change in the effect) immediately. Rather, the effect is felt at the very earliest after the time has passed that light needs to travel between the particles. This is because forces between particles are produced not directly, but via wave fields, and no wave field in nature has waves traveling faster than light. However, since the speed of light is so large, in everyday life there seems to be instantaneous action at a distance (e.g. magnetic or gravitational forces). As a side effect of forces being transmitted via wave fields, "actio equals reactio" first of all applies to the interaction between the particles and the fields. Sometimes the perturbation produced in the field by a particle does not even reach another particle, but is radiated away as a wave.

  • Nature tends to an optimum: In many situations, structures form that minimize some value, for example the energy. Even the trajectories that particles follow, or the path of a light beam, can be understood as optimizing a certain mathematical function. The closest one comes to observing this in everyday life is when one sees all objects falling down, tending towards the minimum energy in the gravitational field.

I know this selection (and the grouping of topics) is subjective. What do you think are the most important general concepts in our description of nature?

Down with censorship!

I just realized that WordPress is blocked in China, like so many other websites. See

http://en.wikipedia.org/wiki/List_of_websites_blocked_in_the_People%27s_Republic_of_China

This is a shame. A government that does things like that cannot expect to see eye to eye with the rest of the world, no matter how mighty the state has become militarily, economically and politically. It’s a shame that the Chinese people still have to endure this regime. I know quite a number of highly intelligent and well-liked Chinese scientists, and they and their compatriots definitely deserve better! Let’s all hope that they manage a peaceful transition to a real democracy sometime in the coming decades.

By the way, here is a little anecdote regarding censorship: a few years ago I was at a workshop in Uzbekistan, in the city of Tashkent. At the end of the workshop, there was a dinner in the conference hotel. As the evening progressed, local custom had it that the visitors would offer toasts. Here is the toast that the well-known British physicist Sir Michael Berry dared to make, in the presence of the former Uzbek Minister of Science and Technology: "I have noticed that the website of the BBC is blocked. So, here is to the harmonious reconciliation between the government of Uzbekistan and the British Broadcasting Corporation!" (or words to that effect)

(For the physicists, that’s the Berry of the “Berry phase”)

The Higgs boson, explained simply (the real story)

Everyone has been talking about the Higgs boson this past week due to CERN’s sort-of discovery announcement. People keep asking me what this is all about.

So here is my take on the Higgs boson story. Actually, I will try to recount why we need the idea behind the Higgs boson to describe nature. And I will try to do that in a way that doesn't just appeal to the usual caricature descriptions but gives you the real idea.

Amazingly, the real story is not that difficult.

The story is not that difficult, because there is a more down-to-earth example where the same physical idea applies. This is the propagation of radio waves through a plasma.

At the beginning of the 20th century, wireless communication by radio waves was introduced, with the first transatlantic radio signal sent by Marconi in 1901 between Cornwall in England and Newfoundland on the North American side. Since the earth is curved, there is no direct line-of-sight connection between places that far apart, so one may wonder how the radio waves get from here to there. However, it was realized in those times that radio waves can bounce off a layer high up in the atmosphere. This layer is called the ionosphere, and it extends from about 85 km up to about 600 km in altitude.

In the ionosphere, energetic ultraviolet radiation from the sun constantly knocks electrons out of the atoms and molecules. As the negatively charged electron escapes, it leaves behind a positively charged atom or molecule, called an ion. The ionosphere thus consists of many negatively and positively charged particles (negative electrons and positive ions), besides some remaining neutral particles. A gas of charged particles is called a plasma.

Now a plasma has a very important effect on the propagation of electromagnetic waves, as the radio enthusiasts found out early in the 20th century.

Here is what happens in a plasma, and this will be the mechanism that is at the heart of the ideas behind the Higgs.

When an electromagnetic wave (such as a radio wave) enters a plasma, its oscillating electric field sets into motion the electrons and ions. Each of those charged particles is accelerated by the electric field inside the wave and starts to oscillate as well. Any oscillating charged particle also creates an electromagnetic wave. This new wave adds to the original wave. In other words, the electromagnetic wave traveling inside the plasma now has become a new combination of oscillating electric and magnetic fields and oscillating charges. Not surprisingly, this wave has new properties, different from its properties in free space, outside the plasma.

The most important property of any wave is the relation between wavelength and frequency. From this, many important features such as the speed of the wave can be deduced. All the different types of waves (e.g. sound or water waves, electromagnetic waves or matter waves) have their own distinctive relation between wavelength and frequency.

Now for the waves in a plasma, this relation can be calculated, and it looks like this. This graph is the most important part of the story, so we will have a close look.

Frequency versus (inverse) wavelength for waves inside a plasma

The graph shows the frequency (oscillations per second, measured in Hertz) as a function of the wavelength, for electromagnetic waves inside a plasma (the red curve). Actually, we have chosen to plot the frequency against the inverse of the wavelength, so long waves are to the left (large values of the wavelength "lambda", small values of 1/lambda) and short waves are to the right. This is the way physicists like to plot these relations, and in particular the high-energy physicists dealing with the Higgs and other particles would always plot it as shown here.

Now there is obviously an important difference between the waves in free space (dotted line) and the waves inside the plasma (red curve). At long wavelengths, the waves in free space have low frequencies. In fact, for those free-space waves the frequency goes to zero as the wavelength becomes larger and larger, according to the relation “frequency = (speed of the waves)/wavelength”. In contrast, the waves inside the plasma always have a minimum frequency, even when their wavelength becomes very large. This frequency is called the “plasma frequency”. That is the most important difference between electromagnetic waves inside the plasma and those in free space.
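
For those who like formulas: the red curve is the standard textbook relation for electromagnetic waves in a plasma, which (written in the same style as the energy formulas further below) reads

(frequency) squared = (plasma frequency) squared + (speed of light divided by wavelength) squared

Setting the plasma frequency to zero recovers the dotted free-space line, frequency = speed of light divided by wavelength.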

The plasma frequency depends on how many charged particles there are in the plasma. If the density of those charged particles is reduced, the plasma frequency will also shrink. In fact, this happens in the ionosphere, because the amount of ultraviolet radiation from the sun fluctuates, and so does the density of the free electrons and ions that are created by this radiation.

You might ask: what happens if a radio wave has a frequency below the plasma frequency? What will happen when this radio wave impinges on the ionosphere's plasma? The answer is simple: it is completely reflected! This is the phenomenon that radio operators use to bounce waves off the ionosphere. Actually, when the wave does not hit the ionospheric layer head-on but at an angle, it can be totally reflected even when its frequency is higher than the plasma frequency. For the ionosphere, the plasma frequency is around 10 million oscillations per second, i.e. 10 Mega-Hertz.

Now we come back to the Higgs mechanism.

In the 1960s, it was realized that there was a huge problem in the fundamental theory of particle physics (what has since developed into the so-called "Standard Model"). The mathematics at that stage seemed to say that, in addition to the electromagnetic (and gravitational) interactions, there should be other long-range forces. Of course, no one had observed those forces, so it was clear that something was missing in the mathematics.

In fact, people knew quite precisely what was missing. In order to describe the weak forces (responsible for beta decay and other processes), theorists had been required to introduce new wave fields. However, initially it seemed these fields were just like electromagnetic waves in free space, and would correspondingly give rise to long-range forces similar to the Coulomb force. Unfortunately, this completely contradicted the experimental data, which showed that these forces are all short-range. More precisely, the known experimental facts implied that the relation between frequency and wavelength for these new wave fields should not look like that of free electromagnetic waves. Rather, it should look exactly like the graph for the waves inside the plasma, shown above!

Initially, it was not at all clear how this could be brought about without breaking other important mathematical features of the theory. The theorists were stuck with the mathematics and did not yet think of useful physical analogies (like the plasma). However, in 1963 Phil Anderson, a solid-state physicist, pointed out that there is an example where exactly the right thing happens. His example was superconductors, i.e. metals that at low temperatures conduct electricity without resistance. There, the electrons can be freely accelerated as in a plasma, and electromagnetic waves are influenced in exactly the way that would be required. I have chosen to tell the story in terms of a plasma, because it is nearer to everyday applications than a superconductor (at least for radio communications), but on the elementary level needed here, it is really the same physics and the same mathematics.

Anderson's suggestion (following an earlier hint by Schwinger) was still not a relativistic field theory, as required by particle physicists. That last, important step of building a working relativistic model was then taken the following year in three independent scientific articles: one by Brout and Englert, one by Peter Higgs, and a third by Guralnik, Hagen, and Kibble. These publications showed that if one introduces a suitable new field, which is analogous to the plasma (or the superconductor), and couples this new field to the wave fields of the weak force, then everything works out fine. The wave fields change their properties in exactly the same way as the electromagnetic waves change their properties when they enter a plasma. As a consequence, the wave fields no longer produce long-range forces, but acquire exactly the properties that are observed in nature.

So this is the story of the “Higgs mechanism”. In his 1964 article, Peter Higgs then additionally pointed out that, as a sort of side-product, the new field which is required for this mechanism shows some high-frequency oscillations. These high-frequency oscillations could be excited in suitable experiments, and in the language of particle physics they correspond to a new particle, now commonly called the “Higgs boson”.

So that was the story behind the Higgs boson, in terms of waves. Basically, we are finished here, and I hope you enjoyed the explanation. For those who want to learn even more, let me complete the story by pointing out what all of this has to do with mass.

At the beginning of the 20th century, two revolutionary theories were developed in physics: the theory of relativity, by Albert Einstein, and the theory of quantum mechanics, by people like Bohr, Einstein, Planck, Heisenberg, Schrödinger, and Dirac. Each of those two theories makes an elementary but surprising new statement about the meaning of energy. Quantum mechanics tells us that frequency is connected to energy via the relation

Energy = (Planck’s constant) times frequency

Here Planck’s constant is a fixed number, one of the fundamental constants of nature. For example, this means that electromagnetic waves of a certain frequency can only contain energy that is a multiple of this value. As it were, energy comes in small packets, of the size given by the formula. In the case of the electromagnetic field, these energy packets are called photons, and there are analogous names for other wave fields.

Now switch to relativity. There, we have another interesting relation for the energy, which is so well-known that you find it on T-shirts:

Energy = Mass times (velocity of light, squared)

This means that any particle of a given mass contains a (huge) amount of energy. However, we can also read this the other way around. If we have a packet of energy, it corresponds to a certain amount of mass.

Now, when we combine these equations, it becomes plausible that waves of a given minimum frequency also have a certain minimum energy, and this can be identified as a mass. In this sense, people would say that waves with a minimum frequency like the electromagnetic waves in a plasma correspond to particles of a certain mass.

In fact, the relation can be made much more precise, and here is the (only slightly more mathematical) argument. According to quantum mechanics, the velocity of particles is connected to the (inverse) wavelength of their matter waves, so slow particles have very long wavelengths. If one calculates the energy as a function of velocity according to the theory of relativity, and then uses the theory of quantum mechanics to replace energy by frequency and velocity by wavelength, we arrive at a little surprise: The resulting relation between frequency and wavelength looks exactly like that for the waves inside the plasma, shown above! Moreover, the plasma frequency then corresponds directly to the mass of the particles, or more precisely

(Planck's constant) times plasma frequency = Mass times (velocity of light, squared)

We have now collected everything that is needed to understand why the Higgs mechanism has something to do with "mass". A high-energy physicist would say that electromagnetic waves propagating in a plasma have "acquired mass" via their interaction with the plasma. By this, the physicist simply means that the graph shown above looks as it does, i.e. in particular it shows a nonzero frequency (the plasma frequency) at long wavelengths. In the same way, the high-energy physicist would say that the wave fields of the weak force "acquire mass" via their interaction with the Higgs field.
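
As a rough numerical aside (my own illustration, not from the original post), one can plug the ionosphere's plasma frequency of about 10 MHz into the relation above to see just how tiny the "mass" acquired by radio waves in the ionosphere is:

```python
# Effective "mass" corresponding to the ionospheric plasma frequency (~10 MHz).
h = 6.626e-34              # J*s, Planck's constant
c = 3.0e8                  # m/s, speed of light
plasma_frequency = 1.0e7   # Hz, roughly the ionospheric value quoted above

effective_mass = h * plasma_frequency / c**2
electron_mass = 9.11e-31   # kg, for comparison
print(f"Effective mass: {effective_mass:.1e} kg, about {effective_mass / electron_mass:.0e} electron masses")
# A fantastically tiny mass; the W and Z bosons, by contrast, weigh in at around 80-90 GeV.
```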

As we said above, in quantum mechanics the energy inside every wave field comes in discrete packets. For different fields, those packets have different names. For the electromagnetic field, they are called photons. For the wave fields of the weak force, they are called W and Z bosons. So in particle language, the purpose of the Higgs field was to give mass to the W and Z bosons (and thereby make the corresponding forces short-range). The details of how the Higgs mechanism enters the Standard Model were worked out by Weinberg and Salam, building on the electroweak model of Glashow. One feature of that theory is that initially the photon and the W and Z bosons are all parts of the same wave fields, but after the Higgs mechanism kicks in, only the W and Z bosons acquire mass, while the photon remains massless. This is a good thing, because otherwise the Coulomb force would not be long-range, and the world as we know it would not exist.

Today the Higgs mechanism is also used to give mass to the quarks, electrons, etc. (via the very same Higgs field as introduced for the W and Z bosons). So at least in the Standard Model, there is only one Higgs field. Its high-frequency oscillation, known as the Higgs boson, now seems to have been found at CERN, and its energy is about 125 Giga-Electron-Volt. For an elementary particle, this is a lot, although not that much larger than the masses of the W and Z bosons, which were the initial motivation behind inventing the Higgs field.

On a final note, it should be pointed out that most of our mass is in the protons and neutrons of the atomic nuclei. Although these are composed of quarks, their mass is mostly due to the binding energy of the quarks and due to the fact that the quarks move around very fast inside the neutrons and protons. Even if the quark masses were zero, the neutron and proton would still have roughly their observed masses (on the order of 1 Giga-Electron-Volt). So although the Higgs field gives mass to the elementary particles, that is only a rather small fraction of the total mass of the planets and stars, because most of that mass is in the binding energies.

Two potential revolutions in medicine

In the past few months, I have read about two ideas that may revolutionize medicine in the coming decade. They both have to do with data and data analysis, on a large scale.

The first one goes by the name of “Watson”. That is the name of an IBM computer system which got famous recently for beating human contestants on the game show “Jeopardy”.

http://www-03.ibm.com/innovation/us/watson/index.html

The most impressive thing about "Watson" is that it can make sense of information written in plain language, with all the ambiguities that this entails. In order to succeed on Jeopardy, it had to analyze a vast trove of plain-text information, and then be able to understand the game's questions correctly and answer them by scouring all of its information.

Now here is where medicine comes in: While winning Jeopardy was a great public relations success for IBM, in the future they want to turn Watson into a kind of super-powerful omniscient medical assistant, among other uses. Watson will digest the medical literature and then be able to come up with diagnoses once you present it with symptoms.

I was listening to a talk about Watson by the head of that IBM unit recently (at the American Physical Society March Meeting 2012 in Boston). He recounted the story of a patient who had been saved only at the last minute, because the patient's symptoms had puzzled the doctors and were actually due to a very rare condition that only one (human) expert finally interpreted correctly. Upon feeding these very same symptoms into Watson, the computer came up with a list of possible causes, and the correct diagnosis was among the top few options that Watson listed. It is no wonder that the people at IBM are very enthusiastic about the prospects.

People have tried to build "expert" databases in the past, by carefully entering information into a computer system. However, the total amount of unprocessed plain-text information out there (e.g. in the medical literature) is vastly greater. Therefore, a system like Watson can potentially become a very powerful assistant. Probably no human expert could have the same overview of the whole literature. Of course, a human medical doctor will still have to look at the options that Watson presents, as a sanity check, and to order relevant follow-up tests on the patient.

Here is the second development that is potentially revolutionary. One serious shortcoming of medical diagnosis is the lack of quantitative data. By this I mean that your blood sample is typically analyzed only when you go to the doctor because you are already feeling ill. Moreover, the resulting values will be informative only if they fall drastically outside some "standard interval". In other words, you probably have to be already very sick. It would be much better if one had some baseline values available for you personally, and not only the statistical average for the "typical" patient. Better yet, it would be great to have time-series information that shows how your values slowly change over time. Of course, there will be harmless fluctuations and drifts, but you might spot dangerous trends (or sudden changes) very early. If concentration X had been hovering around some value for a long time with only minor fluctuations, and then one day began to slowly drift upward, you might catch this trend and respond to it long before it gets really dangerous.
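
As a toy illustration of this kind of personal-baseline monitoring (entirely hypothetical numbers and thresholds, not part of the study described below), one could flag any value that strays far from its own long-term history:

```python
# Toy sketch: flag a measurement that deviates strongly from a person's own baseline.
def drifted(history, new_value, n_sigma=3.0):
    """True if a new measurement lies far outside the personal baseline built from past values."""
    mean = sum(history) / len(history)
    std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
    return abs(new_value - mean) > n_sigma * max(std, 1e-9)

past = [5.1, 5.0, 5.2, 5.1, 4.9, 5.0, 5.1]   # hypothetical stable readings of "concentration X"
print(drifted(past, 5.15))   # False: an ordinary fluctuation
print(drifted(past, 6.2))    # True: a change worth a closer look
```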

A first step in this direction has been made recently. Read more about it in this New York Times article:

http://www.nytimes.com/2012/06/03/business/geneticists-research-finds-his-own-diabetes.html

The geneticist Michael Snyder, who directed the study, was also its subject. First, a genetic analysis was done for him. Then, every two months or so, extensive blood tests were performed. Sometime during the study, he developed diabetes. Thanks to the good time resolution and the extensive tracking of many different proteins, the researchers were even able to tell that the illness likely started when he caught a cold. A connection of this type had not been observed before. Since the diabetes was detected early on, treatment could also start at an early stage, reducing the harm.

At the moment, of course, this approach is still much too expensive for the general public. Each blood sample analysis cost about $2,500. However, in the future the costs will likely come down, and one may also focus on a smaller subset of the 40,000 molecules that were tracked in this study.

If regular blood tests were combined with the capabilities of an expert computer system like Watson, it is very likely that we could detect problems much earlier than we do now. Of course, this raises once again the question of what to do when diagnosis outperforms treatment options, and that will likely be (apart from costs) one of the most hotly debated issues in the years to come.

The future, back then

I am just reading a book about the future. The book is from 1910.

The book is a collection of essays by prominent Germans (of 1910), put together by the journalist Arthur Brehmer. It is called "The World in 100 Years" ("Die Welt in 100 Jahren"), and it was recently republished and became the German "Science Book of the Year" 2010.

For all the German readers, this book is highly recommended! It is fascinating on many different levels, both for the predictions it gets right and those which in retrospect sound naive or even highly problematic. In addition, it contains many marvelous illustrations.

The most striking example of what is predicted correctly is the cell-phone. In 1910, wireless communication had just been invented, and it was clear that this would revolutionize many things. In the book, you can read about how they imagined people running around with a small phone in their pocket and being able to call (and maybe even see) their friends across the world.

When they get things wrong, it is often in such a way that even back then one could have identified the problematic assumptions. For example, it is usually very dangerous to extrapolate naively, especially with regard to things that are currently en vogue. Back then, radioactive elements, especially radium, had just been discovered. For the first few years they were seen as a kind of miracle substance, and several authors in the book speculate about all illnesses being conquered in the future by the use of radium.

One can also spot some highly dangerous lines of thinking that would later contribute to disastrous developments. In several contributions, eugenics is touted as a progressive idea. More generally, there is often the assumption that the state will be able to make sweeping and arbitrary changes to people's lives, with the majority of the population apparently regarded as nameless masses. As long as it fits the "greater good", this is considered beneficial, without questioning the impact on the individual.

Then there is the typical fallacy of futurist planning: the assumption that cities (or countries, etc.) can and will be remade from scratch. Even in 1910, the authors could have looked back over the centuries and seen that cities tend to grow organically and are seldom redone entirely according to some grand new plan. The few exceptions to this rule, artificial cities erected in the 20th century, are usually relatively bleak places that people don't move to unless they have to.

However, one has to say that the 23 authors vary greatly in their opinions, assumptions, ideology, intelligence, wisdom, and consequently in their predictions. One of the best essays (to my mind) is the one by Bertha von Suttner, the peace activist. She describes some of the institutions and ideas that have since been established to preserve peace, of course with very incomplete success.

Maybe I will pick some of the essays from the book and comment on them in some more detail in some future blog posts.

Interference

Interference of waves is one of the most important phenomena in nature.

Even two circular waves spreading outward already show quite an intricate pattern. Witness the blue stripes going diagonally across the picture (e.g. in the upper left part). These are the places where the waves cancel each other and the water surface would not move.
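
As a small sketch of why those nodal stripes appear where they do (my own toy example, not tied to the actual picture): the amplitude of two superposed circular waves vanishes wherever the path difference from the two sources is an odd number of half wavelengths.

```python
import math

# Two circular waves from point sources at (-d, 0) and (+d, 0), both with the same wavelength.
# Using sin(k*r1 - w*t) + sin(k*r2 - w*t) = 2*sin(...)*cos(k*(r1 - r2)/2), the local amplitude
# is 2*|cos(k*(r1 - r2)/2)|: it vanishes where the path difference is an odd multiple of lambda/2.
wavelength, d = 1.0, 1.5        # arbitrary units
k = 2 * math.pi / wavelength

def amplitude(x, y):
    r1 = math.hypot(x + d, y)
    r2 = math.hypot(x - d, y)
    return 2 * abs(math.cos(k * (r1 - r2) / 2))

print(amplitude(0.0, 2.0))    # on the symmetry axis the two waves reinforce (amplitude 2)
print(amplitude(0.25, 0.0))   # path difference of half a wavelength: essentially zero (a nodal line)
```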

Here are only a few of the many phenomena that involve interference of waves:

  • The rainbow-colored appearance of thin soap or oil films, or the iridescent colors of some butterflies and insects
  • Fluctuating strength of a radio signal being scattered from trees, buildings etc., depending on where you try to receive it
  • Highly precise distance measurements, e.g. to detect movements of the earth