Now what has happened to this chessboard?
(see the previous post, if you haven’t read it already)
There is the reddish glow of the warm sunlight. And, quite generally, the colors of things. But let us take one step after the other.
First, it was warm (it really was a nice summer day when I took that picture). What does it mean for something to be hot or cold, on a basic level? Again, suppose you met your scientist at the opening of the Tuileries in the mid-1500s. What would the state of the art have been then? Pretty lousy, it turns out!
Thermometers were developed only around 1600 (among others, by Galileo), although the principle that hot substances tend to expand had been known even to the Greeks. And even with a working thermometer, you might still wonder: what is the basic reason for something to appear hot or cold?
The first one to really get it right was Daniel Bernoulli, who explained the idea in his book Hydrodynamica in 1738. His idea is simple and beautiful: heat is motion. A gas is made up of billions and billions of molecules. These are not at rest, but constantly moving. Their energy is a direct measure of temperature: the faster they are, the higher the temperature. The same happens in solids and liquids, where a kind of “jitter” motion is constantly going on, with particles bouncing back and forth, never really at rest. When a cold body receives heat from a warmer one, its particles start moving around faster. None of this can be seen directly, because the particles (atoms and molecules) are a thousand times smaller than anything even the best light microscope could resolve, but you witness the effects of this microscopic motion by feeling the temperature change.
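Bernoulli's picture can even be made quantitative with a toy numerical experiment (my own illustration, not from his book): sample random molecular velocities for a gas at some temperature and check that the average kinetic energy tracks the temperature. A minimal sketch, in units where Boltzmann's constant and the particle mass are both 1:

```python
import math
import random

random.seed(0)

def mean_kinetic_energy(temperature, n_particles=100_000):
    """Sample 3D velocities of ideal-gas particles at a given temperature
    (units with k_B = m = 1) and return the average kinetic energy."""
    sigma = math.sqrt(temperature)  # each velocity component ~ N(0, kT/m)
    total = 0.0
    for _ in range(n_particles):
        vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
        total += 0.5 * (vx * vx + vy * vy + vz * vz)
    return total / n_particles

# Equipartition: the average kinetic energy comes out as (3/2) k_B T,
# so "hotter" literally means "faster molecules".
print(mean_kinetic_energy(2.0))  # statistically close to 3.0
```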
From there on, the theory of thermodynamics and statistical mechanics continued to develop (at first rather slowly, it must be admitted). A lot of useful insights resulted. For example, at the beginning of the 19th century, a French engineer called Carnot realized that you cannot convert heat entirely into useful mechanical work, which means there are fundamental limits to the efficiency of power plants. Beginning in the middle of the 19th century, Maxwell and Boltzmann put Bernoulli’s ideas about gas particles on a more quantitative footing. All of our microscopic understanding of the properties of materials, or of the workings of living matter (cells), rests on the principles discovered back then.
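Carnot's limit has a remarkably simple form: the best possible efficiency of any heat engine running between a hot and a cold reservoir depends only on the two absolute temperatures. A small sketch (the numbers below are an illustrative example, not data about any real power plant):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat that any engine can convert into work
    when operating between two absolute temperatures (in kelvin)."""
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("need t_hot > t_cold > 0 (absolute temperatures)")
    return 1.0 - t_cold / t_hot

# Hypothetical power-plant numbers: steam at 800 K, environment at 300 K.
print(carnot_efficiency(800.0, 300.0))  # 0.625
```

Even an ideal, friction-free engine at these temperatures could convert at most 62.5% of the heat into work; real plants do considerably worse.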
As a scientist, you are always in danger of getting stuck in your tiny little corner, struggling with the particular research problem that you have chosen at the moment. So from time to time I like to remind myself of the bigger picture. One good way of doing this is just looking out of the window and trying to think about which of the natural phenomena you see around you every day we can already understand.
Instead of a view out of my window, here is a picture I took two years ago. It is a fountain in the Tuileries near the Louvre in Paris. You encounter it when you walk from the Louvre to the Obelisk.
Now that seems pretty simple: Some water, the stone of the fountain, and a dove. Also, there’s part of a chair in the foreground.
Try to think for a moment which natural phenomena enter this picture.
There’s the water with the little ripples, there’s the reflection of the sunlight, there’s the material of the stone.
But before we turn to those things, there’s the fact that you can see this image at all. That is, how do your eyes perceive an image? In fact, all of it can be explained by a very simple rule: light rays travel in straight lines until they hit a surface, where they are absorbed or reflected, and finally they hit your eyes. Light rays as straight lines is a very powerful concept: for example, it lets you predict what the shadows should look like, if you know where the light comes from. It’s also the reason for the phenomenon of perspective, and light rays are used in computer graphics to calculate the appearance of a three-dimensional scene.
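To see how far the straight-line rule carries, here is a tiny shadow calculation (a hypothetical pole and sun angle, purely for illustration): the ray grazing the tip of a vertical pole fixes where its shadow ends.

```python
import math

def shadow_length(pole_height, sun_elevation_deg):
    """Length of the shadow cast by a vertical pole on flat ground,
    assuming light rays travel in straight lines from a distant sun."""
    elevation = math.radians(sun_elevation_deg)
    # The ray grazing the pole's tip meets the ground at height / tan(elevation).
    return pole_height / math.tan(elevation)

print(shadow_length(2.0, 45.0))  # a 2 m pole under a 45-degree sun: 2 m shadow
```

The same reasoning, run in reverse, is how one can estimate the height of a tree from its shadow and the time of day.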
The Tuileries gardens were created in 1564. So how much of the story about light rays was known at the time? It turns out, pretty much everything! Already Euclid had written a treatise called ‘Optics’ in which he used light rays, and the Greeks and Romans knew how to make some kinds of lenses. Wearable eyeglasses based on these concepts had appeared in the 13th century, at around the same time that perspective was discovered in art. So light rays would have seemed a very well-understood concept to any scientist you might have met strolling through the Tuileries at the time. (see the Optics page on Wikipedia)
Then, what about the water? Obviously, the most basic ideas about water were known in a qualitative way throughout history; otherwise you would have had a hard time steering ships. Archimedes had already described the concept of buoyancy: an object that is lighter than water will be pushed up. And Leonardo da Vinci had drawn many sketches of swirls of water (vortices), inspecting them closely. But none of them could have predicted how exactly the water currents would look when you moved a ship or any other object through water. Those insights still had to wait more than 200 years. People like Newton and Leibniz would first have to develop the idea of describing changes as composed of many very small steps (differential calculus). That was around 1700. About half a century later, mathematics had become advanced enough to describe in the same way small changes in space and time (partial differential equations). So in 1757 Leonhard Euler wrote down the first equations of fluid dynamics, describing how the velocity field of a fluid like water (or air) changes with time. If you know the velocity at every point in space at this moment, you can predict it for the next moment, and from there on to eternity. (see the Euler equations on Wikipedia)
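Euler's idea of predicting the next moment from the current one can be sketched in a few lines. Below is a deliberately crude one-dimensional toy model (just the -u du/dx term of the Euler equations, on a periodic grid, with a simple explicit time step) – not a serious fluid solver, only the shape of the idea:

```python
import math

def step_velocity(u, dx, dt):
    """One explicit time step of a 1D toy analogue of the Euler equations:
    du/dt = -u du/dx, with periodic boundaries. Knowing the velocity field
    now lets us predict it a small moment dt later."""
    n = len(u)
    du = []
    for i in range(n):
        # central difference for the spatial derivative du/dx
        dudx = (u[(i + 1) % n] - u[(i - 1) % n]) / (2 * dx)
        du.append(-u[i] * dudx * dt)
    return [ui + d for ui, d in zip(u, du)]

# Start from a smooth sine-shaped velocity profile and march it forward;
# steeper and steeper fronts develop over time, as in a real fluid.
n = 64
dx = 2 * math.pi / n
u = [math.sin(i * dx) for i in range(n)]
for _ in range(10):
    u = step_velocity(u, dx, dt=0.01)
```

Repeating such steps from a known present state is, in spirit, exactly what modern fluid-dynamics codes do – just in three dimensions and with far more careful numerics.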
With a few additional steps (like introducing friction into the equations), these equations for fluid flow have become extremely powerful. They can now be used to simulate the flow of air around the wings of an airplane, entirely on a computer, before the plane ever takes off for the first time. And they predict the changing weather patterns at least a few days in advance, which is good enough to be useful. All of that came about because people were not content with knowing only in a rough, qualitative way how water may behave, but tried to systematically analyze the details – a process that took centuries because all the mathematical tools first had to be developed.
(to be continued)
I am trying to get back to blogging, so I will recycle this blog. As a matter of fact, I started a new blog over at blogspot (for no particular reason), but then I discovered that it is blocked in China. Wordpress, in contrast, is not blocked, so I am returning here.
Ever since the beginnings of quantum mechanics, there has been the question of how to recover the classical limit. First came Schrödinger's paper, in which he presented wave packets oscillating inside a harmonic well (nowadays known as "coherent states"). It soon turned out that this example was a little too special, as it is the exception for a wave packet not to spread over time. Semiclassical methods were soon developed that tried to make use of the classical trajectories, at least in order to describe the propagation of wave packets at high energies, for wavelengths much shorter than the typical length scales of the potential. In this vein of thinking, the Lagrangian was introduced into quantum mechanics by Dirac, and then fully exploited by Feynman to construct the path integral approach.
However, you don't need to go very far away from the ground state in order to illustrate something resembling classical motion. Take, for example, an electron quantum wire – a "waveguide for electrons", infinitely extended in one direction, and bounded in the other directions. The energy eigenstates are freely propagating plane waves in the direction of motion along the wire, multiplied by some transverse eigenfunctions depending on the confining potential in the other directions. Different transverse eigenmodes have different energies, forming several "transport channels" (just like in a microwave guide or an optical fibre for electromagnetic waves).
The general rules of quantum mechanics tell us the following: if you superimpose two states of equal energy but different transverse channels (and hence different momenta along the wire), you again obtain an energy eigenstate. It turns out that this eigenstate resembles the kind of zig-zag motion that you would classically expect if you launched a particle into the wire at an angle, so that it reflects off the boundaries.
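For a concrete, textbook-style version of such a superposition (a hard-wall wire of unit width – an idealization, not the specific system of any experiment): combine the two lowest transverse modes with longitudinal wavenumbers chosen so that the total energies match. The resulting probability density is independent of time, but modulated along the wire with period 2π/(k1 − k2):

```python
import math

def density(x, y, k1, k2, weight=1.0):
    """Probability density of a superposition of the two lowest transverse
    modes of a hard-wall wire of width 1 (units with hbar = 2m = 1), each
    multiplied by its own longitudinal plane wave exp(i k x)."""
    # psi = sin(pi y) e^{i k1 x} + weight * sin(2 pi y) e^{i k2 x}
    m1 = math.sin(math.pi * y)
    m2 = weight * math.sin(2 * math.pi * y)
    # |psi|^2: the cross term oscillates along x -> the zig-zag pattern
    return m1 * m1 + m2 * m2 + 2 * m1 * m2 * math.cos((k1 - k2) * x)

# Equal total energy E = k^2 + (n pi)^2: choosing E = 5 pi^2 gives
# k1 = 2 pi for transverse mode 1 and k2 = pi for transverse mode 2.
k1, k2 = 2 * math.pi, math.pi
print(density(0.0, 0.5, k1, k2))
```

Evaluating this density on a grid of (x, y) points reproduces the kind of blurry zig-zag pictures described below.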
This is best illustrated in pictures. They show the probability density (stationary in time, since this is an energy eigenstate!). The only difference between them is the relative weight of the two superimposed states (with more weight given to the first excited transverse mode in the second picture).
This looks already like a blurry version of the classical motion, doesn't it?
All chips are made out of atoms, but only recently have special chips been made to guide the motion of (ultracold) atoms. These are called “atom chips”. A bunch of current-carrying wires on the chip is used to create magnetic fields that determine the potential which the atoms see. In this way, complicated interference setups and other structures can be created – in principle. Up to now, many imperfections have still somewhat slowed progress.
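To get a feeling for the numbers, here is the standard "side guide" estimate (a textbook formula, not the geometry of any particular experiment): the circular magnetic field of a straight wire cancels a uniform bias field at a definite height above the chip, and that is where the atoms are trapped.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, in T*m/A

def trap_height(current, bias_field):
    """Height above a straight chip wire at which the wire's circular field,
    mu0 * I / (2 pi r), cancels a uniform bias field -- the 'side guide'
    trapping geometry used on atom chips (a rough textbook estimate)."""
    return MU0 * current / (2 * math.pi * bias_field)

# Illustrative numbers: 1 A of wire current and a 2 mT (20 G) bias field.
print(trap_height(1.0, 2e-3))  # height in metres, of order 100 micrometres
```

Because the trap sits so close to the wire, tiny irregularities in the wire's shape translate directly into a corrugated potential for the atoms – which is why fabricating smoother wires matters so much.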
That’s why a recent paper by the Schmiedmayer group seems to be an important step forward. They show how they can shape their wires much more accurately than before, which bodes well for the future ability to implement all kinds of nice structures in which the transport of ultracold atoms can be studied.
Incidentally, this paper also has the honour of being the first that is linked to on this blog (because it happened to be the first that caught my eye on this day’s cond-mat new articles in arXiv).
Why would you want to read this blog?
Suppose you are a physicist (like me) and could use some regularly updated pointers to the most recent literature on all topics dealing with quantum systems, coherence and decoherence, the physics of nanoscale structures – you name it. Suppose that, besides those links, you would also encounter some more or less “coherent” comments and opinions on those articles and on how they fit into the general context.
Then this blog might be for you. In any case, I started it after reading about the new trackback feature on the arXiv. Let’s see how this blog works out.