When people explain how our normal experiences give no inkling of relativity and quantum mechanics, the great speed of light and the tiny action quantum are often invoked. Relativity was discovered so late because all normal speeds are so small compared with light’s. Similarly, quantum mechanics was not discovered earlier because all normal actions are huge compared with h. This is true, but in a sense it is also misleading. For physicists at least, relativity is completely comprehensible. The mismatch between the relativistic world and its non-relativistic appearance to us is entirely explained by the speed of light. In contrast, the mere smallness of Planck’s constant does not fully explain the classical appearance of the quantum world. There is a mystery. It is, I believe, intimately tied up with the nature of time. But we must first learn more about the quantum.
Einstein went further than Planck in embracing discreteness. His 1905 paper, written several months before the relativity paper, is extraordinarily prescient and a wonderful demonstration of his ability to draw far-reaching conclusions from general principles. He showed that in some respects radiation behaved as if it consisted of particles. In a bold move, he then suggested that ‘the energy of a beam of light emanating from a certain point is not distributed continuously in an ever increasing volume but is made up of a finite number of indivisible quanta of energy that are absorbed or emitted only as wholes’. Einstein called the putative particles light quanta (much later they were called photons). In a particularly beautiful argument, Einstein showed that their energy E must be the radiation frequency ω times Planck’s constant: E = hω. This has become one of the most fundamental equations in physics, just as significant as the famous E = mc².
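To put a rough number on this relation, here is a minimal sketch of my own (not from the text), which applies E = hω to green light, reading ω as the ordinary frequency in cycles per second; the constants and the 550 nm wavelength are assumptions chosen purely for illustration.

```python
# Sketch (illustration only): the photon energy relation E = h * omega,
# with omega taken as the ordinary frequency in cycles per second,
# applied to green light of wavelength ~550 nm.

PLANCK_H = 6.626e-34       # Planck's constant, J*s
SPEED_OF_LIGHT = 2.998e8   # metres per second
EV_TO_J = 1.602e-19        # joules per electron-volt

wavelength = 550e-9                      # metres (green light, assumed for the example)
frequency = SPEED_OF_LIGHT / wavelength  # cycles per second, ~5.5e14 Hz
energy_joules = PLANCK_H * frequency     # E = h * omega
energy_ev = energy_joules / EV_TO_J      # the same energy in electron-volts

print(f"frequency ~ {frequency:.2e} Hz")
print(f"photon energy ~ {energy_joules:.2e} J ~ {energy_ev:.2f} eV")
```

A single quantum of visible light thus carries only a couple of electron-volts, which is why the granularity of radiation is invisible in everyday experience.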
The idea of light quanta was very daring, since a great many phenomena, above all the diffraction, refraction, reflection and dispersion of light, had all been perfectly explained during the nineteenth century in terms of the wave hypothesis and associated interference effects. However, Einstein pointed out that the intensity distributions measured in optical experiments were invariably averages accumulated over finite times and could therefore be the outcome of innumerable ‘hits’ of individual light quanta. Then Maxwell’s theory would correctly describe only the averaged distributions, not the behaviour of the individual quanta. Einstein showed that other phenomena not belonging to the classical successes of the wave theory could be explained better by the quantum idea. He explained and predicted effects in ovens, the generation of cathode rays by ultraviolet radiation (the photoelectric effect), and photoluminescence, all of which defied classical explanation. It was for his quantum paper, not relativity, that Einstein was awarded the 1921 Nobel Prize for Physics.
The great mystery was how light could consist of particles yet exhibit wave behaviour. It was clear to Einstein that there must be some statistical connection between the positions of the conjectured light quanta and the continuous intensities of Maxwell’s theory. Perhaps it could arise through significantly more complicated classical wave equations that described particles as stable, concentrated ‘knots’ of field intensity. Maxwell’s equations would then be only approximate manifestations of this deeper theory. Throughout his life, Einstein hankered after an explanation of quantum effects through classical fields defined in a space-time framework. In this respect he was surprisingly conservative, and he famously rejected the much simpler statistical interpretation provided for his discoveries by the creation of quantum mechanics in the 1920s.
In the following years, Einstein published several important quantum papers, laying the foundations of a quantum theory of the specific heats of solids. However, the next major advance came in 1913 with Danish physicist Niels Bohr’s atomic model. It had long been known that atoms emit radiation only at certain frequencies, called lines because of their appearance in spectra. These spectral lines, which had been arranged purely empirically in regular series, were a great mystery. Everyone assumed that each line must be generated by an oscillatory process of the same frequency in the atoms, but no satisfactory model could be constructed.
Bohr found a quite different explanation. In a famous experiment, the New Zealander Ernest Rutherford had recently shown that the positive charge in atoms (balanced by the negative charge of the electrons) was concentrated in a tiny nucleus. This discovery was itself very surprising and is illustrated by a well-known analogy. If the space of an atom – the region in which the electrons move – is imagined as being the size of a cathedral, the nucleus is the size of a flea. Bohr supposed that an atom was something like the solar system, with the nucleus the ‘Sun’ and the electrons ‘planets’.
However, he made a seemingly outrageous ad hoc assumption. Using the electrostatic force for the known charges of the electron and positive nucleus, he calculated the electron orbits in Newtonian mechanics for the hydrogen atom, which has only one electron. Each such orbit has a definite angular momentum. Bohr suggested that only orbits for which this angular momentum is some exact multiple of Planck’s constant, i.e. 0, h, 2h, ..., can occur in nature. These orbits also have definite energies, now called energy levels. He made the further equally outrageous conjecture that radiation in spectral lines arises when an electron ‘jumps’ (for some unexplained reason) from an orbit with higher energy to one with lower energy. He suggested that the difference E of these energies is converted into radiation with frequency ω, determined by the relation E = hω found by Einstein for the ‘lump of energy’ associated with radiation of frequency ω. Thus, according to Bohr’s theory, an atom emits a light quantum (photon) of a well-defined energy by jumping from one orbit to another.
For hydrogen atoms, it was easy to calculate the energy levels and hence the frequencies of their radiation. Subject to certain further conditions, Bohr’s theory had an immediate success. His hotchpotch of Newtonian theory and strange quantum elements hardly explained the enigmatic spectral lines, but it did predict their frequencies extraordinarily well, and there could be no doubting that he had found at least some part of a great truth.
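To show how little arithmetic this takes, here is a short sketch of my own (assuming the standard Bohr result that the hydrogen energy levels are −13.6 eV/n², which follows from his quantization rule): the frequency of a spectral line is just the energy difference of two orbits divided by h, as in E = hω. The 3 → 2 jump reproduces the familiar red Balmer line near 656 nm.

```python
# Sketch (illustration only, assuming the textbook Bohr formula E_n = -13.6 eV / n^2):
# a spectral-line frequency is the energy difference of two orbits divided by h.

PLANCK_H = 6.626e-34       # Planck's constant, J*s
SPEED_OF_LIGHT = 2.998e8   # metres per second
EV_TO_J = 1.602e-19        # joules per electron-volt
BINDING_EV = 13.6          # ground-state binding energy of hydrogen, eV

def energy_level(n):
    """Energy of the n-th Bohr orbit of hydrogen, in electron-volts."""
    return -BINDING_EV / n**2

def line_frequency(n_upper, n_lower):
    """Frequency of the light quantum emitted in a jump between two orbits."""
    delta_e = (energy_level(n_upper) - energy_level(n_lower)) * EV_TO_J
    return delta_e / PLANCK_H          # E = h * omega, solved for omega

# First Balmer line: jump from the third orbit to the second, observed at ~656 nm.
f = line_frequency(3, 2)
print(f"frequency ~ {f:.3e} Hz, wavelength ~ {SPEED_OF_LIGHT / f * 1e9:.0f} nm")
```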
During the next decade the Bohr model was applied to more and more atoms, often but not always with success. It was clearly ad hoc. The need for an entirely new theory of atomic and optical phenomena based on consistent quantum principles became ever more apparent, and was keenly felt. Finally, in 1925/6 a complete quantum mechanics was formulated – by Werner Heisenberg in 1925 and Erwin Schrödinger in 1926 (and called, respectively, matrix mechanics and wave mechanics). At first, it seemed that they had discovered two entirely different schemes that miraculously gave the same results, but quite soon Schrödinger established their equivalence.
Heisenberg’s scheme, or picture, is rooted in abstract algebra and is often regarded as the truer of the two. In the form in which quantum theory currently exists, it is also the more flexible and general. Unfortunately, that same abstraction makes it very difficult to describe in intuitive terms. I shall therefore use the Schrödinger picture. Luckily, this will not detract from what I want to say. In fact, one of the main ideas I want to develop is that the Schrödinger picture is actually more fundamental than the Heisenberg picture, and is the only one that can be used to describe the universe quantum-mechanically. Many physicists will be sceptical about this, but perhaps that is because they study phenomena in an environment and do not consider how local physics might arise from the behaviour of the universe as a whole.