The quest involves a lot of ‘trial and error’. But it’s becoming possible to calculate the properties of materials, and to do this so fast that millions of alternatives can be computed, far more quickly than actual experiments could be performed. Suppose that a machine came up with a unique and successful recipe. It might have succeeded in the same way as AlphaGo. But it would have achieved something that would earn a scientist a Nobel prize. It would have behaved as though it had insight and imagination within its rather specialised universe—just as AlphaGo flummoxed and impressed human champions with some of its moves. Likewise, searches for the optimal chemical composition for new drugs will increasingly be done by computers rather than by real experiments, just as for many years aeronautical engineers have simulated air flow over wings by computer calculations rather than depending on wind-tunnel experiments.
Equally important is the capability to discern small trends or correlations by ‘crunching’ huge data sets. To take an example from genetics, qualities like intelligence and height are determined by combinations of genes. To identify these combinations would require a machine fast enough to scan large samples of genomes to identify small correlations. Similar procedures are used by financial traders in seeking out market trends and responding rapidly to them, so that their investors can top-slice funds from the rest of us.
My claim that there are limits to what human brains can understand was, incidentally, contested by David Deutsch, a physicist who has pioneered key concepts of ‘quantum computing’. In his provocative and excellent book The Beginning of Infinity, [7] he pointed out that any process is in principle computable. This is true. However, being able to compute something is not the same as having an insightful comprehension of it. Consider an example from geometry, where points in the plane are designated by two numbers, the distance along the x-axis and along the y-axis. Anyone who has studied geometry at school would recognise the equation x² + y² = 1 as describing a circle. The famous Mandelbrot set is described by an algorithm that can be written down in a few lines. And its shape can be plotted by even a modestly powered computer—its ‘Kolmogorov complexity’ isn’t high. But no human who is just given the algorithm can grasp and visualise this immensely complicated ‘fractal’ pattern in the same sense that they can visualise a circle.
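The algorithm in question really is short. A minimal Python sketch might look like the following (the iteration cap, escape radius, and grid resolution are illustrative choices, not part of the mathematical definition):

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to belong to the Mandelbrot set:
    iterate z -> z*z + c from z = 0 and check that |z| stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2, the iteration escapes to infinity
            return False
    return True

# A coarse text-mode plot of the set over a small grid of complex numbers.
for row in range(21):
    y = 1.2 - row * 0.12
    print("".join(
        "#" if in_mandelbrot(complex(-2.1 + col * 0.05, y)) else " "
        for col in range(64)
    ))
```

A few lines of code, yet the boundary they trace is infinitely intricate—which is exactly the point: the recipe is simple to state and cheap to run, but the pattern it generates defies the kind of mental visualisation that the circle equation permits.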
We can expect further dramatic advances in the sciences during this century. Many questions that now perplex us will be answered, and new questions will be posed that we can’t even conceive today. We should nonetheless be open-minded about the possibility that despite all our efforts, some fundamental truths about nature could be too complex for unaided human brains to fully grasp. Indeed, perhaps we’ll never understand the mystery of these brains themselves—how atoms can assemble into ‘grey matter’ that can become aware of itself and ponder its origins. Or perhaps any universe complicated enough to have allowed our emergence is for just that reason too complicated for our minds to understand.
Whether the long-range future lies with organic posthumans or with intelligent machines is a matter for debate. But we would be too anthropocentric if we believed that a full understanding of physical reality is within humanity’s grasp, and that no enigmas will remain to challenge our posthuman descendants.
If the number one question astronomers are asked is ‘Are we alone?’, the number two question is surely ‘Do you believe in God?’ My conciliatory answer is that I do not, but that I share a sense of wonder and mystery with many who do.
The interface between science and religion still engenders controversy, even though there has been no essential change since the seventeenth century. Newton’s discoveries triggered a range of religious (and antireligious) responses. So, even more, did Charles Darwin in the nineteenth century. Today’s scientists evince a variety of religious attitudes; there are traditional believers as well as hard-line atheists among them. My personal view—a boring one for those who wish to promote constructive dialogue (or even just unconstructive debate) between science and religion—is that, if we learn anything from the pursuit of science, it is that even something as basic as an atom is quite hard to understand. This should induce scepticism about any dogma, or any claim to have achieved more than a very incomplete and metaphorical insight into any profound aspect of existence. As Darwin said, in a letter to the American biologist Asa Gray: ‘I feel most deeply that the whole subject is too profound for the human intellect. A dog might as well speculate on the mind of Newton. Let each man hope and believe as he can’. [8]
Creationists believe that God created the Earth more or less as it is—leaving no scope for emergence of new species or enhanced complexity and paying little regard to the wider cosmos. It is impossible to refute, by pure logic, even someone who claims that the universe was created an hour ago, along with all our memories and all vestiges of earlier history. ‘Creationist’ concepts still hold sway among many US evangelicals and in parts of the Muslim world. In Kentucky there is a ‘creation museum’ with what its promoters describe as a ‘full-size’ Noah’s Ark, 510 feet long, built at a cost of $150 million.
A more sophisticated variant—‘intelligent design’—is now more fashionable. This concept accepts evolution but denies that random natural selection can account for the immensely long chain of events that led to our emergence. Much is made of stages where a key component of living things seems to have required a series of evolutionary steps rather than a single leap, but where the intermediate steps would in themselves confer no survival advantage. But this style of argument is akin to traditional creationism. The ‘believer’ focuses on some details (and there are many) that are not yet understood and argues that the seeming mystery constitutes a fundamental flaw in the theory. Anything can be ‘explained’ by invoking supernatural intervention. So, if success is measured by having an explanation, however ‘flip’, then the ‘intelligent designers’ will always win.
But an explanation only has value insofar as it integrates disparate phenomena and relates them to a single underlying principle or unified idea. Such a principle is Darwinian natural selection as expounded in On the Origin of Species , a book he described as ‘one long argument’. Actually, the first great unifying idea was Newton’s law of gravity, identifying the familiar gravitational pull that holds us on the ground and makes an apple fall with the force that holds the Moon and planets in their orbits. Because of Newton, we need not record the fall of every apple.
Intelligent design dates back to classic arguments: a design needs a designer. Two centuries ago, the theologian William Paley introduced the now-well-known metaphor of the watch and the watchmaker—adducing the eye, the opposable thumb, and so forth as evidence of a benign Creator. [9] We now view any biological contrivance as the outcome of prolonged evolutionary selection and symbiosis with its surroundings. Paley’s arguments have fallen from favour even among theologians. [10]
Paley’s view of astronomy was that it was not the most fruitful science for yielding evidence of design, but ‘that being proved, it shows, above all others, the scale of [the Creator’s] operations’. Paley might have reacted differently if he’d known about the providential-seeming physics that led to galaxies, stars, planets, and the distinctive elements of the periodic table. The universe evolved from a simple beginning—a ‘big bang’—specified by quite a short recipe. But the physical laws are ‘given’ rather than having evolved. Claims that this recipe seems rather special can’t be so readily dismissed as Paley’s biological ‘evidences’ (and a possible explanation in terms of a multiverse is mentioned in section 4.3).