But these two premises imply that no effective procedure, or algorithm, can list all the uses of a screwdriver. This is the famous frame problem in algorithmic computer science, unsolved since Turing and his machine.
But all that has to happen in the evolution of a bacterium in, say, some new environment is that a molecular screwdriver “finds a use” that enhances the fitness of the bacterium and that there be heritable variance for that “use.” Then natural selection will “pull out” this new use by selecting at the level of the bacterium, not the molecular screwdriver.
The profound implication of the newly selected bacterium with the molecular screwdriver is that this evolutionary step changes the very phase space of evolution in an un-pre-statable way. Hence we can write no laws of motion for this evolution, nor can we pre-state the niche boundary conditions noncircularly, so we could not even integrate the laws of motion we cannot write in the first place. Since we cannot list all the uses of the molecular screwdriver, we do not know the sample space of evolution.
Evolution of the biosphere and, a fortiori, of the human economy, legal systems, culture, and history, are entailed by no laws at all. True novelty can arise beyond the Newtonian Paradigm, which is broken beyond the watershed of life.
Re-enchantment, a path beyond Modernity, is open to us.
WHERE DID YOU GET THAT FACT?
VICTORIA STODDEN
Computational legal scholar; assistant professor of statistics, Columbia University
We are being inundated every day with computational findings, conclusions, and statistics. In op-eds, policy debates, and public discussions, numbers are presented with the finality of a slammed door. In fact, we need to know how these findings were reached, so that we can evaluate their relevance and credibility, resolve conflicts when they differ, and make better decisions. Even figuring out where a number came from is a challenge, let alone trying to understand how it was determined.
This is important because of how we reason. In the thousands of decisions we make each day, seldom do we engage in a deliberately rational process anything like gathering relevant information, distilling it into useful knowledge, and comparing options. In most situations, standing around weighing pros against cons is a pretty good way to ensure rape, pillage, and defeat, whether metaphorical or real, and to miss out on the pleasures of life. So of course we don’t very often do it; instead, we make quick decisions based on instinct, intuition, heuristics, and shortcuts honed over millions of years.
Computers, however, are very good at components of the decision-making process that we’re not: They can store vast amounts of data accurately, organize and filter it, carry out blindingly fast computations, and beautifully display the results. Computers can’t (yet?) direct problem solving or contextualize findings, but for certain important sets of questions they are invaluable in enabling us to make much more informed decisions. They operate at scales our brains can’t, and they make it possible to tackle problems at ever greater levels of complexity.
The goal of better decision making is behind the current hype surrounding big data, the emergence of “evidence-based” everything—policy, medicine, practice, management, and issues such as climate change, fiscal predictions, health assessment, even what information you are exposed to online. The field of statistics has been addressing the reliability of results derived from data for a long time, with many successful contributions (for example, confidence intervals, quantifying the distribution of model errors, and the concept of robustness).
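To make one of those statistical contributions concrete, here is a minimal sketch, in Python and with purely hypothetical numbers, of the kind of calculation behind a 95 percent confidence interval for a sample mean. It is only an illustration of the idea, not a method drawn from the essay:

```python
# Minimal sketch: a 95% confidence interval for a sample mean.
# The data values here are hypothetical, purely for illustration.
import math
import statistics

sample = [12.1, 9.8, 11.4, 10.2, 13.0, 10.9, 11.7, 9.5]  # hypothetical measurements

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# Normal approximation (z = 1.96); a t critical value would be more
# appropriate for a sample this small, but the idea is the same.
z = 1.96
lower, upper = mean - z * sem, mean + z * sem
print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```

The point is not the particular formula but that the interval quantifies how much the reported number could plausibly vary, which is exactly the kind of context a bare statistic omits.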
The scientific method suggests skepticism when interpreting conclusions and a responsibility to communicate scientific findings transparently so others may evaluate and understand the result. We need to bring these notions into our everyday expectations when presented with new computational results. We should be able to dig in and find out where the statistics came from, how they were computed, and why we should believe them. Those concepts receive almost no consideration when findings are publicly communicated.
I’m not saying we should independently verify every fact that enters our daily life—there just isn’t enough time, even if we wanted to—but the ability should exist where possible, especially for knowledge generated with the help of computers. Even if no one actually tries to follow the chain of reasoning and calculations, more care will be taken in generating the findings if the potential for inspection exists. Even if only a small number of people look into the reasoning behind results, they might find issues, provide needed context, or confirm the finding as it stands. In most cases, the technology exists to make this possible.
Here’s an example. When news articles started appearing on the World Wide Web in the 1990s, I remember eagerly anticipating hot-linked stats—being able to click on any number in the text to see where it came from. More than a decade later, this still isn’t routine, and facts are asserted without the possibility of verification. For any conclusions that enter the public sphere, it should be expected that all the steps that generated the knowledge are disclosed, including making the data they’re based on available for inspection whenever possible and making available the computer programs that carried out the data analysis—open data, open source, scientific reproducibility.
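As a rough sketch of what such disclosure might look like, here is a short Python script of the sort that could be posted alongside an article, letting anyone recompute a quoted figure from the data behind it. The embedded rows and column name are hypothetical stand-ins for a real open dataset:

```python
# Sketch of a "hot-linked" statistic: a short, openly posted script that
# recomputes a quoted number from the underlying data. The embedded CSV is a
# hypothetical stand-in; in practice the script would read the real dataset
# published alongside the article.
import csv
import io
import statistics

OPEN_DATA_CSV = """respondent,outcome
1,11.2
2,9.8
3,12.5
4,10.4
5,11.9
"""  # hypothetical data, for illustration only

rows = csv.DictReader(io.StringIO(OPEN_DATA_CSV))
values = [float(row["outcome"]) for row in rows]

# The "fact" quoted in the article, recomputed from the raw data it cites.
print(f"n = {len(values)}, mean outcome = {statistics.mean(values):.2f}")
```

With the data and the analysis code both public, clicking through from a number to a script like this would let a reader confirm the figure, or see exactly where it went wrong.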
Without the ability to question findings, we risk fooling ourselves into thinking we are capitalizing on the Information Age when we’re really just making decisions based on evidence that no one, except perhaps the people who generated it, can actually understand. That’s the door closing.
DOUGLAS T. KENRICK
Professor of psychology, Arizona State University; author, Sex, Murder, and the Meaning of Life
The 2006 movie Idiocracy was hardly Academy Award material, but it began with an interesting premise: Given that there is no strong selection for high IQ in the modern world, people who are less intelligent are having more children than the more intelligent people. Extrapolating that trend for 500 years, the movie’s producers depicted a world populated by numbskulls. Is this a possibility?
There are several causes for concern. To begin with, it is a correct assumption that natural selection is largely agnostic with regard to intelligence. We large-brained hominids like to think that all the information-crunching power in our hypertrophied cortexes will eventually allow us to solve the big problems of modern times, so that our descendants persist into the distant future. But it ain’t necessarily so. Dinosaurs were a lot smarter than cockroaches, and Australopithecines were Einsteinian by comparison, yet the roaches have had a much longer run and are widely expected to outlast Homo sapiens .
Consider a few more recent phenomena:
1. Even correcting for other factors, people living in larger families have lower IQs.
2. In the modern world, less-educated people reproduce earlier and have larger families than highly educated people.
3. Less-educated people are more likely to hold conservative religious beliefs than are better-educated people.
4. Conservative religiosity is associated with opposition to birth control and abortion. The psychologist Jason Weeden has data suggesting that this is, in fact, close to the heart of the split between the liberal left and the conservative right.
5. Some conservative religions, such as the Church of Jesus Christ of Latter-day Saints, actively encourage large families.