Time-series prediction is used in weather forecasting, stock market prediction, and disaster prediction. These algorithms analyze a set of historical data points and use them to project which data points might come next in the sequence.
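If you're curious what that looks like in practice, here is a minimal sketch: a moving-average forecast that predicts the next value from the last few. The window size and the numbers are invented for illustration; real forecasting systems use far more sophisticated models.

```python
# A minimal sketch of time-series prediction: forecast the next value
# as the average of the last few observations. This only illustrates
# the idea of projecting forward from historical data points.

def moving_average_forecast(history, window=3):
    """Predict the next point as the mean of the last `window` points."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Example: daily temperatures (made-up numbers)
temperatures = [21.0, 22.5, 23.1, 22.8, 24.0]
print(moving_average_forecast(temperatures))  # averages the last 3 points: ~23.3
```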
These pattern-matching algorithms use complex mathematics to work their voodoo. You don't need to understand how pattern matching works, or even recall the names of all the various techniques listed above. What you do need to understand is that AI makes it easier for computers to understand the physical world, to make predictions, and to find complex relationships hidden inside data. These tasks are at the root of solving many business problems.
Beyond Deep Learning: The Future of Artificial Intelligence
Most of the AI breakthroughs of the 2010s were built on deep learning technology and neural networks, producing dramatic advances in machine vision, natural language processing, prediction, and content generation. And yet, industry luminaries debate whether AI is about to enter a golden age of rapid technological advancement or to stagnate.
Stagnation or Golden Age?
The argument for stagnation is that deep learning has severe limitations—training needs too many examples and takes too long, and while these AIs pull off some amazing tricks, they have no true understanding of the world. Deep learning is built on algorithms from the mid-1980s and neural network architectures developed in the 1960s. Once we have perfected the implementation of deep learning technology and solved all the problems that we can with it, there are no viable technologies in the pipeline to keep things rolling. The current era of AI deployment will grind to a halt. So goes the stagnation argument.
On the other side of the debate are those who point to promising research that could take AI in new directions and solve a new set of problems.
Capsule networks are the brainchild of Geoff Hinton, a pioneer of backpropagation and one of the fathers of deep learning. Capsules aim to overcome some of deep learning's shortcomings. The difference between capsule networks and traditional convolutional neural networks is beyond the scope of this book, but capsules capture some level of understanding of the relationships between features in an image, which makes image recognition engines more resilient and better at recognizing objects from many different angles.
Commonsense AIs are trained to understand something about the world. Typical AIs operate within a bubble: they have no understanding of the way the world works, and that lack of common sense limits their abilities. A household robot searching for my reading glasses should know that my desk and nightstand are good places to look first, and that the freezer is not.
Several organizations are trying to build AIs with common sense. They are building vast databases of the commonsense notions humans use to help them make high-quality decisions. For example, oranges are sweet, but lemons are sour. A tiger won't fit in a shoe box. Water is wet. Oil is viscous. If you overfeed a hamster, it will get fat. We often take this context for granted, but to an AI these notions are not obvious.
Researchers at the Allen Institute crowdsource commonsense insight using Amazon's Mechanical Turk platform. They use machine learning and statistical analysis to extract additional insights and understand the spatial, physical, emotional, and other relationships between things. For example, from a commonsense notion that “A girl ate a cookie,” the system deduces that a cookie is a type of food and that a girl is bigger than a cookie. Allen Institute researchers estimate they need about a million human-sourced pieces of common sense to train their AIs.
The Cyc project, the world's longest-running AI project, takes a different approach. Since 1984, Doug Lenat and his team have hand-coded more than 25 million pieces of commonsense knowledge in machine-usable form. Cyc knows things like “Every tree is a plant” and “Every plant dies eventually.” From those pieces of information, it can deduce that every tree will die. Cycorp, the company that currently develops Cyc, claims that half of the top 15 companies in the world use Cyc under license. Cyc is used in financial services, healthcare, energy, customer experience, the military, and intelligence.
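The mechanics differ from project to project, but the underlying idea, storing facts in machine-usable form and chaining them together, can be sketched in a few lines of Python. The (subject, relation, object) triples and the little inference rule below are simplified inventions for illustration; they are not Cyc's actual representation.

```python
# A toy illustration of chaining commonsense facts, in the spirit of the
# "every tree is a plant" example. The triples and the inference rule are
# simplified inventions, not Cyc's real knowledge format.

facts = {
    ("tree", "is_a", "plant"),
    ("plant", "dies", "eventually"),
}

def knows_dies(thing):
    """Deduce that `thing` dies if it is (transitively) a kind of plant."""
    if (thing, "dies", "eventually") in facts:
        return True
    for subj, rel, obj in facts:
        if subj == thing and rel == "is_a":
            return knows_dies(obj)  # follow the is_a chain upward
    return False

print(knows_dies("tree"))  # True: a tree is a plant, and every plant dies
```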
As they mature, commonsense knowledge systems may help future AIs to answer more complex questions and assist humans in more meaningful ways.
Causal AIs understand cause and effect, while deep learning systems work by finding correlations in data. To reason, a deep learning AI finds complex associations within data and assesses the probabilities of those associations. Reasoning by association has proven adequate for today's simple AI solutions, but correlation does not imply causation. To create an AI with human-level intelligence, researchers will need far more capable machines. Some AI researchers, most notably Dr. Judea Pearl, believe that the best path forward for AI development is to design AIs that understand cause and effect, which would allow them to reason from an understanding of causation. A deep learning AI associates events (A and B happen together), while a causal AI understands that one event caused the other (A caused B), and not the other way around (B caused A). The sophisticated machines needed to solve big problems like climate change will need to understand the causal relationships at work in highly complex systems. Causal AI will rely on the commonsense knowledge mentioned in the previous section to give it the vital context it needs for sound reasoning.
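A small simulation shows why correlation alone can mislead. In this sketch, a hidden common cause (hot weather) drives both ice-cream sales and swimming accidents, so the two become strongly correlated even though neither causes the other. All variable names and numbers are invented for illustration.

```python
# Why correlation alone can mislead: a hidden common cause (hot weather)
# drives both ice-cream sales and swimming accidents. They correlate
# strongly, yet neither causes the other.

import random

random.seed(0)
samples = []
for _ in range(10_000):
    heat = random.random()                   # hidden common cause
    ice_cream = heat + random.gauss(0, 0.1)  # driven by heat
    accidents = heat + random.gauss(0, 0.1)  # also driven by heat
    samples.append((ice_cream, accidents))

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(correlation(samples))  # ~0.9, yet banning ice cream would not
                             # prevent a single swimming accident
```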
Neuromorphic computers are inspired by the way brains work. Today's neural network designs are based on an understanding of neuroscience from the 1960s. Half a century later, we finally have the computing horsepower needed to implement these archaic machine models. The next time you ask Google Assistant or Alexa to play some Beatles music, remember that the foundation of the AI you're using was designed when the Rolling Stones and the Beatles were first vying for the top of the charts. Neuromorphic computers, also referred to as cognitive computers, are based on a much more recent understanding of how the brain operates: nodes are hyperconnected, their connections can change over time (in the same way that the human brain exhibits plasticity), and there is no separation between memory and processing functions.
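To make that concrete, here is a minimal simulation of a leaky integrate-and-fire neuron, one of the simple spiking models that neuromorphic chips typically implement. Unlike the static units of a conventional neural network, its state evolves over time. The parameters are illustrative and not taken from any particular chip.

```python
# A minimal leaky integrate-and-fire neuron, a simple spiking model in
# the spirit of neuromorphic hardware. Its membrane potential leaks over
# time, integrates incoming current, and fires when it crosses a threshold.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires a spike."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:              # fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

# A steady input current of 0.3 per step makes the neuron spike periodically.
print(simulate_lif([0.3] * 20))  # [3, 7, 11, 15, 19]
```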
Major research projects, such as the Human Brain Project in the European Union and the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative in the United States, seek to map the brain and advance our understanding of how it functions. These efforts, and others like them, push the boundaries of our knowledge and offer new frameworks for the design of future neuromorphic computers. New computer chips, inspired by neuromorphic insights, may accelerate AI functions, reduce power consumption for AI tasks, and enable exciting new capabilities.
Whether it's capsule networks, neuromorphic computing, or common sense and causal AI, there are plenty of avenues of research that should fuel future advances in AI in the coming decades.
Narrow, General, and Super Intelligence
All of today's AI is considered to be “narrow” AI. The holy grail of AI research is the development of “general” and “super” AIs. Let's quickly review these three categories.
Artificial Narrow Intelligence (ANI)
Artificial Narrow Intelligence (ANI), also known as weak AI or vertical AI, refers to any AI that solves problems or performs tasks with a level of intelligence equivalent to or higher than a human's, but only within a narrowly defined domain. Every AI available today, and every AI described in this book, is an example of narrow AI. Narrow AI is good only at the task it was designed for and useless at others. A chess-playing AI can't filter the spam from your email, and your spam filter can't play chess.