What Is It and Why Is It Important?
Artificial intelligence is the umbrella term for the effort to mimic human skills and replicate human intelligence with machines. Various approaches to building artificial intelligence have been tried over the years. In the 1980s and 1990s, "expert systems" were all the rage. Today, most modern AI uses a technique known as machine learning: machines learn from examples in the form of training data. Most machine learning systems are built with artificial neural networks (ANNs), also known more simply as neural networks.
Electricity, Digital Computers, and Artificial Intelligence
About 120 years ago, electrification changed the world. Electricity refrigerates our food, washes our clothes, lights our homes, and powers our factories. Electricity transformed every industrial sector and now powers modern life.
Roughly 80 years ago, the first digital computers were built. Initially limited in their capability, computers evolved into powerful machines that brought us word processing, spreadsheets, the internet, video games, social media, streaming media, and smartphones. Like electricity before them, computers transformed business and changed our lives.
Artificial intelligence will have an impact as profound as both electrification and the digital computer. AI luminary Andrew Ng, former chief scientist at Baidu and former lead of the Google Brain project, now runs Landing.ai, a company that uses AI to solve manufacturing problems. In 2017, Ng observed, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years."
Artificial intelligence is a huge deal, worthy of the first and longest chapter of this book. Sundar Pichai, CEO at Google, once said, “AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire.” While hyperbolic, this statement from the head of one of the world's most powerful tech companies should make us all sit up and listen.
AI's hype is largely justified. Just as digital technology was a vital component of any successful business strategy in the 1990s and 2000s, so AI must be central to the strategic plans of the 2020s.
The Next Era of Computing: Traditional Digital versus Artificial Intelligence
Artificial intelligence and traditional digital computing are complementary. Importantly, AI solves problems that are costly or impossible for traditional computers to solve. The two technologies will coexist and work side by side, each solving a different set of problems. In a language translation app on a smartphone, for example, traditional code presents an attractive user interface, while AI handles the voice recognition and language translation.
AI solves problems using a radically different approach from traditional computers. Traditional computers are programmed to solve problems. Programs apply a set of rules to data and compute results. Said another way, they take rules and data as input, and output results. AI solves problems without using preprogrammed rules. Machine learning, a popular form of AI, takes data and results as its input and infers rules as its output. Through a complex training process, AI finds patterns and associations between data and results and divines its own rules for how they connect. This trait lets us solve entirely new problems with AI. It's why AI can seem so magical: It solves problems we don't know how to solve ourselves.
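To make the contrast concrete, here is a minimal sketch in Python, assuming the scikit-learn library is available; the freezing-point example and all names are illustrative, not from the text. The hand-written function encodes a rule supplied by a programmer, while the decision tree infers an equivalent rule purely from example data and results.

```python
# A minimal sketch (assuming scikit-learn) contrasting the two approaches.
from sklearn.tree import DecisionTreeClassifier

# Traditional computing: rules + data -> results.
def rule_based_label(temp_c: float) -> str:
    # The programmer supplies the rule explicitly.
    return "ice" if temp_c < 0 else "water"

# Machine learning: data + results -> rules.
temperatures = [[-10], [-5], [-1], [1], [4], [20]]          # data
labels = ["ice", "ice", "ice", "water", "water", "water"]   # known results

model = DecisionTreeClassifier()
model.fit(temperatures, labels)  # the model infers a ~0 degree threshold itself

print(rule_based_label(-3))      # "ice", from the hand-written rule
print(model.predict([[-3]])[0])  # "ice", from the learned rule
```

In the first case the rule had to be known in advance; in the second, the connection between data and results was discovered from examples, which is exactly what lets machine learning tackle problems whose rules we cannot articulate.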
To find complex associations in data and build rules, AIs must be trained with thousands or sometimes millions of examples. Today's AIs are not like human brains. While some of the organizing principles are the same, human children learning about the world don't have to see a million automobiles before they can recognize one and adorably yell the word “car.”
A 1950s Concept and 1980s Algorithms Meet Modern Computing Horsepower
The term "artificial intelligence" was coined in the 1950s. The core algorithms behind today's AIs were first proposed in the 1970s and popularized in the mid-1980s. But it was 2012 before the recent crop of AI breakthroughs began to appear. Why the quarter-century delay? Older computers lacked the performance to run AI applications. High-end graphics processors (GPUs) from companies like Nvidia eventually provided the computing horsepower needed: their parallel number-crunching architectures, designed to render realistic video games, turned out to be well suited to training AI. Besides fast computers, AIs need training data to learn from. As digital storage costs fell and broadband speeds increased, data flooded in from many sources: billions of industrial sensors, millions of smart cameras, billions of people sharing trillions of photos and billions of videos, and trillions of clicks on social media. Users upload 500 hours of video to YouTube every minute and more than 1.2 billion photos to Google Photos every day (Source: Wikipedia).
With cheap, powerful computing, an avalanche of training data, and a small army of AI-savvy researchers and developers, artificial intelligence is now poised to solve myriad problems and create many exciting new capabilities.
Artificial intelligence can solve a wide variety of problems, and considering all of its possible applications can be overwhelming. I've found it helpful to cluster AI applications into eight broad categories:
1. Machine vision
2. Natural language processing (NLP) and voice platforms
3. Exploration and discovery
4. Better-informed decision-making
5. Predicting the future
6. Seeing the world through a new lens with super sensors
7. Solving complex problems by learning from experience
8. Creating and co-creating content
In all eight application categories, AI is used to find patterns and associations in data and to make statistical predictions. Each application uses this fundamental capability in a different way. Apply the associative power of machine learning to images and you get machine vision; apply it to historical data and you get predictions; apply it to cursive text and you get handwriting recognition. In voice platforms, AIs trained on human speech determine what words you are saying. AIs trained on historical weather data make the predictions that inform the weather forecast.
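As a hedged illustration of this point, the sketch below, assuming scikit-learn and using its bundled digits dataset as a stand-in for handwriting, points a completely generic classifier at images of handwritten digits. Nothing in the classifier is specific to handwriting; pointed at weather records or speech features instead, the same machinery would yield forecasts or voice recognition.

```python
# The same generic pattern-finder, applied here to handwriting recognition.
# Assumes scikit-learn; load_digits() provides 8x8 images of the digits 0-9.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# No rules about stroke shapes are programmed in; the model learns
# statistical associations between pixel intensities and digit labels.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)
print(f"Digit recognition accuracy: {model.score(X_test, y_test):.2f}")
```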
Artificial intelligence finds important associations that we may not previously have discovered: the complex association between the molecular structure of a chemical compound and its physical properties, or the complex set of circumstances that lead to an outbreak of disease. This characteristic enables AI to solve problems that we don't yet know how to solve ourselves.
As we review each of the eight main applications of AI, think about how each one might impact your business, your life, and society at large.
Machine Vision: Computers Open Their Eyes
With the advent of artificial intelligence, machines have evolved eyes and ears. Computers can now see, hear, and “understand” something about the world that they inhabit. This understanding is still rudimentary. Computers can recognize an image of an apple and correctly categorize it with the five letters a-p-p-l-e, but they don't understand what an apple is, that it grew on a tree, or what it tastes like.
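As a concrete sketch of that apple example, and assuming the PyTorch and torchvision libraries plus a local image file named apple.jpg (assumptions for illustration, not part of the text), a pretrained image classifier can be queried in a few lines. Note that the output is only a category label; the network has no deeper notion of what an apple is.

```python
# A minimal sketch of image classification with a pretrained network.
# Assumes torchvision >= 0.13 and an image file "apple.jpg".
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # the resizing/normalization the model expects
image = preprocess(Image.open("apple.jpg")).unsqueeze(0)  # add batch dimension

with torch.no_grad():
    logits = model(image)
label = weights.meta["categories"][logits.argmax().item()]
print(label)  # e.g., "Granny Smith" -- a label, not understanding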
Machine vision has many interesting applications across a wide range of industries. Stocktaking robots audit shelf contents in grocery stores. Facial recognition algorithms turn faces into passwords. Agricultural robots spot-spray herbicide on weeds. Quality assurance AIs perform visual inspection on the manufacturing line. Autonomous machines—robots, drones, and self-driving vehicles—all rely on machine vision, too.