K-means is one type of clustering algorithm, in which each data point is assigned to the cluster whose mean (centroid) it is closest to. Kernel density estimation is a density estimation technique that uses small groups of closely related data points to estimate an underlying distribution.
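A minimal sketch of both techniques follows; scikit-learn and the synthetic two-group data are assumptions made for illustration, since the text does not name any specific tooling.

```python
# Sketch: K-means clustering and kernel density estimation with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(seed=0)
# Two loose groups of 2-D points standing in for unlabeled input data.
data = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
    rng.normal(loc=5.0, scale=1.0, size=(100, 2)),
])

# K-means: each point is assigned to the cluster whose mean is nearest.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("Cluster means:\n", kmeans.cluster_centers_)

# Kernel density estimation: nearby points contribute to a smooth estimate
# of the distribution; score_samples returns the log-density at each point.
kde = KernelDensity(kernel="gaussian", bandwidth=0.75).fit(data)
print("Log-density at the origin:", kde.score_samples(np.array([[0.0, 0.0]])))
```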
In the book Artificial Intelligence: A Modern Approach, 3rd edition (Pearson Education India, 2015), Stuart Russell and Peter Norvig describe unsupervised learning as the ability of a model to learn patterns in the input without any explicit feedback.
The most common unsupervised learning task is clustering: detecting potentially useful clusters of input examples. For example, a taxi agent might gradually develop a concept of “good traffic days” and “bad traffic days” without ever being given labeled examples of each by a teacher.
Reinforcement learning uses feedback as an aid in determining what to do next. In the taxi example, receiving or not receiving a tip along with the fare at the completion of a ride signals whether the ride went well or badly.
The main statistical inference techniques for model learning are induction, deduction, and transduction. Inductive learning, the most common approach in machine learning, reasons bottom-up, using observed data as evidence for a general outcome. Deductive inference reasons top-down and requires that each premise is met before a conclusion is drawn. Transduction refers to predicting specific examples directly from other specific examples in a domain, without first forming a general model.
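The contrast between induction and transduction can be illustrated with a small sketch; the library choice (scikit-learn) and the toy one-dimensional data are assumptions made purely for illustration.

```python
# Sketch: induction (fit a general rule, apply it to unseen inputs) versus
# transduction (infer labels only for specific, already-supplied examples).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import LabelSpreading

X = np.array([[0.0], [0.2], [0.4], [2.0], [2.2], [2.4]])
y = np.array([0, 0, 0, 1, 1, 1])

# Induction: learn a general rule from labeled evidence, then apply it
# to examples that were never seen during training.
inductive = LogisticRegression().fit(X, y)
print(inductive.predict(np.array([[0.1], [2.1]])))

# Transduction: the unlabeled points of interest (marked -1) are supplied
# up front, and labels are inferred only for those specific examples.
y_partial = np.array([0, -1, 0, 1, -1, 1])
transductive = LabelSpreading(kernel="knn", n_neighbors=3).fit(X, y_partial)
print(transductive.transduction_)
```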
Other learning techniques include multitask learning, active learning, online learning, transfer learning, and ensemble learning. Multitask learning aims "to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks" (arxiv.org/pdf/1707.08114.pdf). With active learning, the learning process aims "to ease the data collection process by automatically deciding which instances an annotator should label to train an algorithm as quickly and effectively as possible" (papers.nips.cc/paper/7010-learning-active-learning-from-data.pdf). Online learning "is helpful when the data may be changing rapidly over time. It is also useful for applications that involve a large collection of data that is constantly growing, even if changes are gradual" (Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd edition, Pearson Education India, 2015).
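A minimal sketch of online learning appears below; scikit-learn's incremental SGDClassifier and the synthetic stream of drifting batches are assumptions chosen only to show the incremental-update pattern.

```python
# Sketch: online learning, where the model is updated batch by batch
# as new data arrives, without revisiting earlier batches.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=1)
model = SGDClassifier()

for step in range(10):
    # Each batch stands in for freshly collected, gradually drifting data.
    X_batch = rng.normal(size=(32, 4))
    y_batch = (X_batch[:, 0] + 0.1 * step > 0).astype(int)
    # partial_fit updates the existing model incrementally.
    model.partial_fit(X_batch, y_batch, classes=[0, 1])

print(model.predict(rng.normal(size=(3, 4))))
```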
The variety of opportunities to apply machine learning is extensive, and that breadth helps explain why so many different modes of learning are necessary:
Advertisement serving
Business analytics
Call centers
Computer vision
Companionship
Creating prose
Cybersecurity
Ecommerce
Education
Finance, algorithmic trading
Finance, asset allocation
First responder rescue operations
Fraud detection
Law
Housekeeping
Elderly care
Manufacturing
Mathematical theorems
Medicine/surgery
Military
Music composition
National security
Natural language understanding
Personalization
Policing
Politics
Recommendation engines
Robotics, consumer
Robotics, industry
Robotics, military
Robotics, outer space
Route planning
Scientific discovery
Search
Smart homes
Speech recognition
Translation
Unmanned vehicles (drones, cars, ambulances, trains, ships, submarines, planes, etc.)
Virtual assistants
Evaluating how well a model has learned can follow a five-point rubric (a rough mapping is sketched after the list).
Phenomenal: It's not possible to do any better.
Crazy good: Outcomes are better than what any individual could achieve.
Super-human: Outcomes are better than what most people could achieve.
Par-human: Outcomes are comparable to what most people could achieve.
Sub-human: Outcomes are worse than what most people could achieve.
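The mapping below is a rough illustration of applying the rubric, assuming the task has a measurable score and that human baselines are available for comparison; the function name and the sample figures are hypothetical.

```python
# Sketch: mapping a measured model score to the five-point rubric above.
def rubric_level(model_score, best_individual, typical_person,
                 theoretical_max, tolerance=0.01):
    """Return the rubric level for a model's measured score."""
    if model_score >= theoretical_max:
        return "Phenomenal"      # not possible to do any better
    if model_score > best_individual:
        return "Crazy good"      # better than any individual
    if abs(model_score - typical_person) <= tolerance:
        return "Par-human"       # comparable to most people
    if model_score > typical_person:
        return "Super-human"     # better than most people
    return "Sub-human"           # below what most people achieve

# Hypothetical accuracy figures, purely for illustration.
print(rubric_level(0.97, best_individual=0.95,
                   typical_person=0.88, theoretical_max=1.0))
```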
Toward the AI-Centric Organization
As with the industrial age and then the information age, the age of AI is an advancement in tooling to help solve or address business problems. Driven by necessity, organizations are going to use AI to aid with automation and optimization. To support data-driven cultures, AI must also be used to predict and to diagnose. AI-centric organizations must revisit all aspects of their being, from strategy to structure and from technology to egos.
Before becoming AI-centric, organizations must first identify their problems, examine their priorities, and decide where to begin. While AI is best suited to detecting outcomes that follow a pattern, traditional business rules are not going to disappear. To be AI-centric is to understand which aspects of the business can best be addressed through patterns. Knowing how much tax to pay is never going to be a pattern; a tax calculation is always going to be rule-based.
There are always going to be situations where a decision or action requires a combination of pattern-based and rule-based outcomes, as sketched below. In much the same way, a person may leverage AI algorithms in conjunction with other analytical techniques.
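The sketch that follows shows one way the two can be combined; the tax brackets, the anomaly_score stand-in, and the review threshold are hypothetical placeholders, not real tax rules or a real trained model.

```python
# Sketch: a deterministic rule computes the tax, a pattern-based score
# decides whether the filing looks unusual enough to flag for review.
def rule_based_tax(income):
    """Rule-based: the same input always yields the same tax."""
    brackets = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        taxable = max(0.0, min(income, upper) - lower)
        tax += taxable * rate
        lower = upper
    return tax

def anomaly_score(filing):
    """Pattern-based stand-in: in practice this would be a trained model."""
    expected = rule_based_tax(filing["income"])
    return abs(filing["reported_tax"] - expected) / max(expected, 1.0)

def review_filing(filing, threshold=0.15):
    """Combine both: the rule sets the amount, the pattern flags outliers."""
    return {
        "tax_due": rule_based_tax(filing["income"]),
        "needs_review": anomaly_score(filing) > threshold,
    }

print(review_filing({"income": 55_000, "reported_tax": 4_000}))
```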
Organizations that avoid or delay AI adoption will, in a worst-case scenario, become obsolete. The changing needs of an organization coupled with the use of AI are going to necessitate an evolution in jobs and skillsets needed. As previously stated, every single job is likely to be impacted in one way or another. Structural changes across industries will lead to new-collar workers spending more of their time on activities regarded as driving higher value.
Employees are likely to demand continuous skill development to remain competitive and relevant. As with any technological shift, AI may, for many years, be subject to scrutiny and debate. Concerns about widening economic divides, personal privacy, and ethical use are not always unfounded, but the potential for consistently providing a positive experience cannot be dismissed. Using a suitable information architecture for AI is likely to be regarded as a high-order imperative for consistently producing superior outcomes.
On occasion, we are likely to have experienced a gut feeling about a situation. We have a sensation in the pit of our stomach that we know what we must do next, that something is right, or that something is about to go awry. Invariably, this feeling is not backed by data.
Gene Kranz was the flight director in NASA's Mission Control room during the Apollo 13 mission in 1970. As flight director, he made a number of gut-feel decisions that allowed the lunar module to return safely to Earth after a significant malfunction. This is why we regard AI as augmenting the knowledge worker and not an outright replacement. Some decisions require a broader context for decision-making; even when a decision is a gut feel, it is still likely to stem from years of practical experience.
For many businesses, the sheer scale of their operations already means that each decision can't be debated between man and machine to reach a final outcome. Scale, and not the need to find a replacement for repetitive tasks, is the primary driving factor toward needing to build the AI-centric organization.
Through climbing the ladder, organizations will develop practices for data science and be able to harness machine learning and deep learning as part of their enhanced analytical toolkit.
Data science is a discipline, in that the data scientist must be able to leverage and coordinate multiple skills to achieve an outcome, such as domain expertise, a deep understanding of data management, math skills, and programming. Machine learning and deep learning, on the other hand, are techniques that can be applied via the discipline. They are techniques insofar as they are optional tools within the data science toolkit.