Pedro Domingos - The Master Algorithm - How the Quest for the Ultimate Learning Machine Will Remake Our World

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World: summary and description

Algorithms increasingly run our lives. They find books, movies, jobs, and dates for us, manage our investments, and discover new drugs. More and more, these algorithms work by learning from the trails of data we leave in our newly digital world. Like curious children, they observe us, imitate, and experiment. And in the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask.
Machine learning is the automation of discovery (the scientific method on steroids) that enables intelligent robots and computers to program themselves. No field of science today is more important yet more shrouded in mystery. Pedro Domingos, one of the field’s leading lights, lifts the veil for the first time to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He charts a course through machine learning’s five major schools of thought, showing how they turn ideas from neuroscience, evolution, psychology, physics, and statistics into algorithms ready to serve you. Step by step, he assembles a blueprint for the future universal learner, the Master Algorithm, and discusses what it means for you, and for the future of business, science, and society.
If data-ism is today’s rising philosophy, this book will be its bible. The quest for universal learning is one of the most significant, fascinating, and revolutionary intellectual developments of all time. A groundbreaking book, The Master Algorithm is the essential guide for anyone and everyone wanting to understand not just how the revolution will happen, but how to be at its forefront.

Chapter Four

Sebastian Seung’s Connectome (Houghton Mifflin Harcourt, 2012) is an accessible introduction to neuroscience, connectomics, and the daunting challenge of reverse engineering the brain. Parallel Distributed Processing,* edited by David Rumelhart, James McClelland, and the PDP research group (MIT Press, 1986), is the bible of connectionism in its 1980s heyday. Neurocomputing,* edited by James Anderson and Edward Rosenfeld (MIT Press, 1988), collates many of the classic connectionist papers, including: McCulloch and Pitts on the first models of neurons; Hebb on Hebb’s rule; Rosenblatt on perceptrons; Hopfield on Hopfield networks; Ackley, Hinton, and Sejnowski on Boltzmann machines; Sejnowski and Rosenberg on NETtalk; and Rumelhart, Hinton, and Williams on backpropagation. “Efficient backprop,”* by Yann LeCun, Léon Bottou, Genevieve Orr, and Klaus-Robert Müller, in Neural Networks: Tricks of the Trade, edited by Genevieve Orr and Klaus-Robert Müller (Springer, 1998), explains some of the main tricks needed to make backprop work.
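
None of these readings include code, but the backpropagation they describe is compact enough to sketch. The toy example below (hypothetical data, layer width, learning rate, and epoch count chosen purely for illustration, not taken from any of the sources above) trains a one-hidden-layer network on XOR by propagating the error signal backward through the layers:

```python
import numpy as np

# Toy backpropagation sketch: a one-hidden-layer network learning XOR.
# The data, layer width, learning rate, and epoch count are illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the squared-error signal back through each layer.
    err_out = (out - y) * out * (1 - out)      # gradient at output pre-activation
    err_hid = (err_out @ W2.T) * h * (1 - h)   # gradient at hidden pre-activation
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```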

Neural Networks in Finance and Investing,* edited by Robert Trippi and Efraim Turban (McGraw-Hill, 1992), is a collection of articles on financial applications of neural networks. “Life in the fast lane: The evolution of an adaptive vehicle control system,” by Todd Jochem and Dean Pomerleau (AI Magazine, 1996), describes the ALVINN self-driving car project. Paul Werbos’s PhD thesis is Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences* (Harvard University, 1974). Arthur Bryson and Yu-Chi Ho describe their early version of backprop in Applied Optimal Control* (Blaisdell, 1969).

Learning Deep Architectures for AI,* by Yoshua Bengio (Now, 2009), is a brief introduction to deep learning. The problem of error signal diffusion in backprop is described in “Learning long-term dependencies with gradient descent is difficult,”* by Yoshua Bengio, Patrice Simard, and Paolo Frasconi (IEEE Transactions on Neural Networks, 1994). “How many computers to identify a cat? 16,000,” by John Markoff (New York Times, 2012), reports on the Google Brain project and its results. Convolutional neural networks, the current deep learning champion, are described in “Gradient-based learning applied to document recognition,”* by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner (Proceedings of the IEEE, 1998). “The $1.3B quest to build a supercomputer replica of a human brain,” by Jonathon Keats (Wired, 2013), describes the European Union’s brain modeling project. “The NIH BRAIN Initiative,” by Thomas Insel, Story Landis, and Francis Collins (Science, 2013), describes the BRAIN initiative.
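
The error signal diffusion that Bengio, Simard, and Frasconi analyze is what is now usually called the vanishing-gradient problem: each saturating layer the error passes through multiplies it by a derivative smaller than one. A rough numerical sketch, with depth, width, and weight scale chosen arbitrarily for illustration:

```python
import numpy as np

# Illustrative vanishing-gradient demo: push an error signal backward through
# many sigmoid layers and watch its norm shrink. Depth, width, and weight
# scale are arbitrary choices made purely for illustration.
rng = np.random.default_rng(1)
width, depth = 10, 30
weights = [rng.normal(scale=0.5, size=(width, width)) for _ in range(depth)]

# Forward pass, saving each layer's activations.
activations = [rng.normal(size=width)]
for W in weights:
    activations.append(1.0 / (1.0 + np.exp(-(W @ activations[-1]))))

# Backward pass: start with a unit error at the top layer.
grad = np.ones(width)
for W, a in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * a * (1 - a))  # chain rule through sigmoid, then the weights
    print(f"gradient norm: {np.linalg.norm(grad):.2e}")  # typically decays toward zero
```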

Steven Pinker summarizes the symbolists’ criticisms of connectionist models in Chapter 2 of How the Mind Works (Norton, 1997). Seymour Papert gives his take on the debate in “One AI or Many?” (Daedalus, 1988). The Birth of the Mind, by Gary Marcus (Basic Books, 2004), explains how evolution could give rise to the human brain’s complex abilities.

Chapter Five

“Evolutionary robotics,” by Josh Bongard (Communications of the ACM, 2013), surveys the work of Hod Lipson and others on evolving robots. Artificial Life, by Steven Levy (Vintage, 1993), gives a tour of the digital zoo, from computer-created animals in virtual worlds to genetic algorithms. Chapter 5 of Complexity, by Mitch Waldrop (Touchstone, 1992), tells the story of John Holland and the first few decades of research on genetic algorithms. Genetic Algorithms in Search, Optimization, and Machine Learning,* by David Goldberg (Addison-Wesley, 1989), is the standard introduction to genetic algorithms.
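
For readers who want the flavor of the canonical genetic algorithm Goldberg describes, here is a minimal illustrative sketch: fitness-proportionate selection, single-point crossover, and mutation on the toy "one-max" problem (maximize the number of 1 bits). The parameters and the problem itself are arbitrary choices, not examples from the book:

```python
import random

# Minimal genetic algorithm on the one-max problem: evolve bit strings
# toward all ones. All parameters below are illustrative.
GENOME_LEN, POP_SIZE, GENERATIONS = 40, 60, 100
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)  # number of 1 bits

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Fitness-proportionate ("roulette wheel") selection of parents.
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=2 * POP_SIZE)
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                  for i in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best), "out of", GENOME_LEN)
```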

Niles Eldredge and Stephen Jay Gould propose their theory of punctuated equilibria in “Punctuated equilibria: An alternative to phyletic gradualism,” in Models in Paleobiology, edited by T. J. M. Schopf (Freeman, 1972). Richard Dawkins critiques it in Chapter 9 of The Blind Watchmaker (Norton, 1986). The exploration-exploitation dilemma is discussed in Chapter 2 of Reinforcement Learning,* by Richard Sutton and Andrew Barto (MIT Press, 1998). John Holland proposes his solution, and much else, in Adaptation in Natural and Artificial Systems* (University of Michigan Press, 1975).

John Koza’s Genetic Programming* (MIT Press, 1992) is the key reference on this paradigm. An evolved robot soccer team is described in “Evolving team Darwin United,”* by David Andre and Astro Teller, in RoboCup-98: Robot Soccer World Cup II, edited by Minoru Asada and Hiroaki Kitano (Springer, 1999). Genetic Programming III,* by John Koza, Forrest Bennett III, David Andre, and Martin Keane (Morgan Kaufmann, 1999), includes many examples of evolved electronic circuits. Danny Hillis argues that parasites are good for evolution in “Co-evolving parasites improve simulated evolution as an optimization procedure”* (Physica D, 1990). Adi Livnat, Christos Papadimitriou, Jonathan Dushoff, and Marcus Feldman propose that sex optimizes mixability in “A mixability theory of the role of sex in evolution”* (Proceedings of the National Academy of Sciences, 2008). Kevin Lang’s paper comparing genetic programming and hill climbing is “Hill climbing beats genetic search on a Boolean circuit synthesis problem of Koza’s”* (Proceedings of the Twelfth International Conference on Machine Learning, 1995). Koza’s reply is “A response to the ML-95 paper entitled…”* (unpublished; online at www.genetic-programming.com/jktahoe24page.html).

James Baldwin proposed the eponymous effect in “A new factor in evolution” (American Naturalist, 1896). Geoff Hinton and Steven Nowlan describe their implementation of it in “How learning can guide evolution”* (Complex Systems, 1987). The Baldwin effect was the theme of a 1996 special issue* of the journal Evolutionary Computation, edited by Peter Turney, Darrell Whitley, and Russell Anderson.

The distinction between descriptive and normative theories was articulated by John Neville Keynes in The Scope and Method of Political Economy (Macmillan, 1891).

Chapter Six

Sharon Bertsch McGrayne tells the history of Bayesianism, from Bayes and Laplace to the present, in The Theory That Would Not Die (Yale University Press, 2011). A First Course in Bayesian Statistical Methods,* by Peter Hoff (Springer, 2009), is an introduction to Bayesian statistics.

The Naïve Bayes algorithm is first mentioned in Pattern Classification and Scene Analysis,* by Richard Duda and Peter Hart (Wiley, 1973). Milton Friedman argues for oversimplified theories in “The methodology of positive economics,” which appears in Essays in Positive Economics (University of Chicago Press, 1966). The use of Naïve Bayes in spam filtering is described in “Stopping spam,” by Joshua Goodman, David Heckerman, and Robert Rounthwaite (Scientific American, 2005). “Relevance weighting of search terms,”* by Stephen Robertson and Karen Sparck Jones (Journal of the American Society for Information Science, 1976), explains the use of Naïve Bayes-like methods in information retrieval.
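
The spam filters discussed in these references are, at their core, word-level Naïve Bayes classifiers: sum the log-likelihoods of each word under "spam" and "ham", add the log prior, and pick the larger score. A hypothetical miniature version follows; the four-message corpus and the add-one (Laplace) smoothing are illustrative assumptions, not details from any of the papers:

```python
import math
from collections import Counter

# Minimal Naive Bayes spam-filter sketch. The toy corpus and add-one
# (Laplace) smoothing are illustrative choices, not taken from the sources.
training = [
    ("win money now", "spam"),
    ("cheap pills win", "spam"),
    ("meeting at noon", "ham"),
    ("project update attached", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in training:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    scores = {}
    for label in class_counts:
        # log P(label) + sum of log P(word | label), with add-one smoothing.
        log_prob = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            log_prob += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = log_prob
    return max(scores, key=scores.get)

print(classify("win cheap money"))       # expected: spam
print(classify("noon project meeting"))  # expected: ham
```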

“First links in the Markov chain,” by Brian Hayes (American Scientist, 2013), recounts Markov’s invention of the eponymous chains. “Large language models in machine translation,”* by Thorsten Brants et al. (Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007), explains how Google Translate works. “The PageRank citation ranking: Bringing order to the Web,”* by Larry Page, Sergey Brin, Rajeev Motwani, and Terry Winograd (Stanford University technical report, 1998), describes the PageRank algorithm and its interpretation as a random walk over the web. Statistical Language Learning,* by Eugene Charniak (MIT Press, 1996), explains how hidden Markov models work. Statistical Methods for Speech Recognition,* by Fred Jelinek (MIT Press, 1997), describes their application to speech recognition. The story of HMM-style inference in communication is told in “The Viterbi algorithm: A personal history,” by David Forney (unpublished; online at arxiv.org/pdf/cs/0504020v2.pdf). Bioinformatics: The Machine Learning Approach,* by Pierre Baldi and Søren Brunak (2nd ed., MIT Press, 2001), is an introduction to the use of machine learning in biology, including HMMs. “Engineers look to Kalman filtering for guidance,” by Barry Cipra (SIAM News, 1993), is a brief introduction to Kalman filters, their history, and their applications.
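
Of the algorithms cited here, PageRank is perhaps the easiest to sketch. Under its random-walk reading, a surfer follows a random outgoing link with probability d and jumps to a random page otherwise, and a page's score is its long-run visit probability. A minimal power-iteration sketch over a hypothetical four-page web (the link graph and damping factor below are illustrative, not from the paper):

```python
# Minimal PageRank sketch via power iteration over a hypothetical 4-page web.
# The link graph and damping factor are illustrative assumptions.
links = {           # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
d = 0.85                                   # damping factor
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    new_rank = {p: (1 - d) / len(pages) for p in pages}
    for page, outlinks in links.items():
        for target in outlinks:
            # Each page passes its rank evenly to the pages it links to.
            new_rank[target] += d * rank[page] / len(outlinks)
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```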
