Massimo Airoldi - Machine Habitus


Machine Habitus: summary and description


We commonly think of society as made of and by humans, but with the proliferation of machine learning and AI technologies, this is clearly no longer the case. Billions of automated systems tacitly contribute to the social construction of reality by drawing algorithmic distinctions between the visible and the invisible, the relevant and the irrelevant, the likely and the unlikely – on and beyond platforms.
Drawing on the work of Pierre Bourdieu, this book develops an original sociology of algorithms as social agents, actively participating in social life. Through a wide range of examples, Massimo Airoldi shows how society shapes algorithmic code, and how this culture in the code guides the practical behaviour of the code in the culture, shaping society in turn. The ‘machine habitus’ is the generative mechanism at work throughout myriads of feedback loops linking humans with artificial social agents, in the context of digital infrastructures and pre-digital social structures.

Machine Habitus — excerpt


In these years, the ancient dream of creating ‘thinking machines’ spread among a new generation of scientists, often affiliated with the MIT lab led by Professor Marvin Minsky, known as the ‘father’ of AI research (Natale and Ballatore 2020). Since the 1940s, the cross-disciplinary field of cybernetics had been working on the revolutionary idea that machines could autonomously interact with their environment and learn from it through feedback mechanisms (Wiener 1989). In 1957, the cognitive scientist Frank Rosenblatt designed and built a cybernetic machine called the Perceptron, the first operative artificial neural network, assembled as an analogue algorithmic system made of input sensors and resolved into a single dichotomic output – a light bulb that could be on or off, depending on the computational result (Pasquinelli 2017). Rosenblatt’s bottom-up approach to artificial cognition did not catch on in AI research. An alternative top-down approach, now known as ‘symbolic AI’ or ‘GOFAI’ (Good Old-Fashioned Artificial Intelligence), dominated the field in the following decades, up until the boom of machine learning. The ‘intelligence’ of GOFAI systems was formulated as a set of predetermined instructions capable of ‘simulating’ human cognitive performance – for instance by effectively playing chess (Fjelland 2020). Such a deductive, rule-based logic (Pasquinelli 2017) rests at the core of software programming, as exemplified by the conditional IF–THEN commands running in the back end of any computer application.
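
To make the contrast with later machine learning concrete, here is a minimal, purely illustrative Python sketch of this deductive, rule-based logic (a hypothetical toy example, not one taken from the book): the machine's 'intelligence' consists entirely of IF–THEN conditions written in advance by a programmer, and nothing is learned from data.

```python
# A hypothetical, toy rule-based "chess adviser" in the spirit of symbolic AI:
# all behaviour is specified in advance as explicit IF-THEN rules.

def opening_advice(position: str) -> str:
    if position == "initial position":        # IF the game has just started
        return "play e4"                       # THEN open with the king's pawn
    elif position == "opponent replied e5":    # IF the opponent mirrored the opening
        return "play Nf3"                      # THEN develop the kingside knight
    else:                                      # outside its hand-written rules,
        return "no rule applies"               # the system has nothing to say

print(opening_advice("initial position"))      # -> "play e4"
```

Everything such a system can do is contained in the rules themselves; this is the top-down logic that statistical, bottom-up machine learning would later invert.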

From the late 1970s, the development of microprocessors and the subsequent commercialization of personal computers fostered the popularization of computer programming. By entering people’s lives at work and at home – with videogames, word processors, statistical software and the like – computer algorithms were no longer the preserve of a few scientists working for governments, large companies and universities (Campbell-Kelly et al. 2013). The digital storage of information, as well as its grassroots creation and circulation through novel Internet-based channels (e.g. emails, Internet Relay Chats, discussion forums), translated into the availability of new data sources. The automated processing of large volumes of such ‘user-generated data’ for commercial purposes, inaugurated by the development of the Google search engine in the late 1990s, marked the transition toward a third era of algorithmic applications.

Platform Era (1998–)

The global Internet-based information system known as the World Wide Web was invented in 1989, and the first browser for web navigation was released to the general public two years later. Soon, the rapid multiplication of web content led to a pressing need for indexing solutions capable of overcoming the growing ‘information overload’ experienced by Internet users (Benkler 2006; Konstan and Riedl 2012). In 1998, Larry Page and Sergey Brin designed an algorithm able to ‘find needles in haystacks’, which then became the famous PageRank of Google Search (MacCormick 2012: 25). Building on graph theory and citation analysis, this algorithm measured the hierarchical relations among web pages based on hyperlinks. ‘Bringing order to the web’ through the data-driven identification of ‘important’ search results was the main goal of Page and colleagues (1999). With the implementation of PageRank, ‘the web is no longer treated exclusively as a document repository, but additionally as a social system’ (Rieder 2020: 285). Unsupervised algorithms, embedded in the increasingly modular and dynamic infrastructure of web services, started to be developed by computer scientists to automatically process, quantify and classify the social web (Beer 2009). As it became possible to extract and organize in large databases the data produced in real time by millions of consumers, new forms of Internet-based surveillance appeared (Arvidsson 2004; Zwick and Denegri Knott 2009). The development of the first automated recommender systems in the early 1990s led a few years later to a revolution in marketing and e-commerce (Konstan and Riedl 2012). Personalized recommendations aimed to predict consumer desires and assist purchasing choices (Ansari, Essegaier and Kohli 2000), with businesses being offered the promise of keeping their customers ‘forever’ (Pine II, Peppers and Rogers 1995). The modular socio-technical infrastructures of commercial platforms such as Google, Amazon and, beginning in the mid 2000s, YouTube, Facebook and Twitter, lie at the core of this historical transition toward the datafication and algorithmic ordering of economy and society (Mayer-Schoenberger and Cukier 2013; van Dijck 2013; Zuboff 2019).
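
As a rough illustration of the idea behind PageRank (a simplified sketch, not Google's actual implementation, run here on a hypothetical toy web), the following Python snippet iteratively redistributes 'importance' across pages: a page is ranked highly when it receives links from pages that are themselves highly ranked.

```python
# Simplified PageRank by power iteration (illustrative only).
# links: dict mapping each page to the list of pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal importance
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                     # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                # otherwise split its rank among targets
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical four-page web: "C" is linked to by several pages and ends up ranked highest.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(toy_web))
```

In the real system the graph has billions of nodes and the link-based score is only one signal among many, but the underlying logic of ordering the web through its own hyperlink structure is the one sketched above.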

Digital platforms are at once the main context of application of these autonomous machines and the ultimate source of their intelligence. Platformization has been identified as one of the causes of the current ‘eternal spring’ of AI research, since it has finally provided the enormous amounts of data – such as users’ profile pictures, online transactions or social media posts – and the real-time feedback needed to train machine learning models (Helmond 2015). Together with the development of faster and higher-performing computers, this access to ‘big’ and relatively inexpensive data made possible the breakthrough of ‘deep learning’ in the 2010s (Kelleher 2019). As Mühlhoff notes (2020: 1869), most industrial AI implementations ‘come with extensive media infrastructure for capturing humans in distributed, human-machine computing networks, which as a whole perform the intelligence capacity that is commonly attributed to the computer system’. Hence, it is not by chance that the top players in the Internet industry, in the US as in China, have taken the lead in the AI race. In 2016, Joaquin Candela, director of the Facebook Applied Machine Learning Group, declared: ‘we’re trying to build more than 1.5 billion AI agents – one for every person who uses Facebook or any of its products’ (Higginbotham 2016, cited in A. Mackenzie 2019: 1995).

Furthermore, while in the Digital Era algorithms were commercially used mainly for analytical purposes, in the Platform Era they also became ‘operational’ devices (A. Mackenzie 2018). Logistic regressions such as those run in SPSS by statisticians in the 1980s could now be operationally embedded in a platform infrastructure and fed with thousands of data ‘features’ in order to autonomously filter the content presented to single users based on adaptable, high-dimensional models (Rieder 2020). The computational implications of this shift have been described by Adrian Mackenzie as follows:

if conventional statistical regression models typically worked with 10 different variables […] and perhaps sample sizes of thousands, data mining and predictive analytics today typically work with hundreds and in some cases tens of thousands of variables and sample sizes of millions or billions. The difference between classical statistics, which often sought to explain associations between variables, and machine learning, which seeks to explore high-dimensional patterns, arises because vector spaces juxtapose almost any number of features. (Mackenzie 2015: 434)
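
A minimal sketch of this shift, using scikit-learn and synthetic data (my illustration, not Mackenzie's or the book's example): the same logistic regression a statistician might once have run in SPSS on a handful of variables is here fitted on hundreds of behavioural 'features' and then used operationally, to rank candidate items for an individual user by predicted probability of engagement.

```python
# Logistic regression used operationally rather than analytically (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 500))           # 10,000 logged interactions, 500 features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10_000)) > 0   # synthetic 'clicked' labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# Operational use: score a batch of candidate items for one user and rank them
# by predicted probability of engagement, as a platform recommender might.
candidates = rng.normal(size=(20, 500))
scores = model.predict_proba(candidates)[:, 1]
ranking = np.argsort(scores)[::-1]
print(ranking[:5])                            # indices of the top five candidate items
```

The point is not the estimator itself, which is decades old, but its embedding: the model is retrained and queried continuously inside the platform infrastructure, filtering what each user sees rather than explaining an association after the fact.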

Advanced AI models built using artificial neural networks are now used in chatbots, self-driving cars and recommendation systems, and have enabled the recent expansion of fields such as pattern recognition, machine translation or image generation. In 2015, an AI system developed by the Google-owned company DeepMind was the first to win against a professional player at the complex game of Go. On the one hand, this landmark was a matter of increased computing power. On the other hand, it was possible thanks to the aforementioned qualitative shift from a top-down artificial reasoning based on ‘symbolic deduction’ to a bottom-up ‘statistical induction’ (Pasquinelli 2017). AlphaGo – the machine’s name – learned how to play the ancient board game largely on its own, by ‘attempting to match the moves of expert players from recorded games’ (Chen 2016: 6). Far from mechanically executing tasks, current AI technologies can learn from (datafied) experience, a bit like human babies. And as with human babies, once thrown into the world, these machine learning systems are no less than social agents, who shape society and are shaped by it in turn.

