Massimo Airoldi - Machine Habitus


We commonly think of society as made of and by humans, but with the proliferation of machine learning and AI technologies, this is clearly no longer the case. Billions of automated systems tacitly contribute to the social construction of reality by drawing algorithmic distinctions between the visible and the invisible, the relevant and the irrelevant, the likely and the unlikely – on and beyond platforms.
Drawing on the work of Pierre Bourdieu, this book develops an original sociology of algorithms as social agents, actively participating in social life. Through a wide range of examples, Massimo Airoldi shows how society shapes algorithmic code, and how this culture in the code guides the practical behaviour of the code in the culture, shaping society in turn. The ‘machine habitus’ is the generative mechanism at work throughout myriads of feedback loops linking humans with artificial social agents, in the context of digital infrastructures and pre-digital social structures.

Figure 1. Algorithms: a conceptual map, from Euclid to AlphaGo

Critical algorithm studies

When algorithms started to be applied to the digital engineering of the social world, only a few sociologists took notice (Orton-Johnson and Prior 2013). In the early 2000s, the sociological hype about the (then new) social networking sites, streaming platforms and dating services was largely about the possible emancipatory outcomes of an enlarged digital connectivity, the disrupting research potential of big data, and the narrowing divide between ‘real’ and ‘virtual’ lives (Beer 2009). However, at the periphery of academic sociology, social scientists working in fields like software studies, anthropology, philosophy, cultural studies, geography, Internet studies and media research were beginning to theorize and investigate the emergence of a new ‘algorithmic life’ (Amoore and Piotukh 2016). In the past decade, this research strand has grown substantially, disrupting disciplinary borders and setting the agenda of important academic outlets. Known as ‘critical algorithm studies’ (Gillespie and Seaver 2016), it proposes multiple sociologies of algorithms which tackle various aspects of the techno-social data assemblages behind AI technologies.

A major part of this critical literature has scrutinized the production of the input of automated calculations, that is, the data. Critical research on the mining of data through digital forms of surveillance (Brayne 2017; van Dijck 2013) and labour (Casilli 2019; Gandini 2020) has illuminated the extractive and ‘panopticist’ character of platforms, Internet services and connected devices such as wearables and smartphones (see Lupton 2020; Ruckenstein and Granroth 2020; Arvidsson 2004). Cheney-Lippold (2011, 2017) developed the notion of ‘algorithmic identity’ in order to study the biopolitical implications of web analytics firms’ data harnessing, aimed at computationally predicting who digital consumers are. Similar studies have also been conducted in the field of critical marketing (Cluley and Brown 2015; Darmody and Zwick 2020; Zwick and Denegri-Knott 2009). Furthermore, a number of works have questioned the epistemological grounds of ‘big data’ approaches, highlighting how the automated and decontextualized analysis of large datasets may ultimately lead to inaccurate or biased results (boyd and Crawford 2012; O’Neil 2016; Broussard 2018). The proliferation of metrics and the ubiquity of ‘datafication’ – that is, the transformation of social action into online quantified data (Mayer-Schoenberger and Cukier 2013) – have been indicated as key features of today’s capitalism, which is seen as increasingly dependent on the harvesting and engineering of consumers’ lives and culture (Zuboff 2019; van Dijck, Poell and de Waal 2018).

As STS research did decades earlier with missiles and electric bulbs (MacKenzie and Wajcman 1999), critical algorithm studies have also explored how algorithmic models and their data infrastructures are developed, manufactured and narrated, often with the aim of making these opaque ‘black boxes’ accountable (Pasquale 2015). The ‘anatomy’ of AI systems is the subject of the original work of Crawford and Joler (2018), at the crossroads of art and research. Taking Amazon Echo – the consumer voice-enabled AI device featuring the popular interface Alexa – as an example, the authors show how even the most banal human–device interaction ‘requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data’ (Crawford and Joler 2018: 2). Behind the capacity of Amazon Echo to hear, interpret and efficiently respond to users’ commands, there is not only a machine learning model in a constant process of optimization, but also a wide array of accumulated scientific knowledge, natural resources such as the lithium and cobalt used in batteries, and labour exploited in the mining of both rare metals and data. Several studies have looked more closely into the genesis of specific platforms and algorithmic systems, tracing their historical evolution and practical implementation while simultaneously unveiling the cultural and political assumptions inscribed in their technicalities (Rieder 2017; D. MacKenzie 2018; Helmond, Nieborg and van der Vlist 2019; Neyland 2019; Seaver 2019; Eubanks 2018; Hallinan and Striphas 2016; McKelvey 2018; Gillespie 2018). Furthermore, since algorithms are also cultural and discursive objects (Beer 2017; Seaver 2017; Bucher 2017; Campolo and Crawford 2020), researchers have investigated how they are marketed and – as often happens – mythicized (Natale and Ballatore 2020; Neyland 2019). This literature shows how the fictitious representation of calculative devices as necessarily neutral, objective and accurate in their predictions is ideologically rooted in the techno-chauvinistic belief that ‘tech is always the solution’ (Broussard 2018: 7).

A considerable amount of research has also asked how and to what extent the output of algorithmic computations – automated recommendations, micro-targeted ads, search results, risk predictions, etc. – controls and influences citizens, workers and consumers. Many critical scholars have argued that the widespread delegation of human choices to opaque algorithms results in a limitation of human freedom and agency (e.g. Pasquale 2015; Mackenzie 2006; Ananny 2016; Beer 2013a, 2017; Ziewitz 2016; Just and Latzer 2017). Building on the work of Lash (2007) and Thrift (2005), the sociologist David Beer (2009) suggested that online algorithms not only mediate but also ‘constitute’ reality, becoming a sort of ‘technological unconscious’, an invisible force orienting Internet users’ everyday lives. Other contributions have similarly portrayed algorithms as powerful ‘engines of order’ (Rieder 2020), such as Taina Bucher’s research on how Facebook ‘programmes’ social life (2012a, 2018). Scholars have examined the effects of algorithmic ‘governance’ (Ziewitz 2016) in a number of research contexts, by investigating computational forms of racial discrimination (Noble 2018; Benjamin 2019), policy algorithms and predictive risk models (Eubanks 2018; Christin 2020), as well as ‘filter bubbles’ on social media (Pariser 2011; see also Bruns 2019). The political, ethical and legal implications of algorithmic power have been discussed from multiple disciplinary angles, and with a varying degree of techno-pessimism (see for instance Beer 2017; Floridi et al. 2018; Ananny 2016; Crawford et al. 2019; Campolo and Crawford 2020).

Given the broad critical literature on algorithms, AI and their applications – which goes well beyond the references mentioned above (see Gillespie and Seaver 2016) – one might ask why an all-encompassing sociological framework for researching intelligent machines and their social implications should be needed. My answer builds on a couple of questions which remain open, and on the understudied feedback loops lying behind them.

Open questions and feedback loops

The notion of ‘feedback loop’ is widely used in biology, engineering and, increasingly, in popular culture: if the outputs of a technical system are routed back as inputs, the system ‘feeds back’ into itself. Norbert Wiener – the founder of cybernetics – defines feedback as ‘the property of being able to adjust future conduct by past performance’ (1989: 33). According to Wiener, feedback mechanisms based on the measurement of performance make learning possible, both in the animal world and in the technical world of machines – even when these are as simple as an elevator (1989: 24). This intuition turned out to be crucial for the subsequent development of machine learning research. However, how feedback processes work in socio-cultural contexts is less clear, especially when these involve both humans and autonomous machines. While mid-twentieth-century cyberneticians like Wiener saw the feedback loop essentially as a mechanism of control producing stability within complex systems, they ‘did not quite foresee its capacity to generate emergent behaviours’ (Amoore 2019: 11). In the words of the literary theorist Katherine Hayles: ‘recursivity could become a spiral rather than a circle’ (2005: 241, cited in Amoore 2019).
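Wiener's definition – outputs routed back as inputs, with future conduct adjusted by past performance – can be made concrete in a toy simulation. The sketch below is illustrative only and is not from the book; all names and parameters are assumptions. It models a popularity-based recommender: each round, the system recommends the currently most-viewed item, most simulated users follow the recommendation, and the resulting view feeds back into the next round's input.

```python
import random

def run_feedback_loop(counts, rounds, follow_prob=0.9, rng=None):
    """Simulate a recommender whose output (the recommendation)
    is routed back as input (a new view count)."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    counts = list(counts)
    for _ in range(rounds):
        # The system recommends the currently most-viewed item ...
        recommended = counts.index(max(counts))
        # ... and most users follow it, feeding the loop; a minority
        # pick at random, which alone cannot reverse the dynamic.
        if rng.random() < follow_prob:
            choice = recommended
        else:
            choice = rng.randrange(len(counts))
        counts[choice] += 1
    return counts

# A small initial head start spirals into dominance
# rather than averaging out.
start = [12, 10, 10]
end = run_feedback_loop(start, rounds=1000)
```

Running this, the item that began two views ahead absorbs the vast majority of new views: the loop amplifies an initially tiny difference instead of stabilizing it. In Hayles's terms, the recursion traces a spiral rather than a circle – the kind of emergent behaviour that, as Amoore notes, the mid-century cyberneticians did not quite foresee.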
