Helga Nowotny - In AI We Trust


In AI We Trust: summary and annotation


One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.
At the heart of our trust in AI lies a paradox: we leverage AI to increase our control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.
As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to understand better the limitations of AI and how their predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future.

In AI We Trust — excerpt



Another thread is interwoven with the sense of direction that takes its inspiration from a remarkable human discovery: the idea of the future as an open horizon, full of as yet unimaginable possibilities and inherently uncertain. The open horizon extends into the vast space of what is yet unknown, pulsating with the dynamics of what is possible. Human creativity is ready to explore it, with science and art at the forefront. It is this conception of the future which is at stake when predictive algorithms threaten to fill the present with their apparent certainty, and when human behaviour begins to conform to these predictions.

The larger frame of this book is set by a co-evolutionary trajectory on which humankind has embarked together with the digital machines it has invented and deployed. Co-evolution means that a mutual interdependence is in the making, with flexible adaptations on both sides. Digital beings or entities like the robots created by us are mutating into our significant Others. We have no clue where this journey will lead or how it will end. However, in the long course of human evolution, it is possible that we have become something akin to a self-domesticating species that has learned to value cooperation and, at least to some extent, decrease its potential for aggression. That capacity for cooperation could now extend to digital machines. We have already reached the point of starting to believe that the algorithm knows us better than we know ourselves. It then comes to be seen as a new authority to guide the self, one that knows what is good for us and what the future holds.

The road ahead: how to live forward and understand life backwards

Scientific predictions are considered the hallmark of modern science. Notably, physics advances by inventing new theoretical concepts and the instruments to test predictions derived from them. The computational revolution that began in the middle of the last century has been boosted by the vastly increased computational power and Deep Learning methods that took off in the twenty-first century. Together with access to an unprecedented and still growing amount of data, these developments have extended the power of predictions and their applicability across an enormous range of natural and social phenomena. Scientific predictions are no longer confined to science.

Since then, predictive analytics has become highly profitable for the economy and has pervaded the entire social fabric. The operation of algorithms underlies the functioning of technological products that have disrupted business models and created new markets. Harnessed by the marketing and advertising industry, instrumentalized by politicians seeking to maximize votes, and quickly adopted by the shadowy world of secret services, hackers and fraudsters exploiting the anonymity of the internet, predictive analytics has convinced consumers, voters and health-conscious citizens that these powerful digital instruments are there to serve our needs and latent desires.

Much of their successful spread and eager adoption is due to the fact that the power of predictive algorithms is performative. An algorithm has the capability to make happen what it predicts when human behaviour follows the prediction. Performativity means that what is enacted, pronounced or performed can affect action, as shown in the pioneering work on the performativity of speech acts and non-verbal communication by J. L. Austin, Judith Butler and others. Another well-known social phenomenon is captured in the Thomas theorem – ‘If men define situations as real, they are real in their consequences’ – dating back to 1928 and later reformulated by Robert K. Merton in terms of self-fulfilling prophecy. The time has come to acknowledge what sociologists have long since known and apply it also to predictive algorithms.

The propensity of people to orient themselves in relation to what others do, especially in unexpected or threatening circumstances, enhances the power of predictive algorithms. It magnifies the illusion of being in control. But if the instrument gains the upper hand over understanding, we lose the capacity for critical thinking. We end up trusting the automatic pilot while flying blindly in the fog. There are, however, situations in which it is crucial to deactivate the automatic pilot and exercise our own judgement as to what to do.

When visualizing the road ahead, I see a situation where we have created a highly efficient instrument that allows us to follow and foresee the evolving dynamics of a wide range of phenomena and activities, but where we largely fail to understand the causal mechanisms that underlie them. We rely increasingly on what predictive algorithms tell us, especially when institutions begin to align with their predictions, often unaware of the unintended consequences that will follow. We trust not only the performative power of predictive analytics but also that it knows which options to lay out for us, again without considering who has designed these options and how, or that there might be other options equally worth considering.

At the same time, distrust of AI creeps in and the concerns grow. Some of them, like the fears about surveillance or the future of work, are well known and widely discussed. Others are not so obvious. When self-fulfilling prophecies begin to proliferate, we risk returning to a deterministic worldview in which the future appears as predetermined and hence closed. The space vital to imagining what could be otherwise begins to shrink. The motivation as well as the ability to stretch the boundaries of imagination is curtailed. To rely purely on the efficiency of prediction obscures the need for understanding why and how. The risk is that everything we treasure about our culture and values will atrophy.

Moreover, in a world governed by predictive analytics there is neither a place nor any longer the need for accountability. When political power becomes unaccountable to those over whom it is exercised, we risk the destruction of liberal democracy. Accountability rests on a basic understanding of cause and effect. In a democracy, this is framed in legal terms and is an integral part of democratically legitimated institutions. If this is no longer guaranteed, surveillance becomes ubiquitous. Big data gets even bigger and data is acquired without understanding or explanation. We become part of a fine-tuned and interconnected predictive system that is dynamically closed upon itself. The human ability to teach to others what we know and have experienced begins to resemble that of a machine that can teach itself and invent the rules. Machines have neither empathy nor a sense of responsibility. Only humans can be held accountable and only humans have the freedom to take on responsibility.

Luckily, we have not arrived at this point as yet. We can still ask: Do we really want to live in an entirely predictable world in which predictive analytics invades and guides our innermost thoughts and desires? This would mean renouncing the inherent uncertainty of the future and replacing it with the dangerous illusion of being in control. Or are we ready to acknowledge that a fully predictable world is never achievable? Then we would have to muster the courage to face the danger that a falsely perceived deterministic world implies. This book has been written as an argument against the illusion of a wholly predictable world and for the courage – and wisdom – needed to live with uncertainty.

Obviously, my journey does not end there. ‘Life can only be understood backwards, but it must be lived forward.’ This quotation from Søren Kierkegaard awaits an interpretation in relation to our movements between online and offline worlds, between the virtual self, the imagined self and the ‘real’ self. How does one live forward under these conditions, given their opportunities and constraints? The quotation implies a disjunction between Life as an abstraction that transcends the personal, and living as the conscious experience that fills every moment of our existence. With the stupendous knowledge we now have about Life in all its diversity, forms and levels, about its origins in the deep past and its continued evolution, is not now the moment to bring this knowledge to bear on how to live forward? The human species has overtaken biological evolution whose product we still are. Science and technology have enabled us to move forward at accelerating speed along the pathways of a cultural evolution that we are increasingly able to shape.


