Helga Nowotny - In AI We Trust


One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.
At the heart of our trust in AI lies a paradox: we leverage AI to increase our control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.
As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to understand better the limitations of AI and how its predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future.


Yet, despite all these observations and analyses, a gap remained between the global scale on which these processes unfolded and my personal life which, fortunately, continued without major perturbations. Even the local impacts were being played out either in far-away places or remained local in the sense that they were soon to be overtaken by other local events. Most of us are cognizant that these major societal transformations will have huge impacts and numerous unintended consequences; and yet, they remain on a level of abstraction that is so overwhelming it is difficult to grasp intellectually in all its complexity. The gap between knowing and acting, between personal insight and collective action, between thinking at the level of the individual and thinking about institutions globally, appears to shield us from the immediate impact that these far-reaching changes will have.

Finally, it struck me that there exists an entry point that allows me to connect curiosity-driven and rigorous scientific inquiry with personal experience and intuition about what is at stake: the increasingly important role played by prediction, in particular by predictive algorithms and analytics. Prediction, obviously, is about the future, yet it reacts back on how we conceive the future in the present. When applied to complex systems, prediction faces the non-linearity of processes. In a non-linear system, changes in input are no longer proportional to changes in output. This is the reason why such systems appear as unpredictable or chaotic. Here we are: we want to expand the range of what can be reliably predicted, yet we also realize that complex systems defy the linearity that still underpins so much of our thinking, perhaps as a heritage of modernity.

The behaviour of complex systems is difficult for us to grasp and often appears counter-intuitive. It is exemplified by the famous butterfly effect, where the sensitive dependence on initial conditions can result in large differences at a later stage, as when the flapping of a butterfly’s wings in the Amazon leads to a tornado making landfall in Texas. But such metaphors are not always at hand, and I began to wonder whether we are even able to think in non-linear ways. Predictions about the behaviour of dynamic complex systems often come in the garb of mathematical equations embedded in digital technologies. Simulation models do not speak directly to our senses. Their outcome and the options they produce need to be interpreted and explained. Since they are perceived as being scientifically objective, they are often not questioned any further. But then predictions assume the power of agency that we attribute to them. If blindly followed, the predictive power of algorithms turns into a self-fulfilling prophecy – a prediction becomes true simply because people believe in it and act accordingly.
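The sensitive dependence on initial conditions invoked by the butterfly effect can be made concrete with a standard toy model from the dynamics literature, the logistic map at parameter r = 4 (an illustrative choice of mine, not an example drawn from the text). Two trajectories that start almost identically soon bear no resemblance to one another:

```python
# Sensitive dependence on initial conditions in the logistic map,
# a minimal example of non-linear (chaotic) dynamics: a change in
# input is not proportional to the change in output.
def logistic_map(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.200000)
b = logistic_map(0.200001)  # initial condition shifted by one millionth

# Early on the two trajectories are indistinguishable; within a few
# dozen iterations the tiny initial difference has been amplified
# until the trajectories are effectively unrelated.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

The tiny perturbation roughly doubles with each iteration, which is why no refinement of measurement precision buys more than a few extra steps of reliable prediction, the formal core of the butterfly metaphor.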

So, I set out to bridge the divide between the personal, in this case the predictions we experience as being addressed to us as individuals, and the collective as represented by complex systems. We are familiar and at ease with messages and forms of communication at the inter-personal level, while, unless we adopt a professional and scientific stance, we experience everything connected with a system as an external, impersonal force that impinges on us. Might it not be, I wondered, that we are so easily persuaded to trust a predictive algorithm because it reaches us on a personal level, while we distrust the digital system, whatever we mean by it or associate with it, because it is perceived as impersonal?

In science, we speak about different levels, organized in hierarchical ways, with each level following its own rules or laws. In the social sciences, including economics, the gap persists in the form of a micro-level and macro-level divide. But none of the epistemological considerations that follow seemed to provide what I was looking for: a way of seeing across these divides, either by switching perspectives or, much more challenging, by trying to find a pluri-perspectival angle that would allow me access to both levels. I have therefore tried to find a way to combine the personal and the impersonal, the effect of predictive algorithms on us as individuals and the effects that digitalization has on us as societies.

Although most of this book was written before a new virus wreaked havoc around the globe, exacerbated by the uncoordinated and often irresponsible policy response that followed, it is still marked by the impact of the COVID-19 pandemic. Unexpectedly, the emergence of the coronavirus crisis revealed the limitations of predictions. A pandemic is one of those known unknowns that are expected to happen. It is known that more are likely to occur, but it is unknown when and where. In the case of the SARS-CoV-2 virus, the gap between the predictions and the lack of preparedness soon became obvious. We are ready to blindly follow the predictions algorithms deliver about what we will consume, our future behaviour and even our emotional state of mind. We believe what they tell us about our health risks and that we should change our lifestyles. They are used for police profiling, court sentencing and much more. And yet we were unprepared for a pandemic that had been long predicted. How could this have gone so wrong?

Thus the COVID-19 crisis, itself likely to turn from an emergency into a more chronic condition, strengthened my conviction that the key to understanding the changes we are living through is linked to what I call the paradox of prediction. When human behaviour, flexible and adaptive as it is, begins to conform to what the predictions foretell, we risk returning to a deterministic world, one in which the future has already been set. The paradox is poised at the dynamic but volatile interface between present and future: predictions are obviously about the future, but they act directly on how we behave in the present.
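The feedback at the heart of this paradox can be sketched as a deliberately crude toy model (my own illustration, not anything proposed in the book): a prediction partly shapes the behaviour it claims to forecast, and the algorithm then re-learns from the behaviour it has itself induced.

```python
# Toy model of a self-fulfilling prediction. An algorithm predicts a
# quantity (say, demand); people partly conform to the prediction
# (performativity); the algorithm then re-predicts from the observed,
# already-influenced behaviour.
def run(true_value, predicted, rounds=10, influence=0.5):
    for _ in range(rounds):
        # Behaviour drifts toward the prediction in proportion to its influence.
        true_value = (1 - influence) * true_value + influence * predicted
        # The algorithm updates its prediction from what it now observes.
        predicted = true_value
    return true_value, predicted

demand, prediction = run(true_value=10.0, predicted=100.0)
# The final prediction matches observed behaviour exactly, and so looks
# perfectly accurate -- yet much of that behaviour was created by the
# initial prediction itself.
```

In the sketch the system settles where behaviour and prediction meet, so the algorithm's track record looks flawless precisely because it helped produce the outcome it "foresaw".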

The predictive power of algorithms enables us to see further and to assess the various outcomes of emergent properties in complex systems obtained through simulation models. Backed by vast computational power, and trained on an enormous amount of data extracted from the natural and social world, we can observe predictive algorithms in action and analyse their impact. But the way we do this is paradoxical in itself: we crave to know the future, but largely ignore what predictions do to us. When do we believe them and which ones do we discard? The paradox stems from the incompatibility between an algorithmic function as an abstract mathematical equation, and a human belief which may or may not be strong enough to propel us to action.

Predictive algorithms have acquired a rare power that unfolds in several dimensions. We have come to rely on them in ways that include scientific predictions with their extensive range of applications, like improving weather forecasts or the numerous technological products designed to create new markets. They are based on techniques of predictive analytics that have resulted in a wide range of products and services, from the analysis of DNA samples to predict the risk of certain diseases, to applications in politics where the targeting of specific groups whose voting profile has been established through data trails has become a regular feature of campaigning. Predictions have become ubiquitous in our daily lives. We trade our personal data for the convenience, efficiency and cost-savings of the products we are offered in return by the large corporations. We feed their insatiable appetite for more data and entrust them with information about our most intimate feelings and behaviour. We seem to have embarked on an irreversible track of trusting them. Predictive analytics reigns supreme in financial markets, where automated trading and fintech risk assessments were installed long ago. It is the backbone of the military's development of autonomous weapons, the actual deployment of which would be a nightmare scenario.
