The remarkable speed of recent advances in AI and its convergence with the sustainability crisis invites the question: What is different this time? We are already becoming conscious of the limitations of our spatial habitat, and face multiple challenges when it comes to using the available resources in a sustainable manner. These range from managing the transition to clean energy, to maintaining biodiversity and making cities more liveable, to drastically curbing plastic pollution and managing the increasing amount of waste. No wonder there is a growing concern that the control we can exert will be further diminished. The machines we have created are expected to take over many jobs currently performed by humans, but our capacity for control will shrink even further because these machines will monitor and limit our actions and possibilities. For these reasons, much wisdom will be needed to better understand how AI affects and limits human agency.
I soon realized that I had touched only the surface of deeper transformational processes that we will have to think about together. The future will be dominated by digital technologies while we simultaneously face a sustainability crisis, and both of these transitions are linked with changes in the temporal structures and regimes that shape our lives and society. Digital technologies bring the future into the present, while the sustainability crisis confronts us with the past and challenges us to develop new capabilities for the future. Whatever solutions we come up with must integrate the human dimension and our altered relationship to the natural and technologically transformed environment. These were some of the underlying questions that kept me going, humming quietly but persistently in the background while I continued my search. My journey took me to a number of international meetings, workshops and conferences where some of these issues were discussed. For example, there were meetings on how to protect rights to privacy, which received special legal status in Europe through the General Data Protection Regulation (GDPR). Europe is perceived to play only a side role in the geopolitical competition between the two AI superpowers, the United States and China, a competition sometimes referred to as the digital arms race for supremacy in the twenty-first century, one that has recently been rekindled in alarming ways. Many Europeans take solace in the fact that they at least have a regulatory system to protect them, even if they acknowledge that neither the GDPR nor other forms of vigilance against intrusion by the large transnational corporations are sufficient in practice.
Other items on the agenda of discussion fora about digitalization were concerned with the risks arising from the ongoing processes of automation. Foremost was the burning issue of the future of work and the potential risks that digitalization entails for liberal democracies. It seemed to me that the fear that more jobs would be lost than could be created in time was being felt much more strongly in the United States than in Europe, partly due to still-existing European welfare provisions and partly because digitalization had not yet visibly hit professionals and the middle class. The threats to liberal democracies became more apparent when populist, nationalist and xenophobic waves swept across many countries. They were nurtured by sinister phenomena such as ‘fake news’ and Trojan horses, with unknown hackers and presumed foreign secret services engaged in micro-targeting specific groups with their made-up messages. More generally, they appeared intent on undermining existing democratic institutions while supporting political leaders with authoritarian tendencies. Digital technologies and social media were being appropriated as the means to erode democratic principles and the rule of law, while the internet, it seemed, had turned into an unrestrained and unregulated space for the diffusion of hate and contempt.
My regular visits to Singapore provided a different angle on how societies might embrace digitalization, and a unique opportunity to observe a digitally and economically advanced country in action. I gathered insights into Singapore’s much-vaunted educational system, and observed the bureaucracy’s reliance on digital technologies as well as its high standards of efficiency and its maintenance of equally high levels of trust in government. What impressed me most, however, was the country’s delicate and always precarious balance between a widely shared sense of its vulnerability – small, without natural resources and surrounded by large and powerful neighbours – and the equally widely shared determination to be well prepared for the future. Here was a country that perceived itself as still being a young nation, drawing much of its energy from the remarkable economic wealth and social well-being it had achieved. This energy now had to be channelled into a future it was determined to shape. Nowhere else did I encounter so many debates, workshops, reports and policy measures focused on a future that, despite remaining uncertain, was to be deliberated and carefully planned for, taking into account the many contingencies that would arise. Obviously, it would be a digital future. The necessary digital skills were to be cultivated and all available digital tools put to practical use.
More insights and observations came from attending international gatherings on the future of Artificial Intelligence. In my previous role as President of the European Research Council (ERC), I participated in various World Economic Forum meetings. The WEF wants to be seen as keenly engaged in building the digital future. At the meetings I attended, well-known figures from the world of technology and business mingled with academics and corporate researchers working at the forefront of AI. It was obvious that excitement about the opportunities offered by digital technologies had to be weighed against their possible risks if governments and the corporate world wanted to avert a backlash from citizens concerned about the pace of technological change. The many uncertainties regarding how this would play out were recognized, but the solutions offered were few.
Other meetings in which I participated had the explicit aim of involving the general public in a discussion about the future of AI, such as the Nobel Week Dialogue 2015 in Gothenburg, or the Falling Walls Circle in Berlin in 2018. There were also visits to IT and robotics labs and workshops tasked with setting up various kinds of digital strategies. I gained much from ongoing discussions with colleagues at the Vienna Complexity Science Hub and members of their international network, allowing me glimpses into complexity science. By chance, I stumbled into an eye-opening conference on digital humanism, a trend that is gradually expanding to become a movement.
Scattered and inconclusive as these conversations mostly were, they nevertheless projected the image of a dynamic field rapidly moving forward. The main protagonists were eager to portray their work as embodying a responsibility to move towards ‘beneficial AI’ or similar initiatives. There was a notable impatience to demonstrate that AI researchers and promoters were aware of the risks involved, but the line between sincere concern and the insincere attempts of large corporations to claim ‘ethics ownership’ was often blurred. Human intelligence might indeed one day be outwitted by AI, but the discussants seldom dwelt on the difference between the two. Instead, they offered reassurances that the risks could be managed. Occasionally, the topic of human stupidity and the role played by ignorance were touched upon as well. And at times, a fascination with the ‘sweetness of technology’ shimmered through, similar to the one J. Robert Oppenheimer described when he spoke of his infatuation with the atomic bomb.