Societal Responsibility of Artificial Intelligence


Societal Responsibility of Artificial Intelligence: Summary and Description


The digital world is characterized by its immediacy, its density of information and its omnipresence, in contrast to the concrete world. Significant changes will occur in our society as AI becomes integrated into many aspects of our lives.
This book focuses on this vision of universalization by addressing the development of a framework for AI applicable to all. It develops a moral framework based on a neo-Darwinian approach, the concept of Ethics by Evolution, to accompany AI by observing a number of requirements, recommendations and rules at each stage of design, implementation and use. The societal responsibility of artificial intelligence is an essential step towards ethical, eco-responsible and trustworthy AI that aims to protect and serve people and the common good in a beneficial way.

Societal Responsibility of Artificial Intelligence: Introductory Excerpt

It is from this vision of universalization that we felt the need to write this book around a framework of AI applicable to all. As a result, we have developed a moral framework to support digital AI projects by observing a number of requirements, recommendations and rules, elaborated, verified and discussed at each stage of design, implementation and use. This allowed us to design ethical criteria that are, according to our determinants, both essential and universal, based on the principle of Ethics by Design 7 or Human Rights by Design, in order to move toward an entirely new principle of Ethics by Evolution that we will develop throughout this book. The objective is to achieve AI that is safer, more secure and better adapted to our needs, both ethical and human, over time. This will help optimize our ability to monitor progress against criteria of sustainability and social cohesion. AI is, therefore, not an end in itself, but rather a means to increase individual and societal well-being.

ETHICS BY DESIGN.–

An approach that integrates ethical requirements and recommendations from the design stage of NICTs onward.

ETHICS BY EVOLUTION.–

An approach that incorporates ethical recommendations and rules in an evolutionary manner over time, throughout the lifecycle of NICTs, i.e. through their implementation and evolving use.
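The difference between the two approaches can be read as a difference in when ethical requirements are checked: only at design time, or repeatedly across the whole lifecycle. The short Python sketch below is purely illustrative and not part of the book's framework; the names (LifecycleStage, EthicsChecklist) are hypothetical, and it assumes nothing more than a list of requirements reviewed per lifecycle stage.

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List


class LifecycleStage(Enum):
    # Stages named in the definitions above: design, implementation, use.
    DESIGN = auto()
    IMPLEMENTATION = auto()
    USE = auto()


@dataclass
class EthicalRequirement:
    # A single requirement, recommendation or rule to verify.
    name: str
    stage: LifecycleStage
    satisfied: bool = False


@dataclass
class EthicsChecklist:
    # Ethics by Design would populate and review only the DESIGN stage;
    # Ethics by Evolution keeps reviewing the checklist at every stage over time.
    requirements: List[EthicalRequirement] = field(default_factory=list)

    def add(self, name: str, stage: LifecycleStage) -> None:
        self.requirements.append(EthicalRequirement(name, stage))

    def open_items(self, stage: LifecycleStage) -> List[str]:
        # Requirements still unsatisfied at the given stage.
        return [r.name for r in self.requirements
                if r.stage == stage and not r.satisfied]


if __name__ == "__main__":
    checklist = EthicsChecklist()
    # Hypothetical entries, loosely echoing the essential requirements in note 5.
    checklist.add("Privacy and data governance", LifecycleStage.DESIGN)
    checklist.add("Transparency", LifecycleStage.IMPLEMENTATION)
    checklist.add("Societal and environmental well-being", LifecycleStage.USE)
    for stage in LifecycleStage:
        print(stage.name, "open items:", checklist.open_items(stage))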

This book is intended to categorize ethical issues related to the digital environment, both from the point of view of the user and from that of the designer of digital solutions and/or services. It invites reflection (what questions businesses can ask themselves about digital ethics) and suggests avenues for action. It is an approach that aims to provide guidelines, bringing out the values that we collectively want to put forward, in order to help legislators formulate laws that will build a framework for AI. This reference framework is not exhaustive. It is intended to be general, open to all contributions and evolving; it must be regularly updated to ensure its consistency and continued relevance as the digital environment and our technological knowledge evolve. It also serves as a reminder of the company's regulatory duties, which precisely define what is permitted or prohibited and the sanctions that apply. The company has an obligation to comply, and this obligation does not in itself concern the area of ethics; however, the means by which it complies can be the subject of ethical reflection.

Finally, this book is addressed to all stakeholders involved in the development, deployment or use of AI, including organizations, companies, public services, researchers, individuals and other entities. This document should, therefore, be considered the first building block of a discussion between these different actors toward ethical, responsible, trustworthy AI that protects and serves individuals and the common good in a beneficial way, with a view to better adoption at the global level.

1. In France, crowdsourcing is defined according to the Commission générale de terminologie et de néologie (2014) as the “mode of completion of a project or a product calling for contributions from a large number of people, generally Internet users”. JORF, 0179(91), 12995.

2. ISO 2382-28:1995 defines artificial intelligence as “the capability of a functional unit to perform functions that are generally associated with human intelligence, such as reasoning and learning”.

3. IEEE P7000: Model Process for Addressing Ethical Concerns During System Design; IEEE P7001: Transparency of Autonomous Systems; IEEE P7002: Data Privacy Process; IEEE P7003: Algorithmic Bias Considerations; IETF Research into Human Rights Protocol Considerations draft.

4. CNIL (2017). Comment permettre à l’homme de garder la main ? Les enjeux éthiques des algorithmes et de l’intelligence artificielle. Summary report of the public debate led by the CNIL in the context of the mission of ethical reflection entrusted to it by the law for a digital Republic.

5. These seven essential requirements include human factor and human control, technical robustness and security, privacy and data governance, transparency, diversity, non-discrimination and equity, societal and environmental well-being, and accountability.

6. On May 22, 2019, through the OECD Council of Ministers, 42 countries (the 36 OECD countries and Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania) adopted the principles set out in the OECD Recommendation on AI, making it the first intergovernmental agreement to stimulate innovation and build confidence in AI by promoting a responsible approach to trusted AI, while ensuring respect for human rights and democratic values.

7. This consists of integrating ethical rules and requirements from the design and learning stages of these NICTs onward, prohibiting direct or indirect damage to the fundamental values protected by the conventions.

End of the introductory excerpt.

