Savo G. Glisic - Artificial Intelligence and Quantum Computing for Advanced Wireless Networks


A practical overview of the implementation of artificial intelligence and quantum computing technology in large-scale communication networks.


Military: The now-famous XAI initiative was originally started by military researchers [14], and the growing visibility of XAI today is due largely to the call for research by the Defense Advanced Research Projects Agency (DARPA) and the solicitation of DARPA projects. AI in the military arena also suffers from the explainability problem. Some of the challenges of relying on autonomous systems for military operations are discussed in [15]. As in the healthcare domain, decisions here often involve life and death, which again leads to similar ethical and legal dilemmas. The academic AI research community is well represented in this application domain through the ambitious DARPA XAI program, along with several research initiatives that study explainability in this domain [16].

XAI can also find interesting applications in other domains such as cybersecurity, education, entertainment, government, and image recognition. An interesting chart of potential harms from automated decision making was presented by the Future of Privacy Forum [17]: it depicts the various spheres of life where automated decision making can cause injury, and where providing automated explanations can turn these decisions into trustworthy processes; these areas include employment, insurance and social benefits, housing, and differential pricing of goods and services.

4.1 Explainability Methods

The majority of works classify the methods according to three criteria: (i) the complexity of interpretability, (ii) the scope of interpretability, and (iii) the level of dependency on the used ML model. Next, we will describe the main features of each class and give examples from current research.

4.1.1 Complexity and Interpretability

The complexity of an ML model is directly related to its interpretability. In general, the more complex the model, the more difficult it is to interpret and explain. Thus, the most straightforward way to arrive at interpretable AI/ML is to design an algorithm that is inherently and intrinsically interpretable. Many works have been reported in that direction. Letham et al. [18] presented a model called Bayesian Rule Lists (BRL) based on decision trees; the authors argued that such interpretable models provide concise and convincing explanations that help gain domain experts’ trust. Caruana et al. [1] described an application of a learning method based on generalized additive models to the pneumonia problem, and demonstrated the intelligibility of their model through case studies on real medical data.
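To make the rule-list idea concrete, the sketch below shows a minimal hand-written decision list in the spirit of BRL: an ordered set of IF-THEN rules evaluated top to bottom, where the first matching rule fires and a default rule catches the rest. The rules, thresholds, and feature names are illustrative assumptions, not taken from the cited paper.

```python
# A minimal decision-list sketch in the spirit of BRL: ordered IF-THEN rules,
# evaluated top to bottom; the first matching condition determines the output.
# Rules and feature names are hypothetical, for illustration only.

def predict_risk(patient):
    """Return an illustrative risk label from an ordered rule list."""
    rules = [
        (lambda p: p["age"] > 70 and p["asthma"], "high"),
        (lambda p: p["age"] > 70,                 "medium"),
        (lambda p: p["asthma"],                   "medium"),
    ]
    for condition, label in rules:
        if condition(patient):
            return label
    return "low"  # default rule when nothing above matches

print(predict_risk({"age": 75, "asthma": False}))  # medium
```

The explanation for any prediction is simply the first rule that fired, which is what makes such models interpretable by construction.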

Xu et al. [19] introduced an attention‐based model that automatically learns to describe the content of images, and showed through visualization how the model is able to interpret the results. Ustun and Rudin [20] presented a sparse linear model for creating data‐driven scoring systems called SLIM (Supersparse Linear Integer Models). The results of this work highlight the interpretability of the proposed system: its high level of sparsity and small integer coefficients give users a qualitative understanding of its predictions. A common challenge, which hinders the usability of this class of methods, is the trade‐off between interpretability and accuracy [21]. As noted by Breiman [22], “accuracy generally requires more complex prediction methods … [and] simple and interpretable functions do not make the most accurate predictors.” In a sense, intrinsically interpretable models come at the cost of accuracy.

An alternative approach to interpretability in ML is to construct a highly complex, uninterpretable black‐box model with high accuracy and subsequently use a separate set of techniques to perform what we could call a reverse engineering process: providing the needed explanations without altering, or even knowing, the inner workings of the original model. This class of methods thus offers post‐hoc explanations [23]. Though it can be significantly complex and costly, most recent work in the XAI field belongs to the post‐hoc class and includes natural language explanations [24], visualizations of learned models [25], and explanations by example [26].

We can thus see that interpretability depends on the nature of the prediction task. As long as the model is accurate for the task and uses a reasonably restricted number of internal components, intrinsically interpretable models are sufficient. If, however, the prediction target requires complex and highly accurate models, then post‐hoc interpretation models must be considered. It should also be noted that the literature contains a group of intrinsic methods for complex, uninterpretable models. These methods modify the internal structure of a complex black‐box model that is not interpretable by design (typically a DNN, which is our main interest) to mitigate its opacity and thus improve its interpretability [27]. The modifications may either add extra capabilities as components of the model architecture [28, 29], for example as part of the loss function [30], or act within the architecture itself, as operations between layers [31, 32].

4.1.2 Global Versus Local Interpretability

Global interpretability facilitates the understanding of the whole logic of a model and follows the entire reasoning leading to all the different possible outcomes. This class of methods is helpful when ML models are crucial to inform population‐level decisions, such as drug consumption trends or climate change [33]. In such cases, a global effect estimate is more helpful than many explanations of all the possible idiosyncrasies. Works that propose globally interpretable models include the aforementioned additive models for predicting pneumonia risk [1] and rule sets generated from sparse Bayesian generative models [18]. However, these models are usually specifically structured, and thus limited in predictive power, to preserve interpretability. Yang et al. [33] proposed Global model Interpretation via Recursive Partitioning (GIRP), which builds a global interpretation tree for a wide range of ML models based on their local explanations. In their experiments, the authors highlighted that their method can discover whether a particular ML model is behaving in a reasonable way or is overfit to some unreasonable pattern. Valenzuela‐Escárcega et al. [34] proposed a supervised approach for information extraction that provides a global, deterministic interpretation. This work supports the idea that representation learning can be successfully combined with traditional, pattern‐based bootstrapping, yielding models that are interpretable. Nguyen et al. [35] proposed an approach based on activation maximization – synthesizing the preferred inputs for neurons in neural networks – via a learned prior in the form of a deep generator network, to produce a globally interpretable model for image recognition. The activation maximization technique was previously used by Erhan et al. [36].
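The core of activation maximization is simple: treat the input as the free variable and perform gradient ascent on a unit's activation. The toy sketch below uses a quadratic stand-in for a network unit and a finite-difference gradient; a real application would differentiate through an actual network and, as in the cited work, constrain the search with a learned generator prior, both of which are omitted here.

```python
# Activation-maximization sketch: gradient-ascend an input x so as to
# maximize a unit's activation f(x). The quadratic f below is a toy
# stand-in for a real neuron; the deep generator prior is omitted.

def activation(x):
    # Toy unit whose activation peaks at x = (1.0, -2.0).
    return -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)

def maximize(f, x, lr=0.1, steps=200, eps=1e-5):
    x = list(x)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):  # central finite-difference gradient
            xp, xm = list(x), list(x)
            xp[i] += eps
            xm[i] -= eps
            grad.append((f(xp) - f(xm)) / (2 * eps))
        x = [xi + lr * g for xi, g in zip(x, grad)]  # ascent step
    return x

x_star = maximize(activation, [0.0, 0.0])
print([round(v, 2) for v in x_star])  # converges toward [1.0, -2.0]
```

The synthesized input `x_star` is the "preferred input" for the unit; visualizing such inputs for image classifiers is what yields the global, per-neuron interpretations discussed above.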
Although a multitude of techniques is used in the literature to enable global interpretability, it is difficult to achieve in practice, especially for models that exceed a handful of parameters. By analogy with humans, who focus their effort on only part of a model in order to comprehend the whole of it, local interpretability can be more readily applied.

Explaining the reasons for a specific decision or single prediction means that interpretability is occurring locally. Ribeiro et al. [37] proposed LIME (Local Interpretable Model‐Agnostic Explanations). This model can approximate a black‐box model locally, in the neighborhood of any prediction of interest. The work in [38] extends LIME using decision rules. Leave‐one‐covariate‐out (LOCO) [39] is another popular technique for generating local explanation models that offer local variable importance measures. In [40], the authors present a method capable of explaining the local decision taken by arbitrary nonlinear classification algorithms, using the local gradients that characterize how a data point has to be moved to change its predicted label. A set of works using similar methods for image classification models was presented in [41–44]. It is a common approach to understand the decisions of image classification systems by finding regions of an image that are particularly influential for the final classification. Also called sensitivity maps, saliency maps, or pixel attribution maps [45], these approaches use occlusion techniques or gradient calculations to assign an “importance” value to individual pixels that is meant to reflect their influence on the final classification. On the basis of decomposing a model’s predictions into the individual contributions of each feature, Robnik‐Šikonja and Kononenko [46] proposed explaining the model prediction for one instance by measuring the difference between the original prediction and the one made when omitting a set of features. A number of recent algorithms can also be found in [47–58].
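The omission-based idea at the end of the paragraph can be sketched in a few lines: a feature's local contribution is the change in the model's output when that feature is replaced by a baseline value. The black-box model, feature names, and baseline below are illustrative assumptions, not the cited authors' setup.

```python
# Omission-based local explanation sketch (in the spirit of Robnik-Sikonja
# and Kononenko): a feature's contribution to one prediction is the drop in
# output when that feature is replaced by a baseline. Model is a toy stand-in.

def model(x):
    # Hypothetical black-box scoring function over named features.
    return 0.5 * x["income"] + 2.0 * x["debt"] - 1.0 * x["savings"]

def local_explanation(model, instance, baseline):
    full = model(instance)
    contributions = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] = baseline[name]  # "omit" this one feature
        contributions[name] = full - model(perturbed)
    return contributions

inst = {"income": 4.0, "debt": 1.0, "savings": 2.0}
base = {"income": 0.0, "debt": 0.0, "savings": 0.0}
print(local_explanation(model, inst, base))
# {'income': 2.0, 'debt': 2.0, 'savings': -2.0}
```

For a linear model these contributions recover the weighted feature values exactly; for a nonlinear black box they give a local, instance-specific approximation, which is the point of the method.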
