Chapter 3 (Artificial Neural Networks): We are witnessing the rapid, widespread adoption of AI [9] in our daily lives, which is accelerating the shift toward a more algorithmic society. Our focus is on reviewing the unprecedented opportunities that AI opens up in the deployment and optimization of communication networks. In this chapter, we discuss the basics of artificial neural networks (ANNs) [10], including multilayer neural networks; training and backpropagation; finite‐impulse response (FIR) architectures for spatiotemporal representations; the derivation of temporal backpropagation; applications in time series prediction; autoregressive linear prediction; nonlinear prediction; adaptation and iterated predictions; and a multiresolution FIR neural‐network‐based learning algorithm applied to network traffic prediction. Traffic prediction is important for the timely reconfiguration of the network topology or rerouting of traffic to avoid congestion, as well as for network slicing.
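To give a flavor of the nonlinear time series prediction discussed in Chapter 3, the following sketch trains a one‐hidden‐layer network by plain backpropagation to predict the next sample of a synthetic, traffic‐like series from a sliding window of past samples. This is a minimal illustration, not the book's multiresolution FIR algorithm; the series, window length, and layer sizes are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "traffic" series: a periodic load component plus noise.
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)

window = 8  # number of past samples fed to the network
X = np.array([series[i:i + window] for i in range(series.size - window)])
y = series[window:]

# Network: window inputs -> 16 tanh hidden units -> 1 linear output.
W1 = 0.5 * rng.standard_normal((window, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1));      b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)

for _ in range(2000):                      # full-batch gradient descent
    h, pred = forward(X)
    err = (pred - y)[:, None] / len(y)     # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err;  gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h ** 2)         # backprop through tanh
    gW1 = X.T @ dh;   gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(X)
mse1 = np.mean((pred1 - y) ** 2)
print(f"MSE before training: {mse0:.3f}, after: {mse1:.3f}")
```

The same windowed setup, with the window interpreted as a tapped delay line, is the starting point for the FIR network architectures treated in the chapter.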
Chapter 4 (Explainable NN): Even with the advancements of AI described in the previous chapter, a key impediment to the adoption of AI‐based systems is that they often lack transparency. Indeed, the black‐box nature of these systems allows powerful predictions, but their decisions cannot be directly explained. This problem has triggered a new debate on explainable AI (XAI) [11–14].
XAI is a research field that holds substantial promise for improving the trust and transparency of AI‐based systems. It is recognized as the main support for AI to continue making steady progress without disruption. This chapter provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Here, we review the existing approaches to the topic, discuss trends in related areas, and present major research trajectories covering a number of problems related to explainable NN. In particular, this includes the following topics: the need for, and application opportunities of, XAI; explainability strategies (complexity‐related, scope‐related, and model‐related methods); XAI measurement (evaluating explanations); XAI perception (the human in the loop); the XAI antithesis (the explain‐or‐predict debate); toward more formalism; human‐machine teaming; the composition of explainability methods; other explainable intelligent systems; and the economic perspective.
Chapter 5 (Graph Neural Networks): Graph theory is a basic tool for modeling communication networks in the form G(N,E), where N is the set of nodes and E the set of links (edges) interconnecting the nodes. Recently, the methodology of analyzing graphs with ML has been attracting increasing attention because of the great expressive power of graphs; that is, graphs can be used to represent a large number of systems across various areas, including social science (social networks) [15, 16], natural science (physical systems [17, 18] and protein–protein interaction networks [19]), knowledge graphs [20], and many other research areas [21], including communication networks, which are our focus in this book. As a unique non‐Euclidean data structure for ML, graph analysis focuses on node classification, link prediction, and clustering. GNNs are deep‐learning‐based methods that operate on the graph domain. Owing to their convincing performance and high interpretability, GNNs have recently become a widely applied graph analysis method. In this chapter, we illustrate the fundamental motivations of GNNs and demonstrate how these tools can be used to analyze network slicing. The chapter includes GNN modeling, computation of the graph state, the learning algorithm, transition and output function implementations, linear and nonlinear (non‐positional) GNNs, computational complexity, and examples of Web page ranking and network slicing.
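The graph-state computation mentioned above can be sketched in a few lines: starting from a graph G(N,E), each node's state is repeatedly updated by aggregating its neighbors' states through a shared weight matrix until the iteration (approximately) reaches a fixed point. This is an illustrative toy, not the chapter's full model; the topology, state dimension, and weight scale are arbitrary.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # toy 4-node topology
n, d = 4, 3                                        # nodes, state dimension

A = np.zeros((n, n))
for u, v in edges:                                 # undirected adjacency matrix
    A[u, v] = A[v, u] = 1.0
A_hat = A / A.sum(1, keepdims=True)                # row-normalized aggregation

rng = np.random.default_rng(1)
W = 0.3 * rng.standard_normal((d, d))              # shared transition weights
H = rng.standard_normal((n, d))                    # initial node states

for _ in range(50):                                # iterate toward a fixed point
    H_new = np.tanh(A_hat @ H @ W)                 # aggregate neighbors, transform
    if np.max(np.abs(H_new - H)) < 1e-6:
        break
    H = H_new
print(H.round(3))                                  # converged node states
```

In the classic GNN formulation treated in the chapter, the transition function is constrained to be a contraction so that this fixed point exists and is unique; the resulting states then feed an output function for tasks such as node classification or slice resource prediction.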
Chapter 6 (Learning Equilibria and Games): A comprehensive network optimization also includes the cost of implementing specific solutions. More generally, all negative effects caused by a certain decision in the choice of network parameters, such as congestion, power consumption, and spectrum misuse, can be modeled as a cost. On the other hand, most economic theory relies on equilibrium analysis, making use of either Nash equilibrium or one of its refinements [22–31]. One justification for this is to argue that Nash equilibrium might arise as a result of learning and adaptation. In this chapter, we investigate theoretical models of learning in games. A variety of learning models have been proposed, with different motivations. Some models are explicit attempts to define dynamic processes that lead to Nash equilibrium play. Other learning models, such as stimulus‐response or reinforcement models, were introduced to capture laboratory behavior. These models differ widely in terms of what prompts players to make decisions and how sophisticated players are assumed to be. In the simplest models, players are just machines that use strategies that have worked in the past; they may not even realize they are in a game. In other models, players explicitly maximize payoffs given beliefs that may involve varying levels of sophistication. Thus, we will look at several approaches, including best response dynamics (BRD), fictitious play (FP), RL, joint utility and strategy learning (JUSTE), trial and error learning (TE), regret matching learning, Q‐learning, multi‐armed bandits, and imitation learning.
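As a concrete taste of the simplest of these dynamics, the sketch below runs best response dynamics (BRD) in a two‐player coordination game: players alternately switch to a best response against the opponent's current action, and the process stops when the profile is a mutual best response, i.e. a pure Nash equilibrium. The payoff matrices are an illustrative choice, not an example from the chapter.

```python
import numpy as np

# Payoff matrices: entry [a1, a2] is the payoff at that action profile.
# Coordination game: both players prefer matching on action 0.
P1 = np.array([[2, 0],
               [0, 1]])    # row player
P2 = np.array([[2, 0],
               [0, 1]])    # column player

a1, a2 = 1, 0              # start from a miscoordinated profile
for _ in range(10):
    new_a1 = int(np.argmax(P1[:, a2]))      # player 1 best-responds to a2
    new_a2 = int(np.argmax(P2[new_a1, :]))  # player 2 best-responds to new a1
    if (new_a1, new_a2) == (a1, a2):
        break              # mutual best responses: pure Nash equilibrium
    a1, a2 = new_a1, new_a2
print("equilibrium profile:", (a1, a2))     # prints (0, 0)
```

BRD is guaranteed to converge in potential games (which cover many congestion and resource allocation models in networks), but it can cycle in general games; the more sophisticated dynamics listed above, such as fictitious play and regret matching, address exactly that.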
Chapter 7 (AI Algorithms in Networks): Finally, at the end of Part I of the book, in this chapter we present an extensive set of examples of solving practical problems in networks by using AI. This includes a survey of specific AI‐based algorithms used in networks, covering controlled caching in small cell networks; channel and power level selection; controlling network self‐organization; proactive caching; big data learning for AI‐controlled resource allocation; GNNs for prediction of resource requirements; and multi‐armed bandit estimators for Markov channels.
In particular, we consider AI‐based algorithms for traffic classification, traffic routing, congestion control, resource management, fault management, Quality of Service (QoS) and Quality of Experience (QoE) management, network security, ML for caching in small cell networks, Q‐learning‐based joint channel and power level selection in heterogeneous cellular networks, stochastic non‐cooperative game, multi‐agent Q‐learning, Q‐learning for channel and power level selection, ML for self‐organizing cellular networks, learning in self‐configuration, RL for SON coordination, SON function model, RL, RL‐based caching, system model, optimality conditions, big data analytics in wireless networks, evolution of analytics, data‐driven networks optimization, GNNs, network virtualization, GNN‐based dynamic resource management, deep reinforcement learning (DRL) for multioperator network slicing, game equilibria by DRL, deep Q‐learning for latency limited network virtualization, DRL for dynamic VNF migration, multi‐armed bandit estimator (MBE), and network representation learning.
Chapter 8 (Fundamentals of Quantum Communications): During the last few years, the research community has turned its attention to quantum computing (QC) [32–36] with the objective of combining it with classical communications in order to achieve certain performance targets, such as throughput, round‐trip delay, and reliability, at low computational complexity. As we discuss in more detail in this chapter, there are numerous optimization problems in wireless communication systems that may be solved with a reduced number of cost function evaluations (CFEs) by employing quantum algorithms. Although we do not attempt to cover quantum computer design itself, in this chapter we discuss the basics of QC technology in order to better understand how it can enable significant improvements in the design and optimization of communication networks. These fundamentals include discussions of the qubit system, the algebraic representation of quantum states, entanglement, geometrical (2D, 3D) representations of quantum states, quantum logic gates, tensor computing, the Hadamard operator H, and the Pauli and Toffoli gates.
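Several of these fundamentals can be illustrated with a few lines of linear algebra: a qubit is a unit vector in C², the Hadamard operator H creates an equal superposition, and the tensor (Kronecker) product combined with a CNOT gate yields an entangled two‐qubit Bell state. This is a plain‐numpy sketch for orientation only; the chapter develops the formalism properly.

```python
import numpy as np

# Computational basis states of a single qubit.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Hadamard operator H: maps |0> to an equal superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

plus = H @ ket0                    # (|0> + |1>)/sqrt(2)
print(np.abs(plus) ** 2)           # measurement probabilities: [0.5 0.5]

# CNOT gate on two qubits (control = first qubit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Bell state: H on qubit 1, then CNOT -> entanglement.
bell = CNOT @ np.kron(plus, ket0)  # (|00> + |11>)/sqrt(2)
print(bell.real.round(3))          # [0.707 0.    0.    0.707]
```

The Bell state cannot be written as a tensor product of two single‐qubit states, which is precisely the entanglement property exploited in the quantum communication schemes discussed later in the book.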