Savo G. Glisic - Artificial Intelligence and Quantum Computing for Advanced Wireless Networks

ARTIFICIAL INTELLIGENCE AND QUANTUM COMPUTING FOR ADVANCED WIRELESS NETWORKS
A practical overview of the implementation of artificial intelligence and quantum computing technology in large-scale communication networks.

where d_h is the number of hidden neurons. Then, we have

(5.87) [equation not reproduced in this excerpt]

(5.88) [equation not reproduced in this excerpt]

(5.89) [equation not reproduced in this excerpt]

(5.90) [equation not reproduced in this excerpt]

where the aforementioned Kronecker product properties have been used.
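The "aforementioned Kronecker product properties" are the standard vec identities, chiefly vec(ABC) = (Cᵀ ⊗ A)·vec(B), which turn matrix-valued derivatives such as ∂vec(A_{n,u})/∂w into ordinary matrix-vector products. A minimal NumPy check of the identity (random placeholder matrices, not the book's quantities):

```python
import numpy as np

# Verify vec(ABC) = (C^T kron A) vec(B), with vec() stacking columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

def vec(M):
    # Column-major stacking: the convention under which the identity holds.
    return M.reshape(-1, order="F")

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print("identity holds:", np.allclose(lhs, rhs))  # True
```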

It follows that (vec(R_{u,v}))′ · ∂vec(A_{n,u})/∂w can be written as the sum of the four contributions represented by Eqs. (5.87)–(5.90). The second and the fourth terms, Eqs. (5.88) and (5.90), can be computed directly using the corresponding formulas. The first one can be calculated by observing that this term looks like the function computed by a three-layered FNN that is the same as h_w except for the activation function of the last layer. In fact, if we denote such a network by h̄_w, then

(5.91) [equation not reproduced in this excerpt]

holds. A similar reasoning can be applied also to the third contribution.
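Since Eq. (5.91) is not reproduced above, the sketch below only illustrates the stated trick: the same three-layer network is backpropagated through twice, once with its original output activation and once with the last-layer activation replaced (here by the identity). The tanh activations, shapes, and names are assumptions, not the book's h_w.

```python
import numpy as np

# Backpropagate a row vector v through h(y) = sigma(W2 @ tanh(W1 @ y + b1) + b2),
# where sigma is swapped between tanh (original) and identity (modified).
rng = np.random.default_rng(1)
n_in, n_hid, n_out = 6, 8, 4
W1, b1 = rng.standard_normal((n_hid, n_in)), rng.standard_normal(n_hid)
W2, b2 = rng.standard_normal((n_out, n_hid)), rng.standard_normal(n_out)

def backprop_input(y, v, out_act_deriv):
    """Return v' * dh/dy for the chosen output-activation derivative."""
    a1 = W1 @ y + b1                         # hidden pre-activation
    z1 = np.tanh(a1)                         # hidden activation
    a2 = W2 @ z1 + b2                        # output pre-activation
    delta2 = v * out_act_deriv(a2)           # output-layer delta
    delta1 = (W2.T @ delta2) * (1 - z1**2)   # hidden-layer delta
    return W1.T @ delta1                     # v' * dh/dy

y, v = rng.standard_normal(n_in), rng.standard_normal(n_out)
g_orig = backprop_input(y, v, lambda a: 1 - np.tanh(a) ** 2)  # original last layer
g_mod = backprop_input(y, v, np.ones_like)                    # modified last layer
print(g_orig, g_mod, sep="\n")
```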

Required number of operations: The above method includes two tasks: the matrix multiplications of Eqs. (5.87)–(5.90) and the backpropagation defined by Eq. (5.91). The former consists of several matrix multiplications; by inspection of Eqs. (5.87)–(5.90), the number of floating-point operations is approximately 2s² + 12s·hi_h + 10s²·hi_h, where hi_h denotes the number of hidden-layer neurons implementing the function h. The second task has approximately the same cost as a backpropagation phase through the original function h_w. These estimates follow from the observations below: for an a × b matrix C and a b × c matrix D, the multiplication CD requires approximately 2abc operations; more precisely, abc multiplications and ac(b − 1) sums. If D is a diagonal b × b matrix, then CD requires 2ab operations. Similarly, if C is an a × b matrix, D is a b × a matrix, and P_a is the matrix defined above and used in Eqs. (5.87)–(5.90), then computing vec(CD)′·P_a costs only 2ab operations, provided that a sparse representation is used for P_a. Finally, a_1, a_2, and a_3 are already available, since they are computed during the forward phase of the learning algorithm. Thus, the complexity of computing ∂p_w/∂w is [expression not reproduced in this excerpt]. Note, however, that even though the sum in Eq. (5.85) ranges over all the arcs of the graph, only those arcs (n, u) for which R_{n,u} ≠ 0 have to be considered. In practice, R_{n,u} ≠ 0 is a rare event, since it happens only when the columns of the Jacobian are larger than μ, and the penalty function is used precisely to limit the occurrence of such cases. As a consequence, a better estimate of the complexity of computing ∂p_w/∂w is O([expression not reproduced in this excerpt]), where t_R is the average number of nodes u such that R_{n,u} ≠ 0 holds for some n.
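These counts can be sanity-checked with a toy tally. The helper below (illustrative only; not from the book) counts operations for a dense product and for the diagonal special case:

```python
def dense_matmul_ops(a, b, c):
    # C (a x b) times D (b x c): abc multiplications, ac(b - 1) additions,
    # i.e. roughly 2abc operations for large b.
    return a * b * c + a * c * (b - 1)

def diag_matmul_ops(a, b):
    # C (a x b) times diagonal D (b x b): one scaling per entry of C,
    # the same order as the book's 2ab bound for this special case.
    return a * b

print(dense_matmul_ops(100, 50, 80))  # ~2 * 100 * 50 * 80 = 800000
print(diag_matmul_ops(100, 50))       # 5000
```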

1 Instructions b = (∂e_w/∂o)(∂G_w/∂x)(x, l_N) and c = (∂e_w/∂o)(∂G_w/∂w)(x, l_N): The terms b and c can be calculated by backpropagating ∂e_w/∂o through the network that implements g_w, as sketched below. Since such an operation must be repeated for each node, the time complexity of instructions b and c is [expression not reproduced in this excerpt] for all the GNN models.
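A minimal sketch of this per-node backpropagation, assuming a toy linear output network g_w(x_n) = V·x_n; V, the shapes, and the loop structure are placeholders for illustration:

```python
import numpy as np

# Backpropagate de/do through g_w once per node, yielding the
# contributions b (w.r.t. the states x) and c (w.r.t. the weights of g_w).
rng = np.random.default_rng(4)
s, n_out, n_nodes = 4, 2, 5
V = rng.standard_normal((n_out, s)) * 0.1       # toy output network

x = rng.standard_normal((n_nodes, s))           # node states
de_do = rng.standard_normal((n_nodes, n_out))   # de_w/do, one row per node

b = np.zeros((n_nodes, s))   # rows of (de_w/do)(dG_w/dx)
c = np.zeros_like(V)         # (de_w/do)(dG_w/dw), summed over nodes
for n in range(n_nodes):
    b[n] = de_do[n] @ V                # backprop to the state x_n
    c += np.outer(de_do[n], x[n])      # backprop to the weights of g_w
print(b.shape, c.shape)
```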

2 Instruction d = z(t)·(∂F_w/∂w)(x, l): By the definition of F_w, f_w, and BP, we have

(5.92) [equation not reproduced in this excerpt]

where y = [l_n, x_u, l_(n,u), l_u] and BP_1 indicates that we are considering only the first part of the output of BP. Similarly,

(5.93) [equation not reproduced in this excerpt]

where y = [l_n, x_u, l_(n,u), l_u]. These two equations provide a direct method to compute d in positional and nonlinear GNNs, respectively.

For linear GNNs, note that the output of h_w satisfies

[equation not reproduced in this excerpt]

where the element in position i, j of matrix A_{n,u} is given by the corresponding output of the transition network, the i-th element of vector b_n is given by the corresponding output of the forcing network (see Eq. (5.83)), and x_{u,i} is the i-th element of x_u. Then

[equation not reproduced in this excerpt]

holds, where y = [l_n, l_(n,u), l_u] is the input of the transition network, δ = ∣ne[n]∣ · z′(t), and the remaining term is a vector that stores its components in the positions corresponding to the pairs i, j. Thus, in linear GNNs, d is computed by calling the backpropagation procedure on each arc and node.
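As a hedged sketch of this per-arc/per-node procedure, assume a linear GNN in which a toy transition network maps the node label to vec(A_{n,u}) and a toy forcing network maps it to b_n, both as single linear layers; every name and shape below is invented for illustration and does not reproduce Eq. (5.83):

```python
import numpy as np

# Accumulate d by one backprop call per arc (transition network) and
# per node (forcing network), as in the linear-GNN case described above.
rng = np.random.default_rng(3)
s, n_feat, n_nodes = 3, 2, 4
Wt = rng.standard_normal((s * s, n_feat)) * 0.1  # transition net: label -> vec(A)
Wf = rng.standard_normal((s, n_feat)) * 0.1      # forcing net: label -> b_n

labels = rng.standard_normal((n_nodes, n_feat))
x = rng.standard_normal((n_nodes, s))            # converged states x(t)
z = rng.standard_normal((n_nodes, s))            # rows of z(t)
arcs = [(n, (n + 1) % n_nodes) for n in range(n_nodes)]  # toy ring graph

d_t = np.zeros_like(Wt)
for n, u in arcs:
    # The delta for entry (i, j) of A_{n,u} is z[n, i] * x[u, j], stored
    # in the vec position corresponding to (i, j); then chain into Wt.
    delta = np.outer(z[n], x[u]).reshape(-1)
    d_t += np.outer(delta, labels[n])            # linear layer: dout/dWt = y'

d_f = np.zeros_like(Wf)
for n in range(n_nodes):
    d_f += np.outer(z[n], labels[n])             # z_n' * db_n/dWf, per node
print(d_t.shape, d_f.shape)
```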
