Gérard Favier - Matrix and Tensor Decompositions in Signal Processing


Matrix and Tensor Decompositions in Signal Processing: summary


This second volume presents the main matrix and tensor decompositions, together with their uniqueness properties, as well as tensor networks, which are very useful for the analysis of massive data. Parametric estimation algorithms are presented for the identification of the main tensor decompositions. After a brief historical review of compressed sampling methods, an overview of the main methods for recovering matrices and tensors with missing data, under the low-rank hypothesis, is given, with illustrative examples.

Matrix and Tensor Decompositions in Signal Processing — excerpt



For the processing operations themselves, we can distinguish several classes:

– supervised/unsupervised (blind or semi-blind), i.e. with or without training data; for example, to solve classification problems, or when a priori information, called a pilot sequence, is transmitted to the receiver for channel estimation;

– real-time (online)/batch (offline) processing;

– centralized/distributed;

– adaptive/blockwise (with respect to the data);

– with/without coupling of tensor and/or matrix models;

– with/without missing data.

It is important to distinguish batch processing, which is performed to analyze data recorded as signal and image sets, from the real-time processing required by wireless communication systems, recommendation systems, web searches and social networks. In real-time applications, the dimensionality of the model and the algorithmic complexity are predominant factors. The signals received by receiving antennas, the information exchanged between a website and its users, and the messages exchanged between the users of a social network are time-dependent. For instance, a recommendation system interacts with its users in real time, possibly extending an existing database by means of machine learning techniques. For a description of various applications of tensors to data mining and machine learning, see Anandkumar et al. (2014) and Sidiropoulos et al. (2017).

Tensor-based processing leads to various types of optimization algorithms:

– constrained/unconstrained optimization;

– iterative/non-iterative, or closed-form;

– alternating/global;

– sequential/parallel.
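As an illustration of the iterative, alternating class, here is a minimal NumPy sketch of alternating least squares (ALS) for fitting a third-order CPD model; the function names and unfolding conventions are ours, not the book's:

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: rows indexed by the chosen mode (C-order columns)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    # column-wise Kronecker product, shape (I*J, R)
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cpd_als(T, rank, n_iter=200, seed=0):
    # alternating least squares: each step solves a linear LS problem in one
    # factor while the other two are held fixed (iterative, alternating)
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C
```

Each subproblem is an ordinary linear least-squares problem, because the CPD output is multilinear in its factors; this is what makes the alternating scheme so widely used.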

Furthermore, depending on the information that is available a priori , different types of constraints can be taken into account in the cost function to be optimized: low rank, sparseness, non-negativity, orthogonality and differentiability/smoothness. In the case of constrained optimization, weights need to be chosen in the cost function according to the relative importance of each constraint and the quality of the a priori information that is available.
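As a hedged illustration of how such weights enter the optimization, the following NumPy sketch combines a quadratic data-fitting term with weighted sparseness and non-negativity penalties; the penalty forms and the weight names mu_* are our illustrative choices, not the book's:

```python
import numpy as np

def composite_cost(X, Xhat, A, mu_sparse=0.1, mu_nonneg=1.0):
    # quadratic data-fitting term
    fit = np.sum((X - Xhat) ** 2)
    # l1 penalty on a factor promotes sparseness
    sparsity = mu_sparse * np.sum(np.abs(A))
    # quadratic hinge penalizes negative entries (soft non-negativity)
    nonneg = mu_nonneg * np.sum(np.minimum(A, 0.0) ** 2)
    # the weights mu_* reflect the relative importance of each constraint
    return fit + sparsity + nonneg
```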

Table I.4 presents a few examples of cost functions that can be minimized for the parameter estimation of certain third-order tensor models (CPD, Tucker, coupled matrix Tucker (CMTucker) and coupled sparse tensor factorization (CSTF)), for the imputation of missing data in a tensor and for the estimation of a sparse data tensor with a low-rank constraint expressed in the form of the nuclear norm of the tensor.

REMARK I.1.– We can make the following remarks:

– the cost functions presented in Table I.4 correspond to data-fitting criteria. These criteria, expressed in terms of tensor and matrix Frobenius norms (‖·‖_F), are quadratic in the difference between the data tensor χ and the output of the CPD and TD models, and, in the case of the CMTucker model, in the difference between the data matrix Y and a matrix factorization model. They are trilinear and quadrilinear, respectively, with respect to the parameters of the CPD and TD models to be estimated, and bilinear with respect to the parameters of the matrix factorization model;

– for the missing data imputation problem using a CPD or TD model, the binary tensor W, which has the same size as χ, is defined as:

[I.2]  w_{ijk} = 1 if the entry x_{ijk} of χ is available, and w_{ijk} = 0 if it is missing.

The purpose of the Hadamard product (denoted ⊛) of W with the difference between χ and the output of the CPD or TD model is to fit the model to the available data only, ignoring the missing data during model estimation. This imputation problem, known as the tensor completion problem, was originally dealt with by Tomasi and Bro (2005) and Acar et al. (2011a) using a CPD model, followed by Filipovic and Jukic (2015) using a TD model. Various articles have discussed this problem in the context of different applications. An overview of the literature will be given in the next volume;
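The masked fitting criterion ‖W ⊛ (χ − [[A, B, C]])‖²_F can be sketched in a few lines of NumPy, encoding missing entries as NaN; the function and variable names are ours:

```python
import numpy as np

def cpd_reconstruct(A, B, C):
    # [[A, B, C]]: sum of R rank-one terms a_r ∘ b_r ∘ c_r
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def masked_cost(X, A, B, C):
    # W (eq. [I.2]) is 1 where an entry of X is observed, 0 where it is missing;
    # the Hadamard product with W restricts the fit to the available data only
    W = ~np.isnan(X)
    resid = np.where(W, X - cpd_reconstruct(A, B, C), 0.0)
    return np.sum(resid ** 2)
```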

Table I.4. Cost functions for model estimation and recovery of missing data

Problems                               Cost functions
Estimation
  CPD        ‖χ − [[A, B, C]]‖²_F
  TD         ‖χ − G ×₁ A ×₂ B ×₃ C‖²_F
  CMTucker   ‖χ − G ×₁ A ×₂ B ×₃ C‖²_F + ‖Y − A Vᵀ‖²_F
  CSTF       (formula illegible in the source)
Imputation
  CPD        ‖W ⊛ (χ − [[A, B, C]])‖²_F
  TD         ‖W ⊛ (χ − G ×₁ A ×₂ B ×₃ C)‖²_F
Imputation with low-rank constraint
  CPD        ‖W ⊛ (χ − [[A, B, C]])‖²_F + μ‖χ‖_*
  TD         ‖W ⊛ (χ − G ×₁ A ×₂ B ×₃ C)‖²_F + μ‖χ‖_*

– for the imputation problem with the low-rank constraint, the term ‖χ‖_* in the cost function replaces the low-rank constraint, since the function rank(χ) is not convex, and the nuclear norm is the closest convex approximation of the rank. In Liu et al. (2013), this term is replaced by a weighted sum of the nuclear norms of the matrix unfoldings of the tensor, Σ_n α_n ‖X_n‖_*, where X_n represents the mode-n unfolding of χ;
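The sum of nuclear norms of the mode-n unfoldings can be computed directly with NumPy; the following sketch uses our own function names and uniform default weights:

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding of a tensor (rows indexed by the chosen mode)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_nuclear_norms(T, weights=None):
    # weighted sum of the nuclear norms of the mode-n unfoldings,
    # a convex surrogate for tensor rank (as in Liu et al. 2013)
    if weights is None:
        weights = [1.0 / T.ndim] * T.ndim
    return sum(w * np.linalg.norm(unfold(T, n), 'nuc')
               for n, w in enumerate(weights))
```

For a rank-one tensor, every unfolding is a rank-one matrix whose single singular value equals the Frobenius norm of the tensor, so the uniformly weighted sum reduces to ‖χ‖_F.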

– in the case of the CMTucker model, the coupling considered here relates the first mode of the data tensor χ to the first mode of the data matrix Y via the common matrix factor A.

Coupled matrix and tensor factorization (CMTF) models were introduced in Acar et al. (2011b) by coupling a CPD model with a matrix factorization and using the gradient descent algorithm to estimate the parameters. This type of model was used by Acar et al. (2017) to merge EEG and fMRI data with the goal of analyzing brain activity. The EEG signals are modeled with a normalized CPD model (see Chapter 5), whereas the fMRI data are modeled with a matrix factorization. The data are coupled through the subjects mode (see Table I.1). The cost function to be minimized is therefore given by:

[I.3]  min ‖χ − [[λ; A, B, C]]‖²_F + ‖Y − A Σ Vᵀ‖²_F

where the column vectors of the matrix factors A, B and C have unit norm, λ contains the weights of the normalized CPD model, and Σ is a diagonal matrix containing the weights of the matrix factorization.
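The two terms of [I.3] can be evaluated with a short NumPy sketch; the variable names are ours, and any additional regularization terms used in the cited work are omitted:

```python
import numpy as np

def coupled_cost(X, Y, lam, A, B, C, sigma, V):
    # fit of the normalized CPD model [[lam; A, B, C]] to the EEG tensor X
    Xhat = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)
    # fit of the matrix factorization A @ diag(sigma) @ V.T to the fMRI data Y,
    # coupled with the tensor model through the shared factor A (subjects mode)
    Yhat = A @ np.diag(sigma) @ V.T
    return np.sum((X - Xhat) ** 2) + np.sum((Y - Yhat) ** 2)
```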

