Tormod Næs - Multiblock Data Fusion in Statistics and Machine Learning


Multiblock Data Fusion in Statistics and Machine Learning: Applications in the Natural and Life Sciences

Explore the advantages and shortcomings of various forms of multiblock analysis, and the relationships between them, with this expert guide.


Figure 9.2 Score plots of IDIOMIX, OS-SCA and GSCA for the genomics fusion; always score 3 (SC3) on the y-axes and score 1 (SC1) on the x-axes. The third component clearly differs among the methods. Source: Smilde et al. (2020). Licensed under CC BY 4.0.

Figure 9.3 True design used in mixture preparation (blue) versus the columns of the associated factor matrix corresponding to the mixture mode extracted by the BIBFA model (red) and the ACMTF model (red). Source: Acar et al. (2015). Reproduced with permission from IEEE.

Figure 9.4 Cross-validation results for the penalty parameter λ_bin of the mutation block (left) and for the drug response, transcriptome, and methylation blocks (λ_quan, right) in the PESCA model. For more explanation, see text. Adapted from Song et al. (2019).

Figure 9.5 Explained variances of the PESCA (a) and MOFA (b) models on the CCL data. From top to bottom: drug response, methylation, transcriptome, and mutation data. The values are percentages of explained variation. For more explanation, see text. Adapted from Song et al. (2019).

Figure 9.6 From multiblock data to three-way data.

Figure 9.7 Decision tree for selecting an unsupervised method. For abbreviations, see the legend of Table 9.1. The furthest left leaf is empty, but CD methods can also be used in that case. For more explanation, see text.

Figure 10.1 Results from multiblock redundancy analysis of the Wine data, showing Y-scores (u_r) and block-wise weights for each of the four input blocks (A, B, C, D).

Figure 10.2 Pie chart of the sources of contribution to the total variance (arbitrary sector sizes for illustration).

Figure 10.3 Flow chart for the NI-SL method.

Figure 10.4 An illustration of SO-N-PLS, modelling a response using a two-way matrix, X_1, and a three-way array, X_2.

Figure 10.5 Path diagram for a wine tasting study. The blocks represent the different stages of a wine tasting experiment and the arrows indicate how the blocks are linked. Source: Næs et al. (2020). Reproduced with permission from Wiley.

Figure 10.6 Wine data. PCP plots for prediction of block D from blocks A, B, and C. Scores and loadings from PCA on the predicted y-values on top. The loadings from projecting the orthogonalised X-blocks (except the first, which is used as is) onto the scores at the bottom. Source: Romano et al. (2019). Reproduced with permission from Wiley & Sons.

Figure 10.7 An illustration of the multigroup setup, where variables are shared among X-blocks and related to responses, Y, which also share their own variables.

Figure 10.8 Decision tree for selecting a supervised method. For more explanation, see text.

Figure 11.1 Output from use of scoreplot() on a pca object.

Figure 11.2 Output from use of loadingplot() on a cca object.

Figure 11.3 Output from use of scoreplot(pot.sca, labels = "names") (SCA scores in 2 dimensions).

Figure 11.4 Output from use of loadingplot(pot.sca, block = "Sensory", labels = "names") (SCA loadings in 2 dimensions).

Figure 11.5 Output from use of plot(can.statis$statis) (STATIS summary plot).

Figure 11.6 Output from use of scoreplot() (ASCA scores in 2 dimensions).

Figure 11.7 Output from use of scoreplot() (ASCA scores in 1 dimension).

Figure 11.8 Output from use of loadingplot() (ASCA loadings in 2 dimensions).

Figure 11.9 Output from use of scoreplot() (block-scores).

Figure 11.10 Output from use of loadingplot() (block-loadings).

Figure 11.11 Output from use of scoreplot() and loadingweightplot() on an object from sMB-PLS.

Figure 11.12 Output from use of maage().

Figure 11.13 Output from use of maageSeq().

Figure 11.14 Output from use of loadingplot() on an sopls object.

Figure 11.15 Output from use of scoreplot() on an sopls object.

Figure 11.16 Output from use of scoreplot() on a pcp object.

Figure 11.17 Output from use of plot() on a cvanova object.

Figure 11.18 Output from use of scoreplot() on a popls object.

Figure 11.19 Output from use of loadingplot() on a popls object.

Figure 11.20 Output from use of loadingplot() on a rosa object.

Figure 11.21 Output from use of scoreplot() on a rosa object.

Figure 11.22 Output from use of image() on a rosa object.

Figure 11.23 Output from use of image() with parameter "residual" on a rosa object.

Figure 11.24 Output from use of scoreplot() on an mbrda object.

Figure 11.25 Output from use of plot() on an lpls object. Correlation loadings from blocks are coloured and overlaid on each other to visualise relations across blocks.

List of Tables

Table 1.1 Overview of methods. Legend: U = unsupervised, S = supervised, C = complex, HOM = homogeneous data, HET = heterogeneous data, SEQ = sequential, SIM = simultaneous, MOD = model-based, ALG = algorithm-based, C = common, CD = common/distinct, CLD = common/local/distinct, LS = least squares, ML = maximum likelihood, ED = eigendecomposition, MC = maximising correlations/covariances. For abbreviations of the methods, see Section 1.11.

Table 1.2 Abbreviations of the different methods.

Table 2.1 Formal treatment of types of data scales. The first column refers to the scale-type. The second column gives examples of such scale-types. The third column defines the scale-type in terms of permissible transformations (see text). Finally, the fourth column gives the permissible statistics for the types of scales.

Table 2.2 Different methods for fusing two data blocks, indicating the properties in terms of explained variation within and between the blocks. The last two columns refer to whether the methods favour explaining within- or between-block variation. For more explanation, see text.

Table 2.3 The matrices of which the weights w are eigenvectors, in their original form and using the SVDs of X and Y.

Table 4.1 Overview of the data sets used in the genomics example.

Table 5.1 Overview of methods. Legend: U = unsupervised, S = supervised, C = complex, HOM = homogeneous data, HET = heterogeneous data, SEQ = sequential, SIM = simultaneous, MOD = model-based, ALG = algorithm-based, C = common, CD = common/distinct, CLD = common/local/distinct, LS = least squares, ML = maximum likelihood, ED = eigendecomposition, MC = maximising correlations/covariances. For abbreviations of the methods, see Section 1.11.

Table 5.2 Different types of SCA, where D_m is a diagonal matrix and Φ is a positive definite matrix (see Section 2.8). The correlations and variances pertain to the block-scores (see text).

Table 5.3 Proportions of explained variance per component (C1, C2, …) and total in each of the blocks for the two different methods. Legend: conc is the abbreviation of concatenated; yellow is distinct for TIV; red is distinct for LAIV; green is common (see text).

Table 5.4 Properties of methods for common and distinct components. The matrix D indicates a diagonal matrix with all positive elements on its diagonal.

Table 6.1 Overview of methods. Legend: U = unsupervised, S = supervised, C = complex, HOM = homogeneous data, HET = heterogeneous data, SEQ = sequential, SIM = simultaneous, MOD = model-based, ALG = algorithm-based, C = common, CD = common/distinct, CLD = common/local/distinct, LS = least squares, ML = maximum likelihood, ED = eigendecomposition, MC = maximising correlations/covariances. For abbreviations of the methods, see Section 1.11.

Table 7.1 Overview of methods. Legend: U = unsupervised, S = supervised, C = complex, HOM = homogeneous data, HET = heterogeneous data, SEQ = sequential, SIM = simultaneous, MOD = model-based, ALG = algorithm-based, C = common, CD = common/distinct, CLD = common/local/distinct, LS = least squares, ML = maximum likelihood, ED = eigendecomposition, MC = maximising correlations/covariances. For abbreviations of the methods, see Section 1.11.

