A. K. Md. Ehsanes Saleh - Rank-Based Methods for Shrinkage and Selection
Here you can read online an introductory excerpt of the e-book «A. K. Md. Ehsanes Saleh - Rank-Based Methods for Shrinkage and Selection» free of charge, and purchase the full version after reading the excerpt. In some cases the book can also be listened to as audio or downloaded as an fb2 file, and a brief summary is provided. Genre: unrecognised; language: English. The description (preface) and readers' reviews are available on the LibCat library portal.
- Title: Rank-Based Methods for Shrinkage and Selection
- Author: A. K. Md. Ehsanes Saleh
- Genre: unrecognised
- Year: unknown
- ISBN: no data
- Rating: 3 / 5 (1 vote)
Rank-Based Methods for Shrinkage and Selection: summary, description and annotation
Below we offer the annotation, description, summary or preface (depending on what the author of «Rank-Based Methods for Shrinkage and Selection» wrote). If you have not found the information you need about the book, write in the comments and we will try to find it.
A practical and hands-on guide to the theory and methodology of statistical estimation based on ranks.

Rank-Based Methods for Shrinkage and Selection: With Application to Machine Learning
Rank-Based Methods for Shrinkage and Selection: read the introductory excerpt online
Below is the text of the book, divided into pages. The system remembers the last page you read, so you can conveniently read «Rank-Based Methods for Shrinkage and Selection» online for free without having to search each time for where you left off. Set a bookmark, and you can return at any moment to the page where you stopped reading.
List of Figures

3 Chapter 3
Figure 3.1 Key shrinkage R-estimators to be considered.
Figure 3.2 The ADRE of the shrinkage R-estimator using the optimal c and URE.
Figure 3.3 The ADRE of the preliminary test (or hard threshold) R-estimator for different Δ² based on λ* = 2 ln(2).
Figure 3.4 The ADRE of nEnet R-estimators.
Figure 3.5 The ADRE of all R-estimators for different Δ².
4 Chapter 4
Figure 4.1 Boxplot and Q–Q plot using ANOVA table data.
Figure 4.2 LS-ridge and ridge R traces for the fertilizer problem from the ANOVA table data.
Figure 4.3 LS-LASSO and LASSO R traces for the fertilizer problem from the ANOVA table data.
Figure 4.4 Effect of variance on shrinkage using ridge and LASSO traces.
Figure 4.5 Hard threshold and positive-rule Stein–Saleh traces for ANOVA table data.
5 Chapter 8
Figure 8.1 Left: the Q–Q plot for the diabetes data set; right: the distribution of the residuals.
6 Chapter 11
Figure 11.1 Sigmoid function.
Figure 11.2 Outlier in the context of logistic regression.
Figure 11.3 LLR vs. RLR with one outlier.
Figure 11.4 LLR vs. RLR with no outliers.
Figure 11.5 LLR vs. RLR with two outliers.
Figure 11.6 Binary classification – nonlinear decision boundary.
Figure 11.7 Binary classification comparison – nonlinear boundary.
Figure 11.8 Ridge comparison of number of correct solutions with n = 337.
Figure 11.9 LLR-ridge regularization showing the shrinking decision boundary.
Figure 11.10 LLR, RLR and SVM on the circular data set with mixed outliers.
Figure 11.11 Histogram of passengers: (a) age and (b) fare.
Figure 11.12 Histogram of residuals associated with the null, LLR, RLR, and SVM cases for the Titanic data set. SVM probabilities were extracted from the sklearn.svm package.
Figure 11.13 RLR-ridge trace for the Titanic data set.
Figure 11.14 RLR-LASSO trace for the Titanic data set.
Figure 11.15 RLR-aLASSO trace for the Titanic data set.
7 Chapter 12
Figure 12.1 Computational unit (neuron) for neural networks.
Figure 12.2 Sigmoid and ReLU activation functions.
Figure 12.3 Four-layer neural network.
Figure 12.4 Neural network example of back propagation.
Figure 12.5 Forward propagation matrix and vector operations.
Figure 12.6 ROC curve and random guess classifier line based on the RLR classifier on the Titanic data...
Figure 12.7 Neural network architecture for the circular data set.
Figure 12.8 LNNs and RNNs on the circular data set (n = 337) with nonlinear decision boundaries.
Figure 12.9 Convergence plots for LNNs and RNNs for the circular data set.
Figure 12.10 ROC plots for LNNs and RNNs for the circular data set.
Figure 12.11 Typical setup for supervised learning methods. The training set is used to build the model.
Figure 12.12 Examples from the test data set with cat = 1, dog = 0.
Figure 12.13 Unrolling of an RGB image into a single vector.
Figure 12.14 Effect of over-fitting, under-fitting and regularization.
Figure 12.15 Convergence plots for LNNs and RNNs (test size = 35).
Figure 12.16 ROC plots for LNNs and RNNs (test size = 35).
Figure 12.17 Ten representative images from the MNIST data set.
Figure 12.18 LNN and RNN convergence traces – loss vs. iterations (×100).
Figure 12.19 Residue histograms for LNNs (0 outliers) and RNNs (50 outliers).
Figure 12.20 Forty-nine potential outlier images reported by RNNs.
Figure 12.21 LNN (0 outliers) and RNN (144 outliers) residue histograms.
Figure 12.22 LNN and RNN confusion matrices and MCC scores.
List of Tables
1 Chapter 1
Table 1.1 Comparison of mean and median on three data sets.
Table 1.2 Examples comparing order and rank statistics.
Table 1.3 Belgium telephone data set.
Table 1.4 Comparison of LS and Theil estimations...
Table 1.5 Walsh averages for the set {0.1, 1.2, 2.3, 3.4, 4.5, 5.0, 6.6, 7.7, 8.8, 9.9, 10.5} (see the sketch after this list).
Table 1.6 The individual terms that are summed in Dn(β) and Ln(β) for the telephone data set.
Table 1.7 The terms that are summed in Dn(θ) and Ln(θ) for the telephone data set.
Table 1.8 The LS and R estimations of slope and intercept...
Table 1.9 Interpretation of L1/L2 loss and penalty functions.
2 Chapter 2
Table 2.1 Swiss fertility data set.
Table 2.2 Swiss fertility data set definitions.
Table 2.3 Swiss fertility estimates and standard errors for least squares (LS) and rank (R).
Table 2.4 Swiss data subset ordering using |t.value|.
Table 2.5 Swiss data models with adjusted R² values.
Table 2.6 Estimates with outliers from diabetes data before standardization.
Table 2.7 Estimates, MSE and MAE for the diabetes data.
Table 2.8 Enet estimates, training MSE and test MSE as a function of α for the diabetes data.
3 Chapter 3
Table 3.1 The ADRE values of ridge for different values of Δ².
Table 3.2 Maximum and minimum guaranteed ADRE of the preliminary test R-estimator for different values of α.
Table 3.3 The ADRE values of the Saleh-type R-estimator for λmax* = 2π and different Δ².
Table 3.4 The ADRE values of the positive-rule Saleh-type R-estimator for λmax* = 2π and different Δ².
Table 3.5 The ADRE of all R-estimators for different Δ².
4 Chapter 4
Table 4.1 Table of (hypothetical) corn crop yield from six different fertilizers.
Table 4.2 Table of p-values from pairwise comparisons of fertilizers.
5 Chapter 8
Table 8.1 The VIF values of the diabetes data set.
Table 8.2 Estimations for the diabetes data*. (The numbers in parentheses are the corresponding standard errors.)
6 Chapter 11
Table 11.1 LLR algorithm.
Table 11.2 RLR algorithm.
Table 11.3 Car data set.
Table 11.4 Ridge accuracy vs. λ₂ with n = 337 (six outliers).
Table 11.5 RLR-LASSO estimates vs. λ₁ with number of correct predictions.
Table 11.6 Sample of Titanic training data.
Table 11.7 Specifications for the Titanic data set.
Table 11.8 Number of actual data entries in each column.
Table 11.9 Cross-tabulation of survivors based on sex.
Table 11.10 Cross-tabulation using Embarked for the Titanic data set.
Table 11.11 Sample of Titanic numerical training data.
Table 11.12 Number of correct predictions for Titanic training and test sets.
Table 11.13 Train/test set accuracy for LLR-ridge. Optimal value at (*).
Table 11.14 Train/test set accuracy for RLR-ridge. Optimal value at (*).
Table 11.15 Train/test set accuracy for LLR-LASSO. Optimal value at (*).
Table 11.16 Train/test set accuracy for RLR-LASSO. Optimal value at (*).
7 Chapter 12
Table 12.1 RNN-ridge algorithm.
Table 12.2 Interpretation of the confusion matrix.
Table 12.3 Confusion matrix for Titanic data sets using RLR...
Table 12.4 Number of correct predictions (percentages) and AUROC of LNN-ridge.
Table 12.5 Input (xij), output (yi) and predicted values p̃(xi) for the image classification problem.
Table 12.6 Confusion matrices for RNNs and LNNs (test size = 35).
Table 12.7 Accuracy metrics for RNNs vs. LNNs (test size = 35).
Table 12.8 Train/test set accuracy for LNNs. F1 score is associated with the test set.
Table 12.9 Train/test set accuracy for RNNs. F1 score is associated with the test set.
Table 12.10 Confusion matrices for RNNs and LNNs (test size = 700).
Table 12.11 Accuracy metrics for RNNs vs. LNNs (test size = 700).
Table 12.12 MNIST training with 0 outliers.
Table 12.13 MNIST training with 90 outliers.
Table 12.14 MNIST training with 180 outliers.
Table 12.15 MNIST training with 270 outliers.
Table 12.16 Table of responses and probability outputs.
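The Walsh averages referenced in Table 1.5 are the pairwise means (xi + xj)/2 taken over all pairs with i ≤ j; the median of these averages is the Hodges–Lehmann estimator of location used throughout rank-based estimation. The following is a minimal illustrative sketch, not code from the book; the helper name walsh_averages is our own:

```python
import itertools
import statistics

def walsh_averages(x):
    """Return all pairwise means (x[i] + x[j]) / 2 with i <= j."""
    return [(a + b) / 2
            for a, b in itertools.combinations_with_replacement(x, 2)]

# The data set listed in Table 1.5.
data = [0.1, 1.2, 2.3, 3.4, 4.5, 5.0, 6.6, 7.7, 8.8, 9.9, 10.5]

w = walsh_averages(data)
print(len(w))                # n(n + 1)/2 = 66 averages for n = 11
print(statistics.median(w))  # Hodges-Lehmann estimate of location
```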
Guide
1 Cover
2 Title page
3 Copyright
4 Dedication
5 Table of Contents
6 List of Figures
7 List of Tables
8 Foreword
9 Preface
10 Begin Reading
11 Bibliography
12 Author Index
13 Subject Index
14 End User License Agreement
Books similar to «Rank-Based Methods for Shrinkage and Selection»

Here is a list of books similar to «Rank-Based Methods for Shrinkage and Selection», selected by title and subject in the hope of giving readers more options for finding new, interesting, as yet unread works.

Discussion and reviews of «Rank-Based Methods for Shrinkage and Selection», and readers' own opinions. Leave your comments and write what you think of the work, its meaning or its main characters. Say what specifically you liked or disliked, and why you think so.