53. Brown, A.M., A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet. Comput. Methods Programs Biomed., 65, 3, 191–200, 2001.
54. Tripepi, G., Jager, K.J., Dekker, F.W., Zoccali, C., Linear and logistic regression analysis. Kidney Int., 73, 7, 806–810, 2008.
55. Press, S.J. and Wilson, S., Choosing between logistic regression and discriminant analysis. J. Am. Stat. Assoc., 73, 364, 699–705, 1978.
56. Menard, S., Applied Logistic Regression Analysis, vol. 106, Sage, United States of America, 2002.
57. Demartines, P. and Hérault, J., Curvilinear component analysis: A self-organizing neural network for nonlinear mapping of data sets. IEEE Trans. Neural Networks, 8, 1, 148–154, 1997.
58. Max, T.A. and Burkhart, H.E., Segmented polynomial regression applied to taper equations. For. Sci., 22, 3, 283–289, 1976.
59. Bendel, R.B. and Afifi, A.A., Comparison of stopping rules in forward “stepwise” regression. J. Am. Stat. Assoc., 72, 357, 46–53, 1977.
60. Mahmood, Z. and Khan, S., On the use of k-fold cross-validation to choose cutoff values and assess the performance of predictive models in stepwise regression. Int. J. Biostat., 5, 1, 1–21, 2009.
61. Hoerl, A.E., Kannard, R.W., Baldwin, K.F., Ridge regression: Some simulations. Commun. Stat.-Theory Methods, 4, 2, 105–123, 1975.
62. Fearn, T., A misuse of ridge regression in the calibration of a near infrared reflectance instrument. J. R. Stat. Soc.: Ser. C (Appl. Stat.), 32, 1, 73–79, 1983.
63. Hans, C., Bayesian lasso regression. Biometrika, 96, 4, 835–845, 2009.
64. Zou, H. and Hastie, T., Regularization and variable selection via the elastic net. J. R. Stat. Soc.: Ser. B (Stat. Methodol.), 67, 2, 301–320, 2005.
65. Ogutu, J.O., Schulz-Streeck, T., Piepho, H.P., Genomic selection using regularized linear regression models: Ridge regression, lasso, elastic net and their extensions, in: BMC Proceedings, 2012, December, BioMed Central, vol. 6, no. S2, p. S10.
66. Brieuc, M.S., Waters, C.D., Drinan, D.P., Naish, K.A., A practical introduction to Random Forest for genetic association studies in ecology and evolution. Mol. Ecol. Resour., 18, 4, 755–766, 2018.
67. Jurka, T.P., Collingwood, L., Boydstun, A.E., Grossman, E., van Atteveldt, W., RTextTools: A supervised learning package for text classification. R J., 5, 1, 6–12, 2013.
68. Criminisi, A., Shotton, J., Konukoglu, E., Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found. Trends® Comput. Graph. Vis., 7, 2–3, 81–227, 2012.
69. Shi, T. and Horvath, S., Unsupervised learning with random forest predictors. J. Comput. Graph. Stat., 15, 1, 118–138, 2006.
70. Settouti, N., Daho, M.E.H., Lazouni, M.E.A., Chikh, M.A., Random forest in semi-supervised learning (Co-Forest), in: 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), 2013, May, IEEE, pp. 326–329.
71. Gu, L., Zheng, Y., Bise, R., Sato, I., Imanishi, N., Aiso, S., Semi-supervised learning for biomedical image segmentation via forest oriented super pixels (voxels), in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2017, September, Springer, Cham, pp. 702–710.
72. Fiaschi, L., Köthe, U., Nair, R., Hamprecht, F.A., Learning to count with regression forest and structured labels, in: Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 2012, November, IEEE, pp. 2685–2688.
73. Welinder, P., Branson, S., Perona, P., Belongie, S.J., The multidimensional wisdom of crowds, in: Advances in Neural Information Processing Systems, pp. 2424–2432, 2010.
74. Oza, N.C., Online bagging and boosting, in: 2005 IEEE International Conference on Systems, Man and Cybernetics, 2005, October, vol. 3, IEEE, pp. 2340–2345.
75. Wang, G., Hao, J., Ma, J., Jiang, H., A comparative assessment of ensemble learning for credit scoring. Expert Syst. Appl., 38, 1, 223–230, 2011.
76. Yerima, S.Y., Sezer, S., Muttik, I., High accuracy Android malware detection using ensemble learning. IET Inf. Secur., 9, 6, 313–320, 2015.
77. Weinberger, K.Q. and Saul, L.K., Distance metric learning for large margin nearest neighbor classification. J. Mach. Learn. Res., 10, Feb, 207–244, 2009.
78. Keller, J.M., Gray, M.R., Givens, J.A., A fuzzy k-nearest neighbor algorithm. IEEE Trans. Syst. Man Cybern., 15, 4, 580–585, 1985.
79. Jain, R., Camarillo, M.K., Stringfellow, W.T., Drinking Water Security for Engineers, Planners, and Managers, Elsevier, Oxford, United Kingdom, 2014.
80. Meyer-Baese, A. and Schmid, V.J., Pattern Recognition and Signal Analysis in Medical Imaging, Elsevier, Netherlands, 2014.
81. Staelens, S. and Buvat, I., Monte Carlo simulations in nuclear medicine imaging, in: Advances in Biomedical Engineering, pp. 177–209, Elsevier, Netherlands, 2009.
82. Murthy, S.K., Automatic construction of decision trees from data: A multi-disciplinary survey. Data Min. Knowl. Discovery, 2, 4, 345–389, 1998.
83. Criminisi, A., Shotton, J., Konukoglu, E., Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found. Trends® Comput. Graph. Vis., 7, 2–3, 81–227, 2012.
84. Tanha, J., van Someren, M., Afsarmanesh, H., Semi-supervised self-training for decision tree classifiers. Int. J. Mach. Learn. Cybern., 8, 1, 355–370, 2017.
85. Zahir, N. and Mahdi, H., Snow depth estimation using time series passive microwave imagery via genetically support vector regression (case study Urmia Lake Basin). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., 40, 1, 555, 2015.
86. Gualtieri, J.A. and Cromp, R.F., Support vector machines for hyperspectral remote sensing classification, in: 27th AIPR Workshop: Advances in Computer-Assisted Recognition, 1999, January, vol. 3584, International Society for Optics and Photonics, pp. 221–232.
87. Brefeld, U. and Scheffer, T., Co-EM support vector learning, in: Proceedings of the Twenty-First International Conference on Machine Learning, 2004, July, ACM, p. 16.
88. Chang, C.C. and Lin, C.J., LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST), 2, 3, 1–27, 2011.
89. Hsu, C.W. and Lin, C.J., A comparison of methods for multiclass support vector machines. IEEE Trans. Neural Networks, 13, 2, 415–425, 2002.
90. Shawe-Taylor, J. and Cristianini, N., Support Vector Machines, vol. 2, Cambridge University Press, Cambridge, 2000.
91. Marto, A., Hajihassani, M., Jahed Armaghani, D., Tonnizam Mohamad, E., Makhtar, A.M., A novel approach for blast-induced flyrock prediction based on imperialist competitive algorithm and artificial neural network. Sci. World J., 2014, 1–11, 2014.
92. Zhang, Z. and Friedrich, K., Artificial neural networks applied to polymer composites: A review. Compos. Sci. Technol., 63, 14, 2029–2044, 2003.
93. Maind, S.B. and Wankar, P., Research paper on basic of artificial neural network. Int. J. Recent Innovation Trends Comput. Commun., 2, 1, 96–100, 2014.
94. Dahl, G.E., Yu, D., Deng, L., Acero, A., Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Process., 20, 1, 30–42, 2011.