12. Freitag, D., Machine learning for information extraction in informal domains. Mach. Learn., 39, 2–3, 169–202, 2000.
13. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., Improving language understanding by generative pre-training, URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.
14. Garcia, V. and Bruna, J., Few-shot learning with graph neural networks, in: Proceedings of the International Conference on Learning Representations (ICLR), 3, 1–13, 2018.
15. Miyato, T., Maeda, S.I., Koyama, M., Ishii, S., Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41, 8, 1979–1993, 2018.
16. Tarvainen, A. and Valpola, H., Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, in: Advances in Neural Information Processing Systems, pp. 1195–1204, 2017.
17. Baldi, P., Autoencoders, unsupervised learning, and deep architectures, in: Proceedings of ICML Workshop on Unsupervised and Transfer Learning, 2012, June, pp. 37–49.
18. Srivastava, N., Mansimov, E., Salakhutdinov, R., Unsupervised learning of video representations using LSTMs, in: International Conference on Machine Learning, 2015, June, pp. 843–852.
19. Niebles, J.C., Wang, H., Fei-Fei, L., Unsupervised learning of human action categories using spatial-temporal words. Int. J. Comput. Vision, 79, 3, 299–318, 2008.
20. Lee, H., Grosse, R., Ranganath, R., Ng, A.Y., Unsupervised learning of hierarchical representations with convolutional deep belief networks. Commun. ACM, 54, 10, 95–103, 2011.
21. Memisevic, R. and Hinton, G., Unsupervised learning of image transformations, in: 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007, June, IEEE, pp. 1–8.
22. Dy, J.G. and Brodley, C.E., Feature selection for unsupervised learning. J. Mach. Learn. Res., 5, Aug, 845–889, 2004.
23. Kim, Y., Street, W.N., Menczer, F., Feature selection in unsupervised learning via evolutionary search, in: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2000, August, pp. 365–369.
24. Shi, Y. and Sha, F., Information-theoretical learning of discriminative clusters for unsupervised domain adaptation, in: Proceedings of the International Conference on Machine Learning, 1, pp. 1079–1086, 2012.
25. Balakrishnan, P.S., Cooper, M.C., Jacob, V.S., Lewis, P.A., A study of the classification capabilities of neural networks using unsupervised learning: A comparison with K-means clustering. Psychometrika, 59, 4, 509–525, 1994.
26. Pedrycz, W. and Waletzky, J., Fuzzy clustering with partial supervision. IEEE Trans. Syst. Man Cybern. Part B (Cybern.), 27, 5, 787–795, 1997.
27. Andreae, J.H., The future of associative learning, in: Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, 1995, November, IEEE, pp. 194–197.
28. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M., Playing Atari with deep reinforcement learning, in: Neural Information Processing Systems (NIPS) ’13 Workshop on Deep Learning, 1, pp. 1–9, 2013.
29. Abbeel, P. and Ng, A.Y., Apprenticeship learning via inverse reinforcement learning, in: Proceedings of the Twenty-First International Conference on Machine Learning, 2004, July, p. 1.
30. Wiering, M. and Van Otterlo, M., Reinforcement learning. Adapt. Learn. Optim., 12, 3, 2012.
31. Ziebart, B.D., Maas, A.L., Bagnell, J.A., Dey, A.K., Maximum entropy inverse reinforcement learning, in: AAAI, vol. 8, pp. 1433–1438, 2008.
32. Rothkopf, C.A. and Dimitrakakis, C., Preference elicitation and inverse reinforcement learning, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2011, September, Springer, Berlin, Heidelberg, pp. 34–48.
33. Anderson, M.J., Carl Linnaeus: Father of Classification, Enslow Publishing, LLC, New York, 2009.
34. Becker, H.S., Problems of inference and proof in participant observation. Am. Sociol. Rev., 23, 6, 652–660, 1958.
35. Zaffalon, M. and Miranda, E., Conservative inference rule for uncertain reasoning under incompleteness. J. Artif. Intell. Res., 34, 757–821, 2009.
36. Sathya, R. and Abraham, A., Comparison of supervised and unsupervised learning algorithms for pattern classification. Int. J. Adv. Res. Artif. Intell., 2, 2, 34–38, 2013.
37. Tao, D., Li, X., Hu, W., Maybank, S., Wu, X., Supervised tensor learning, in: Fifth IEEE International Conference on Data Mining (ICDM’05), 2005, November, IEEE, p. 8.
38. Krawczyk, B., Woźniak, M., Schaefer, G., Cost-sensitive decision tree ensembles for effective imbalanced classification. Appl. Soft Comput., 14, 554–562, 2014.
39. Wang, B., Tu, Z., Tsotsos, J.K., Dynamic label propagation for semi-supervised multi-class multi-label classification, in: Proceedings of the IEEE International Conference on Computer Vision, pp. 425–432, 2013.
40. Valizadegan, H., Jin, R., Jain, A.K., Semi-supervised boosting for multi-class classification, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2008, September, Springer, Berlin, Heidelberg, pp. 522–537.
41. Lapp, D., Heart Disease Dataset, retrieved from https://www.kaggle.com/johnsmith88/heart-disease-dataset.
42. Yildiz, B., Bilbao, J.I., Sproul, A.B., A review and analysis of regression and machine learning models on commercial building electricity load forecasting. Renewable Sustainable Energy Rev., 73, 1104–1122, 2017.
43. Singh, Y., Kaur, A., Malhotra, R., Comparative analysis of regression and machine learning methods for predicting fault proneness models. Int. J. Comput. Appl. Technol., 35, 2–4, 183–193, 2009.
44. Verrelst, J., Muñoz, J., Alonso, L., Delegido, J., Rivera, J.P., Camps-Valls, G., Moreno, J., Machine learning regression algorithms for biophysical parameter retrieval: Opportunities for Sentinel-2 and -3. Remote Sens. Environ., 118, 127–139, 2012.
45. Razi, M.A. and Athappilly, K., A comparative predictive analysis of neural networks (NNs), nonlinear regression and classification and regression tree (CART) models. Expert Syst. Appl., 29, 1, 65–74, 2005.
46. Lu, Q. and Lund, R.B., Simple linear regression with multiple level shifts. Can. J. Stat., 35, 3, 447–458, 2007.
47. Tranmer, M. and Elliot, M., Multiple linear regression, The Cathie Marsh Centre for Census and Survey Research (CCSR), vol. 5, pp. 30–35, 2008.
48. Kutner, M.H., Nachtsheim, C.J., Neter, J., Li, W., Applied Linear Statistical Models, vol. 5, McGraw-Hill Irwin, New York, 2005.
49. Noorossana, R., Eyvazian, M., Amiri, A., Mahmoud, M.A., Statistical monitoring of multivariate multiple linear regression profiles in phase I with calibration application. Qual. Reliab. Eng. Int., 26, 3, 291–303, 2010.
50. Ngo, T.H.D. and La Puente, C.A., The steps to follow in a multiple regression analysis, in: SAS Global Forum, 2012, April, vol. 2012, pp. 1–12.
51. Hargreaves, B.R. and McWilliams, T.P., Polynomial trendline function flaws in Microsoft Excel. Comput. Stat. Data Anal., 54, 4, 1190–1196, 2010.
52. Dethlefsen, C. and Lundbye-Christensen, S., Formulating State Space Models in R With Focus on Longitudinal Regression Models, Department of Mathematical Sciences, Aalborg University, Denmark, 2005.