Predicting Heart Failure

Predicting Heart Failure: Invasive, Non-Invasive, Machine Learning and Artificial Intelligence Based Methods

1.6.2.1.1 Decision Trees

One of the prominent algorithms for the classification task is the decision tree. The model, represented as a tree data structure, is learned directly from the data: through the tree-induction process, the characteristics of the training data are encoded in the tree. The most widely used decision tree algorithm is C4.5 [29]. Since C4.5 can work with both numerical and categorical input variables, it is applicable to many data sets.
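As a minimal sketch of decision tree classification, the following uses scikit-learn on the Iris benchmark data set (an illustrative choice, not from the original text). Note that scikit-learn implements the CART algorithm rather than C4.5; setting criterion="entropy" approximates C4.5's information-gain-based splitting.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small benchmark data set with numerical features.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# criterion="entropy" splits on information gain, as C4.5 does;
# scikit-learn itself implements CART, not C4.5.
tree = DecisionTreeClassifier(criterion="entropy", random_state=42)
tree.fit(X_train, y_train)
acc = tree.score(X_test, y_test)
```

The tree induced here can be inspected with `sklearn.tree.export_text(tree)`, which prints the learned split thresholds on each feature.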

1.6.2.1.2 Naive Bayes

The Naive Bayes classifier is a probability-based classifier. It estimates the posterior probabilities P(Cj | A) of the test data with the help of the likelihoods P(A | Cj) and priors P(Cj) learned from the training data. The algorithm is based on Bayes' theorem [30], which relates the probabilities P(A | C), P(C | A), P(A), and P(C): the posterior is calculated as P(C | A) = P(A | C) P(C) / P(A). The "naive" part of the approach is the assumption that the input features are conditionally independent given the class. Estimating the full joint distribution would require far more data and yields zero-probability estimates for feature combinations never seen in training; the independence assumption shortens the computation and makes the method relatively robust to sparsity in the data.
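A brief sketch of both the rule itself and a working classifier. The probabilities in the first part are made-up illustrative numbers, and the breast-cancer data set is an illustrative choice not taken from the original text.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Bayes' rule with illustrative (made-up) probabilities:
# posterior P(C|A) = P(A|C) * P(C) / P(A)
p_c, p_a_given_c, p_a = 0.3, 0.8, 0.5
p_c_given_a = p_a_given_c * p_c / p_a  # = 0.48

# A Gaussian Naive Bayes classifier on a benchmark data set;
# each feature is modeled independently given the class.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
nb = GaussianNB().fit(X_train, y_train)
acc = nb.score(X_test, y_test)
```

Despite the strong independence assumption, Gaussian Naive Bayes is often a competitive baseline on continuous medical data.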

1.6.2.1.3 Support Vector Machines

SVMs were first introduced by Vapnik [31]. The technique uses what are called support vectors to distinguish between data points belonging to different classes. The method aims to find the hyperplane that best separates the classes by maximizing the margin between them. In its simplest form, it separates a two-class space with the help of the two equations w^T x + b = +1 and w^T x + b = -1. SVMs were first developed for linear classification; kernel functions for nonlinear spaces were developed later. A kernel function expresses a transformation between linear and nonlinear spaces. Common types include the linear, polynomial, radial basis function, and sigmoid kernels. Depending on the nature of the data, one kernel function can outperform another.
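The kernel comparison can be illustrated on a deliberately nonlinear toy problem (the two-moons data set; an illustrative choice, not from the original text), where the RBF kernel is expected to outperform the linear one:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaving half-moons: not linearly separable.
X, y = make_moons(n_samples=200, noise=0.2, random_state=1)

# A linear kernel can only draw a straight decision boundary...
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
# ...while the RBF kernel implicitly maps the data into a space
# where a separating hyperplane exists.
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)
```

On this data the nonlinear kernel fits the class boundary that the linear kernel cannot, matching the point that kernel choice depends on the nature of the data.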

1.6.2.1.4 K-Nearest Neighbor

The k-nearest neighbor (k-NN) algorithm is a distance-based classifier: to classify a data object whose class is unknown, it looks at the object's nearest neighbors and takes a majority vote. The two prominent parameters of the algorithm are the number of neighbors k and the distance function. There is no exact method for determining the number of neighbors, so the ideal k value is usually found by trial. Cosine similarity or the Manhattan, Euclidean, or Chebyshev distance is used as the distance function. One known problem with k-NN is scale: because the method operates in a geometric space, features with large numeric ranges dominate the distance calculation, so the features are usually normalized or standardized beforehand.
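The scale problem can be demonstrated directly. The wine data set (an illustrative choice, not from the original text) has features whose ranges differ by orders of magnitude, so standardizing them typically improves k-NN markedly:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Wine features span very different scales
# (e.g. proline ~1000 vs. hue ~1), distorting raw distances.
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0)

# k-NN on the raw features: large-range features dominate.
raw_acc = KNeighborsClassifier(n_neighbors=5).fit(
    X_train, y_train).score(X_test, y_test)

# Standardizing each feature to zero mean / unit variance
# removes the scale problem.
scaled_knn = make_pipeline(StandardScaler(),
                           KNeighborsClassifier(n_neighbors=5))
scaled_acc = scaled_knn.fit(X_train, y_train).score(X_test, y_test)
```

The scaled pipeline usually gains tens of percentage points of accuracy over the raw one on this data, which is the scale effect described above.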

1.6.2.1.5 Neural Nets

An artificial neural network (ANN) is a machine learning method that emulates human learning. ANNs are frequently used in classification problems and are also applied to clustering and optimization. Although the simplest neural network model is the perceptron, the multilayer perceptron is more often used in classification problems. Deep learning methods, which have recently been applied to many important tasks, are based on ANNs. The adaptability and parallel-processing capability of ANNs make them a powerful option for many problems.
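A small multilayer perceptron can be sketched with scikit-learn; the digits data set and the network size here are illustrative assumptions, not details from the original text:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 handwritten digit images, 10 classes.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# One hidden layer of 32 units: a small multilayer perceptron
# trained by backpropagation.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0)
mlp.fit(X_train, y_train)
acc = mlp.score(X_test, y_test)
```

A single perceptron could not solve this 10-class image problem; the hidden layer is what lets the network learn a nonlinear decision boundary.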

1.6.2.2 Unsupervised Learning

Unsupervised learning works with unlabeled data; its purpose is to create clusters based on the characteristics of the data. Unlike supervised learning, no class labels are available. After the data are divided into groups according to their similarity or distance, labeling is done with the help of an expert. Two applications that stand out in unsupervised learning are clustering and association rule mining. Clustering is the assignment of data points to groups called clusters. It has two main types: partitional and hierarchical methods. In partitional clustering, a data point can belong to only one cluster; in hierarchical clustering, a point can be located hierarchically in more than one cluster. Association rule mining, which focuses on finding rules based on relationships between events, is used to mine relationships between attributes.

1.6.2.2.1 K-Means

The K-means partitional clustering algorithm was first developed in 1967 by MacQueen [32]. The purpose of the algorithm is to divide the data into K clusters, where the value of K is determined by the user. Each cluster is represented by a center of gravity called the centroid. An iterative method is used to divide the data into clusters: at each step, the distance function determines the cluster to which each data point is assigned, and the centroids are then recomputed.
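The iteration described above can be sketched with scikit-learn's KMeans on synthetic data (the blob data and K=3 are illustrative assumptions):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data with three well-separated groups.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8,
                  random_state=42)

# K is user-chosen; n_init restarts guard against a poor
# random initialization of the centroids.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
labels = km.labels_              # cluster index of each point
centroids = km.cluster_centers_  # the three centers of gravity
```

Internally, `fit` alternates the two steps of the text: assign each point to its nearest centroid, then recompute each centroid as the mean of its assigned points, until the assignments stop changing.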

1.6.2.2.2 Apriori Algorithm

The Apriori algorithm is the most prominent algorithm in association rule mining. It finds frequent patterns in a transaction database and generates rules from them [33]. With the help of the obtained rules, the occurrence of one event can be used to predict the occurrence of another. It is a frequently preferred algorithm precisely because it helps to establish relationships between events.
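A bare-bones sketch of the level-wise Apriori search, written in plain Python on a made-up toy transaction database (all item names and the support threshold are illustrative assumptions):

```python
# A toy transaction database (hypothetical market-basket data).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
    {"bread", "butter", "milk"},
]
min_support = 0.6  # itemset must appear in >= 60% of transactions

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level-wise search: keep frequent 1-itemsets, join them into
# candidate 2-itemsets, and so on, until no candidate survives.
items = {i for t in transactions for i in t}
frequent = {frozenset([i]) for i in items if support({i}) >= min_support}
all_frequent = set(frequent)
k = 2
while frequent:
    candidates = {a | b for a in frequent for b in frequent
                  if len(a | b) == k}
    frequent = {c for c in candidates if support(c) >= min_support}
    all_frequent |= frequent
    k += 1

# A rule such as {bread} -> {milk} is then scored by its confidence.
conf_bread_milk = support({"bread", "milk"}) / support({"bread"})
```

The pruning step relies on the Apriori property: every subset of a frequent itemset is itself frequent, so candidates are only built from itemsets that survived the previous level.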

1.6.3 Machine Learning Supported HF Studies

Machine learning, the most common application of artificial intelligence, reveals patterns in data by continuously improving its ability to learn from data, and it supports the prediction and diagnosis of cardiovascular disease [34]. When a machine learning based HF diagnosis system is considered in terms of input, process, and output modules, the modules can be described as follows. The input module contains the data to be used by the decision support system, such as physical examination data, laboratory results, clinical data, ECG monitoring data, and electrocardiography data. The process module contains the machine learning algorithms, which are mainly supervised and unsupervised learning algorithms. The machine learning algorithms currently used in diagnosing HF include nearest neighbor, self-organizing maps, multilayer perceptrons, classification and regression trees, random forests, SVMs, neural networks, logistic regression, decision trees, clustering, and fuzzy-genetic and neuro-fuzzy expert systems. In the output module, information such as the presence of HF, the risk of HF events, the evaluation of left ventricular deterioration, the response to advanced therapies, and the risk of death is determined.

When the literature on machine learning methods for diagnosing HF (Table 1.2) is examined, the use of HRV stands out in many studies. In one case study, Yang et al. [35] used a scoring method to diagnose HF. With the help of two SVM models, it was first checked whether the person had HF; if the result was normal, the second SVM model came into play and classified the person as healthy or prone to HF. The scores were matched with the SVM model outputs, and diagnostic outputs were obtained according to the score ranges.

The aim of the study by Son et al. [36] was to distinguish CHF from other causes of shortness of breath. The study started with 72 features; rough set and logistic regression techniques were used to reduce the number of variables. The classification accuracy obtained with the features selected by the rough set method was 97.5%, while the accuracy obtained with the features selected by logistic regression was measured as 88.7%.
