Savo G. Glisic - Artificial Intelligence and Quantum Computing for Advanced Wireless Networks

A practical overview of the implementation of artificial intelligence and quantum computing technology in large-scale communication networks.

The maximum number of features to consider for a split is the number of features examined while searching for the best split; these candidates are selected at random. As a rule of thumb, the square root of the total number of features works well, but values up to 30–40% of the total number of features are worth checking. Higher values can lead to overfitting, but this is case specific.
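As a hedged illustration of this parameter (assuming the scikit-learn API; X and y below are placeholder training data), the number of candidate features per split is controlled by max_features:

# Minimal sketch, assuming scikit-learn; X, y are placeholder training data.
from sklearn.tree import DecisionTreeClassifier

# 'sqrt' evaluates sqrt(n_features) randomly chosen candidates at each split
# (the rule of thumb); a float such as 0.35 evaluates about 35% of the features.
model = DecisionTreeClassifier(max_features='sqrt')
model.fit(X, y)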

2.1.4 Trees in R and Python

There are multiple packages available in R to implement decision trees, such as ctree, rpart, and tree. Here is an example:

> library(rpart)
> x <- cbind(x_train, y_train)
> # grow tree
> fit <- rpart(y_train ~ ., data = x, method = "class")
> summary(fit)
> # Predict output
> predicted <- predict(fit, x_test)

In the code above, y_train and x_train represent dependent and independent variables, respectively, and x represents training data. Similarly, in Python we have the following:

# Import library
# Import other necessary libraries such as pandas, numpy, ...
from sklearn import tree
# Assumes you have X (predictors) and y (target) for the training dataset
# and x_test (predictors) for the test dataset
# Create tree object
model = tree.DecisionTreeClassifier(criterion='gini')
# For classification; the splitting criterion can be 'gini' or 'entropy'
# (information gain); the default is 'gini'
# model = tree.DecisionTreeRegressor() for regression
# Train the model using the training set and check the score
model.fit(X, y)
model.score(X, y)
# Predict output
predicted = model.predict(x_test)

2.1.5 Bagging and Random Forest

Bagging is a technique used to reduce the variance of our predictions by combining the results of multiple classifiers modeled on different subsamples of the same dataset. The steps followed in bagging are as follows (a minimal code sketch is given after the list):

Form multiple datasets: Sampling is done with replacement on the original data, and new datasets are formed. These new datasets can have a fraction of the columns as well as rows, which are generally hyperparameters in a bagging model. Taking row and column fractions less than one helps in making robust models that are less prone to overfitting.

Develop multiple classifiers: Classifiers are built on each dataset. In general, the same classifier is modeled on each dataset, and predictions are made.

Integrate classifiers: The predictions of all the classifiers are combined using a mean, median, or mode value depending on the problem at hand. The combined values are generally more robust than those from a single model. It can be theoretically shown that the variance of the combined predictions is reduced to 1/n (n: number of classifiers) of the original variance, under some assumptions.
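To make these three steps concrete, here is a minimal sketch using scikit-learn's BaggingClassifier (an assumption of this illustration, not code from the text; X_train, y_train, and x_test are placeholders):

# Minimal bagging sketch, assuming scikit-learn; data names are placeholders.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bag = BaggingClassifier(
    DecisionTreeClassifier(),  # the same base classifier on every dataset
    n_estimators=50,           # number of bootstrap datasets / classifiers
    max_samples=0.8,           # row fraction per dataset (sampled with replacement)
    max_features=0.8,          # column fraction per dataset
    bootstrap=True)
bag.fit(X_train, y_train)        # builds one classifier per resampled dataset
predicted = bag.predict(x_test)  # combines the votes (mode) across classifiers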

There are various implementations of bagging models. Random forest is one of them, and we will discuss it next.

In random forest, we grow multiple trees as opposed to a single tree. To classify a new object based on attributes, each tree gives a classification, and we say the tree “votes” for that class. The forest chooses the classification having the most votes (over all the trees in the forest), and in case of regression, it takes the average of outputs from different trees.

In R packages, random forests have simple implementations. Here is an example:

> library(randomForest)
> x <- cbind(x_train, y_train)
> # Fit the model
> fit <- randomForest(y_train ~ ., data = x, ntree = 500)
> summary(fit)
> # Predict output
> predicted <- predict(fit, x_test)
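A comparable sketch in Python (assuming scikit-learn; X, y, and x_test are the same placeholders as before) would be:

# Random forest sketch, assuming scikit-learn; data names are placeholders.
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=500)  # grow 500 trees
model.fit(X, y)                                   # each tree learns on a bootstrap sample
predicted = model.predict(x_test)                 # majority vote over all trees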

2.1.6 Boosting GBM and XGBoost

By definition, “boosting” refers to a family of algorithms that convert weak learners into strong learners. To do so, the predictions of the individual weak learners are combined using methods such as an average/weighted average, or by taking the prediction with the higher vote. Boosting thus combines weak learners (base learners) to form a strong rule. An immediate question that arises is how boosting identifies weak rules.

To find a weak rule, we apply base learning (ML) algorithms with a different distribution. Each time a base learning algorithm is applied, it generates a new weak prediction rule. This is an iterative process. After many iterations, the boosting algorithm combines these weak rules into a single strong prediction rule.

For choosing the right distribution, here are the steps: (i) The base learner takes all the distributions and assigns equal weight or attention to each observation. (ii) If any prediction error is caused by the first base learning algorithm, we pay greater attention to the observations with prediction errors and then apply the next base learning algorithm. (iii) Iterate step (ii) until the limit of the base learning algorithm is reached or a higher accuracy is achieved.

Finally, boosting combines the outputs from weak learners and creates a strong learner, which eventually improves the prediction power of the model. Boosting pays greater attention to examples that are misclassified or have higher errors generated by preceding weak learners.

There are many boosting algorithms that enhance a model’s accuracy. Next, we present more details about the two most commonly used ones: the gradient boosting machine (GBM) and XGBoost.

GBM versus XGBoost:

A standard GBM implementation has no regularization, unlike XGBoost; this regularization also helps XGBoost reduce overfitting.

XGBoost is also known as a “regularized boosting” technique.

XGBoost implements parallel processing and is much faster than GBM.

XGBoost also supports implementation on Hadoop.

XGBoost allows users to define custom optimization objectives and evaluation criteria. This adds a whole new dimension to the model, and there is no limit to what we can do.

XGBoost has an in‐built routine to handle missing values.

The user is required to supply a value that is different from all other observations and pass it as a parameter. XGBoost tries different directions when it encounters a missing value at each node and learns which path to take for missing values in the future.
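As a hedged sketch of this mechanism (assuming the xgboost Python package; the sentinel value −999 and the data names are placeholders):

# Missing-value handling sketch, assuming the xgboost Python API.
import xgboost as xgb

model = xgb.XGBClassifier(missing=-999)  # entries equal to -999 are treated as missing
model.fit(X, y)                          # a default branch direction for missing values
predicted = model.predict(x_test)        # is learned per node and reused at prediction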

A GBM would stop splitting a node when it encounters a negative loss in the split. Thus, it is more of a greedy algorithm.

XGBoost, on the other hand, makes splits up to the maximum depth specified and then prunes the tree backward, removing splits beyond which there is no positive gain. Another advantage is that sometimes a split with negative loss, say −2, may be followed by a split with positive loss, +10. GBM would stop as soon as it encountered the −2; XGBoost, however, goes deeper, sees the combined effect of +8, and keeps both splits.

XGBoost allows the user to run a cross-validation at each iteration of the boosting process, making it easy to obtain the exact optimum number of boosting iterations in a single run. This is unlike GBM, where we have to run a grid search and only a limited number of values can be tested.
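A minimal sketch of this built-in cross-validation (assuming the xgboost Python API; parameter values and data names are placeholders):

# Cross-validation per boosting iteration, assuming the xgboost Python API.
import xgboost as xgb

dtrain = xgb.DMatrix(X, label=y)
params = {'objective': 'binary:logistic', 'max_depth': 2, 'eta': 0.1}
# Training stops once the held-out metric has not improved for 10 rounds,
# so the optimum number of boosting iterations is found in a single run.
cv_results = xgb.cv(params, dtrain, num_boost_round=500,
                    nfold=5, early_stopping_rounds=10)
best_n_rounds = len(cv_results)  # one row per boosting round that was kept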

The user can start training an XGBoost model from the last iteration of the previous run. This can be a significant advantage in certain applications. The sklearn implementation of GBM also has this feature, so the two are evenly matched in this respect.

GBM in R and Python: Let us first start with the overall pseudocode of the GBM algorithm for two classes (a simplified code sketch follows the list):

1 Initialize the outcome.

2 Iterate from 1 to the total number of trees.

2.1 Update the weights for targets based on the previous run (higher for the ones misclassified).

2.2 Fit the model on a selected subsample of the data.

2.3 Make predictions on the full set of observations.

2.4 Update the output with the current results, taking into account the learning rate.

3 Return the final output.
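These steps can be illustrated with a simplified Python sketch (an illustration only, not the book's implementation; X and y are placeholder training data with labels in {0, 1}, and the weight-doubling rule is a deliberately crude stand-in for the reweighting step):

# Simplified two-class boosting loop following the pseudocode above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

n_trees, learning_rate = 100, 0.1
weights = np.ones(len(y)) / len(y)            # step 1: initialize
output = np.zeros(len(y))

for _ in range(n_trees):                      # step 2: iterate over trees
    tree = DecisionTreeClassifier(max_depth=2)
    tree.fit(X, y, sample_weight=weights)     # 2.2: fit a weak learner
    pred = tree.predict(X)                    # 2.3: predict on all observations
    weights *= np.where(pred != y, 2.0, 1.0)  # 2.1: raise weights of misclassified points
    weights /= weights.sum()
    output += learning_rate * pred            # 2.4: update the output with the learning rate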

GBM in R:

> library(caret)
> fitControl <- trainControl(method = "cv", number = 10)  # 10-fold cross-validation
> tune_Grid <- expand.grid(interaction.depth = 2, n.trees = 500, shrinkage = 0.1, n.minobsinnode = 10)
> set.seed(825)
> fit <- train(y_train ~ ., data = train, method = "gbm", trControl = fitControl, verbose = FALSE, tuneGrid = tune_Grid)
> predicted <- predict(fit, test, type = "prob")[,2]
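The excerpt shows only the R version; a hedged Python counterpart using scikit-learn's GradientBoostingClassifier (the parameter mapping to the caret/gbm names is an assumption of this sketch) could look like:

# Python GBM sketch, assuming scikit-learn; data names are placeholders.
from sklearn.ensemble import GradientBoostingClassifier

model = GradientBoostingClassifier(n_estimators=500,     # n.trees
                                   max_depth=2,          # interaction.depth
                                   learning_rate=0.1,    # shrinkage
                                   min_samples_leaf=10)  # n.minobsinnode
model.fit(X_train, y_train)
predicted = model.predict_proba(x_test)[:, 1]  # probability of the positive class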
