# Ensemble Techniques

What is XGBoost?

• Optimized gradient-boosting machine learning library
• Originally written in C++
• Has APIs in several languages:

• Python
• R
• Scala
• Julia
• Java

What makes XGBoost so popular?

• Speed and Performance
• The core algorithm is parallelizable across CPU and GPU cores, multiple GPUs, and networks of computers.

## Random Forest

1. Create a Bootstrapped Dataset.

1. Same size as the original.
2. We are allowed to pick the same sample more than once.
2. Create a Decision Tree using the bootstrapped dataset, but only use a random subset of variables (or columns) at each step.

1. Let's say you have [age, height, weight, gender]; you only use 2 variables out of 4.
2. There is a better way to find the optimal number of variables (using Grid Search).
3. Split the Root Node (randomly select 2 variables, say [age, weight]).

1. Use Gini impurity/Entropy to choose which variable gives the better split.
4. And we build the tree as usual, but only considering a random subset of variables (2 in our case) at each step.
5. Now go to step 1 and repeat.

1. Make a new bootstrapped dataset.
2. Build a tree considering a subset of variables at each step.
3. You build hundreds of trees.

Now that we have the trees, how do we use them?

1. Take a new sample and pass it through all the trees.

1. 80 trees say yes, 20 trees say no.
2. Yes received the most votes, so we conclude Yes as the output.

### Bootstrapping the data + aggregating the results is called Bagging
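A quick sketch of the bootstrapping step with NumPy (an illustration added here, not from the original notes): draw n indices with replacement and check how much of the data never gets picked.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # size of the original dataset

# Bootstrap: draw n indices WITH replacement, so duplicates are allowed
boot_idx = rng.integers(0, n, size=n)

# Out-of-Bag samples: the ones that were never drawn
oob_idx = np.setdiff1d(np.arange(n), boot_idx)

# Roughly (1 - 1/n)^n ~ 1/e ~ 36.8% of samples end up out-of-bag
print(f"Out-of-Bag fraction: {len(oob_idx) / n:.3f}")
```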

How do we evaluate it?

Remember we allowed duplicate entries in the bootstrapped dataset, so some samples will not be included at all. Typically, about 1/3 of the original data does not end up in the bootstrapped dataset. This is called the Out-of-Bag Dataset.

1. We run this Out-of-Bag sample through all of the trees that were built without it.
2. Do the same for all Out-of-Bag samples for all of the trees.
3. The proportion of Out-of-Bag samples that were incorrectly classified is the Out-of-Bag Error.
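scikit-learn can compute this for you; a minimal sketch (assuming a toy dataset from `make_classification`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# oob_score=True scores each sample only on the trees
# whose bootstrapped dataset did not include it
clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
clf.fit(X, y)

print(f"Out-of-Bag accuracy: {clf.oob_score_:.3f}")
print(f"Out-of-Bag error:    {1 - clf.oob_score_:.3f}")
```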

OK, we now know how to:

1. Build a Random Forest
2. Use a Random Forest
3. Estimate the accuracy of a Random Forest
4. Now let's go back to step 1 to improve the model

1. Remember when we built the tree, we only used 2 variables to make a decision at each step.
2. Now we change that value to 3 and calculate the Out-of-Bag Error again. (We cannot use 1 or all 4 variables for making decisions.)
3. We test a bunch of different settings and choose the most accurate random forest.
```python
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=50, criterion="gini", max_depth=4,
                             min_samples_split=3, random_state=0)
```

More here: `sklearn.ensemble.RandomForestClassifier`
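The Grid Search mentioned in step 2 can be done with `GridSearchCV`; a hedged sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# Try 1, 2, or 3 variables per split; cross-validation picks the best
grid = GridSearchCV(
    RandomForestClassifier(n_estimators=50, random_state=0),
    param_grid={"max_features": [1, 2, 3]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, f"accuracy={grid.best_score_:.3f}")
```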

## AdaBoost

Main differences from Decision Trees and Random Forests:

1. The trees are usually just a node and two leaves (called stumps).

1. Forest of Stumps.
2. Stumps are technically weak learners.
3. In a Random Forest, the trees vary in size and depth.
2. Some stumps get more say in the final classification than others.

1. In Random Forest, each tree has an equal vote on the final classification.
3. Stumps are made in order: the errors that the first stump makes influence how the second stump is made, and so on.

1. In a Random Forest, each tree is made independently of the others.

How AdaBoost builds a forest of stumps:

1. Take a dataset (8 samples, for example) and assign a sample weight to each sample.

1. All samples get the same weight, which makes all samples equally important.
2. Sample weight = 1 / (total number of samples) = 1/8.
2. Build the stumps and calculate the Gini Index for each.

1. The stump with the lowest Gini index becomes the first stump.
3. Calculate the Amount of Say each stump has.

1. Amount of Say = 1/2 * log((1 - Total Error) / Total Error)
2. The Amount of Say can be positive (a good stump), near zero (neutral), or negative (worse than chance).
4. Increase/Decrease Sample Weight.

1. For incorrectly classified samples (increase weight): New Sample Weight = Sample Weight * e^(Amount of Say)
2. For correctly classified samples (decrease weight): New Sample Weight = Sample Weight * e^(-Amount of Say)
3. Now we normalize the sample weights so they add up to 1.
4. Now update the sample weights with the new sample weights.
5. Now we create a new dataset of the same size, but make sure the new collection favors samples with larger sample weights (duplicates allowed).

1. This way, the errors made by the previous stump influence how the next stump is built.
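Steps 1-4 above can be sketched directly in NumPy (which samples get misclassified here is hypothetical):

```python
import numpy as np

n = 8
weights = np.full(n, 1 / n)  # step 1: every sample starts at 1/8

# Hypothetical: suppose the first stump misclassifies samples 2 and 5
wrong = np.array([False, False, True, False, False, True, False, False])

total_error = weights[wrong].sum()  # 2/8 = 0.25
amount_of_say = 0.5 * np.log((1 - total_error) / total_error)

weights[wrong] *= np.exp(amount_of_say)    # increase misclassified weights
weights[~wrong] *= np.exp(-amount_of_say)  # decrease correct weights
weights /= weights.sum()                   # normalize so they add up to 1

print(weights.round(4))
```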

How do we make a classification?

1. Pass the data through all the stumps and get each prediction (yes or no).
2. Get the Amount of Say for each stump.
3. Classify based on which side has the larger total Amount of Say.

AdaBoost summary:

1. Builds stumps (a node with 2 leaves).
2. Calculates the Amount of Say.
3. Creates a new dataset weighted toward misclassified samples.
4. Repeats until the defined number of stumps is created.
5. Finally, classification by weighted vote.
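scikit-learn implements this as `AdaBoostClassifier`, whose default weak learner is already a depth-1 tree (a stump). A minimal sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# n_estimators = the defined number of stumps from step 4
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(f"Training accuracy: {clf.score(X, y):.3f}")
```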

## Gradient Boost

1. Starts by making a single leaf, instead of a tree or stump.

1. For regression, the leaf is the Average Value of the target.
2. Then Gradient Boost builds a tree.

1. This tree is based on the errors made by the previous tree.
2. Unlike AdaBoost, this tree is usually larger than a stump (GB still restricts the size of the tree).
3. The max number of leaves often used is between 8 and 32.
3. And finally it scales every tree by an equal amount (the Learning Rate).

Gradient Boost step by step:

1. Start with a single leaf (the Average Value).
2. Calculate the residuals
3. Build a tree to predict the residuals.

1. Data ends up in different leaves.
2. Average the residuals if more than one ends up in a leaf.
4. Now we combine the original leaf with the new tree to make a new prediction.
5. New Prediction = Average Value + (Learning Rate * Tree Prediction)
6. Now go to step 2 and repeat.

1. Each time, a new tree is built.
2. The new tree is combined with the previous trees and the initial leaf.
3. Every tree is scaled by the Learning Rate.
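The loop above can be sketched by hand with plain regression trees (synthetic data and a learning rate of 0.1 are assumptions for the example):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X[:, 0] + rng.normal(0, 0.5, size=100)

lr = 0.1
pred = np.full_like(y, y.mean())  # step 1: single leaf = average value
for _ in range(50):
    residuals = y - pred                                       # step 2
    tree = DecisionTreeRegressor(max_depth=3, random_state=0)  # step 3
    tree.fit(X, residuals)
    pred += lr * tree.predict(X)  # steps 4-5: scale by LR and add

print(f"MSE after boosting: {np.mean((y - pred) ** 2):.4f}")
```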

## XGBoost

Steps

1. Start with 0.5 as the initial prediction (for both regression and classification).
2. Calculate the Residuals
3. Build a regression tree to fit the residuals.

1. Unlike GB, XGBoost builds a unique kind of regression tree.
4. XGBoost Trees

1. Each tree starts as a single leaf, and all the residuals go to that leaf.
2. Calculate the Quality Score or Similarity Score for the residuals.
3. Similarity Score = (Sum of Residuals)^2 / (Number of Residuals + lambda), where lambda is a regularization term.
4. Residuals of opposite sign cancel each other out; in the example (lambda = 0), the single leaf has Similarity = 4.
5. Further split the node at a Threshold (a threshold is the average of two adjacent observations).

1. Calculate the similarity score.
6. Calculate the Gain => Left Similarity + Right Similarity - Root Similarity
7. Now calculate the Gain for the other thresholds.

1. Build the tree
2. Calculate similarity score for each leaf
3. And calculate Gain
8. Compare all the Gains

1. Gains -> 120, 14, 56
2. So we use the split with Gain 120 (dosage < 15) for the first branch in the tree.
9. Iterate the same process for the leaves with more than one residual.

1. Step 5
2. Calculate Similarity Scores and Gain.
10. You can further increase the levels, but the default is to allow up to 6 levels.
11. Prune the Tree

1. Pick a value called Gamma.
2. Compute Gain - Gamma: if the value is negative, we prune the branch; if positive, we keep it.
3. Pruning starts from the lowest branch: if a lower branch gives a positive value, we do not remove the branch above it, even if that branch's own value is negative.
12. Sometimes we end up removing the entire tree (extreme pruning).
13. Now we build the tree again, this time changing lambda (the regularization term).

1. Lambda = 1, to prevent overfitting (lambda shrinks Similarity Scores and Gain, making pruning more likely).
14. Note: setting Gamma to 0 does not turn off pruning, because lambda > 0 can still make Gain - Gamma negative.
15. Calculate the output value for each leaves.

1. Output Value = Sum of Residuals / (Number of Residuals + lambda) [similar to the Similarity Score, except we do not square the sum]
2. Calculate the output value for every leaf.
3. Our first tree is complete.
16. Make New Prediction

1. Same as GB
2. New Prediction = 0.5 + (Learning Rate × Output Value)
3. In XGBoost the Learning Rate is called eta; the default is 0.3.
4. New predictions move closer to the observed values, giving smaller residuals.
17. Now build a new tree based on the new residuals.

1. Go to step 4.
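The Similarity Score and Gain calculations from steps 4-8 can be checked with a few lines of Python (the residuals come from the dosage example above):

```python
def similarity(residuals, lam=0.0):
    """Similarity Score = (sum of residuals)^2 / (number of residuals + lambda)."""
    return sum(residuals) ** 2 / (len(residuals) + lam)

def gain(left, right, lam=0.0):
    """Gain = Left Similarity + Right Similarity - Root Similarity."""
    return similarity(left, lam) + similarity(right, lam) - similarity(left + right, lam)

residuals = [-10.5, 6.5, 7.5, -7.5]
print(similarity(residuals))  # residuals nearly cancel -> 4.0

# Split at dosage < 15: one residual goes left, three go right
print(round(gain([-10.5], [6.5, 7.5, -7.5]), 2))  # ~120, the winning split
```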

Summary

1. We calculate Similarity Scores and Gain to determine how to split the data.
2. We prune the tree by calculating the difference between Gain values and Gamma (γ):

1. If negative, prune the branch.
2. If positive, keep it.
3. Calculate output values for the leaves.