
Pandas – Operations

There are various useful pandas operations available that come in really handy in data analysis.

In this lecture we are going to cover the following topics:

  • How to find unique values
  • How to select data with multiple conditions
  • How to apply a function to a particular column
  • How to remove a column
  • How to get column and index names
  • Sorting by column
  • Checking for null values
  • Filling in NaN values with something else
  • Pivot table creation
  • Changing column names in a pre-existing DataFrame
  • The .map() and .apply() functions
  • Getting column names in a DataFrame
  • Changing the order of columns in a DataFrame
  • Adding a new column to an existing DataFrame
  • Data type conversion
  • Date and time conversion

Let's see all these operations in Python.
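Here is a minimal sketch of a few of these operations on a toy DataFrame (the column names and values below are invented for illustration):

    import pandas as pd

    df = pd.DataFrame({'col1': [1, 2, 3, 4],
                       'col2': [444, 555, 666, 444],
                       'col3': ['abc', 'def', 'ghi', 'xyz']})

    # Unique values in a column
    print(df['col2'].unique())     # array of distinct values
    print(df['col2'].nunique())    # count of distinct values

    # Selecting data with multiple conditions
    print(df[(df['col1'] > 2) & (df['col2'] == 444)])

    # Applying a function to a particular column
    df['col1_sq'] = df['col1'].apply(lambda x: x ** 2)

    # Removing a column
    df = df.drop('col1_sq', axis=1)

    # Column and index names
    print(df.columns, df.index)

    # Sorting by a column
    print(df.sort_values(by='col2'))

    # Checking for and filling null values
    print(df.isnull().sum())
    df = df.fillna(0)

    # Renaming a column in an existing DataFrame
    df = df.rename(columns={'col2': 'col2_renamed'})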

Data Science – An Introduction

What is Data science?

It is a study that deals with identifying and extracting meaningful information from data sources with the help of various scientific methods and algorithms. This helps with better decision making, promotional offers, and predictive analytics for any business or organization.

What are the skills required to be a Data scientist?

  • Programming Skills
    • Python
    • R
    • Database query languages
  • Statistics and Probability
  • BI Tools – Tableau, Power BI, Qlik Sense
  • Business Domain Knowledge

Data Science Life Cycle:

(Image: the data science life cycle)

Data Scientist vs. Data Analyst vs. Data Engineer

 Data Analyst:

It is an entry-level job for professionals who are interested in getting into a data-related role. Organizations expect a Data Analyst to understand data handling, modeling, and reporting techniques, along with a strong understanding of the business. A Data Analyst requires good knowledge of visualization tools and databases. The two most popular and common tools used by data analysts are SQL and Microsoft Excel.

It is necessary for a data analyst to have good presentation skills. This helps them communicate the end results to the team and helps the team reach proper solutions.

Data Engineer:

A Data Engineer specializes in preparing data for analytical use. They have a good understanding of data pipelining and performance optimization. A Data Engineer requires a strong technical background and the ability to create and integrate APIs. Data engineering also involves developing platforms and architectures for data processing.

So what skills are required to be a Data Engineer?

  • Data Warehousing & ETL
  • Advanced programming knowledge
  • Knowledge of machine learning concepts
  • In-depth knowledge of SQL/databases
  • Hadoop-based analytics
 
Data Scientist:

A data scientist is a person who applies their knowledge of statistics and machine learning models to make predictions that answer key business questions. They deal with big, messy data sets and are big data wranglers: they apply their math, programming, and statistics skills to clean and organize the data.

Once the data is in clean form, the data scientist applies machine learning algorithms to find hidden insights in the data and draw a meaningful summary out of it.

Skill set for a data scientist:

  • In-depth programming knowledge of SAS/R/Python.
  • Statistics and mathematics concepts.
  • Machine learning algorithms.
  • Python libraries such as Pandas, NumPy, SciPy, Matplotlib, Seaborn, and StatsModels.

Machine Learning – Introduction

What is machine learning?

Machine learning is a field of computer science that gives computers the ability to learn from examples through self-improvement, without being explicitly coded by a programmer. In simple words, ML is a type of artificial intelligence that extracts patterns out of raw data using an algorithm or method. It is among the most exciting technologies of recent years.

ML is used in various tasks like fraud detection, predictive maintenance, portfolio optimization, task automation, clustering, sentiment analysis, image recognition, recommendation systems, and many more.

Prerequisites for Machine learning:

Readers should know basic Python and Python libraries like NumPy, Scikit-learn, SciPy, Matplotlib, and Seaborn. If these topics are new to you, we highly recommend going through the Python for Data Science Tutorial first.

Why Machine Learning?

Let's understand it with an example. Think of a day when the sky is full of dark clouds and thunderstorms. The first thing that comes to your mind is that it's going to rain today.

How did you know that it’s going to rain?

You know it because, in your life, whenever you have seen the sky behave this way, it has rained. That's what machine learning is all about.

A machine is trained to learn from past experiences (the data fed in) with respect to some class of tasks, and its performance on those tasks improves with experience.

Any technology user today has benefited from machine learning. Facial recognition technology allows social media platforms to help users tag and share photos of friends. Optical character recognition (OCR) technology converts images of text into movable type. Recommendation engines, powered by machine learning, suggest which movies or television shows to watch next based on user preferences. Self-driving cars that rely on machine learning to navigate may soon be available to consumers. Risk analysis supports the banking and finance industry. All of these kinds of work happen through machine learning.

Machine Learning Lifecycle:

(Image: the data science process)

What does it hold for the future?

Remember the robot helpers you saw in I, Robot? Imagine those in our day-to-day lives, helping clean up our homes and generally making life even easier.

Traffic annoying you? How about relaxing in the air conditioning of your car while it takes care of getting you to your destination, on its own?

Or how about, as soon as you enter your doctor's office, they have access to all your relevant medical details, enabling them to provide you with a more personalized diagnosis?

The image below shows a few among the hundreds of ways it makes our lives easier.

(Image: the future of machine learning)

Types of Machine Learning

There are several machine learning algorithms and techniques that are used to build models for solving real-life problems using data.

Now let's discuss each type in detail.

Supervised Learning:

The supervised learning technique is used when the dataset is structured. A structured dataset is one that has both input and output parameters. It is called supervised learning because the dataset acts as a teacher whose role is to train the model or the machine. Once the model gets trained, it can start making predictions or decisions when new data is given to it.

Supervised learning is the one where you have input variables (x) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output.

Y = f(X)

The goal is to approximate the mapping function so well that whenever you get new input data (x), the machine can easily predict the output variable (Y) for that data.

(Image: the supervised learning process)

Unsupervised Learning:

Unsupervised learning is where we only have input data (X) and no corresponding output variable, i.e. no Y.

The unsupervised model learns through observation and finds structures in the data. Once the model is given a dataset, it automatically finds patterns and relationships in it by creating clusters. Unsupervised learning is used for raw datasets; its main task is to convert raw data into structured data.

(Image: the unsupervised learning process)

Now let's understand both types of learning in detail with an example.

Suppose you had a basket filled with some different kinds of fruits, and your task is to arrange them into groups. For clarity, let me name the fruits in our basket. We have four types of fruits: apple, banana, grape, and cherry.

SUPERVISED LEARNING:

  • You have already learned about the physical characteristics of the fruits from your previous work.
  • So arranging the same type of fruits in one place is easy now.
  • Your previous work is called the training data in data mining.
  • You already learned things from your training data; this is because of the response variable.
  • A response variable simply means a decision variable.

You can observe the response variable below (FRUIT NAME).

NO. | SIZE  | COLOR | SHAPE                                       | FRUIT NAME
1   | Big   | Red   | Rounded shape with a depression at the top  | Apple
2   | Small | Red   | Heart-shaped to nearly globular             | Cherry
3   | Big   | Green | Long curving cylinder                       | Banana
4   | Small | Green | Round to oval, bunch shape, cylindrical     | Grape
  • Suppose you take a new fruit from the basket; you then look at the size, color, and shape of that particular fruit.
  • If the size is big, the color is red, and the shape is rounded with a depression at the top, you confirm the fruit name as apple and put it in the apple group. Likewise for the other fruits.
  • The job of grouping the fruits is done; happy ending.
  • You can observe in the table that a column was labelled "FRUIT NAME"; this is called the response variable.
  • If you learn things first from training data and then apply that knowledge to the test data (the new fruit), this type of learning is called supervised learning.
  • Classification comes under supervised learning; a minimal Python sketch of this idea follows this list.
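Here is a hedged sketch of the fruit example as a supervised classifier. The numeric encoding is invented for illustration, because scikit-learn expects numeric features:

    from sklearn.tree import DecisionTreeClassifier

    # Training data from the table: size (0 = Small, 1 = Big), color (0 = Green, 1 = Red)
    X_train = [[1, 1],   # Big, Red     -> Apple
               [0, 1],   # Small, Red   -> Cherry
               [1, 0],   # Big, Green   -> Banana
               [0, 0]]   # Small, Green -> Grape
    y_train = ['Apple', 'Cherry', 'Banana', 'Grape']

    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)

    # A new fruit from the basket: Big and Red -> the model predicts 'Apple'
    print(model.predict([[1, 1]]))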

UNSUPERVISED LEARNING

  • Suppose you had a basket filled with some different types of fruits, and your task is to arrange them into groups.
  • This time you don't know anything about the fruits; honestly speaking, this is the first time you have seen them.
  • So how will you arrange them? What will you do first?
  • You will take a fruit and arrange the fruits by considering a physical characteristic of that particular fruit. Suppose you consider color.
  • Then you will arrange them with color as the base condition.
  • Then the groups will be something like this:
  • RED COLOR GROUP: apples & cherries.
  • GREEN COLOR GROUP: bananas & grapes.
  • Now you will take another physical characteristic, such as size.
  • RED COLOR AND BIG SIZE: apples.
  • RED COLOR AND SMALL SIZE: cherries.
  • GREEN COLOR AND BIG SIZE: bananas.
  • GREEN COLOR AND SMALL SIZE: grapes.
  • Job done, happy ending.
  • Here you didn't learn anything beforehand, meaning there is no training data and no response variable.
  • This type of learning is known as unsupervised learning.
  • Clustering comes under unsupervised learning; a sketch of it follows this list.
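As a hedged sketch, the same grouping can be done without any labels using k-means clustering (the feature encoding is again invented for illustration):

    from sklearn.cluster import KMeans

    # Each fruit described only by (size, color); no fruit names are given
    X = [[1, 1], [0, 1], [1, 0], [0, 0],
         [1, 1], [0, 0]]   # a basket with some repeats

    # Ask for 4 groups; the algorithm finds them from the features alone
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
    labels = kmeans.fit_predict(X)
    print(labels)   # cluster ids, e.g. [0 2 1 3 0 3]; groups, but no names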

Semi-Supervised Learning:

As the name suggests, semi-supervised learning is a combination of supervised and unsupervised learning and uses both labelled and unlabelled data for training. We use this type of machine learning for classification, regression, and prediction. Examples of semi-supervised learning are face- and voice-recognition applications.

Reinforcement Learning:

It follows a trial-and-error approach: the algorithm discovers the data through a process of trial and error and finds out which outcome is best.

Three main components make up reinforcement learning: the agent, the environment, and the actions. The agent is the learner or decision-maker, the environment includes everything the agent interacts with, and the actions are what the agent does.

(Image: the reinforcement learning process)

The image below shows the hierarchy of machine learning.

(Image: the hierarchy of machine learning)

Classification vs. Regression

Before we start working on machine learning models, we need to understand the difference between classification and regression problems. Classification and regression are the two major prediction problems usually dealt with in data mining.

Although classification and regression come under the same umbrella of supervised machine learning and share the common concept of using past data to make predictions or take decisions, that is where their similarity ends.

Regression in machine learning:

A regression problem is when the output variable is a real or continuous value, such as “salary” or “weight” or “sales”.

In machine learning, regression algorithms try to calculate the mapping function (f) from the input variables (x) to numerical or continuous output variables (y). In this case, y is a real value, which can be an integer or a floating point value. Therefore, regression prediction problems are usually quantities or sizes.

For example, when provided with a dataset about houses and asked to predict their prices, that is a regression task, because price is a continuous output.

Common regression algorithms are: Linear regression, Support Vector Regression (SVR), and regression trees.

Note – Logistic regression has the name "regression" in it, but it is not a regression algorithm.

Classification in machine learning:

A classification problem is when the output variable is a category, such as “black” or “blue” or “disease” and “no disease”.

In classification algorithms we try to calculate the mapping function (f) from the input variables (x) to discrete or categorical output variables (y).

For example, if we have a house dataset and we have to predict whether the houses "sell for more or less than the recommended retail price", the houses are classified into two discrete categories: above or below the said price.

Common classification algorithms are logistic regression, Naïve Bayes, decision trees, and K Nearest Neighbours.

So following are the main differences:

Basis for comparison         | Classification                                                                            | Regression
Definition                   | The output variable is a category, such as 'blue' or 'black', or disease and no disease  | The output variable is a real or continuous value, such as sales, weight, or salary
Involves prediction of       | Categorical value                                                                         | Continuous value
Algorithms                   | Decision tree, logistic regression, etc.                                                  | Regression tree (random forest), linear regression, etc.
Nature of the predicted data | Unordered                                                                                 | Ordered
Method of calculation        | Measuring accuracy                                                                        | Measuring root mean square error

Linear Regression-Theory

Linear regression is a supervised machine learning technique where we need to predict a continuous output, which has a constant slope.

There are two main types of linear regression:

1. Simple Regression:

With simple linear regression we predict the response using a single feature.

If you recall the line equation (y = mx + c) we studied in school, let's understand what these parameters mean and how this equation works in linear regression.

Y = β0 + β1X + ε

Where,

Y = the dependent variable (the variable we want to predict)
X = the independent variable (the variable we use to make the prediction)
β0 = the intercept term; it is the predicted value you get when X = 0
β1 = the slope term; it explains the change in Y when X changes by 1 unit
ε = the residual value, i.e. the difference between the actual and predicted values

2. Multivariable regression:

It is nothing but an extension of simple linear regression. It attempts to model the relationship between two or more features and a response by fitting a linear equation to the observed data.

A multivariable linear equation might look like this, where the w's represent the coefficients, or weights, that our model will try to learn.

f(x, y, z) = w1·x + w2·y + w3·z

Let's understand it with an example.

For sales prediction at a company, these attributes might include the company's advertising spend on radio, TV, and newspapers.

Sales = w1·Radio + w2·TV + w3·News
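A minimal sketch of fitting such a model with scikit-learn; the advertising-spend numbers below are made up for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical advertising spend (Radio, TV, News) and resulting sales
    X = np.array([[10, 50, 5],
                  [20, 40, 10],
                  [15, 60, 8],
                  [30, 20, 12],
                  [25, 55, 6]])
    y = np.array([22.0, 21.5, 25.0, 18.0, 26.0])

    model = LinearRegression()
    model.fit(X, y)

    print(model.coef_)       # the learned weights w1, w2, w3
    print(model.intercept_)  # the intercept term
    print(model.predict([[12, 45, 7]]))  # predicted sales for a new spend mix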

Linear Regression geometrical representation

So our goal in a linear regression model is to find a line or plane that best fits the data points. Here, best fit means minimizing the sum of errors across our training data.

Types of deliverables in linear regression:

Typically, a business wants to know the following:

  1. They want to know their sales or profit prediction.
  2. Drivers (what drives the sales?)
    • All variables that have a significant beta.
    • Which factors are detrimental/incremental?
    • Of all the drivers, which one should be targeted first? (The variable with the highest absolute beta value.)
  3. How to pick the drivers?
    • To answer this question, you calculate beta*X for each X variable, choose the highest value, and select your driver accordingly; after that, convince the business why you have chosen that particular driver.

So now the question arises: how do we calculate the beta values?

To calculate the beta values we use the OLS (Ordinary Least Squares) method.
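As a sketch, OLS has the closed-form solution beta = (X'X)^-1 X'y; the toy data below is invented:

    import numpy as np

    # Toy data: one explanatory variable plus a column of ones for the intercept
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])
    X = np.column_stack([np.ones_like(x), x])

    # OLS closed form: beta = (X'X)^-1 X'y
    beta = np.linalg.inv(X.T @ X) @ X.T @ y
    print(beta)   # roughly [intercept, slope] = [0.09, 1.99]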

Assumptions of Linear Regression:

1. X variables (explanatory variables) should be linearly related to Y (the response variable):

Meaning:

If you plot a scatter plot between an x variable and Y, most of the data points should lie around a straight line.

How to check?

Draw a scatter plot between each x variable and the y variable.

What happens if the assumption is violated?

The MSE (Mean Squared Error) will be high. MSE is nothing but the average of the squared errors between the predicted values and the actual values. It can be written as:

MSE = (1/N) * Σ (Yi − (a1·xi + a0))²

Where,

N = total number of observations
Yi = actual value
(a1·xi + a0) = predicted value

What to do if variable is not linear?

  • Drop the variable – but in this case we will lose its information.
  • Take log(x+1) of the x variable.

2. The residuals or the Y variable should be normally distributed:

Meaning:

The residuals (errors) or Y, when plotted as a histogram, should produce a bell-shaped curve.

Residuals: the distance between the actual and predicted values is called the residual. If the observed points are far from the regression line, the residuals will be high and so the cost function will be high. If the scatter points are close to the regression line, the residuals will be small, and hence so will the cost function.

How to check?

Plot a histogram of Y; if the histogram produces a bell-shaped curve, then it follows normality.

Or we can also use a Q-Q plot (quantile-quantile plot) of the residuals.

What happens if the assumption is violated?

It means all the P values have been calculated wrongly.

What to do if assumption is violated?

In that case we need to transform Y in such a way that it becomes normal. To do that we take the log of Y.

3. There should not be any relationship between the X variables (i.e. no multicollinearity):

Meaning:

The X variables should not have any linear relationship among themselves. It's obvious that we don't want the same information repeated.

How to check?

  1. Calculate the correlation of every X with every other X variable.
  2. The second method is to calculate the VIF (Variance Inflation Factor); a sketch of this check follows below.
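A hedged sketch of the VIF check using statsmodels' variance_inflation_factor; the DataFrame below is invented, with x2 deliberately near-collinear with x1:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    X = pd.DataFrame({'x1': rng.normal(size=100)})
    X['x2'] = X['x1'] * 0.9 + rng.normal(scale=0.1, size=100)  # nearly collinear with x1
    X['x3'] = rng.normal(size=100)                             # independent

    # VIF for each column; values above 10 flag multicollinearity
    for i, col in enumerate(X.columns):
        print(col, variance_inflation_factor(X.values, i))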

What happens if the assumption is violated?

The signs of your beta values will fluctuate.

What to do if assumption is violated?

Drop those X variables whose VIF is greater than 10 (VIF > 10).

4. The variance of the errors should remain constant over the values of Y (homoscedasticity / no heteroskedasticity):

Meaning:

The spread of the residuals should remain constant across the values of Y.

How to check?

Draw a scatter plot of the residuals vs. Y.

What happens if the assumption is violated?

Your P values will not be accurate.

What to do if assumption is violated?

In that case we need to transform Y in such a way that it becomes normal. To do that we take the log of Y.

5. There should not be any auto-correlation between the residuals.

Meaning:

Correlation of the residuals with the lead residuals, where lead residual means the next calculated residual.

How to check?

Use the DW statistic (Durbin-Watson statistic); a sketch of this check follows below.

If the DW statistic is close to 2, then there is no autocorrelation.
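A hedged sketch of the Durbin-Watson check with statsmodels; the data is invented:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(1)
    x = rng.normal(size=100)
    y = 2.0 * x + rng.normal(size=100)

    model = sm.OLS(y, sm.add_constant(x)).fit()
    dw = durbin_watson(model.resid)
    print(dw)   # a value near 2 suggests no autocorrelation in the residuals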

What happens if the assumption is violated?

Your P values will not be accurate.

What to do if assumption is violated?

Understand the reason why it is happening.

If the autocorrelation is due to Y, then you cannot build a linear regression model.

If the autocorrelation is due to an X variable, then drop that X variable.

How to check Model Performance?

The goodness of fit determines how well the regression line fits the set of observations. The process of finding the best model out of various models is called optimization. It can be assessed by the method below:

R-squared method:

  • R-squared is a statistical method that determines the goodness of fit.
  • It measures the strength of the relationship between the dependent and independent variables on a scale of 0-100%.
  • A high value of R-squared means there is less difference between the predicted and actual values, and hence represents a good model.
  • It is also called the coefficient of determination, or the coefficient of multiple determination for multiple regression.
  • It can be calculated from the formula below:
R² = 1 − (SS_res / SS_tot) = 1 − Σ(Yi − Ŷi)² / Σ(Yi − Ȳ)², where Ŷi is the predicted value and Ȳ is the mean of the actual values.
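A hedged sketch computing R-squared both from the formula above and with scikit-learn's r2_score; the values are invented:

    import numpy as np
    from sklearn.metrics import r2_score

    y_true = np.array([3.0, 5.0, 7.0, 9.0])
    y_pred = np.array([2.8, 5.3, 6.9, 9.2])

    # By the formula: R^2 = 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    print(1 - ss_res / ss_tot)

    # Same value via scikit-learn
    print(r2_score(y_true, y_pred))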

In the next lecture we will see how to implement linear regression in Python.

Linear regression with python

Company Objective:

Let's suppose you just got some contract work with an Ecommerce company based in New York City that sells clothing online, but also offers in-store style and clothing advice sessions. Customers come into the store, have sessions/meetings with a personal stylist, then go home and order the clothes they want on either the mobile app or the website.

The company is trying to decide whether to focus their efforts on their mobile app experience or their website. They’ve hired you on contract to help them figure it out! Let’s get started!

Just follow the steps below to analyze the customer data (it's fake, don't worry, I didn't give you real credit card numbers or emails). Click here to download

Click here to download the .ipynb notebook
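If you want to sketch the workflow before opening the notebook, it might look like the following. The file name and column names below are assumptions about the downloaded data, not guaranteed; adjust them to match what you actually download:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression

    # Assumed file and columns; hypothetical names for illustration
    df = pd.read_csv('Ecommerce Customers.csv')
    features = ['Avg. Session Length', 'Time on App',
                'Time on Website', 'Length of Membership']
    X = df[features]
    y = df['Yearly Amount Spent']

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=101)

    model = LinearRegression()
    model.fit(X_train, y_train)

    # Comparing coefficients hints whether the app or the website drives spend
    print(pd.Series(model.coef_, index=features))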

Logistic Regression-Theory

These days in analytics interviews, most interviewers ask questions about two algorithms: logistic and linear regression. Is there a reason behind this?

Yes, there is a reason: these algorithms are very easy to interpret. I believe you should have an in-depth understanding of them.

In this article we will learn about logistic regression in detail. So let's dive deep into logistic regression.

What is Logistic Regression?

Logistic regression is a classification technique that helps predict the probability of an outcome that can only have two values. Logistic regression is used when the dependent variable (target) is categorical.

Types of logistic Regression:

  • Binary (pass/fail or 0/1)
  • Multinomial (cats, dogs, sheep)
  • Ordinal (low, medium, high)

Unlike linear regression, logistic regression produces a logistic curve, which is limited to values between 0 and 1. Logistic regression is similar to linear regression, but the curve is constructed using the natural logarithm of the "odds" of the target variable, rather than the probability.

What is the Sigmoid Function?

To map predicted values with probabilities, we use the sigmoid function. The function maps any real value into another value between 0 and 1. In machine learning, we use sigmoid to map predictions to probabilities.

S(z) = 1 / (1 + e^(−z))

Where:

  • S(z) = output between 0 and 1 (probability estimate)
  • z = input to the function (your algorithm's prediction, e.g. b0 + b1*x)
  • e = base of the natural log

(Image: the sigmoid curve, an S-shaped curve bounded between 0 and 1)
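A minimal sketch of the sigmoid function in Python:

    import numpy as np

    def sigmoid(z):
        """Map any real value z to a probability between 0 and 1."""
        return 1.0 / (1.0 + np.exp(-z))

    print(sigmoid(0))                         # 0.5, the decision boundary
    print(sigmoid(np.array([-4, -1, 1, 4])))  # values squashed into (0, 1)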

In linear regression, we use the Ordinary Least Squares (OLS) method to determine the best coefficients to attain a good model fit, but in logistic regression, we use the maximum likelihood method to determine the best coefficients and eventually a good model fit.

How does the maximum likelihood method work?

For a binary classification (1/0), maximum likelihood tries to find the values of b0 and b1 such that the resultant probabilities are close to either 1 or 0.

Logistic Regression Assumption:

I found a very good consolidated list of assumptions on the Towards Data Science website, which I am putting here.

  • Binary logistic regression requires the dependent variable to be binary.
  • For a binary regression, the factor level 1 of the dependent variable should represent the desired outcome.
  • Only meaningful variables should be included.
  • The independent variables should be independent of each other. That is, the model should have little or no multicollinearity.
  • The independent variables are linearly related to the log of odds.
  • Logistic regression requires quite large sample sizes.

Performance evaluation methods for logistic regression:

Akaike Information Criteria (AIC):

We can say AIC works as a counterpart of adjusted R-squared in multiple regression. The rule of thumb for AIC is: the smaller, the better. AIC penalizes an increasing number of coefficients in the model; in other words, adding variables that do not improve the fit will increase the AIC. This helps to avoid overfitting.

Measuring the AIC of a single model is not fruitful. To use AIC correctly, build 2-3 logistic models and compare their AICs; the model with the lowest AIC will be relatively better.

Null Deviance and Residual Deviance:

  • The null deviance is calculated from the model with no features, i.e. only the intercept. The null model predicts the class via a constant probability.
  • The residual deviance is calculated from the model having all the features. For both null and residual deviance, the lower the value, the better the model.

Confusion Matrix:

It is nothing but a tabular representation of actual vs. predicted values. It helps us find the accuracy of the model and avoid overfitting. This is how it looks:

                 Predicted: 0    Predicted: 1
    Actual: 0    TN              FP
    Actual: 1    FN              TP

So now we can calculate the accuracy: (TP + TN) / (TP + TN + FP + FN).

True Positive Rate (TPR):

It shows how many positive values, out of all the positive values, have been correctly predicted.

The formula to calculate the true positive rate is TP / (TP + FN). Or TPR = 1 − False Negative Rate. It is also known as Sensitivity or Recall.

False Positive Rate (FPR):

It shows how many negative values, out of all the negative values, have been incorrectly predicted.

The formula to calculate the false positive rate is FP / (FP + TN). Also, FPR = 1 − True Negative Rate.

True Negative Rate (TNR):

It represents how many negative values, out of all the negative values, have been correctly predicted. The formula to calculate the true negative rate is TN / (TN + FP). It is also known as Specificity.

False Negative Rate (FNR):

It indicates how many positive values, out of all the positive values, have been incorrectly predicted. The formula to calculate the false negative rate is FN / (FN + TP).

Precision:

It indicates how many values, out of all the predicted positive values, are actually positive. The formula is TP / (TP + FP).

F Score:

The F score is the harmonic mean of precision and recall. It lies between 0 and 1; the higher the value, the better the model. The formula is 2 * ((precision * recall) / (precision + recall)).
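A hedged sketch that computes the confusion matrix and the rates above with scikit-learn; the labels are invented:

    from sklearn.metrics import confusion_matrix, precision_score, f1_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

    # For binary labels, ravel() returns tn, fp, fn, tp in that order
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print('TPR (recall):', tp / (tp + fn))
    print('FPR:', fp / (fp + tn))
    print('TNR (specificity):', tn / (tn + fp))
    print('Precision:', precision_score(y_true, y_pred))
    print('F score:', f1_score(y_true, y_pred))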

Receiver Operator Characteristic (ROC):

ROC is used to determine the accuracy of a classification model. It summarizes the model's accuracy using the Area Under the Curve (AUC): the higher the area, the better the model. The ROC curve is plotted with the True Positive Rate on the Y axis and the False Positive Rate on the X axis.

In the graph below, the yellow line represents the ROC curve at a 0.5 threshold. At this point, sensitivity = specificity.
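A hedged sketch of plotting an ROC curve and computing AUC with scikit-learn; the scores are invented:

    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true = [0, 0, 1, 1, 0, 1, 1, 0]
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3]  # predicted probabilities

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    print('AUC:', roc_auc_score(y_true, y_score))

    plt.plot(fpr, tpr)               # the ROC curve
    plt.plot([0, 1], [0, 1], '--')   # the chance line
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.show()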

Decision Tree – Theory

A decision tree is a very simple yet powerful algorithm for classification and regression. As the name suggests, it has a tree-like structure. It is a non-parametric technique. A decision tree typically starts with a single node, which branches into possible outcomes. Each of those outcomes leads to additional nodes, which branch off into other possibilities. This gives it a tree-like shape.

An example of a decision tree can be explained using the binary tree below. Let's suppose you want to predict whether a person is fit given information like their age, eating habits, physical activity, etc. The decision nodes here are questions like "What's the age?", "Does he exercise?", "Does he eat a lot of pizza?", and the leaves are outcomes like "fit" or "unfit". In this case it is a binary classification problem (a yes-or-no type of problem).

There are two main types of Decision Trees:

Classification trees (Yes/No types)

What we’ve seen above is an example of classification tree, where the outcome was a variable like ‘fit’ or ‘unfit’. Here the decision variable is categorical.

Regression trees (Continuous data types)

Here the decision or the outcome variable is Continuous, e.g. a number like 123.

(Image: an example decision tree with root node "Age < 30 ?"; source: google.com)

The top-most item, in this example "Age < 30 ?", is called the root. It's where everything starts from. Branches are what we call each line. A leaf is everything that isn't the root or a branch.

A general algorithm for a decision tree can be described as follows:

  1. Pick the best attribute/feature. The best attribute is the one that best splits or separates the data.
  2. Ask the relevant question.
  3. Follow the answer path.
  4. Go to step 1 until you arrive at the answer.

Terms used with Decision Trees:

  1. Root Node – it represents the entire population or sample, and it further gets divided into two or more homogeneous sets.
  2. Splitting – the process of dividing a node into two or more sub-nodes.
  3. Decision Node – a sub-node that splits into further sub-nodes is called a decision node.
  4. Leaf/Terminal Node – a node that does not split further is called a leaf node.
  5. Pruning – when we remove sub-nodes of a decision node, this process is called pruning.
  6. Branch/Sub-tree – a sub-section of the entire tree is called a branch or sub-tree.
  7. Parent and Child Node – a node that is divided into sub-nodes is called the parent node of those sub-nodes, whereas the sub-nodes are the children of the parent node.

Let's understand the above terms with the image below.

(Image: decision tree terminology; source: google.com)

Types of Decision Trees

  1. Categorical Variable Decision Tree – a decision tree that has a categorical target variable is called a categorical variable decision tree.
  2. Continuous Variable Decision Tree – a decision tree that has a continuous target variable is called a continuous variable decision tree.

Advantages of Decision Tree

  1. Easy to understand – the algorithm is very easy to understand, even for people from a non-analytical background. A person without statistical knowledge can interpret the trees.
  2. Useful in data exploration – it is one of the fastest ways to identify the most significant variables and the relationships between variables. It helps us identify those variables that have better power to predict the target variable.
  3. Decision trees do not require much effort from the user for data preparation.
  4. The algorithm is not affected by outliers or missing values to an extent, so it requires less data-cleaning effort compared to other models.
  5. The model can handle both numerical and categorical variables.
  6. The number of hyper-parameters to be tuned is almost nil.

Disadvantages of Decision Tree

  1. Overfitting – this is the most common problem with decision trees. It is resolved by setting constraints on model parameters and by pruning. Overfitting is a phenomenon where your model creates a complex tree that does not generalize the data well.
  2. Small variations in the data can result in a completely different tree, which makes the model unstable. This problem is called variance, and it needs to be lowered by methods like bagging and boosting.
  3. If some class dominates your data, the decision tree learner can create a biased tree. So it is recommended to balance the dataset prior to fitting the decision tree.
  4. Calculations can become complex when there are many class labels.

Decision Tree Flowchart

(Image: decision tree flowchart; source: google.com)

How does a tree decide where to split?

In a decision tree, the splits we make affect the accuracy of the model. The decision criteria differ for classification and regression trees. A decision tree splits the nodes on all available variables and then selects the split that results in the most homogeneous sub-nodes.

The algorithm selection is also based on the type of target variable. The two most commonly used algorithms in decision trees are:

  1. CHAID – Chi-square Automatic Interaction Detector
  2. CART – Classification and Regression Trees

Let’s discuss both methods in detail

CHAID – Chi-square Automatic Interaction Detector

It is an algorithm that finds the statistical significance of the differences between sub-nodes and the parent node. It works with categorical target variables such as "yes" or "no".

The algorithm follows these steps:

  1. Iterate over all available x variables.
    1. Check if the variable is numeric.
    2. If the variable is numeric, make it categorical by deciles and percentiles.
    3. Figure out all possible cuts.
    4. For each possible cut, do a Chi-Square test and store the P value.
    5. Choose the cut that gives the least P value.
  2. Cut the data using the variable and the cut that give the least P value.

CART – Classification and regression trees

There are basically two subtypes for this algorithm.

Gini index:

It says that if we select two items from a population at random, then they must be of the same class, and the probability of this is 1 if the population is pure. It works with a categorical target variable, "Success" or "Failure".

Gini = 1 − p² − (1 − p)², where p is the probability of success

Gain = Gini of the parent leaf − weighted average of the Gini of the child nodes (weights are proportional to the population of each child node)

Steps to calculate Gini for a split (see the sketch after this list):

  1. Iterate over all available x variables.
    1. Check if the variable is numeric.
    2. If the variable is numeric, make it categorical by deciles and percentiles.
    3. Figure out all possible cuts.
    4. Calculate the gain for each split.
    5. Choose the cut that gives the highest gain.
  2. Cut the data using the variable and the cut that give the maximum gain.
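A minimal sketch of the Gini calculation for one candidate split; the counts are invented:

    def gini(p):
        """Gini impurity of a node with success probability p."""
        return 1 - p ** 2 - (1 - p) ** 2

    # Parent node: 10 successes out of 20 observations
    parent = gini(10 / 20)

    # A candidate cut gives a left child (7/10 success) and a right child (3/10)
    left, right = gini(7 / 10), gini(3 / 10)
    weighted = (10 / 20) * left + (10 / 20) * right

    gain = parent - weighted
    print(parent, weighted, gain)   # the cut with the highest gain wins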

Entropy Tree:

To understand entropy trees we first need to understand what entropy is.

Entropy – entropy basically measures the level of impurity in a group of examples. If the sample is completely homogeneous, the entropy is zero, and if the sample is equally divided (50%-50%), it has an entropy of one.

Entropy = −p·log₂(p) − q·log₂(q)

Here p and q are the probabilities of success and failure, respectively, in that node. Entropy is also used with categorical target variables. We choose the split that has the lowest entropy compared to the parent node and other splits. The lower the entropy, the better.

Gain = entropy of the parent leaf − weighted average of the entropy of the child nodes (weights are proportional to the population of each child node)

Steps to calculate entropy for a split (see the sketch after this list):

  1. Iterate over all available x variables.
    1. Check if the variable is numeric.
    2. If the variable is numeric, make it categorical by deciles and percentiles.
    3. Figure out all possible cuts.
    4. Calculate the gain for each split.
    5. Choose the cut that gives the highest gain.
  2. Cut the data using the variable and the cut that give the maximum gain.
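A minimal sketch of the entropy and gain calculation; the counts are invented:

    import math

    def entropy(p):
        """Entropy of a node with success probability p (0·log 0 treated as 0)."""
        if p in (0.0, 1.0):
            return 0.0
        q = 1 - p
        return -p * math.log2(p) - q * math.log2(q)

    parent = entropy(10 / 20)                      # 1.0 for a 50-50 node
    weighted = 0.5 * entropy(7 / 10) + 0.5 * entropy(3 / 10)
    print('gain:', parent - weighted)              # higher gain = better split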

Decision Tree Regression

As we discussed above, we can also solve regression problems with a decision tree. So let's see what the steps are.

The following steps are involved in the algorithm:

  1. Iterate over all available x variables.
    1. Check if the variable is numeric.
    2. If the variable is numeric, make it categorical by deciles and percentiles.
    3. Figure out all possible cuts.
    4. For each cut, calculate the MSE.
    5. Choose the cut and the variable that give the minimum MSE.
  2. Cut the data using the variable and the cut that give the minimum MSE. A sketch of this with scikit-learn follows below.
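A hedged sketch of a regression tree with scikit-learn, which implements this MSE-based splitting internally; the data is invented:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Toy data: one feature, continuous target
    X = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
    y = np.array([1.1, 1.9, 3.2, 4.1, 8.0, 8.9, 10.2, 11.1])

    # max_depth is the user-defined depth from the stopping criteria below
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, y)
    print(tree.predict([[4.5]]))   # the average of the training targets in that leaf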

Stopping Criteria of a Decision Tree

  1. Pure node – if the tree finds a pure node, that particular leaf will stop growing.
  2. User-defined depth.
  3. Minimum observations in a node.
  4. Minimum observations in a leaf.