You can apply regression to scenarios that require prediction or causal inference.
You can use regression to understand the extent to which the area of a house affects the housing
prices.
Regression can show how one variable varies with respect to another variable.
For example, the price of a wine bottle can vary depending on the average growing season
temperature.
For example, if the area of the house is an independent variable and the price of the house is a dependent variable, regression can describe how the price varies with the area, but you cannot conclude that a larger area causes a higher price. Regression establishes association, not causation.
Simple Model
The first column provides the price of the house and the second column provides the number of
houses sold. You want to fit a model where the price of a given house can predict the number of
houses sold.
The X axis represents the price of the house and the Y axis represents the number of houses sold.
One way of predicting the number of houses sold is by using the arithmetic mean. This is the Base Model.
Irrespective of the price of a given house, the number of houses sold will be a **constant** according to the model.
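As a quick illustration, here is a minimal sketch of the Base Model, reusing the price/sales data introduced later in this course:

```python
# Base Model: predict the arithmetic mean of houses sold for every price
price = [160, 180, 200, 220, 240, 260, 280]
sale = [126, 103, 82, 75, 82, 40, 20]

base_prediction = sum(sale) / len(sale)  # arithmetic mean of the sales

# Irrespective of the price, the Base Model predicts this same constant
print(base_prediction)  # 528 / 7, about 75.43
```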
Alternate Model
In this model the line describes the data better than the mean.
If this is the best model, why is it not passing through all the points?
Best Model
There is no straight line that can pass through all the points.
Model Representation
The Simple Linear Regression model is represented as y = B0 + B1x + e, where:
y - Dependent variable
x - Independent variable
B0, B1 - Parameters (intercept and slope)
e - Error measure
In this diagram,
The actual values are scattered and the predicted values are along the line.
The difference between actual and predicted values gives the error. This is also called the residual
error (e).
The parameters (Beta0 and Beta1) are chosen to minimize the total error between the actual and
predicted values.
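A minimal sketch of how these parameters can be computed with the standard least-squares formulas, using the price/sales data from this course (variable names here are illustrative):

```python
import numpy as np

# House data used throughout this course
x = np.array([160, 180, 200, 220, 240, 260, 280], dtype=float)  # price
y = np.array([126, 103, 82, 75, 82, 40, 20], dtype=float)       # houses sold

# Least-squares estimates that minimize the total squared error:
# B1 = sum((x - mean_x) * (y - mean_y)) / sum((x - mean_x)^2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
# B0 = mean_y - B1 * mean_x
b0 = y.mean() - b1 * x.mean()

print(b0, b1)  # about 249.857 and -0.793
```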
Measure of Quality
You have seen how to fit a model that best describes the data. However, you can never get a perfect
fit.
How will you measure the error/deviation in a model that is fit to the data?
Sum of Squared Errors
Sum of Squared Errors (SSE) is a measure of the quality of the Regression Line.
If there are n data points, then the SSE is the sum of the squares of the residual errors.
SSE is small for the Line of Best Fit and large for the baseline model.
The line with the minimum SSE is the Regression Line. SSE is sometimes difficult to interpret because its magnitude grows with the number of observations and its units are the square of the units of the dependent variable.
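The SSE comparison can be sketched as follows, using the course's house data and the fitted coefficients discussed later:

```python
import numpy as np

x = np.array([160, 180, 200, 220, 240, 260, 280], dtype=float)  # price
y = np.array([126, 103, 82, 75, 82, 40, 20], dtype=float)       # houses sold

# Residuals against the fitted regression line y = 249.85714 - 0.7928571x
fit_resid = y - (249.85714 - 0.7928571 * x)
sse_fit = np.sum(fit_resid ** 2)

# Residuals against the baseline model (the arithmetic mean)
base_resid = y - y.mean()
sse_base = np.sum(base_resid ** 2)

print(sse_fit, sse_base)  # the fitted line has a much smaller SSE
```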
So, is there a better way to gauge the quality of the Regression Model ?
RMSE
At times, the SSE is difficult to interpret and the units are difficult to comprehend. So, the alternative
measure of quality is the Root Mean Square Error (RMSE).
RMSE shrinks the magnitude of error by taking the square root of SSE divided by the number of
observations (n).
The SSE value for the baseline model is the Total Sum of Squares (SST).
R Square = 1 - (SSE / SST)
R Sq = 0 means the model is just as good as the baseline and there is no improvement over the baseline model.
R Sq = 1 means it is a perfect model. Ideally, you should strive to get the R Sq close to 1. But some models with a low R Sq are also accepted depending on the scenario.
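Both measures can be computed directly from their definitions; a sketch using the course's house data and fitted coefficients:

```python
import numpy as np

x = np.array([160, 180, 200, 220, 240, 260, 280], dtype=float)  # price
y = np.array([126, 103, 82, 75, 82, 40, 20], dtype=float)       # houses sold

sse = np.sum((y - (249.85714 - 0.7928571 * x)) ** 2)  # fitted regression line
sst = np.sum((y - y.mean()) ** 2)                     # baseline model (the mean)

rmse = np.sqrt(sse / len(y))   # shrinks the error back to the units of y
r_square = 1 - sse / sst       # improvement over the baseline model

print(rmse, r_square)  # about 9.94 and 0.91
```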
Model Interpretation
y = 249.85714 - 0.7928571x
For a unit increase in the price of a house, 0.793 fewer houses are sold.
B0 is 249.85714
B1 is -0.7928571
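As a quick sanity check of the fitted equation, here is the prediction at a hypothetical price of 200 (thousand $):

```python
# Coefficients of the fitted model y = B0 + B1 * x
b0, b1 = 249.85714, -0.7928571

price = 200  # hypothetical house price
predicted_sales = b0 + b1 * price
print(predicted_sales)  # about 91.3 houses sold
```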
Playground Setup
To set up the playground and try the code, install the following:
Go to the terminal and enter the command **pip install --user statsmodels**.
If the installation is successful, you should see a message confirming that statsmodels was installed.
Descriptive Statistics
Let us now set up the initial data for our regression analysis. You will need to load the Price and the Number of house units sold into a data frame.
import pandas as pd

price = [160,180,200,220,240,260,280]
sale = [126,103,82,75,82,40,20]
priceDF = pd.DataFrame(price, columns=['x'])
saleDF = pd.DataFrame(sale, columns=['y'])
houseDf = pd.concat([priceDF, saleDF], axis=1)
print(houseDf)
print(priceDF)
Statsmodels Usage
Let us now see how to fit the data and get the regression outputs in Python.
Statsmodels can take input similar to R (pass the variables with the dataframe) or take input as arrays.
Input as dataframe:
import statsmodels.formula.api as smf
smfModel = smf.ols('y~x', data=houseDf).fit()
print(smfModel.summary())
DF Residuals: The degrees of freedom of the residuals (the difference between the number of observations and the number of estimated parameters).
DF Model: The degrees of freedom of the model (the number of parameters estimated in the model, excluding the constant term).
R-squared: Measure that says how well the model has performed with respect to the baseline
model.
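These summary fields can also be read programmatically from the fitted results; a minimal sketch using the same house data (assuming statsmodels is installed):

```python
import pandas as pd
import statsmodels.formula.api as smf

houseDf = pd.DataFrame({'x': [160, 180, 200, 220, 240, 260, 280],
                        'y': [126, 103, 82, 75, 82, 40, 20]})

smfModel = smf.ols('y~x', data=houseDf).fit()

print(smfModel.df_resid)   # observations minus parameters: 7 - 2 = 5
print(smfModel.df_model)   # parameters excluding the constant: 1
print(smfModel.rsquared)   # about 0.91 for this data
```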
Data Prep
Now that you understand how to perform regression analysis using statsmodels, it's time to explore the data set created using the following code:
import pandas as pd
from sklearn.datasets import fetch_california_housing

# load_boston was removed in recent versions of scikit-learn,
# so the California housing data is used here instead
california = fetch_california_housing()
dataset = pd.DataFrame(california.data, columns=california.feature_names)
dataset['target'] = california.target
print(dataset.head())
In the previous topics you have learnt how to predict one variable from another.
In this topic you will learn how to predict a variable using more than one variable.
MLR Representation
The MLR model is represented as y = B0 + B1x1 + B2x2 + ... + Bnxn + e, where:
y - Dependent variable
x1 ... xn - Independent variables
e - Error measure
MLR
Multiple Regression helps in predicting a single variable using multiple independent variables. This improves the model by increasing its accuracy.
In today's complex world, a given phenomenon (variable) is affected by more than one variable. Hence it is advisable to opt for a Multiple Regression Model.
Consider that for a given dependent variable y, there are 4 independent variables x1,x2,x3 and x4
that affect the outcome. A possible way of building a Multiple Regression Model is to first use each
independent variable separately against the dependent variable and measure the R2 value.
Another way of doing this is by incrementally adding each independent variable and measuring the R2 value for each combination.
During this model fitting process, some variables will contribute significantly to the model but some might not. It is better to remove variables that are not of significance to the model. So, how do we check if a variable is significant for the output? Let's take a look at that in the following cards.
More variables can increase the accuracy of the model. But sometimes the incremental value of
adding each new variable might decrease.
According to the Law of Diminishing Returns, the marginal improvement decreases as new variables
are added.
For example, adding the first few variables might each improve the R Sq by about .05. But when you finally add x4 to the model, the R Sq might only move from .85 to .87.
In this process the incremental value has reduced from .05 to .02.
MLR Data
The data has three columns: Price (thousands of $) x, Number of houses sold y, and Number of cars sold z.
MLR Equation
The number of houses sold is a linear function of both the price of a house and the number of cars sold:
y = 252.85965 - 0.824935x + 0.3592748z
A unit increase in the number of cars sold increases the number of houses sold by a proportion of .35.
A unit increase in the price of a house decreases the number of houses sold by a proportion of .82.
B0 is 252.85965
B1 is -0.824935
B2 is 0.3592748
You will see the computation of B0,B1,B2 in the next set of cards.
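Those coefficients can also be reproduced directly with ordinary least squares; a sketch using numpy's lstsq on the house/cars data from this course:

```python
import numpy as np

x = np.array([160, 180, 200, 220, 240, 260, 280], dtype=float)  # price
z = np.array([0, 9, 19, 5, 25, 1, 20], dtype=float)             # cars sold
y = np.array([126, 103, 82, 75, 82, 40, 20], dtype=float)       # houses sold

# Design matrix with a column of ones for the intercept B0
A = np.column_stack([np.ones_like(x), x, z])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

b0, b1, b2 = coeffs
print(b0, b1, b2)  # about 252.86, -0.825, 0.359
```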
Multicollinearity happens when two independent variables in a Multiple Regression model are correlated with each other. This will affect the outcome of your regression model.
The best way to avoid multicollinearity is to omit one of the independent variables that is highly correlated with another. The variable to omit depends on how each variable behaves in the presence of the other variables.
The data contains the Price of the house, the Number of units sold, and the Number of cars sold.
Let us create a dataframe from these lists using the following code.
import pandas as pd

price = [160,180,200,220,240,260,280]
sale = [126,103,82,75,82,40,20]
cars = [0,9,19,5,25,1,20]
priceDF = pd.DataFrame(price, columns=['x'])
saleDF = pd.DataFrame(sale, columns=['y'])
carsDf = pd.DataFrame(cars, columns=['z'])
houseDf = pd.concat([priceDF,saleDF,carsDf],axis=1)
Here we fit the model by giving the dependent (number of units sold) and independent variables
(price of the house, number of cars sold).
import statsmodels.api as sm

X = houseDf.drop(['y'], axis=1)
y = houseDf.y
Xc = sm.add_constant(X)
linear_regression = sm.OLS(y, Xc)
fitted_model = linear_regression.fit()
print(fitted_model.summary())
The Coef column gives the value of the estimated coefficients (B0, B1, B2, etc.).
If a coef is zero, then that independent variable does not help predict the dependent variable.
Std err denotes how much each coefficient varies from the estimated value.
- The smaller the value, the more significant a given variable is to the model.
Constant term (252.85965): the p-value is 0.001, meaning this term is significant in predicting the output.
x - house price: the p-value is 0.098, so this term is also significant in predicting the output.
z - car sales: the p-value is 0.668, so this term is not so significant in predicting the output.
Handling Multicollinearity
A good practice while fitting multiple regression model is to check if there is any correlation among
the independent variables.
Tips
Reject a variable whose correlation with any other variable falls outside the range -0.7 to 0.7.
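A sketch of such a check on the independent variables of the house data from this course (pandas' corr computes pairwise Pearson correlations):

```python
import pandas as pd

houseDf = pd.DataFrame({'x': [160, 180, 200, 220, 240, 260, 280],  # price
                        'z': [0, 9, 19, 5, 25, 1, 20]})            # cars sold

corr = houseDf.corr()
print(corr)

# The x/z correlation is about 0.39, well inside the -0.7 to 0.7 range,
# so neither independent variable needs to be dropped here.
```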
Data Prep
Hope you've understood how to deal with multiple variables and perform multiple regression. Let us consider the dataset created using the following code for further practice.
from sklearn.datasets import fetch_california_housing
# load_boston was removed in recent versions of scikit-learn
california = fetch_california_housing()
Hands On Prep
From the previous card, load all the variables other than target into a variable named X.
Run a correlation among all independent variables to check for multicollinearity.
Occam's razor
When you have two Multiple Regression Models fit for a given data set, if one is simple and another is complex, choose the simple model.
Whenever you are in a Model Building exercise, start with a simple model and then build complexity on top of it.
Data Understanding
Understanding data is the most important step before building a model. You should not apply regression for the sake of applying it. After applying Regression, work to interpret the results and derive the appropriate insights required for further analysis.
Feature Scaling
Your data set might contain independent variables (columns) with different magnitudes. So always bring them to a proper scale for ease of operation. This process is called feature scaling.
You can achieve Feature scaling with the help of either Normalization or Standardization depending
on the magnitude of the variables.
Normalization
import numpy as np
from sklearn import preprocessing

# Sample vector chosen to be consistent with the output shown below
sampleData = np.array([[-3, -1, 4]])
normalized_sampleData = preprocessing.normalize(sampleData)
print(normalized_sampleData)
output: array([[-0.58834841, -0.19611614, 0.78446454]])
Standardization
Standardization is the process of removing the arithmetic mean and dividing by the standard
deviation.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1], [2], [3], [4], [5]])  # five observations of one feature
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)