Simple linear regression

A simple linear regression is a linear regression in which there is only one covariate (predictor variable).

Simple linear regression is used to evaluate the linear relationship between two variables; one example is the relationship between muscle strength and lean body mass. Equivalently, simple linear regression produces an equation that can be used to predict or estimate a dependent variable from an independent variable.

Given a sample  (Y_i, X_i), \, i = 1, \ldots, n , the regression model is given by

Y_i = a + bX_i + \varepsilon_i

where  Y_i is the dependent variable, a is the y-intercept, b is the slope (gradient) of the line,  X_i is the independent variable, and  \varepsilon_i is a random error term associated with each observation.

The strength of the linear relationship between the two variables (i.e. dependent and independent) can be measured using a correlation coefficient, e.g. the Pearson product-moment correlation coefficient.

Estimating the regression line

The parameters of the linear regression model,  Y_i = a + bX_i + \varepsilon_i , can be estimated using the method of ordinary least squares. This method finds the line that minimizes the sum of the squares of errors,  \sum_{i = 1}^n \varepsilon_{i}^2 .

The minimization problem can be solved using calculus, producing the following formulas for the estimates of the regression parameters:

 \hat{b} = \frac {\sum_{i=1}^{n}  (x_{i} - \bar{x})(y_{i} - \bar{y}) }  {\sum_{i=1}^{n} (x_{i} - \bar{x}) ^2}
 \hat{a} = \bar{y} - \hat{b} \bar{x}
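
These two formulas translate directly into code. The following is a minimal sketch, assuming Python with NumPy (the text does not prescribe any language); the function name ols_fit is illustrative, not from the original.

```python
# Minimal sketch of the least-squares formulas above; Python with NumPy is an
# assumption here, and the function name ols_fit is illustrative.
import numpy as np

def ols_fit(x, y):
    """Return (a_hat, b_hat) for the model y = a + b*x fitted by ordinary least squares."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_bar, y_bar = x.mean(), y.mean()
    # Slope: sum of centered cross-products over the sum of centered squares of x.
    b_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    # Intercept: chosen so the fitted line passes through the point of means.
    a_hat = y_bar - b_hat * x_bar
    return a_hat, b_hat
```

For instance, ols_fit([1, 2, 6], [-1, 4, 3]) returns (0.5, 0.5), matching the numerical example further below.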

The ordinary least squares estimates have the following properties (a numerical check follows the list):

1. The fitted line passes through the point  (\bar{x},\bar{y}) . This is easily seen by rearranging the expression  \hat{a} = \bar{y} - \hat{b} \bar{x} as  \bar{y} = \hat{a} + \hat{b} \bar{x} , which shows that the point  (\bar{x},\bar{y}) satisfies the fitted regression equation.

2. The sum of the residuals is equal to zero, provided the model includes a constant. To see why, minimize  \sum_{i = 1}^n \varepsilon_i^2 = \sum_{i = 1}^n (y_i - a - b x_i)^2 with respect to a by taking the partial derivative:

 \frac{\partial}{\partial a} \sum_{i = 1}^n \varepsilon_i^2 = -2 \sum_{i = 1}^n (y_i - a - b x_i)
Setting this partial derivative to zero and noting that  \hat{\varepsilon}_i = y_i - \hat{a} - \hat{b} x_i yields  \sum_{i = 1}^n \hat{\varepsilon}_i = 0 as desired.

3. The linear combination of the residuals in which the coefficients are the x-values is equal to zero:  \sum_{i = 1}^n x_i \hat{\varepsilon}_i = 0 . This follows analogously from setting the partial derivative with respect to b to zero.

4. The estimates are unbiased, i.e.  E(\hat{a}) = a and  E(\hat{b}) = b , under the standard assumption that the error terms have zero mean and are independent of the x-values.
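
Properties 1-3 are easy to verify numerically. The sketch below assumes Python with NumPy and uses simulated data chosen purely for illustration (property 4 concerns expectations and is not checked here).

```python
# Hypothetical check of properties 1-3 on simulated data (Python/NumPy assumed).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.5 + 2.0 * x + rng.normal(size=50)

x_bar, y_bar = x.mean(), y.mean()
b_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
a_hat = y_bar - b_hat * x_bar
residuals = y - a_hat - b_hat * x

print(np.isclose(y_bar, a_hat + b_hat * x_bar))  # property 1: line passes through (x_bar, y_bar)
print(np.isclose(residuals.sum(), 0.0))          # property 2: residuals sum to zero
print(np.isclose((x * residuals).sum(), 0.0))    # property 3: x-weighted residual sum is zero
```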

Alternative formulas for the slope coefficient

There are alternative (and simpler) formulas for calculating  \hat{b} :

 \hat{b} = \frac {\sum_{i=1}^{n} x_{i}y_{i} - n \bar{x} \bar{y}}  {\sum_{i=1}^{n} x_{i}^2 - n \bar{x}^2}  = r \frac {s_y}{s_x} = \frac {\operatorname{Cov}(x,y)}{\operatorname{Var}(x)}

Here, r is the sample correlation coefficient of X and Y,  s_x is the sample standard deviation of X, and  s_y is the sample standard deviation of Y.
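
The equivalence of the three expressions is easy to confirm numerically. The following sketch assumes Python with NumPy (not prescribed by the text) and uses simulated data chosen only for illustration.

```python
# Sketch checking that the three expressions for the slope agree (Python/NumPy assumed).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=40)
y = 3.0 + 0.8 * x + rng.normal(size=40)
n = len(x)

b_direct = (np.sum(x * y) - n * x.mean() * y.mean()) / (np.sum(x ** 2) - n * x.mean() ** 2)
r = np.corrcoef(x, y)[0, 1]
b_corr = r * y.std(ddof=1) / x.std(ddof=1)               # r * s_y / s_x (sample standard deviations)
b_cov = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # Cov(x, y) / Var(x)

print(np.allclose([b_direct, b_corr], b_cov))  # True: all three expressions coincide
```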

Inference

Under the assumption that the error terms are normally distributed, the estimate of the slope coefficient is normally distributed with mean equal to b, and its standard error is estimated by:

 s_{\hat{b}} = \sqrt { \frac {\sum_{i=1}^n \hat{\varepsilon}_i^2 /(n-2)} {\sum_{i=1}^n (x_i - \bar{x})^2} }.

A confidence interval for b can be created using a t-distribution with n-2 degrees of freedom:

 [ \hat{b} - s_{\hat{b}} t_{n-2}^*,\ \hat{b} + s_{\hat{b}} t_{n-2}^*]
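
As a minimal sketch of these two formulas (assuming Python with NumPy and SciPy, neither of which the text prescribes; the helper name slope_ci is illustrative):

```python
# Sketch of the slope standard error and confidence interval above
# (Python with NumPy/SciPy assumed; slope_ci is an illustrative name).
import numpy as np
from scipy import stats

def slope_ci(x, y, level=0.95):
    """Return (b_hat, se_b, (lo, hi)) for the slope of the model y = a + b*x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    b_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a_hat = y.mean() - b_hat * x.mean()
    resid = y - a_hat - b_hat * x
    se_b = np.sqrt((np.sum(resid ** 2) / (n - 2)) / np.sum((x - x.mean()) ** 2))
    t_star = stats.t.ppf(1 - (1 - level) / 2, df=n - 2)  # two-sided critical value
    return b_hat, se_b, (b_hat - t_star * se_b, b_hat + t_star * se_b)
```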

Numerical example

Suppose we have the sample of points {(1,-1),(2,4),(6,3)}. The mean of X is 3 and the mean of Y is 2. The slope coefficient estimate is given by:

 \hat{b} = \frac {(1 - 3)((-1) - 2) + (2 - 3)(4 - 2) + (6 - 3)(3 - 2)} {(1 - 3)^2 + (2 - 3)^2 + (6 - 3)^2 } = 7/14 = 0.5.

The intercept estimate is  \hat{a} = 2 - 0.5 \times 3 = 0.5 , the residual sum of squares is  4 + 6.25 + 0.25 = 10.5 , and the standard error of the slope coefficient is  \sqrt{(10.5/1)/14} \approx 0.866 . A 95% confidence interval, using the critical value  t_{1}^* = 12.7062 of the t-distribution with  n - 2 = 1 degree of freedom, is given by

[0.5 − 0.866 × 12.7062, 0.5 + 0.866 × 12.7062] = [−10.504, 11.504].
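
The example can be reproduced with a short script; this is a sketch assuming Python with NumPy and SciPy (the text itself does not use any software).

```python
# Reproducing the worked example (Python with NumPy/SciPy assumed).
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 6.0])
y = np.array([-1.0, 4.0, 3.0])
n = len(x)

b_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)  # 0.5
a_hat = y.mean() - b_hat * x.mean()                                            # 0.5
resid = y - a_hat - b_hat * x
se_b = np.sqrt((np.sum(resid ** 2) / (n - 2)) / np.sum((x - x.mean()) ** 2))   # ~0.866
t_star = stats.t.ppf(0.975, df=n - 2)                                          # ~12.7062
print(b_hat, se_b, (b_hat - t_star * se_b, b_hat + t_star * se_b))             # approx [-10.504, 11.504]
```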

Mathematical derivation of the least squares estimates

Assume that  Y_i = \alpha + \beta X_i + \varepsilon_i is a stochastic simple regression model and let  (y_i, x_i), \, i = 1, \ldots, n be a sample of size n. Here the sample is treated as a set of observed, nonrandom values; the calculations are unchanged if the sample is instead represented by the random variables  (Y_1, X_1), \ldots, (Y_n, X_n) .

Let Q be the sum of squared errors:

 Q(\alpha, \beta) := \sum_{i = 1}^n (y_i - \alpha - \beta x_i)^2

Then taking partial derivatives with respect to α and β:


\begin{align}
\frac{\partial Q}{\partial \alpha}(\alpha, \beta) &= -2 \sum_{i = 1}^n (y_i - \alpha - \beta x_i) \\
\frac{\partial Q}{\partial \beta}(\alpha, \beta) &= 2 \sum_{i = 1}^n (y_i - \alpha - \beta x_i)(-x_i)
\end{align}

Setting  \frac{\partial Q}{\partial \alpha}(\alpha, \beta) and  \frac{\partial Q}{\partial \beta}(\alpha, \beta) to zero yields


\begin{align}
n \hat{\alpha} + \hat\beta \sum_{i = 1}^n x_i &= \sum_{i = 1}^n y_i \\
\hat{\alpha} \sum_{i = 1}^n x_i + \hat\beta \sum_{i = 1}^n x_i^2 &= \sum_{i = 1}^n x_i y_i 
\end{align}

which are known as the normal equations and can be written in matrix notation as

 \begin{pmatrix} n & \sum_{i = 1}^n x_i \\ \sum_{i = 1}^n x_i & \sum_{i = 1}^n x_i^2 \end{pmatrix} \begin{pmatrix} \hat\alpha \\ \hat\beta \end{pmatrix} = \begin{pmatrix} \sum_{i = 1}^n y_i \\ \sum_{i = 1}^n x_i y_i \end{pmatrix}

Using Cramer's rule we get

 
\begin{align}
\hat{\alpha} = \frac{\sum_{i = 1}^n y_i \sum_{i = 1}^n x_i^2 - \sum_{i = 1}^n x_i y_i \sum_{i = 1}^n x_i}{n \sum_{i = 1}^n x_i^2 - \left( \sum_{i = 1}^n x_i \right)^2} \\
\hat{\beta} = \frac{n \sum_{i = 1}^n x_i y_i - \sum_{i = 1}^n x_i \sum_{i = 1}^n y_i}{n \sum_{i = 1}^n x_i^2 - \left( \sum_{i = 1}^n x_i \right)^2} 
\end{align}

Dividing the numerator and the denominator of the last expression by n gives

 \hat{\beta} = \frac{\sum_{i = 1}^n x_i y_i  - n \bar{x} \bar{y}}{\sum_{i = 1}^n x_i^2 - n \bar{x}^2}
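
The closed-form expressions can be cross-checked against a direct numerical solution of the normal equations. The sketch below assumes Python with NumPy and simulated data (an assumption, not part of the derivation).

```python
# Sketch verifying that solving the normal equations numerically matches the
# Cramer's-rule expressions above (Python/NumPy assumed; data are simulated).
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=30)
y = 2.0 - 0.7 * x + rng.normal(size=30)
n = len(x)

# Normal equations in matrix form: M @ (alpha_hat, beta_hat) = v
M = np.array([[n, x.sum()], [x.sum(), (x ** 2).sum()]])
v = np.array([y.sum(), (x * y).sum()])
alpha_num, beta_num = np.linalg.solve(M, v)

det = n * (x ** 2).sum() - x.sum() ** 2
alpha_cf = (y.sum() * (x ** 2).sum() - (x * y).sum() * x.sum()) / det
beta_cf = (n * (x * y).sum() - x.sum() * y.sum()) / det

print(np.allclose([alpha_num, beta_num], [alpha_cf, beta_cf]))  # True
```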

Isolating  \hat\alpha from the first normal equation yields


\begin{align}
n \hat\alpha &= \sum_{i = 1}^n y_i - \hat{\beta} \sum_{i = 1}^n x_i \\
\hat{\alpha} &= \frac{1}{n} \sum_{i = 1}^n y_i - \hat{\beta} \frac{1}{n} \sum_{i = 1}^n x_i \\
&= \bar{y} - \hat{\beta} \bar{x}
\end{align}

which is a common formula for  \hat\alpha in terms of  \hat\beta and the sample means.

 \hat{\beta} may also be written as


\hat{\beta} = \frac{\sum_{i = 1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i = 1}^n (x_i - \bar{x})^2}

using the following equalities:


\begin{align}
\sum_{i = 1}^n (x_i - \bar{x})^2 &= \sum_{i = 1}^n (x_i^2 - 2x_i \bar{x} + \bar{x}^2) \\
&= \sum_{i = 1}^n x_i^2 -2 \bar{x} \underbrace{\sum_{i = 1}^n x_i}_{n \bar{x}} + n \bar{x}^2 \\
&= \sum_{i = 1}^n x_i^2 - n \bar{x}^2 \\
\sum_{i = 1}^n (x_i - \bar{x})(y_i - \bar{y}) &= \sum_{i = 1}^n (x_i y_i - \bar{y} x_i - \bar{x} y_i + \bar{x} \bar{y}) \\
&= \sum_{i = 1}^n x_i y_i - \bar{y} \underbrace{\sum_{i = 1}^n x_i}_{n \bar{x}} - \bar{x} \underbrace{\sum_{i = 1}^n y_i}_{n \bar{y}} + n \bar{x} \bar{y} \\
&= \sum_{i = 1}^n x_i y_i - n \bar{y} \bar{x} - n \bar{x} \bar{y} + n \bar{x} \bar{y} \\
&= \sum_{i = 1}^n x_i y_i - n \bar{x} \bar{y}
\end{align}
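
Both identities are purely algebraic, so they hold for any data; a quick numerical sanity check (a sketch assuming Python with NumPy, with arbitrary simulated data) is:

```python
# Numerical check of the two centering identities above (Python/NumPy assumed).
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=20)
y = rng.normal(size=20)
n = len(x)

print(np.isclose(np.sum((x - x.mean()) ** 2), np.sum(x ** 2) - n * x.mean() ** 2))
print(np.isclose(np.sum((x - x.mean()) * (y - y.mean())),
                 np.sum(x * y) - n * x.mean() * y.mean()))
```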

The following calculation shows that  (\hat{\alpha}, \hat{\beta}) is a minimum.


\begin{align}
\frac{\partial Q}{\partial \alpha} (\alpha, \beta) &= -2 \sum_{i = 1}^n y_i + 2n \alpha + 2 \beta \sum_{i = 1}^n x_i \\
\frac{\partial^2 Q}{\partial \alpha^2} (\alpha, \beta) &= 2n \\
\frac{\partial Q}{\partial \beta} (\alpha, \beta) &= -2 \sum_{i = 1}^n x_iy_i + 2 \alpha \sum_{i = 1}^n x_i + 2 \beta \sum_{i = 1}^n x_i^2 \\
\frac{\partial^2 Q}{\partial \beta^2} (\alpha, \beta) &= 2 \sum_{i = 1}^n x_i^2 \\
\frac{\partial^2 Q}{\partial \alpha \partial \beta} (\alpha, \beta) &= \frac{\partial^2 Q}{\partial \beta \partial \alpha} (\alpha, \beta) = 2 \sum_{i = 1}^n x_i 
\end{align}

Hence the Hessian matrix of Q is given by


\begin{align}
D^2 Q(\alpha, \beta) &= \begin{pmatrix} 2n & 2 \sum_{i = 1}^n x_i \\ 2 \sum_{i = 1}^n x_i & 2 \sum_{i = 1}^n x_i^2 \end{pmatrix} \\
|D^2 Q(\alpha, \beta)| &= 4n \sum_{i = 1}^n x_i^2 - 4 \left( \sum_{i = 1}^n x_i \right)^2 \\
&= 4n \sum_{i = 1}^n x_i^2 - 4n^2 \bar{x}^2 \\
&= 4n \left( \sum_{i = 1}^n x_i^2 - n \bar{x}^2 \right) \\
&= 4n \sum_{i = 1}^n (x_i - \bar{x})^2 > 0
\end{align}

Since  |D^2 Q(\alpha, \beta)| > 0 (provided that not all  x_i are equal) and 2n > 0,  D^2 Q(\alpha, \beta) is positive definite for all  (\alpha, \beta) , so  (\hat{\alpha}, \hat{\beta}) is a minimum.
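
For a concrete sample, positive definiteness can be confirmed by checking the eigenvalues of the Hessian. The sketch below assumes Python with NumPy and reuses the x-values from the numerical example.

```python
# Sketch confirming that the Hessian of Q is positive definite for the example
# x-values (Python/NumPy assumed; any sample with at least two distinct x works).
import numpy as np

x = np.array([1.0, 2.0, 6.0])
n = len(x)

H = 2 * np.array([[n, x.sum()], [x.sum(), (x ** 2).sum()]])  # Hessian of Q (constant in alpha, beta)
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues, np.all(eigenvalues > 0))  # both eigenvalues positive -> positive definite
```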
