Coefficient of determination

In statistics, the coefficient of determination R2 is the proportion of variability in a data set that is accounted for by a statistical model. In this definition, the term "variability" is defined as the sum of squares. There are equivalent expressions for R2 based on an analysis of variance decomposition. A general version, based on comparing the variability of the estimation errors with the variability of the original values, is

R^{2} = 1 - {SS_E \over SS_T}.

Another version is common in statistics texts but holds only if the modelled values are obtained by ordinary least squares regression (which must include a fitted intercept or constant term): it is

R^{2} = {SS_R \over SS_T}.

In the above definitions,

SS_T = \sum_i (y_i - \bar{y})^2, \qquad SS_R = \sum_i (\hat{y}_i - \bar{y})^2, \qquad SS_E = \sum_i (y_i - \hat{y}_i)^2,

where y_i and \hat{y}_i are the original data values and the modelled values, respectively. That is, SST is the total sum of squares, SSR is the regression sum of squares, and SSE is the sum of squared errors. In some texts the abbreviations SSR and SSE have the opposite meanings: SSR stands for the residual sum of squares (which then refers to the sum of squared errors above) and SSE stands for the explained sum of squares (another name for the regression sum of squares).

In the second definition, R2 is the ratio of the variability of the modelled values to the variability of the original data values. Another version of the definition, which again only holds if the modelled values are obtained by ordinary least squares regression, gives R2 as the square of the correlation coefficient between the original and modelled data values.

R2 is a statistic that will give some information about the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression line approximates the real data points. An R2 of 1.0 indicates that the regression line perfectly fits the data.
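
As a concrete illustration of these definitions, the following minimal sketch (hypothetical data, numpy only) fits a straight line by ordinary least squares with an intercept, evaluates the three sums of squares, and confirms that the two expressions 1 - SSE/SST and SSR/SST agree for such a fit.

```python
import numpy as np

# Hypothetical data for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1, 6.3])

# Ordinary least squares fit with an intercept (constant term)
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

ss_t = np.sum((y - y.mean()) ** 2)       # total sum of squares
ss_r = np.sum((y_hat - y.mean()) ** 2)   # regression (explained) sum of squares
ss_e = np.sum((y - y_hat) ** 2)          # sum of squared errors (residuals)

r2_general = 1.0 - ss_e / ss_t   # general definition
r2_ols     = ss_r / ss_t         # valid here because the fit is OLS with an intercept

print(r2_general, r2_ols)        # the two values agree (up to rounding)
```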

In some (but not all) instances where R2 is used, the predictors are calculated by ordinary least-squares regression: that is, by minimising SSE. In this case R2 increases as we increase the number of variables in the model (R2 will not decrease). This illustrates a drawback to one possible use of R2, where one might keep adding variables to the model until "there is no more improvement". This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as for R2, but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the R2 statistic can be calculated as above and may still be a useful measure. However, the conclusion that R2 increases with extra variables no longer holds, although downward variations are usually small. If fitting is by weighted least squares or generalized least squares, alternative versions of R2 can be calculated appropriate to those statistical frameworks, while the "raw" R2 may still be useful if it is more easily interpreted. Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis.
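
To illustrate the last point, the sketch below (hypothetical data and a hand-picked prediction rule, not a fitted statistical model) computes the "raw" R2 = 1 - SSE/SST directly from observations and predictions; for a sufficiently poor predictor this value can even be negative, since nothing forces SSE to be smaller than SST.

```python
import numpy as np

# Hypothetical observations and predictions from an arbitrary rule,
# e.g. a hand-tuned formula rather than a fitted statistical model
y      = np.array([2.0, 4.0, 5.5, 8.0, 9.5])
y_pred = np.array([1.5, 4.5, 6.0, 7.0, 10.5])   # any predictive model's output

ss_t = np.sum((y - y.mean()) ** 2)
ss_e = np.sum((y - y_pred) ** 2)

r2 = 1.0 - ss_e / ss_t   # "raw" R^2; may fall below 0 for a poor predictor
print(r2)
```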

Explanation and interpretation of R2

For expository purposes, consider a linear model of the form

Y_i = \beta_0 + \sum_{j=1}^{p} \beta_j X_{i,j} + \varepsilon_i,

where, for the i'th case, Yi is the response variable, Xi,1,...,Xi,p are p regressors, and \varepsilon_i is a mean zero error term. The quantities β0,...,βp are unknown coefficients, whose values are determined by least squares. The coefficient of determination R2 is a measure of the global fit of the model. Specifically, R2 is an element of [0,1] and represents the proportion of variability in Yi that may be attributed to some linear combination of the regressors (explanatory variables) in X.

More simply, R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, R2 = 1 indicates that the fitted model explains all variability in y, while R2 = 0 indicates no 'linear' relationship between the response variable and the regressors. An interior value such as R2 = 0.7 may be interpreted as follows: "Approximately seventy percent of the variation in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or to inherent variability."
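
A minimal simulation of this interpretation, assuming a hypothetical two-regressor linear model whose error variance is chosen so that roughly seventy percent of the response variance comes from the regressors, is sketched below; the fitted R2 should land near that fraction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two regressors plus an intercept; hypothetical coefficients
X = rng.normal(size=(n, 2))
beta = np.array([1.0, 0.5])
signal = 2.0 + X @ beta                     # beta_0 + sum_j beta_j X_j
noise_sd = np.sqrt(signal.var() * 3 / 7)    # chosen so the signal explains ~70% of Var(Y)
y = signal + rng.normal(scale=noise_sd, size=n)

# OLS fit with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)   # approximately 0.7: ~70% of the variation attributed to the regressors
```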

A caution that applies to R2, as to other statistical descriptions of correlation and association, is that "correlation does not imply causation." In other words, while correlations may provide valuable clues regarding causal relationships among variables, a high correlation between two variables does not by itself constitute evidence that changing one variable has caused, or would cause, changes in the other.

In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable.
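
The sketch below (hypothetical data) checks both statements numerically: for a single regressor fitted by least squares with an intercept, R2 matches the squared Pearson correlation between the regressor and the response, and it also matches the squared correlation between the fitted values and the response.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 + 2.0 * x + rng.normal(size=100)     # hypothetical single-regressor data

slope, intercept = np.polyfit(x, y, 1)       # least squares fit with an intercept
y_hat = intercept + slope * x

r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

r_xy  = np.corrcoef(x, y)[0, 1]              # Pearson correlation of regressor and response
r_fit = np.corrcoef(y_hat, y)[0, 1]          # correlation of fitted values and response

print(r2, r_xy ** 2, r_fit ** 2)             # all three agree (up to rounding)
```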

Inflation of R2

In least squares regression, R2 is weakly increasing in the number of regressors in the model. As such, R2 alone cannot be used for a meaningful comparison of models with different numbers of covariates. As a reminder of this, some authors denote R2 by R2p, where p is the number of columns in X.

Demonstration of this property is trivial. To begin, recall that the objective of least squares regression is (in matrix notation)

\min_b SS_E(b) \;\Rightarrow\; \min_b \sum_i (y_i - X_i b)^2.

The optimal value of the objective is weakly smaller as additional columns of X are added, by the fact that relatively unconstrained minimization leads to a solution which is weakly smaller than relatively constrained minimization. Given the previous conclusion and noting that SST depends only on y, the non-decreasing property of R2 follows directly from the definition above.
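
A quick numerical check of this property (hypothetical data): adding an extra column to X, here pure noise unrelated to y, enlarges the feasible set of the minimisation, so SSE cannot increase and R2 cannot decrease.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on the columns of X plus an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(2)
n = 60
x1 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 + rng.normal(size=n)

noise_col = rng.normal(size=n)               # regressor with no relation to y

r2_small = r_squared(x1.reshape(-1, 1), y)
r2_big   = r_squared(np.column_stack([x1, noise_col]), y)

print(r2_small, r2_big)    # r2_big >= r2_small, even though the new column is noise
```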

Adjusted R2

Adjusted R2 is a modification of R2 that adjusts for the number of explanatory terms in a model. Unlike R2, the adjusted R2 increases only if the new term improves the model more than would be expected by chance. The adjusted R2 can be negative, and will always be less than or equal to R2. The adjusted R2 is defined as

\bar{R}^{2} = 1 - (1 - R^{2}){n-1 \over n-p-1} = 1 - {SS_E / df_e \over SS_T / df_t},

where p is the total number of regressors in the linear model (not counting the constant term), n is the sample size, and df_t = n - 1 and df_e = n - p - 1 are the degrees of freedom associated with SST and SSE, respectively.

The principle behind the adjusted R2 statistic can be seen by rewriting the ordinary R2 as

R^{2} = 1 - {VAR_E \over VAR_T},

where VARE = SSE / n and VART = SST / n are estimates of the variances of the errors and of the observations, respectively. These estimates are replaced by notionally "unbiased" versions: VARE = SSE / (n - p - 1) and VART = SST / (n - 1).
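
The sketch below computes adjusted R2 both ways for hypothetical values of SSE, SST, n and p, confirming that substituting the "unbiased" variance estimates reproduces the 1 - (1 - R2)(n - 1)/(n - p - 1) form.

```python
# Hypothetical fit: n observations, p regressors (excluding the intercept)
n, p = 50, 3
ss_t = 120.0
ss_e = 42.0
r2 = 1.0 - ss_e / ss_t

# Form 1: penalised rewrite of R^2
adj_r2_a = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Form 2: replace the variance estimates by their "unbiased" versions
var_e = ss_e / (n - p - 1)
var_t = ss_t / (n - 1)
adj_r2_b = 1.0 - var_e / var_t

print(adj_r2_a, adj_r2_b)   # identical; both are <= r2 and can be negative
```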

Adjusted R2 does not have the same interpretation as R2. As such, care must be taken in interpreting and reporting this statistic. Adjusted R2 is particularly useful in the feature selection stage of model building.

Adjusted R2 is not always better than R2: adjusted R2 will be more useful only if the R2 is calculated based on a sample, not the entire population. For example, if our unit of analysis is a state, and we have data for all counties, then adjusted R2 will not yield any more useful information than R2.

Notes on interpreting R2

R2 does NOT tell whether:

  • the independent variables are a true cause of the changes in the dependent variable;
  • omitted-variable bias exists;
  • the correct regression was used;
  • the most appropriate set of independent variables has been chosen; or
  • collinearity is present in the data.
