
Covariance


In probability theory and statistics, covariance is a measure of how much two random variables vary together (as distinct from variance, which measures how much a single variable varies). If the two variables tend to vary together (that is, when one of them is above its expected value, the other tends to be above its expected value too), then the covariance between them is positive.

On the other hand, if one variable tends to be below its expected value when the other is above its own, then the covariance between the two variables is negative.

The covariance between two real-valued random variables X and Y, with expected values E(X) = μ and E(Y) = ν, is defined as
\operatorname{Cov}(X, Y) = \operatorname{E}((X - \mu) (Y - \nu)), \,

where E is the expected value operator. This can also be written:

\operatorname{Cov}(X, Y) = \operatorname{E}(X \cdot Y) - \mu \nu. \,
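The equivalence of the two forms can be seen by expanding the product and using the linearity of the expected value operator:

\operatorname{E}((X - \mu)(Y - \nu)) = \operatorname{E}(X \cdot Y) - \nu\,\operatorname{E}(X) - \mu\,\operatorname{E}(Y) + \mu \nu = \operatorname{E}(X \cdot Y) - \mu \nu. \,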

If X and Y are independent, then their covariance is zero. This follows because under independence,

E(X \cdot Y)=E(X) \cdot E(Y)=\mu\nu.

Recalling the second form of the covariance given above, and substituting, we get

\operatorname{Cov}(X, Y) = \mu \nu - \mu \nu = 0.

The converse, however, is not true: if X and Y have covariance zero, they need not be independent.
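For example, let X take the values −1, 0 and 1 with equal probability and let Y = X². Then E(X) = 0 and E(X · Y) = E(X³) = 0, so

\operatorname{Cov}(X, Y) = \operatorname{E}(X \cdot Y) - \operatorname{E}(X)\operatorname{E}(Y) = 0, \,

yet Y is completely determined by X, so the two variables are certainly not independent.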

The units of measurement of the covariance Cov(X, Y) are those of X times those of Y. By contrast, correlation, which depends on the covariance, is a dimensionless measure of linear dependence.
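Explicitly, dividing the covariance by the standard deviations of X and Y gives the Pearson correlation coefficient,

\operatorname{Corr}(X, Y) = \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X)\,\operatorname{Var}(Y)}}, \,

which is free of units.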

Random variables whose covariance is zero are called uncorrelated.

Properties

If X, Y, W, V are real-valued random variables and a, b, c, d are constants ("constant" in this context means non-random), then the following facts are a consequence of the definition of covariance:

\operatorname{Cov}(X, X) = \operatorname{Var}(X)\,
\operatorname{Cov}(X, Y) = \operatorname{Cov}(Y, X)\,
\operatorname{Cov}(aX, bY) = ab\, \operatorname{Cov}(X, Y)\,
\operatorname{Cov}(X+a, Y+b) = \operatorname{Cov}(X, Y)\,
\operatorname{Cov}(aX+bY, cW+dV) = ac\,\operatorname{Cov}(X,W)+ad\,\operatorname{Cov}(X,V)+bc\,\operatorname{Cov}(Y,W)+bd\,\operatorname{Cov}(Y,V)\,
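These identities also hold for the sample covariance computed from data, so they can be checked numerically. Below is a minimal sketch using NumPy; the samples x and y and the constants a and b are arbitrary choices made here for illustration.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = rng.normal(size=1000)
a, b = 2.0, -3.0

def cov(u, v):
    # sample covariance of two equal-length samples
    return np.cov(u, v)[0, 1]

print(np.isclose(cov(x, x), np.var(x, ddof=1)))          # Cov(X, X) = Var(X)
print(np.isclose(cov(x, y), cov(y, x)))                   # Cov(X, Y) = Cov(Y, X)
print(np.isclose(cov(a * x, b * y), a * b * cov(x, y)))   # Cov(aX, bY) = ab Cov(X, Y)
print(np.isclose(cov(x + a, y + b), cov(x, y)))           # Cov(X + a, Y + b) = Cov(X, Y)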

For sequences X1, ..., Xn and Y1, ..., Ym of random variables, we have

\operatorname{Cov}\left(\sum_{i=1}^n {X_i}, \sum_{j=1}^m{Y_j}\right) =    \sum_{i=1}^n{\sum_{j=1}^m{\operatorname{Cov}\left(X_i, Y_j\right)}}.\,

For a sequence X1, ..., Xn of random variables, we have

\operatorname{Var}\left(\sum_{i=1}^n X_i \right) = \sum_{i=1}^n \operatorname{Var}(X_i) + 2\sum_{i,j\,:\,i<j} \operatorname{Cov}(X_i,X_j).
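Since Cov(X_i, X_i) = Var(X_i), this says that the variance of the sum equals the sum of all entries of the covariance matrix of the X_i. Below is a minimal numerical sketch of that identity with NumPy, using sample variances and covariances on arbitrary simulated data.

import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(4, 500))        # four random variables, 500 observations each

C = np.cov(data)                        # 4x4 sample covariance matrix
var_of_sum = np.var(data.sum(axis=0), ddof=1)

print(np.isclose(var_of_sum, C.sum()))  # Var(sum X_i) = sum over i, j of Cov(X_i, X_j)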

Relationship to inner products

Many of the properties of covariance can be extracted elegantly by observing that it satisfies the abstract properties of an inner product:

(1) bilinear: for constants a and b and random variables X, Y, and U, Cov(aX + bY, U) = a Cov(X, U) + b Cov(Y, U)
(2) symmetric: Cov(X, Y) = Cov(Y, X)
(3) positive definite: Var(X) = Cov(X, X) ≥ 0, and Cov(X, X) = 0 implies that X is almost surely a constant random variable.

It follows that covariance behaves as an inner product over a vector space whose elements are random variables, with vector addition defined as the sum of random variables X + Y and scalar multiplication defined as multiplication of a random variable by a constant, aX. The vector identified with a random variable here is the random variable itself (a function on the underlying probability space), not any particular value that it might take.
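One immediate payoff of this inner-product viewpoint is the Cauchy–Schwarz inequality,

\operatorname{Cov}(X, Y)^2 \le \operatorname{Var}(X)\,\operatorname{Var}(Y), \,

which is exactly what confines the correlation coefficient to the interval [−1, 1].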

Covariance matrices and operators

For column-vector-valued random variables X and Y, with m and n scalar components respectively and with expected values μ and ν, the covariance is defined to be the m×n matrix called the covariance matrix:

\operatorname{Cov}(X, Y) = \operatorname{E}((X-\mu)(Y-\nu)^\top).\,

For vector-valued random variables, Cov(X, Y) and Cov(Y, X) are each other's transposes.
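Below is a minimal sketch of this definition with NumPy, estimating the matrix from simulated samples of two hypothetical random vectors X (three components) and Y (two components); the expectation is replaced by an average over observations.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 1000))          # 3-component random vector, 1000 observations (columns)
Y = rng.normal(size=(2, 1000))          # 2-component random vector, same observations

mu = X.mean(axis=1, keepdims=True)      # estimate of E(X)
nu = Y.mean(axis=1, keepdims=True)      # estimate of E(Y)

cov_xy = (X - mu) @ (Y - nu).T / (X.shape[1] - 1)    # 3x2 estimate of E((X - mu)(Y - nu)^T)
cov_yx = (Y - nu) @ (X - mu).T / (Y.shape[1] - 1)    # 2x3 estimate of E((Y - nu)(X - mu)^T)

print(cov_xy.shape)                      # (3, 2)
print(np.allclose(cov_xy, cov_yx.T))     # Cov(X, Y) = Cov(Y, X)^T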

Even more generally, for a probability measure P on a Hilbert space H with inner product 〈 , 〉, the covariance operator of P is the operator Cov : H → H given by

\langle \mathrm{Cov} x, y \rangle = \int_{H} \langle x, z \rangle \langle y, z \rangle \, \mathrm{d} \mathbf{P} (z)

for all x and y in H. Cov is a self-adjoint operator (the infinite-dimensional analogue of the transposition symmetry in the finite-dimensional case); when P is a centred Gaussian measure, Cov is also a nuclear operator.

The covariance is sometimes called a measure of "linear dependence" between the two random variables. Here "linear dependence" does not mean the same thing as in linear algebra (see linear dependence). When the covariance is normalized, one obtains the correlation matrix, and from it the Pearson coefficient, which measures the goodness of fit of the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
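As a sketch of that normalization step, dividing each entry of a sample covariance matrix by the product of the corresponding standard deviations reproduces the correlation matrix; this can be checked against NumPy's np.corrcoef (the simulated data below are arbitrary).

import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(3, 500))            # three random variables, 500 observations

C = np.cov(data)                            # covariance matrix
sd = np.sqrt(np.diag(C))                    # standard deviations
R = C / np.outer(sd, sd)                    # normalized: the correlation matrix

print(np.allclose(R, np.corrcoef(data)))    # matches the Pearson correlation matrix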

