en.unionpedia.org

Correlation, the Glossary

Index Correlation

In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data.[1]

Table of Contents

  1. 111 relations: Absolute value, Annals of Statistics, Anscombe's quartet, Arithmetic mean, Autocorrelation, Autoregressive model, Bias of an estimator, Binary data, Bivariate data, Canonical correlation, Cauchy–Schwarz inequality, Causality, Coefficient of colligation, Coefficient of determination, Coefficient of multiple correlation, Cointegration, Concordance correlation coefficient, Conditional expectation, Conference on Neural Information Processing Systems, Confidence distribution, Consistent estimator, Cophenetic correlation, Copula (probability theory), Correlation coefficient, Correlation does not imply causation, Correlation function, Correlation gap, Correlation ratio, Covariance, Covariance and correlation, Covariance matrix, Cross-correlation, Definite matrix, Demand curve, Distance correlation, Dual total correlation, Dykstra's projection algorithm, Ecological correlation, Elliptical distribution, Entropy (information theory), Estimation, Exchangeable random variables, Expected value, Exploratory data analysis, Extreme weather, Fraction of variance unexplained, Francis Galton, Frank Anscombe, Genetic correlation, Goodman and Kruskal's gamma, Goodman and Kruskal's lambda, Human height, Hypergeometric function, Iconography of correlations, Identity (mathematics), Illusory correlation, Independence (probability theory), Interclass correlation, Interval (mathematics), Intraclass correlation, Joint probability distribution, Karl Pearson, Kendall rank correlation coefficient, Lift (data mining), Line (geometry), Linear independence, Logistic regression, Marginal distribution, Matrix norm, Mean dependence, Modifiable areal unit problem, Moment (mathematics), Monotonic function, Multivariate normal distribution, Multivariate t-distribution, Mutual information, Newton's method, Normal distribution, Odds ratio, Outlier, Pearson correlation coefficient, Point-biserial correlation coefficient, Polychoric correlation, Posterior probability, Quadrant count ratio, Quantile, Random variable, Rank correlation, Regression analysis, Regression dilution, Robust statistics, Scaled correlation, Scatter plot, Spearman's rank correlation coefficient, Spurious relationship, Standard deviation, Standard score, Statistic, Statistical model, Statistical population, Statistics, Subindependence, Sufficient statistic, Summary statistics, Tautology (logic), Time series, Toeplitz matrix, Total correlation, Uncorrelatedness (probability theory), Wilmott (magazine), 1.

Absolute value

In mathematics, the absolute value or modulus of a real number x is the non-negative value of x without regard to its sign.

See Correlation and Absolute value

Annals of Statistics

The Annals of Statistics is a peer-reviewed statistics journal published by the Institute of Mathematical Statistics.

See Correlation and Annals of Statistics

Anscombe's quartet

Anscombe's quartet comprises four datasets that have nearly identical simple descriptive statistics, yet have very different distributions and appear very different when graphed.

See Correlation and Anscombe's quartet
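
The point is easy to reproduce numerically. The sketch below uses the x values shared by datasets I and II of the quartet together with their published y values (transcribed here, so treat the exact digits as an assumption) and checks that the two sets agree on mean and correlation even though dataset II is a smooth parabola rather than a noisy line:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Shared x values and the y values of Anscombe datasets I and II
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

print(sum(y1) / 11, sum(y2) / 11)            # both means ~7.50
print(pearson_r(x, y1), pearson_r(x, y2))    # both correlations ~0.816
```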

Arithmetic mean

In mathematics and statistics, the arithmetic mean, arithmetic average, or just the mean or average (when the context is clear) is the sum of a collection of numbers divided by the count of numbers in the collection.

See Correlation and Arithmetic mean

Autocorrelation

Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay.

See Correlation and Autocorrelation
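
As a quick illustration (a minimal pure-Python sketch, not a library routine), the lag-k autocorrelation of a period-2 alternating signal is negative at lag 1 and positive at lag 2:

```python
def autocorrelation(x, lag):
    """Correlation of x with a copy of itself delayed by `lag` samples."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)  # normalizer: the lag-0 covariance
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return cov / var

signal = [1, -1] * 8                 # period-2 alternation
print(autocorrelation(signal, 1))    # negative: adjacent samples oppose
print(autocorrelation(signal, 2))    # positive: samples two apart agree
```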

Autoregressive model

In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc.

See Correlation and Autoregressive model

Bias of an estimator

In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated.

See Correlation and Bias of an estimator

Binary data

Binary data is data whose unit can take on only two possible states.

See Correlation and Binary data

Bivariate data

In statistics, bivariate data is data on each of two variables, where each value of one of the variables is paired with a value of the other variable.

See Correlation and Bivariate data

Canonical correlation

In statistics, canonical-correlation analysis (CCA), also called canonical variates analysis, is a way of inferring information from cross-covariance matrices.

See Correlation and Canonical correlation

Cauchy–Schwarz inequality

The Cauchy–Schwarz inequality (also called Cauchy–Bunyakovsky–Schwarz inequality) is an upper bound on the inner product between two vectors in an inner product space in terms of the product of the vector norms.

See Correlation and Cauchy–Schwarz inequality

Causality

Causality is an influence by which one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect) where the cause is partly responsible for the effect, and the effect is partly dependent on the cause.

See Correlation and Causality

Coefficient of colligation

In statistics, Yule's Y, also known as the coefficient of colligation, is a measure of association between two binary variables.

See Correlation and Coefficient of colligation

Coefficient of determination

In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).

See Correlation and Coefficient of determination
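
In code the definition is direct: one minus the ratio of the residual to the total sum of squares (a sketch; `y_pred` here would come from any fitted model):

```python
def r_squared(y_true, y_pred):
    """Proportion of the variation in y_true explained by the predictions."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))           # perfect fit -> 1.0
print(r_squared(y, [2.5] * 4))   # always predicting the mean -> 0.0
```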

Coefficient of multiple correlation

In statistics, the coefficient of multiple correlation is a measure of how well a given variable can be predicted using a linear function of a set of other variables.

See Correlation and Coefficient of multiple correlation

Cointegration

Cointegration is a statistical property of a collection of time series variables.

See Correlation and Cointegration

Concordance correlation coefficient

In statistics, the concordance correlation coefficient measures the agreement between two variables, e.g., to evaluate reproducibility or for inter-rater reliability.

See Correlation and Concordance correlation coefficient

Conditional expectation

In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value evaluated with respect to the conditional probability distribution.

See Correlation and Conditional expectation

Conference on Neural Information Processing Systems

The Conference and Workshop on Neural Information Processing Systems (abbreviated as NeurIPS and formerly NIPS) is a machine learning and computational neuroscience conference held every December.

See Correlation and Conference on Neural Information Processing Systems

Confidence distribution

In statistical inference, the concept of a confidence distribution (CD) has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest.

See Correlation and Confidence distribution

Consistent estimator

In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0.

See Correlation and Consistent estimator

Cophenetic correlation

In statistics, and especially in biostatistics, cophenetic correlation (more precisely, the cophenetic correlation coefficient) is a measure of how faithfully a dendrogram preserves the pairwise distances between the original unmodeled data points.

See Correlation and Cophenetic correlation

Copula (probability theory)

In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1].

See Correlation and Copula (probability theory)

Correlation coefficient

A correlation coefficient is a numerical measure of some type of linear correlation, meaning a statistical relationship between two variables.

See Correlation and Correlation coefficient

Correlation does not imply causation

The phrase "correlation does not imply causation" refers to the inability to legitimately deduce a cause-and-effect relationship between two events or variables solely on the basis of an observed association or correlation between them.

See Correlation and Correlation does not imply causation

Correlation function

A correlation function is a function that gives the statistical correlation between random variables, contingent on the spatial or temporal distance between those variables.

See Correlation and Correlation function

Correlation gap

In stochastic programming, the correlation gap is the worst-case ratio between the cost when the random variables are correlated to the cost when the random variables are independent.

See Correlation and Correlation gap

Correlation ratio

In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample.

See Correlation and Correlation ratio

Covariance

Covariance in probability theory and statistics is a measure of the joint variability of two random variables.

See Correlation and Covariance
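
A minimal sample-covariance sketch (with the usual n − 1 denominator for an unbiased estimate):

```python
def sample_covariance(xs, ys):
    """Unbiased sample covariance of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

print(sample_covariance([1, 2, 3], [1, 2, 3]))   # 1.0: equals the sample variance
print(sample_covariance([1, 2, 3], [3, 2, 1]))   # -1.0: the variables move in opposition
```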

Covariance and correlation

In probability theory and statistics, the mathematical concepts of covariance and correlation are very similar.

See Correlation and Covariance and correlation

Covariance matrix

In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.

See Correlation and Covariance matrix

Cross-correlation

In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other.

See Correlation and Cross-correlation

Definite matrix

In mathematics, a symmetric matrix M with real entries is positive-definite if the real number x^T M x is positive for every nonzero real column vector x, where x^T is the row-vector transpose of x. More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number x* M x is positive for every nonzero complex column vector x, where x* denotes the conjugate transpose of x.

See Correlation and Definite matrix

Demand curve

A demand curve is a graph depicting the inverse demand function, a relationship between the price of a certain commodity (the y-axis) and the quantity of that commodity that is demanded at that price (the x-axis).

See Correlation and Demand curve

Distance correlation

In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension.

See Correlation and Distance correlation

Dual total correlation

In information theory, dual total correlation (also known as information rate, excess entropy, or binding information) is one of several generalizations of mutual information to more than two random variables.

See Correlation and Dual total correlation

Dykstra's projection algorithm

Dykstra's algorithm is a method that computes a point in the intersection of convex sets, and is a variant of the alternating projection method (also called the projections onto convex sets method).

See Correlation and Dykstra's projection algorithm

Ecological correlation

In statistics, an ecological correlation (also spatial correlation) is a correlation between two variables that are group means, in contrast to a correlation between two variables that describe individuals.

See Correlation and Ecological correlation

Elliptical distribution

In probability and statistics, an elliptical distribution is any member of a broad family of probability distributions that generalize the multivariate normal distribution.

See Correlation and Elliptical distribution

Entropy (information theory)

In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes.

See Correlation and Entropy (information theory)

Estimation

Estimation (or estimating) is the process of finding an estimate or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.

See Correlation and Estimation

Exchangeable random variables

In statistics, an exchangeable sequence of random variables (also sometimes interchangeable) is a sequence X1, X2, X3, ... whose joint probability distribution does not change when the positions in the sequence are permuted.

See Correlation and Exchangeable random variables

Expected value

In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average.

See Correlation and Expected value

Exploratory data analysis

In statistics, exploratory data analysis (EDA) is an approach of analyzing data sets to summarize their main characteristics, often using statistical graphics and other data visualization methods.

See Correlation and Exploratory data analysis

Extreme weather

Extreme weather includes unexpected, unusual, severe, or unseasonal weather; weather at the extremes of the historical distribution—the range that has been seen in the past.

See Correlation and Extreme weather

Fraction of variance unexplained

In statistics, the fraction of variance unexplained (FVU) in the context of a regression task is the fraction of variance of the regressand (dependent variable) Y which cannot be explained, i.e., which is not correctly predicted, by the explanatory variables X.

See Correlation and Fraction of variance unexplained

Francis Galton

Sir Francis Galton (16 February 1822 – 17 January 1911) was a British polymath and the originator of the behavioral genetics movement during the Victorian era.

See Correlation and Francis Galton

Frank Anscombe

Francis John Anscombe (13 May 1918 – 17 October 2001) was an English statistician.

See Correlation and Frank Anscombe

Genetic correlation

In multivariate quantitative genetics, a genetic correlation (denoted r_g or r_a) is the proportion of variance that two traits share due to genetic causes: the correlation between the genetic influences on one trait and the genetic influences on another trait (Plomin et al., p. 123), estimating the degree of pleiotropy or causal overlap.

See Correlation and Genetic correlation

Goodman and Kruskal's gamma

In statistics, Goodman and Kruskal's gamma is a measure of rank correlation, i.e., the similarity of the orderings of the data when ranked by each of the quantities.

See Correlation and Goodman and Kruskal's gamma
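
A direct pair-counting sketch: gamma is (C − D)/(C + D) over concordant and discordant pairs, with tied pairs dropped from both counts:

```python
from itertools import combinations

def goodman_kruskal_gamma(xs, ys):
    """(C - D) / (C + D): concordant minus discordant pairs, ties ignored."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1    # the pair is ordered the same way in both variables
        elif s < 0:
            discordant += 1    # the pair is ordered oppositely
    return (concordant - discordant) / (concordant + discordant)

print(goodman_kruskal_gamma([1, 1, 2, 3], [1, 2, 2, 3]))   # 1.0: no discordant pairs
print(goodman_kruskal_gamma([1, 2, 3], [3, 2, 1]))         # -1.0: fully reversed
```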

Goodman and Kruskal's lambda

In probability theory and statistics, Goodman & Kruskal's lambda (λ) is a measure of proportional reduction in error in cross tabulation analysis.

See Correlation and Goodman and Kruskal's lambda

Human height

Human height or stature is the distance from the bottom of the feet to the top of the head in a human body, standing erect.

See Correlation and Human height

Hypergeometric function

In mathematics, the Gaussian or ordinary hypergeometric function 2F1(a,b;c;z) is a special function represented by the hypergeometric series, that includes many other special functions as specific or limiting cases.

See Correlation and Hypergeometric function

Iconography of correlations

In exploratory data analysis, the iconography of correlations, or representation of correlations, is a data visualization technique which replaces a numeric correlation matrix by its graphical projection onto a diagram, on which the “remarkable” correlations are plotted as solid lines (positive correlations) or dotted lines (negative correlations); either shorter lengths, or thicker lines, or both, represent greater correlation projection components.

See Correlation and Iconography of correlations

Identity (mathematics)

In mathematics, an identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain range of validity.

See Correlation and Identity (mathematics)

Illusory correlation

In psychology, illusory correlation is the phenomenon of perceiving a relationship between variables (typically people, events, or behaviors) even when no such relationship exists.

See Correlation and Illusory correlation

Independence (probability theory)

Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes.

See Correlation and Independence (probability theory)

Interclass correlation

In statistics, the interclass correlation (or interclass correlation coefficient) is a measure of a relation between two variables of different classes (types), such as the weights of 10-year-old sons and of their 40-year-old fathers.

See Correlation and Interclass correlation

Interval (mathematics)

In mathematics, a (real) interval is the set of all real numbers lying between two fixed endpoints with no "gaps".

See Correlation and Interval (mathematics)

Intraclass correlation

In statistics, the intraclass correlation, or the intraclass correlation coefficient (ICC), is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups.

See Correlation and Intraclass correlation

Joint probability distribution

Given two random variables that are defined on the same probability space, the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs.

See Correlation and Joint probability distribution

Karl Pearson

Karl Pearson (born Carl Pearson; 27 March 1857 – 27 April 1936) was an English eugenicist, mathematician, and biostatistician. He has been credited with establishing the discipline of mathematical statistics. He founded the world's first university statistics department at University College London in 1911, and contributed significantly to the field of biometrics and meteorology.

See Correlation and Karl Pearson

Kendall rank correlation coefficient

In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities.

See Correlation and Kendall rank correlation coefficient
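
Kendall's τ (in its simplest tau-a form, which assumes no ties) can be sketched by brute-force pair counting:

```python
from itertools import combinations

def kendall_tau_a(xs, ys):
    """Tau-a: (concordant - discordant) / total pairs; assumes no tied values."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n = len(xs)
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau_a([1, 2, 3, 4], [10, 20, 30, 40]))   # 1.0: identical ordering
print(kendall_tau_a([1, 2, 3, 4], [40, 30, 20, 10]))   # -1.0: reversed ordering
```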

Lift (data mining)

In data mining and association rule learning, lift is a measure of the performance of a targeting model (association rule) at predicting or classifying cases as having an enhanced response (with respect to the population as a whole), measured against a random choice targeting model.

See Correlation and Lift (data mining)

Line (geometry)

In geometry, a straight line, usually abbreviated line, is an infinitely long object with no width, depth, or curvature, an idealization of such physical objects as a straightedge, a taut string, or a ray of light.

See Correlation and Line (geometry)

Linear independence

In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector.

See Correlation and Linear independence

Logistic regression

In statistics, the logistic model (or logit model) is a statistical model that models the log-odds of an event as a linear combination of one or more independent variables.

See Correlation and Logistic regression

Marginal distribution

In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset.

See Correlation and Marginal distribution

Matrix norm

In the field of mathematics, a matrix norm is a vector norm on a vector space whose elements are matrices.

See Correlation and Matrix norm

Mean dependence

In probability theory, a random variable Y is said to be mean independent of a random variable X if and only if its conditional mean E(Y | X = x) equals the unconditional mean E(Y) for all x.

See Correlation and Mean dependence

Modifiable areal unit problem

The modifiable areal unit problem (MAUP) is a source of statistical bias that can significantly impact the results of statistical hypothesis tests.

See Correlation and Modifiable areal unit problem

Moment (mathematics)

In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph.

See Correlation and Moment (mathematics)

Monotonic function

In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order.

See Correlation and Monotonic function

Multivariate normal distribution

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions.

See Correlation and Multivariate normal distribution

Multivariate t-distribution

In statistics, the multivariate t-distribution (or multivariate Student distribution) is a multivariate probability distribution.

See Correlation and Multivariate t-distribution

Mutual information

In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables.

See Correlation and Mutual information
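
For discrete variables, mutual information can be estimated directly from the empirical joint and marginal frequencies (a sketch, reported in bits):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) of two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)       # marginal counts
    pxy = Counter(zip(xs, ys))              # joint counts
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))   # 1.0 bit: fully dependent fair bits
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))   # 0.0: independent
```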

Newton's method

In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function.

See Correlation and Newton's method

Normal distribution

In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable.

See Correlation and Normal distribution

Odds ratio

An odds ratio (OR) is a statistic that quantifies the strength of the association between two events, A and B. The odds ratio is defined as the ratio of the odds of event A in the presence of B to the odds of A in the absence of B. By symmetry, the odds ratio equally gives the ratio of the odds of B in the presence of A to the odds of B in the absence of A.

See Correlation and Odds ratio

Outlier

In statistics, an outlier is a data point that differs significantly from other observations.

See Correlation and Outlier

Pearson correlation coefficient

In statistics, the Pearson correlation coefficient (PCC) is a correlation coefficient that measures linear correlation between two sets of data.

See Correlation and Pearson correlation coefficient
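
The sample version is the covariance of the two datasets divided by the product of their standard deviations; a minimal sketch:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)   # sum of squared deviations in x
    vy = sum((y - my) ** 2 for y in ys)   # sum of squared deviations in y
    return cov / sqrt(vx * vy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0: exact positive linear relation
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0: exact negative linear relation
```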

Point-biserial correlation coefficient

The point biserial correlation coefficient (rpb) is a correlation coefficient used when one variable (e.g. Y) is dichotomous; Y can either be "naturally" dichotomous, like whether a coin lands heads or tails, or an artificially dichotomized variable.

See Correlation and Point-biserial correlation coefficient

Polychoric correlation

In statistics, polychoric correlation is a technique for estimating the correlation between two hypothesised normally distributed continuous latent variables, from two observed ordinal variables.

See Correlation and Polychoric correlation

Posterior probability

The posterior probability is a type of conditional probability that results from updating the prior probability with information summarized by the likelihood via an application of Bayes' rule.

See Correlation and Posterior probability

Quadrant count ratio

The quadrant count ratio (QCR) is a measure of the association between two quantitative variables.

See Correlation and Quadrant count ratio
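
A sketch of the usual computation: split the scatter plot into four quadrants at the two sample means, then take (n1 + n3 − n2 − n4) / n, where n1…n4 count the points in quadrants I–IV (points lying exactly on a mean line are excluded from the counts here):

```python
def quadrant_count_ratio(xs, ys):
    """(n1 + n3 - n2 - n4) / n with quadrants centered on the sample means."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    n1 = sum(1 for x, y in zip(xs, ys) if x > mx and y > my)   # upper right
    n2 = sum(1 for x, y in zip(xs, ys) if x < mx and y > my)   # upper left
    n3 = sum(1 for x, y in zip(xs, ys) if x < mx and y < my)   # lower left
    n4 = sum(1 for x, y in zip(xs, ys) if x > mx and y < my)   # lower right
    return (n1 + n3 - n2 - n4) / n

print(quadrant_count_ratio([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0: all points in quadrants I and III
```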

Quantile

In statistics and probability, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way.

See Correlation and Quantile

Random variable

A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events.

See Correlation and Random variable

Rank correlation

In statistics, a rank correlation is any of several statistics that measure an ordinal association—the relationship between rankings of different ordinal variables or different rankings of the same variable, where a "ranking" is the assignment of the ordering labels "first", "second", "third", etc.

See Correlation and Rank correlation

Regression analysis

In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables (often called 'predictors', 'covariates', 'explanatory variables' or 'features').

See Correlation and Regression analysis

Regression dilution

Regression dilution, also known as regression attenuation, is the biasing of the linear regression slope towards zero (the underestimation of its absolute value), caused by errors in the independent variable.

See Correlation and Regression dilution

Robust statistics

Robust statistics are statistics that maintain their properties even if the underlying distributional assumptions are incorrect.

See Correlation and Robust statistics

Scaled correlation

In statistics, scaled correlation is a form of a coefficient of correlation applicable to data that have a temporal component such as time series.

See Correlation and Scaled correlation

Scatter plot

A scatter plot, also called a scatterplot, scatter graph, scatter chart, scattergram, or scatter diagram, is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data.

See Correlation and Scatter plot

Spearman's rank correlation coefficient

In statistics, Spearman's rank correlation coefficient, named after Charles Spearman and often denoted by the Greek letter ρ (rho) or as r_s, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables).

See Correlation and Spearman's rank correlation coefficient
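
Operationally, Spearman's ρ is the Pearson correlation applied to ranks; a sketch with average ranks for ties:

```python
from math import sqrt

def average_ranks(xs):
    """Ranks starting at 1; tied values share the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend the group of tied values
        avg = (i + j) / 2 + 1            # average rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Pearson correlation computed on the rank vectors."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    return cov / sqrt(sum((a - mx) ** 2 for a in rx) *
                      sum((b - my) ** 2 for b in ry))

print(spearman_rho([1, 2, 3, 4, 5], [1, 4, 9, 16, 25]))   # 1.0: monotone, though nonlinear
```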

Spurious relationship

In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidence or the presence of a certain third, unseen factor (referred to as a "common response variable", "confounding factor", or "lurking variable").

See Correlation and Spurious relationship

Standard deviation

In statistics, the standard deviation is a measure of the amount of variation of a random variable expected about its mean.

See Correlation and Standard deviation

Standard score

In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured.

See Correlation and Standard score
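
A sketch of the computation: subtract the mean and divide by the (population) standard deviation:

```python
def z_scores(xs):
    """Standard scores: standard deviations by which each value sits from the mean."""
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5   # population standard deviation
    return [(x - m) / sd for x in xs]

print(z_scores([2, 4, 6]))   # middle value scores 0; the outer two are symmetric about it
```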

Statistic

A statistic (singular) or sample statistic is any quantity computed from values in a sample which is considered for a statistical purpose.

See Correlation and Statistic

Statistical model

A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population).

See Correlation and Statistical model

Statistical population

In statistics, a population is a set of similar items or events which is of interest for some question or experiment.

See Correlation and Statistical population

Statistics

Statistics (from German: Statistik, "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data.

See Correlation and Statistics

Subindependence

In probability theory and statistics, subindependence is a weak form of independence.

See Correlation and Subindependence

Sufficient statistic

In statistics, sufficiency is a property of a statistic computed on a sample dataset in relation to a parametric model of the dataset.

See Correlation and Sufficient statistic

Summary statistics

In descriptive statistics, summary statistics are used to summarize a set of observations, in order to communicate the largest amount of information as simply as possible.

See Correlation and Summary statistics

Tautology (logic)

In mathematical logic, a tautology (from Greek ταυτολογία) is a formula or assertion that is true in every possible interpretation.

See Correlation and Tautology (logic)

Time series

In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order.

See Correlation and Time series

Toeplitz matrix

In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant.

See Correlation and Toeplitz matrix

Total correlation

In probability theory and in particular in information theory, total correlation (Watanabe 1960) is one of several generalizations of the mutual information.

See Correlation and Total correlation

Uncorrelatedness (probability theory)

In probability theory and statistics, two real-valued random variables, X and Y, are said to be uncorrelated if their covariance cov(X, Y) = E[XY] − E[X]E[Y] is zero.

See Correlation and Uncorrelatedness (probability theory)

Wilmott (magazine)

Wilmott Magazine is a mathematical finance and risk management magazine, combining technical articles with humor pieces.

See Correlation and Wilmott (magazine)

1

1 (one, unit, unity) is a number representing a single or the only entity.

See Correlation and 1

References

[1] https://en.wikipedia.org/wiki/Correlation

Also known as Association (statistics), Coorelation coeficient, Corelation, Correlate, Correlated, Correlated variables, Correlation & dependence, Correlation (in statistics), Correlation (statistics), Correlation and dependence, Correlation matrices, Correlation matrix, Correlation structure, Correlation structures, Correlational Design, Correlational data, Correlational designs, Correlational research, Correlational studies, Correlational study, Correlations, Direct correlation, Linear correlation, Linear relationship, Positive correlation, Positively correlated, Sample correlation, Simple correlation, Statistical association, Statistical correlation, Stratified analysis.
