
Functional data analysis, the Glossary


Functional data analysis (FDA) is a branch of statistics that analyses data providing information about curves, surfaces or anything else varying over a continuum.[1]

Table of Contents

  1. 47 relations: Additive model, Bernard Silverman, Bochner integral, Compact operator on Hilbert space, Conditional expectation, Covariance operator, Curse of dimensionality, Diffeomorphism, Dimensionality reduction, Dynamic time warping, Eigenvalues and eigenvectors, Empirical orthogonal functions, Euclidean space, Expected value, Fourier series, Functional principal component analysis, Functional regression, Generalized functional linear model, Generalized linear model, Hierarchical clustering, Hilbert space, Inner product space, James O. Ramsay, K-means clustering, Kolmogorov continuity theorem, Kosambi–Karhunen–Loève theorem, Linear map, Linear predictor function, Linear regression, Lp space, Mercer's theorem, Mixture model, Modes of variation, Pettis integral, Polynomial regression, Python (programming language), R (programming language), Sobolev space, Spectral theorem, Speech recognition, Spline (mathematics), Square-integrable function, Statistics, Stochastic process, Tensor product, Variance, Variance function.

  2. Statistical analysis
  3. Statistical data types

Additive model

In statistics, an additive model (AM) is a nonparametric regression method.
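
For reference, the standard additive form for a response Y and predictors X_1, \dots, X_p is

    Y = \beta_0 + \sum_{j=1}^{p} f_j(X_j) + \varepsilon,

where each f_j is a smooth function estimated nonparametrically rather than assumed linear.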

See Functional data analysis and Additive model

Bernard Silverman

Sir Bernard Walter Silverman (born 22 February 1952) is a British statistician and former Anglican clergyman.

See Functional data analysis and Bernard Silverman

Bochner integral

In mathematics, the Bochner integral, named for Salomon Bochner, extends the definition of Lebesgue integral to functions that take values in a Banach space, as the limit of integrals of simple functions.

See Functional data analysis and Bochner integral

Compact operator on Hilbert space

In the mathematical discipline of functional analysis, the concept of a compact operator on Hilbert space is an extension of the concept of a matrix acting on a finite-dimensional vector space; in Hilbert space, compact operators are precisely the closure of finite-rank operators (representable by finite-dimensional matrices) in the topology induced by the operator norm.

See Functional data analysis and Compact operator on Hilbert space

Conditional expectation

In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value evaluated with respect to the conditional probability distribution.

See Functional data analysis and Conditional expectation

Covariance operator

In probability theory, for a probability measure P on a Hilbert space H with inner product \langle \cdot,\cdot\rangle, the covariance of P is the bilinear form Cov: H × H → R given by \mathrm{Cov}(x, y) = \int_H \langle x, z\rangle \langle y, z\rangle \, \mathrm{d}P(z) for all x and y in H. The covariance operator C is then defined by \mathrm{Cov}(x, y) = \langle Cx, y\rangle (from the Riesz representation theorem, such an operator exists if Cov is bounded).

See Functional data analysis and Covariance operator

Curse of dimensionality

The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience.

See Functional data analysis and Curse of dimensionality

Diffeomorphism

In mathematics, a diffeomorphism is an isomorphism of differentiable manifolds.

See Functional data analysis and Diffeomorphism

Dimensionality reduction

Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension.

See Functional data analysis and Dimensionality reduction

Dynamic time warping

In time series analysis, dynamic time warping (DTW) is an algorithm for measuring similarity between two temporal sequences, which may vary in speed.
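
As an illustration, the core of DTW is a dynamic-programming recurrence over the cumulative alignment cost; the following is a minimal, unoptimized sketch (assuming NumPy and absolute difference as the local cost):

    import numpy as np

    def dtw_distance(a, b):
        """Cumulative-cost dynamic programming for DTW between two 1-D sequences."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)  # cost[i, j]: best alignment of a[:i] with b[:j]
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])             # local distance
                cost[i, j] = d + min(cost[i - 1, j],      # step in a only
                                     cost[i, j - 1],      # step in b only
                                     cost[i - 1, j - 1])  # step in both
        return cost[n, m]

    # Two versions of the same shape sampled at different speeds
    x = np.sin(np.linspace(0, 2 * np.pi, 50))
    y = np.sin(np.linspace(0, 2 * np.pi, 80))
    print(dtw_distance(x, y))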

See Functional data analysis and Dynamic time warping

Eigenvalues and eigenvectors

In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged by a given linear transformation.

See Functional data analysis and Eigenvalues and eigenvectors

Empirical orthogonal functions

In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data.

See Functional data analysis and Empirical orthogonal functions

Euclidean space

Euclidean space is the fundamental space of geometry, intended to represent physical space.

See Functional data analysis and Euclidean space

Expected value

In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average.
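
Concretely, for a discrete random variable taking values x_i with probabilities p_i, and for a continuous random variable with density f,

    E[X] = \sum_i x_i \, p_i, \qquad E[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx.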

See Functional data analysis and Expected value

Fourier series

A Fourier series is an expansion of a periodic function into a sum of trigonometric functions.
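
For a 2\pi-periodic function f, the expansion and its coefficients take the familiar form

    f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \big( a_n \cos nx + b_n \sin nx \big), \qquad
    a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx \, dx, \quad
    b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx \, dx.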

See Functional data analysis and Fourier series

Functional principal component analysis

Functional principal component analysis (FPCA) is a statistical method for investigating the dominant modes of variation of functional data.
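
As a rough illustration of the idea, a discretized FPCA can be sketched with NumPy alone (an assumption here; dedicated FDA packages exist for both R and Python), for curves observed on a common regular grid:

    import numpy as np

    # Simulated sample: n noisy sine curves on a regular grid of m points
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 101)
    X = np.array([np.sin(2 * np.pi * t) + rng.normal(scale=0.1, size=t.size)
                  for _ in range(50)])

    mean_fn = X.mean(axis=0)            # estimated mean function
    Xc = X - mean_fn                    # centered curves
    dt = t[1] - t[0]
    cov = Xc.T @ Xc / (X.shape[0] - 1)  # discretized covariance surface

    # Eigendecomposition; rescale so eigenfunctions are orthonormal in L^2
    # (integrals approximated by Riemann sums with spacing dt)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    eigenvalues = evals[order] * dt
    eigenfunctions = evecs[:, order] / np.sqrt(dt)

    # Functional principal component scores: L^2 inner products with eigenfunctions
    scores = Xc @ eigenfunctions * dt
    print(eigenvalues[:3])              # the leading modes of variation dominate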

See Functional data analysis and Functional principal component analysis

Functional regression

Functional regression is a version of regression analysis when responses or covariates include functional data.

See Functional data analysis and Functional regression

Generalized functional linear model

The generalized functional linear model (GFLM) is an extension of the generalized linear model (GLM) that allows one to regress univariate responses of various types (continuous or discrete) on functional predictors, which are mostly random trajectories generated by a square-integrable stochastic process.
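
A common formulation, for a scalar response Y, a functional predictor X on a domain \mathcal{T}, a link function g and a coefficient function \beta, is

    g\big( E[Y \mid X] \big) = \beta_0 + \int_{\mathcal{T}} X(t)\, \beta(t) \, dt.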

See Functional data analysis and Generalized functional linear model

Generalized linear model

In statistics, a generalized linear model (GLM) is a flexible generalization of ordinary linear regression.

See Functional data analysis and Generalized linear model

Hierarchical clustering

In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis that seeks to build a hierarchy of clusters.

See Functional data analysis and Hierarchical clustering

Hilbert space

In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional.

See Functional data analysis and Hilbert space

Inner product space

In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space) is a real vector space or a complex vector space with an operation called an inner product.

See Functional data analysis and Inner product space

James O. Ramsay

James O. Ramsay (born 5 September 1942) is a Canadian statistician and Professor Emeritus at McGill University, Montreal, who developed much of the statistical theory behind multidimensional scaling (MDS).

See Functional data analysis and James O. Ramsay

K-means clustering

k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster center or centroid), serving as a prototype of the cluster.
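
Formally, k-means seeks the partition S_1, \dots, S_k minimizing the within-cluster sum of squares,

    \underset{S_1, \dots, S_k}{\arg\min} \; \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2,

where \mu_i is the mean of the points in S_i.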

See Functional data analysis and K-means clustering

Kolmogorov continuity theorem

In mathematics, the Kolmogorov continuity theorem is a theorem that guarantees that a stochastic process that satisfies certain constraints on the moments of its increments will be continuous (or, more precisely, have a "continuous version").

See Functional data analysis and Kolmogorov continuity theorem

Kosambi–Karhunen–Loève theorem

In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem (after Damodar Dharmananda Kosambi) states that a stochastic process can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval.
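
For a centered, square-integrable process X_t on a bounded interval, the expansion reads

    X_t = \sum_{k=1}^{\infty} Z_k \, e_k(t),

where the e_k are orthonormal eigenfunctions of the covariance operator and the coefficients Z_k = \int X_t \, e_k(t) \, dt are uncorrelated random variables whose variances are the corresponding eigenvalues \lambda_k.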

See Functional data analysis and Kosambi–Karhunen–Loève theorem

Linear map

In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V \to W between two vector spaces that preserves the operations of vector addition and scalar multiplication.

See Functional data analysis and Linear map

Linear predictor function

In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable.

See Functional data analysis and Linear predictor function

Linear regression

In statistics, linear regression is a statistical model which estimates the linear relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables).

See Functional data analysis and Linear regression

Lp space

In mathematics, the L^p spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces.
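
For 1 \le p < \infty, the norm on L^p of a measure space is

    \lVert f \rVert_p = \left( \int |f|^p \, d\mu \right)^{1/p}.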

See Functional data analysis and Lp space

Mercer's theorem

In mathematics, specifically functional analysis, Mercer's theorem is a representation of a symmetric positive-definite function on a square as a sum of a convergent sequence of product functions.
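
In symbols: for a continuous, symmetric, positive-definite kernel K on [a, b] \times [a, b],

    K(s, t) = \sum_{k=1}^{\infty} \lambda_k \, \varphi_k(s) \, \varphi_k(t),

where the \lambda_k \ge 0 and \varphi_k are the eigenvalues and orthonormal eigenfunctions of the associated integral operator, and the series converges absolutely and uniformly.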

See Functional data analysis and Mercer's theorem

Mixture model

In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs.
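
The density (or mass function) of a K-component mixture is a convex combination of component densities,

    p(x) = \sum_{k=1}^{K} \pi_k \, f_k(x), \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1.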

See Functional data analysis and Mixture model

Modes of variation

In statistics, modes of variation are a continuously indexed set of vectors or functions that are centered at a mean and are used to depict the variation in a population or sample.
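
In the functional setting, the k-th mode of variation is typically displayed as the family of functions

    \mu(t) + \alpha \sqrt{\lambda_k} \, \varphi_k(t), \qquad \alpha \in [-A, A],

where \mu is the mean function and \lambda_k, \varphi_k are the k-th eigenvalue and eigenfunction of the covariance operator.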

See Functional data analysis and Modes of variation

Pettis integral

In mathematics, the Pettis integral or Gelfand–Pettis integral, named after Israel M. Gelfand and Billy James Pettis, extends the definition of the Lebesgue integral to vector-valued functions on a measure space, by exploiting duality.

See Functional data analysis and Pettis integral

Polynomial regression

In statistics, polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y | x).
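
That is, the model for the conditional mean is

    E(y \mid x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_n x^n,

which remains linear in the coefficients \beta_0, \dots, \beta_n and can therefore be fitted by ordinary least squares.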

See Functional data analysis and Polynomial regression

Python (programming language)

Python is a high-level, general-purpose programming language.

See Functional data analysis and Python (programming language)

R (programming language)

R is a programming language for statistical computing and data visualization.

See Functional data analysis and R (programming language)

Sobolev space

In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of L^p-norms of the function together with its derivatives up to a given order.
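
For integer order k and 1 \le p < \infty, one standard choice of norm is

    \lVert f \rVert_{W^{k,p}} = \left( \sum_{|\alpha| \le k} \lVert D^{\alpha} f \rVert_{L^p}^{p} \right)^{1/p},

where the sum runs over all multi-indices \alpha of order at most k.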

See Functional data analysis and Sobolev space

Spectral theorem

In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis).

See Functional data analysis and Spectral theorem

Speech recognition

Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers.

See Functional data analysis and Speech recognition

Spline (mathematics)

In mathematics, a spline is a function defined piecewise by polynomials.
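
For illustration only, a cubic smoothing spline can be fitted to noisy observations with SciPy (the package and the smoothing level are assumptions here, not part of the original text):

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    x = np.linspace(0, 10, 30)
    y = np.sin(x) + np.random.default_rng(1).normal(scale=0.1, size=x.size)

    # Cubic spline (k=3); s trades off fidelity against smoothness
    spline = UnivariateSpline(x, y, k=3, s=0.5)
    x_fine = np.linspace(0, 10, 200)
    y_smooth = spline(x_fine)   # evaluate the fitted piecewise polynomial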

See Functional data analysis and Spline (mathematics)

Square-integrable function

In mathematics, a square-integrable function, also called a quadratically integrable function or L^2 function or square-summable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
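
For a function on the real line, the condition is simply

    \int_{-\infty}^{\infty} |f(x)|^2 \, dx < \infty.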

See Functional data analysis and Square-integrable function

Statistics

Statistics (from German: Statistik, "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data.

See Functional data analysis and Statistics

Stochastic process

In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables in a probability space, where the index of the family often has the interpretation of time. Functional data analysis and stochastic process are statistical data types.

See Functional data analysis and Stochastic process

Tensor product

In mathematics, the tensor product V \otimes W of two vector spaces V and W (over the same field) is a vector space to which is associated a bilinear map V\times W \rightarrow V\otimes W that maps a pair (v,w),\ v\in V, w\in W to an element of V \otimes W denoted v \otimes w.

See Functional data analysis and Tensor product

Variance

In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable.
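
Equivalently,

    \operatorname{Var}(X) = E\big[ (X - E[X])^2 \big] = E[X^2] - \big( E[X] \big)^2.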

See Functional data analysis and Variance

Variance function

In statistics, the variance function is a smooth function that depicts the variance of a random quantity as a function of its mean.

See Functional data analysis and Variance function

See also

Statistical analysis

Statistical data types

References

[1] https://en.wikipedia.org/wiki/Functional_data_analysis