Convolution

For the usage in formal language theory, see convolution (computer science).

In mathematics and, in particular, functional analysis, convolution is a mathematical operator which takes two functions f and g and produces a third function that in a sense represents the amount of overlap between f and a reversed and translated version of g.

Typically, one of the functions is taken to be a fixed filter impulse response, and is known as a kernel. Such a convolution is a kind of generalized moving average, as one can see by taking the kernel to be an indicator function of an interval.

Visual explanation of convolution. First, create a dummy variable (in this case τ) and make each waveform a function of this variable. Second, time-invert one of the waveforms (it does not matter which) and add t. This allows the function to "slide" back and forth on the τ-axis while remaining stationary with respect to t. (The front edge of the "travelling" waveform is always t–1 in this case.) Finally, start one function at negative infinity and slide it all the way to positive infinity. Wherever the two functions intersect, find the integral of their product. The resulting waveform (not shown here) is the convolution of the two functions.

The convolution of f and g is written f * g. It is defined as the integral of the product of the two functions after one is reversed and shifted. As such, it is a particular kind of integral transform:

(f  * g )(t) = \int_{a}^{b} f(\tau) g(t - \tau)\, d\tau

The integration range depends on the domain on which the functions are defined; often a = -∞ and b = +∞. While the symbol t is used above, it need not represent the time domain. In the case of a finite integration range, f and g are often considered to extend periodically in both directions, so that the term g(t - τ) does not imply a range violation. This use of periodic domains is sometimes called a cyclic, circular or periodic convolution. Of course, extension with zeros is also possible. Using zero-extended or infinite domains is sometimes called a linear convolution, especially in the discrete case below.
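
As a concrete worked example (not part of the original article), take f and g both to be the indicator function of the unit interval [0, 1]. The integrand f(\tau) g(t - \tau) equals 1 exactly where the intervals [0, 1] and [t - 1, t] overlap, so the convolution is the length of that overlap:

(f * g)(t) = \begin{cases} t, & 0 \le t \le 1 \\ 2 - t, & 1 \le t \le 2 \\ 0, & \text{otherwise} \end{cases}

This is the familiar triangular pulse obtained by smoothing one box with the other.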

Discrete convolution

Normal convolution

For discrete functions, one can use a discrete version of the convolution. It is given by

(f  * g)(m) = \sum_n {f(n) g(m - n)} \,

When multiplying two polynomials, the coefficients of the product are given by the convolution of the original coefficient sequences, in this sense (using extension with zeros as mentioned above).
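
A minimal Python sketch (not part of the original article; the function name direct_convolve is our own) evaluates the sum above with zero extension and checks it against a small polynomial product:

# Direct discrete (linear) convolution, evaluated straight from the sum
# (f * g)(m) = sum_n f(n) g(m - n), with zero extension outside the data.
def direct_convolve(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for n, fn in enumerate(f):
        for k, gk in enumerate(g):
            out[n + k] += fn * gk   # the term f(n) g(m - n) with m = n + k
    return out

# (1 + 2x)(3 + 4x + 5x^2) = 3 + 10x + 13x^2 + 10x^3, so convolving the
# coefficient sequences must give [3, 10, 13, 10].
print(direct_convolve([1, 2], [3, 4, 5]))   # [3, 10, 13, 10]

The two nested loops make the cost quadratic in the sequence length, which is the O(N²) figure quoted below.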

Generalizing the above cases, the convolution can be defined for any two integrable functions defined on a locally compact topological group (see convolutions on groups below).

A different generalization is the convolution of distributions.

Evaluating a discrete convolution directly takes O(N²) arithmetic operations.

Fast convolution

In practice, digital signal processing typically uses fast convolution algorithms to reduce the cost of evaluating a convolution.

Calculating a convolution with a fast convolution algorithm consists of taking the fast Fourier transform (FFT) of the two sequences, multiplying the transforms together pointwise, and then computing the inverse fast Fourier transform (IFFT) of the product.

Because this procedure computes a circular convolution, fast convolution of zero-extended sequences is implemented by zero-padding them before the transform.

For large sequences, evaluating a fast discrete convolution in this way takes O(N log N) arithmetic operations.
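
A hedged sketch of FFT-based fast convolution in Python (assuming NumPy; the function name fft_convolve is our own). Zero-padding to the full output length turns the circular convolution computed by the FFT into the desired linear convolution:

import numpy as np

# Fast linear convolution: zero-pad both sequences to the full output
# length, multiply their FFTs pointwise, and take the inverse FFT.
def fft_convolve(f, g):
    n = len(f) + len(g) - 1          # length of the linear convolution
    F = np.fft.rfft(f, n)            # FFTs of the zero-padded sequences
    G = np.fft.rfft(g, n)
    return np.fft.irfft(F * G, n)    # pointwise product, then inverse FFT

f = np.random.rand(1000)
g = np.random.rand(1000)
# Agrees with the direct O(N^2) sum but costs only O(N log N).
print(np.allclose(fft_convolve(f, g), np.convolve(f, g)))   # True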

Properties

Commutativity

f * g = g * f  \,

Associativity

f  * (g  * h) = (f  * g)  * h \,

Distributivity

f  * (g + h) = (f  * g) + (f  * h) \,

Identity element

f * \delta = \delta * f = f \,

where δ denotes the Dirac delta

Associativity with scalar multiplication

a (f  * g) = (a f)  * g = f  * (a g) \,

for any real (or complex) number a.

Differentiation rule

\mathcal{D}(f  * g) = \mathcal{D}f  * g = f  * \mathcal{D}g \,

where \mathcal{D}f denotes the derivative of f or, in the discrete case, the difference operator \mathcal{D}f(n) = f(n+1) - f(n). Consequently, convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable at least as many times as either f or g is on its own.
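
The algebraic properties above are easy to spot-check numerically for discrete, zero-extended sequences. A small Python sketch (not from the original article; NumPy's np.convolve supplies the discrete convolution):

import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 4.0])
h = np.array([2.0, 0.0, 1.0])
delta = np.array([1.0])    # discrete unit impulse
a = 3.7
conv = np.convolve

print(np.allclose(conv(f, g), conv(g, f)))                    # commutativity
print(np.allclose(conv(f, conv(g, h)), conv(conv(f, g), h)))  # associativity
print(np.allclose(conv(f, g + h), conv(f, g) + conv(f, h)))   # distributivity
print(np.allclose(conv(f, delta), f))                         # identity element
print(np.allclose(a * conv(f, g), conv(a * f, g)))            # scalar multiplication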

Convolution theorem

The convolution theorem states that

\mathcal{F}(f  * g) = k \left[\mathcal{F} (f)\right] \cdot \left[\mathcal{F} (g)\right]

where \mathcal{F}(f)\, denotes the Fourier transform of f, and k is a constant which depends upon the specific normalization of the Fourier transform (e.g., k = 1 if \mathcal{F}\left[f(x)\right]\equiv\int^\infty_{-\infty}f(x)\exp(\pm 2 \pi i x \xi)dx). Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform.
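
For finite sequences the theorem can be checked directly with the discrete Fourier transform: the DFT of a circular convolution is the pointwise product of the DFTs (here k = 1 for the unnormalized DFT). A small Python sketch, assuming NumPy:

import numpy as np

f = np.random.rand(8)
g = np.random.rand(8)
n = len(f)

# Circular convolution: (f * g)(m) = sum_j f(j) g((m - j) mod n).
circ = np.array([sum(f[j] * g[(m - j) % n] for j in range(n)) for m in range(n)])

# Its DFT equals the pointwise product of the individual DFTs.
print(np.allclose(np.fft.fft(circ), np.fft.fft(f) * np.fft.fft(g)))   # True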

See also the less trivial Titchmarsh convolution theorem.

Convolutions on groups

If G is a suitable group endowed with a measure m (for instance, a locally compact Hausdorff topological group with the Haar measure) and if f and g are real- or complex-valued m-integrable functions on G, then we can define their convolution by

(f * g)(x) = \int_G f(y)g(xy^{-1})\,dm(y) \,
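
The simplest instance is the finite cyclic group Z/nZ with counting measure, where xy^{-1} becomes subtraction modulo n and the formula reduces to circular convolution. A short Python sketch (not from the original article; the helper name group_convolve is our own):

# Convolution on the finite group Z/nZ with counting measure:
# (f * g)(x) = sum_y f(y) g((x - y) mod n), i.e. circular convolution.
def group_convolve(f, g, n):
    return [sum(f[y] * g[(x - y) % n] for y in range(n)) for x in range(n)]

f = [1, 0, 2, 0]
g = [3, 1, 0, 0]
print(group_convolve(f, g, 4))   # [3, 1, 6, 2]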

The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T):

T f(x) =  \frac{1}{2 \pi} \int_{\mathbb{T}} f(y) g( x - y) dy.

The operator T is compact. A direct calculation shows that its adjoint T* is convolution with

\bar{g}(-y).

By the commutativity property cited above, T is normal, i.e. T*T = TT*. Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {h_k} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have

h_k (x) = e^{ikx},\;

which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.
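
A finite-dimensional analogue may make this concrete (an illustration, not from the original article): circular convolution with a fixed g is given by a circulant matrix, and the discrete Fourier vectors h_k(m) = e^{2\pi i k m / n} are its eigenvectors, just as the characters e^{ikx} diagonalize convolution on the circle. In Python, assuming NumPy:

import numpy as np

n = 6
g = np.random.rand(n)
# Matrix of circular convolution with g: C[x, y] = g((x - y) mod n).
C = np.array([[g[(x - y) % n] for y in range(n)] for x in range(n)])

k = 2
h_k = np.exp(2j * np.pi * k * np.arange(n) / n)   # a discrete Fourier vector
# C h_k = ghat(k) h_k, where ghat is the DFT of g.
print(np.allclose(C @ h_k, np.fft.fft(g)[k] * h_k))   # True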

The above example may convince one that convolutions arise naturally in the context of harmonic analysis on groups. For more general groups, it is also possible to state, for instance, a convolution theorem, but it is much more difficult to phrase and requires representation theory for these types of groups as well as the Peter-Weyl theorem. It is very difficult to do these calculations without more structure, and Lie groups turn out to be the setting in which these things are done.

Convolution of measures

If μ and ν are measures on the family of Borel subsets of the real line, then the convolution μ * ν is defined by

(\mu * \nu)(A) = (\mu \times \nu)(\{\, (x,y) \in \mathbb{R}^2 \,:\, x+y \in A \,\}).

If μ and ν are absolutely continuous with respect to Lebesgue measure, so that each has a density function, then the convolution μ * ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions.

If μ and ν are probability measures, then the convolution μ * ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.
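
A minimal Python illustration of the probability statement (not from the original article): the distribution of the sum of two independent fair dice is the convolution of their uniform probability mass functions.

# Probability mass function of one fair die: P(X = 1), ..., P(X = 6).
die = [1 / 6] * 6

# pmf_sum[k] = P(X + Y = k + 2), obtained by convolving the two mass functions.
pmf_sum = [0.0] * 11
for i, p in enumerate(die):
    for j, q in enumerate(die):
        pmf_sum[i + j] += p * q

print(round(pmf_sum[5], 4))   # P(X + Y = 7) = 6/36 ≈ 0.1667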

Applications

Convolution and related operations are found in many applications of engineering and mathematics.

  • In statistics, as noted above, a weighted moving average is a convolution.
  • In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions.
  • In optics, many kinds of "blur" are described by convolutions. A shadow (e.g. the shadow on the table when you hold your hand between the table and a light source) is the convolution of the shape of the light source that is casting the shadow and the object whose shadow is being cast. An out-of-focus photograph is the convolution of the sharp image with the shape of the iris diaphragm. The photographic term for this is bokeh.
  • Similarly, in digital image processing, convolutional filtering plays an important role in many algorithms for edge detection and related processes (a small sketch appears after this list).
  • In linear acoustics, an echo is the convolution of the original sound with a function representing the various objects that are reflecting it.
  • In artificial reverberation (digital signal processing, pro audio), convolution is used to map the impulse response of a real room on a digital audio signal (see previous and next point for additional information).
  • In electrical engineering and other disciplines, the output (response) of a (stationary, or time- or space-invariant) linear system is the convolution of the input (excitation) with the system's response to an impulse or Dirac delta function. See LTI system theory and digital signal processing.
  • In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse.
  • In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance.
  • This is the fundamental problem term in the Navier–Stokes equations, which are the subject of a Clay Mathematics Institute Millennium Prize Problem and its associated million-dollar prize.
  • In digital signal processing, frequency filtering can be simplified by convolving two functions (data with a filter) in the time domain, which is analogous to multiplying the data with a filter in the frequency domain.
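
As a sketch of the image-processing application mentioned above (an illustration only; the helper convolve2d and the choice of a Laplacian-style kernel are ours, assuming NumPy), convolving an image with a small kernel highlights edges:

import numpy as np

# 2-D convolution of an image with a small kernel ("valid" region only).
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]   # convolution reverses the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                 # a vertical step edge
laplacian = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]])  # Laplacian-style edge-detection kernel
print(convolve2d(image, laplacian))   # nonzero only near the edge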
