Skellam distribution

Probability mass function
[Figure: examples of the probability mass function for the Skellam distribution. The horizontal axis is the index k. The function is defined only at integer values of k; the connecting lines do not indicate continuity.]

Parameters: \mu_1\ge 0,~~\mu_2\ge 0
Support: k\in\{\ldots,-2,-1,0,1,2,\ldots\}
Probability mass function (pmf): e^{-(\mu_1+\mu_2)}\left(\frac{\mu_1}{\mu_2}\right)^{k/2}I_k(2\sqrt{\mu_1\mu_2})
Mean: \mu_1-\mu_2
Variance: \mu_1+\mu_2
Skewness: \frac{\mu_1-\mu_2}{(\mu_1+\mu_2)^{3/2}}
Excess kurtosis: 1/(\mu_1+\mu_2)
Moment-generating function (mgf): e^{-(\mu_1+\mu_2)+\mu_1e^t+\mu_2e^{-t}}
Characteristic function: e^{-(\mu_1+\mu_2)+\mu_1e^{it}+\mu_2e^{-it}}

The Skellam distribution is the discrete probability distribution of the difference K1 − K2 of two correlated or uncorrelated random variables K1 and K2 having Poisson distributions with expected values μ1 and μ2 respectively. It is useful for describing the statistics of the difference of two images with simple photon noise, as well as for describing the point-spread distribution in sports where all scored points are equal, such as baseball, hockey and soccer.

Only the case of uncorrelated variables is considered in this article; see Karlis & Ntzoufras (2003) for the use of the Skellam distribution to describe the difference of correlated Poisson-distributed variables.

Recall that the probability mass function of a Poisson distribution with mean μ is given by

f(k;\mu)={\mu^k\over k!}e^{-\mu}\,

The Skellam probability mass function is the cross-correlation of two Poisson distributions (Skellam, 1946):

f(k;\mu_1,\mu_2) = \sum_{n=-\infty}^\infty f(k+n;\mu_1)\,f(n;\mu_2)
= e^{-(\mu_1+\mu_2)}\sum_{n=-\infty}^\infty \frac{\mu_1^{k+n}\mu_2^n}{n!\,(k+n)!}
= e^{-(\mu_1+\mu_2)}\left(\frac{\mu_1}{\mu_2}\right)^{k/2}I_k(2\sqrt{\mu_1\mu_2})

where I_k(z) is the modified Bessel function of the first kind. The formulas above assume that any term involving the factorial of a negative number is set to zero. The special case μ1 = μ2 (= μ) is given by (Irwin, 1937):

f\left(k;\mu,\mu\right) = e^{-2\mu}I_k(2\mu)

Note also that, using the limiting form of the modified Bessel function for small arguments, I_k(z) ∼ (z/2)^k/k! as z → 0 (for k ≥ 0), we can recover the Poisson distribution as a special case of the Skellam distribution for μ2 = 0: the pmf reduces to f(k; μ1, 0) = μ1^k e^{−μ1}/k!.
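As a numerical check on the construction above, the following sketch (plain Python, standard library only; the helper names are ours) builds the Skellam pmf from the cross-correlation sum of two Poisson pmfs and verifies that it is normalized, with mean μ1 − μ2 and variance μ1 + μ2:

```python
import math

def poisson_pmf(k, mu):
    """Poisson pmf f(k; mu) = mu^k e^{-mu} / k!, zero for k < 0."""
    if k < 0:
        return 0.0
    # Work in log space to avoid overflowing factorials.
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

def skellam_pmf(k, mu1, mu2, terms=80):
    """Skellam pmf as the cross-correlation sum over n of
    f(k + n; mu1) * f(n; mu2); `terms` truncates the infinite sum."""
    return sum(poisson_pmf(k + n, mu1) * poisson_pmf(n, mu2)
               for n in range(terms))

mu1, mu2 = 3.0, 2.0
ks = range(-30, 31)          # covers essentially all the mass here
pmf = {k: skellam_pmf(k, mu1, mu2) for k in ks}

total = sum(pmf.values())
mean = sum(k * p for k, p in pmf.items())
var = sum((k - mean) ** 2 * p for k, p in pmf.items())

assert abs(total - 1.0) < 1e-9          # sum_k f(k) = 1
assert abs(mean - (mu1 - mu2)) < 1e-9   # mean = mu1 - mu2
assert abs(var - (mu1 + mu2)) < 1e-9    # variance = mu1 + mu2
```

The truncations (`terms=80`, |k| ≤ 30) are generous for these small means; for larger μ1, μ2 they would need to grow with the means.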

Properties

The Skellam probability mass function is of course normalized:

\sum_{k=-\infty}^\infty f(k;\mu_1,\mu_2)=1.

We know that the generating function for a Poisson distribution is:

G\left(t;\mu\right)= e^{\mu(t-1)}.

It follows that the generating function G(t; μ1, μ2) of the Skellam probability mass function will be:

G(t;\mu_1,\mu_2) = \sum_{k=-\infty}^\infty f(k;\mu_1,\mu_2)t^k
= G\left(t;\mu_1\right)G\left(1/t;\mu_2\right)
= e^{-(\mu_1+\mu_2)+\mu_1 t+\mu_2/t}.

Notice that the form of the generating function implies that the sum or the difference of any number of independent Skellam-distributed variables is again Skellam-distributed.
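This closure property can be illustrated numerically. The sketch below (plain Python; the helper names and parameter choices are ours) convolves the pmfs of two independent Skellam variables, X ~ Skellam(2, 1) and Y ~ Skellam(1, 3), and checks that the result matches Skellam(3, 4) pointwise:

```python
import math

def poisson_pmf(k, mu):
    """Poisson pmf, zero for k < 0; evaluated in log space."""
    if k < 0:
        return 0.0
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

def skellam_pmf(k, mu1, mu2, terms=80):
    """Skellam pmf via the truncated cross-correlation sum."""
    return sum(poisson_pmf(k + n, mu1) * poisson_pmf(n, mu2)
               for n in range(terms))

# X ~ Skellam(2, 1) and Y ~ Skellam(1, 3), independent.
# Claim: X + Y ~ Skellam(2 + 1, 1 + 3) = Skellam(3, 4).
K = 25  # convolution truncation; tails beyond |j| = K are negligible here
for k in range(-10, 11):
    conv = sum(skellam_pmf(j, 2.0, 1.0) * skellam_pmf(k - j, 1.0, 3.0)
               for j in range(-K, K + 1))
    assert abs(conv - skellam_pmf(k, 3.0, 4.0)) < 1e-10
```

The same holds for differences: X − Y = X + (−Y), and −Y ~ Skellam(3, 1), so X − Y ~ Skellam(5, 2).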

It is sometimes claimed that any linear combination of two Skellam-distributed variables is again Skellam-distributed, but this is clearly not true, since any multiplier other than ±1 would change the support of the distribution.

The moment-generating function is given by:

M\left(t;\mu_1,\mu_2\right) = G(e^t;\mu_1,\mu_2)
= \sum_{k=0}^\infty { t^k \over k!}\,m_k

which yields the raw moments m_k. Define:

\Delta\ \stackrel{\mathrm{def}}{=}\ \mu_1-\mu_2
\mu\ \stackrel{\mathrm{def}}{=}\ (\mu_1+\mu_2)/2.


Then the raw moments mk are

m_1=\Delta
m_2=2\mu+\Delta^2
m_3=\Delta(1+6\mu+\Delta^2)

The central moments M_k are

M_2=2\mu,
M_3=\Delta,
M_4=2\mu+12\mu^2.

The mean, variance, skewness, and excess kurtosis are respectively:

E(k)=\Delta
\sigma^2=2\mu
\gamma_1=\Delta/(2\mu)^{3/2}
\gamma_2=1/(2\mu).
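These formulas can be verified against moments computed directly from the pmf. The sketch below (plain Python; the helper names and the choice μ1 = 5, μ2 = 2, so that Δ = 3 and 2μ = 7, are ours) checks the central moments and the standardized shape measures:

```python
import math

def poisson_pmf(k, mu):
    """Poisson pmf, zero for k < 0; evaluated in log space."""
    if k < 0:
        return 0.0
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

def skellam_pmf(k, mu1, mu2, terms=120):
    """Skellam pmf via the truncated cross-correlation sum."""
    return sum(poisson_pmf(k + n, mu1) * poisson_pmf(n, mu2)
               for n in range(terms))

mu1, mu2 = 5.0, 2.0
delta, mu = mu1 - mu2, (mu1 + mu2) / 2.0   # Delta = 3, 2*mu = 7

ks = range(-40, 41)
pmf = [skellam_pmf(k, mu1, mu2) for k in ks]

mean = sum(k * p for k, p in zip(ks, pmf))
M2 = sum((k - mean) ** 2 * p for k, p in zip(ks, pmf))
M3 = sum((k - mean) ** 3 * p for k, p in zip(ks, pmf))
M4 = sum((k - mean) ** 4 * p for k, p in zip(ks, pmf))

assert abs(mean - delta) < 1e-8                     # E(k) = Delta
assert abs(M2 - 2 * mu) < 1e-8                      # sigma^2 = 2 mu
assert abs(M3 - delta) < 1e-8                       # M_3 = Delta
assert abs(M4 - (2 * mu + 12 * mu ** 2)) < 1e-7     # M_4 = 2 mu + 12 mu^2
assert abs(M3 / M2 ** 1.5 - delta / (2 * mu) ** 1.5) < 1e-8  # gamma_1
assert abs(M4 / M2 ** 2 - 3 - 1 / (2 * mu)) < 1e-8           # gamma_2
```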

The cumulant-generating function is given by:

K(t;\mu_1,\mu_2)\ \stackrel{\mathrm{def}}{=}\ \ln M(t;\mu_1,\mu_2) = \sum_{k=0}^\infty \frac{t^k}{k!}\,\kappa_k


which yields the cumulants:

\kappa_{2k}=2\mu\qquad(k\ge 1)
\kappa_{2k+1}=\Delta\qquad(k\ge 0).

For the special case when μ1 = μ2, an asymptotic expansion of the modified Bessel function of the first kind yields for large μ:

f(k;\mu,\mu)\sim \frac{1}{\sqrt{4\pi\mu}}\left[1+\sum_{n=1}^\infty (-1)^n\frac{\{4k^2-1^2\}\{4k^2-3^2\}\cdots\{4k^2-(2n-1)^2\}}{n!\,2^{3n}\,(2\mu)^n}\right]

(Abramowitz & Stegun 1972, p. 377). Also, for this special case, when k is also large, and of order of the square root of 2μ, the distribution tends to a normal distribution:

f(k;\mu,\mu)\sim \frac{e^{-k^2/(4\mu)}}{\sqrt{4\pi\mu}}.

These special results can easily be extended to the more general case of different means.
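The normal limit can be checked numerically. The sketch below (plain Python; the helper names and the choice μ = 400 are ours) compares the exact pmf f(k; μ, μ), computed via the truncated cross-correlation sum, with the Gaussian approximation for k of order √(2μ) ≈ 28:

```python
import math

def poisson_pmf(k, mu):
    """Poisson pmf, zero for k < 0; evaluated in log space."""
    if k < 0:
        return 0.0
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

def skellam_pmf_equal(k, mu):
    """Exact f(k; mu, mu) by a truncated cross-correlation sum;
    the truncation point covers the Poisson(mu) tail generously."""
    n_max = int(mu + 12.0 * math.sqrt(mu)) + abs(k) + 10
    return sum(poisson_pmf(k + n, mu) * poisson_pmf(n, mu)
               for n in range(n_max))

mu = 400.0
for k in (0, 10, 20, 40):
    exact = skellam_pmf_equal(k, mu)
    approx = math.exp(-k * k / (4.0 * mu)) / math.sqrt(4.0 * math.pi * mu)
    assert abs(exact - approx) / exact < 0.01  # agree to within ~1%
```

The agreement tightens as μ grows, consistent with the asymptotic expansion above, whose correction terms are O(1/μ).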

References

  • Abramowitz, M. and Stegun, I. A. (Eds.). 1972. Modified Bessel functions I and K. Sections 9.6–9.7 in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing, pp. 374–378. New York: Dover.
  • Irwin, J. O. 1937. The frequency distribution of the difference between two independent variates following the same Poisson distribution. Journal of the Royal Statistical Society: Series A 100 (3): 415–416.
  • Karlis, D. and Ntzoufras, I. 2003. Analysis of sports data using bivariate Poisson models. Journal of the Royal Statistical Society: Series D (The Statistician) 52 (3): 381–393. doi:10.1111/1467-9884.00366
  • Karlis, D. and Ntzoufras, I. 2006. Bayesian analysis of the differences of count data. Statistics in Medicine 25: 1885–1905.
  • Skellam, J. G. 1946. The frequency distribution of the difference between two Poisson variates belonging to different populations. Journal of the Royal Statistical Society: Series A 109 (3): 296.
This entry is from Wikipedia, the user-contributed encyclopedia.