Divided differences

In mathematics, divided differences is an algorithm historically used for computing tables of logarithms and trigonometric functions.[citation needed] Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.[1]

Divided differences is a recursive division process. Given a sequence of data points $(x_0, y_0), \ldots, (x_n, y_n)$, the method calculates the coefficients of the interpolation polynomial of these points in the Newton form.

Given $n + 1$ data points $(x_0, y_0), \ldots, (x_n, y_n)$, where the $x_k$ are assumed to be pairwise distinct, the forward divided differences are defined as:

$$\begin{aligned}
[y_k] &:= y_k, && k \in \{0, \ldots, n\} \\
[y_k, \ldots, y_{k+j}] &:= \frac{[y_{k+1}, \ldots, y_{k+j}] - [y_k, \ldots, y_{k+j-1}]}{x_{k+j} - x_k}, && k \in \{0, \ldots, n-j\},\ j \in \{1, \ldots, n\}.
\end{aligned}$$
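The recurrence translates directly into code. The following is a minimal sketch in Python (the helper name `divided_differences` is ours, not a standard API): it builds the whole triangular table, and the first entry of each row, $[y_0, \ldots, y_j]$, is a coefficient of the Newton form.

```python
def divided_differences(xs, ys):
    """Triangular table of forward divided differences.

    Row j holds [y_k, ..., y_{k+j}] for k = 0, ..., n - j, so the
    Newton-form coefficients are the first entries table[j][0].
    Assumes the xs are pairwise distinct.
    """
    n = len(xs) - 1
    table = [list(ys)]                      # j = 0: [y_k] = y_k
    for j in range(1, n + 1):
        prev = table[j - 1]
        table.append([(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
                      for k in range(n - j + 1)])
    return table

# Points sampled from f(x) = x^2: the second divided difference is the
# leading coefficient, 1, whatever the (distinct) nodes are.
xs = [1.0, 2.0, 4.0]
ys = [x * x for x in xs]
print(divided_differences(xs, ys))          # [[1.0, 4.0, 16.0], [3.0, 6.0], [1.0]]
```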

To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of $j$ above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by the difference of the corresponding $x$-values:

$$\begin{matrix}
x_0 & y_0 = [y_0] & & & \\
 & & [y_0, y_1] & & \\
x_1 & y_1 = [y_1] & & [y_0, y_1, y_2] & \\
 & & [y_1, y_2] & & [y_0, y_1, y_2, y_3] \\
x_2 & y_2 = [y_2] & & [y_1, y_2, y_3] & \\
 & & [y_2, y_3] & & \\
x_3 & y_3 = [y_3] & & & \\
\end{matrix}$$

Note that the divided difference $[y_k, \ldots, y_{k+j}]$ depends on the values $x_k, \ldots, x_{k+j}$ and $y_k, \ldots, y_{k+j}$, but the notation hides the dependency on the $x$-values. If the data points are given by a function $f$,

$$(x_0, y_0), \ldots, (x_n, y_n) = (x_0, f(x_0)), \ldots, (x_n, f(x_n)),$$

one sometimes writes the divided difference in the notation

$$f[x_k, \ldots, x_{k+j}] \ \stackrel{\text{def}}{=}\ [f(x_k), \ldots, f(x_{k+j})] = [y_k, \ldots, y_{k+j}].$$

Other notations for the divided difference of the function $f$ on the nodes $x_0, \ldots, x_n$ are:

$$f[x_0, \ldots, x_n] = [x_0, \ldots, x_n]f = [x_0, \ldots, x_n; f] = D[x_0, \ldots, x_n]f.$$

Divided differences for $k = 0$ and the first few values of $j$:

$$\begin{aligned}
[y_0] &= y_0 \\
[y_0, y_1] &= \frac{y_1 - y_0}{x_1 - x_0} \\
[y_0, y_1, y_2] &= \frac{[y_1, y_2] - [y_0, y_1]}{x_2 - x_0}
  = \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0}
  = \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)} \\
[y_0, y_1, y_2, y_3] &= \frac{[y_1, y_2, y_3] - [y_0, y_1, y_2]}{x_3 - x_0}
\end{aligned}$$

Thus, the table corresponding to these terms, up to the first two columns of divided differences, has the following form:

$$\begin{matrix}
x_0 & y_0 & & \\
 & & \frac{y_1 - y_0}{x_1 - x_0} & \\
x_1 & y_1 & & \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0} \\
 & & \frac{y_2 - y_1}{x_2 - x_1} & \\
x_2 & y_2 & & \vdots \\
 & & \vdots & \\
\vdots & & & \vdots \\
 & & \vdots & \\
x_n & y_n & & \\
\end{matrix}$$

The divided difference scheme can be put into an upper triangular matrix:

$$T_f(x_0, \dots, x_n) =
\begin{pmatrix}
f[x_0] & f[x_0, x_1] & f[x_0, x_1, x_2] & \ldots & f[x_0, \dots, x_n] \\
0 & f[x_1] & f[x_1, x_2] & \ldots & f[x_1, \dots, x_n] \\
0 & 0 & f[x_2] & \ldots & f[x_2, \dots, x_n] \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \ldots & f[x_n]
\end{pmatrix}.$$
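Assuming the `divided_differences` helper from the sketch above, the triangular table maps directly onto this matrix; a brief NumPy sketch:

```python
import numpy as np

def dd_matrix(xs, ys):
    """Upper triangular T_f: entry (k, k + j) holds [y_k, ..., y_{k+j}]."""
    table = divided_differences(xs, ys)   # helper from the earlier sketch
    n = len(xs)
    T = np.zeros((n, n))
    for j, row in enumerate(table):
        for k, value in enumerate(row):
            T[k, k + j] = value
    return T
```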

Then it holds, for functions $f$, $g$ on the nodes and scalars $\lambda$ (writing $x = (x_0, \dots, x_n)$):

$$T_{f+g}(x) = T_f(x) + T_g(x), \qquad T_{\lambda f}(x) = \lambda\, T_f(x), \qquad T_{f \cdot g}(x) = T_f(x) \cdot T_g(x).$$

The first two identities express the linearity of the divided differences; the third follows from the Leibniz rule for divided differences,

$$(f \cdot g)[x_0, \dots, x_n] = \sum_{r=0}^{n} f[x_0, \dots, x_r] \cdot g[x_r, \dots, x_n].$$

In particular, matrices of divided difference schemes with respect to the same nodes commute, since $T_f(x) \cdot T_g(x) = T_{f \cdot g}(x) = T_{g \cdot f}(x) = T_g(x) \cdot T_f(x)$.
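These identities are easy to spot-check numerically; a sketch using the hypothetical `dd_matrix` helper from above, with $f = \sin$ and $g = \exp$:

```python
import numpy as np

# Spot check of T_{f·g} = T_f · T_g on four distinct nodes.
xs = [0.0, 1.0, 3.0, 4.0]
f_vals = [np.sin(x) for x in xs]
g_vals = [np.exp(x) for x in xs]
fg_vals = [np.sin(x) * np.exp(x) for x in xs]
assert np.allclose(dd_matrix(xs, f_vals) @ dd_matrix(xs, g_vals),
                   dd_matrix(xs, fg_vals))
```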

Polynomials and power series

The matrix

$$J = \begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & \ddots & 1 \\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix}$$

contains the divided difference scheme for the identity function with respect to the nodes $x_0, \dots, x_n$; thus $J^m$ contains the divided differences for the power function with exponent $m$. Consequently, one can obtain the divided differences for a polynomial function $p$ by applying $p$ to the matrix $J$: if

$$p(\xi) = a_0 + a_1 \xi + \dots + a_m \xi^m$$

and

$$p(J) = a_0 I + a_1 J + \dots + a_m J^m,$$

then

$$T_p(x) = p(J).$$

This is known as Opitz' formula.[2][3]
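Opitz' formula can likewise be checked numerically. The sketch below (reusing the hypothetical `dd_matrix` helper from above; the polynomial is an arbitrary example) compares $p(J)$ against the divided difference scheme of $p$:

```python
import numpy as np

# Check T_p(x) = p(J) for p(ξ) = 2 + 3ξ + ξ^3 on four distinct nodes.
xs = [0.0, 1.0, 2.0, 5.0]
coeffs = [2.0, 3.0, 0.0, 1.0]                 # a_0, a_1, a_2, a_3
J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), k=1)

p_of_J = sum(a * np.linalg.matrix_power(J, k) for k, a in enumerate(coeffs))
ys = [2.0 + 3.0 * x + x ** 3 for x in xs]
assert np.allclose(p_of_J, dd_matrix(xs, ys))
```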

Now consider increasing the degree of $p$ to infinity, i.e. turning the Taylor polynomial into a Taylor series. Let $f$ be a function which corresponds to a power series. One can compute the divided difference scheme for $f$ by applying the corresponding matrix series to $J$: if

$$f(\xi) = \sum_{k=0}^{\infty} a_k \xi^k$$

and

$$f(J) = \sum_{k=0}^{\infty} a_k J^k,$$

then

$$T_f(x) = f(J).$$
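For example, with $f = \exp$ the matrix series is the matrix exponential, so the scheme $T_{\exp}(x)$ should equal $e^J$. A sketch using SciPy's `expm` together with the `dd_matrix` helper from above:

```python
import numpy as np
from scipy.linalg import expm

xs = [0.0, 0.5, 1.0, 2.0]
J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), k=1)
ys = [np.exp(x) for x in xs]
assert np.allclose(expm(J), dd_matrix(xs, ys))  # e^J reproduces T_exp
```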

Alternative characterizations

In expanded form, the divided differences can be written explicitly in terms of the function values:

$$\begin{aligned}
f[x_0] &= f(x_0) \\
f[x_0, x_1] &= \frac{f(x_0)}{x_0 - x_1} + \frac{f(x_1)}{x_1 - x_0} \\
f[x_0, x_1, x_2] &= \frac{f(x_0)}{(x_0 - x_1)(x_0 - x_2)} + \frac{f(x_1)}{(x_1 - x_0)(x_1 - x_2)} + \frac{f(x_2)}{(x_2 - x_0)(x_2 - x_1)} \\
f[x_0, x_1, x_2, x_3] &= \frac{f(x_0)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)} + \frac{f(x_1)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)} \\
&\quad + \frac{f(x_2)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)} + \frac{f(x_3)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} \\
f[x_0, \dots, x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k \in \{0, \dots, n\} \setminus \{j\}} (x_j - x_k)}
\end{aligned}$$

With the help of the polynomial function $\omega(\xi) = (\xi - x_0) \cdots (\xi - x_n)$ this can be written as

$$f[x_0, \dots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}.$$
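The expanded form gives a recursion-free way to evaluate a single divided difference. A sketch comparing it against the recurrence (`divided_differences` is the helper from the first sketch; `dd_expanded` is our own name):

```python
import math

def dd_expanded(xs, f_vals):
    """f[x_0, ..., x_n] via the expanded sum of f(x_j) / ω'(x_j)."""
    n = len(xs)
    return sum(f_vals[j] / math.prod(xs[j] - xs[k] for k in range(n) if k != j)
               for j in range(n))

xs = [0.0, 1.0, 3.0, 6.0]
f_vals = [math.cos(x) for x in xs]
top = divided_differences(xs, f_vals)[-1][0]  # [y_0, ..., y_n] by the recurrence
assert math.isclose(dd_expanded(xs, f_vals), top)
```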

If $x_0 < x_1 < \cdots < x_n$ and $n \geq 1$, the divided differences can be expressed as[4]

$$f[x_0, \ldots, x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\, B_{n-1}(t)\, dt,$$

where $f^{(n)}$ is the $n$-th derivative of the function $f$ and $B_{n-1}$ is a certain B-spline of degree $n-1$ for the data points $x_0, \dots, x_n$, given by the formula

$$B_{n-1}(t) = \sum_{k=0}^{n} \frac{(\max(0, x_k - t))^{n-1}}{\omega'(x_k)}.$$

This is a consequence of the Peano kernel theorem; it is called the Peano form of the divided differences and {\displaystyle B_{n-1}} is the Peano kernel for the divided differences, all named after Giuseppe Peano.
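For $n = 2$ the Peano form can be verified by direct quadrature: $f[x_0, x_1, x_2] = \frac{1}{1!} \int_{x_0}^{x_2} f''(t)\, B_1(t)\, dt$. A sketch with $f = \sin$ (nodes chosen arbitrarily; `dd_expanded` is the helper from the previous sketch, and the comparison is approximate because of quadrature error):

```python
import math
from scipy.integrate import quad

xs = [0.0, 0.4, 1.0]

def omega_prime(j):
    return math.prod(xs[j] - xs[k] for k in range(len(xs)) if k != j)

def B1(t):
    # Degree-1 B-spline kernel: sum over k of (max(0, x_k - t))^1 / ω'(x_k).
    return sum(max(0.0, x_k - t) / omega_prime(k) for k, x_k in enumerate(xs))

# f = sin, so f'' = -sin; the kink at x_1 is passed to quad as a break point.
integral, _ = quad(lambda t: -math.sin(t) * B1(t), xs[0], xs[-1], points=xs[1:-1])
f_vals = [math.sin(x) for x in xs]
assert math.isclose(integral, dd_expanded(xs, f_vals), rel_tol=1e-6)
```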

Forward and backward differences

When the data points are equidistantly distributed, we get the special case called forward differences. They are easier to calculate than the more general divided differences.

Given $n + 1$ data points $(x_0, y_0), \ldots, (x_n, y_n)$ with

$$x_k = x_0 + kh \quad \text{for } k = 0, \ldots, n \text{ and fixed } h > 0,$$

the forward differences are defined as

$$\begin{aligned}
\Delta^{(0)} y_k &:= y_k, && k = 0, \ldots, n, \\
\Delta^{(j)} y_k &:= \Delta^{(j-1)} y_{k+1} - \Delta^{(j-1)} y_k, && k = 0, \ldots, n - j,\ j = 1, \dots, n,
\end{aligned}$$

whereas the backward differences are defined as

$$\begin{aligned}
\nabla^{(0)} y_k &:= y_k, && k = 0, \ldots, n, \\
\nabla^{(j)} y_k &:= \nabla^{(j-1)} y_k - \nabla^{(j-1)} y_{k-1}, && k = j, \ldots, n,\ j = 1, \dots, n.
\end{aligned}$$

(Note the index range for $\nabla^{(j)} y_k$: the recursion reaches back to $y_{k-j}$, so $k$ runs from $j$ to $n$.) Thus the forward difference table is written as

$$\begin{matrix}
y_0 & & & \\
 & \Delta y_0 & & \\
y_1 & & \Delta^2 y_0 & \\
 & \Delta y_1 & & \Delta^3 y_0 \\
y_2 & & \Delta^2 y_1 & \\
 & \Delta y_2 & & \\
y_3 & & & \\
\end{matrix}$$

whereas the backward difference table is written as

$$\begin{matrix}
y_0 & & & \\
 & \nabla y_1 & & \\
y_1 & & \nabla^2 y_2 & \\
 & \nabla y_2 & & \nabla^3 y_3 \\
y_2 & & \nabla^2 y_3 & \\
 & \nabla y_3 & & \\
y_3 & & & \\
\end{matrix}$$
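Forward differences are cheap to tabulate, requiring only subtractions; a minimal sketch:

```python
def forward_differences(ys):
    """Rows of the forward difference table: row j holds Δ^(j) y_k, k = 0..n-j."""
    rows = [list(ys)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    return rows

print(forward_differences([1.0, 4.0, 9.0, 16.0]))
# [[1.0, 4.0, 9.0, 16.0], [3.0, 5.0, 7.0], [2.0, 2.0], [0.0]]
```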

The relationship between divided differences and forward differences is[5]

$$[y_j, y_{j+1}, \ldots, y_{j+k}] = \frac{1}{k!\, h^k}\, \Delta^{(k)} y_j,$$

whereas for backward differences:[citation needed]

$$[y_j, y_{j-1}, \ldots, y_{j-k}] = \frac{1}{k!\, h^k}\, \nabla^{(k)} y_j.$$
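The forward relationship is easy to verify on an equidistant grid (reusing the `divided_differences` and `forward_differences` helpers from the sketches above):

```python
import math

# Check [y_0, ..., y_k] = Δ^(k) y_0 / (k! h^k) for f = exp on a uniform grid.
h = 0.25
xs = [1.0 + k * h for k in range(5)]
ys = [math.exp(x) for x in xs]
dd = divided_differences(xs, ys)
fd = forward_differences(ys)
for k in range(len(xs)):
    assert math.isclose(dd[k][0], fd[k][0] / (math.factorial(k) * h ** k))
```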

  1. ^ Isaacson, Walter (2014). The Innovators. Simon & Schuster. p. 20. ISBN 978-1-4767-0869-0.
  2. ^ de Boor, Carl, Divided Differences, Surv. Approx. Theory 1 (2005), 46–69.
  3. ^ Opitz, G., Steigungsmatrizen, Z. Angew. Math. Mech. (1964), 44, T52–T54.
  4. ^ Skof, Fulvia (2011-04-30). Giuseppe Peano between Mathematics and Logic: Proceeding of the International Conference in honour of Giuseppe Peano on the 150th anniversary of his birth and the centennial of the Formulario Mathematico Torino (Italy) October 2-3, 2008. Springer Science & Business Media. p. 40. ISBN 978-88-470-1836-5.
  5. ^ Burden, Richard L.; Faires, J. Douglas (2011). Numerical Analysis (9th ed.). Cengage Learning. p. 129. ISBN 9780538733519.
  • Louis Melville Milne-Thomson (2000) [1933]. The Calculus of Finite Differences. American Mathematical Soc. Chapter 1: Divided Differences. ISBN 978-0-8218-2107-7.
  • Myron B. Allen; Eli L. Isaacson (1998). Numerical Analysis for Applied Science. John Wiley & Sons. Appendix A. ISBN 978-1-118-03027-1.
  • Ron Goldman (2002). Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling. Morgan Kaufmann. Chapter 4: Newton Interpolation and Difference Triangles. ISBN 978-0-08-051547-2.