6. A Note about Divided Differences
In the previous discussion, we described a perspective on interpolation, numerical differentiation, and quadrature that utilised linear algebra. We framed the interpolation problem as follows: given points \(x_0,\ldots, x_n\) and a point \(x\), determine a set of coefficients \(c_0(x), \ldots, c_n(x)\) such that \[p(x) = c_0(x) p\left( x_0 \right)+c_1(x) p\left( x_1 \right)+\cdots + c_n(x) p\left( x_n \right)\] for all polynomials \(p\) of degree at most \(n\).
We did this by analysing the dual space of \(P_n\), the space of all such polynomials: in essence, we asserted that the linear functional “evaluation at \(x\)” lies in the linear span of \(\left\lbrace \operatorname{ev} _{x_0}, \ldots, \operatorname{ev} _{x_n} \right\rbrace\).
We then picked a basis \(\left\lbrace p_0,\ldots, p_n \right\rbrace\) of \(P_n\); requiring the identity above to hold for each basis polynomial \(p_k\) gives one linear equation per \(k\), and collecting them yields the matrix equation \[\begin{pmatrix} c_0(x) \\ \vdots \\ c_n(x) \end{pmatrix} = \begin{pmatrix} p_0\left( x_0 \right) & \cdots & p_0\left( x_n \right) \\ \vdots & \ddots & \vdots \\ p_n\left( x_0 \right) & \cdots & p_n\left( x_n \right) \end{pmatrix} ^{-1} \begin{pmatrix} p_0(x) \\ \vdots \\ p_n(x) \end{pmatrix}.\]
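To make the solve concrete, here is a minimal NumPy sketch using the monomial basis \(p_k(t) = t^k\); the function name `interpolation_weights` is my own, not from the text.

```python
import numpy as np

def interpolation_weights(xs, x):
    """Coefficients c_0(x), ..., c_n(x) for the monomial basis p_k(t) = t**k.

    Row k of the matrix is (p_k(x_0), ..., p_k(x_n)), exactly as in the
    display above, and the right-hand side is (p_0(x), ..., p_n(x)).
    """
    n = len(xs)
    A = np.vander(xs, n, increasing=True).T   # A[k, i] = xs[i]**k = p_k(x_i)
    b = x ** np.arange(n)                     # b[k] = x**k = p_k(x)
    return np.linalg.solve(A, b)              # c(x) = A^{-1} b
```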
Then, given data points \(y_0,\ldots, y_n\), the value at \(x\) of the polynomial interpolating \(\left( x_0,y_0 \right),\ldots, \left( x_n,y_n \right)\) is given by \[\begin{pmatrix} c_0(x) & \cdots & c_n(x) \end{pmatrix} \begin{pmatrix} y_0\\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} p_0(x) & \cdots & p_n(x) \end{pmatrix} \begin{pmatrix} p_0\left( x_0 \right) & \cdots & p_n\left( x_0 \right) \\ \vdots & \ddots & \vdots \\ p_0\left( x_n \right) & \cdots & p_n\left( x_n \right) \end{pmatrix} ^{-1} \begin{pmatrix} y_0 \\ \vdots \\ y_n \end{pmatrix}.\]
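Correspondingly, here is a sketch evaluating the interpolant itself under the same monomial-basis assumption (`interpolate` is again a name of my choosing). Solving against the square matrix recovers the coefficients of the interpolating polynomial in the basis \(\left\lbrace p_0, \ldots, p_n \right\rbrace\), and the row vector \(\begin{pmatrix} p_0(x) & \cdots & p_n(x) \end{pmatrix}\) then evaluates it.

```python
def interpolate(xs, ys, x):
    """Value at x of the polynomial through (x_0, y_0), ..., (x_n, y_n)."""
    n = len(xs)
    AT = np.vander(xs, n, increasing=True)   # AT[i, k] = p_k(x_i)
    b = x ** np.arange(n)                    # b[k] = p_k(x)
    # solve(AT, ys) gives the interpolant's coefficients in the basis p_k.
    return b @ np.linalg.solve(AT, ys)

# The quadratic through (0, 1), (1, 2), (2, 5) is x^2 + 1, so this
# returns 10.0 (up to rounding):
# interpolate([0, 1, 2], [1, 2, 5], 3.0)
```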
Note that the square matrix got transposed. When \(p_k(x) = x^k\) is our basis, Neville’s method factors the product of the first two terms into a chain of simple matrices, and we described how the intermediate factors provide more information about lower-order interpolating polynomials.
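As a concrete rendering, here is a minimal sketch of Neville’s recurrence; the in-place array scheme is my own phrasing of it, not the matrix factorisation itself. After pass \(k\), `P[i]` holds the value at \(x\) of the polynomial through \(\left( x_i, y_i \right), \ldots, \left( x_{i+k}, y_{i+k} \right)\), which are exactly the lower-order interpolants just mentioned.

```python
def neville(xs, ys, x):
    """Neville's method: repeatedly combine neighbouring interpolants."""
    P = list(ys)          # pass 0: P[i] interpolates the single point (x_i, y_i)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - k):
            # Combine the interpolants on x_i..x_{i+k-1} and x_{i+1}..x_{i+k}.
            P[i] = ((x - xs[i + k]) * P[i] + (xs[i] - x) * P[i + 1]) \
                   / (xs[i] - xs[i + k])
    return P[0]           # value at x of the full interpolant
```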
I gave an incorrect characterisation of the divided differencing method. One can certainly view it as selecting a different dual basis, but I think it’s more accurate to describe it as a smarter choice of \(\left\lbrace p_0, \ldots, p_n \right\rbrace\). Rather than using the standard choice, one defines \(p_k(x) = \prod _{j=0}^{k-1} \left( x-x_j \right)\) as the basis. This makes the matrix of values \(p_j\left( x_i \right)\) in the first display upper triangular (each \(p_j\) vanishes at \(x_0, \ldots, x_{j-1}\)), so its transpose, the square matrix in the interpolation formula, is lower triangular; the method of divided differences is just forward substitution applied to solve that triangular system with right-hand side \(\begin{pmatrix} y_0 & \cdots & y_n \end{pmatrix} ^\top\)!
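Here is that observation as code, a sketch assuming the Newton-style basis above (the helper name `divided_differences` is mine). The lower-triangular entries \(p_k\left( x_i \right) = \prod_{j=0}^{k-1} \left( x_i - x_j \right)\) are built up incrementally, and the forward-substitution solution \(z_k\) is precisely the divided difference usually written \(f\left[ x_0, \ldots, x_k \right]\).

```python
def divided_differences(xs, ys):
    """Forward substitution on the lower-triangular system L z = y,
    where L[i, k] = p_k(x_i) = (x_i - x_0) * ... * (x_i - x_{k-1}).
    """
    n = len(xs)
    z = []
    for i in range(n):
        acc, residual = 1.0, float(ys[i])    # acc = L[i, 0] = 1 (empty product)
        for k in range(i):
            residual -= acc * z[k]           # subtract L[i, k] * z_k
            acc *= xs[i] - xs[k]             # advance to L[i, k + 1]
        z.append(residual / acc)             # divide by the pivot L[i, i]
    return z                                 # z_k = f[x_0, ..., x_k]
```

Evaluating \(\sum_k z_k\, p_k(x)\) (say, by Horner’s rule on the nested Newton form) then reproduces the interpolant from the earlier sketches.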