Stable Newton-Cotes Formulas

The purpose of this page is to propose numerical integration formulas for uniformly spaced data which are numerically stable even for high orders of approximation.

We examine the widely used Newton-Cotes formulas from the perspective of digital filter analysis, exploring their infamous instability with frequency-domain tools.

Then we derive stable Newton-Cotes quadrature rules based on least-squares approximation instead of interpolation. Reference formulas and their properties are provided.

One of the classical tasks of numerical integration is to estimate the integral of a function f(x) over x\in[a,b]:

(1)   \[ I(f)=\int_a^b f(x)\,\mathrm{d}x \]

given only the discrete values \{f_i=f(x_i)\} sampled at equidistant points x_i with step h:

    \begin{align*} x_i &= a+i\,h,\,\,\,\,i=0,\dots,N-1\\ h &=\frac{b-a}{N-1} \end{align*}

The Newton-Cotes numerical integration rule Q_N(f) is a weighted sum of function values:

(2)   \[ Q_N(f)=\sum_{i=0}^{N-1}c_i\,f(x_i) \]

where the coefficients \{c_i\} are derived from the interpolating polynomial fitted through the points \{(x_i, f_i)\}:

    \begin{align*} Q_2(f) &= \frac{h}{2}(f_0+f_1) && \text{trapezoidal rule}\\ Q_3(f) &= \frac{h}{3}(f_0+4\,f_1+f_2) && \text{Simpson's rule}\\ Q_4(f) &= \frac{3\,h}{8}(f_0+3\,f_1+3\,f_2+f_3) && \text{Simpson's 3/8 rule}\\ Q_5(f) &= \frac{2\,h}{45}(7\,f_0+32\,f_1+12\,f_2+32\,f_3+7\,f_4) && \text{Boole's rule}\\ Q_6(f) &= \frac{5\,h}{288}(19\,f_0+75\,f_1+50\,f_2+50\,f_3+75\,f_4+19\,f_5) \end{align*}
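Before looking at the spectra, the classic rules above are easy to try out. Below is a minimal Python sketch (the derivations on this page used Maple and C++; Python here is purely illustrative) applying three of the rules to f(x)=e^x on [0,1]:

```python
import math

def newton_cotes(f, a, b, weights, scale):
    """Apply a single Newton-Cotes rule: Q = scale * h * sum(w_i * f(x_i)),
    with h = (b - a)/(N - 1) and N = len(weights)."""
    n = len(weights)
    h = (b - a) / (n - 1)
    return scale * h * sum(w * f(a + i * h) for i, w in enumerate(weights))

f = math.exp
exact = math.e - 1.0                                     # integral of e^x over [0, 1]

trap    = newton_cotes(f, 0.0, 1.0, [1, 1],             1 / 2)   # trapezoidal
simpson = newton_cotes(f, 0.0, 1.0, [1, 4, 1],          1 / 3)   # Simpson
boole   = newton_cotes(f, 0.0, 1.0, [7, 32, 12, 32, 7], 2 / 45)  # Boole

for name, q in [("trapezoidal", trap), ("Simpson", simpson), ("Boole", boole)]:
    print(f"{name:12s} error = {abs(q - exact):.2e}")
```

For a smooth function like e^x the error drops rapidly with the order of the rule, which is exactly the behavior interpolation-based quadrature promises at low orders.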

The idea is to build the interpolating polynomial P_{N-1}(x) of the highest degree allowed by the number of nodes \{x_i\} (the degrees of freedom in the system) and assume that I(P_{N-1}) \approx I(f). The implicit hope is that a polynomial of high degree can resemble the behavior of more complex functions and thereby minimize the estimation error.

However, this implication does not hold in general. The quality of polynomial approximation of a function f(x) depends drastically on how the interpolation nodes \{x_i\} are chosen [Trefethen, Rivlin]. It is well known (see Runge’s phenomenon) that for equally spaced data the approximation error can in fact grow as we increase the number of interpolation nodes and use a polynomial of higher degree.
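Runge’s phenomenon is easy to reproduce. A short Python sketch (illustrative only) interpolates f(x)=1/(1+25x^2) on equispaced nodes and measures the maximum error, which grows as nodes are added:

```python
import numpy as np

# Interpolate Runge's function on equispaced nodes and measure the max error.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xx = np.linspace(-1.0, 1.0, 2001)          # dense grid for measuring the error

errors = {}
for n in (5, 9, 13, 17):                   # number of equispaced nodes
    xi = np.linspace(-1.0, 1.0, n)
    # Interpolating polynomial of degree n-1 through the n samples.
    p = np.polynomial.Polynomial.fit(xi, f(xi), deg=n - 1)
    errors[n] = np.max(np.abs(p(xx) - f(xx)))

for n, e in errors.items():
    print(f"N = {n:2d}  max interpolation error = {e:.3f}")
```

Instead of converging, the error near the interval ends blows up with N. This is the same mechanism that makes high-order Newton-Cotes weights misbehave.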

This makes it impossible to guarantee convergence Q_N(f)\rightarrow I(f) as N\rightarrow\infty, except for very specific classes of functions [Krylov 145]. As a consequence, high-order Newton-Cotes rules tend to give inferior results compared to lower-order formulas.

Spectral properties of Newton-Cotes formulas

Let’s study the properties of Newton-Cotes formulas in the frequency domain (their spectral properties) using DSP tools and terminology. NC rules are symmetric digital filters with finite impulse response. One of the most important characteristics of a digital filter is its magnitude (frequency) response – a function which shows how much the filter damps or amplifies the magnitude of a particular frequency contained in the input data.
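For a rule with coefficients \{c_k\}, the transfer function is H(\omega)=\sum_k c_k\,e^{-ik\omega}. A minimal Python sketch of the magnitude response follows, with the weights normalized so that H(0)=1 (constants are integrated exactly):

```python
import numpy as np

def magnitude_response(weights, omega):
    """|H(omega)| of a quadrature rule viewed as an FIR filter."""
    c = np.asarray(weights, dtype=float)
    c = c / c.sum()                        # normalize so that H(0) = 1
    k = np.arange(len(c))
    return np.abs(np.exp(-1j * np.outer(omega, k)) @ c)

omega = np.linspace(0.0, np.pi, 512)
resp_simpson = magnitude_response([1, 4, 1], omega)           # Simpson
resp_boole   = magnitude_response([7, 32, 12, 32, 7], omega)  # Boole

# At omega = 0 the response is 1; at the Nyquist end it is damped:
# |H(pi)| = 1/3 for Simpson and 38/90 for Boole.
print(resp_simpson[0], resp_simpson[-1], resp_boole[-1])
```

Plotting these curves over \omega\in[0,\pi] reproduces the low-pass shapes discussed below.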

Response functions of the first several NC formulas are presented below:

As we can see, classical NC rules tend to suppress high frequencies (noise) in the input data and leave the lowest harmonics (pure signal) almost intact. In that sense they can be called smoothing or low-pass filters.

The Simpson 3/8 rule has the strongest noise suppression (most stable); plain Simpson has the weakest. Boole’s formula damps mid-range harmonics but gives a somewhat larger response at the noisy high frequencies. Overall, from the DSP point of view, these rules can be considered stable and acceptable for practical applications.

Let’s take a look at the magnitude responses of high-order Newton-Cotes rules, where the numerical instability shows up.

Here the situation is much more interesting. The rules are not low-pass anymore; in fact they act as powerful noise amplifiers, strengthening high frequencies to a great extent. This makes the formulas numerically unstable, as rounding errors (or other “noisy” components) are amplified and may deteriorate the accuracy.

Although it is well known that high-order Newton-Cotes formulas are unsuitable for applications, the frequency domain gives us a clear illustration of one of the reasons for this unsuitability – noise amplification. It is closely related to the error-propagation characteristic \|\boldsymbol{c}\|_1 = \sum |c_i|, which is commonly used to measure the stability of numerical methods in classical analysis.
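The growth of \sum|c_i| with N is easy to check numerically. The sketch below recovers the classic weights from the moment conditions \sum_i c_i\,x_i^j=\int_0^{N-1}x^j\,\mathrm{d}x (taking h=1) and evaluates the normalized norm; it stays at 1 while all weights are positive and starts growing once negative weights appear:

```python
import numpy as np

def nc_weights(n):
    """Classic Newton-Cotes weights on nodes 0..n-1 (h = 1), from the
    moment conditions sum_i c_i * x_i**j = integral of x**j over [0, n-1]."""
    x = np.arange(n, dtype=float)
    V = np.vander(x, n, increasing=True).T            # V[j, i] = x_i**j
    m = np.array([(n - 1) ** (j + 1) / (j + 1) for j in range(n)])
    return np.linalg.solve(V, m)

for n in (3, 5, 7, 9, 11):
    c = nc_weights(n)
    norm = np.sum(np.abs(c)) / (n - 1)                # normalize by b - a
    print(f"N = {n:2d}  sum|c_i|/(b-a) = {norm:.3f}  negative weights: {np.any(c < 0)}")
```

For N = 9 the normalized norm comes out at about 1.45, matching the figure quoted later on this page.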

To overcome this issue, one usually switches from equidistant integration rules to one of the more advanced quadratures of Gauss type (Legendre, Laguerre, Jacobi, etc.). There the nodes are distributed more densely towards the interval boundaries, which prevents the “wild” behavior of the interpolating polynomial and leads to stable computations.

However, in many practical applications only uniformly spaced function values are available, and stable Newton-Cotes-type formulas are needed. In the following sections we propose a novel approach to building them using a least-squares polynomial fit instead of interpolation.

Low Noise Newton-Cotes rules

There are several ways to improve high-order Newton-Cotes formulas. The most obvious is to re-target some of the degrees of freedom in the system from the highest approximation order towards regularization and stronger noise suppression. A natural way of doing this is to use least-squares approximation instead of interpolation.

In contrast to interpolation, least squares makes it possible to derive several integration filters of the same approximation order. Here we consider the most commonly needed O(h^5) and O(h^7) formulae.

    \[ \begin{tabular}{|r|c|l|} \hline $N$& Classic Newton-Cotes Formulas of $O(\displaystyle{{h}^{5}\,f^{(4)}})$& Error Term\\ \hline &&\\ 3&$\displaystyle{\frac{h}{3}\,\left( f_{{0}}+4\,f_{{1}}+f_{{2}} \right)}$&$\displaystyle{\frac{{h}^{5}}{90}\,f^{(4)}(\xi)}$\\ &&\\ 4&$\displaystyle{\frac{3\,h}{8}\,\left(f_{{0}}+3\,f_{{1}}+3\,f_{{2}}+f_{{3}}\right)}$&$\displaystyle{\frac{3\,{h}^{5}}{80}\,f^{(4)}(\xi)}$\\ &&\\ \hline $N$& Stable/Low Noise Newton-Cotes Formulas of $O(\displaystyle{{h}^{5}\,f^{(4)}})$& Error Term\\ \hline &&\\ 5&$\displaystyle{\frac{4\,h}{105}\,\left( 11\,f_{{0}}+26\,f_{{1}}+31\,f_{{2}}+26\,f_{{3}}+11\,f_{{4}} \right)}$&$\displaystyle{\frac{34\,{h}^{5}}{315}\,f^{(4)}(\xi)}$\\ &&\\ 6&$\displaystyle{\frac{5\,h}{336}\,\left( 31\,f_{{0}}+61\,f_{{1}}+76\,f_{{2}}+76\,f_{{3}}+61\,f_{{4}}+31\,f_{{5}}\right)}$&$\displaystyle{\frac{265\,{h}^{5}}{1008}\,f^{(4)}(\xi)}$\\ &&\\ 7&$\displaystyle{\frac{h}{14}\,\left( 7\,f_{{0}}+12\,f_{{1}}+15\,f_{{2}}+16\,f_{{3}}+15\,f_{{4}}+12\,f_{{5}}+7\,f_{{6}} \right)}$&$\displaystyle{\frac{39\,{h}^{5}}{70}\,f^{(4)}(\xi)}$\\ &&\\ \hline \end{tabular} \]

The first two rules are the classics – Simpson’s and Simpson’s 3/8. The others have much stronger noise suppression at the cost of a larger constant in the error term. Magnitude responses are shown below for comparison:
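As a cross-check, the N=5 rule from the table can be reproduced with a few lines of Python: fit a cubic by least squares and integrate the fit exactly. The weight identity w^T=m^T(A^TA)^{-1}A^T is standard least-squares algebra, with A the Vandermonde matrix of the nodes and m the vector of monomial moments:

```python
import numpy as np

n, deg = 5, 3                                    # 5 samples, cubic fit
x = np.arange(n, dtype=float)                    # nodes 0..4, h = 1
A = np.vander(x, deg + 1, increasing=True)       # A[i, j] = x_i**j
m = np.array([(n - 1) ** (j + 1) / (j + 1) for j in range(deg + 1)])

# Integrate the least-squares fit exactly: w = A (A^T A)^{-1} m.
w = A @ np.linalg.solve(A.T @ A, m)

print(w * 105 / 4)    # recovers the tabulated weights (11, 26, 31, 26, 11)
```

The same recipe with a degree-5 fit on 7, 8 or 9 nodes yields the O(h^7) rules in the next table.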

Formulas of higher approximation order can be derived analogously:

    \[ \begin{tabular}{|r|c|l|} \hline $N$& Classic Newton-Cotes Formulas of $O(\displaystyle{{h}^{7}\,f^{(6)}})$& Error Term\\ \hline &&\\ 5&$\displaystyle{\frac{2\,h}{45}\,\left( 7\,f_{{0}}+32\,f_{{1}}+12\,f_{{2}}+32\,f_{{3}}+7\,f_{{4}} \right)}$&$\displaystyle{\frac{8\,{h}^{7}}{945}\,f^{(6)}(\xi)}$\\ &&\\ 6&$\displaystyle{\frac{5\,h}{288}\,\left( 19\,f_{{0}}+75\,f_{{1}}+50\,f_{{2}}+50\,f_{{3}}+75\,f_{{4}}+19\,f_{{5}} \right)}$&$\displaystyle{\frac{275\,{h}^{7}}{12096}\,f^{(6)}(\xi)}$\\ &&\\ \hline $N$& Stable/Low Noise Newton-Cotes Formulas of $O(\displaystyle{{h}^{7}\,f^{(6)}})$& Error Term\\ \hline &&\\ 7&$\displaystyle{\frac{h}{770}\,\left( 268\,f_{{0}}+933\,f_{{1}}+786\,f_{{2}}+646\,f_{{3}}+786\,f_{{4}}+933\,f_{{5}}+268\,f_{{6}} \right)}$&$\displaystyle{\frac{17\,{h}^{7}}{308}\,f^{(6)}(\xi)}$\\ &&\\ 8&$\displaystyle{\frac{7\,h}{31680}\,\left[ \begin{matrix}  1657\,(f_{{0}}+f_{{7}})+5157\,(f_{{1}}+f_{{6}})+\\  4947\,(f_{{2}}+f_{{5}})+4079\,(f_{{3}}+f_{{4}}) \end{matrix} \right]   }$&$\displaystyle{\frac{11767\,{h}^{7}}{95040}\,f^{(6)}(\xi)}$\\  &&\\ 9&$\displaystyle{\frac{8\,h}{6435}\,\left[ \begin{matrix} 309\,(f_{{0}}+f_{{8}})+869\,(f_{{1}}+f_{{7}})+\\ 904\,(f_{{2}}+f_{{6}})+779\,(f_{{3}}+f_{{5}})+713\,f_{{4}} \end{matrix} \right]   }$&$\displaystyle{\frac{2696\,{h}^{7}}{10395}\,f^{(6)}(\xi)}$\\  &&\\ \hline \end{tabular} \]

with responses:

All regularized Newton-Cotes rules show improved resistance to noise (and hence elevated numerical stability) in comparison with their classic counterparts. The proposed formulas can be used as a basis for composite rules, adaptive integration algorithms and even more exotic overlapped compound rules.

Stable Newton-Cotes Formulas of High Order

Least squares can also be used to derive numerically stable rules of high order. Here we consider filters of O(h^{11}). The usual NC formula for the case N=9 is highly unstable and cannot be used in practice – it amplifies noise, and its norm is high: \sum |c_i|=1.45. Our approach overcomes both issues.

A stable Newton-Cotes quadrature of O(h^{11}) needs six additional function evaluations to achieve \sum |c_i|=1.0:

The classic NC formula is marked in red, the proposed one in green. As we can see, the new rule is perfectly stable.

In fact this approach allows building a whole family of stabilized rules for any particular O(h^{m}). The case N=15 is simply the minimum N required to achieve \sum |c_i|=1.0 for O(h^{11}). All filters with N>15 are also stable (although with a larger constant in the error term) and usable.
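For illustration, one member of this family can be computed with plain least squares: a degree-9 fit (matching the O(h^{11}) error order) on N=15 equispaced nodes, integrated exactly. This is a sketch in the spirit of the construction above, not the exact derivation used for the stabilized coefficients, which may involve additional criteria:

```python
import numpy as np

n, deg = 15, 9
t = np.linspace(-1.0, 1.0, n)                   # nodes scaled to [-1, 1]
A = np.vander(t, deg + 1, increasing=True)      # A[i, j] = t_i**j
# Monomial moments over [-1, 1]; odd moments vanish by symmetry.
m = np.array([(1.0 - (-1.0) ** (j + 1)) / (j + 1) for j in range(deg + 1)])

# Integrate the degree-9 least-squares fit exactly: w = A (A^T A)^{-1} m.
w = A @ np.linalg.solve(A.T @ A, m)

print("sum of weights:", w.sum())               # equals 2, the interval length
print("normalized norm:", np.sum(np.abs(w)) / 2.0)
```

By construction the rule is exact for all polynomials up to degree 9; whenever the computed weights are all positive, the normalized norm \sum|c_i|/(b-a) equals exactly 1.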

If you are interested in the coefficients of the stabilized filters, please contact me by email or write a comment below. I omitted them since they would clutter the page.

*
If you find these ideas interesting please do not hesitate to contact me for further discussion.

**
All equations on this page are rendered using QuickLaTeX. I used Maple for all derivations, and C++ for validation and testing.



22 Comments

  1. renato
    Posted May 24, 2011 at 5:42 pm | #

    I’m writing a piece of code to integrate the motion equation of an AC electric motor coupled with a pump.
    Given the shape of the motor torque-speed curve I have a problem of stability of the solution when approaching the equilibrium point (the slope of the motor torque is almost vertical there).
    I tried your 9-term enhanced and stabilized formula and I got far more stable results than with the Newton-Cotes formulae, but not yet stability. So I would like to try your 15-term formula.
    Could you tell me the coefficients I need to use? It would be very kind of you.

    thank you

    Renato

    • Posted May 25, 2011 at 9:51 am | #

      Renato,

      Here is formula for N=15 with O(h^{11}):

      h*(0.2905619972*(f[0]+f[14]) + 1.539912853*(f[1]+f[13]) + 0.3033266023*(f[2]+f[12])
      +1.466074838*(f[3]+f[11]) + 1.146540574*(f[4]+f[10]) + 0.5808175934*(f[5]+f[9])
      +0.9764347770*(f[6]+f[8]) + 1.392661532*f[7])

      I think you will get better results for your task using composite rules based on overlapped Newton-Cotes formulas.
      Enhanced composite rules (2)-(4) will fit perfectly.

  2. renato
    Posted May 27, 2011 at 6:05 pm | #

    Hallo Pavel,
    Just to inform you, I didn’t succeed in stabilizing my computational scheme; I think it’s intrinsically unstable. The only thing to do is reduce the time step.
    Anyhow, your 9-term stabilized formula gave the best outcome, better than the 15-term one. I find it’s an outstanding quadrature scheme! The overlapped formula showed a strange behaviour, introducing oscillations, but perhaps I have erroneously implemented it.
    I wish to thank you one more time for your suggestions and help. It will be useful to me for future work too.

    Best regards

    Renato

    • Posted May 29, 2011 at 1:41 pm | #

      Thank you for your feedback!

      If you are allowed to change the time step arbitrarily, the best bet would be to check Gauss-Legendre Quadrature or Double Exponential Quadrature. The latter is particularly good when the function has singularities, like in your case.

      As for overlapped composite rules, could you send me your code – I’ll study why it is not working as expected?

  3. Posted November 25, 2011 at 12:56 pm | #

    Hi Pavel,

    I am interested in your Least-Square integration equations. Your work seems impressive and original. Always nice to find new algorithms.

    Regards,

    Namir

  4. Nikos
    Posted December 23, 2011 at 10:22 am | #

    Dear Pavel

    I tried to reproduce the Low Noise formula for N=5: h*(a[2]*f[0]+a[1]*f[1]+a[0]*f[2]+a[1]*f[3]+a[2]*f[4])

    I have obtained:

    a[0] = -4/3+6*a[2], a[1] = 8/3-4*a[2] for the algebraic order 4, but how can I obtain the coefficient a[2]? For this case which is the new equation for reduce noise?

    Thank you for your help

    • Posted December 26, 2011 at 10:06 am | #

      Nikos,
      To reproduce the formula for N=5 we use a least-squares fit with a cubic polynomial: \sum\limits_{k=0}^{4}{(P_3(x_k)-f_k)^2}\rightarrow\min.
      The cubic has fewer unknown parameters (4) than data samples (5) – this pushes the fitted polynomial to cut the high-frequency component of the signal.
      Thus noise suppression is obtained naturally from the least-squares approximation. Classic formulae use interpolation, which is disastrous for noisy data.

      • Nikos
        Posted December 27, 2011 at 4:34 am | #

        So, the relation is: \sum_{i=1}^{4}|c_{i}| = 1 is the correct one? (c_{i} are the coefficients of the method).

        Do you agree with the above relation?

        • Posted December 27, 2011 at 11:17 am | #

          The norm \|\boldsymbol{c}\|_1 = \sum |c_i| is a stability measure for an integration formula. Classic NC formulas of high order have \sum |c_i|>1, which means they are unstable.

          My approach allows deriving stable high-order rules with \sum |c_i|=1 – this is perhaps the most important result of this work.
          However, we do not employ this condition directly in the derivation procedure – it comes naturally from the least-squares approximation, which brings stability.

          So, yes, N=5 has \sum |c_i|=1.

          • Nikos
            Posted December 27, 2011 at 7:33 pm | #

            Thanks Pavel

            I wish you a very very happy 2012!!!

          • Posted December 28, 2011 at 9:30 am | #

            Same to you!!!

            And please let me know on your results with stable Newton-Cotes rules….

  5. Panagiotis
    Posted May 27, 2012 at 4:26 am | #

    Dear Pavel

    I have seen your work and it is very interesting

    Can you please explain me how you have derived the graph for Simpson’s for example rule?

    Thanks in advance

  6. azam
    Posted November 16, 2013 at 1:35 am | #

    hi
    thank you for text .it was very interesting .

    azam

  7. RobS
    Posted December 31, 2014 at 11:02 pm | #

    I found my way here because I was looking at least-squares based numerical quadrature.

    You might find this interesting …

    For your new rules: since the underlying function is a least-squares polynomial, the least-squares integrator is related to a Savitzky-Golay filter. For example, if you fit a cubic to 5 points you can derive the integrator coefficients by multiplying the least-squares estimated y-values from the Savitzky-Golay filter with the Newton-Cotes integration coefficients – which effectively integrates the underlying least-squares polynomial exactly. For example,

    integral = weights_NC5 * SG3_5_coeff matrix * y_values
    = [0.2095 0.4952 0.5905 0.4952 0.2095] * y_values
    = weights for stable NC3 * y_values
    Here,
    weights_NC5 = (1/45) * [ 7 32 12 32 7 ]
    and,
    SG3_5_coeff matrix = Savitzky-Golay Coefficient matrix
    = [ 0.9857 0.0571 -0.0857 0.0571 -0.0143]
    [ 0.0571 0.7714 0.3429 -0.2286 0.0571]
    [ -0.0857 0.3429 0.4857 0.3429 -0.0857]
    [ 0.0571 -0.2286 0.3429 0.7714 0.0571]
    [ -0.0143 0.0571 -0.0857 0.0571 0.9857]

    It is interesting to create least-squares quadrature rules for other well known quadrature rules. These rules obviously don’t integrate as high an order of polynomial as their interpolating brothers.

    For example take the Gauss 5-point rule (exactly integrates polynomials of degree 9), then fit a least-squares cubic. We get a quadrature rule for the Gauss 5-point abscissa with weights,

    w_lsGauss5_3 = [0.241778663 0.464888004 0.586666667 0.464888004 0.241778663]

    It only integrates a polynomial of degree 3 exactly, but it integrates higher-order polynomials with an impressively low error.
    (a bit like Clenshaw-Curtis I suppose).

    Anyway, thanks for the site. Cool stuff!

    Rob

    • Posted August 17, 2015 at 10:16 pm | #

      Interesting observation. My initial idea was to consider classic formulas from frequency domain perspective.

      As it turned out – frequency domain gives us much more degrees of freedom on how to build formulas for numerical methods (and more tools for analysis).

      The condition of exactness on polynomials can be formulated in the frequency domain as a condition at only one point! Usually the frequency response (and its derivatives) of the formula should have certain values at the origin. That’s it – a one-point condition.

      At the remaining frequencies we can impose any conditions without losing the exactness on polynomials. For example, we can make formulas smooth and noise-robust (e.g. for numerical differentiation or smoothing), or suppress some range of noisy frequencies, or anything else. There is an infinite number of possible criteria :).

      The nice thing is that all conditions (like least squares) can also be re-formulated in terms of the frequency domain, so that we don’t lose any criteria.

      Numerical integration methods are a bit more complex, but recently (and finally!) I understood how they can be transferred to the frequency domain. As it turns out, polynomial approximation can be re-formulated in terms of the \text{sinc}(\omega) function. This is a very interesting fact on its own. Using it, we now have the freedom to build non-uniform rules with elevated smoothness, or whatever other properties we want.

  8. Toon Weyens
    Posted May 18, 2016 at 9:16 pm | #

    Dear Pavel,

    Very interesting ideas that you are writing down here!

    I was wondering whether it is possible to extend this to numerical differentiation as well, using the same philosophy of taking an excess of points to maximize the bandwidth. It is well-known that finite differences suffer from inaccuracies for higher orders as well…

    Regards,
    Toon Weyens

  9. Elmar Zander
    Posted November 16, 2016 at 6:40 pm | #

    Hi Pavel,

    very interesting idea. However, I have a problem understanding the frequency response plot you show. I’ve coded the rules and frequency response function in Mathematica, and I get the same results as you show in your plot when I choose h=1/(N-1) but f_i=f(a+i), not f_i=f(a+ih). I further plotted the response of the exact integral from 0 to 1 and that showed even worse damping than all the Newton-Cotes rules.
    However, when I choose f_i = f(a+ih) the results look quite different. Then the trapezoidal rule seems to damp far too much and all the others damp a lot less, but are very close in damping to the exact integration.

    Am I getting something wrong here?

    Kind regards,
    Elmar

    • Posted November 17, 2016 at 12:20 pm | #

      Hi Elmar,

      Thank you for your interest.
      WLOG, to build the frequency responses, we assume a=-1/2 and b=1/2, so that the step has the simple expression h=1/(N-1).

      Let’s consider the 4h/105 formula as an example. After plugging in h, the formula has the coefficients: [11 26 31 26 11]/105

      Frequency response of a symmetric filter with coeffs. \{c_k\} has the form: H(\omega) = c_0 + 2\sum{c_k\,\cos(k\,\omega)}

      In our case, c_0 = 31/105,\,\,c_1 = 26/105,\,\,c_2 = 11/105, so that response becomes:

          \[H(\omega)  = {\frac{31}{105}}+{\frac {52\,\cos \left( \omega \right) }{105}}+{\frac {22 \,\cos \left( 2\,\omega \right) }{105}}\]

      Now we can plot it for \omega=[0,\pi] to reproduce the plot above.

      (In article I use normalized frequency range to make response invariant of sampling step, so I plot |H(2\pi\omega)| against \omega=[0,0.5].)

  10. Karl
    Posted May 1, 2022 at 5:26 pm | #

    Dear Pavel,
    Thanks for the very informative website. I am trying your 15-term stable NC rule and it works very well. Would it be possible for you to give a formula for estimating the error when it is used in an adaptive routine? I am using one and two subintervals in my routine. Also, it would be great if you would post the least-squares derivation method for these stable NC methods. Thanks.
