We consider widely used Newton-Cotes formulas from the perspective of digital filter analysis, exploring their infamous instability with frequency-domain tools.

We then derive stable Newton-Cotes quadrature rules based on least squares approximation instead of interpolation. Reference formulas and their properties are provided.

One of the classical tasks of numerical integration is to estimate the integral of a function $f(x)$ on $[a,b]$:

$$I = \int_a^b f(x)\,dx \qquad (1)$$

having only discrete values $f_k = f(x_k)$ sampled at equidistant points $x_k = a + kh$, $k = 0,\dots,n$, with step $h = (b-a)/n$.

A Newton-Cotes numerical integration rule is a weighted sum of the function values:

$$I \approx h\sum_{k=0}^{n} w_k f(x_k) \qquad (2)$$

where the coefficients $w_k$ are derived from the interpolating polynomial $P_n(x)$ fitted to the points $(x_k, f(x_k))$, $k = 0,\dots,n$.

The goal of this simple idea is to build an interpolating polynomial of the highest degree allowed by the number of nodes (= degrees of freedom in the system) and assume that $\int_a^b f(x)\,dx \approx \int_a^b P_n(x)\,dx$. The implication is that a polynomial of high degree is able to resemble the behavior of more complex functions and, as a result, minimizes the estimation error.
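The construction is described above only in words; here is a minimal sketch of how the weights fall out of it (my own illustration in Python – the derivations on this page were done in Maple). Nodes are scaled to $x_k = k$, so the weights come out in units of the step $h$; each weight is the exact integral of the corresponding Lagrange basis polynomial:

```python
from fractions import Fraction

def newton_cotes_weights(n):
    """Closed Newton-Cotes weights w_k (in units of h) for nodes x_k = k,
    k = 0..n, obtained by integrating each Lagrange basis polynomial
    exactly over [0, n] using rational arithmetic."""
    weights = []
    for k in range(n + 1):
        # Build the k-th Lagrange basis polynomial L_k(x) as a coefficient
        # list [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...
        coeffs = [Fraction(1)]
        for j in range(n + 1):
            if j == k:
                continue
            # Multiply current polynomial by (x - j) / (k - j).
            denom = Fraction(k - j)
            new = [Fraction(0)] * (len(coeffs) + 1)
            for p, c in enumerate(coeffs):
                new[p] += c * Fraction(-j) / denom
                new[p + 1] += c / denom
            coeffs = new
        # Integrate L_k over [0, n]: sum of c_p * n^(p+1) / (p+1).
        weights.append(sum(c * Fraction(n) ** (p + 1) / (p + 1)
                           for p, c in enumerate(coeffs)))
    return weights

print(newton_cotes_weights(2))  # Simpson: [1/3, 4/3, 1/3]
print(newton_cotes_weights(3))  # Simpson 3/8: [3/8, 9/8, 9/8, 3/8]
```

The two printed rules are the classic Simpson and Simpson 3/8 panels, which serves as a sanity check of the construction.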

However, this implication is not true in the general case. The quality of polynomial approximation of a function depends drastically on how the interpolation nodes are chosen [Trefethen, Rivlin]. It is well known (see Runge’s phenomenon) that **in the case of equally spaced data the approximation error can in fact become worse as we increase the number of interpolation nodes and use a polynomial of higher order**.

This makes it impossible to guarantee convergence of the quadrature value to $I$ as $n \to \infty$, with the exception of very specific types of functions [Krylov 145]. This means that high-order Newton-Cotes rules tend to give inferior results compared to lower-order formulas.
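Runge’s phenomenon is easy to reproduce numerically. A sketch, using the standard example $f(x) = 1/(1+25x^2)$ on $[-1,1]$ (my choice of test function, not one from the text above) – the maximum interpolation error grows as equispaced nodes are added:

```python
def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def interp_newton(xs, ys):
    """Return an evaluator for the Newton-form interpolating polynomial."""
    n = len(xs)
    coef = list(ys)
    # Divided differences, computed in place.
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def p(x):
        # Horner-like evaluation of the Newton form.
        acc = coef[-1]
        for i in range(n - 2, -1, -1):
            acc = acc * (x - xs[i]) + coef[i]
        return acc
    return p

def max_error(n_nodes):
    """Max |p(x) - f(x)| on a fine grid, for equispaced nodes on [-1, 1]."""
    xs = [-1.0 + 2.0 * k / (n_nodes - 1) for k in range(n_nodes)]
    p = interp_newton(xs, [runge(x) for x in xs])
    grid = [-1.0 + 2.0 * k / 1000 for k in range(1001)]
    return max(abs(p(x) - runge(x)) for x in grid)

for n in (5, 9, 13, 17):
    print(n, max_error(n))  # the error grows with n instead of shrinking
```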

## Spectral properties of Newton-Cotes formulas

Let’s study the properties of Newton-Cotes formulas in the frequency domain (their spectral properties) using DSP tools and terminology. NC rules are symmetric digital filters with finite impulse response. One of the most important characteristics of a digital filter is its magnitude (frequency) response – a function which shows how much the filter damps or amplifies the magnitude of a particular frequency contained in the input data.

Response functions of the first several NC formulas are presented below:

As we can see, the classical NC rules tend to suppress high frequencies (noise) in the input data and leave the lowest harmonics (pure signal) almost intact. In that sense they can be called smoothing, or low-pass, filters.

The Simpson 3/8 rule has the strongest suppression of noise (most stable); plain Simpson has the weakest. Boole’s formula damps mid-range harmonics but gives a somewhat larger response on the noisy high frequencies. Overall, from a DSP point of view, they can be considered stable and acceptable for practical applications.
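The response curves are straightforward to recompute: treat the panel weights $w_k$ as FIR filter taps and evaluate $H(\omega) = \sum_k w_k e^{-i\omega k}$. A sketch in Python; normalizing so that $H(0) = 1$ is my own assumption, since the exact convention behind the plots is not stated:

```python
import cmath
import math

def response(weights, omega):
    """Magnitude response |H(omega)| of a quadrature rule viewed as an
    FIR filter with taps w_k, normalized so that H(0) = 1."""
    H = sum(w * cmath.exp(-1j * omega * k) for k, w in enumerate(weights))
    return abs(H) / sum(weights)

simpson   = [1/3, 4/3, 1/3]       # a single Simpson panel
simpson38 = [3/8, 9/8, 9/8, 3/8]  # a single Simpson 3/8 panel

# At the Nyquist frequency (omega = pi) the 3/8 rule kills the harmonic
# completely, while plain Simpson lets a third of it through:
print(response(simpson, math.pi))    # 0.333...
print(response(simpson38, math.pi))  # ~0.0
```

This matches the qualitative claim above: among the low-order panels, 3/8 suppresses the highest harmonic most strongly and plain Simpson least.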

Let’s take a look at the magnitude responses of high-order Newton-Cotes rules, where the numerical instability shows up.

Here we have a much more interesting situation. The filters are not low-pass anymore; in fact, they can be called “super” noise amplifiers, as they strengthen high frequencies to a great extent. This makes the formulas very unstable.

Although it is well known that high-order Newton-Cotes formulas are not suitable for applications, the frequency domain gives us a clear illustration of one of the reasons for this unsuitability – noise amplification. Obviously it is closely related to the norm $\sum_k |w_k|$, the characteristic of error resistance/propagation which is commonly used to measure the stability of numerical methods in classical analysis.
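This connection can be checked directly by deriving the classic weights exactly and comparing $\sum_k |w_k|$ with $\sum_k w_k$: the ratio is 1 while all weights are positive and exceeds 1 once negative weights appear (e.g. for $n = 8$ and $n = 10$). A sketch, with nodes scaled to $x_k = k$ so the weights are in units of $h$:

```python
from fractions import Fraction

def nc_weights(n):
    """Closed Newton-Cotes weights (units of h) from the exactness
    conditions sum_k w_k * k^p = n^(p+1)/(p+1), p = 0..n, solved exactly."""
    m = n + 1
    # Augmented moment system: row p is [0^p, 1^p, ..., n^p | n^(p+1)/(p+1)]
    A = [[Fraction(k) ** p for k in range(m)] + [Fraction(n) ** (p + 1) / (p + 1)]
         for p in range(m)]
    # Gauss-Jordan elimination in exact rational arithmetic.
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[r][m] / A[r][r] for r in range(m)]

# Ratio sum|w| / sum w: equals 1 for the stable low-order rules,
# grows once negative weights show up in the high-order rules.
for n in (2, 4, 8, 10):
    w = nc_weights(n)
    print(n, float(sum(abs(x) for x in w) / sum(w)))
```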

To overcome this issue, a researcher usually switches from equidistant integration rules to one of the more advanced quadratures of Gauss type (Legendre, Laguerre, Jacobi, etc.). There the nodes are distributed more densely towards the interval boundaries, which prevents the “wild” behavior of the interpolating polynomial and leads to stable computations.

However, in many practical applications only uniformly spaced function values are available, and stable Newton-Cotes type formulas are needed. In the following sections we propose a novel approach to building them using a least squares polynomial fit instead of interpolation.

## Low Noise Newton-Cotes rules

There are several ways to improve high-order Newton-Cotes formulas. The most obvious is to re-target some of the degrees of freedom in the system from contributing to the highest approximation order towards regularization and stronger noise suppression. The natural way of doing this is to use least squares approximation instead of interpolation.
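As a sketch of how such a rule falls out (my own reconstruction in Python, not the Maple derivation used for this page): fit a least squares cubic to five equispaced samples, integrate the fitted polynomial exactly, and read off the weights, which turn out to be all positive:

```python
from fractions import Fraction

def ls_rule(n_nodes, degree):
    """Weights (units of h) of the rule that integrates the least-squares
    polynomial of the given degree, fitted to n_nodes equispaced samples
    at x_k = k, over [0, n_nodes - 1]."""
    n = n_nodes - 1
    m = degree + 1
    # Normal equations matrix M = A^T A for the monomial basis 1, x, ..., x^degree.
    M = [[sum(Fraction(k) ** (i + j) for k in range(n_nodes))
          for j in range(m)] for i in range(m)]
    # Monomial moments over [0, n]: integral of x^j.
    c = [Fraction(n) ** (j + 1) / (j + 1) for j in range(m)]
    # integral = c . beta with beta = M^-1 A^T y, so the weight vector is
    # w = A z where M z = c (M is symmetric).  Solve M z = c exactly.
    A = [row + [c[i]] for i, row in enumerate(M)]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    z = [A[r][m] / A[r][r] for r in range(m)]
    return [sum(z[j] * Fraction(k) ** j for j in range(m)) for k in range(n_nodes)]

w = ls_rule(5, 3)  # cubic least-squares fit through 5 samples
print(w)           # [44/105, 104/105, 124/105, 104/105, 44/105]
```

The weights are all positive (so $\sum_k |w_k| = \sum_k w_k$) and the rule still integrates cubics exactly, e.g. it reproduces $\int_0^4 x^3\,dx = 64$.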

In contrast to interpolation, least squares makes it possible to derive several integration filters of the same approximation order. Here we consider the most commonly needed formulae.

The first two rules are classics – Simpson’s and Simpson’s 3/8. The others have much stronger noise suppression at the cost of a larger constant in the error term. Magnitude responses are shown below for comparison:

Formulas of higher approximation order can be derived analogously:

with responses:

All regularized NC rules show improved resistance to noise (leading to elevated numerical stability) in comparison with their classic counterparts. The proposed formulas can be used as a basis for composite rules, adaptive integration algorithms and even more exotic overlapped compound rules of my own design.

## Stable Newton-Cotes formulas of High Order

Least squares can also be used to derive numerically stable rules of high orders. Here we consider filters of high approximation order. The usual NC formula for this case is highly unstable and cannot be used in practice – it amplifies noise and its norm $\sum_k |w_k|$ is large. Our approach allows us to overcome both issues.

A stable Newton-Cotes quadrature of the same approximation order needs six additional function evaluations:

The classic NC formula is marked in red, the proposed one in green. As we can see, the new rule is perfectly stable.

In fact, this approach allows building a whole family of stabilized rules for any particular approximation order. The case above uses just the minimum number of nodes required to achieve that order. Filters with more nodes are also stable (although with a larger constant in the error term) and usable.

If you are interested in the coefficients of the stabilized filters, please contact me by email or write a comment below. I omitted them since they would clutter the page.

*

If you find these ideas interesting please do not hesitate to contact me for further discussion.

**

All equations on the page are rendered using QuickLaTeX. I used Maple for all derivations, and C++ for validation and testing.

## 15 Comments

#I’m writing a piece of code to integrate the motion equation of an AC electric motor coupled with a pump.

Given the shape of the motor torque-speed curve, I have a problem with the stability of the solution when approaching the equilibrium point (the slope of the motor torque is almost vertical there).

I tried your 9-term enhanced and stabilized formula and I got far more stable results than with the Newton-Cotes formulae, but not yet stability. So I would like to try your 15-term formula.

Could you tell me the coefficients I need to use? It would be very kind of you.

thank you

Renato

#Renato,

Here is the 15-point formula:

h*(0.2905619972*(f[0]+f[14]) + 1.539912853*(f[1]+f[13]) + 0.3033266023*(f[2]+f[12])
 + 1.466074838*(f[3]+f[11]) + 1.146540574*(f[4]+f[10]) + 0.5808175934*(f[5]+f[9])
 + 0.9764347770*(f[6]+f[8]) + 1.392661532*f[7])

I think you will get better results for your task using composite rules based on overlapped Newton-Cotes formulas.

Enhanced composite rules (2)-(4) will fit perfectly.

#Hello Pavel,

Just to inform you, I didn’t succeed in stabilizing my computational scheme; I think it’s intrinsically unstable. The only thing to do is reduce the time step.

Anyhow, your 9-term stabilized formula gave the best outcome, better than the 15-term one. I find it’s an outstanding quadrature scheme! The overlapped formula showed a strange behaviour, introducing oscillations, but perhaps I have implemented it erroneously.

I wish to thank you one more time for your suggestions and help. It will be useful to me for future work too.

Best regards

Renato

#Thank you for your feedback!

If you are allowed to change the time step arbitrarily, the best bet would be to check Gauss-Legendre Quadrature or Double Exponential Quadrature. The latter is particularly good when the function has singularities, as in your case.

As for the overlapped composite rules, could you send me your code? I’ll study why it is not working as expected.

#Hi Pavel,

I am interested in your Least-Square integration equations. Your work seems impressive and original. Always nice to find new algorithms.

Regards,

Namir

#Hi Namir,

Thank you.

#Dear Pavel

I tried to reproduce the Low Noise formula for N=5: h*(a[2]*f[0]+a[1]*f[1]+a[0]*f[2]+a[1]*f[3]+a[2]*f[4])

I have obtained:

a[0] = -4/3+6*a[2], a[1] = 8/3-4*a[2] for algebraic order 4, but how can I obtain the coefficient a[2]? For this case, which is the new equation to reduce the noise?

Thank you for your help

#Nikos,

To reproduce the formula for N=5 we use a least squares fit with a cubic polynomial $P_3(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3$.

A cubic has fewer unknown parameters (4) than there are data samples (5) – this pushes the fitted polynomial to cut the high-frequency component of the signal.

Thus noise suppression is obtained naturally from the least squares approximation. The classic formulae use interpolation, which is disastrous for noisy data.

#So, the relation $\sum_k |a_k| = \sum_k a_k$ is the correct one? ($a_k$ are the coefficients of the method.)

Do you agree with the above relation?

#The norm $\sum_k |w_k|$ is a stability measure for an integration formula. Classic NC formulas of high order have an unbounded norm, which means they are unstable.

My approach allows deriving stable high-order rules with $\sum_k |w_k| = \sum_k w_k$ – this is maybe the most important result of this work.

However, we do not employ this condition in the derivation procedure directly – it comes naturally from the least squares approximation, which brings stability.

So, yes, N=5 has $\sum_k |a_k| = \sum_k a_k$.

#Thanks Pavel

I wish you a very very happy 2012!!!

#Same to you!!!

And please let me know about your results with the stable Newton-Cotes rules.

#Dear Pavel

I have seen your work and it is very interesting.

Can you please explain to me how you derived the graph for, say, Simpson’s rule?

Thanks in advance

#hi

thank you for the text. It was very interesting.

azam

#I found my way here because I was looking at least-squares based numerical quadrature.

You might find this interesting …

For your new rules: since the underlying function is a least-squares polynomial, the least squares integrator is related to a Savitzky-Golay filter. For example, if you fit a cubic to 5 points, you can derive the integrator coefficients by multiplying the least-squares-estimated y-values from the Savitzky-Golay filter by the Newton-Cotes integration coefficients – which effectively integrates the underlying least-squares polynomial exactly. For example,

integral = weights_NC5 * SG3_5_coeff matrix * y_values

= [0.2095 0.4952 0.5905 0.4952 0.2095] * y_values

= weights for stable NC3 * y_values

Here,

weights_NC5 = (1/45) * [ 7 32 12 32 7 ]

and,

SG3_5_coeff matrix = Savitzky-Golay Coefficient matrix

= [ 0.9857 0.0571 -0.0857 0.0571 -0.0143]

[ 0.0571 0.7714 0.3429 -0.2286 0.0571]

[ -0.0857 0.3429 0.4857 0.3429 -0.0857]

[ 0.0571 -0.2286 0.3429 0.7714 0.0571]

[ -0.0143 0.0571 -0.0857 0.0571 0.9857]

It is interesting to create least-squares quadrature rules for other well known quadrature rules. These rules obviously don’t integrate as high an order of polynomial as their interpolating brothers.

For example, take the Gauss 5-point rule (it exactly integrates polynomials up to degree 9), then fit a least squares cubic. We get a quadrature rule on the Gauss 5-point abscissae with the weights,

w_lsGauss5_3 = [0.241778663 0.464888004 0.586666667 0.464888004 0.241778663]

It only integrates polynomials up to degree 3 exactly, but it integrates higher-order polynomials with impressively low error.

(a bit like Clenshaw-Curtis I suppose).

Anyway, thanks for the site. Cool stuff!

Rob