Numerical Differentiation

This section covers methods for computing derivatives numerically. The description covers classical central differences, Savitzky-Golay (and Lanczos) filters for noisy data, and original smooth noise-robust differentiators.
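A minimal, hypothetical example of one such filter (assuming a uniform grid and a smooth test function; not taken from the article itself): the N=5 Lanczos low-noise differentiator, a least-squares (Savitzky-Golay-type) derivative estimate.

```python
import math

def lanczos5(f, x, h):
    # N=5 Lanczos low-noise differentiator:
    # f'(x) ≈ (f(x+h) - f(x-h) + 2*(f(x+2h) - f(x-2h))) / (10h)
    # Exact for polynomials up to degree 2; O(h^2) error in general.
    return (f(x + h) - f(x - h) + 2 * (f(x + 2*h) - f(x - 2*h))) / (10 * h)

# Compare with the exact derivative of sin at x = 1: cos(1) ≈ 0.5403
print(lanczos5(math.sin, 1.0, 1e-3), math.cos(1.0))
```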



6 Comments

  1. Posted January 20, 2010 at 11:35 pm | #

    Pavel,

    I like your neat website, but I have one comment regarding your smooth noise-robust differentiation formulas. In the attached Excel spreadsheet I compare your formula for first-order differentiation for N = 5 with that for central differencing, either using no input noise (as when used in optimizing software) or with noisy input data. I use a highly nonlinear test function, F(x) = x^{20}, with a simple x_0-value, x_0 = 1.234, and plot the results as the negative 10-based logarithm (“p”, as in pH) of the absolute error (“E”) as a function of the negative logarithm of the step amplitude h.

    (Some notes on nomenclature. In the spreadsheet I use j instead of your N (which in statistical data analysis usually refers to the number of data or samples taken), and d (for distance) instead of your h (for height). My pE is sometimes called LRE, for logarithm of the relative error, but here it is the negative logarithm of the absolute error. The added noise is from Excel’s Random number generator, under Tools > Data analysis, and I have used a small relative error in x0.)

    The interesting result is that for noise-free data central differencing is superior (as long as one can find the optimum step size, for which I recently published an algorithm; see http://www.ias.ac.in/chemsci/Pdf-Sep2009/935.pdf), but that the two become equivalent with almost any realistic amount of external noise (which I here distinguish from the cancellation noise generated “internally” by the computer). Your formulas do have a small advantage at very small d-values, where the cancellation noise is reduced because your formulas in general use smaller coefficients, in the example illustrated –1, –2, 0, 2, 1 versus 1, –8, 0, 8, –1, while the tables are turned in the region of large d-values (small pd). But in the broad region where the external noise dominates, the results appear to be identical.
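A comparison along these lines can be sketched as follows (an illustrative reconstruction, not Bob's spreadsheet; the test function F(x) = x^20 and x_0 = 1.234 are taken from the comment, the step sizes are arbitrary):

```python
import math

def f(x):
    return x**20

def df_exact(x):
    return 20 * x**19

def central5(f, x, h):
    # 5-point central difference (1,-8,0,8,-1)/12h, truncation error O(h^4 f^(5))
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

def nrd5(f, x, h):
    # smooth noise-robust differentiator (-1,-2,0,2,1)/8h, error O(h^2 f^(3))
    return (2*(f(x + h) - f(x - h)) + (f(x + 2*h) - f(x - 2*h))) / (8 * h)

x0 = 1.234  # value from the comment above
for h in (1e-1, 1e-2, 1e-3):
    e_cd = abs(central5(f, x0, h) - df_exact(x0))
    e_nrd = abs(nrd5(f, x0, h) - df_exact(x0))
    print(f"h={h:g}  central: {e_cd:.3e}  NRD: {e_nrd:.3e}")
```

With noise-free input the higher-order central difference wins by a wide margin at small h, matching the observation in the comment.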

    I did not test the Lanczos formula, but it appears to be rather equivalent to yours in these respects.

    Am I missing something?

    Bob

    • Posted June 22, 2010 at 5:35 pm | #

      Dear Robert,

      Thank you very much for your feedback.
      I am sorry for the late answer.

      Let me make some comments on your comparisons.

      1. The case when the input data contain no noise.
      The central difference formula (1,-8,0,8,-1) has approximation order O(h^4\,f^{(5)}).
      Whereas the noise-robust differentiator (NRD) you are comparing it to has only O(h^2\,f^{(3)}).
      So it is only natural that the central difference outperforms the NRD under such unfair conditions.

      It would be fairer to compare noise-robust differentiators from the second table (n=4), which have the same approximating property as the central difference (1,-8,0,8,-1).
      For example, please try this one: (5,-12,-39,0,39,12,-5).

      Or it would be fair to compare (-1,-2,0,2,1) with the central difference (-1,0,1), which also has an O(h^2\,f^{(3)}) error term.
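The second-order agreement of these two stencils can be checked numerically; a small sketch (the `apply_filter` helper and quadratic test function are assumptions for illustration):

```python
def apply_filter(coeffs, norm, f, x, h):
    # Apply a symmetric differentiation stencil: sum(c_k * f(x + k*h)) / (norm * h)
    r = len(coeffs) // 2  # half-width of the stencil
    return sum(c * f(x + k * h) for c, k in zip(coeffs, range(-r, r + 1))) / (norm * h)

g = lambda x: 3 * x * x - x + 2   # g'(x) = 6x - 1, so g'(0.7) = 3.2
x0, h = 0.7, 0.1
print(apply_filter((-1, 0, 1), 2, g, x0, h))         # ≈ 3.2 (exact for quadratics)
print(apply_filter((-1, -2, 0, 2, 1), 8, g, x0, h))  # ≈ 3.2 (exact for quadratics)
```

Both stencils reproduce the derivative of a quadratic exactly, which is what the shared O(h^2) order means.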

      2. The case when the input data contain noise.
      The NRD filter (-1,-2,0,2,1) has the weakest noise-suppression capability in the whole family of smooth differentiators: it only suppresses high-frequency noise near \omega=\pi. If the data contain noise at lower frequencies, then longer NRD filters should be used to suppress it.

      That is the beauty of NRD filters – they can be selected flexibly based on the properties of the data: the longer the NRD filter, the stronger the noise suppression.

      To see this clearly, please try a longer filter, e.g. N=15:

           \[ {f^{\prime}(x^*)\approx \frac{1}{24576}\frac{3267\,\Delta_{{1}}+3476\,\Delta_{{2}}+1507\,\Delta_{{3}}-16\,\Delta_{{ 4}}-305\,\Delta_{{5}}-124\,\Delta_{{6}}-17\,\Delta_{{7}}}{h}} \]

      where

          \[ \Delta_k = f_k-f_{-k}\]
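The noise suppression of this N=15 filter can be illustrated as follows (a sketch with an assumed test signal, noisy samples of sin x, comparing RMS error against the 3-point central difference):

```python
import math, random

# Half-stencil weights for Delta_k = f_k - f_{-k}, k = 1..7; normalization 24576
W = (3267, 3476, 1507, -16, -305, -124, -17)

def nrd15(samples, i, h):
    # N=15 smooth noise-robust differentiator at index i (uniform grid, step h)
    s = sum(w * (samples[i + k] - samples[i - k]) for k, w in enumerate(W, start=1))
    return s / (24576 * h)

def central3(samples, i, h):
    return (samples[i + 1] - samples[i - 1]) / (2 * h)

random.seed(0)
h = 0.01
xs = [k * h for k in range(2001)]
noisy = [math.sin(x) + random.gauss(0, 1e-3) for x in xs]  # additive Gaussian noise

idx = range(10, 1991)
rms_c = math.sqrt(sum((central3(noisy, i, h) - math.cos(xs[i]))**2 for i in idx) / len(idx))
rms_n = math.sqrt(sum((nrd15(noisy, i, h) - math.cos(xs[i]))**2 for i in idx) / len(idx))
print(f"RMS error  central3: {rms_c:.4f}  nrd15: {rms_n:.4f}")
```

On this noisy signal the long filter averages the noise down substantially, while the short central difference amplifies it by 1/(2h).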

      NRD filters use their degrees of freedom to balance approximation order against noise suppression. Central differences use all of their coefficients to attain the maximum approximation order.

      This is a little confusing but allows great flexibility. For instance, there is only one central
      difference which produces exact derivatives for 1,\, x,\, x^2,\, x^3,\, x^4: (1,-8,0,8,-1).

      Whereas we have a whole family of noise-robust differentiators of that approximation order, ranging from filters with weak noise suppression (short filters) to filters with any desired noise-suppression capability (long filters).

      Hope this is of some help.

  2. Shijun
    Posted July 12, 2011 at 12:14 pm | #

    Dear Pavel,

    I like this website, and I am carrying out a study of the electronic properties of low-dimensional structures. Perhaps the central-difference method could be employed for my problems; however, my mathematical programming ability is poor. Could you provide some examples of solving the 3D Schrödinger equation on this website? I think most people would like them.

  3. Posted October 17, 2012 at 9:36 am | #

    Hi Pavel,

    I just started using your QuickLaTeX plugin and it’s great. The plugin works well, but I wanted to know how I could get my LaTeX to display as sharply as it does on your website. It may be the theme I am using (Tarski), but I am not sure. Here is my web page with some LaTeX on it:

    http://kennychowdhary.me/

    As you can see, the LaTeX works fine, but the rendering is so much sharper on your web page. Any thoughts?

    Thanks,
    Kenny

  4. Olaf Hellmuth
    Posted October 16, 2014 at 5:28 am | #

    Dear Pavel,
    I found your website very interesting and helpful in solving an engineering problem (honestly, it is the only website providing me direct help).
    I will cite your website in my manuscript accordingly.
    Unfortunately, I cannot rederive all of the proposed equations, but I will compare the numerical derivatives with suitable analytical derivatives.
    My questions concern:
    (a) your approximation of \partial^2 f/(\partial x \partial y) (posted on February 20, 2009);
    your expression is valid for \Delta x = \Delta y = h.
    Could you also provide me with the corresponding derivative for
    \Delta x \ne \Delta y?
    (b) Amin's expressions for \partial^3 f/(\partial x \partial y^2) and \partial^3 f/(\partial x^2 \partial y), posted
    on March 31, 2012, are of interest to me.
    You answered that ‘at first glance’ these formulas seem to be all right.
    Can you confirm the correctness of the derivatives? Your own third-order derivatives employ at least 5 points,
    whereas Amin's expressions employ only 3 points (which need not be incorrect, of course).
    I want to implement this in my application (the calculation of combined uncertainties for highly nonlinear functions).
    Thus, a confirmation or a 5-point approximation would be very helpful.

    Many thanks in advance
    Olaf

    • Posted October 29, 2014 at 2:20 pm | #

      Dear Olaf,

      (a) Just use \Delta x\,\Delta y in the denominator instead of h^2.
      (b) I think Amin’s expression is correct.
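For (a), the unequal-step version can be sketched as follows (an illustrative implementation of the standard 4-point cross stencil with denominator 4 Δx Δy; the test function is an assumption used only for checking):

```python
import math

def d2f_dxdy(f, x, y, dx, dy):
    # 4-point cross stencil for the mixed partial; denominator 4*dx*dy
    # reduces to the equal-step formula with 4*h^2 when dx == dy == h.
    return (f(x + dx, y + dy) - f(x + dx, y - dy)
            - f(x - dx, y + dy) + f(x - dx, y - dy)) / (4 * dx * dy)

# Check against an analytic mixed partial: d2/dxdy [sin(x) e^{2y}] = 2 cos(x) e^{2y}
g = lambda x, y: math.sin(x) * math.exp(2 * y)
exact = 2 * math.cos(0.5) * math.exp(2 * 0.3)
print(abs(d2f_dxdy(g, 0.5, 0.3, 1e-3, 2e-3) - exact))
```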

      Sorry for delayed reply.
      Pavel.
