A new regularization approach for numerical differentiation

14 Apr 2020  ·  Abinash Nayak

The problem of numerical differentiation can be posed as an inverse problem: solving a Volterra integral equation of the first kind. It is well known that such inverse integral problems are ill-posed, and regularization methods are required to approximate the solution appropriately. The commonly practiced regularization methods are (external) parameter-based, like Tikhonov regularization, and carry inherent difficulties such as choosing an optimal value of the regularization parameter, a non-trivial task, especially in the absence of noise information. An attractive alternative is iterative regularization, where one minimizes an associated functional in a regularized fashion, i.e., stops at an appropriate iteration. However, in most regularization methods the minimizing functional contains the noisy data directly, and hence the recovery is affected. In this paper, we propose an iterative regularization method whose minimizing functional does not contain the noisy data directly. The advantage, in addition to circumventing the direct use of the noisy data, is that the sequence of functions (or curves) constructed during the descent converges to the noisy data only weakly, not strongly, and hence avoids overfitting. Furthermore, the method is very robust to extreme noise levels in the data as well as to errors with non-zero mean. To demonstrate its effectiveness, we provide examples comparing the numerical results obtained from our method with those obtained from popular regularization methods such as Tikhonov regularization, total variation, smoothing splines, and polynomial regression.
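The setup the abstract describes can be made concrete with a small sketch. The code below is not the paper's method; it discretizes the Volterra (integration) operator, so that recovering u = f' from noisy samples of f means solving (A u)(x) = ∫₀ˣ u(t) dt = f(x) − f(0), and then applies classical Landweber iteration with a discrepancy-principle stopping rule, i.e., generic iterative regularization of the kind the abstract contrasts with parameter-based schemes. The test function, grid size, noise level `delta`, step `omega`, and constant `tau` are all illustrative assumptions.

```python
import numpy as np

# Sketch: numerical differentiation as an ill-posed Volterra problem.
# NOT the paper's method -- a generic Landweber iteration with early
# stopping, to illustrate iterative regularization on this problem.

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

f_true = np.sin(2 * np.pi * x)              # test signal (assumed)
u_true = 2 * np.pi * np.cos(2 * np.pi * x)  # its exact derivative

rng = np.random.default_rng(0)
delta = 1e-2                                # illustrative noise level
f_noisy = f_true + delta * rng.standard_normal(n)

# Lower-triangular trapezoidal quadrature matrix: (A u)_i ~ integral of u
# from 0 to x_i, so A plays the role of the Volterra integration operator.
A = np.tril(np.full((n, n), h))
A[:, 0] = h / 2
A[np.diag_indices(n)] = h / 2
A[0, :] = 0.0

b = f_noisy - f_noisy[0]

# Landweber iteration u_{k+1} = u_k + omega * A^T (b - A u_k), stopped
# early by the discrepancy principle ||A u_k - b|| <= tau * delta * sqrt(n).
omega = 1.0 / np.linalg.norm(A, 2) ** 2
tau = 1.1
u = np.zeros(n)
for k in range(50_000):
    r = b - A @ u
    if np.linalg.norm(r) <= tau * delta * np.sqrt(n):
        break
    u += omega * A.T @ r

print(f"stopped at iteration {k}, relative error = "
      f"{np.linalg.norm(u - u_true) / np.linalg.norm(u_true):.3f}")
```

For comparison, a parameter-based Tikhonov solve on the same discretization would be `u = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)`, where the quality of the recovery hinges on choosing `alpha`, which is exactly the difficulty with parameter-based methods that the abstract highlights.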


Categories


Numerical Analysis