A Backward Identification Problem for an Axis-Symmetric Fractional Diffusion Equation

We consider a backward ill-posed problem for an axis-symmetric fractional diffusion equation described in polar coordinates. A closed-form solution of the inverse problem is obtained. However, this solution is unstable: small perturbations of the data are amplified without bound. For numerical stability, a general regularization principle is presented for constructing regularization methods. Several numerical examples are conducted to show the validity and effectiveness of the proposed methods.

The difficulty of inverse problems for fractional diffusion equations lies in their ill-posedness and in the fractional derivative. For more details on such inverse problems, the reader can refer to a recent survey [7] and the references therein. However, most works on inverse problems are set in Cartesian coordinates. For backward time-fractional diffusion problems, Dou and Hon [2] presented a fundamental solution method combined with classical regularization methods, Liu [13,22] considered two quasi-reversibility regularization methods, and Yang and Liu [23] proposed a Fourier method.
In this article, we consider an axis-symmetric fractional diffusion equation. Let f(t) ∈ C[a, b] be a time-dependent function and let 0 < α < 1 be the order of the fractional derivative. The Caputo fractional derivative is defined as follows [18]:

$$ D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_a^t \frac{f'(s)}{(t-s)^{\alpha}}\, ds . $$

A two-parameter Mittag-Leffler function is defined as

$$ E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k+\beta)} , \qquad z \in \mathbb{C} . $$

The forward axis-symmetric fractional diffusion problem can now be expressed as follows:

$$ D_t^{\alpha} u(r,t) = \frac{1}{r}\frac{\partial}{\partial r}\Big( r\, \frac{\partial u(r,t)}{\partial r} \Big), \qquad 0 < r < R,\; 0 < t < T, \qquad (1.1) $$

subject to the following boundary and initial conditions

$$ \lim_{r\to 0} u(r,t) \ \text{bounded}, \qquad u(R,t) = 0, \qquad u(r,0) = f(r). \qquad (1.2) $$

Inverse Problem: we want to recover the solution u(r, t) ∈ L²((0, R), ρ) for 0 ≤ t < T from the final-time data u(r, T) := g(r) ∈ L²((0, R), ρ), where L²((0, R), ρ) denotes the Hilbert space of square Lebesgue-measurable functions on (0, R) with weight ρ(r) = r/R².
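Since the Mittag-Leffler function is central to everything that follows, a minimal numerical sketch may help. The routine below (my own illustration, not from the paper) evaluates E_{α,β}(z) by truncating its power series; this is adequate for small-to-moderate |z| but numerically unstable for large negative arguments, where asymptotic formulas should be used instead:

```python
import math

def mittag_leffler(alpha, beta, z, terms=50):
    """Truncated power series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).

    Accurate for small-to-moderate |z|; for large negative z the series
    suffers catastrophic cancellation and asymptotic expansions are needed.
    """
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against known special cases:
print(mittag_leffler(1.0, 1.0, 1.0))  # E_{1,1}(z) = exp(z), so close to e
print(mittag_leffler(2.0, 1.0, 1.0))  # E_{2,1}(z^2) = cosh(z), so close to cosh(1)
print(mittag_leffler(0.5, 1.0, 0.0))  # E_{alpha,1}(0) = 1
```

The special cases E_{1,1}(z) = e^z and E_{α,1}(0) = 1 used in the text provide convenient correctness checks.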
The aim of this work is to investigate, mathematically and numerically, the ill-posed nature of the backward identification problem. In the setting of a radial diffusion geometry, the solution is explicitly constructed as an infinite series in terms of the eigenfunctions. Owing to the "amplifying factor", a small perturbation of the data can lead to a large error in the solution. Based on this observation, we give a general regularization principle for numerically stable solutions.

The closed form solution
First we solve the forward problem by separation of variables. For this purpose, the analytical solution of problem (1.1)-(1.2) is assumed to take the form

$$ u(r,t) = \sum_{i=1}^{\infty} q_i(t)\, \varphi_i(r), \qquad (2.1) $$

where the φ_i(r) are called eigenfunctions. Substituting (2.1) into equation (1.1), we obtain a Bessel differential equation and a fractional differential equation. In fact, the eigenfunctions φ_i(r) are the solutions of the Bessel differential equation

$$ \varphi''(r) + \frac{1}{r}\,\varphi'(r) + \lambda\, \varphi(r) = 0, $$

with boundary conditions lim_{r→0} φ(r) bounded and φ(R) = 0, where λ is the separation constant.
Considering the boundary conditions, we can calculate the separation constants

$$ \lambda_i = \Big(\frac{\mu_i}{R}\Big)^2, \qquad i = 1, 2, \dots, $$

where the μ_i are the positive zeros of the zero-order Bessel function of the first kind J_0(μ), and the eigenfunctions are φ_i(r) = J_0(μ_i r/R), which form a complete orthogonal basis in L²((0, R), ρ). First, the orthogonality relationship

$$ \int_0^R J_0\Big(\frac{\mu_i r}{R}\Big)\, J_0\Big(\frac{\mu_k r}{R}\Big)\, \frac{r}{R^2}\, dr = \frac{J_1^2(\mu_i)}{2}\, \delta_{ik} $$

holds, where J_1 is the first-order Bessel function of the first kind. From this we conclude that each function q_i(t) satisfies the fractional differential equation

$$ D_t^{\alpha} q_i(t) = -\lambda_i\, q_i(t), $$

whose solution is

$$ q_i(t) = q_i(0)\, E_{\alpha,1}(-\lambda_i t^{\alpha}). $$

Finally, we have the analytic solution of the forward problem (1.1)-(1.2):

$$ u(r,t) = \sum_{i=1}^{\infty} f_i\, E_{\alpha,1}(-\lambda_i t^{\alpha})\, J_0\Big(\frac{\mu_i r}{R}\Big), \qquad f_i = \frac{2}{J_1^2(\mu_i)} \int_0^R f(r)\, J_0\Big(\frac{\mu_i r}{R}\Big)\, \frac{r}{R^2}\, dr. \qquad (2.2) $$

Now we turn to the inverse problem. From (2.2), at t = T we have the equality

$$ g(r) = u(r,T) = \sum_{i=1}^{\infty} f_i\, E_{\alpha,1}(-\lambda_i T^{\alpha})\, J_0\Big(\frac{\mu_i r}{R}\Big). $$

Using the orthogonality relationship, we have

$$ g_i = \frac{2}{J_1^2(\mu_i)} \int_0^R g(r)\, J_0\Big(\frac{\mu_i r}{R}\Big)\, \frac{r}{R^2}\, dr = f_i\, E_{\alpha,1}(-\lambda_i T^{\alpha}). $$

Therefore, the solution of the inverse problem is given by

$$ u(r,t) = \sum_{i=1}^{\infty} \frac{g_i}{E_{\alpha,1}(-\lambda_i T^{\alpha})}\, E_{\alpha,1}(-\lambda_i t^{\alpha})\, J_0\Big(\frac{\mu_i r}{R}\Big). \qquad (2.4) $$

Since 1/E_{α,1}(−λ_i T^α) → ∞ as i → ∞, the backward identification problem is ill-posed.
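The weighted orthogonality relation above can be checked numerically. The sketch below (an illustration under stated assumptions: SciPy's `j0`, `j1`, `jn_zeros` and trapezoidal quadrature) verifies that the inner product of φ_i and φ_k in L²((0, R), ρ) equals δ_ik J_1²(μ_i)/2:

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import trapezoid

R = 1.0
mu = jn_zeros(0, 5)  # first 5 positive zeros of J0

def weighted_inner(i, k, n=20001):
    """Inner product of phi_i and phi_k in L^2((0,R), rho) with rho(r) = r/R^2."""
    r = np.linspace(0.0, R, n)
    integrand = j0(mu[i] * r / R) * j0(mu[k] * r / R) * r / R**2
    return trapezoid(integrand, r)

print(weighted_inner(0, 0), 0.5 * j1(mu[0])**2)  # equal up to quadrature error
print(weighted_inner(0, 1))                      # approximately zero
```

This is the relation that turns the coefficient extraction for f_i and g_i into a simple weighted integral against J_0(μ_i r/R).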
Remark 2. Consider the analogous problem formulated in Cartesian coordinates:

$$ D_t^{\alpha} u(x,t) = \frac{\partial^2 u(x,t)}{\partial x^2}, \qquad 0 < x < R,\; 0 < t < T, $$

with boundary conditions u(0, t) = u(R, t) = 0 and final condition u(x, T) = g(x). The solution of the inverse problem is then given by

$$ u(x,t) = \sum_{n=1}^{\infty} \frac{g_n}{E_{\alpha,1}\big(-(n\pi/R)^2 T^{\alpha}\big)}\, E_{\alpha,1}\big(-(n\pi/R)^2 t^{\alpha}\big)\, \sin\frac{n\pi x}{R}, $$

where the g_n are the sine Fourier coefficients of g. Comparing this expression with (2.4), we can see that the problems in the two cases have almost the same degree of ill-posedness.
In this paper, we are mainly interested in the inverse solution u(r, 0) in (2.4). Noting that E_{α,1}(0) = 1, we write the inverse solution as

$$ u(r,0) = \sum_{i=1}^{\infty} \frac{g_i}{E_{\alpha,1}(-\lambda_i T^{\alpha})}\, J_0\Big(\frac{\mu_i r}{R}\Big). \qquad (2.5) $$

In order to show the instability of this solution, we need a lemma from [13].
From Lemma 1, we can conclude that, for a fixed α bounded away from 1, the amplifying factor 1/E_{α,1}(−λ_i T^α) grows only polynomially in μ_i. Therefore, from (2.5) we can see that a small perturbation in the data g can cause a large change in the solution u(r, 0). In the case of a fractional derivative, the backward identification problem is thus only mildly ill-posed, in contrast to the severely (exponentially) ill-posed classical case α = 1. From the above analysis, we know that the ill-posedness of the backward identification problem is caused by the amplifying factor 1/E_{α,1}(−λ_i T^α).
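The contrast between mild and severe ill-posedness can be made concrete numerically. The sketch below is my own illustration (not from the paper): it uses the standard large-argument asymptotics E_{α,1}(−x) ≈ 1/(Γ(1−α)x) for 0 < α < 1 to estimate the fractional amplifying factor, and compares it with the exponential amplification e^{λ_i T} of the classical case α = 1:

```python
import math
from scipy.special import jn_zeros

R, T, alpha = 1.0, 1.0, 0.5  # illustrative parameters
mu = jn_zeros(0, 5)          # positive zeros of J0

for i in (0, 2, 4):
    lam = (mu[i] / R)**2
    # Fractional case: 1/E_{alpha,1}(-lam*T^alpha) ~ Gamma(1-alpha)*lam*T^alpha
    frac_amp = math.gamma(1 - alpha) * lam * T**alpha
    # Classical case alpha = 1: amplification exp(lam*T)
    classical_amp = math.exp(lam * T)
    print(f"mu = {mu[i]:6.2f}: fractional ~ {frac_amp:9.2e}, classical = {classical_amp:9.2e}")
```

The fractional amplification grows like μ_i² while the classical one grows like e^{μ_i²}, which is exactly the mild-versus-severe distinction drawn above.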
The first condition guarantees the approximation property; the last two conditions guarantee the stabilization of the approximate problem.
Based on this principle, we can construct several specific regularization methods. For example, we list two of them. Method 1. The first stabilized amplifying factor K_α(r, μ_i) is given by

$$ K_{\alpha}(r,\mu_i) = \frac{\chi_{1}^{\alpha}(\mu_i)}{E_{\alpha,1}(-\lambda_i T^{\alpha})}, $$

where χ_1^α is the characteristic function, i.e., χ_1^α(μ_i) = 1 for i ≤ M and χ_1^α(μ_i) = 0 for i > M. Method 2. The second stabilized amplifying factor K̃_α(r, μ_i) is obtained by a bounded modification of 1/E_{α,1}(−λ_i T^α). Correspondingly, we write these two regularization solutions as follows. Solution 1. The first regularization solution u_α(r, 0) is given by

$$ u_{\alpha}(r,0) = \sum_{i=1}^{M} \frac{g_i^{\delta}}{E_{\alpha,1}(-\lambda_i T^{\alpha})}\, J_0\Big(\frac{\mu_i r}{R}\Big), $$

where M is the regularization parameter. This method is known as the spectral cut-off method. Solution 2. The second regularization solution ũ_α(r, 0) is obtained by replacing 1/E_{α,1}(−λ_i T^α) in (2.5) with K̃_α(r, μ_i).
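The stabilizing effect of the spectral cut-off (Method 1) can be demonstrated end to end. The sketch below is my own toy experiment under simplifying assumptions: for ease of computation it takes α = 1, where E_{α,1}(−x) reduces to e^{−x} (a Mittag-Leffler routine can be substituted for 0 < α < 1), and it works directly with eigenbasis coefficients:

```python
import numpy as np
from scipy.special import j0, jn_zeros

rng = np.random.default_rng(0)
R, T, M_bar, delta = 1.0, 0.01, 30, 1e-3
mu = jn_zeros(0, M_bar)
lam = (mu / R)**2

# Toy initial data in the eigenbasis: f(r) = J0(mu_1 r/R) + 0.5*J0(mu_3 r/R)
f_coef = np.zeros(M_bar)
f_coef[0], f_coef[2] = 1.0, 0.5

# Forward problem (alpha = 1): g_i = f_i * exp(-lam_i * T), then add noise
g_noisy = f_coef * np.exp(-lam * T) + delta * rng.standard_normal(M_bar)

# Naive inversion amplifies the noise in the high modes...
naive = g_noisy * np.exp(lam * T)
# ...while the spectral cut-off keeps only the first M modes.
M = 5
cutoff = np.where(np.arange(M_bar) < M, g_noisy * np.exp(lam * T), 0.0)

r = np.linspace(0.0, R, 401)
phi = j0(np.outer(mu, r) / R)            # phi[i, j] = J0(mu_i * r_j / R)
f_exact = f_coef @ phi
err_naive = np.max(np.abs(naive @ phi - f_exact))
err_cut = np.max(np.abs(cutoff @ phi - f_exact))
print(f"naive error: {err_naive:.2e}, cut-off error: {err_cut:.2e}")
```

Even with this tiny noise level, the naive reconstruction is destroyed by the amplified high modes, while the truncated sum recovers f(r) to within a small error, which is precisely the role of the regularization parameter M.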

Remark 3.
Motivated by the recently developed fractional Tikhonov regularization methods [8], it is interesting to devise fractional regularization methods, e.g., a fractional Tikhonov method, which is obtained by modifying the amplifying factor accordingly.

Numerical examples
Now we give some numerical examples to test the proposed regularization principle. In this section, M1 and M2 represent Method 1 and Method 2, respectively. First we solve the forward problem with a specified function f(r) to simulate the final value u(r, T) := g(r) via (2.2); then we add artificial random noise to g(r) to generate the noisy data g^δ(r).
To avoid an "inverse crime", we use two different grids for the forward and backward problems. For the forward problem, we specify the values of f(r) on a coarse grid {r_j}, j = 0, …, M_1, and compute g(r) on that grid. After generating the noisy data g^δ(r), we obtain by interpolation the values of g^δ(r) on a finer grid {r_j}, j = 0, …, M_2. Then we solve the backward problem by the regularization methods to recover the values of f(r) on the finer grid.
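The two-grid procedure above can be sketched in a few lines. This is a minimal illustration (the test function standing in for g(r) is a placeholder of my own; `np.interp` performs the piecewise-linear interpolation from coarse to fine grid):

```python
import numpy as np

rng = np.random.default_rng(1)
R, M1, M2, delta = 1.0, 100, 400, 0.01

r_coarse = np.linspace(0.0, R, M1 + 1)   # grid for the forward solve
r_fine = np.linspace(0.0, R, M2 + 1)     # grid for the backward solve

g = np.sin(np.pi * r_coarse)             # placeholder for the computed final data g(r)
g_noisy = g + np.max(np.abs(g)) * delta * rng.random(g.size)

# Interpolate the noisy coarse-grid data onto the finer inversion grid.
g_fine = np.interp(r_fine, r_coarse, g_noisy)
print(g_fine.shape)
```

The interpolated data g_fine then feed the regularized inversion on the fine grid, so that the forward and backward solvers never share the same discretization.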
In the examples, we take R = 1, T = 1, M_1 = 100, M_2 = 400. We replace ∞ in the series of M1, M2 and (2.2) with a large integer M̄ (i.e., we truncate the series); in M1, however, the truncation level M itself plays the role of the regularization parameter. The discrete noisy data are generated as

$$ g_j^{\delta} = g_j + \max_j(g_j)\, \delta\, \mathrm{rand}_j, \qquad j = 1, \dots, M_2, $$

where the g_j are the exact data, max_j(g_j) denotes the maximum of the exact data, rand_j is a random number, and δ is the noise level.

Example 1. Consider the initial data f(r) = e^{−r²} sin(2πr). Figure 1(a) shows the computed g(r), and Figure 1(b) shows f(r) computed by (2.5) with α = 0.5 and δ = 1%. We can see that the solution given by (2.5) is computationally unstable because of the ill-posedness of the problem; therefore, regularization is necessary. For α = 0.5 and δ = 1%, we use the regularization methods M1 and M2 to construct the approximations. The results are displayed in Figure 2.

Example 2. The exact solution in Example 1 is very smooth. In this example, we consider a harder case with less smooth initial data. We find that the methods are still effective. Figure 3(c) shows the results for different fractional orders α with a fixed noise level δ = 1% and regularization parameter 1 × 10⁻⁶. The larger α is, the worse the reconstruction result; this is because the degree of ill-posedness increases as the fractional order α increases, becoming largest at α = 1.