ON THE DISCRETE LINEAR ILL-POSED PROBLEMS

An inverse problem of photo-acoustic spectroscopy of semiconductors is investigated. The main problem is formulated as an integral equation of the first kind. Two different regularization methods are applied; the algorithms for defining the regularization parameters are given.


THE STATEMENT OF THE PROBLEM
An inverse problem of photo-acoustic spectroscopy of semiconductors taking into account carrier diffusion and recombination consists in recovering the real function $f(x)$, $x \in (-l, 0)$, $l > 0$, which is a part of the corresponding boundary value problem; the measurements lead to an integral equation of the first kind (1.5) with a kernel $K(\omega, x)$ of exponential type. It is well known that such problems are ill-posed. Since the function $g(\omega)$ is measured only for a finite discrete set of frequencies $\omega_1, \omega_2, \ldots, \omega_N$, the problem (1.5) is discrete ill-posed. Furthermore, any measured data contain random errors $\eta_j$, $j = 1, 2, \ldots, N$, bounded by the error level
$$\Big( N^{-1} \sum_{j=1}^{N} \eta_j^2 \Big)^{1/2} \le \delta$$
for some positive $\delta$. Therefore, for the numerical solution of the inverse problem it is necessary to calculate the function $f(x)$ on the basis of discrete data $\tilde g_j$, $j = 1, 2, \ldots, N$, of the following form:
$$\tilde g_j = g_j + \eta_j = (\varphi_j, f)_X + \eta_j, \qquad j = 1, 2, \ldots, N, \tag{1.6}$$
where $\varphi_j(x) = K(\omega_j, x)$ are known linearly independent functions, $f, \varphi_j \in X$, and $X$ is a Hilbert space with the inner product $(\cdot, \cdot)_X$. Many problems in signal processing and geophysics can be formulated in the form (1.6); a good overview of discrete ill-posed problems is given in [3, 4].
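The data model (1.6) can be sketched numerically. Everything below is an illustrative assumption rather than the paper's concrete setup: the kernel form $K(\omega, x) = e^{\omega x}$, the grid, the test function $f$, and the noise level $\delta$.

```python
import numpy as np

# Illustrative sketch of the data model (1.6).  The kernel K(omega, x) = exp(omega*x),
# the grid, the test function f and the noise level delta are assumptions for the demo.
rng = np.random.default_rng(0)

l = 1.0
x = np.linspace(-l, 0.0, 201)          # grid on (-l, 0)
w = np.gradient(x)                     # simple quadrature weights for (., .)_X
omegas = np.linspace(1.0, 5.0, 8)      # N = 8 measured frequencies omega_j
phi = np.exp(np.outer(omegas, x))      # phi_j(x) = K(omega_j, x)

f_true = np.sin(np.pi * x)             # hypothetical "true" f(x)

g_exact = (phi * w) @ f_true           # exact data g_j = (phi_j, f)_X
delta = 1e-3
eta = delta * rng.standard_normal(len(omegas))   # random errors eta_j
g_noisy = g_exact + eta                # measured data g~_j = g_j + eta_j
```

Each measurement is a single inner product of the unknown function against one kernel slice, which is what makes the problem severely underdetermined for small $N$.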
Because of the finite number of data, the solution of the inverse problem is non-unique; therefore, we look for the normal pseudo-solution $f^+(x)$ of the problem (1.6). It can be shown that $f^+(x)$ has the form
$$f^+(x) = \varphi^\top(x)\, Q^{-1} g,$$
where $\varphi(x) = (\varphi_1(x), \varphi_2(x), \ldots, \varphi_N(x))^\top$, $g = (g_1, g_2, \ldots, g_N)^\top$, and $Q$ is the $N \times N$ Gram matrix with elements $q_{jk} = (\varphi_j, \varphi_k)$, $j, k = 1, 2, \ldots, N$. See, for example, [5].
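Computing the pseudo-solution thus amounts to solving one linear system with the Gram matrix. A minimal sketch, again under illustrative assumptions (exponential kernel, a simple quadrature for the inner product, a hypothetical test function):

```python
import numpy as np

# Sketch of the normal pseudo-solution f+(x) = phi(x)^T Q^{-1} g: the
# minimum-norm function in span{phi_j} reproducing the (noise-free) data.
# Kernel, grid and test function are illustrative assumptions.
x = np.linspace(-1.0, 0.0, 201)
omegas = np.array([1.0, 3.0, 6.0, 10.0, 15.0])
phi = np.exp(np.outer(omegas, x))           # phi_j(x) = exp(omega_j * x)

w = np.gradient(x)                          # simple quadrature weights for (., .)_X
Q = (phi * w) @ phi.T                       # Gram matrix q_jk = (phi_j, phi_k)

f_true = np.sin(np.pi * x)
g = (phi * w) @ f_true                      # exact data g_j = (phi_j, f)

c = np.linalg.solve(Q, g)                   # coefficients: Q c = g
f_plus = c @ phi                            # f+(x) = sum_j c_j phi_j(x)

residual = (phi * w) @ f_plus - g           # f+ reproduces the data: Q c - g ~ 0
```

Note that with noisy data this direct inversion is exactly what fails, since $Q$ is ill-conditioned; that is the motivation for the regularization below.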
Since the inverse problem (1.6) is ill-posed, the matrix $Q$ is ill-conditioned, and for the numerical solution it is necessary to use a special regularization method. Tikhonov's regularization method is very popular and is convenient to use in semi-continuous form [6]. According to this scheme, the approximate solution is the function $f_\alpha(x)$ which minimizes on the space $X$ the functional
$$\sum_{j=1}^{N} \big[ (f, \varphi_j) - \tilde g_j \big]^2 + \alpha \| f \|_X^2, \qquad f \in X.$$
Here $\alpha > 0$ is the regularization parameter, which should be chosen. Although the theory of regularization has been developed quite well during the last thirty years, the problem of finding the parameter $\alpha$ is still important. See, for example, some recent papers [7, 8, 9, 10, 11, 12].
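For finite data the minimizer of this functional lies in the span of the $\varphi_j$, and its coefficient vector is $(Q + \alpha E)^{-1} \tilde g$, with $E$ the identity matrix. A sketch under the same kind of illustrative assumptions (kernel, grid, noise level):

```python
import numpy as np

# Sketch of Tikhonov's method in semi-continuous form: the minimizer of
#   sum_j [(f, phi_j) - g~_j]^2 + alpha * ||f||_X^2
# has coefficients c_alpha = (Q + alpha E)^{-1} g~.
# Setup (kernel, grid, noise) is an illustrative assumption.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 0.0, 201)
omegas = np.array([1.0, 3.0, 6.0, 10.0, 15.0])
phi = np.exp(np.outer(omegas, x))
w = np.gradient(x)
Q = (phi * w) @ phi.T

g_noisy = (phi * w) @ np.sin(np.pi * x) + 1e-4 * rng.standard_normal(len(omegas))

def tikhonov(alpha):
    """f_alpha(x) = phi(x)^T (Q + alpha E)^{-1} g~ on the x-grid."""
    c = np.linalg.solve(Q + alpha * np.eye(len(omegas)), g_noisy)
    return c @ phi

f_small = tikhonov(1e-8)   # weak regularization: noise-dominated, large norm
f_large = tikhonov(1e2)    # strong regularization: oversmoothed, small norm
```

The two extremes show the trade-off that the parameter-choice methods below try to balance.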

THE REGULARIZATION PARAMETER PROBLEM
All methods for determining the regularization parameter can be divided into several types according to the additional information used. One group of methods uses a priori information concerning the error level $\delta$. Usually one uses the discrepancy principle [13]. It is noted that the discrepancy principle yields an oversmoothed solution. It is shown in [14] that this method provides the smallest error propagation in the approximate solution, but it gives the worst resolution. In reality, the error $\| f_\alpha - f^+ \|$ can be reduced by allowing somewhat greater error propagation in exchange for improved resolution. Such an approach leads to the majorant principle [13], if an estimate of $\| f_\alpha - f^+ \|$ is available [14]. This approach was also used, for example, in [15, 16]. It should be noted that such an estimate is not possible on the entire space $X$, and a further assumption concerning an upper bound of the norm $\| f^+ \|$ is necessary. In this case, the optimal choice of $\alpha$ is possible. If an a priori value of $\| f^+ \|$ is unknown, then one tries to obtain it from the data, for instance, using the norm $\| f_\alpha \|$. Such an approach is realized in [15]. We also use this idea in our paper.
Unfortunately, a sharp estimate of $\delta$ is desirable, since the accuracy of $\| f_\alpha - f^+ \|$ is very sensitive to the change of $\delta$. Therefore, we are also interested in methods that do not use the error level $\delta$. We mention the L-curve method. Consider the equation
$$T f = g \tag{2.1}$$
with a compact operator $T$ in the Hilbert space $X$. Really, if for any $g_n \to g$ the convergence $f_n = R(\alpha_n) g_n \to f = T^{-1} g$ holds and $R(\alpha_n) g_n = R g_n$, then simply $R \equiv T^{-1}$ and $R$ is continuous, i.e. the inverse problem (2.1) is not ill-posed. So, for discrete ill-posed problems we should not expect that for $n \to \infty$ and $\delta \to 0$ we will get the convergence $f_\alpha \to f^+$. This means that any heuristic method of choosing the regularization parameter sometimes fails even for finite $n$. The non-convergence of the L-curve method is proved in [10]. In spectral form, the regularized solution can be written as
$$f_\alpha(x) = \sum_{j=1}^{N} \frac{(U^\top \tilde g)_j \, z_j(x)}{d_j + \alpha},$$
where $(z_1(x), z_2(x), \ldots, z_N(x))^\top = U^\top \varphi(x)$, $Q = U D U^\top$ is the orthogonal decomposition of the matrix $Q$, and $D = \mathrm{diag}\{ d_1, d_2, \ldots, d_N \}$ is the diagonal $N \times N$ matrix of eigenvalues $d_j$, $j = 1, 2, \ldots, N$.
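The spectral form can be verified directly against the matrix form $(Q + \alpha E)^{-1} \tilde g$; the Gram matrix below is an illustrative stand-in for the one arising from the photo-acoustic kernel.

```python
import numpy as np

# Sketch of the spectral form: with Q = U D U^T, the regularized solution is
#   f_alpha(x) = sum_j (U^T g~)_j * z_j(x) / (d_j + alpha),   z(x) = U^T phi(x).
# Small eigenvalues d_j (the ill-conditioned directions) are damped by alpha.
# The Gram matrix below is an illustrative stand-in.
x = np.linspace(-1.0, 0.0, 201)
phi = np.exp(np.outer(np.array([1.0, 3.0, 6.0, 10.0, 15.0]), x))
w = np.gradient(x)
Q = (phi * w) @ phi.T
g = (phi * w) @ np.sin(np.pi * x)

d, U = np.linalg.eigh(Q)                     # Q = U diag(d) U^T
z = U.T @ phi                                # z_j(x), the rotated basis
alpha = 1e-6
f_spec = ((U.T @ g) / (d + alpha)) @ z       # spectral form of f_alpha

f_mat = np.linalg.solve(Q + alpha * np.eye(5), g) @ phi   # matrix form
```

The filter factors $d_j / (d_j + \alpha)$ make explicit how $\alpha$ suppresses the directions with small $d_j$, where noise is amplified most.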

THE METHOD OF THE REGULARIZATION FUNCTION
If an estimate of $\| q \|$ is known, then the optimal choice of the regularization parameter is the one which provides the minimal right-hand side of the inequality (3.2) (the majorant principle). We note that such a regularization parameter depends on $x$; hence we have the regularization function $\alpha = \alpha(x)$, $x \in (-l, 0)$. Usually $\| q \|$ is unknown; in this situation we estimate $\| q \|$ from the data.
The simplest way is to substitute for the true vector $q$ the vector
$$q_\alpha = (Q + \alpha E)^{-1} \tilde g,$$
where $E$ is the $N \times N$ identity matrix. However, a more precise approach can be used. From the definition of $q_\alpha$, substituting $\tilde g = g + \eta = Q q + \eta$, we deduce the equality
$$q = q_\alpha + \alpha (Q + \alpha E)^{-1} q - (Q + \alpha E)^{-1} \eta, \tag{3.3}$$
where $\eta = (\eta_1, \eta_2, \ldots, \eta_N)^\top$. Substituting this formula for $q$ into the right-hand side $m$ times, we have
$$q = \sum_{j=1}^{m} \alpha^{j-1} (Q + \alpha E)^{-j} \tilde g - \sum_{j=1}^{m} \alpha^{j-1} (Q + \alpha E)^{-j} \eta + \alpha^{m} (Q + \alpha E)^{-m} q. \tag{3.4}$$
If $m$ is quite large and $\alpha < 1$, then it is sufficient to use the first term. Substituting it into (3.2), we obtain a method for the choice of the regularization parameter. This method is not heuristic, since it uses the error level $\delta$. The best fitting is to find $\alpha$ providing the minimum of the criterion function $\| d f_\alpha / d \alpha \|$ (the quasi-optimal value). A similar criterion function is formulated in [13].
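The quasi-optimal criterion can be evaluated exactly, since for $c_\alpha = (Q + \alpha E)^{-1} \tilde g$ one has $d c_\alpha / d\alpha = -(Q + \alpha E)^{-1} c_\alpha$. A sketch on an illustrative problem (kernel, grid, and noise are assumptions for the demo):

```python
import numpy as np

# Sketch of the quasi-optimal choice: minimize ||d f_alpha / d alpha|| over a
# grid of alpha values.  For c_alpha = (Q + alpha E)^{-1} g~ one has
#   d c_alpha / d alpha = -(Q + alpha E)^{-1} c_alpha,
# so the criterion is computed exactly.  Setup is an illustrative assumption.
rng = np.random.default_rng(2)
x = np.linspace(-1.0, 0.0, 201)
phi = np.exp(np.outer(np.array([1.0, 3.0, 6.0, 10.0, 15.0]), x))
w = np.gradient(x)
Q = (phi * w) @ phi.T
g = (phi * w) @ np.sin(np.pi * x) + 1e-5 * rng.standard_normal(5)

def quasi_opt(alpha):
    A = Q + alpha * np.eye(5)
    c = np.linalg.solve(A, g)
    dc = -np.linalg.solve(A, c)              # d c_alpha / d alpha
    df = dc @ phi                            # d f_alpha / d alpha on the grid
    return np.sqrt(np.sum(w * df ** 2))      # ||.||_X via quadrature

alphas = np.logspace(-10, 0, 41)
crit = np.array([quasi_opt(a) for a in alphas])
alpha_qo = alphas[np.argmin(crit)]
```

Scanning a logarithmic grid of $\alpha$ is the usual practical implementation of such minimum-based rules.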

THE ANALYSIS OF SOME CLASSICAL METHODS
If we use $m = 1$ and the formula (3.3), we obtain the inequality
$$\| q \|^2 \le \| q_\alpha \|^2 + \| (Q + \alpha E)^{-1} \|_1 \, (\delta + \alpha \| q \|).$$
In order to find the best estimate of $\| q \|^2$ we need to minimize the right-hand side of this inequality. We may expect that the minimizer of the criterion function
$$\| q_\alpha \|^2 + \delta \| (Q + \alpha E)^{-1} \|_1 + \alpha \| q \|^2 + C$$
will be close to the optimal value. The constant $C$ does not change the position of the minimum and may be omitted. Neglecting the term $\alpha \| q \|^2$, we obtain the criterion function of the cross-validation method [6]. It can be seen that this method is not quite precise because it uses $m$ equal only to 1. The cross-validation method was suggested for the case when the errors $\eta_j$, $j = 1, 2, \ldots, N$, are white noise. This assumption is crucial for the application of the cross-validation method. Using our scheme with $m > 1$, we may use such a criterion function without requiring an a priori distribution of the measurement errors, as well as in situations when the cross-validation method fails.
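For comparison, the textbook generalized cross-validation (GCV) rule, which likewise needs no error level, can be sketched as follows; this is the standard form and may differ in detail from the criterion discussed above, and the problem setup is again an illustrative assumption.

```python
import numpy as np

# Sketch of the standard generalized cross-validation (GCV) rule: minimize
#   GCV(alpha) = ||(I - Q (Q + alpha E)^{-1}) g~||^2 / [tr(I - Q (Q + alpha E)^{-1})]^2,
# evaluated in the eigenbasis of Q.  This is the textbook criterion, not
# necessarily the exact one in the text; the setup is illustrative.
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 0.0, 201)
phi = np.exp(np.outer(np.array([1.0, 3.0, 6.0, 10.0, 15.0]), x))
w = np.gradient(x)
Q = (phi * w) @ phi.T
g = (phi * w) @ np.sin(np.pi * x) + 1e-5 * rng.standard_normal(5)

d, U = np.linalg.eigh(Q)                 # Q = U diag(d) U^T
gh = U.T @ g

def gcv(alpha):
    r = alpha * gh / (d + alpha)         # residual components in the eigenbasis
    trace = np.sum(alpha / (d + alpha))  # tr(I - Q (Q + alpha E)^{-1})
    return np.sum(r ** 2) / trace ** 2

alphas = np.logspace(-10, 0, 41)
scores = np.array([gcv(a) for a in alphas])
alpha_gcv = alphas[np.argmin(scores)]
```

The white-noise assumption mentioned above enters through the trace normalization, which estimates the effective number of degrees of freedom left to the noise.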