On Numerical Realization of Quasioptimal Parameter Choices in (Iterated) Tikhonov and Lavrentiev Regularization

We consider linear ill-posed problems in Hilbert spaces with a noisy right-hand side and a given noise level. To approximate the solution, the Tikhonov method or its iterated variant may be used; in self-adjoint problems the Lavrentiev method or its iterated variant is used. For the a posteriori choice of the regularization parameter, quasioptimal rules are often used, which require the computation of additional iterated approximations. In this paper we propose alternative numerical schemes for the parameter choice, which use linear combinations of approximations with different parameters instead of additional iterations.


Introduction
We consider the operator equation Au = f (1.1), where A ∈ L(H, F) is a linear continuous operator and H, F are Hilbert spaces with corresponding inner products (·,·) and norms ‖·‖. We do not suppose that the range R(A) is closed, so in general our problem is ill-posed. The kernel N(A) may be non-trivial. As is usual in the treatment of ill-posed problems, we suppose that instead of the exact right-hand side f ∈ F we have only an approximation f^δ ∈ F with a given noise level δ: it holds that ‖f^δ − f‖ ≤ δ. The standard regularization method for solving problem (1.1) is the Tikhonov method u_α = (αI + A*A)^{−1} A*f^δ. The accuracy of this approximation may be increased by iteration. Let m ∈ N be fixed and let u_0 = u_{0,α} ∈ H be the initial approximation. The m-iterated Tikhonov approximation u_α = u_{α,m} is obtained by the iterative computation

u_{α,i} = (αI + A*A)^{−1}(α u_{α,i−1} + A*f^δ), i = 1, 2, …, m.
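In finite dimensions the iteration above is a sequence of linear solves with the same matrix αI + A*A, so the matrix can be factored once and reused. A minimal NumPy sketch (an illustration only; the function name, the discrete matrix A, and the default starting value u_0 = 0 are our choices, not from the paper):

```python
import numpy as np

def iterated_tikhonov(A, f_delta, alpha, m, u0=None):
    """m-iterated Tikhonov approximation:
    u_i = (alpha*I + A^T A)^{-1} (alpha*u_{i-1} + A^T f_delta),  i = 1..m.

    Illustrative sketch for a finite-dimensional discretization,
    where the operator A is a plain NumPy matrix.
    """
    n = A.shape[1]
    u = np.zeros(n) if u0 is None else u0.copy()
    M = alpha * np.eye(n) + A.T @ A     # same matrix for every iteration
    rhs_fixed = A.T @ f_delta
    for _ in range(m):
        u = np.linalg.solve(M, alpha * u + rhs_fixed)
    return u
```

With u0 = 0 and m = 1 this reduces to the ordinary Tikhonov approximation; increasing m (for fixed α) moves the approximation toward the least-squares solution.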
where p_0 = m (p_0 is called the qualification of the method), γ_p = (p/m)^p (1 − p/m)^{m−p}, and γ_* = 1/2 for the Tikhonov method and γ_* = √m in case m ≥ 2; here a ≥ ‖A*A‖ for the approximation (1.6) and a ≥ ‖A‖ for the approximation (1.7) (see [16, 17]). For smoother solutions a larger m may be recommended (see Section 2: for larger p in (2.3), the error estimate (2.4) can be obtained only with larger p_0 = m).
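The constant γ_p above is the maximum of the function t^p (1 + t)^{−m} over t ≥ 0 (with t = λ/α), attained at t = p/(m − p) for p < m. A quick numerical sanity check of this identity (our own verification script, not part of the paper):

```python
import numpy as np

def gamma_p(p, m):
    """gamma_p = (p/m)**p * (1 - p/m)**(m - p)."""
    return (p / m) ** p * (1 - p / m) ** (m - p)

def gamma_p_numeric(p, m):
    """Brute-force maximum of t**p / (1 + t)**m over a fine grid, 0 < t <= 50.

    For p < m the maximizer t = p/(m - p) lies well inside this range.
    """
    t = np.linspace(1e-6, 50.0, 200_000)
    return np.max(t ** p / (1 + t) ** m)
```

For example, gamma_p(1, 2) = 1/4 and gamma_p(2, 3) = 4/27, matching the grid maxima.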
In regularization methods of the form (1.6), (1.7) an important problem is the choice of a proper regularization parameter α. If α is too small, the numerical implementation is unstable and the approximation u_α is useless; if m is small and α is too large, the approximation u_α is dominated by the initial guess u_0. This paper is devoted to several quasioptimal rules for the choice of α = α(δ). These rules require the computation of additional iterated approximations. We propose alternative numerical schemes for the parameter choice, in which linear combinations of approximations with different parameters are used instead of additional iterations. The paper is organized as follows. In Sections 2 and 3 the quasioptimality property and the corresponding rules for parameter choice are considered. Section 4 contains alternative numerical schemes for these rules.

Quasioptimality of the Parameter Choice Rule
In the following we introduce the property of quasioptimality to characterize the quality of the choice of the regularization parameter for the concrete problem Au = f. Consider a method P of the form (1.6) (or of the form (1.7) in the self-adjoint case). Let R be a rule for the choice of the regularization parameter and let α(R) be the parameter chosen by rule R.
Definition 1 [see [13]]. Rule R for the a posteriori choice of the regularization parameter α = α(R) is weakly quasioptimal (or quasioptimal) if there exists a constant C (which does not depend on A, u_*, f^δ) such that for each f^δ with ‖f^δ − f‖ ≤ δ the error estimate holds. Here u_* is the solution of problem (1.1) nearest to the initial approximation u_0. The constant C is called the coefficient of quasioptimality.

Let us motivate Definition 1. The error of the approximation (1.6) may be written in a form from which, using ‖f^δ − f‖ ≤ δ and (1.8), we obtain the upper bound (2.1). Hence weak quasioptimality means that the error of the approximate solution is at most the infimum of the upper bound (2.1), multiplied by the constant C.

A usual way to characterize the quality of a parameter choice rule for a method P_0 is to prove the order-optimality of P_0 on different sets of solutions: the method P_0 is order-optimal on the set M if (2.2) holds. For methods of the form (1.6) the usual way (see, for example, [4, 5, 7, 8, 14]) to prove the order-optimality of the method with a priori parameter choice α = α(δ, M) is to use the error estimate (2.1). These considerations about the connection between the quasioptimality of rule R and the order-optimality of method P with parameter choice by rule R may be formulated as the following theorem.
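For concreteness, the estimate referred to in Definition 1 has the standard weak-quasioptimality form (a reconstruction under our reading of (2.1), since the displayed formula is missing from this copy; here \bar{u}_\alpha denotes the approximation computed from the exact data f):

```latex
\| u_{\alpha(R)} - u_* \| \;\le\; C \inf_{\alpha > 0}
  \Bigl( \| \bar{u}_{\alpha} - u_* \| \;+\; \gamma_* \,\delta\, \alpha^{-1/2} \Bigr)
```

The first term in the infimum is the approximation error for exact data, the second the noise-propagation bound with the constant γ_* from (1.8).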
Theorem 1 [13]. Let (2.2) hold and let the parameter choice rule R be weakly quasioptimal. Then the method P with parameter choice by rule R is order-optimal on the set M.
If the rule R is weakly quasioptimal, then by Theorem 1 the regularization method with parameter choice by rule R is order-optimal for the full range p ∈ (0, 2p_0]. Various other sets of solutions and the corresponding order-optimal error estimates may be found in [4, 5, 7, 8, 14].

Quasioptimal Rules for Parameter Choice
In the following we consider the quasioptimality properties of several a posteriori parameter choice rules.
The discrepancy principle for methods (1.2)–(1.5) is not weakly quasioptimal and leads to divergence of the approximations (1.4). For these methods we consider the following weakly quasioptimal modification [1, 9, 10] of the discrepancy principle. Define the function d_MD(α). The modification of the discrepancy principle (MD rule).
The property of weak quasioptimality holds if we apply the operators D_{α,k} to the modified discrepancy and consider the following rule.
Under a mild condition (see [3]) the last error estimate also holds if we use the ME rule in methods (1.4), (1.5); in the general case, however, the quasioptimality of the ME rule for these methods is still an open problem.

Numerical Realization of Rules
Let us assume that q < 1. For the numerical realization of rules MD and ME, the regularization parameter α(δ) = α_i is often chosen from the decreasing sequence α_i = α_0 q^i. For the realization of rule R1 the regularization parameter α(δ) is chosen from the increasing sequence α_i = α_{i−1}/q. We call these analogs of rules MD, ME, and R1 the rules MDa, MEa, and R1a, respectively. To avoid instability in computing ϕ_k(α) for small α, another analog of rule R1 may be used, which we call rule R1b. Here, from the decreasing sequence α_i = α_0 q^i (i = 0, 1, …), first i_0 is chosen as the smallest index i with d_MD(α_i) ≤ bδ, and for the regularization parameter α_{j_0} is taken, where j_0 ≥ i_0 is the smallest index with ϕ_k(α_{j_0}) ≤ bδ. It can be shown that for these rules the corresponding error estimate holds. The number of iterations needed per parameter value is m + int(k) (for (1.3)) or m + int(k) + 1 (for (1.5)) for the R1 rule (where int(k) is the integer part of k), and m + 1 (for (1.3)) or m + 2 (for (1.5)) for the ME rule.
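The scan over a geometric parameter sequence can be sketched as follows. Since the definition of the modified discrepancy d_MD is not reproduced here, the sketch uses the ordinary discrepancy ‖Au_α − f^δ‖ as a stand-in stopping functional; the function name, the constants b and q, and the starting value α_0 = ‖A^T A‖ are illustrative assumptions:

```python
import numpy as np

def choose_alpha_geometric(A, f_delta, delta, b=1.5, q=0.5,
                           alpha0=None, m=2, max_iter=60):
    """MDa-style parameter search over the decreasing grid alpha_i = alpha0*q**i.

    Returns the first alpha whose (plain, not modified) discrepancy
    ||A u_alpha - f_delta|| drops below b*delta, together with the
    corresponding m-iterated Tikhonov approximation.
    """
    if alpha0 is None:
        alpha0 = np.linalg.norm(A.T @ A, 2)   # start at a >= ||A^T A||
    n = A.shape[1]
    for i in range(max_iter):
        alpha = alpha0 * q ** i
        # m-iterated Tikhonov approximation for this alpha (u_0 = 0)
        u = np.zeros(n)
        M = alpha * np.eye(n) + A.T @ A
        for _ in range(m):
            u = np.linalg.solve(M, alpha * u + A.T @ f_delta)
        if np.linalg.norm(A @ u - f_delta) <= b * delta:
            return alpha, u
    return alpha, u   # fall through: smallest alpha tried
```

Each grid point costs m linear solves; the schemes of this paper aim to avoid the extra iterated solves that rules such as R1 and ME would add on top of this.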
In the following we present formulas for the approximate realization of these rules, where linear combinations of consecutive approximate solutions u_{α_i,m} are used instead of additional iterations.

For the approximation of the function ϕ_k(α) we note that the generating function g_α(λ) in methods (1.3), (1.5) satisfies the equality with β_α(λ) = α(α + λ)^{−1}. In method (1.3), due to the equalities (1.6), we obtain a representation with C_{m,k} = (m − 1)!/(m + k)!. Analogously we obtain a representation for method (1.5).

For the approximate computation of the derivative d^k u_α / d(α^{−1})^k some formula of numerical differentiation may be used, for example one based on the Lagrange interpolation polynomial. Taking n = k and using the knots α_i = q^{i−j} α_j, i ∈ {j − s(k), …, j − s(k) + k}, where s(k) = k/2 for even k and s(k) = (k + 1)/2 for odd k, we obtain an approximation of ϕ_k(α_j) that can be used in rule R1 for methods (1.2), (1.3). In methods (1.4), (1.5) a corresponding approximation is used instead of ϕ_k(α_j).

For the numerical realization of the MD rule and the ME rule in methods (1.2)–(1.5) we use extrapolation. The extrapolated approximation is a linear combination of the original approximations u_{α_i,j} with n different parameters α_i and has qualification p_0 = mn if j = 1, …, m are used, and p_0 = m + n − 1 if only j = m is used (see [2]). Therefore, for source-like solutions (2.3) with p > 2m the accuracy (2.4) of the extrapolated approximations is much higher than that of the approximations (1.2)–(1.5). However, in this paper we use extrapolation not for the construction of an alternative approximate solution, but only for the choice of α in methods (1.2)–(1.5). Namely, instead of computing u_{α,m+1} it is sufficient to use another approximation with the same qualification m + 1.
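The differentiation weights produced by the Lagrange interpolation polynomial at unequally spaced (here geometric) knots can be computed by Fornberg's classical recursion. The helper below is our own illustration, not part of the paper; the nodes x_i play the role of 1/α_i, and applying the returned weights componentwise to the vectors u_{α_i} gives the finite-difference approximation of d^k u_α / d(α^{−1})^k:

```python
import numpy as np

def fd_weights(x0, x, k):
    """Finite-difference weights for the k-th derivative at x0 on nodes x.

    Fornberg's recursion: the weights are those of the k-th derivative of
    the Lagrange interpolation polynomial through the (unequally spaced)
    nodes x, evaluated at x0. Exact for polynomials of degree < len(x).
    """
    n = len(x)
    c = np.zeros((n, k + 1))
    c[0, 0] = 1.0
    c1, c4 = 1.0, x[0] - x0
    for i in range(1, n):
        mn = min(i, k)
        c2, c5, c4 = 1.0, c4, x[i] - x0
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for s in range(mn, 0, -1):
                    c[i, s] = c1 * (s * c[i - 1, s - 1] - c5 * c[i - 1, s]) / c2
                c[i, 0] = -c1 * c5 * c[i - 1, 0] / c2
            for s in range(mn, 0, -1):
                c[j, s] = (c4 * c[j, s] - s * c[j, s - 1]) / c3
            c[j, 0] = c4 * c[j, 0] / c3
        c1 = c2
    return c[:, k]
```

With k + 1 geometric knots this reproduces the divided-difference schemes described above without any additional iterated solves.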
Proposition 1. Let q < 1. Then the following inequalities are true:

Proof. Let Q(λ) be the spectral family of projections of the operator AA*. Then (4.1) gives the first assertion. Taking into account the equality 1 − λ g_{α/q}(λ) = (1 + µ)^{−m} with µ = λ/(αq), for proving (4.2) it is sufficient to show that the corresponding inequalities hold for all µ > 0.

Next, we take into account the corresponding representations in methods (1.4), (1.5); in method (1.4), B_α²(Au_α − f^δ) can be approximated accordingly. Then for the MD and ME rules, instead of the functions d_MD(α) and d_ME(α), their approximations may be used. For proving the quasioptimality of rules MDa, MEa, and R1a, the following result is useful.