SOME RAPIDLY CONVERGENT METHODS FOR NONLINEAR FREDHOLM INTEGRAL EQUATION



Introduction
A great deal of real-life problems consist of dynamical problems which are described by differential, integral, integro-differential or differential-algebraic equations. The equations mentioned above can be considered as operator equations in abstract spaces. Thus many problems in modelling can be reduced to the solution of a nonlinear operator equation

F(x) = 0, (1.1)

where F is a mapping from a Banach space X into a Banach space Y, and F is Fréchet differentiable as many times as necessary.
For a Fredholm equation of the second kind, a well-established theory exists and an increasing collection of efficient numerical methods is available for determining the unknown function approximately. Second kind equations are relatively straightforward to treat provided the forcing term (the right-hand side) and the operator defining the equation are smooth. In this case the algebraic equations produced by approximate methods are usually reasonably well-conditioned.
For solving the second kind integral equation numerically, its kernel is usually discretized. An alternative way to reduce the infinite-dimensional problem to a finite-dimensional one is the use of a so-called projection method (Galerkin approach). Thus the numerical solution of integral equations involves to a large extent the solution of systems of nonlinear equations. The approach adopted in this report is based on iterative methods for finding an inexact resolvent of the linearized Fredholm integral equation(s) or an inexact solution of the corresponding linear equations.
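The reduction just described can be sketched in a few lines of Python: a Nyström (quadrature) discretization turns a nonlinear Fredholm equation of the second kind into a finite nonlinear algebraic system. The example equation, the trapezoid rule, and all names below are illustrative choices, not taken from this paper.

```python
import numpy as np

def nystrom_residual(x, f, kernel, nodes, weights):
    """Residual F(x)_i = x_i - f_i - sum_j w_j K(s_i, t_j, x_j) of the
    discretized nonlinear Fredholm equation of the second kind."""
    K = kernel(nodes[:, None], nodes[None, :], x[None, :])  # K(s_i, t_j, x_j)
    return x - f - K @ weights

# Illustrative example: x(s) = 1 + 0.1 * integral_0^1 s*t*x(t)^2 dt,
# solved here by simple fixed-point iteration (the kernel is a contraction).
n = 21
nodes = np.linspace(0.0, 1.0, n)
weights = np.full(n, 1.0 / (n - 1))
weights[0] *= 0.5
weights[-1] *= 0.5                      # trapezoid rule
f = np.ones(n)
kernel = lambda s, t, x: 0.1 * s * t * x**2

x = f.copy()
for _ in range(50):
    x = f + kernel(nodes[:, None], nodes[None, :], x[None, :]) @ weights

print(np.max(np.abs(nystrom_residual(x, f, kernel, nodes, weights))))  # tiny
```

Fixed-point iteration suffices for this mildly nonlinear example; the methods discussed below address the general case, where Newton-type iterations on the discretized system are required.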
Computational effort is often one of the basic concerns in the solution of real-life problems. The total cost of an iterative method is determined by the number of iterations needed to achieve the required accuracy and by the cost of each iteration. Methods with a high order of convergence require, as a rule, fewer iterations for computing a solution with the prescribed accuracy than methods with a lower convergence order, and therefore they likely need a smaller amount of computational work. In order to save solutions of laborious subproblems, the use of rapidly convergent methods seems to be a reasonable approach.

Methods
For solving (1.1) we consider the use of approximate variants of methods with convergence order p ≥ 2 of the type (2.1), where Q(x, A_k) is an operator from a Banach space X into itself and A_k^i, i ∈ I, are approximations to the inverse operators occurring in the corresponding exact method. The investigation of approximate variants may give a more realistic picture of the convergence properties of the methods under consideration. It is perhaps appropriate to mention that an approximate variant of a method can be obtained as a result of the strategy used for solving the linear subproblems at each iteration, i.e., the associated linear equations are solved approximately by taking finitely many steps of an iterative procedure, or the inverse operator is approximated by a recurrence formula, e.g.
A_k^(i+1) = A_k^i (I + R + R^2 + · · · + R^(p−1)), R = I − F'(x_k) A_k^i, (2.2)

having the convergence order p ≥ 2. In formula (2.2), I denotes the identity mapping.
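Read in finite dimensions, a recurrence of the type (2.2) is a hyperpower (Schulz-type) iteration for the inverse of F'(x_k). The following sketch assumes that reading; the test matrix and all names are illustrative.

```python
import numpy as np

def hyperpower_step(A, Fprime, p=2):
    """One step of A_{i+1} = A_i * (I + R + ... + R^{p-1}), R = I - F'(x) A_i.
    For p = 2 this is the Schulz iteration (quadratic convergence to the
    inverse of F'(x)); in general the step has convergence order p."""
    n = Fprime.shape[0]
    R = np.eye(n) - Fprime @ A
    S = np.eye(n)
    P = np.eye(n)
    for _ in range(1, p):
        P = P @ R        # accumulate successive powers of R
        S = S + P
    return A @ S

rng = np.random.default_rng(0)
Fp = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # well-conditioned test matrix
A = np.eye(4)                                       # crude initial approximation
for _ in range(8):
    A = hyperpower_step(A, Fp, p=3)

print(np.linalg.norm(A @ Fp - np.eye(4)))  # essentially machine precision
```

One checks directly that F'(x) A_{i+1} = I − R^p, so the residual I − F'(x) A_i is raised to the p-th power at every step, which is the source of the order-p convergence claimed for (2.2).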
Presently there are a lot of iterative methods of the type (2.1) with convergence order p ≥ 2, but in practice they are relatively little exploited. This is partially due to the fact that their computational schemes for the execution of one iteration step are, as a rule, laborious. Besides, their advantages become evident mostly in a close vicinity of the solution.
The most popular methods of order three are the method of tangent hyperbolas (or the Chebyshev-Halley method), defined by

x_(k+1) = x_k − [I − (1/2) Γ_k F''(x_k) Γ_k F(x_k)]^(−1) Γ_k F(x_k), Γ_k = [F'(x_k)]^(−1), (2.3)

and the method of tangent parabolas (or the Euler-Chebyshev method), defined by

x_(k+1) = x_k − Γ_k F(x_k) − (1/2) Γ_k F''(x_k) (Γ_k F(x_k))^2. (2.4)

Such methods are appropriate for problems where the other costs dominate those of the second derivative evaluation [3,4]. To avoid the evaluation of second derivatives, F'' can be replaced by a discretization formula containing one additional value of F or F' [1,8]. In such a way we get from (2.3) the midpoint method [8]

x_(k+1) = x_k − [F'(x_k − (1/2) Γ_k F(x_k))]^(−1) F(x_k), (2.5)

and from (2.4) the method (2.6). Let ρ be a nonzero real parameter (ρ > 0); then the formula (2.7) presents a family of methods with convergence order equal to four, provided the accuracy of approximation is of order O(‖F(x_k)‖).
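In scalar form the two third-order methods admit a very short sketch, assuming their standard textbook formulations; the test function f(x) = x^3 − 2 is an illustrative choice.

```python
# Scalar sketch of the two classical third-order methods, assuming their
# standard forms: Halley (tangent hyperbolas) and Euler-Chebyshev (tangent
# parabolas), applied to f(x) = x**3 - 2 with root 2**(1/3).

def halley_step(x, f, df, d2f):
    # x_{k+1} = x_k - u / (1 - (1/2) u f''/f'),  u = f/f'
    u = f(x) / df(x)
    return x - u / (1.0 - 0.5 * u * d2f(x) / df(x))

def chebyshev_step(x, f, df, d2f):
    # x_{k+1} = x_k - u - (1/2) (f''/f') u**2,  u = f/f'
    u = f(x) / df(x)
    return x - u - 0.5 * (d2f(x) / df(x)) * u * u

f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
d2f = lambda x: 6.0 * x

xh = xc = 1.5
for _ in range(5):
    xh = halley_step(xh, f, df, d2f)
    xc = chebyshev_step(xc, f, df, d2f)

root = 2.0 ** (1.0 / 3.0)
print(abs(xh - root), abs(xc - root))  # both at machine precision
```

Both steps cost one extra derivative evaluation compared with Newton's method, which is the trade-off discussed above: worthwhile precisely when the remaining per-iteration costs dominate.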
In the limit case as ρ → ∞ we get from (2.7) the fourth-order method (2.8). In particular, if we take ρ = −1 and A_k = Γ_k, then from (2.7) follows a method having the order of convergence equal to 4. It can be shown that the higher the order of convergence, the more accurately, in general, the related linear subproblems have to be solved in order to preserve the order of convergence intrinsic to the standard methods [8]. Assume now that there exist a uniformly bounded inverse operator Γ_k as well as constants M, K, λ, Λ and sequences {γ_k} and {b_k} satisfying the inequalities given below, where d < ∞, q ≤ 1, k = 0, 1, .... Next we shall prove that the method (2.5) under certain conditions converges superquadratically. This result seems to be relevant to the solution of nonlinear Fredholm integral equations.
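The accuracy requirement of order O(‖F(x_k)‖) can be imitated numerically: below, the exact Newton correction is deliberately perturbed by a term proportional to ‖F(x_k)‖, and the iteration still converges. The test system, the 10% factor, and the fixed perturbation direction are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def F(x):
    # Illustrative 2-d test system: unit circle intersected with the line x0 = x1.
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):
    # Jacobian F'(x) of the test system.
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

e = np.array([1.0, -1.0]) / np.sqrt(2.0)  # fixed perturbation direction
x = np.array([2.0, 0.5])
for k in range(25):
    r = F(x)
    d = np.linalg.solve(J(x), -r)         # exact Newton correction
    d = d + 0.1 * np.linalg.norm(r) * e   # inner solve only to O(||F(x_k)||)
    x = x + d

print(np.linalg.norm(F(x)))  # still converges to the root near (0.707, 0.707)
```

The perturbed correction degrades Newton's quadratic rate to a fast linear one, but the accuracy of the inner solve improves automatically as ‖F(x_k)‖ shrinks, so the iteration keeps converging, which is the behaviour the accuracy condition is meant to capture.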
Theorem 1. Let x_0 ∈ X, S = {x ∈ X : ‖x − x_0‖ ≤ ρ}, and let the conditions stated above be valid on S. If γ_(k+1) = d q b_k, where 0 < d < ∞ and 0 < q, b_0 < 1, and r = λH_0(δ)/d ≤ ρ, then the equation F(x) = 0 has a solution x* in S, ‖x* − x_0‖ ≤ r, to which the sequence (2.5) converges superquadratically.

Proof. Letting w_1 and w_2 be positive constants with w_1, w_2 < ∞, we shall first show the validity of the inequality (2.9). The formula (2.10) generates the Newton method and for general p ≥ 2 defines an iterative method having convergence order equal to p + 1.
Replacing Γ_k in (2.10) by its approximation A_k and putting x̄_(k+1) := ν_k, on the basis of the Taylor expansion we obtain (2.11). Note that in the capacity of Λ and K we can take Λ = C(1 + γ_0) and K = ‖F'(x_0)‖ + L_2 ρ, respectively. Analogously we obtain (2.12) with G < ∞. Taking x_(k+1) := x̄_(k+1) and bearing in mind (2.11), on the basis of (2.11) and (2.12) it is not hard to obtain the inequality (2.9).
Further on, for n ≥ k one obtains the corresponding estimate, i.e., the sequence {x_k} is fundamental (a Cauchy sequence) and consequently converges. Remark 1. It can easily be shown that in the case q = 1 the sequence {x_k} defined by (2.5) converges quadratically, and if γ_k = d‖F(x_k)‖ then it converges cubically.

Solution of Fredholm Integral Equations
Let K be a nonlinear integral operator with a smooth kernel K. We define the equation (3.1). Obviously, the solution of the linearized equation can be written with the help of the resolvent G(s, t), which satisfies the following equation [5]:

G(s, t) = K_x(s, t, x(t)) + ∫ K_x(s, τ, x(τ)) G(τ, t) dτ.

Seeking A_k^(i+1) in the form (3.2), then according to the formula (2.2) for q = 2 we obtain a relation from which the kernel H_(i+1) can be determined. Bearing in mind that x − F(x) = K(x) and A_k^(i+1) = I + H_(i+1), where H_(i+1) denotes the integral operator with kernel H_(i+1)(s, t), the method in combination with (2.2) for q = 2, as applied to solving the equation (3.1), takes the form (3.4), where the kernel is given by (3.5). The formulas (3.4) and (3.5) were apparently first presented by Ulm [6]. The method (3.2)-(3.5) converges quadratically.
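A finite-dimensional analogue of this quadratically convergent scheme couples a Newton-type step x_(k+1) = x_k − A_k F(x_k) with one step of the recurrence (2.2) of order 2 (a Schulz update) for the approximate inverse A_k, in the spirit of Ulm's method. The discretized example equation and all names below are illustrative.

```python
import numpy as np

# Discretization of the illustrative equation x(s) = 1 + lam * s * int_0^1 t x(t)^2 dt
# by the trapezoid rule, followed by an Ulm-type iteration: a Newton-like step
# with approximate inverse A, refreshed by one Schulz step A <- A(2I - F'(x)A).
n, lam = 17, 0.2
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5

def F(x):
    # F(x)_i = x_i - 1 - lam * s_i * sum_j w_j s_j x_j^2
    return x - 1.0 - lam * s * (w @ (s * x**2))

def Fprime(x):
    # Jacobian: I - 2 * lam * s_i * w_j * s_j * x_j
    return np.eye(n) - 2.0 * lam * np.outer(s, w * s * x)

x = np.ones(n)
A = np.eye(n)                      # initial approximation to the inverse Jacobian
for _ in range(20):
    x = x - A @ F(x)
    A = A @ (2.0 * np.eye(n) - Fprime(x) @ A)  # one Schulz refresh per step

print(np.linalg.norm(F(x)))  # small residual
```

No linear system is ever solved and no matrix is inverted directly: all work consists of matrix-vector and matrix-matrix products, which mirrors the claim below that integration is the main (and numerically well-posed) operation of such methods.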
The application of the formula (2.5) in combination with (2.2) for q = 3 to the equation (3.1) can be found in [8]. This variant of the method converges cubically, but the execution of the formula for q = 3 is too expensive, and it is desirable to find a reasonable compromise between the needed accuracy of approximation and the computational cost. Next we propose an inexpensive procedure for improving the rate of approximation.
First we evaluate Φ(x_(i+1)) = A_i F'(x_(i+1)) D_i, after that exploit the formula (3.6), and finally compute A_(i+1), seeking the operator D_i in the form (3.7). Note that in the limit case the procedure reduces to (3.8). The proposed procedure requires only four integrations for finding A_(i+1). The operator Φ(x_(i+1)) has a lower condition number than F'(x_(i+1)), and therefore the inversion of Φ(x) is a more stable procedure than the direct inversion of F'(x). The main disadvantage of the procedure is the lack of a correct error estimate. It also seems reasonable to combine the formula (3.8) with the formulas (2.7) and (2.8).
Although this procedure for computing A_(i+1) in the method (2.5) guarantees only quadratic convergence, it is expected to be more efficient than (3.5), because the operator Φ(x_(i+1)) has a lower condition number than F'(x_(i+1)) and therefore the inversion of Φ(x_(i+1)) is more stable than the direct inversion of F'(x_(i+1)). One of the disadvantages of (3.6) is the lack of a correct error estimate. Nevertheless, it seems effective to combine the formula (3.6) with (2.5), or the method (2.5) with (3.8), having the convergence order equal to 1 + √2 [2,9]. Further on we consider the solution of (3.1) in a Hilbert space setting, assuming that F' is symmetric and that the inequalities (3.9) are valid in a Hilbert space H. In this case, for inverting F'(x) we can use the iterative formula (3.10) with 0 < α_i < 2/M, i = 0, 1, ..., n, and then H_(i+1) is defined by the relation (3.11). For finding A_(i+1) we now use the pair of formulas (3.12)-(3.13), with the corresponding residual bounded by µ_(i+1) and µ = max{µ_(i+1)}. It can easily be deduced from (3.12) and (3.13) that, in virtue of (3.9), it is possible to find a quantity α_(i+1) such that µ_(i+1) < 1; thus we can take q = µ < 1. On the basis of Theorem 1 we can now conclude that the use of the procedure (3.12)-(3.13) for computing A_(i+1) in the method (2.5) guarantees at least superquadratic convergence.
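For a symmetric positive definite F'(x) with spectrum bounded by M, a gradient-type inversion step with 0 < α < 2/M can be sketched as follows; the concrete update B_(i+1) = B_i + α(I − S B_i) is our reading of such a formula, and the test matrix is illustrative.

```python
import numpy as np

# Gradient-type (Richardson) iteration for the inverse of a symmetric positive
# definite matrix S with spectrum in [m, M]: B <- B + alpha (I - S B).
# One checks that I - S B_{i+1} = (I - alpha S)(I - S B_i), so the iteration
# converges to S^{-1} exactly when 0 < alpha < 2/M.
n = 5
rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]      # random orthogonal matrix
S = Q @ np.diag(np.linspace(0.5, 2.0, n)) @ Q.T       # SPD, spectrum in [0.5, 2]
M = 2.0
alpha = 0.9 * (2.0 / M)                               # inside (0, 2/M)

B = np.zeros((n, n))
for _ in range(200):
    B = B + alpha * (np.eye(n) - S @ B)

print(np.linalg.norm(S @ B - np.eye(n)))  # converges to zero
```

The contraction factor per step is max over the spectrum of |1 − αλ|, so the rate is only linear; this matches the remark below that the scheme (3.10)-(3.11) is merely superlinear overall but extremely cheap per step.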
The second approach adopted in this report is to use an iterative solution method for the associated linear equations. A strategy that, instead of finding the exact solution of a linear equation at every iteration, solves it intentionally inexactly offers a possibility to save computational effort. It is adaptive in the sense that it uses low-accuracy numerical solutions in the inner iterations while the solution of (3.1) has not yet been reached, and improves the accuracy as the solution is approached.
The method (3.10)-(3.11) has only a superlinear rate of convergence, but its computational scheme is very simple, and therefore it can be used for finding inexpensive approximate solutions to the corresponding linear equations when high-order methods are applied, the linear equations being solved repeatedly in the inner iterations.

Concluding Remarks
Iterative methods are usually self-correcting and hence not sensitive to computational errors. Common to all the methods under consideration is the feature that their main operation is integration, which is numerically a trustworthy and well-posed operation. The methods under discussion offer various possibilities to organize parallel computation; e.g., the terms K_x H_i, H_i^2, H_i K_x and H_i K_x H_i can be computed in parallel, as can be seen, for instance, from formula (3.5). Besides, methods with successive approximation of the resolvent yield the solution in an analytical form.
To ensure for the methods (2.7) and (2.8) a convergence order equal to 4, one can apply the recurrence formula (3.5) twice at every iteration step.