Local Convergence of Jarratt-Type Methods with Less Computation of Inversion Under Weak Conditions

We present a local convergence analysis of Jarratt-type methods for approximating a solution of a nonlinear equation in a Banach space setting. The hypotheses used in earlier studies are too strong to cover some equations that these methods can solve. The convergence ball and error estimates are given for these methods. Numerical examples are also provided.


Introduction
Let X and Y be Banach spaces and F : Ω ⊂ X → Y be a nonlinear continuously Fréchet-differentiable operator defined on a non-empty open convex subset Ω of X. We are concerned with the problem of approximating a locally unique solution x* of the nonlinear equation

F(x) = 0. (1.1)

Higher-order methods such as the Jarratt method [2,6,7] and the fifth-order method [10,11] are used to approximate the solution x* of (1.1). However, the convergence analysis of these methods requires, in addition to the assumptions on F and F′, conditions of the form (see [2,6,7,10,11])

‖F′(x) − F′(y)‖ ≤ L‖x − y‖, x, y ∈ Ω, L ≥ 0, (1.2)

or

‖F′(x) − F′(y)‖ ≤ w(‖x − y‖), x, y ∈ Ω, (1.3)

where w(z) is a nondecreasing continuous function for z > 0 with w(0) = 0 (see [10]). Consider the equation (1.5), where the kernel Q is the Green's function defined on the interval [0, 1] × [0, 1]. Notice that x*(s) = 0 is one of the solutions of (1.1). Using (1.4) and the estimates (1.5)-(1.6), it follows that F′ is not Lipschitz on Ω. Hence the results in [2,7,8,10,11] cannot be used to solve (1.5).
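As a concrete illustration of the Jarratt method mentioned above, the classical fourth-order two-step scheme can be sketched for a scalar equation. This is a minimal sketch: the test equation x^3 − 2 = 0, the tolerance, and the iteration cap are our own choices, and this is the classical Jarratt iteration, not the three-step method (1.7) studied below.

```python
def jarratt(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical fourth-order Jarratt iteration for a scalar equation f(x) = 0:
    y_n = x_n - (2/3) f(x_n)/f'(x_n),
    x_{n+1} = x_n - (1/2) * (3 f'(y_n) + f'(x_n)) / (3 f'(y_n) - f'(x_n)) * f(x_n)/f'(x_n).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # residual-based stopping rule (our choice)
            return x
        dfx = fprime(x)
        y = x - (2.0 / 3.0) * fx / dfx   # auxiliary point
        dfy = fprime(y)
        # Jarratt correction combining f'(x_n) and f'(y_n)
        x = x - 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * fx / dfx
    return x

# Solve x^3 - 2 = 0, whose root is 2^(1/3)
root = jarratt(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)
```

Note that each step evaluates f once and f′ twice but requires no second derivative, which is what makes Jarratt-type schemes attractive.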
In this paper we study the local convergence of the three-step method (1.7) of [10], defined for each n = 0, 1, 2, . . ., where x_0 is an initial point. The almost-sixth semilocal convergence order of method (1.7) was shown in [10] using the preceding Lipschitz-type conditions. However, these results cannot be applied to solve (1.5). The idea used in this paper can also be applied to other iterative methods [1]-[9].
The rest of the paper is structured as follows: In Section 2 we present the local convergence analysis. We also provide a radius of convergence, computable error bounds and uniqueness result not given in the earlier studies [10,11]. Special cases and numerical examples are presented in the concluding Section 3.
1. In view of (2.5) and the estimate

‖F′(x*)^{−1} F′(x)‖ = ‖I + F′(x*)^{−1}(F′(x) − F′(x*))‖ ≤ 1 + w_0(‖x − x*‖),

condition (2.7) can be dropped and v can be replaced by v(t) = 1 + w_0(t).
2. Let w_0(t) = L_0 t, w(t) = Lt and v(t) = M for some L_0 > 0, L > 0 and M ≥ 1. In this special case, the results obtained here can be used for operators F satisfying autonomous differential equations [6] of the form

F′(x) = P(F(x)),

where P : Y → Y is a continuous operator. Then, since F′(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing x*. For example, let F(x) = e^x − 1. Then we can choose P(x) = x + 1.
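The e^x − 1 example above can be checked numerically. In this minimal sketch (the function names F, P, and Fprime are ours), we verify that F′(x) = P(F(x)) holds identically, so P(0) equals F′(x*) without x* = 0 ever being used:

```python
import math

def F(x):
    return math.exp(x) - 1.0

def Fprime(x):
    return math.exp(x)

def P(t):
    # Chosen so that F'(x) = P(F(x)): here F'(x) = e^x = F(x) + 1
    return t + 1.0

# F'(x) = P(F(x)) at several sample points; at the root x* = 0 we get P(0) = F'(x*) = 1
checks = [abs(Fprime(x) - P(F(x))) for x in (-1.0, 0.0, 0.5, 2.0)]
```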
3. The radius r_* = 2/(2L_0 + L) was shown by us to be the convergence radius of Newton's method [4,5]

x_{n+1} = x_n − F′(x_n)^{−1} F(x_n) for each n = 0, 1, 2, . . . (2.18)

under the conditions (2.4)-(2.7). It follows from the definition of r that the convergence radius r of method (1.7) cannot be larger than the convergence radius r_* of the second-order Newton's method (2.18).
As already noted in [4,5], r_* is at least as large as the convergence ball given by Rheinboldt [8], namely r_R = 2/(3L). In particular, for L_0 < L we have r_R < r_* and r_R/r_* → 1/3 as L_0/L → 0. That is, our convergence ball r_* is at most three times larger than Rheinboldt's. The same value of r_R was also given by Traub [9].
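The comparison between the two radii can be made concrete. In this sketch the constants L_0 and L are hypothetical values with L_0 < L; the formulas r_* = 2/(2L_0 + L) and r_R = 2/(3L) are the ones stated above:

```python
def newton_radius(L0, L):
    """r_* = 2/(2*L0 + L): convergence radius of Newton's method under (2.4)-(2.7)."""
    return 2.0 / (2.0 * L0 + L)

def rheinboldt_radius(L):
    """r_R = 2/(3*L): Rheinboldt/Traub ball, which uses only the Lipschitz constant L."""
    return 2.0 / (3.0 * L)

# Hypothetical constants with L0 < L; the smaller L0/L is, the bigger the gain over r_R
L0, L = 0.5, 2.0
r_star = newton_radius(L0, L)   # 2/3
r_R = rheinboldt_radius(L)      # 1/3
ratio = r_R / r_star            # equals (2*L0 + L)/(3*L), which tends to 1/3 as L0/L -> 0
```

The gain comes from using the center-Lipschitz constant L_0 in addition to L; since L_0 ≤ L always holds, r_* ≥ r_R for every admissible pair of constants.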
4. It is worth noticing that method (1.7) does not change when we use the conditions of Theorem 1 instead of the stronger conditions used in [2,7,10,11]. Moreover, we can compute the computational order of convergence (COC), defined by

ξ = ln(‖x_{n+1} − x*‖ / ‖x_n − x*‖) / ln(‖x_n − x*‖ / ‖x_{n−1} − x*‖),

or the approximate computational order of convergence (ACOC),

ξ_1 = ln(‖x_{n+1} − x_n‖ / ‖x_n − x_{n−1}‖) / ln(‖x_n − x_{n−1}‖ / ‖x_{n−1} − x_{n−2}‖).

In this way we obtain in practice the order of convergence without using estimates of derivatives higher than the first Fréchet derivative of the operator F.
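The ACOC uses only successive iterates, so no knowledge of x* is needed. As a minimal sketch, we estimate it for Newton's method on x^2 − 2 = 0 (a test problem of our own choosing), where the computed order should be close to 2:

```python
import math

def newton(f, fprime, x0, n):
    """Return the first n+1 Newton iterates starting from x0."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

def acoc(xs):
    """ACOC from the last four iterates:
    ln(e3/e2) / ln(e2/e1) with e_k = |x_{k+1} - x_k| (no x* needed)."""
    e1 = abs(xs[-3] - xs[-4])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-1] - xs[-2])
    return math.log(e3 / e2) / math.log(e2 / e1)

# Newton's method is second order, so the ACOC should be close to 2
xs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5, 4)
order = acoc(xs)
```

In double precision only the first few iterates are usable for this estimate; once the step sizes reach machine precision the logarithms become unreliable, which is why the sketch stops at five iterates.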

Numerical Examples
Numerical examples are presented in this section.