Preconditioned Iterative Method for Reactive Transport with Sorption in Porous Media

This work deals with the numerical solution of a nonlinear degenerate parabolic equation arising in a model of reactive solute transport in porous media with equilibrium sorption. The model is a simplified, yet representative, version of multicomponent reactive transport models. The numerical scheme is based on an operator splitting method: the advection and diffusion operators are solved separately, using the upwind finite volume method and the mixed finite element method (MFEM) respectively. The discrete nonlinear system is solved by the Newton–Krylov method, where the linear system at each Newton step is itself solved by a Krylov-type method, avoiding the storage of the full Jacobian matrix. A critical aspect of the method is an efficient matrix-free preconditioner. Our aim is, on the one hand, to propose such preconditioners and, on the other hand, to analyze the convergence of the preconditioned iterations with respect to the mesh size. These results are illustrated by numerical experiments comparing the performance of the methods.


Introduction and mathematical model
Reactive transport models lead to a set of coupled partial differential equations, together with algebraic equations in the case of equilibrium reactions or ordinary differential equations in the case of kinetic reactions. The system may be very large, as the number of unknowns is the number of grid points times the number of chemical species. In Amir and Kern [1] a method was introduced where the chemical equations are eliminated, and a set of transport equations is solved, with a source term implicitly representing the effect of chemistry. The resulting problem is solved by the Newton-Krylov method, where the linear system is solved by an iterative method. It was seen that efficient preconditioning was a crucial component of the method. However, finding a preconditioner is difficult, as no matrix is constructed in the Newton-Krylov method, and one would like to preserve the decoupling between transport and chemistry that is the main advantage of the formulation in Amir and Kern [1].
In this work, we consider a simplified model with one species undergoing a sorption reaction, given by a known equilibrium isotherm. This choice is motivated by the facts that the resulting mathematical problem has the same structure as that considered in the more general multi-component model, that it is amenable to a more complete analysis, and that it can still be seen as representative of a physically relevant model, see for example Logan [19, Chapter 2] or [5, Sec. 7.3.3]. We denote by c the aqueous concentration (in mol/L) of the species, and by c̄ that of the solid part (in mol/kg). The mathematical model, obtained by writing the mass balance equation together with the adsorption relation, is
∂c/∂t + ρ_ω ∂c̄/∂t + div(β c) − div(D ∇c) = 0 in Q_T,   c̄ = ψ(c) in Q_T, (1.1)
where Ω is a bounded domain in R^d, 1 ≤ d ≤ 3, [0, T] a fixed time interval, and Q_T = Ω × (0, T]. The parameters ω, ρ, and D are respectively the porosity, the density of the solid matrix, and the diffusion-dispersion tensor, β is such that ωβ is the Darcy velocity, and we let ρ_ω := ρ(1 − ω)/ω. For more details on the derivation of the model we refer to Logan [19, Chapter 2] or Bear-Cheng [5, Sec. 7.3.3].
In most of the paper we shall assume that the sorption isotherm is Lipschitz continuous (see (A3) below), as for instance the Langmuir isotherm, ψ(c) = σ c/(K_L + c), (1.2) where σ and K_L are given constants, though in Section 6.2 we will study an example with the Freundlich isotherm: ψ(c) = K_F c^α, with α ∈ (0, 1).
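For concreteness, the two isotherms can be written as small functions (the parameter values used as defaults here are illustrative, not those of the experiments below):

```python
def langmuir(c, sigma=1.0, K_L=1.0):
    """Langmuir isotherm (1.2): Lipschitz on [0, inf) with constant sigma / K_L."""
    return sigma * c / (K_L + c)

def freundlich(c, K_F=1.0, alpha=0.5):
    """Freundlich isotherm: its derivative blows up at c = 0, so it is not Lipschitz."""
    return K_F * c ** alpha
```

The derivative of the Langmuir isotherm is largest at c = 0, where it equals σ/K_L; this value plays the role of the Lipschitz constant L_ψ in the analysis below.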
In this last case the derivative is singular at c = 0, so ψ is not Lipschitz.
The one-species sorption model used here, and several of its generalizations, have been studied both for their physical insight, as for example in [27], and for their rich mathematical structure, see [28,29,30], where the emphasis is put on the analysis of travelling waves, and [3,4], where error estimates for finite element approximations of equilibrium adsorption processes are derived for generalized adsorption isotherms, especially the Freundlich isotherm. Note that in the purely hyperbolic case a semi-analytical solution can be found, see [31], while the advection-diffusion case is treated in [16]. In this work, we concentrate on the interaction of reaction and diffusion in heterogeneous media.
We make the following assumptions on the model data:
(A1) The diffusivity D is a tensor-valued function such that there exist 0 < d_0 < d_1 satisfying
d_0 |ξ|² ≤ ξ^T D(x) ξ ≤ d_1 |ξ|²  for all ξ ∈ R^d and almost every x ∈ Ω.
(A2) The advective velocity field β is a divergence-free vector-valued function such that ‖β‖_{L∞(Ω)} ≤ β_0.
(A3) The sorption isotherm ψ is a non-decreasing, non-negative, Lipschitz continuous function with Lipschitz constant L_ψ, such that
α_0 ≤ ψ′(c) ≤ α_1  for all c ≥ 0,
with constants 0 < α_0 ≤ α_1.
The remainder of this paper is organized as follows: the following section recalls how we discretize our model problem. Section 3 concerns iterative methods for solving the nonlinear system: we present and analyze a fixed point method as well as a Newton-Krylov method. In Section 4, we discuss different techniques for linear and nonlinear preconditioning of the Jacobian matrix so as to accelerate the convergence of the Newton-Krylov method. In Section 5, we present a spectral analysis of the preconditioned Jacobian matrix, showing that the eigenvalues of the preconditioned Jacobian matrix are bounded independently of the mesh size h, so that the number of inner-outer iterations is independent of h. The last section is devoted to some numerical experiments.

Discretization
In this section, we briefly describe our discretization method. Our approach is an operator splitting method (cf. [9,23]): the advection and diffusion operators are solved separately, using different schemes in time and space. This has the advantage that each physical phenomenon is solved with an appropriate method. The drawbacks are that the method is (formally) first order and that it may be difficult to implement boundary conditions.
Throughout this paper, we use common notation from functional analysis. By ‖·‖ we mean the norm in L²(Ω). We denote by H(div; Ω) the space of vector-valued functions having an L² divergence; its norm will be denoted by ‖·‖_{H(div;Ω)}. For the space discretization, we assume that Ω is a polygonal (or polyhedral in 3D) domain, and we denote by T_h a regular decomposition of Ω into closed d-simplices; h stands for the mesh diameter. For the time discretization, we consider a partition of [0, T] into N sub-intervals (t_n, t_{n+1}), for n = 0, . . . , N − 1. We denote by ∆t = t_{n+1} − t_n the time-step.

Operator splitting method and a semi-discretized problem
The splitting method for the system (1.1) is defined in two steps, for n = 0, 1, . . . , N − 1.
Advection step:
∂c_*/∂t + div(β c_*) = 0 in Ω × (t_n, t_{n+1}],   c_*(t_n, ·) = c^n. (2.1)
Reaction-diffusion step:
∂c/∂t + ρ_ω ∂c̄/∂t − div(D ∇c) = 0 in Ω × (t_n, t_{n+1}],   c̄ = ψ(c),   c(t_n, ·) = c_*(t_{n+1}, ·), (2.2)
where c_*(t_{n+1}, ·) is the solution of (2.1). For the time derivative, we restrict ourselves to a simple Euler method: the advective part is treated explicitly, and the diffusive part is treated implicitly.
We denote by c^n an approximation of c(t_n). Each interval (t_n, t_{n+1}) is subdivided into sub-intervals (t_{n,m}, t_{n,m+1}), for m = 0, . . . , M − 1, with t_{n,0} = t_n and t_{n,M} = t_{n+1}. We denote by ∆t_c = t_{n,m+1} − t_{n,m} the advection time-step, such that ∆t = M ∆t_c. The semi-discretized problem is given by (2.3); for a given c^{n,M}, solution of (2.3), we find c^{n+1} and c̄^{n+1} solution of the problem (2.4), with f^n = c^{n,M} + ρ_ω c̄^n.
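The structure of one splitting step can be sketched as follows in one space dimension; the upwind update, the inflow value, and `diffusion_solve` (a stand-in for the mixed finite element reaction-diffusion solve of (2.4)) are illustrative assumptions, not the paper's actual discretization:

```python
import numpy as np

def splitting_step(c, cbar, beta, dx, dt, M, diffusion_solve, rho_w):
    # M explicit upwind advection sub-steps (2.3), assuming beta > 0 constant
    dt_c = dt / M
    for _ in range(M):
        c = c - beta * dt_c / dx * (c - np.roll(c, 1))
        c[0] = 1.0                        # inflow boundary value (illustrative)
    # right-hand side of the reaction-diffusion step (2.4)
    f = c + rho_w * cbar
    # implicit solve, returning (c^{n+1}, cbar^{n+1})
    return diffusion_solve(f)
```

Each physical phenomenon is thus handled by its own scheme, explicit for advection and implicit for reaction-diffusion, as described above.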

The fully discrete problem
For the space discretization, we use an upwind cell-centered finite volume method for the advection equation (2.1), and a mixed finite element method for the diffusion equation (2.2). We denote by |T| the volume of a mesh element T and by F_h the set of the faces of the mesh. An interior face F ∈ F_h is shared by two mesh elements T^+ and T^−; we arbitrarily choose a normal n_F pointing from T^+ to T^−.
We start by discretizing the advection equation (2.3), integrating over each element T, using the upstream concentration c^m_* on each face, and taking a piecewise constant approximation for c^m. We denote by c^m_T the approximation of c^m over T. Equation (2.5) becomes
c^{m+1}_T = c^m_T − (∆t_c/|T|) Σ_{F⊂∂T} ξ_{T,F} |F| β_F c^m_*,
where ξ_{T,F} = ±1 according to the orientation of n_F with respect to T, and β_F = β · n_F. The time-step ∆t_c is chosen such that ∆t = M ∆t_c with M ≥ 1. It is controlled by the following CFL condition, which ensures that the scheme is stable and determines the value of M:
∆t_c < min_{T∈T_h} |T| / Σ_{F⊂∂T} |β_F| |F|. (2.6)
We now describe the space discretization of problem (2.4), which will be achieved by a mixed finite element method. We let V_h ⊂ H(div; Ω) and L_h ⊂ L²(Ω) denote finite element spaces of dimension n_{V_h} and m_{L_h}, respectively.
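In one dimension the CFL restriction reduces to ∆t_c ≤ ∆x/|β|, and the number of sub-steps M can be chosen accordingly (a minimal sketch; the multi-dimensional condition (2.6) uses the local ratio |T| / Σ_F |β_F||F| instead of ∆x/|β|):

```python
import math

def number_of_substeps(dt, dx, beta):
    # smallest M >= 1 such that dt / M satisfies the 1D CFL limit dx / |beta|
    dt_c_max = dx / abs(beta)
    return max(1, math.ceil(dt / dt_c_max))
```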
For a given f^n, the discrete problem reads: find (q_h^{n+1}, c_h^{n+1}, c̄_h^{n+1}) ∈ V_h × L_h × L_h satisfying (2.7), where the bilinear forms a, b and m and the linear form l come from the mixed variational formulation of (2.4). To ensure that the discretization (2.7) of (2.4) is stable, the finite element spaces V_h and L_h will be assumed to satisfy the coercivity and uniform inf-sup conditions (2.8) and (2.9). In the following, we restrict the discussion to the case where V_h and L_h are the lowest-order Raviart-Thomas-Nédélec spaces for velocity and pressure respectively, which are known to satisfy the above two conditions, see [7]. For this choice, the discrete velocities within each element are determined uniquely by the fluxes on the faces (when d = 3) or edges (when d = 2) of the elements.
We recall the definition of the corresponding spaces in two dimensions: P_0 is the space of constant functions, and RT_0 is the lowest-order Raviart-Thomas space, whose restriction to an element T is
RT_0(T) = { v : v(x) = a + b x, a ∈ R², b ∈ R }.
A nonlinear system corresponding to (2.7) can be constructed as follows.
where with some abuse of notation, we have used q h (x) to denote a finite element function and q h to denote its vector representation relative to the given basis.
Inserting the basis expansion c_h^{n+1} = Σ_{i=1}^{m_{L_h}} c_i ϕ_i into the above yields the algebraic system (2.10), where the matrices A, B, M are those of the bilinear forms a, b and m relative to the chosen bases, the vector F^n is given by (F^n)_i = l(ϕ_i), and the function Ψ acts componentwise, (Ψ(c^{n+1}_h))_i = ψ(c^{n+1}_{h,i}). By eliminating the unknowns q_h^{n+1} in (2.10), we obtain the nonlinear system (2.11). Existence of a solution of the discrete system (2.11) will be proved in Proposition 1 below.

Properties of A and B
In this section, we summarize the properties of the matrices A and B in system (2.10). The matrix A is symmetric positive definite of size n_{V_h}; it is sparse for the choice of basis φ_i indicated above, and it corresponds to a weighted velocity mass matrix. When D = I, we obtain A = G, where G denotes the velocity mass matrix.
In addition, the following property holds for the matrix B [20, Lem. 10.74, p. 478], where M denotes the mass matrix for the concentration space L_h and c_* is a positive constant independent of the mesh size h. We now recall the following bounds for the Schur complement S.
Lemma 1. Let the coercivity and inf-sup conditions (2.8) and (2.9) hold. Then there exists c_* > 0, independent of h, such that the spectral bounds (2.14) on S hold.

Iterative methods for nonlinear systems
In this section, we discuss iterative algorithms applied to the nonlinear system (2.11). We first consider a fixed point algorithm, then we present a Newton-Krylov algorithm. In both cases, we establish a convergence result.

Fixed point iteration
The fixed point iteration for the nonlinear system (2.11) is given by (3.1).
Proposition 1. Assume that condition (3.2) holds. Then the nonlinear system (2.11) admits a unique solution, which is the limit of the fixed-point iteration (3.1).
Proof. To simplify notation, we drop the superscript n + 1 throughout the proof. We can also eliminate the unknown c̄, and the proposition will follow if we prove that, under condition (3.2), the map G defining the iteration is a contraction. We compute, for two given vectors c_h and c′_h, the difference ∆G := G(c_h) − G(c′_h) and take its inner product with ∆G. On the left-hand side, using Lemma 1, we obtain a lower bound, while on the right-hand side, denoting ∆Ψ := Ψ(c_h) − Ψ(c′_h), the Cauchy-Schwarz inequality gives an upper bound.

Now we observe that ∆Ψ is controlled by the Lipschitz constant of ψ, that is, ‖∆Ψ‖ ≤ L_ψ ‖c_h − c′_h‖. The conclusion of the Proposition follows.
Remark 1. Condition (3.2) and the conclusion of Proposition 1 are surprising, as they say that the iterative method converges provided the time-step is large enough. Indeed, condition (3.2) splits into two cases:
• ρ_ω L_ψ < 1: in that case, there are no restrictions on ∆t;
• ρ_ω L_ψ ≥ 1: in that case, the iteration converges only if ∆t is large enough.
In the second case, the condition may be too restrictive, given that we use an implicit method, which has no stability condition. For this reason, it may be interesting to turn to Newton's method.
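The fixed-point map can be sketched as follows, assuming (as an illustration) that system (2.11) has the form S c + ρ_ω M Ψ(c) = F^n, so that each iteration requires one linear solve with S; the matrices and parameter values below are toy stand-ins, not the paper's discretization:

```python
import numpy as np

def fixed_point(S, M, f, psi, rho_w, c0, tol=1e-12, maxit=1000):
    # iterate c_{k+1} = S^{-1} (f - rho_w * M psi(c_k)); this contracts when
    # rho_w * L_psi * ||S^{-1} M|| < 1, in the spirit of condition (3.2)
    c = c0.copy()
    for k in range(maxit):
        c_new = np.linalg.solve(S, f - rho_w * M @ psi(c))
        if np.linalg.norm(c_new - c) < tol:
            return c_new, k + 1
        c = c_new
    return c, maxit
```

With S = M + ∆t-scaled diffusion, increasing ∆t shrinks ‖S⁻¹M‖ and strengthens the contraction, which is one way to read the large-time-step condition of Remark 1.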

A Newton-Krylov algorithm
In this section, we present the Newton-Krylov algorithm for the nonlinear system (2.11), where a linear system is solved by a Krylov iterative method at each Newton step. See [17,18] for general references or [15] in a related context. The nonlinear system to be solved at each time-step is (3.3), where F is given below. Let us set X = (c_h, c̄_h); the Newton-Krylov algorithm for the system (3.3) reads: given an initial iterate X^{(0)} and letting X^{(k)} be the current approximate solution, the next approximate solution X^{(k+1)} is obtained through the following steps:
1. Inexactly solve (using a Krylov subspace method such as GMRES) the linear system (3.4) and obtain an approximate Newton direction d^{(k)} such that
‖F(X^{(k)}) + J_k d^{(k)}‖ ≤ η_k ‖F(X^{(k)})‖. (3.5)
2. Compute the new approximate solution X^{(k+1)} = X^{(k)} + λ d^{(k)}, (3.6) where λ ∈ (0, 1] is chosen through a line-search to ensure global convergence (see [17]).
In the description above, J_k denotes the Jacobian matrix, the first derivative of F with respect to the degrees of freedom; it is given by (3.7). Notice that, because of our choice of basis for L_h, the matrix D^{(k)}, the derivative of Ψ at the current iterate, is diagonal with entries ψ′(c^{(k)}_{h,i}).
Additionally, η k ∈ [0, 1) is a forcing term that controls how accurately system (3.4) should be solved. Several possible strategies for choosing η k have been proposed in [12] and are reviewed in [17]. Typically η k has to go to zero as the non-linear iteration converges, and it is chosen to strike a balance between over-solving the linear system (3.5) at the beginning of the iterations and keeping the fast convergence of the (exact) Newton method.
Let us now recall the local convergence result for the inexact Newton iteration (3.6). We quote part of the basic convergence result from C. T. Kelley's book, Theorem 6.1.2 in [17]:
Lemma 2. Assume (3.2) holds, so that (3.3) has a unique solution X*, and that J_k(X*) is non-singular. Assume also that ψ is Lipschitz continuous with Lipschitz constant L_ψ, and take λ = 1 (no line search is performed). Let B(δ) = {X : ‖X − X*‖ < δ}. Then there exists δ > 0 such that, if X^{(0)} ∈ B(δ) and if the sequence η_k tends to 0 as k → ∞, the inexact Newton iteration (3.6) converges q-superlinearly to X*.
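A minimal Jacobian-free Newton-Krylov step can be sketched with SciPy; the finite-difference Jacobian-vector product is the standard one (see [17,18]), and the residual F below is a generic callable, not the paper's specific system:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, X, fd_eps=1e-7, eta=1e-4):
    # One inexact Newton step: solve J(X) d = -F(X) by GMRES, with the
    # Jacobian-vector product approximated by finite differences of the
    # residual, J(X) w ~ (F(X + h w) - F(X)) / h, so J is never stored.
    FX = F(X)
    def Jv(w):
        nw = np.linalg.norm(w)
        if nw == 0.0:
            return np.zeros_like(w)
        h = fd_eps / nw
        return (F(X + h * w) - FX) / h
    J = LinearOperator((X.size, X.size), matvec=Jv)
    # forcing term: stop GMRES once the residual drops below eta * ||F(X)||
    d, _ = gmres(J, -FX, atol=eta * np.linalg.norm(FX))
    return X + d
```

Repeating the step drives ‖F(X)‖ to zero; in practice a line search and a decreasing forcing sequence η_k are added, as described above.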

Linear and nonlinear preconditioning
In the Newton-Krylov method, the linear system (3.4) is solved using an iterative Krylov solver such as GMRES; the Jacobian J_k is only used once per solver iteration, in the matrix-vector product between the Jacobian and the Krylov vector w, as in v = J_k w. (4.1) The Jacobian-vector product can be approximated by finite differencing of the residual or computed exactly as in (3.7). The efficiency of the Newton-Krylov method depends on the choice of an adequate preconditioner. The purpose of preconditioning the system in (3.4) is to reduce the number of iterations, and thus accelerate the convergence of the iterative solver. In this section, we distinguish between two types of preconditioning: linear preconditioning, where the linear system is preconditioned respecting the block structure of the Jacobian matrix, and nonlinear preconditioning, where the original nonlinear system (3.3) is replaced by a new system F̃(c̄_h) = 0, obtained by eliminating the unknown c_h, such that the two systems have the same solution.

Linear preconditioning
Using right preconditioning, the system (3.4) can be rewritten as (4.2), where P represents the preconditioning matrix, and we denote by δX = (δc_h, δc̄_h) the Newton direction. The solution of this system can be divided into two steps: we first solve the preconditioned system for the auxiliary unknown Y = P δX, and then recover δX. The preconditioner P should be a good approximation of J_k, the cost of constructing P should be minimal, and solving the linear system (4.2) should be easier than solving the original system. When GMRES is applied to this preconditioned system, the Jacobian-vector product in (4.1) takes the form v = J_k P^{-1} w, which decomposes into solving the linear system P Z = w for Z and then computing the Jacobian-vector product v = J_k Z.
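The two-step solve can be sketched as follows, with dense stand-ins for J_k and P; only applications of P⁻¹ (here via an LU factorization) and products with J are needed, never P⁻¹ itself:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

def right_preconditioned_solve(J, P, b):
    # Solve J x = b via the right-preconditioned system (J P^{-1}) y = b,
    # then recover x = P^{-1} y; each GMRES matvec solves P z = w, then
    # applies J, exactly as described for (4.1).
    lu = lu_factor(P)
    op = LinearOperator(J.shape, matvec=lambda w: J @ lu_solve(lu, w))
    y, _ = gmres(op, b, atol=1e-12)
    return lu_solve(lu, y)
```

With right preconditioning, the GMRES residual is the true residual of J x = b, which makes stopping criteria straightforward.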

Block Jacobi
Our first strategy is to approximate the Jacobian matrix by its block diagonal part, neglecting the coupling between the transport and chemistry fields. The Schur complement of the preconditioned matrix J_k P^{-1} is denoted by S̃.

Block Gauss-Seidel
Our second strategy is to approximate the Jacobian matrix by its lower block triangular part, which retains the coupling from transport to chemistry.
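Applying the two preconditioners to a residual r = (r1, r2) can be sketched for a generic 2×2 block Jacobian [[J11, J12], [J21, J22]]; this block layout is an illustrative assumption for the transport/chemistry coupling, not the paper's exact matrix:

```python
import numpy as np

def apply_block_jacobi(J11, J22, r1, r2):
    # keep only the diagonal blocks: the transport/chemistry coupling is dropped
    return np.linalg.solve(J11, r1), np.linalg.solve(J22, r2)

def apply_block_gauss_seidel(J11, J21, J22, r1, r2):
    # lower block-triangular solve: the second block sees the updated z1
    z1 = np.linalg.solve(J11, r1)
    z2 = np.linalg.solve(J22, r2 - J21 @ z1)
    return z1, z2
```

Block Gauss-Seidel costs one extra block matvec per application, but accounts for one direction of coupling, which is consistent with its better iteration counts in the tables below.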

Nonlinear preconditioning
In this section, we propose an alternative formulation of the original nonlinear system (3.3), obtained by eliminating the unknown c_h. Since S is positive definite, it follows from the first equation of (3.3) that c_h can be expressed in terms of c̄_h. Substituting this into the second equation of (3.3), we obtain the reduced system F̃(c̄_h) = 0. (4.3) The Jacobian matrix for the nonlinear system (4.3) can be computed exactly as J̃_k. This formulation looks complicated because of the presence of S^{-1}, but is actually fairly easy to implement. As usual, the inverse of S is not actually computed. Rather, when one needs to evaluate the residual, one simply solves a linear system with the matrix S, and this turns out to be the building block singled out before, namely the solution of one transport step with a given source term.
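Evaluating the residual of the eliminated system can be sketched as follows; the concrete form assumed for the first equation of (3.3), S c + ρ_ω M c̄ = F^n, is an illustrative assumption:

```python
import numpy as np

def eliminated_residual(S, M, F_n, psi, rho_w, cbar):
    # one transport solve per residual evaluation; S^{-1} is never formed
    c = np.linalg.solve(S, F_n - rho_w * M @ cbar)
    return cbar - psi(c)
```

In practice the dense solve would be replaced by the existing transport-step solver, so the elimination reuses the code already available for the linear preconditioners.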
We remark that J̃_k is the Schur complement S̃ of the block Jacobi preconditioned Jacobian matrix, multiplied by the mass matrix; the preconditioner, however, is nonlinear: it acts directly on the nonlinear system.

Spectral analysis of the preconditioned linear system
In this section, we present an analysis of the spectrum of the preconditioned Jacobian matrix, showing that the spectra of the preconditioned systems are bounded independently of the discretization mesh size h. We denote by µ_j, j = 1, . . . , N, the eigenvalues of the generalized eigenproblem (5.1). The eigenvalues µ_j are real and positive because S is symmetric positive definite and D^{(k)} is symmetric positive definite when ψ is increasing.
Let us first present a spectral analysis of the matrix S.
Lemma 3. Let assumptions (A1)-(A3), as well as the coercivity and inf-sup conditions, hold. Then, with the notation of Section 2, the eigenvalues µ_j, j = 1, . . . , N, are bounded above and below by positive constants independent of h.
Proof. As the matrix S is symmetric and D^{(k)} is diagonal, we use the characterization of the eigenvalues as a Rayleigh quotient: µ = u^T S u / u^T D^{(k)} u for an eigenvector u. We bound the numerator using (2.14) and the denominator using assumption (A3). Now we are able to give a bound on the eigenvalues of the preconditioned Jacobian matrix.
We first note that 1 cannot be an eigenvalue. Indeed, if that were the case, we would get M v = 0 from the first equation and D (k) u = 0 from the second. Since both matrices are invertible, we have a contradiction. We can then eliminate v to obtain From this equality, we obtain where µ is an eigenvalue of the generalized problem Equivalently, as Lemma 3 shows that the µs are all positive, We conclude using the bounds obtained in Lemma 3 again.
We have a similar result for the Schur complement matrix, which corresponds to the Jacobian matrix (4.4). This time, all eigenvalues are real and positive.
Proposition 3. The eigenvalues of the preconditioned system consist of λ = 1 together with the generalized eigenvalues of J̃_k. Therefore, the eigenvalues are bounded independently of h.
Proof. Let λ be an eigenvalue of the generalized eigenvalue problem J̃_k v = λ M v. As in the previous proposition, we see that λ = 1 cannot be an eigenvalue. We then put ṽ = S^{-1} M v to see that ρ_ω/(λ − 1) is an eigenvalue of the generalized problem (5.1). Thus λ = 1 + ρ_ω/µ and, using Lemma 3 one more time, we obtain the announced bounds. This time, λ = 1 is an eigenvalue of the larger matrix in (5.2), the eigenspace being generated by vectors of the form (u, 0)^T, u ∈ R^N. The other eigenvalues are the same as those determined in the first part of the proof: for each eigenvector v of (5.1), (D^{(k)−1} M v, v) is an eigenvector of (5.2).
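The eigenvalue relation λ = 1 + ρ_ω/µ is easy to verify numerically; S and D below are random SPD stand-ins for the Schur complement and the diagonal matrix of ψ′ values, not the paper's matrices:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
S = A @ A.T + 20.0 * np.eye(20)            # SPD stand-in for the Schur complement
D = np.diag(rng.uniform(0.5, 2.0, 20))     # diagonal, positive: psi'(c) entries
mu = eigh(S, D, eigvals_only=True)         # generalized eigenvalues of S u = mu D u
rho_w = 1.0
lam = 1.0 + rho_w / mu                     # spectrum of the eliminated system
```

All µ_j are real and positive, so all λ_j are real and larger than 1, mirroring the statement of the proposition.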
This result shows that the eigenvalues of the preconditioned Jacobian matrix are bounded independently of h, and cluster near 1 as h goes to zero. However, this is not sufficient to show that the rate of convergence is independent of the mesh parameter. Indeed, a beautiful result by Greenbaum et al. [14] states that one can prescribe both the eigenvalues and the sequence of residuals, and there exists a matrix with the given eigenvalues such that GMRES applied to this matrix will converge with the given residuals. On the other hand, it has been pointed out by Wathen [32, Sec. 7] (quoting a previous result of Taussky) that because the eigenvalues of J̃_k are all real and simple, this matrix must be self-adjoint, albeit with respect to a non-standard inner product. We summarize some well-known results concerning the convergence of GMRES. The most general result states that, for a given system of linear equations Ax = b, with A ∈ C^{n×n} and x, b ∈ C^n, the residuals of the GMRES iterates, r^{(l)} = b − A x^{(l)}, satisfy the minimum residual property, where P*_l = {p ∈ P_l : p(0) = 1} and P_l is the set of polynomials of degree l or less.
The first convergence bound suggested for GMRES predicts convergence at a rate determined by Λ(A), the set of eigenvalues of A. If A is normal, Λ(A) determines the convergence; this is not the case for non-normal matrices. For a diagonalizable matrix, A = V Λ V^{-1}, the classical bound reads
‖r^{(l)}‖ / ‖r^{(0)}‖ ≤ κ(V) min_{p∈P*_l} max_{λ_j∈Λ(A)} |p(λ_j)|, (5.3)
where κ(V) = ‖V‖₂ ‖V^{-1}‖₂ is the 2-norm condition number of the eigenvector matrix V.
One approach avoiding this difficulty is due to Trefethen [24,26], who derived residual bounds based on the pseudospectra of the matrix A. For a positive number ε, the associated ε-pseudospectrum of A is the set in the complex plane defined by Λ_ε(A) = {z : ‖(zI − A)^{-1}‖ ≥ 1/ε}. This set contains the spectrum of A, and leads to the bound
‖r^{(l)}‖ / ‖r^{(0)}‖ ≤ (L(Γ_ε) / (2πε)) min_{p∈P*_l} max_{z∈Γ_ε} |p(z)|,
where Γ_ε is the boundary of Λ_ε(A) and L(Γ_ε) the length of the curve Γ_ε. Pseudospectra can sometimes yield much more realistic bounds than (5.3), but they are expensive to compute. Moreover, it is not always clear which value of ε leads to the most useful information. In this paper, we consider another set associated with the matrix J̃_k for predicting the convergence rate of minimum residual methods, namely the field of values of J̃_k, sometimes also called its numerical range. The field of values of a matrix is known to be a convex and compact set in the complex plane that contains the eigenvalues (see [21]). Bounds for GMRES convergence based on the field of values have been developed, starting with work by Eiermann [10], and an older bound from [11] can also be interpreted in terms of the distance of the field of values to the origin, see [6,13].
The bound (5.4) can be stated in terms of an angle β ∈ [0, π/2), provided 0 ∉ W(J̃_k), a condition which unfortunately we have been unable to prove.
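The field of values is straightforward to sample numerically (this is essentially what EigTool does): for each rotation angle, the extreme eigenvalue of the Hermitian part of the rotated matrix yields a supporting point of the convex set W(A). A minimal sketch:

```python
import numpy as np

def field_of_values_boundary(A, n_angles=180):
    # Boundary points of W(A) = {v* A v : ||v|| = 1}: for each angle t, the
    # top eigenvector of the Hermitian part of e^{-it} A maximizes
    # Re(e^{-it} v* A v), giving a supporting point of the convex set.
    pts = []
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        B = np.exp(-1j * t) * A
        H = (B + B.conj().T) / 2.0
        _, V = np.linalg.eigh(H)
        v = V[:, -1]                      # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)      # Rayleigh quotient lies in W(A)
    return np.array(pts)
```

The distance of these boundary points to the origin estimates dist(0, W(A)), the quantity that governs field-of-values convergence bounds.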

Numerical experiments
In this section, we first perform a number of studies concerning the three different preconditioners, for Lipschitz and non-Lipschitz isotherm cases. Then, we look at the dependence of the field of values with respect to the mesh size. We finish this section with numerical simulations comparing the fixed point and Newton methods. The velocity β and the diffusion tensor D > 0 are assumed to be constant. For all numerical tests (except the 2D example), the domain is Ω = ]0, L[ with L = 5, the mesh size is h = 0.05, ρ = 1, β = 1, D = 0.05, the initial condition is c_0(x) = 0 for 0 < x < L, and the boundary conditions are c(0, t) = 1 and a zero diffusive flux at x = L.
In the tables below, we denote by BJ the Block Jacobi preconditioner, BGS the Block Gauss-Seidel preconditioner, NNI the average number (rounded to integer) of nonlinear iterations per time-step, NLI the average number (rounded to integer) of linear iterations per time-step and RT, the run time in seconds.

Preconditioner comparison
We consider a 1-dimensional model for single-species nonlinear adsorption with a Langmuir isotherm, cf. (1.2). For the numerical runs, we choose T = 0.5, the porosity ω = 0.1, and the time-step ∆t = 0.0135.
Our first study looks at the effect of changing the value of the parameter σ: we keep K_L constant and investigate the response of the preconditioners to increasing values of σ.
Tables 1-2 report the time-averaged numbers of nonlinear and linear iterations with respect to σ for the three formulations. We conclude from these tables that NLI increases when we increase the value of σ. This is consistent with Propositions 2 and 3, as increasing σ has the effect of increasing both α_0 and α_1 in Assumption (A3), and this in turn increases both σ_0 and σ_1 in Proposition 3. The number of nonlinear iterations in the cc̄-formulation case also increases with σ. Table 1 also shows the good performance of the Block Gauss-Seidel preconditioner with respect to the other preconditioners.
In our second study, we choose K_L = 1 and σ = 1.5, the other parameters being the same as in the first study. We then look at the effect of the mesh size on the convergence rate of the preconditioned linear system.

Preconditioner | NNI  NLI | NNI  NLI | NNI  NLI | NNI  NLI | NNI  NLI
None           |  3   104 |  3   167 |  3   275 |  3   453 |    --
BJ             |  3    68 |  3    67 |  3    63 |  3    60 |  3    62
BGS            |  3    48 |  3    48 |  3    47 |  3    45 |  3    44
Elimi. of c_h  |  3    41 |  3    41 |  3    41 |  3    40 |  3    40

Tables 3-4 report the time-averaged numbers of nonlinear and linear iterations with respect to the mesh size (each column corresponds to a successively refined mesh). These tables show that the number of nonlinear iterations does not increase when the mesh is refined. The number of linear iterations for the unpreconditioned method increases, whereas it remains stable for the two linear preconditioners as well as for the nonlinear elimination method, as predicted in Propositions 2 and 3. The tables also show the good performance of the nonlinear preconditioner.

An example with non-Lipschitz isotherm
In this section, we discuss the case of a non-Lipschitz sorption isotherm. We restrict the discussion to a Freundlich isotherm, ψ(c) = c^α, α ∈ (0, 1] (K_F = 1). The case α = 1 is covered by the previous section, so we consider here α ∈ (0, 1). Clearly, the derivative is singular at c = 0, so ψ is not Lipschitz. In this case, a regularization step is needed in order to use the fixed point or Newton method: for a given ε > 0 we define a regularized isotherm ψ_ε, which is Lipschitz with constant L_ψ(ε) = α ε^{α−1}. As a first study, we choose T = 2, the porosity ω = 1, and the time-step ∆t = 0.1. Then we look at the dependence of the convergence rate of the preconditioned system on the parameter ε, for different values of α. Tables 5 and 6 show that the numbers of linear and nonlinear iterations increase when the parameter ε tends to zero, for all three preconditioners. The tables also show the good performance of the BGS preconditioner.
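One regularization consistent with the Lipschitz constant L_ψ(ε) = α ε^{α−1} quoted above is the shifted isotherm below; the paper's exact definition of ψ_ε is not reproduced here, so this is an illustrative choice:

```python
def psi_eps(c, eps, alpha):
    # shifted Freundlich isotherm: psi_eps(0) = 0, and the derivative
    # alpha * (c + eps)**(alpha - 1) is bounded by alpha * eps**(alpha - 1)
    return (c + eps) ** alpha - eps ** alpha
```

The singularity of ψ′ at c = 0 is pushed to c = −ε, so ψ_ε is Lipschitz on [0, ∞), at the price of a constant that blows up as ε → 0, which is consistent with the iteration counts observed in Tables 5-6.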
As a second study, we are interested in the convergence of the fixed point method according to inequality (3.2). We choose α = 0.8, ω = 0.8, ε = 0.5, and the nonlinear tolerance is set to 10^{−12}. In this case ρ_ω L_ψ < 1, with L_ψ(ε) = α ε^{α−1}. Table 7 reports the number of nonlinear iterations with respect to the time-step. As predicted by the analytical result (Proposition 1), Table 7 shows convergence of the fixed point method without any restriction on ∆t, although the convergence deteriorates when ∆t is reduced. Now we choose α = 0.8, ω = 0.5 and ε = 0.1, so that ρ_ω L_ψ > 1. The maximum number of fixed point iterations is set to 1000. Table 8 shows that the fixed point method converges only for ∆t large enough, again confirming the condition from Proposition 1.

Field of values
In this section, we consider the same problem as in the previous section; we are in particular interested in the effect of the mesh on the numerical radius µ(A) := max{|ξ| : ξ ∈ W(A)}, a measure of the size of W(A). Figure 1 shows the eigenvalues, isolines of the pseudospectra, and the field of values of the Jacobian matrix J̃_1 for different mesh sizes. The figure was generated with the help of the EigTool software [25]. These plots show that the eigenvalues are bounded independently of the mesh size, as was proved in Proposition 3, and that the field of values of the Jacobian matrix J̃_1 does not change, and stays well away from the origin, when the mesh is refined. This is an indication that the right-hand side of inequality (5.4) is bounded independently of the mesh; as a consequence, the convergence rate of the GMRES method applied to the linearized system after elimination of the unknown c_h is also independent of the mesh. We emphasize that this is a numerical observation, and that we currently have no theoretical result that proves it.

Comparison of fixed point and Newton: a 2D example
In this section, we consider the geometry of the 2-dimensional MoMaS benchmark problem (see [8], where a full statement of the flow and transport problems is given, including the boundary and initial conditions), but we keep the idealized chemistry studied in this paper, with Langmuir adsorption. The domain Ω is the benchmark geometry (see Figure 2); we choose T = 100 and the time-step ∆t = 1, and the velocity β is obtained by solving the incompressible Darcy flow problem.
The dispersion coefficients α_L and α_T have dimensions of length, but note that all data in the benchmark are non-dimensional. We choose σ = 1, K_L = 0.25, and we fix the residual tolerance at 10^{−12}. We first study the convergence of the Newton algorithm and compare it to the fixed point algorithm at the fourth time iteration (see Figure 3). This figure shows linear (resp. quadratic) convergence for the fixed point (resp. Newton-Krylov) algorithm, in agreement with the theory; the total number of linear iterations for the Newton-Krylov algorithm is 29.

Conclusions and perspectives
In this paper, we have introduced and analyzed different preconditioners for the linearized two-species reactive transport equation. Specifically, we have focused on the dependence of the GMRES convergence rate on the discretization parameter. We have proven that the eigenvalues of the preconditioned Jacobian matrix are bounded independently of the mesh size; this is confirmed by numerical experiments, which also show the good performance of the nonlinear elimination formulation. As the preconditioned Jacobian matrix is non-symmetric, we have not been able to prove that the GMRES convergence rate is independent of the mesh size. It has been observed numerically that the number of linear iterations is bounded independently of the mesh size. We have also observed numerically that the field of values of the Jacobian matrix remains stable, and away from the origin, when we refine the mesh, which suggests that the convergence rate of the GMRES method applied to the linearized system is independent of the mesh. We note that this study has already been generalized to a multi-component reactive transport system (see [2]). Our aim in future work is to study theoretically the independence of the GMRES convergence rate, applied to the linearized preconditioned system, with respect to the mesh size.