Three Classifications of Branching Processes and Their Behavior for Finding the Solution of Nonlinear Integral Equations

Abstract. In this paper, we consider a Monte Carlo method for finding the solution of nonlinear integral equations at a fixed point x_0. In this method, a simulated Galton-Watson branching process is employed to solve the proposed integral equation. The main goal of this paper is to compare the behavior of the three classifications of branching processes defined by the mean progeny, i.e., the subcritical, critical, and supercritical processes.


Introduction
Consider the following Fredholm integral equation with polynomial nonlinearity:

$$u(x) = \int_{D^m} K(x, y_1, \dots, y_m) \prod_{i=1}^{m} u(y_i)\, dy_i + f(x), \qquad (1.1)$$

where D ⊆ R, m is a natural number greater than or equal to 2, f(x) ∈ L²(D), and the kernel K(x, y_1, . . . , y_m) belongs to L²(D × D × . . . × D) ≡ L²(D^{m+1}). It is assumed that this equation has an iterative solution corresponding to the iteration process

$$u_{j+1}(x) = \int_{D^m} K(x, y_1, \dots, y_m) \prod_{i=1}^{m} u_j(y_i)\, dy_i + f(x), \qquad (1.2)$$

where u_0(x) = f(x), j = 0, 1, . . . .
The first task is to construct a Monte Carlo estimator for evaluating the functional

$$(g, u) = \int_D g(x) u(x)\, dx. \qquad (1.3)$$

The functions u(x) and g(x) belong to a Banach space X and to its adjoint space X*, respectively, and u(x) is the unique solution of the iterative process (1.2). More details can be found in [3].

Monte Carlo Method
In this section we introduce the Monte Carlo method for estimating (1.3). If the method of successive approximations

$$\bar{u}_{j+1}(x) = \int_{D^m} |K(x, y_1, \dots, y_m)| \prod_{i=1}^{m} \bar{u}_j(y_i)\, dy_i + |f(x)|,$$

with initial approximation ū_0(x) = |f(x)|, converges, then a branching process enables us to construct random variables whose mathematical expectations are equal to the functional (1.3).
In the theory of branching processes, we usually consider particles (such as neutrons or bacteria) that can generate new particles of the same type. The initial set of objects is referred to as the zero generation. Particles generated from the n-th generation are said to belong to the (n + 1)-th generation. In a Galton-Watson branching process, a single particle remains alive for exactly one unit of time and, only at the end of its life, produces a random number of progeny according to a probability distribution. Every particle generated in the first generation may live and generate similar particles, just as the particles of the zero generation do. In the second generation, the progeny particles behave in the identical way, and so on. Since the life spans of all particles are identical and equal to one, the process can be modeled with a discrete-time index identical to the generation number [1].
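The generation-by-generation evolution described above can be sketched as a short simulation. Here, as in the rest of the paper, each particle independently produces m progeny with probability p_m and dies childless otherwise; the function name and parameters are our own illustrative choices:

```python
import random

def galton_watson(p_m: float, m: int, generations: int) -> list[int]:
    """Population sizes Z_0, Z_1, ... of a Galton-Watson process in which
    each particle independently produces m progeny with probability p_m
    and dies childless otherwise."""
    sizes = [1]                       # Z_0: a single particle in the zero generation
    for _ in range(generations):
        # Each particle of the current generation reproduces independently.
        children = sum(m for _ in range(sizes[-1]) if random.random() < p_m)
        sizes.append(children)
        if children == 0:             # extinction: no further generations
            break
    return sizes

random.seed(1)
print(galton_watson(p_m=0.4, m=2, generations=10))
```

The discrete-time index of the process is simply the position in the returned list, matching the generation numbering above.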
To obtain a random variable whose mathematical expectation is equal to (1.3), we consider a branching process with the following property: a particle distributed according to an initial density function is born in the domain D at a random point x_0. In the next generation, this particle either dies out with probability h(x_0), where 0 ≤ h(x_0) < 1, or generates m ≥ 2 new analogous particles at the random points x_00, x_01, . . . , x_0(m−1) with probability p_m(x_0) = 1 − h(x_0) and transition density function p(x_0, x_00, x_01, . . . , x_0(m−1)) ≥ 0, where

$$\int_{D^m} p(x_0, x_{00}, x_{01}, \dots, x_{0(m-1)})\, dx_{00} \cdots dx_{0(m-1)} = 1.$$

The index numbers used here are called multi-indices. The particle belonging to the zero generation is enumerated with the zero index, i.e. x_0. Its discrete inheritors are enumerated with the indices 00, 01, . . . , 0(m − 1), i.e. the points x_00, x_01, . . . , x_0(m−1) belong to the first generation. If a parent particle has index t, then its progeny have indices t0, t1, . . . , t(m − 1). The generated particles behave at the next moment as the initial one did, and so on. The trace of such a process is a tree-shaped structure, shown in Figure 1. To find the relation between the branching process and a solution of the integral equation (1.1), we calculate the first two iterations of the iterative equation (1.2) in the simple case m = 2. The branching process corresponding to these two iterations is presented in Figure 2.
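The multi-index bookkeeping can be made concrete with a minimal sketch that grows such a tree; here we use constant branching probability p_m and string multi-indices (the function `grow_tree` and its parameters are hypothetical names of ours, not from the paper):

```python
import random

def grow_tree(index: str, p_m: float, m: int, max_depth: int, tree: dict) -> None:
    """Recursively grow a branching-process tree labelled by multi-indices:
    the root is '0'; the progeny of a particle t are t0, t1, ..., t(m-1)."""
    if len(index) > max_depth or random.random() >= p_m:
        tree[index] = []          # the particle dies out
        return
    children = [index + str(i) for i in range(m)]
    tree[index] = children        # the particle branches into m progeny
    for child in children:
        grow_tree(child, p_m, m, max_depth, tree)

random.seed(3)
tree = {}
grow_tree('0', p_m=0.5, m=2, max_depth=4, tree=tree)
print(tree)
```

Each key of `tree` is a particle's multi-index and each value lists its progeny, so the dictionary records exactly the tree structure of Figure 1.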
Obviously, the term u_0 corresponds to the zero generation. A full tree with n generations is called the tree Γ_n, in which no particle dies from the zero to the (n − 2)-th generation and all particles generated in the (n − 1)-th generation die. We write Γ for a subtree of a full tree. Let A denote the set of all particles that generate new particles in the next step, and let B denote the set of all particles of the tree that die. Consider a branching process with l generations in the general case m ≥ 2. It corresponds to the iterative process (1.2) truncated at l iterations. There is a one-to-one correspondence between the subtrees of the full tree and the terms of the truncated iterative process with l iterations. This correspondence allows us to construct a procedure for a random choice of the tree and to calculate the value of a random variable corresponding to this tree. Thus, when we construct the branching process, we obtain arbitrary subtrees Γ of the full tree Γ_l. We define the random variable on the tree Γ with density function

$$p(\Gamma) = p_0(x_0) \prod_{t \in A} p_m(x_t)\, p(x_t, x_{t1}, \dots, x_{tm}) \prod_{t \in B} h(x_t), \qquad (2.1)$$

where the points x_t1, x_t2, . . . , x_tm are generated by x_t. According to the following theorem, we obtain a random variable for an arbitrary tree Γ which estimates the l-th iteration, u_l, of the iterative process (1.2) [3].
It is clear that if l → ∞, then the mathematical expectation of the random variable satisfies

$$\lim_{l \to \infty} E\big(\Theta_g(\Gamma)\big) = (g, u).$$

The case when the given function g is concentrated at a single point, g(x) = δ(x − x_0), corresponds to evaluating the solution at the fixed point x_0. The probable error of this method is r_N = 0.6745 σ(Θ(Γ))/√N, where σ(Θ(Γ)) is the standard deviation of Θ(Γ) [5]. To reduce the error we can take N sufficiently large or reduce the variance σ²(Θ(Γ)).
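As a small numerical illustration of the probable-error formula, the following sketch evaluates r_N for a hypothetical sample of estimator realisations (the sample values are ours, purely for illustration):

```python
import math
from statistics import pstdev

# Hypothetical realisations of the estimator Θ(Γ) (illustrative only).
samples = [0.98, 1.03, 1.01, 0.97, 1.05, 0.99, 1.02, 0.95]
N = len(samples)

# Probable error r_N = 0.6745 * sigma(Θ(Γ)) / sqrt(N).
r_N = 0.6745 * pstdev(samples) / math.sqrt(N)
print(round(r_N, 4))
```

Halving the probable error thus requires either quadrupling N or halving the standard deviation, which motivates the variance-reduction discussion that follows.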
The problem of optimization of Monte Carlo algorithms consists in minimizing the standard deviation, i.e. minimizing the second moment E(Θ²_g(Γ)) of the r.v. Θ_g(Γ). This is done by a suitable choice of the density function p(Γ). Dimov has shown that the transition probability density

$$p(x, y_1, y_2, \dots, y_m) = \frac{|K(x, y_1, y_2, \dots, y_m)|}{\int_{D^m} |K(x, y_1, y_2, \dots, y_m)| \prod_{i=1}^{m} dy_i}$$

and the initial transition probability p_0(x) = |g(x)| / ∫_D |g(x)| dx minimize the variance; in this case the density function p(Γ) in (2.1) is called almost optimal [2]. Now, we present the Monte Carlo algorithm using the almost optimal density function:

Almost Optimal Monte Carlo Algorithm
1. Set Θ_g(Γ) = 1 and generate a point ξ from the initial density p_0 for the calculation of u(ξ).
2. Generate a uniformly distributed random number α ∈ (0, 1).
3. If α ≤ p_m(ξ), then go to step 5; else go to step 4.
4. Set Θ_g(Γ) = Θ_g(Γ) f(ξ)/h(ξ). In this case we say that the point dies out.
5. Generate m new points ξ_1, . . . , ξ_m from the transition density p(ξ, ·), multiply Θ_g(Γ) by the weight K(ξ, ξ_1, . . . , ξ_m)/(p_m(ξ) p(ξ, ξ_1, . . . , ξ_m)), and repeat the procedure from step 2 for each new point.
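A minimal, self-contained sketch of this algorithm for a toy instance of (1.1) with m = 2: the constant kernel K ≡ 0.25 and free term f ≡ 0.75 on D = [0, 1] are our illustrative choices (not the paper's test problem), chosen so that the exact solution is u ≡ 1. With a constant kernel the almost optimal transition density is uniform, and p_2 = 0.5 makes the process critical:

```python
import random

K = 0.25   # constant kernel (illustrative choice)
F = 0.75   # constant free term (illustrative choice)
P_M = 0.5  # branching probability p_m; h = 1 - P_M is the death probability

def estimate(x: float, depth: int = 0, max_depth: int = 20) -> float:
    """One realisation Θ(Γ) of the branching estimator for
    u(x) = ∫∫ K u(y1) u(y2) dy1 dy2 + f(x) with m = 2 on D = [0, 1]."""
    if depth >= max_depth:           # stopping rule (source of bias)
        return F / (1.0 - P_M)
    if random.random() < P_M:        # branch into m = 2 progeny
        y1, y2 = random.random(), random.random()
        # uniform transition density p ≡ 1, so the weight is K / (P_M * 1)
        return (K / P_M) * estimate(y1, depth + 1) * estimate(y2, depth + 1)
    return F / (1.0 - P_M)           # death: score f(ξ) / h(ξ)

random.seed(7)
N = 20_000
approx = sum(estimate(random.random()) for _ in range(N)) / N
print(approx)  # close to the exact value u ≡ 1
```

One can check the construction is unbiased: E[Θ] = P_M · (K/P_M) · E[Θ]² + (1 − P_M) · F/(1 − P_M) = K E[Θ]² + F, which is exactly the fixed-point equation u = K u² + f solved by u = 1.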

Classification of Branching Processes
A very important classification of a branching process is based on the mean progeny of a particle, i.e. m = E(X), where the random variable X denotes the number of progeny of a particle. In the expected-value sense, the process grows geometrically if m > 1, stays constant if m = 1, and decays geometrically if m < 1. We call these cases the supercritical, critical, and subcritical process, respectively. If Z_t denotes the number of particles in generation t, then E(Z_t) = m^t, so E(Z_t) → ∞, E(Z_t) = 1, and E(Z_t) ↓ 0 for the supercritical, critical, and subcritical branching process, respectively. Let us consider the probability q_t = P(Z_t = 0) that the process is extinct at time t. The sequence q_t tends to a limit q, which is the probability of eventual extinction. If m > 1, then 0 ≤ q < 1. If m ≤ 1, then q is equal to 1.
The supercritical and subcritical processes behave as expected from the expressions for the means. The critical process is counterintuitive: although the mean value stays constant and equal to 1, the process becomes extinct almost surely [7]. In this paper we compare these three classifications; since each particle produces m progeny with probability p_m and none otherwise, the mean progeny is m·p_m, so for p_m < 1/m, p_m = 1/m, and p_m > 1/m we have the subcritical, critical, and supercritical process, respectively.
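The three regimes can be contrasted with a small extinction experiment (a sketch with our own parameter choices; m = 2, so p_m = 0.5 is the critical value). Theory predicts q = 1 for the first two cases and, from q = (1 − p_m) + p_m q², q = (1 − p_m)/p_m ≈ 0.43 for p_m = 0.7:

```python
import random

def extinct(p_m: float, m: int = 2, horizon: int = 50) -> bool:
    """True if a Galton-Watson process (m progeny with probability p_m,
    death otherwise) dies out within `horizon` generations."""
    z = 1
    for _ in range(horizon):
        z = sum(m for _ in range(z) if random.random() < p_m)
        if z == 0:
            return True
        if z > 5_000:        # clearly exploding: treat as survival
            return False
    return False

random.seed(11)
trials = 500
results = {}
for p_m in (0.3, 0.5, 0.7):   # m*p_m < 1, = 1, > 1
    results[p_m] = sum(extinct(p_m) for _ in range(trials)) / trials
    print(f"p_m = {p_m}: estimated extinction probability {results[p_m]:.3f}")
```

Note that the critical case p_m = 0.5 already shows extinction rates near 1 within 50 generations, illustrating the counterintuitive behavior described above.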

Numerical Results
For the numerical tests of the Monte Carlo algorithm, we focus on three parameters: (i) p_m, (ii) the number of employed Markov chains N, and (iii) the total number of points employed in the branching process. Since in the supercritical case the probability of extinction is less than one, the number of generated points tends to infinity with positive probability, so we have to impose a rule limiting the number of points. We note that with this stopping rule for terminating the process, our results become biased. For this reason, we cap the height of the supercritical branching process at 20, where the height of a branching process is the length of the path from the root node to the deepest node of the tree.
Here, we want to evaluate a test integral equation of the form (1.1), denoted (4.1), with a known unique solution. To see the behavior of the Monte Carlo method, we repeated the algorithm five times with the same parameters. For p_2 = 0.1, 0.2, . . . , 0.8 and N = 500, 1000, 2000, 3000, 4000, 5000, the solution of (4.1) is shown in Figs. 11 and 12. These figures show the relation between the Monte Carlo simulation, the total number of points employed in the simulated branching processes, and the error of the Monte Carlo method. We find that the total number of points in the simulated branching processes and the error are independent (see Fig. 11, left-hand side). We conclude that for p_2 = 0.3 and p_2 = 0.4 the obtained results are better than the others (Fig. 12, left-hand side). Now, we consider the following test of hypotheses:

H_0: μ_1 < μ_2    versus    H_1: μ_1 ≥ μ_2,

where μ_1 denotes the mean error of the Monte Carlo simulation in the subcritical and critical classifications, and μ_2 denotes the same error for the supercritical classification. Using a t-test for two independent samples, we accept H_0 with 99 percent confidence.
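The two-sample comparison above can be sketched with Welch's t statistic for independent samples; the error samples below are hypothetical illustrative numbers of ours, not the paper's data:

```python
import math
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples, used for the
    one-sided test of H0: mu_a < mu_b against H1: mu_a >= mu_b."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical error samples (illustrative only, not the paper's data):
subcritical_errors = [0.012, 0.015, 0.011, 0.014, 0.013]
supercritical_errors = [0.041, 0.038, 0.045, 0.039, 0.042]

t = welch_t(subcritical_errors, supercritical_errors)
print(round(t, 2))  # strongly negative => no evidence against H0: mu_1 < mu_2
```

A large negative t value means the subcritical/critical errors are significantly smaller than the supercritical ones, which is the direction of the conclusion reported above.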

Conclusion
The Monte Carlo method can be used efficiently for obtaining the solution of integral equations with polynomial nonlinearity (1.1) if we employ a subcritical or critical branching process. Since the estimator employed in the supercritical case is biased by the stopping rule, it carries a relatively high systematic error.