Representation of Solutions of Systems of Linear Differential Equations with Multiple Delays and Nonpermutable Variable Coefficients

Solutions of nonhomogeneous systems of linear differential equations with multiple constant delays are stated explicitly without a commutativity assumption on the matrix coefficients. In contrast to recent results, the new formulas are not built inductively; they depend on a sum of noncommutative products in the case of constant coefficients, or on a sum of iterated integrals in the case of time-dependent coefficients. This approach has the potential to be more suitable for applications. A representation of a solution of a Cauchy problem for a system of higher order delay differential equations is also given.


Introduction
Explicit representation of solutions of systems of delay differential equations (DDEs) with constant delays and linear parts given by pairwise permutable constant matrices was first stated in [11], motivated by a pioneering work [6] on DDEs with one constant delay. The analogous problem with variable delays and time-dependent matrix coefficients without the commutativity assumption was studied in [15] and later in [13]. In all these cases, a variation of constants formula for DDEs (see e.g. [3]) was repeatedly used to construct a fundamental solution of the corresponding matrix DDE. The fundamental solution was therefore expressed using the fundamental solution of an equation with one delay fewer, i.e., it was built inductively. In the particular case of constant coefficients, the fundamental solution had the form of a matrix polynomial of a degree depending on time. This approach is suitable for theoretical problems such as existence, stability, and controllability (see [8,9,11,13,14,15,20]). To obtain a representation of a solution more suitable for applications, the Laplace transform was applied in [16] to an initial function problem consisting of a system of DDEs with constant delays and linear parts given by pairwise permutable constant matrices,

ẋ(t) = Ax(t) + B_1 x(t − τ_1) + · · · + B_n x(t − τ_n) + f(t), t ≥ 0, (1.1)

and an initial condition

x(t) = ϕ(t), −τ ≤ t ≤ 0, (1.2)

where τ := max{τ_1, τ_2, . . . , τ_n}. In this paper, we drop the commutativity assumption and derive an analogous result for time-dependent matrix functions A, B_1, . . . , B_n. Of course, the Laplace transform cannot be used if the matrices vary in time. Furthermore, we apply the obtained results to a system of nth order DDEs. We also consider particular cases to show the agreement with known results. We remark that in [10], a similar problem (representation of solutions without the commutativity assumption) was studied for difference equations.
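Although the aim of the paper is an explicit representation, equation (1.1) can also be integrated numerically by the method of steps, which is convenient for checking formulas in examples. Below is a minimal forward-Euler sketch; the matrices, delay, and initial function are hypothetical, chosen only for illustration.

```python
import numpy as np

def solve_dde_euler(A, Bs, taus, f, phi, T, h=1e-3):
    """Forward-Euler method-of-steps integration of
    x'(t) = A x(t) + sum_j B_j x(t - tau_j) + f(t), t >= 0,
    with history x(t) = phi(t) for t in [-max(taus), 0]."""
    n = int(round(T / h))
    ts = np.linspace(0.0, n * h, n + 1)
    xs = np.empty((n + 1, A.shape[0]))
    xs[0] = phi(0.0)
    lags = [int(round(tau / h)) for tau in taus]
    for k in range(n):
        rhs = A @ xs[k] + f(ts[k])
        for B, lag in zip(Bs, lags):
            # delayed state: use the history function while t - tau < 0
            rhs += B @ (xs[k - lag] if k >= lag else phi(ts[k] - lag * h))
        xs[k + 1] = xs[k] + h * rhs
    return ts, xs

# hypothetical data: a 2x2 system with a single delay
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B1 = np.array([[0.1, 0.0], [0.0, 0.1]])
ts, xs = solve_dde_euler(A, [B1], [1.0],
                         f=lambda t: np.zeros(2),
                         phi=lambda t: np.array([1.0, 0.0]),
                         T=2.0)
```

On each interval of length τ the right-hand side only uses already computed (or prescribed) values, which is exactly the method-of-steps idea mentioned in the references above.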
Representation of solutions of systems of linear difference equations with one delay was first presented in [4], and later in [12] the case of multiple delays and pairwise permutable matrix coefficients was investigated.
The paper is organized as follows. In the following section, using a particular sum of noncommutative matrix products and the unilateral Laplace transform, we derive a representation of a solution of a DDE with constant coefficients. In Section 3, we define a sum of iterated integrals and prove an analogous result for DDEs with time-dependent coefficients. In Section 4, we consider a DDE of higher order and derive a representation of a solution of the corresponding Cauchy problem. In the final section, we provide a short conclusion of the paper as well as a discussion.
In the whole paper, we shall denote by | · | the norm of a vector, without regard to its dimension. The same notation will be used for the corresponding induced matrix norm. Further, M_N denotes the linear space of all N × N constant matrices, and N and N_0 denote the sets of all positive and nonnegative integers, respectively. We also adopt the empty sum convention, ∑_{i∈∅} z(i) = 0 for any function z. We shall use the symbol ∏ to denote the ordered (noncommutative) product of matrices, i.e., ∏_{i=1}^n B_i = B_1 B_2 · · · B_n for matrices B_1, . . . , B_n. Finally, θ, Θ and I will stand for the zero vector, the zero matrix and the identity matrix of a suitable dimension, respectively.
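For readers experimenting numerically, the ordered product of matrices can be sketched as follows; the 2 × 2 matrices are hypothetical, and the point is only that the order of factors matters when they do not commute.

```python
import numpy as np
from functools import reduce

def ordered_product(mats):
    """Ordered (noncommutative) product B_1 B_2 ... B_n."""
    return reduce(np.matmul, mats)

# hypothetical noncommuting matrices
B1 = np.array([[1.0, 1.0], [0.0, 1.0]])
B2 = np.array([[1.0, 0.0], [1.0, 1.0]])

P12 = ordered_product([B1, B2])  # B1 B2
P21 = ordered_product([B2, B1])  # B2 B1, generally different
```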

Constant coefficients
In this section, we extend a result from [16] on a representation of a solution of a Cauchy problem for equation (1.1) to the case without the commutativity assumption.
First, we recall the result.
For example, let D = (A, B) and α = (2, 3). Then Υ_α = (1, 1, 2, 2, 2) and S^{Υ_α}_{|α|} has exactly 5!/(2!3!) = 10 elements, i.e., the number of all permutations of a set of five elements, two and three of which, respectively, are of the same type. Note that if AB = BA, then P^D_α = 10A²B³.

Remark 1. Let |α| > 0. If D_1, . . . , D_n are pairwise permutable, i.e., D_iD_j = D_jD_i for each i, j = 1, . . . , n, then each ordered product equals ∏_{i=1}^n D_i^{α_i} for every σ ∈ S^{Υ_α}_{|α|}, and the right-hand side does not depend on the order of the D_i's. Therefore,

P^D_α = (|α|!/(α_1! · · · α_n!)) ∏_{i=1}^n D_i^{α_i}.

The following examples show particular applications of P^D_α. For instance, taking D = (A, A), we get (A + A)² = P^D_{(2,0)} + P^D_{(1,1)} + P^D_{(0,2)} = A² + 2A² + A² = 4A², which is clearly correct.

Now, we can prove the following result on a representation of a solution of the initial function problem consisting of

ẋ(t) = B_1x(t − τ_1) + · · · + B_nx(t − τ_n) + f(t), t ≥ 0, (2.3)

and the initial condition (1.2).
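Both the counting argument and the commutative collapse can be checked numerically. The helper `P` below is an illustrative stand-in for the sum over all distinct orderings (the matrices are hypothetical; B is taken as 2I so that it commutes with A).

```python
import numpy as np
from itertools import permutations

def P(D, alpha):
    """Sum of ordered products over all distinct orderings that use
    the matrix D[i] exactly alpha[i] times (illustration of P_D^alpha)."""
    types = tuple(i for i, a in enumerate(alpha) for _ in range(a))
    total = np.zeros_like(D[0])
    for sigma in set(permutations(types)):   # distinct multiset orderings
        prod = np.eye(D[0].shape[0])
        for i in sigma:
            prod = prod @ D[i]
        total = total + prod
    return total

# hypothetical matrices: B = 2I commutes with everything
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = 2.0 * np.eye(2)

# number of distinct orderings for alpha = (2, 3): 5!/(2!3!) = 10
count = len(set(permutations((1, 1, 2, 2, 2))))
P23 = P([A, B], (2, 3))   # should equal 10 * A^2 B^3 since AB = BA
```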
Proof. First, we show that x is exponentially bounded. Let c_1, c_2 > 0 be such that |f(t)| ≤ c_1e^{c_2t} for all t ≥ 0. From (2.3) we obtain, for all t ≥ 0, an integral estimate for |x(t)| in terms of ‖ϕ‖ := max_{t∈[−τ,0]} |ϕ(t)|. Denote by g(t) the right-hand side of this estimate. Clearly, g is a nondecreasing positive function, and g(s − τ_j) ≤ g(s) for any s ∈ [0, t] and each j = 1, . . . , n. Using the Gronwall inequality, we derive an exponential bound on |x(t)| for all t ≥ 0. Now, we are able to apply the unilateral Laplace transform [17], defined as

L[h](p) = ∫_0^∞ e^{−pt}h(t) dt

for Re p > a and an exponentially bounded function h such that |h(t)| ≤ ce^{at} for all t ≥ 0 and some constants a, c ∈ R. The rest of the proof is analogous to the proof of [16, Theorem 3.1], so we skip some details. Using the properties of the Laplace transform, from (2.3) we get an equation for L[x](p) in which the matrix I − ∑_{j=1}^n B_je^{−pτ_j}/p is invertible (see e.g. [18, Proposition 7.5]); clearly, this holds for any sufficiently large p. Moreover, assuming that p is so large that the relevant Laplace images exist, we can expand the inverse into a series. Denoting by χ the Heaviside step function,

χ(t) = 1 for t ≥ 0 and χ(t) = 0 for t < 0,

it is easy to see that the terms of this series are Laplace images of functions expressed via χ and the Euler gamma function Γ [7]. Using the uniqueness of the inverse Laplace transform, we identify the corresponding originals. Next, by the convolution theorem for the Laplace transform [17, Theorem 2.39] and (2.7), we obtain the delayed terms of the solution for each j = 1, . . . , n. Here the last equality follows from B(t) = Θ whenever t < 0, due to the empty sum property. Analogously to A_j, we derive the remaining terms. This completes the proof.
The above proof provides a construction of the solution of (2.3), (1.2). Nevertheless, the obtained formula holds in a more general case.

Proof. Considering equation (2.3) as a matrix equation with f ≡ Θ and the initial condition X(t) = Θ for t < 0, X(0) = I, one can see that B of (2.5) is a matrix solution of this equation and the initial condition, i.e., Ḃ satisfies the equation for t ≥ 0, considering the right-hand derivative at t = 0. Direct differentiation of (2.4) for t ≥ 0, along with the properties of B just shown, proves the statement (for details see [16, Corollary 3.2]).

Now, we involve the linear nondelayed term.
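In the single-delay case, the fundamental solution of Ẋ(t) = BX(t − τ) with X(t) = Θ for t < 0 and X(0) = I reduces to a delayed matrix polynomial in the spirit of [6]. The following sketch, with hypothetical B and τ, checks such a polynomial against direct Euler integration.

```python
import numpy as np

def fundamental(B, tau, t):
    """Single-delay fundamental solution of X'(t) = B X(t - tau),
    X(t) = 0 for t < 0, X(0) = I, as the delayed matrix polynomial
    X(t) = sum_{k=0}^{floor(t/tau)} B^k (t - k*tau)^k / k!  (sketch)."""
    if t < 0:
        return np.zeros_like(B)
    X = np.zeros_like(B)
    coef = np.eye(B.shape[0])            # holds B^k / k!
    for k in range(int(np.floor(t / tau)) + 1):
        X = X + coef * (t - k * tau) ** k
        coef = coef @ B / (k + 1)
    return X

# cross-check against forward-Euler integration (hypothetical B, tau)
B = np.array([[0.0, 1.0], [-0.5, 0.0]])
tau, T, h = 0.5, 2.0, 1e-4
n, lag = int(round(T / h)), int(round(tau / h))
Xs = [np.eye(2)]
for k in range(n):
    delayed = Xs[k - lag] if k >= lag else np.zeros((2, 2))
    Xs.append(Xs[k] + h * (B @ delayed))
X_euler, X_formula = Xs[-1], fundamental(B, tau, T)
```

On each interval [kτ, (k + 1)τ) one more power of B enters, mirroring the matrix-polynomial structure of the fundamental solution mentioned in the introduction.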
Then the solution of the Cauchy problem (1.1), (1.2) has an explicit representation in terms of the matrices B̃_i := B_ie^{−Aτ_i}.

Proof. Let us set y(t) = e^{−At}x(t). Then y satisfies an initial value problem of the form (2.3), (1.2), and Theorem 2 can be applied. The statement is obtained by returning to x (for details see [16, Theorem 3.3]).
1. If we further suppose that all the matrices B_i are pairwise permutable, as in Theorem 1, then so are the matrices B̃_i = B_ie^{−Aτ_i}. Hence, by Remark 1, Theorem 3 coincides with Theorem 1.

2. To better see the effect of the noncommutativity of the matrix coefficients, one can compare the formula of Theorem 3 with its commutative counterpart in Theorem 1.
3. The assumption AB_i = B_iA for each i = 1, . . . , n in the above theorem was needed for y to satisfy an equation of the form (2.3). Otherwise, we would not be able to apply Theorem 2. A result without this assumption will be proved in the next section (see Corollary 2).

Time-dependent coefficients
In this section, we allow the matrix coefficients to vary in time and we derive formulas for solutions of the corresponding initial function problems.
We start by defining sums of iterated integrals of all different products of (in general nonpermutable) matrix functions from a given set. Let D = D(t) = (D_1(t), . . . , D_n(t)) be a vector of continuous matrix functions, let Υ_α be given as in Section 2, and define R^D_α(t, s) for t ≥ s ≥ 0. Clearly, for any α ∈ N^n_0 and any fixed s ≥ 0, R^D_α(·, s) is C¹-smooth on [s, ∞). If D_1, . . . , D_n are constant, a relation between R^D_α(t, s) and P^D_α of (2.2) can be derived (see Remark 3 below).
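The building blocks of R^D_α are iterated integrals of ordered products of the D_i over a simplex. A minimal numerical sketch of one such integral, for a fixed ordering and hypothetical integrands (not the paper's exact R^D_α notation), is the following nested quadrature.

```python
import numpy as np

def iterated_integral(Ds, s, t, m=100):
    """Left-endpoint approximation of the iterated integral
    int_s^t D1(u1) int_s^{u1} D2(u2) ... du_k ... du1
    for an ordered tuple Ds of matrix-valued functions
    (illustrative only; cost grows like m**len(Ds))."""
    D1, rest = Ds[0], Ds[1:]
    N = D1(s).shape[0]
    h = (t - s) / m
    total = np.zeros((N, N))
    for j in range(m):
        u = s + j * h
        inner = iterated_integral(rest, s, u, m) if rest else np.eye(N)
        total += h * (D1(u) @ inner)
    return total

# sanity check with a constant matrix function: the double integral of
# A over the simplex s <= u2 <= u1 <= t equals A^2 (t - s)^2 / 2
A = np.diag([1.0, 2.0])
val = iterated_integral((lambda u: A, lambda u: A), 0.0, 1.0, m=100)
expected = A @ A / 2.0
```

For time-dependent, noncommuting integrands, the ordering of the factors inside the integral matters, which is exactly why the sum over all orderings is needed.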
Next, we apply R^D_α(t, s) to express a solution of the matrix equation

Ẋ(t, s) = A(t)X(t, s) + B_1(t)X(t − τ_1, s) + · · · + B_n(t)X(t − τ_n, s), t ≥ s,

satisfying X(t, s) = Θ for t < s and X(s, s) = I. Here the dot stands for the derivative with respect to t.
Let t ≥ s ≥ 0 be arbitrary and fixed, and let δ > 0 be small. Then a direct computation gives (3.7). Next, if 0 ≤ s < t and t ≠ s + ∑_{i=1}^n α_iτ_i for each α ∈ N^n_0, and δ < 0 is such that |δ| is small, then (3.7) is still valid. For the other case, i.e., 0 ≤ s < t = s + ∑_{i=1}^n α^0_iτ_i for some α^0 = (α^0_1, . . . , α^0_n) ∈ N^n_0 and δ < 0 such that |δ| is small, we apply the identity R^D_{α^0}(t, s) = Θ to see that (3.8) holds. By these arguments, we can apply Lemma 1 to compute the derivative of Y(·, s) for t ≥ s; in the last step, we use the substitution α − e_j → α. This proves (3.6) and, as a consequence, the proof is complete.
It is worth mentioning that, as was explained in [13, Remark 2.1], instead of A(t) we can use any extension of A(t) to the whole real line, since the values of Φ(t) for t < 0 do not affect X(t, s). Consequently, if A is constant, then we can set Φ(t) = e^{At} for any t ∈ R. Moreover, if AB_m(t) = B_m(t)A for all t ∈ [s, ∞) and some m ∈ {1, . . . , n}, then the transformed coefficient becomes B_m(t)e^{−Aτ_m}, as in Theorem 3. Therefore, we use similar notation.
In particular, we obtain the result promised in Remark 2.3.

Higher order DDEs
In this section, we apply the results of the preceding sections to systems of higher order DDEs. In what follows, we add lower indices to denote the dimensions of the zero and identity matrices (a single index means a square matrix). Here, we consider a kth order DDE and show that its solution x(t) is equal to the first N coordinates of the solution y(t) of an associated first order system, where Ψ(t) is a fundamental matrix of the corresponding nondelayed system.

Proof. Let us denote y_1(t) = x(t), y_2(t) = ẋ(t), . . . , y_k(t) = x^{(k−1)}(t). Then y(t) = (y_1(t)^T, . . . , y_k(t)^T)^T solves the Cauchy problem

ẏ(t) = C(t)y(t) + D_1(t)y(t − τ_1) + · · · + D_m(t)y(t − τ_m) + g(t), t ≥ 0,

which is of the form (3.9), (1.2). The proof is finished by an application of Theorem 5.
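As a sketch of this reduction, consider the second-order case ẍ(t) = Bx(t − τ), with hypothetical block size N and coefficient B; setting y = (x^T, ẋ^T)^T yields companion block matrices of the shape used in the examples below.

```python
import numpy as np

N = 2                                      # hypothetical block size
B = np.array([[0.0, 1.0], [1.0, 0.0]])     # hypothetical coefficient

# y = (x^T, x'^T)^T turns x''(t) = B x(t - tau) into
# y'(t) = C y(t) + D y(t - tau) with block companion matrices
Z, I = np.zeros((N, N)), np.eye(N)
C = np.block([[Z, I],
              [Z, Z]])
D = np.block([[Z, Z],
              [B, Z]])

# C is nilpotent (C^2 = 0), so the fundamental matrix of the
# nondelayed part is the polynomial exp(Ct) = I + Ct
C2 = C @ C
```

The nilpotency of C is what makes the fundamental matrix of the nondelayed part a matrix polynomial in t, which simplifies the explicit formulas for second-order equations.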
Next, we present examples showing the agreement with known results.
Example 5. Let us consider the matrix DDE

Ẍ(t) = BX(t − τ), t ≥ 0,

for a constant matrix B ∈ M_N, along with a non-smooth initial condition. Applying Theorem 6, one can see the form of the solution. Here we have α = α_1 = |α|, Υ_α = (1, . . . , 1) ∈ N^α and S^{Υ_α}_{|α|} = {(1, . . . , 1)}, i.e., it consists of one element only. Hence, in the iterated integrals we can apply the substitution q_m − (α − m + 1)τ → q_m for each m = 1, . . . , α. To simplify the notation, we introduce the Kronecker product of matrices (see [21]) and, in particular, its property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) for any matrices A, B, C, D of appropriate dimensions. To go on with the computations, we need the following lemma.
Lemma 2. For any α ∈ N and t, s ∈ R, the following identity holds.

Proof. We prove the statement by mathematical induction with respect to α. Clearly, it holds for α = 1. Now, let the statement be valid for some α ∈ N. A direct computation then establishes it for α + 1, which proves the lemma.
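The Kronecker mixed-product property used in these computations is easy to confirm numerically; the random 2 × 2 matrices below are purely illustrative and carry no data from the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))

# mixed-product property: (A (x) B)(C (x) D) = (AC) (x) (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
```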
Using Lemma 2, we obtain the explicit form of the solution. In conclusion, we have proved the following result.
Here we have

D_i = ( Θ_N  Θ_N ; B_i  Θ_N ),  i = 1, . . . , n,

written in 2 × 2 block form with rows separated by semicolons, and we take Ψ given by (4.4). As in Example 5, we apply Lemma 2 to show the corresponding identity for any t ≥ s ≥ 0 and α ∈ N^n_0 with |α| > 0, and hence for all t ≥ s ≥ 0. Application of Theorem 6 gives the next result.

Conclusions and discussion
In the present paper, we stated representations of solutions of systems of DDEs with multiple constant delays and constant as well as time-dependent coefficients. To drop the commutativity assumption on matrix coefficients used in previous works, we introduced a sum of all different products of constant matrices, P^D_α, and a sum of iterated integrals of all different products of matrix functions, R^D_α(t, s). A connection between them was shown for constant matrices. Previously, in the case of n > 1 variable coefficients, computing the fundamental matrix solution required the fundamental matrix solution of an equation with n − 1 delays, which in turn required that of an equation with n − 2 delays, and so on down to an equation with one delay. Using the method proposed in this paper, it is enough to compute R^D_α(ϑ, σ) for all α with 0 ≤ ∑_{i=1}^n α_iτ_i ≤ t and all 0 ≤ σ ≤ ϑ ≤ t. This can be done in advance, before actually solving the equation. So it should be more suitable for practical computations than the inductive approach.
Finally, the results were applied to a system of higher order DDEs to obtain a representation of its solution. This was done by rewriting the system as a larger system of first order DDEs. To show the agreement with known results, we considered systems of second order DDEs with one and multiple delays, respectively. The Kronecker product of matrices appeared to be a convenient tool for simplifying the notation. As shown in the examples, for second order DDEs with one constant matrix coefficient or with multiple pairwise permutable constant matrix coefficients, the formulas resulting from an application of Theorem 6 can be simplified by an explicit calculation of the iterated integrals of matrix products. To the best of our knowledge, for systems of DDEs of order higher than 2, there are no results on explicit representation of solutions with which to compare our approach. To conclude, Theorem 6 covers a wide class of higher order DDEs, but simplifying the resulting formula in particular cases may require some effort.