EFFICIENCY OF OPERATIONAL DATA PROCESSING FOR RADIO ELECTRONIC EQUIPMENT

The paper deals with statistical data processing algorithms in the operation system of radio electronic equipment. The main purpose is the analysis of data processing algorithm efficiency based on analytical calculations and simulation results. During radio electronic equipment operation, failures are possible. These failures affect the equipment's technical condition, which can deteriorate. In case of condition-based maintenance, it is necessary to detect the moment when deterioration begins. Therefore, in this paper a deterioration detection algorithm was developed according to the Neyman-Pearson criterion with a fixed sample size. The initial data are the times between failures of radio electronic equipment, and these data can be described by the exponential probability density function. The step-function model was chosen to describe the failure rate change. To estimate the efficiency, the operating characteristic was calculated. The simulation based on the Monte-Carlo method confirmed the correctness of the theoretical calculations.


Introduction
Air navigation service is an important part of civil aviation. This service is provided by equipment, technologies, human resources, and regulatory documents (Kuzmenko, Ostroumov, & Marais, 2018).
Equipment consists of ground and airborne radio electronic systems. The ground segment includes radio communication, navigation and surveillance systems.
Equipment reliability depends on circuit design features and on the operation system (Dhillon, 2006). The operation system (OS) includes the following components: radio electronic equipment (REE); technological processes (maintenance, repair, etc.); personnel; regulatory documents, resources, etc. (Rausand, 2004). During the functioning of REE and their OS, mismatches with specifications may occur (Hryshchenko, 2016). This can lead to increased operational costs and increased air traffic service risks (Galar, Sandborn, & Kumar, 2017).
The operation system has a complicated structure; that is why it can be considered as an object of design and improvement.
Design and improvement issues of radio electronic equipment OS are researched in the scientific literature (Solomentsev et al., 2015). Operational efficiency depends on the level of REE reliability (Barlow & Proschan, 1965). For this reason, the mathematical tools of efficiency analysis are based on reliability theory, probability theory and mathematical statistics (Levin, 1978). The aviation administrations of the United States and Europe, and their scientific laboratories, have developed standards and guideline documents. Thus, the designers and manufacturers of aircraft and electronic equipment are guided by the MSG-1, MSG-2, and MSG-3 documents, etc.
To provide operational efficiency, statistical data processing algorithms can be used (Solomentsev, Zaliskyi, & Zuiev, 2016). The values of determining parameters (Silkov & Delas, 2015) and the values of reliability measures (Solomentsev et al., 2017) can serve as the source of initial data for processing. These data are the results of monitoring the technical condition of equipment (Mironov et al., 2016).
In the general case, the processes of technical condition change during REE operation are non-stationary and random (Smith, 2005). Such processes are called deterioration processes. The causes of deterioration include:
- aging of the REE elemental base;
- environmental conditions;
- the level of load;
- unsatisfactory actions of operational personnel;
- electromagnetic incompatibility, etc.
The presence of non-stationarity increases the level of uncertainty in solving data processing problems (Goncharenko, 2017). In the scientific literature, problems associated with the analysis of deterioration are considered as changepoint study tasks (Tartakovsky, Nikiforov, & Basseville, 2015).
Related topics, such as dynamic reliability assessment, are considered in several application areas of technical equipment (Wang et al., 2018).
In general, little attention is paid to the processing of non-stationary random processes in cases when these processes describe changes in the technical condition of REE and OS components. This negatively affects OS efficiency and REE reliability.

Problem statement
A generalized structural scheme of the OS can be represented as follows (Figure 1).
The block diagram describes the OS as a scheme for the formation and execution of control actions on REE and other OS components in order to ensure the reliability of the REE and the efficiency of OS functioning. Control actions should be timely and valid to reduce the risks and potential costs in case of changes in the technical condition of REE and other OS components. It is known that a condition prediction procedure can reasonably be used for control objects (Gertsbakh, 2005). In case of prediction, the models of determining parameters must be the same at the data collection stage and within the extrapolation period. Therefore, if deterioration occurs, the changepoint in the trend of determining parameters should be detected.
The diagram (Figure 1) is based on the principle of adapting to changing user requirements, regulatory documents and the technical condition of the equipment. Adaptation is one of the principles of artificial intelligence systems (Jones, 2009); this principle can be implemented in the OS for prescriptive maintenance of REE (Taranenko et al., 2018). Data processing algorithms are intended for the formation of correct and timely preventive and corrective actions (Goncharenko, 2018).
The main points in considering the detection procedures are the models of the data to be processed, the methods for synthesis and analysis of procedure efficiency, and the measures and criteria of efficiency. Within such a methodological approach, this article solves the problem of synthesis and analysis of the procedure for deterioration detection in the trend of a non-stationary random process that characterizes changes in the condition of REE, for a given model of the random process.
Let us formulate a mathematical statement of the research problem at the level of functionals and operators. In general, the measure of efficiency of the OS can be defined as a function of the type

E = F(A, t_d, t_Σ, D, P_fa, U, C),

where A is a set of algorithms for statistical processing of operational data, including detection of deterioration; t_d is the time interval from the moment when deterioration begins to the moment of its detection; t_Σ is the observation interval; D is the probability of correct detection; P_fa is the probability of false alarm; U is the computational complexity of the data processing algorithm; C is the loss function due to untimely detection of deterioration.
The aim of this research is to synthesize procedures for deterioration detection that provide the maximum of the efficiency measure under specified requirements on the parameters D, P_fa, t_d and U. In other words, it is necessary to ensure

E → max, subject to t_d ≤ t_d*, D ≥ D*, P_fa ≤ P_fa*, U ≤ U*,

where t_d*, D*, P_fa*, U* are the requirements for the relevant parameters.

Synthesis of algorithm for non-stationary random process processing
It is known that REE is the primary component of the OS.
Let us consider the case when the technical condition of the REE is characterized by a reliability parameter in the form of the failure rate. There are different models for the description of technical condition deterioration, e.g. step-function, linear and quadratic models.

Figure 1. Block diagram of operation system
In the following, we will consider the case when the failure rate change corresponds to the step-function model. For this model, the failure rate is equal to two different constant values before and after changepoint.
The initial data are the times between failures. In the observation interval, there are periods when the times between failures are characterized by the probability density function (PDF) f_1(t), without deterioration of the REE technical condition, and by the function f_2(t) in case of deterioration. Analysis of the literature shows that the most common PDF for REE times between failures is the exponential distribution (Solomentsev et al., 2013). The exponential distribution allows obtaining relatively simple mathematical formulas, which can be applied in engineering calculations. In addition, the exponential distribution has a constant failure rate, which corresponds to the step-function model. So in this case

f_1(t) = λ exp(−λt), f_2(t) = aλ exp(−aλt),

where a is the deterioration coefficient of the failure rate that should be detected with probability D; k is the number of the failure after which the deterioration begins; n is the total quantity of observed failures.
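As an illustration of this data model, the following sketch (Python; the function name and all parameter values are hypothetical) generates a sample of times between failures whose failure rate jumps from λ to aλ after failure number k:

```python
import random

def simulate_times(n, k, lam, a, seed=None):
    """Times between failures under the step-function model:
    the first k samples have PDF f1(t) = lam*exp(-lam*t),
    the remaining n - k samples have PDF f2(t) = a*lam*exp(-a*lam*t)."""
    rng = random.Random(seed)
    before = [rng.expovariate(lam) for _ in range(k)]
    after = [rng.expovariate(a * lam) for _ in range(n - k)]
    return before + after

# Example: 20 failures, deterioration (a = 2) begins after failure 10.
times = simulate_times(n=20, k=10, lam=0.001, a=2.0, seed=1)
```

After the changepoint the mean time between failures drops from 1/λ to 1/(aλ), which is the effect the detection algorithm has to notice.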
Let us solve the synthesis of the detection algorithm based on simple hypothesis testing. That is, the parameters a^(0), λ^(0), n should be known to provide a guaranteed level of the efficiency measure. The actual values of the probability distribution parameters are related to the current efficiency and are taken into account in the theoretical analysis and in the modelling of the variables t_i. The observed interval on which there is no changepoint contains k samples of times between failures t_i. The parameter k is a random variable.
The hypothesis H_0 corresponds to the event when there is no changepoint in the data under observation. The alternative H_1 corresponds to changepoint occurrence. The alternative H_1 is generally composite because the parameter k is unknown. The hypothesis H_0 is characterized by the PDF f_1(t), and the alternative H_1 is characterized by the PDF f_2(t). The synthesis of the changepoint detection algorithm is based on the Neyman-Pearson criterion with a fixed sample size n. In publications, there are changepoint detection algorithms focused on a posteriori analysis of output data with a fixed sample size (Zhyhlyavskyi & Kraskovskyi, 1988) and on sequential analysis of data with an unlimited sample size (Zaliskyi & Solomentsev, 2014). In this paper, we consider a changepoint detection algorithm based on a posteriori analysis. For this purpose, the decision about the presence or absence of a changepoint is taken after processing the entire sample.
Let us calculate the likelihood ratio according to the Neyman-Pearson criterion

Λ(t_1, …, t_n) = L_1(t_1, …, t_n) / L_0(t_1, …, t_n),

where L_1(·) is the likelihood function for the alternative H_1 and L_0(·) is the likelihood function for the hypothesis H_0. Assume that the samples contain independent random variables; then, after taking the logarithm of the ratio of the likelihood functions, the decisive statistic takes the form

θ(k) = (n − k) ln a + (1 − a) λ Σ_{i=k+1}^{n} t_i.
Here θ(k) is a decisive statistic that depends on the sample size n, the parameters a^(0), λ^(0) and k. In the expression for the decisive statistic, the parameters a^(0) and λ^(0) must be known. For these parameters, the detection algorithm should provide the necessary level of the probability of correct changepoint detection and other measures of the detection algorithm efficiency. The form of the decisive statistic relates to issues of a priori uncertainty of the detection procedure.
The parameter k characterizing the moment of the changepoint is fundamentally unknown. To overcome the a priori uncertainty of the parameter k, it is assumed that the value of the decisive statistic is maximal when the changepoint actually takes place (Tartakovsky et al., 2015). Then the algorithm for changepoint detection is as follows: after processing n samples of times between failures t_1, …, t_n, the statistic θ(k) is calculated for each specific value k and compared with the decision-making threshold V. There may be several options in the decision-making scheme.
In the first decision-making scheme, after calculating the decisive statistic for the entire sample, the maximum value of the decisive statistic is compared with the threshold V_1. If θ_max ≥ V_1, the decision about the presence of a changepoint is made. The failure number that corresponds to θ_max is irrelevant. In the second decision-making scheme, after calculating the decisive statistic for the current value k, it is immediately compared with the threshold V_2. That is, the decision about the presence of a changepoint is taken at the first event of exceeding the threshold level V_2: if θ(k) ≥ V_2, the decision is made about changepoint presence (the alternative H_1 is true); if θ(k) < V_2 for all k, we consider that there is no changepoint (the hypothesis H_0 is true). In this case, the moment when the changepoint occurred can also be determined.
In this paper, the first decision-making scheme is considered.
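A minimal sketch of the first decision-making scheme (Python). It assumes the log-likelihood statistic for the exponential step model, θ(k) = (n − k) ln a + (1 − a) λ Σ_{i=k+1..n} t_i; the function names and the threshold value are illustrative:

```python
import math

def decisive_statistic(times, k, lam, a):
    """theta(k) = (n - k)*ln(a) + (1 - a)*lam*sum(t_i for i > k)."""
    n = len(times)
    return (n - k) * math.log(a) + (1.0 - a) * lam * sum(times[k:])

def detect_changepoint(times, lam, a, threshold):
    """First scheme: compare the maximum of theta(k) over k with V1."""
    theta_max = max(decisive_statistic(times, k, lam, a)
                    for k in range(len(times)))
    return theta_max >= threshold

# Four equal intervals with the nominal rate lam = 1 and no deterioration:
# the statistic stays negative, so no changepoint is declared.
print(detect_changepoint([1.0, 1.0, 1.0, 1.0], lam=1.0, a=2.0,
                         threshold=1.756))  # -> False
```

The maximization over k implements the assumption above that the statistic peaks at the actual changepoint; the failure number achieving the maximum is deliberately discarded.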

Analysis of algorithm for non-stationary random process processing
The analysis problem consists in calculating numerical values of the efficiency measures: the probability of type I errors α (acceptance of the alternative H_1 when the hypothesis H_0 is true) and the probability of type II errors β (acceptance of the hypothesis H_0 when the alternative H_1 is true). Normally, for algorithms of statistical classification or detection, the probability D of correct changepoint detection is calculated, that is D = 1 − β. These efficiency measures correspond to the option of a posteriori analysis of the output statistics in the form of times between failures t_1, …, t_n. In this article, the changepoint detection problem is solved for the case of simple hypotheses, so it is necessary to provide a given level D^(0) of the probability of correct changepoint detection for a known level λ^(0), the sample size n, the parameter a^(0) and k_0. The efficiency of the changepoint detection algorithms was also analysed by simulating the processing of non-stationary random processes using the Monte-Carlo method.
The simulation tasks are as follows:
1. Calculation of the mathematical expectation and variance of the decisive statistic for different values of the parameters k, n and a^(0).
2. Construction of graphs of the dependence of the probability D of correct changepoint detection on the parameter a for different values of k, the sample size n and the corresponding decision thresholds V. The decision thresholds are selected so that, for the given a^(0), the required level D^(0) is obtained at fixed levels of the parameters n and k_0.
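Task 1 can be sketched as a Monte-Carlo experiment (Python; the run count, seed and parameter values are arbitrary choices, and the statistic is assumed to be the log-likelihood form θ(k) = (n − k) ln a + (1 − a) λ Σ_{i>k} t_i):

```python
import math
import random

def mc_mean_var(n, k0, k, lam, a, runs=20000, seed=3):
    """Monte-Carlo estimates of the mean and variance of theta(k)
    when the true changepoint is at k0 (rate lam before, a*lam after)."""
    rng = random.Random(seed)
    values = []
    for _ in range(runs):
        times = [rng.expovariate(lam) for _ in range(k0)]
        times += [rng.expovariate(a * lam) for _ in range(n - k0)]
        theta = (n - k) * math.log(a) + (1.0 - a) * lam * sum(times[k:])
        values.append(theta)
    mean = sum(values) / runs
    var = sum((v - mean) ** 2 for v in values) / (runs - 1)
    return mean, var

# Estimate at the true changepoint k = k0; for the exponential step model
# the theoretical mean there is (n - k0) * (ln a - (a - 1)/a).
mean, var = mc_mean_var(n=20, k0=10, k=10, lam=1.0, a=2.0)
```

Sweeping k from 0 to n − 1 in this sketch reproduces the dependence of the mean and variance of the statistic on k that is tabulated and plotted in the paper.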
For the efficiency analysis, we make two assumptions:
1. The decision threshold V is calculated for the population parameters a^(0), λ^(0), n at the point k_0, where the mathematical expectation of the decisive statistic θ(k) has its maximum value.
2. The probability density function of the decisive statistic is assumed to be normal.
Let us consider the construction of the operating characteristic D(a). The probability of correct detection is calculated by the formula

D = ∫_V^∞ f(θ | H_1) dθ,

where f(θ | H_1) is the normal PDF of the decisive statistic for the alternative H_1. The integral in this expression can be presented in terms of the normal distribution function Φ(·):

D = 1 − Φ((V − m_1(θ)) / σ_1(θ)).

The expressions for the mathematical expectation of the decisive statistic θ(k) are

m(θ(k)) = (n − k)(ln a − (a − 1)/a) for k ≥ k_0,
m(θ(k)) = (n − k) ln a − (a − 1)((k_0 − k) + (n − k_0)/a) for k < k_0.

Based on the obtained relations, calculations of the mathematical expectation and variance of the decisive statistic were made. The calculation results are given in Table 1. The initial data in Table 1 were selected from the following assumptions: 1) the sample size should be small enough to test the sensitivity of the data processing algorithm; 2) the deterioration coefficient corresponds to REE operation practice; 3) the failure rate must correspond to the real values for REE or its structural units.
In Table 1, m_1*(θ) and μ_2*(θ) are the estimates of the mathematical expectation and variance obtained from the simulation results. The grey column corresponds to the beginning of the deterioration of the technical condition. Figure 2 and Figure 3 show the calculation results and the point estimates based on the simulation results for the dependence of the mathematical expectation and variance on the parameter k. In Figure 2 and Figure 3, the rectangular points are the estimates obtained from the simulation results.
The parameter k_0 corresponds to the failure number after which deterioration starts.
Based on the data analysis, it can be concluded that the maximum of the decisive statistic corresponds to the condition k = k_0. In other words, the decision-making rule about the presence of a changepoint based on the selection of the maximum value of the statistic and its comparison with the threshold is correct.
The threshold level is determined by the formula

V = m_1(θ) + σ_1(θ) Φ^(−1)(β),

where Φ^(−1)(·) is the inverse of the normal distribution function Φ(·). Knowing the decision-making threshold V, the probability of a type I error can be calculated:

α = ∫_V^∞ f(θ | H_0) dθ = 1 − Φ((V − m_0(θ)) / σ_0(θ)),

where f(θ | H_0) is the normal PDF of the decisive statistic for the hypothesis H_0. For the data from Table 1 and the value β^(0) = 0.1, the decision threshold is V_1 = 1.756 and the probability of a type I error is α = 0.028. Figure 4 shows the deterioration operating characteristic for the data from Table 1 obtained by analytical calculations and based on statistical simulation.
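A sketch of the threshold calculation (Python, using `statistics.NormalDist` for Φ and its inverse). The moments of θ(k_0) under H_1 and H_0 are derived here for the exponential step model; since the initial data of Table 1 are not restated in the text, the numbers produced are illustrative and will not reproduce V_1 = 1.756 and α = 0.028:

```python
import math
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: cdf() is Phi, inv_cdf() its inverse

def threshold_and_alpha(n, k0, a0, beta):
    """V = m1 + sigma1 * Phi^{-1}(beta), then alpha = 1 - Phi((V - m0)/sigma0).

    m1, sigma1: moments of theta(k0) under H1 (deterioration a0 after k0);
    m0, sigma0: moments of theta(k0) under H0 (no changepoint at all)."""
    m1 = (n - k0) * (math.log(a0) - (a0 - 1.0) / a0)
    s1 = (a0 - 1.0) * math.sqrt(n - k0) / a0
    m0 = (n - k0) * (math.log(a0) - (a0 - 1.0))
    s0 = (a0 - 1.0) * math.sqrt(n - k0)
    v = m1 + s1 * std_normal.inv_cdf(beta)
    alpha = 1.0 - std_normal.cdf((v - m0) / s0)
    return v, alpha

v, alpha = threshold_and_alpha(n=20, k0=10, a0=2.0, beta=0.1)
# By construction, D = 1 - beta = 0.9 at a = a0 for this threshold.
```

The choice of V fixes the probability of correct detection at the design point; α then follows from the same threshold evaluated under H_0.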
The comparison of the graphs in Figure 4 confirms the correctness of the calculations.
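Under the normal-distribution assumption, the operating characteristic can be sketched as follows (Python). This sketch evaluates the statistic at k = k_0, where its mean is maximal; the statistic is built with the nominal coefficient a_0 while the actual deterioration coefficient a varies, and the moment formulas assume the exponential step model (they will not reproduce the Figure 4 curve, whose initial data come from Table 1):

```python
import math

def norm_cdf(x):
    """Standard normal distribution function Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def moments_at_k0(n, k0, a0, a):
    """Mean and variance of theta(k0) when the statistic uses the nominal
    coefficient a0 but the actual deterioration coefficient is a."""
    mean = (n - k0) * (math.log(a0) - (a0 - 1.0) / a)
    var = (a0 - 1.0) ** 2 * (n - k0) / a ** 2
    return mean, var

def detection_probability(n, k0, a0, a, threshold):
    """D(a) = P(theta(k0) >= V) = 1 - Phi((V - m1)/sigma1)."""
    mean, var = moments_at_k0(n, k0, a0, a)
    return 1.0 - norm_cdf((threshold - mean) / math.sqrt(var))

# D(a) grows with the actual deterioration coefficient a.
for a in (1.5, 2.0, 3.0):
    print(round(detection_probability(n=20, k0=10, a0=2.0, a=a,
                                      threshold=0.5), 3))
```

Plotting detection_probability over a grid of a values yields an operating characteristic of the same shape as Figure 4.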

Conclusions
This article analyses the efficiency of the data processing algorithm in the OS of REE. The initial data for processing are presented in the form of a non-stationary random process of the failure rate change.

Figure 2. Dependence of the mathematical expectation of the decisive statistic θ on the parameter k according to the data of Table 1

Figure 3. Dependence of the variance of the decisive statistic θ on the parameter k according to the data of Table 1

The problem of synthesizing a deterioration detection algorithm was solved using the Neyman-Pearson criterion. The analysis is based on theoretical calculations and modelling. Analytical expressions for the efficiency evaluation of the algorithm are obtained. The assumption that the decision threshold can be found using the normal distribution for the maximum of the decisive statistic is confirmed by the simulation results.
The graph of the detection characteristic shows that the probability of the correct decision about the alternative H_1 is 0.9 at the deterioration level a^(0) = 2. The obtained results can be used during the design of new statistical data processing subsystems in the operation systems of REE.