No. 4 (2022)

Published: 2022-11-01

SECTION I. DATA ANALYSIS AND MODELING

  • METHODOLOGY FOR EVALUATION AND SELECTION OF PLM-SYSTEMS IN THE DIGITAL TRANSFORMATION OF A MACHINE-BUILDING ENTERPRISE

    P.A. Voronin, A.M. Belevtsev, F.G. Sadreev
    6-15
    Abstract

    As part of the transition to the sixth technological order, industrial enterprises face the pressing question of how to carry out digital transformation effectively. To do this, enterprises need to ensure the transition to process management, automate business processes, and integrate all processes, applications and data on a single platform. This raises the problem of choosing a way to combine information flows between the various software tools in the enterprise; one of the solutions is the use of a PLM system. With a discrete type of production, their use is complicated by the large number of software solutions involved in the main, organizational, supporting and business-development processes. It is therefore necessary to determine the optimal system that would meet all the necessary criteria for building a single platform for a "seamless" production process. There are a large number of PLM systems on the market, each implementing a certain set of functional purposes. The choice of a PLM system should be based on the satisfaction of an interconnected set of enterprise requirements, which are determined from its business processes; this eliminates most unsuitable software solutions. At the same time, the PLM system with the widest functionality may not meet the requirements for integration with business process automation systems, as well as economic, social, political and other requirements. Therefore, an individual list of criteria is compiled for each enterprise in accordance with its field of activity, the type of production, the available software, the existing level of automation and other parameters. The article proposes a methodology for choosing a PLM system for a radio-electronic engineering enterprise, based on the analysis of business processes and determination of enterprise requirements, monitoring and determination of the functional purposes of PLM systems, and selection of the optimal option using the analytic hierarchy process. Obtaining an integrated assessment of the selected criteria and candidate software solutions allows an objective choice of the optimal PLM system for a particular enterprise. The use of the methodology will speed up and improve the quality of the PLM system selection process for a given enterprise in the conditions of transition to the sixth technological order and Industry 4.0.
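    The final step of the methodology relies on the analytic hierarchy process. As a minimal illustration (not the article's data), the sketch below computes a priority vector and a consistency ratio from a hypothetical 3×3 pairwise-comparison matrix of selection criteria; the criteria and judgment values are assumptions.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three selection criteria
# (functionality, integration, cost); the values are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A gives the AHP priority (weight) vector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix).
lam_max = eigvals.real[k]
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```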

  • APPLICATION OF FUZZY LOGIC FOR MAKING DECISIONS ABOUT EVACUATION IN CASE OF FLOODING

    E.M. Gerasimenko, V.V. Kureichik, S.I. Rodzin, A.P. Kukharenko
    Abstract

    The article addresses natural disasters such as flooding, which can be predicted a few
    hours before they occur so that evacuation of the population can be organized. Evacuation
    means that people in disaster areas must leave these areas and reach shelters. The article presents
    an analysis of the decision-making process on evacuation, the main criteria determining
    the decision and the main stages of using fuzzy logic to make a decision on evacuation based on
    qualitative and quantitative values of the decision-making criteria. These stages include selection
    of criteria, determination of qualitative input and output variables, fuzzification of variables,
    definition of the base of fuzzy rules, construction of fuzzy inference, visualization of results
    and sensitivity analysis. When modeling, the following criteria were taken into account: the
    predicted flood level, the level of danger, the vulnerability of the area of the expected flood and
    the possibility of safe evacuation. The predicted flood level was based on the parameters of the
    maximum level and the rate of water rise. The hazard level reflected the physical characteristics
    of the flood and its potential impact on the safety of people in the flood area. The vulnerability
    of the area of the expected flood was defined as the inability at the local level to prevent people
    from direct contact with flood waters during the event. The possibility of safe evacuation was
    defined as a set of limitations and potential negative aspects that could delay or hinder the successful
    evacuation. Descriptions of the qualitative criteria for deciding on the need for evacuation and
    examples of constructing the fuzzy rule base are presented. The fuzzy
    model is implemented using Matlab Fuzzy Logic Toolbox. The procedure of fuzzy inference and
    interpretation of the solution and a model of several scenarios and flood situations are described.
    The method by which a fuzzy model of decision-making on evacuation can be applied in
    combination with a geoinformation system is considered. The actions related to the need for
    evacuation for various scenarios and circumstances are presented.
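    As an illustration of the fuzzification, rule base, inference and defuzzification stages described above, here is a minimal Mamdani-style sketch with hand-written triangular membership functions; the two inputs, their universes, the rule set and the output scale are illustrative assumptions rather than the model from the article.

```python
# Minimal Mamdani-style sketch with two inputs (predicted flood level, area
# vulnerability) and one output (evacuation urgency). All membership functions
# and rules below are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(flood_level, vulnerability):
    # Fuzzification of the crisp inputs
    level = {"low": tri(flood_level, -1, 0, 2), "high": tri(flood_level, 1, 3, 4)}
    vuln = {"low": tri(vulnerability, -1, 0, 5), "high": tri(vulnerability, 3, 10, 11)}
    # Rule base (min as AND); output singletons on a 0..1 urgency scale
    rules = [
        (min(level["high"], vuln["high"]), 1.0),  # evacuate immediately
        (min(level["high"], vuln["low"]), 0.6),   # prepare to evacuate
        (level["low"], 0.1),                      # monitor the situation
    ]
    # Defuzzification by weighted average of the activated rules
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

print(infer(flood_level=2.5, vulnerability=8))  # -> urgency close to 1.0
```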

  • INTELLIGENT DATA ANALYSIS IN ENTERPRISE MANAGEMENT BASED ON THE SIMULATED ANNEALING ALGORITHM

    E.V. Kuliev, A.V. Kotelva, M.M. Semenova, S.V. Ignateva, A.P. Kukharenko
    Abstract

    The article presents an analytical review of the simulated annealing algorithm as applied to the
    problem of efficient enterprise management, and the algorithm is tuned for this problem. As a case
    study, the optimization of the work schedule of an organization's employees is considered.
    A worker scheduling model with strong (hard) and weak (soft) constraints is established, and the
    simulated annealing algorithm is used to optimize the strategy for solving the staff scheduling
    model. Simulated annealing is suitable for solving large-scale combinatorial optimization problems
    and is used to evaluate and obtain the optimal scheduling strategy. The algorithm also performs
    well in data mining for human resource management: mining large volumes of data helps companies
    conduct dynamic analysis of talent recruitment, carry out the recruitment plan in a consistent,
    high-quality way, analyze the characteristics of candidates from many angles and improve the
    level of human resource management. An algorithm implementing simulated annealing for this
    problem has been developed. Simulated annealing accepts new solutions according to the Metropolis
    criterion, so in addition to accepting improving solutions it also accepts, within a limited range,
    solutions that worsen the objective. The Metropolis algorithm is a sampling method used mainly
    for complex distribution functions; it is somewhat similar to the variance sampling algorithm, but
    the auxiliary distribution function changes over time. Experimental
    studies have been carried out that show that a worker scheduling model based on strong
    and weak constraints is significantly better than a manual scheduling model, achieving an effective
    balance between controlling wage costs in an organization and increasing employee satisfaction.
    The successful application of a workforce scheduling model based on a simulated annealing
    algorithm brings new insights into solving large-scale worker scheduling problems.
    The results presented can serve as a starting point for studying personnel management systems
    based on data mining technology.
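    As an illustration of the Metropolis acceptance rule mentioned above, the sketch below applies simulated annealing to a toy scheduling-style cost function; the cost function, neighborhood move and cooling schedule are illustrative assumptions rather than the authors' model.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, alpha=0.95, steps=5000):
    """Generic simulated annealing loop with Metropolis acceptance."""
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        # Metropolis criterion: always accept improvements, accept
        # worsening moves with probability exp(-delta / t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= alpha  # geometric cooling schedule
    return best

# Toy example: assign 10 shifts to 3 workers, penalizing load imbalance.
def cost(assignment):
    loads = [assignment.count(w) for w in range(3)]
    return max(loads) - min(loads)

def neighbor(assignment):
    a = list(assignment)
    a[random.randrange(len(a))] = random.randrange(3)
    return tuple(a)

x0 = tuple(random.randrange(3) for _ in range(10))
print(simulated_annealing(cost, neighbor, x0))
```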

  • EXPERIMENTAL STUDY OF THE RELIABILITY OF BROADCAST ENCRYPTION SCHEMES WITH LOW-POWER ALGEBRAIC GEOMETRIC CODES

    D.V. Zagumennov, V.V. Mkrtichyan
    Abstract

    Broadcast encryption is a data distribution protocol that solves the problem of delivering digital
    products to authorized users while preventing unauthorized parties from accessing the data. It is widely
    used for data protection in computer networks, digital television and distributed storage. In broadcast
    encryption schemes, data is distributed freely, but in encrypted form, and each legal user is given a
    unique set of keys to decrypt it. However, broadcast encryption schemes are vulnerable to attacks
    from coalitions of malicious users from among authorized users who are trying to create “pirated”
    keys and gain unauthorized access to distributed data. Attacks of this kind can be handled in broadcast
    encryption schemes by using error-correction codes that have special identifying properties, in
    particular, frameproof (FP) and traceability (TA) properties. Previously, theoretical limits were obtained
    for the power of a coalition of attackers, within which schemes based on identifying algebraic
    geometric codes are applicable. The paper presents an information system for conducting experimental
    studies of the reliability of schemes based on low-power identifying algebraic geometric codes, in particular for calculating the probabilities of violation of the identifying properties, including when exceeding
    known theoretical limits. As an example of using the presented system, the results of a computational
    experiment for two algebraic geometric codes are presented and analyzed. In conclusion, some open
    questions are considered that are of interest for further research, in particular, the possibility of expanding
    experimental studies to codes of arbitrary power.

  • ANALYSIS OF ADVANCED COMPUTER TECHNOLOGIES FOR CALCULATION OF EXACT APPROXIMATIONS OF STATISTICS PROBABILITY DISTRIBUTIONS

    A.K. Melnikov, I.I. Levin, A.I. Dordopulo, L.M. Slasten
    Abstract

    The paper is devoted to the evaluation of the hardware resources of computer systems for
    solving a computationally expensive problem: calculation of the probability distributions of
    statistics by the second multiplicity method based on Δ-exact approximations for samples with a
    size of 320-1280 characters and an alphabet power of 128-256 characters, with an accuracy
    of Δ=10⁻⁵. The total solution time should not exceed 30 days, or 2.592·10⁶ seconds of 24/7 computing.
    Owing to the properties of the second multiplicity method, the computational complexity
    of the calculations can be brought to the range of 9.68·10²²-1.60·10⁵² operations with
    6.50·10²³-1.39·10⁵⁰ tested vectors. The solution of this problem for the specified parameters
    of samples during the given time requires the hardware resource which cannot be provided
    by modern computer means such as processors, graphics accelerators, programmable logic
    integrated circuits. Therefore, in the paper we analyze the possibilities of promising quantum and
    photon technologies for solving the problem with the given parameters. The main advantage of
    quantum computer systems is the high speed of calculations for all possible parameter values.
    However, quantum acceleration will not be achieved in calculating the probability distributions of
    statistics because all obtained solutions have to be checked, and the number of obtained solutions
    corresponds to the dimension of the problem. In addition, given the current development
    level of quantum hardware components, it is impossible to create and use 120-qubit quantum
    computers for the solution of the considered problem. Photon computers can provide high
    computation speed at low power consumption and require the smallest number of nodes to solve
    the considered problem. However, unsolved problems with the physical implementation of efficient
    memory elements and the lack of available hardware components make the use of photon computer
    technologies impossible for calculation of the probability distributions of statistics in the near
    future (5-7 years). Therefore, it is most reasonable to use hybrid computer systems containing
    nodes of different architectures. To solve the problem on various hardware platforms (general-purpose
    processors, GPUs, FPGAs) and configurations of hybrid computer systems, we suggest using
    the architecture-independent high-level programming language SET@L. The language combines
    the representation of calculations as sets and collections (based on the alternative set theory
    of P. Vopenka), the absolutely parallel form of the problem represented as an information graph,
    and the paradigm of aspect-oriented programming.
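    For orientation, even the lower end of the quoted complexity range implies a sustained performance far beyond any single modern node; a back-of-the-envelope check using only the figures quoted above:

$$
R_{\min} \approx \frac{9.68\cdot 10^{22}\ \text{operations}}{2.592\cdot 10^{6}\ \text{s}} \approx 3.7\cdot 10^{16}\ \text{operations per second},
$$

    i.e. tens of peta-operations per second sustained around the clock for a month in the most favorable case, which is what motivates the analysis of quantum, photonic and hybrid architectures in the paper.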

  • EVALUATION OF THE FUNCTIONAL ELEMENTS INFLUENCE ON THE PARAMETERS OF THE QKD SYSTEM BASED ON B92 PROTOCOL

    K.E. Rumyantsev, P.D. Mironova, H.H. Shakir
    Abstract

    The influence of the parameters of functional elements on the energy, time and probabilistic
    characteristics of the quantum key distribution system (QKD) based on the B92 protocol is studied.
    The dependences of the probability of writing correct and erroneous bits into a raw quantum
    key sequence are plotted for various lengths of a fiber-optic communication line (FOCL) and the
    use of various lasers (EML-laser; DFB-laser; VCSEL-laser and FP-laser) and photodetector
    modules (id201; id210; id220; id230). Thus, changes in the probability of writing a correct bit into
    a raw quantum key sequence are much more significant than changes in the probability of writing
    an erroneous bit (50.9 times versus 3.3 times with FWHM=80 pm and a change in the length of
    the FOCL from 10 to 100 km). This is due to the fact that with an increase in the length of the
    FOCL, the probability of the absence of registration at the receiving station of photons or dark
    current pulses (DCP) sharply increases. The numerical results indicate a directly proportional dependence
    of the probability of writing an erroneous bit on the frequency of generation of noise
    pulses of single-photon avalanche photodiodes (SAPD). So, with an increase in the frequency of
    occurrence of DCP by 60 times (from 100 to 6000 Hz), the probability of recording an erroneous
    bit also increases by 60 times (for example, with a FOCL length of 100 km – 6.39 versus 383.3).
    It has been established that the root-mean-square deviation of the photon delay time is directly proportional to the length of the FOCL and the width of the laser spectrum. With a spectrum width
    of FWHM=10 pm and an increase in the FOCL length from 10 to 100 km (by a factor of 10), the
    standard deviation of the photon delay time also increases by a factor of 10 (from 4.16 to 41.6 ps).
    To achieve the best performance of the QKD system as a whole, it is advisable to use a laser with
    a minimum width of the radiation spectrum, for example, an EML-laser. However, EML-lasers are
    considered the most complex and expensive of all the considered types of lasers, so the use of
    EML-lasers significantly increases the cost of the entire QKD system.

  • THE TRANSIENT REGIME PATTERNS IN THE DISSIPATIVE CELL MODEL OF EARTHQUAKES

    A.S. Cherepantsev
    Abstract

    The purpose of this work was to analyze the mechanisms of the growth of drop clusters,
    leading on a finite-size lattice to a state close to a critical one with a power-law size distribution of
    clusters similar to that observed in a seismic process. At the same time, the question of applicability
    of this model to the description of processes in a real geophysical medium remains open. Analysis of
    the coupling of elements in the one-dimensional OFC model with open boundary conditions allows an estimation of the variability of the incoming energy to the lattice elements located at different
    distances from the boundaries. The constructed computational model makes it possible to
    estimate the size of the boundary areas of high average incoming energy variability at different
    values of the coupling parameter α. It is shown that, as α grows, the boundary region of inhomogeneity
    expands. It is shown that there are two different modes of synchronous drop formation,
    simulating an earthquake. Both mechanisms are determined by the capture of a neighboring
    element and the subsequent synchronization of the drops. This process forms a stable
    drop of a larger size. The presence of boundary regions with a high gradient of the input energy
    rate is the main mechanism for the formation of clusters of lattice elements, demonstrating the
    simultaneous drop of the accumulated energy. Such a synchronization is achieved due to the
    high mutual variability of energy at each iteration step. The second important mechanism of
    cluster growth is typical for the formed clusters that exceed the size of the near-boundary region
    of high inhomogeneity of the energy inflow. As the cluster size grows, the capture area of
    neighboring elements that are not included in the cluster expands. Accordingly, the probability
    that the energy of the neighboring element is in the capture area increases. The calculations
    show that the mean time of reaching the given size of the cluster on the lattice at different spatial
    dimensions d and at different coupling parameters confirms the presence of two time intervals
    with a different mechanism of cluster formation. In this case, the growth of large clusters has
    a power-law character, with an exponent determined by the dimension d.
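    A minimal sketch of a one-dimensional OFC-type dissipative cell model with open boundaries, of the kind analyzed above; the lattice size, failure threshold, coupling parameter and drive scheme are illustrative assumptions.

```python
import random

N, ALPHA, THRESHOLD = 100, 0.2, 1.0   # lattice size, coupling alpha, failure threshold

def ofc_step(e):
    """One driving step of a 1D OFC-type model with open boundaries.
    Returns the updated energies and the set of sites that dropped (an 'event')."""
    # Uniform slow drive: raise every element until the maximum reaches the threshold.
    de = THRESHOLD - max(e)
    e = [x + de for x in e]
    dropped = set()
    unstable = [i for i, x in enumerate(e) if x >= THRESHOLD]
    while unstable:
        i = unstable.pop()
        dropped.add(i)
        # Dissipative redistribution: each neighbor receives ALPHA * e[i];
        # energy crossing the open boundary is lost.
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                e[j] += ALPHA * e[i]
                if e[j] >= THRESHOLD and j not in dropped:
                    unstable.append(j)
        e[i] = 0.0
    return e, dropped

energies = [random.random() for _ in range(N)]
sizes = []
for _ in range(10000):
    energies, event = ofc_step(energies)
    sizes.append(len(event))
print("largest event size:", max(sizes))
```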

  • ESTIMATION OF REALIZABILITY OF SOLVING TASKS ON COMPUTER SYSTEMS IN GROUP MAINTENANCE

    V.A. Pavsky, K.V. Pavsky
    Abstract

    The increase in the performance of computer systems (CS) is associated with both scalability
    and the development of the architecture of the computing elements of the system. Cluster CS,
    which are scalable, make up 93% of the Top500 supercomputers and are high-performance. At the
    same time, there is still the problem of efficient and complete use of all available computer resources
    of the supercomputer and CS for solving user tasks. Failures of elementary machines
    (nodes, computing modules) reduce the technical and economic efficiency of CS and the efficiency
    of solving user tasks. Therefore, when planning the process of solving problems, reducing the loss
    of time to restore CS from failures is an important problem. To quantify the potential capabilities
    of computer systems, indices of the realizability of solving tasks are used. These indices characterize
    the quality of the systems, taking into account reliability, time characteristics and service parameters
    of incoming tasks. The paper proposes a mathematical model of the functioning of a
    computer system with a buffer memory for group maintenance of a task flow. The mathematical
    model uses queuing theory methods based on probability theory and systems of differential equations.
    It should be noted that the method of composing systems of differential equations is simple enough if the corresponding graph scheme is available. However, an exact solution of such systems of
    equations, as a rule in elementary functions, either does not exist or yields formulas that are difficult to interpret.
    Here the solution is obtained in the stationary mode of operation of the queuing system. The indices
    allowing to estimate the fullness of the buffer memory are calculated. The obtained analytical
    solutions are simple and can be used for express analysis of the functioning of computer systems.
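    For reference (a generic illustration, not the paper's specific model), the stationary regime of a birth-death queuing scheme with arrival rate λ and service rate μ is obtained from balance equations of the form

$$
\lambda p_{k-1} + \mu p_{k+1} = (\lambda + \mu)\,p_k,\qquad
p_k = \left(\frac{\lambda}{\mu}\right)^{k} p_0,\qquad
p_0 = 1 - \frac{\lambda}{\mu}\quad(\lambda < \mu),
$$

    from which indices such as the mean buffer occupancy $\sum_k k\,p_k$ follow directly in elementary form.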

  • DEVELOPMENT OF HOMOMORPHIC DIVISION METHODS

    I.D. Rusalovsky, L.K. Babenko, O.B. Makarevich
    Abstract

    The article deals with the problems of homomorphic cryptography. Homomorphic cryptography
    is one of the young areas of cryptography. Its distinguishing feature is that it is possible to
    process encrypted data without decrypting it first, so that the result of operations on encrypted
    data is equivalent to the result of operations on open data after decryption. Homomorphic encryption
    can be effectively used to implement secure cloud computing. To solve various applied problems,
    support for all mathematical operations, including the division operation, is required, but
    this topic has not been sufficiently developed. The ability to perform the division operation
    homomorphically will expand the application possibilities of homomorphic encryption and will
    allow performing a homomorphic implementation of many algorithms. The paper considers the
    existing homomorphic algorithms and the possibility of implementing the division operation within
    the framework of these algorithms. The paper also proposes two methods of homomorphic division.
    The first method is based on the representation of ciphertexts as simple fractions and the
    expression of the division operation through the multiplication operation. As part of the second
    method, it is proposed to represent ciphertexts as an array of homomorphically encrypted bits, and
    all operations, including the division operation considered in this article, are implemented
    through binary homomorphic operations. Possible approaches to the implementation of division
    through binary operations are considered and an approach is chosen that is most suitable for a
    homomorphic implementation. The proposed methods are analyzed and their advantages and disadvantages
    are indicated.
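    A toy sketch of the first method described above: each value is carried as a fraction of two encrypted components, so that division reduces to component-wise multiplication. Plain integers stand in for ciphertexts here purely to show the arithmetic; a real scheme would apply its homomorphic multiplication to the components.

```python
from dataclasses import dataclass

@dataclass
class EncFraction:
    num: int   # stand-in "ciphertext" of the numerator
    den: int   # stand-in "ciphertext" of the denominator

    def mul(self, other):
        return EncFraction(self.num * other.num, self.den * other.den)

    def div(self, other):
        # a/b ÷ c/d = (a·d) / (b·c): only multiplications of components are needed,
        # which is exactly what a multiplicatively homomorphic scheme provides.
        return EncFraction(self.num * other.den, self.den * other.num)

a = EncFraction(6, 1)     # encodes 6
b = EncFraction(4, 1)     # encodes 4
q = a.div(b)              # encodes 6/4
print(q.num, "/", q.den)  # 6 / 4, i.e. 1.5 after decryption and reduction
```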

SECTION II. INFORMATION PROCESSING ALGORITHMS

  • BIOINSPIRED ALGORITHM FOR SOLVING INVARIANT GRAPH PROBLEMS

    O.B. Lebedev, A.A. Zhiglatiy
    Abstract

    A bioinspired method for solving a set of invariant combinatorial-logical problems on
    graphs is proposed: the formation of a graph matching, the selection of an internally stable set of
    vertices, and the selection of a graph clique. A modified paradigm of the ant colony is described,
    which uses, in contrast to the canonical method, the mechanisms for generating solutions on the
    search space model in the form of a star graph. The problem of forming an internally stable set of
    vertices in a graph can be formulated as a partitioning problem. At the initial stage, the same
    (small) amount of pheromone ξ/m, where m=|E|, is deposited on all edges of the star graph H.
    The process of finding solutions is iterative. Each iteration l includes three stages. Agents have
    memory. At each step t, the memory of the agent ak contains the amount of pheromone φj(t) deposited
    on each edge of the graph H. At the first stage, each agent ak of the population uses a constructive
    algorithm to find the solution Ur0k, calculates the estimate of the solution ξk(Ur0k) and the value of the degree of suitability of the solution obtained by the agent φk (the amount of pheromone corresponding to the estimate). At the second stage, after the complete formation of solutions
    by all agents at the current iteration, the pheromone ωj accumulated in the j-th cell in the
    CEPb buffer array is added to each j-th cell of the main array Q2={qj|j=1,2,…,m} of the CEP0
    collective evolutionary memory. At the third stage, the general evaporation of the pheromone occurs
    on the set of edges E of the star graph H. The time complexity of the algorithm, obtained experimentally,
    agrees with theoretical estimates and for the considered test problems is O(n²).
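    A compressed sketch of the three-stage iteration described above (constructive solutions by agents, accumulation of deposits in the collective memory, global evaporation), written in generic ant-colony terms; the edge count, number of agents, evaporation rate and the stand-in quality value are assumptions, not the authors' parameters.

```python
import random

M_EDGES = 30          # m = |E|, edges of the star graph H (assumed)
XI = 1.0              # initial pheromone budget
RHO = 0.1             # evaporation rate (assumed)

pheromone = [XI / M_EDGES] * M_EDGES              # uniform initial deposit xi/m

def construct_solution():
    # Stage 1 (per agent): edges are chosen with probability proportional to pheromone.
    return random.choices(range(M_EDGES), weights=pheromone, k=5)

for iteration in range(100):
    deposits = [0.0] * M_EDGES                    # buffer array (collective memory)
    for agent in range(10):
        solution = construct_solution()
        quality = random.random()                 # stand-in for the suitability φk from ξk
        for j in solution:                        # stage 2: accumulate deposits ωj
            deposits[j] += quality
    for j in range(M_EDGES):                      # stage 3: evaporation plus deposit
        pheromone[j] = (1 - RHO) * pheromone[j] + deposits[j]

print("most reinforced edge:", max(range(M_EDGES), key=lambda j: pheromone[j]))
```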

  • METHODS AND ALGORITHMS FOR TEXT DATA CLUSTERING (REVIEW)

    V.V. Bova, Y.A. Kravchenko, S.I. Rodzin
    Abstract

    The article deals with one of the important tasks of artificial intelligence – machine processing
    of natural language. The solution of this problem based on cluster analysis makes it possible
    to identify, formalize and integrate large amounts of linguistic expert information under conditions
    of information uncertainty and weak structure of the original text resources obtained from
    various subject areas. Cluster analysis is a powerful tool for exploratory analysis of text data,
    which allows for an objective classification of any objects that are characterized by a number of
    features and have hidden patterns. The paper reviews and analyzes modern modified algorithms for
    agglomerative clustering (CURE, ROCK, CHAMELEON), non-hierarchical clustering (PAM, CLARA)
    and the affine transformation algorithm used at various stages of text data clustering; their effectiveness
    is verified by experimental studies. The paper substantiates the
    requirements for choosing the most efficient clustering method for solving the problem of increasing the efficiency of intellectual processing of linguistic expert information. Also, the paper considers
    methods for visualizing clustering results for interpreting the cluster structure and dependencies
    on a set of text data elements and graphical means of their presentation in the form of
    dendrograms, scatterplots, VOS similarity diagrams, and intensity maps. To compare the quality of
    the algorithms, internal and external performance metrics were used: "V-measure", "Adjusted
    Rand index", "Silhouette". Based on the experiments, it was found that it is necessary to use a
    hybrid approach, in which, for the initial selection of the number of clusters and the distribution of
    their centers, use a hierarchical approach based on sequential combining and averaging the characteristics
    of the closest data of a limited sample, when it is not possible to put forward a hypothesis
    about the initial number of clusters. Next, connect iterative clustering algorithms that provide
    high stability with respect to noise features and the presence of outliers. Hybridization increases
    the efficiency of clustering algorithms. The research results showed that in order to increase the
    computational efficiency and overcome the sensitivity when initializing the parameters of clustering
    algorithms, it is necessary to use metaheuristic approaches to optimize the parameters of the
    learning model and search for a global optimal solution.
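    The external and internal quality metrics named above ("V-measure", "Adjusted Rand index", "Silhouette") are available in scikit-learn; a minimal sketch on synthetic data standing in for vectorized texts (the data and the KMeans configuration are illustrative assumptions):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score, adjusted_rand_score, silhouette_score

# Synthetic stand-in for vectorized text documents with known reference labels.
X, y_true = make_blobs(n_samples=300, centers=4, random_state=0)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# External metrics compare against reference labels; the silhouette is internal.
print("V-measure:          ", v_measure_score(y_true, labels))
print("Adjusted Rand index:", adjusted_rand_score(y_true, labels))
print("Silhouette:         ", silhouette_score(X, labels))
```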

  • EVOLUTIONARY POPULATION METHOD FOR SOLVING THE TRANSPORT PROBLEM

    B.K. Lebedev, O.B. Lebedev, E.O. Lebedeva
    Abstract

    The paper considers an evolutionary population method for solving a transport problem
    based on the metaheuristics of crystallization of a placer of alternatives. We study a closed (or
    balanced) model of the transport problem: the amount of cargo from suppliers is equal to the total
    amount of needs at destinations. The goal of optimization is to minimize the cost (achieving a minimum
    of transportation costs) or distances and the criterion of time (a minimum of time is spent on
    transportation). The metaheuristics of the crystallization of a placer of alternatives is based on a
    strategy based on remembering and repeating past successes. The strategy emphasizes «collective
    memory», which refers to any kind of information that reflects the past history of development and
    is stored independently of individuals. An ordered sequence Dk of routes is considered as a code
    for solving the transport problem. The objects are routes, the alternatives are the set of positions P
    in the list, where np is the number of positions in the list Dk. The set of objects Dk corresponds to
    the set of all routes. The set of alternative states P of the object corresponds to the set of alternative
    options for placing the object in the list Dk. The operation of the population evolutionary algorithm
    for the crystallization of a placer of alternatives is based on a collective evolutionary
    memory called a placer of alternatives. A scattering of solution alternatives is a data structure
    used as a collective evolutionary memory that carries information about the solution, including
    information about the realized alternatives of agents in this solution and about the usefulness of
    the solution. A constructive algorithm for the formation of a reference plan by decoding the list Dk
    has been developed. At each step t, the problem of choosing the next route in the sequence Dk and
    determining the amount of cargo transported from the point of departure Ai to the point of destination
    Bj along this route is solved. The developed algorithm is population-based, implementing the
    strategy of random directed search. Each agent is a code for some solution of the transport problem.
    At the first stage of each iteration l, a constructive algorithm based on the integral placer of
    alternatives generates nk decision codes Dk. The formation of each decision code Dk is performed
    sequentially in steps by sequentially selecting an object and position. For the constructed solution
    code Dk, the solution estimate ξk and the utility estimate δk are calculated. An individual scattering
    of alternatives Rk is formed, and a transition is made to the construction of the next solution code.
    At the second stage of the iteration, the integral placer of alternatives formed at the previous iterations
    1 to (l-1) is summed with all individual placers of alternatives formed at iteration l.
    At the third stage of iteration l, all integral utility estimates r*αβ of the integral placer of alternatives
    R*(l) are reduced by δ*. The algorithm for solving the transport problem was implemented in
    C++ in the Windows environment. Comparison of the values of the criterion, on test examples,
    with a known optimum showed that in 90% of the examples the solution obtained was optimal, in
    2% of the examples the solutions were 5% worse, and in 8% of the examples the solutions differed
    by less than 2%. The time complexity of the algorithm, obtained experimentally, lies within O(n²).
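    A minimal sketch of the kind of constructive decoding described above: walking an ordered route list Dk of a balanced transport problem and shipping min(remaining supply, remaining demand) on each route yields a feasible reference plan. The supplies, demands, costs and route order below are illustrative assumptions.

```python
def decode_plan(route_order, supply, demand):
    """Build a feasible plan for a balanced transport problem by walking the
    ordered route list and shipping min(remaining supply, remaining demand)."""
    supply, demand = supply[:], demand[:]
    plan = {}
    for (i, j) in route_order:                 # Dk: ordered sequence of routes (Ai, Bj)
        qty = min(supply[i], demand[j])
        if qty > 0:
            plan[(i, j)] = qty
            supply[i] -= qty
            demand[j] -= qty
    return plan

# Balanced instance: total supply equals total demand (assumed data).
supply = [30, 40, 20]
demand = [35, 25, 30]
routes = [(0, 0), (1, 0), (1, 1), (2, 2), (0, 2), (2, 1), (0, 1), (1, 2), (2, 0)]
plan = decode_plan(routes, supply, demand)

cost_matrix = [[4, 6, 8], [5, 3, 7], [9, 6, 2]]   # illustrative transportation costs
total_cost = sum(cost_matrix[i][j] * q for (i, j), q in plan.items())
print(plan, "total cost:", total_cost)
```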

  • IMPLEMENTATION OF A TRADING ADVISOR FOR THE METATRADER 5 MULTI-MARKET PLATFORM

    T.N. Kondratieva, I.F. Razveeva, E.R. Muntyan
    Abstract

    The article describes the process of creating a flexible trading strategy for algorithmic trading
    in a specialized development environment MQL5 IDE in the MetaTrader 5 multi-asset platform.
    The advantages and expediency of using the MetaTrader 5, MetaTrader 4 platforms and
    their respective trading applications Trade Assistant, Forex Trade Manager, Trade Time Manager,
    CAP Gold Albatross EA and Fast Copy are shown. A comparative analysis of the existing implementations
    of trading advisors based on various indicators, as well as those created using intelligent
    technologies, has been carried out. In the previously implemented trading advisors, for predicting
    the prices of the volatility of financial assets, flexible learning algorithms, compensatory
    fuzzy logic models, and technical analysis tools are mainly used, which entails high time costs, in
    conditions of high financial market volatility. To solve this problem, the authors propose an integrated
    approach based on the use of technical analysis tools built into the MetaTrader 5 multi-asset
    platform and the trading strategy automation algorithm, which makes it possible to obtain a
    forecast of a given accuracy for the selected instrument in real time. The paper substantiates the
    need to introduce elements of automatic trading when analyzing the quotes of financial instruments
    and managing a trading account in order to avoid mechanical, analytical, organizational
    and psychological mistakes made by traders. The study shows step by step the process of creating,
    debugging, testing, optimizing and executing the implemented trading advisor. An algorithm for
    automating a trading strategy has been developed and its block diagram has been presented.
    The initial data for the trading strategy automation algorithm are determined, and the mathematical
    apparatus for calculating indicators of limit orders of the TakeProfit and StopLoss types is
    described. Since exchange trading is associated with many risks, we analyzed the impact of different
    values of lots of TakeProfit and StopLoss limit orders on possible profit and drawdown limit
    (loss). As a result, the EA worked correctly in real time without human intervention for eight weeks using two trading strategies. The results of testing the developed software allow us to draw
    the following conclusions: when the EA shows a high degree of recommendation, the actual financial
    assets show high efficiency.
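    A minimal sketch of the TakeProfit/StopLoss arithmetic referred to above for long (buy) and short (sell) positions; the point size, entry price and distances in points are assumed values, and a real MQL5 advisor would place the orders through the platform's own trade functions.

```python
def limit_order_levels(entry_price: float, point: float,
                       tp_points: int, sl_points: int, is_buy: bool = True):
    """Return (TakeProfit, StopLoss) price levels for a market order."""
    if is_buy:
        take_profit = entry_price + tp_points * point
        stop_loss = entry_price - sl_points * point
    else:  # sell: the levels are mirrored around the entry price
        take_profit = entry_price - tp_points * point
        stop_loss = entry_price + sl_points * point
    return take_profit, stop_loss

# EURUSD-style quote with a 0.00001 point, 300-point TP and 150-point SL (assumed values).
tp, sl = limit_order_levels(entry_price=1.07250, point=0.00001,
                            tp_points=300, sl_points=150)
print(f"TakeProfit={tp:.5f}  StopLoss={sl:.5f}")
```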

  • METHOD AND ALGORITHM FOR THE SYNTHESIS OF CONTROLLED FIRST-KIND CHEBYSHEV DIGITAL LOW-PASS FILTERS BASED ON THE BILINEAR TRANSFORMATION METHOD

    I.I. Turulin, S.M. Hussein
    Abstract

    The article presents a technique for the synthesis of controlled digital recursive Chebyshev lowpass
    filters of the first kind with an infinite impulse response. The frequency response of such filters has
    ripples in the passband and is as flat as possible in the stopband. Controllability is understood as an
    explicit dependence of the filter coefficients on the cutoff frequency. The technique is based on the bilinear
    transformation of the transfer function of the analog low-pass filter prototype and the frequency
    transformation of the amplitude-frequency characteristics of the obtained digital filter. The main idea of the technique is that for an analog prototype filter with a cutoff frequency of 1 rad/s, the parameters of
    the transfer function of biquadratic or bilinear links, which have the dimension of frequency, will be
    numerically equal to the correction factors for similar parameters of a controlled filter with an arbitrary
    cutoff frequency. As an example, the synthesis of a digital Chebyshev filter of the first kind of the fifth
    order is considered. In this article, the transfer function of an arbitrary order filter is represented as a
    cascade connection of second-order links if the filter is of an even order. In the case of an odd order greater
    than one, one cascaded link of the first order is added. Despite the relative simplicity of the frequency
    conversion, in its practical use for digital filters synthesized using computer-aided design of digital filters
    (or using reference books containing calculated prototype low-pass filters for various approximations
    of the frequency response of an ideal low-pass filter), a number of non-trivial specific issues arise
    that complicate the engineering use of this method of synthesizing controlled digital filters. Therefore, in
    addition to the technique, a step-by-step algorithm has been developed that allows one to synthesize a
    filter without knowing these details. The algorithm is implemented in the Mathcad environment; as
    an example, a digital recursive Chebyshev filter of the 1st kind of the 5th order is calculated.
    The example shows the calculated coefficients of a digital controlled low-pass filter, which explicitly
    depend on the cutoff frequency, together with the amplitude-frequency characteristics of this filter and of its low-frequency
    prototype converted into a filter with the same cutoff frequency; the amplitude-frequency
    characteristics are given in the same coordinates. Due to the good formalization of the algorithm, the
    latter is suitable for the implementation of computer-aided design systems for controlled digital
    Chebyshev low-pass filters of the first kind.
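    For comparison with the controlled-filter technique described above, a fixed fifth-order Chebyshev type-I low-pass filter obtained through the bilinear transformation can be designed in a few lines with SciPy; the passband ripple, cutoff frequency and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs = 8000.0        # sampling rate, Hz (assumed)
fc = 1000.0        # cutoff frequency, Hz (assumed)
rp = 1.0           # passband ripple, dB (assumed)

# cheby1 designs the analog prototype and applies the bilinear transformation
# internally when analog=False; output='sos' gives a cascade of biquadratic links,
# as in the structure discussed in the abstract.
sos = signal.cheby1(N=5, rp=rp, Wn=fc, btype='low', analog=False,
                    output='sos', fs=fs)

# Amplitude-frequency characteristic of the resulting digital filter.
w, h = signal.sosfreqz(sos, worN=1024, fs=fs)
print("gain at fc:", 20 * np.log10(abs(h[np.argmin(abs(w - fc))])), "dB")
```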

  • ALGORITHM FOR COMPRESSION OF FLOATING-POINT DATA IN SCIENTIFIC RESEARCH SUPPORT SYSTEMS

    A.A. Chusov, M.A. Kopaeva
    Abstract

    The paper presents an original algorithm and its implementation for single pass real-time
    compression of streams of numeric floating-point data. The purpose of the research is to develop
    and formalize a single-pass algorithm of stream floating-point data compression in order to increase
    the performance of both encoding and decoding, because existing implementations
    provide insufficient compression speed, are too restrictive on hardware resources, and are limited
    in applicability to real-time stream compression of floating-point data.
    For that, the following issues have been addressed. The developed mathematical model and the
    algorithm for compression of scalar floating-point data are described together with results of experimental
    research of the compression method applied to one-dimensional and two-dimensional
    scientific data. The model is based upon the commonly used binary64 representation
    of the IEEE 754 standard, onto which extended real-line values are mapped. The algorithm
    can be implemented as part of high-performance distributed systems in which performance of
    input-output operations, as well as internetwork communication, are critical to overall efficiency.
    The performance and applicability of the algorithm in data stream compression result from its
    single-pass behaviour and relatively low requirements for the a priori known, statically defined amount of
    memory that implements the compression history on which the predictor, used in compression
    and decompression, is based. Indeed, the measured compression ratios are comparable to those
    of more resource-intensive universal coders, while providing significantly lower
    latency. Provided synchronization of parameters of both compressor and decompressor applied to
    a stream of vector values and assuming a correlation between absolute values of scalars of the
    same dimension within the vectors, further improvement of the predictor performance can be attained
    by means of SIMD-class parallelism which, in turn, is beneficial for overall performance of
    compression and decompression, provided that the underlying hardware is capable of addressing
    random-access memory based on offsets in a vector register, such as by employment of the
    VGATHER class instructions of Intel processors. In order to reduce the bottlenecks associated
    with input-output, an implementation of the algorithm is employed by the authors as part of a
    computing system used for parallel simulation of wave fields which is distributed via a network.
    The experiments described in the paper demonstrate significant performance increase of the proposed
    coder compared to well-known universal compressors, RAR, ZIP and 7Z, while the achieved
    compression factors remain comparable.
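    A toy sketch in the spirit of the predictor-based approach described above (not the authors' coder): each binary64 value is XORed with a last-value prediction, and the number of leading zero bytes of the residual is emitted as a cheap length code; the predictor and the framing are assumptions.

```python
import struct

def encode(values):
    """Last-value predictor + XOR residual; emit (#leading zero bytes, tail bytes)."""
    out = bytearray()
    prev = 0
    for v in values:
        bits = struct.unpack('<Q', struct.pack('<d', v))[0]
        resid = bits ^ prev                       # small residual for correlated samples
        raw = resid.to_bytes(8, 'big')
        nz = len(raw) - len(raw.lstrip(b'\x00'))  # leading zero bytes of the residual
        out.append(nz)
        out += raw[nz:]
        prev = bits
    return bytes(out)

def decode(blob, count):
    vals, prev, pos = [], 0, 0
    for _ in range(count):
        nz = blob[pos]; pos += 1
        resid = int.from_bytes(b'\x00' * nz + blob[pos:pos + 8 - nz], 'big')
        pos += 8 - nz
        prev ^= resid
        vals.append(struct.unpack('<d', struct.pack('<Q', prev))[0])
    return vals

data = [1.0, 1.0000001, 1.0000002, 1.0000003]
packed = encode(data)
assert decode(packed, len(data)) == data
print(len(packed), "bytes instead of", 8 * len(data))
```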

  • DEVELOPMENT OF A METHOD FOR PERSONAL IDENTIFICATION BASED ON THE PATTERN OF PALM VEINS

    V.A. Chastikova, S.A. Zherlitsyn
    Abstract

    The article describes the work on the creation of a neural network method for identifying
    a person based on the mechanism of scanning and analyzing the pattern of palm veins as a biometric
    parameter. As part of the study, the prerequisites, goals and reasons for which the development
    of a reliable biometric identification system is an important and relevant area of activity
    are described. A number of problems are formulated that are inherent in existing methods for
    solving the problem: the graph method and the method based on calculating the distance expressed
    in various interval metrics. The description of the principles of their work is given.
    The tasks solved by personal identification systems are formulated: comparison of the subject of
    identification with its identifier, which uniquely identifies this subject in the information system.
    A mechanism for reading a pattern of veins from the palm of the hand, developed for analyzing
    an image obtained with a digital camera sensitive to infrared radiation, is described. When the
    palm in the frame is illuminated by light of the near-infrared range, the image obtained
    from the camera reveals the pattern of veins, vessels and capillaries that lie under the
    skin. Depending on the organization, the identification system may, based on the provided identifier,
    determine the appropriate access subject or verify that the same identifier belongs to the
    intended subject. Three methods for further analysis of biometric data and personal identification
    are given: approaches based on categorical classification and binary classification, as well
    as a combined approach, in which identification is first performed by the first method and then by
    the second, but for the access identifier already determined at the first stage. The resulting
    architecture of the neural network for the categorical classification of the vein pattern is presented,
    a method for calculating the number of model parameters depending on the number of
    registered subjects is described. The main conclusions and experimental measurements of the
    accuracy of the system when implementing various methods are presented, as well as diagrams of
    changes in the accuracy of models during training. The main advantages and disadvantages of the
    above methods are revealed.

  • METHODOLOGY OF TOPOLOGICAL RESTRICTIONS FOR INTENSIVELY USED FPGA RESOURCE

    K.N. Alekseev, D.A. Sorokin, A.L. Leont'ev
    Abstract

    In the paper we consider the problem of achieving high real performance of reconfigurable
    computer systems in implementing computationally expensive tasks from various problem areas.
    The parameters of the programs executed on reconfigurable systems determine their real performance.
    The main component of these programs is the computing data processing structures implemented
    as FPGA configuration files. At the same time, one of the key parameters of any computing
    structure is the clock frequency of its operation, which directly affects its performance. However,
    there are several problems concerning the achievement of high clock rates, and they cannot be solved
    with the help of modern CAD tools. The reason is the non-optimal topological placement of functional
    blocks of the computing structure within the field of FPGA primitives, especially with high resource
    utilization. Due to this, the load on the FPGA switching matrix increases, and, as a result,
    the connections among functionally dependent FPGA primitives turn out to be much longer than is
    acceptable. In addition, excessive connection length is observed when tracing connections among
    primitives that are placed on different FPGA chips or are physically separated by on-chip peripherals.
    In the paper we describe a methodology which provides optimization of the placement of computing
    structure elements on FPGA primitives, and minimizes the length of traces among primitives,
    and also minimizes the number of traces among physically separated FPGA topological sections.
    To validate the proposed methodology, we implemented the test task "FIR-filter" on a reconfigurable
    computer "Tertius." We have demonstrated the main problems concerning reaching the target clock
    rate and have described a method for their solution. Owing to our methodology, it is possible to
    increase the clock rate; hence, the performance of Tertius will increase by 25% without revising
    the functional circuit of the task’s computing structure. According to our current research of the
    suggested methodology and its efficiency, we claim that CAD tools, used for creating topological
    restrictions and based on our methodology, will significantly reduce the time for developing programs
    with the required characteristics for reconfigurable computer systems.

SECTION III. ELECTRONICS, COMMUNICATIONS AND NAVIGATION

  • IMPULSE CHARACTERISTICS OF SILICON STRUCTURES WITH N-P JUNCTION IRRADIATED BY PROTONS

    N.M. Bogatov, V.S. Volodin, L.R. Grigoryan, A.I. Kovalenko, M.S. Kovalenko
    Abstract

    Currently, methods are being actively developed to create semiconductor structures with desired
    properties by irradiation with ionizing particles (radiation defect engineering). The interaction
    of radiation defects with impurities, dislocations and other structural defects causes a change in the
    properties of semiconductors and semiconductor devices. Irradiation with protons makes it possible
    to controllably create radiation defects with a distribution maximum in a pre-calculated region. The
    aim of this work is to analyze the effect of irradiation with low-energy protons on the impulse characteristics
    of silicon structures with an n+-p junction. The task is to determine the effective lifetime τ of
    charge carriers in the space charge region (SCR) of the n+-p junction. The n+-p-p+-structures made
    of silicon grown by the Czochralski method, irradiated from the side of the n+-layer by a low-energy
    proton flux at sample temperatures of 300 K and 83 K were studied. To measure the impulse characteristics,
    bipolar rectangular voltage pulses with a constant amplitude of 10 mV and a frequency of
    1 MHz were used. The experimental data are explained using models of nonstationary charge carrier
    transport in inhomogeneous semiconductors and the formation of radiation defects in silicon under the action of protons. Depth distributions of the average number of primary radiation defects are
    calculated: interstitial silicon, vacancies, divacancies created by one proton per unit length of the
    projective path. It is shown that irradiation with protons with a dose of 10¹⁵ cm⁻² and an energy of
    40 keV does not change the value of τ, but with an energy of 180 keV creates a region with an effective
    lifetime of 5.5·10⁻⁸ s in the SCR of the n+-p junction.

  • ANALYSIS OF UNDERLYING SURFACE IN IMAGE FORMATION IN DOPPLER BEAM SHARPENING MODE

    R.R. Ibadov, V.P. Fedosov, S.R. Ibadov
    Abstract

    Radar based on real beam scanning is widely used in both civil and military spheres. However,
    it is difficult to realize high azimuth resolution of a stationary platform or a platform with nonuniform
    motion using conventional signal processing algorithms. Doppler beam sharpening (DBS)
    technology offers a combination of high resolution and real-time performance compared to Synthetic
    Aperture Radar (SAR) technology; it uses the Doppler shift between echoes from objects on the
    underlying surface along the azimuth direction, caused by the movement of the radar platform. Unfortunately,
    the traditional DBS imaging algorithm, which constructs a Doppler filter using an FFT,
    has a low azimuth resolution and a high level of side lobes, which limits further improvement in azimuth
    resolution. In the article, the algorithm for constructing a map of the underlying surface in the
    direction of movement of the radar carrier based on the DBS was studied and the map image was
    analyzed using the Fourier transform. A three-dimensional view of the map of the underlying surface
    is shown with the distribution of values in the images. The subject of the study is the method and algorithm
    for constructing a map of the underlying surface in the Doppler beam sharpening mode and
    identifying chain structures based on the analysis of the Fourier transform. The object of the study is
    a set of test images of the terrain map. The result of the study is the development of an algorithm for
    constructing a map in order to identify chain structures on the underlying surface. The novelty of the
    work is an algorithm that allows you to build a map of the underlying surface based on the DBS,
    taking into account the blind zone in the direction of movement of the radar carrier. The results obtained
    also make it possible to reveal chain structures in the region of interest. The possibility of
    estimating the periodicity of image elements using the Fourier transform has been tested. As a result
    of solving the tasks set, the following conclusions can be drawn: an algorithm has been developed
    for constructing a map of the underlying surface based on DBS with image correction in the direction
    of movement of the radar carrier, and analysis of the results of the study showed that the proposed algorithm
    makes it possible to identify chain structures on the map.
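    A minimal sketch of the FFT-based Doppler filter bank that underlies classical DBS imaging, applied to synthetic slow-time data from one range cell; the pulse count, pulse repetition frequency and target Doppler are illustrative assumptions.

```python
import numpy as np

prf = 1000.0            # pulse repetition frequency, Hz (assumed)
n_pulses = 128          # slow-time samples in one coherent processing interval
f_doppler = 120.0       # Doppler shift of a point scatterer, Hz (assumed)

t = np.arange(n_pulses) / prf
# Slow-time echo from one range cell: a complex exponential plus noise.
echo = np.exp(2j * np.pi * f_doppler * t) + 0.1 * (
    np.random.randn(n_pulses) + 1j * np.random.randn(n_pulses))

# The FFT across slow time acts as a bank of Doppler filters; each bin maps to
# an azimuth direction, which is what sharpens the real beam.
spectrum = np.fft.fftshift(np.fft.fft(echo * np.hanning(n_pulses)))
freqs = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / prf))
print("estimated Doppler:", freqs[np.argmax(np.abs(spectrum))], "Hz")
```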

  • DEVELOPMENT OF ADAPTIVE OFDM-BASED COMMUNICATION SYSTEM FOR TROPOSPHERE AND RADIO RELAY CHANNEL

    P.V. Luferchik, A.A. Komarov, P.V. Shtro, A.N. Konev
    Abstract

    It is known that inter-symbol interference may occur during data transmission in radio relay
    and tropospheric communication systems. The presence of multipath propagation, frequency-selective
    fading and extreme instability in the tropospheric and radio relay channel significantly reduces the energy efficiency of the communication system. The aim of the work was to increase
    the efficiency of using the channel for radio relay and tropospheric communication by using
    OFDM (orthogonal frequency-division multiplexing) signals in the system using adaptive coding
    and modulation. In the course of execution, the modulator and demodulator models of the OFDM
    signal are implemented. When using various signal code structures in various reception/
    transmission conditions, it is possible to achieve optimal use of the frequency and energy
    resources, to create systems that adapt to the conditions of signal propagation. To implement this
    mechanism, a service field was introduced into the transmitted service data, which contains information
    about the code rate used, the modulation type, and the interleaving depth. This approach
    allows optimizing the use of energy and frequency resources. Together with the use of channel
    quality estimation algorithms, it becomes possible to dynamically change the signal-code structure
    when the reception conditions change. By adjusting the interleaving depth, it is possible to optimize
    the S/N threshold or the amount of information delay in the channel, depending on the system
    requirements. The use of adaptive choice of code rate and modulation will allow more efficient use
    of the channel resource with a constant change in its state. The obtained results will significantly
    increase the energy efficiency of the OFDM system, lead to stable communication in nonstationary
    channels and increase the throughput.

  • DEVELOPMENT OF ENERGY EFFICIENT COMMUNICATION SYSTEM IN THE TROPOSPHERE RADIO CHANNEL BASED ON OFDM SIGNALS

    P.V. Luferchik, P.V. Shtro, A.N. Konev, A.A. Komarov
    Abstract

    It is known that inter-symbol interference may occur during data transmission in radio relay
    and tropospheric communication systems. The presence of multipath propagation and frequency-selective
    fading in the tropospheric, radio relay channel significantly reduces the energy efficiency
    of the communication system as a whole. The aim of the work was to increase the efficiency of
    using the channel for radio relay and tropospheric communication by using OFDM (orthogonal
    frequency-division multiplexing) signals, applying methods of reducing the peak factor of the OFDM signal
    and increasing the linearity of the transmission path. To evaluate digital predistortion algorithms
    in the Matlab/Simulink environment, a model was developed for the LMS, NLMS, RLS, RPEM
    methods and a power amplifier model with real characteristics. Based on the results of modeling
    algorithms, RLS was chosen. In addition, in this work, a modified version of the adaptation algorithm
    based on the recursive least squares method (RLSm) was developed. The main results of the
    modification are a decrease in the number of arithmetic operations per iteration (by more than
    a factor of 5), an increase in the stability of the adaptation algorithms due to the introduction of regularization
    methods, and a decrease in the convergence time due to the introduction of an exponential dependence.
    Various algorithms for reducing the peak factor of an OFDM signal were investigated;
    the best result was achieved by combining Tone reservation (TR) and Active constellation extension
    (ACE). Simulation in the Matlab/Simulink environment showed that the combination of TR
    and ACE algorithms reduces the peak factor of OFDM signals by ~5dB for BPSK data stream and
    ~4.5dB for 8-PSK, QAM-16, QAM-64, QAM-128 and QAM-256. To increase the linearity of the
    transmission path, the RLSm digital pre-distortion algorithm was selected and upgraded; it made
    it possible to reduce the magnitude of the error vector modulus (EVM) by 13.5dB, and also increase
    the modulation/error ratio (MER) by 13.6dB.
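    A small sketch of the peak factor (peak-to-average power ratio) that the TR and ACE methods above are designed to reduce, measured on a random QPSK OFDM symbol; the subcarrier count and modulation are illustrative assumptions.

```python
import numpy as np

n_sub = 256                                   # number of OFDM subcarriers (assumed)
rng = np.random.default_rng(0)

# Random QPSK symbols on the subcarriers, converted to the time domain by IFFT.
qpsk = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
time_signal = np.fft.ifft(qpsk) * np.sqrt(n_sub)   # unitary scaling

# Peak-to-average power ratio of the OFDM symbol, in dB.
power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.1f} dB")             # typically around 8-11 dB before TR/ACE
```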

  • MODELING RESULTS OF THE TURBULENT SURFACE LAYER ELECTRODYNAMIC STRUCTURE

    O.V. Belousova, G.V. Kupovkh, A.G. Klovo, V.V. Grivtsov
    Abstract

    The article presents the results of mathematical modeling of turbulent surface layer
    electrodynamic structure. A model of a stationary turbulent electrode effect operating near the
    earth's surface is used. The analysis of the equations by methods of similarity theory made it possible to
    adopt a number of reasonable physical assumptions, which in turn allowed analytical solutions to be obtained.
    Analytical formulas have been obtained for calculating the profiles of concentrations of small ions
    (aeroions), the density of the space electric charge and the electric field strength in a turbulent
    electrode layer. As a result of mathematical modeling, the dependences of the electrical characteristics
    in the surface layer on the values of the electric field, the degree of turbulent mixing and aerosol pollution
    of the atmosphere are investigated. It is shown that the parameter of the electrode effect (the
    ratio of the values of the electric field strength on the earth's surface and at the upper boundary of
    the electrode layer) practically does not depend on atmospheric conditions, whereas the height of
    the electrode layer and, accordingly, the scale of the distribution of the electrical characteristics
    of the surface layer vary significantly. The intensification of turbulent mixing in the surface layer
    leads to an increase in the height of the electrode layer and, as a consequence, the scale of distribution
    of its parameters. The strengthening of the electric field, or air pollution by aerosol particles of sufficient concentration, leads to a decrease in the height of the electrode layer. An increase in the concentration
    of aerosol particles in the atmosphere reduces the values of the electric charge density at the
    earth's surface. Theoretical calculations are in good agreement with experimental data and the
    results of numerical modeling of the surface layer electrical structure. The analytical formulas
    obtained in the work for calculating the electrical characteristics of the surface layer and the results
    of calculations can be useful in solving a number of applied problems of geophysics, in particular
    for monitoring the electrical state of the atmosphere.