No. 2 (2020)

Published: 2020-07-20

SECTION I. INTELLIGENT SYSTEMS

  • MULTILEVEL APPROACH FOR HIGH DIMENSIONAL 3D PACKING PROBLEM

    V. V. Kureichik, A. E. Glushchenko
    Abstract

    The article considers one of the important combinatorial optimization problems: the problem
    of 3D packing of different elements into a fixed volume. It belongs to the class of NP-hard
    optimization problems. The paper formulates the 3D packing problem and introduces a combined
    objective function that takes all the restrictions into account. Because of the complexity of
    this task, a multilevel approach is proposed, which consists in dividing the 3D packing problem
    into three subtasks and solving each subtask in a strict order. Moreover, for each of the
    subtasks a unique set of objects is defined that is not repeated in the remaining subtasks. To
    implement the multilevel approach, the authors developed a combined bio-inspired algorithm based
    on evolutionary and genetic search. This approach can significantly reduce the time needed to
    obtain a result, partially overcome the premature convergence of the algorithms, and produce
    sets of quasi-optimal solutions in polynomial time. A software package was developed, and
    algorithms for automated 3D packing based on the combined bio-inspired search were implemented.
    A computational experiment was conducted on test examples (benchmarks). The packing quality
    obtained with the developed combined bio-inspired algorithm is on average 5 % higher than the
    results obtained with known algorithms, and the solution time is 5–20 % lower, which indicates
    the effectiveness of the proposed approach. The series of tests and experiments also made it
    possible to refine the theoretical estimates of the time complexity of the packing algorithms:
    O(n²) in the best case and O(n³) in the worst.

  • HYBRID BIOINSPIRED ALGORITHM FOR ONTOLOGY MAPPING IN KNOWLEDGE EXTRACTION AND MANAGEMENT TASKS

    D.Y. Kravchenko, Y.A. Kravchenko, V. V. Markov
    Abstract

    The article is devoted to solving the problem of mapping ontological models in the processes
    of knowledge extraction and management. The relevance and significance of this task stem from
    the need to maintain reliability and eliminate redundancy of knowledge during the integration
    (unification) of structured information sources of various origins. The proximity and
    consistency of the conceptual semantics of the combined resource during the mapping is the main
    criterion for the effectiveness of the proposed solutions. The article considers the problem of
    choosing solution approaches that preserve semantics when mapping concepts. The choice of
    bio-inspired modeling as a strategy is substantiated, the effectiveness of various
    decentralized bio-inspired methods is analyzed, and the reasons that make hybridization
    necessary are identified. The paper proposes to solve the problem of mapping ontological models
    using a bio-inspired algorithm based on hybridizing the optimization mechanisms of the
    bacterial foraging and cuckoo search algorithms. Hybridizing these algorithms makes it possible
    to combine their main advantages: the sequential bacterial search, which provides a detailed
    exploration of local areas, and the large number of cuckoo agents performing global
    Lévy-flight movements. To evaluate the effectiveness of the proposed hybrid bio-inspired
    algorithm, a software product was developed and experiments were performed on the mapping of
    ontologies of different sizes. Each concept of an ontology has a certain set of attributes,
    which forms a semantic attribute vector. The degree of similarity of the semantic vectors of
    the compared concepts of the mapped ontologies is the criterion for their integration. To
    improve the quality of the mapping process, a new encoding of solutions has been introduced.
    The quantitative estimates obtained demonstrate time savings of at least 13 % when solving
    problems of relatively large dimension (from 500,000 ontograph vertices). The time complexity
    of the developed hybrid algorithm is O(n²). The described studies have a high level of
    theoretical and practical significance and are directly related to classical problems of
    artificial intelligence aimed at finding hidden dependencies and patterns in a multitude of
    knowledge elements.
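
    As a toy illustration of the integration criterion described above (our sketch: the attribute
    vectors, the cosine measure, and the 0.8 threshold are assumptions, and the hybrid
    bacterial/cuckoo search itself is not reproduced here):

      import math

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          nu = math.sqrt(sum(a * a for a in u))
          nv = math.sqrt(sum(b * b for b in v))
          return dot / (nu * nv) if nu and nv else 0.0

      def map_concepts(onto_a, onto_b, threshold=0.8):
          # Pair each concept of ontology A with the most similar concept of
          # ontology B if the similarity exceeds the integration threshold.
          mapping = {}
          for name_a, vec_a in onto_a.items():
              name_b, vec_b = max(onto_b.items(),
                                  key=lambda kv: cosine(vec_a, kv[1]))
              if cosine(vec_a, vec_b) >= threshold:
                  mapping[name_a] = name_b
          return mapping

      A = {"car": [1, 0, 1, 1], "wheel": [0, 1, 1, 0]}          # toy vectors
      B = {"automobile": [1, 0, 1, 0.9], "tyre": [0, 1, 0.8, 0]}
      print(map_concepts(A, B))   # {'car': 'automobile', 'wheel': 'tyre'}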

  • DETERMINING A SET OF CONDITIONS FOR AUTOMATICALLY FINDING THE BEST OPTION FOR HYBRID MACHINE TRANSLATION OF TEXT AT THE LEVEL OF GRAPHEMES

    V. S. Kornilov, V. M. Glushan, A.Y. Lozovoy
    Abstract

    The article is devoted to the algorithmic search for optimal solutions for evaluating and
    improving the quality of hybrid machine translation of text. The object of the research is
    texts in alphabetical languages with different bases (alphabets), as well as their translations
    into other alphabetical languages. Currently, existing methods and means of hybrid machine
    translation offer a wide variety of quality assessment algorithms, but their disadvantage is
    that most of them lack clear criteria, limitations and assessment schemes; as a result, the
    translation in most cases does not reach publication quality. The aim of the work is to
    determine a set of conditions for the automatic search for the best option of hybrid machine
    translation of text at the level of graphemes. The main tasks solved during the research are
    the search for qualitative and quantitative conditions, including the maximum, minimum and
    average values of the lengths of translations, reverse translations and editorial distances
    between pairs of texts that have the same meaning. The scientific novelty lies in using a
    graphical representation of the model of alphabetic languages at the level of graphemes in the
    form of a Cartesian coordinate system with a dimension equal to a unit editorial (Levenshtein)
    distance. The solution relies on de Goui's theorem, the current rules of standardization
    PR 50.1.027–2014 "Rules for the Provision of Translation and Special Types of Linguistic
    Services", the method of decanonicalization, and the model "original text – translation –
    reverse translation". As a result, actual and practically applicable solutions for the problems
    under consideration are obtained. In this regard, this work may be of interest to a wide range
    of specialists engaged in machine translation and translation studies.
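
    The unit editorial distance referred to above is the classical Levenshtein metric; a compact
    reference implementation (ours) at the grapheme level, usable on the "original text –
    translation – reverse translation" pairs:

      def levenshtein(s, t):
          # Unit-cost editorial (Levenshtein) distance at the grapheme level.
          prev = list(range(len(t) + 1))
          for i, cs in enumerate(s, 1):
              cur = [i]
              for j, ct in enumerate(t, 1):
                  cur.append(min(prev[j] + 1,                # deletion
                                 cur[j - 1] + 1,             # insertion
                                 prev[j - 1] + (cs != ct)))  # substitution
              prev = cur
          return prev[-1]

      # quality signal for a round trip through the translation model
      print(levenshtein("grapheme", "grafeme"))   # 2 (one substitution, one deletion)
      print(levenshtein("machine", "machine"))    # 0: ideal back-translation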

  • MULTI-STAGE METHOD FOR SHORT-TERM FORECASTING OF TEMPERATURE CHANGE MODES IN POWER CABLES

    N.K. Poluyanovich, N.V. Azarov, A.V. Ogrenichev, M.N. Dubyago
    Abstract

    The article is devoted to the development of methods for diagnostics and prediction of
    thermofluctuation processes in the insulating materials of power cable lines (PCL) of electric
    power systems, based on such artificial intelligence methods as neural networks and fuzzy
    logic. The necessity of developing a better methodology for the analysis of thermal conditions
    in PCL is shown. The urgency is substantiated of creating neural networks (NN) for assessing
    the throughput and for calculating and predicting the temperature of PCL conductors in real
    time from the data of the temperature monitoring system, taking into account changes in the
    current load of the line and the external heat-removal conditions. Based on the main criteria,
    traditional and neural network forecasting algorithms are compared, and the advantage of NN
    methods is shown. A classification of NN methods and models for predicting the temperature
    conditions of cable lines has been carried out. The proposed neural network algorithm for
    predicting the characteristics of electrical insulation was tested on a control sample of
    experimental data on which the artificial neural network had not been trained. The forecast
    results showed the effectiveness of the selected model. To solve the problem of PCL resource
    prediction, a feed-forward network with error back-propagation was selected, because networks
    of this type, together with the hyperbolic-tangent activation function, are to some extent a
    universal structure for many approximation and forecasting problems. A neural network was
    developed to determine the temperature regime of the current-carrying core of a power cable.
    A comparative analysis of the experimental and calculated temperature distributions was
    carried out, while various load modes and functions of cable current variation were
    investigated. The analysis showed that the maximum deviation of the neural network output from
    the training-sample data was less than 2.2 %, which is an acceptable result. The model can be
    used in devices and systems for continuous diagnostics of power cables by temperature
    conditions.
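
    A minimal sketch of the network class the abstract describes, a feed-forward net with a tanh
    activation trained by backpropagation, fitted here to synthetic data (the toy thermal law,
    inputs and architecture are our stand-ins, not the authors' monitoring data):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # inputs: load current (A) and ambient temperature (C); output: core temp (C)
      X = rng.uniform([100.0, 0.0], [400.0, 40.0], size=(500, 2))
      y = 20.0 + 0.00045 * X[:, 0] ** 2 + 0.9 * X[:, 1]   # toy thermal law

      net = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                       max_iter=3000, random_state=0),
      ).fit(X, y)

      print(net.predict([[250.0, 25.0]]))   # predicted core temperature, deg C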

  • ANALYSIS AND SELECTION OF METHODOLOGIES FOR SOLVING INTELLECTUALIZATION PROBLEMS IN SYSTEMS FOR FORECASTING THERMOFLUCTUATION PROCESSES IN CABLE NETWORKS

    N.K. Poluyanovich, M.N. Dubyago
    Abstract

    The article is devoted to the development of methods for diagnostics and prediction of
    thermofluctuation processes in the insulating materials of power cable lines (PCL) of electric
    power systems, based on such artificial intelligence methods as neural networks and fuzzy
    logic. The necessity of developing a better methodology for the analysis of thermal conditions
    in PCL is shown. The urgency is substantiated of creating neural networks (NN) for assessing
    the throughput and for calculating and predicting the temperature of PCL conductors in real
    time from the data of the temperature monitoring system, taking into account changes in the
    current load of the line and the external heat-removal conditions. Based on the main criteria,
    traditional and neural network forecasting algorithms are compared, and the advantage of NN
    methods is shown. A classification of NN methods and models for predicting the temperature
    conditions of cable lines has been carried out. To solve the problem of forecasting the PCL
    resource, a feed-forward network with error back-propagation was selected, because networks of
    this type, together with the hyperbolic-tangent activation function, are to some extent a
    universal structure for many approximation and forecasting problems. A neural network has been
    developed to determine the temperature regime of the current-carrying core of a power cable.
    A comparative analysis of the experimental and calculated temperature distributions was
    carried out, while various load modes and functions of cable current variation were
    investigated. The analysis showed that the maximum deviation of the neural network output from
    the training-sample data was less than 2.5 %, which is an acceptable result. To increase the
    accuracy, a large amount of input and output data was used when training the network, and its
    structure was somewhat refined. The model makes it possible to evaluate the current state of
    the insulation and to predict the residual life of the PCL, and it can be used in devices and
    systems for continuous diagnostics of power cables by temperature conditions.

  • ESTIMATING THE EFFECTIVENESS OF A METHOD FOR MINING ASSOCIATION RULES IN BIG DATA PROCESSING TASKS

    V. V. Bova, E.V. Kuliev, S.N. Scheglov
    Abstract

    Modern databases have significant volume and contain large masses of information. One of the
    popular methods of knowledge identification in tasks of analyzing and processing large data
    volumes is the family of algorithms for mining association rules. The paper solves the problem
    of building bases of association rules for the analysis of unstructured large data volumes by
    searching for various regularities while considering the importance of their characteristics.
    The authors propose a method for synthesizing the rule bases and building the transaction
    database that calculates threshold support values and applies criteria for estimating implicit
    associations. This makes it possible to extract both repeated and implicit association rules.
    To improve the computational effectiveness of rule extraction, the paper applies a genetic
    algorithm to optimize the input parameters of the feature search space. The developed method
    shortens the time of rule extraction, reduces the number of generated common rules, and avoids
    the resource-consuming procedure of pre-processing the synthesized rule base. The authors
    developed a program and algorithmic module to carry out experimental research on the proposed
    method for synthesizing association rules by filtering the input parameters of the search
    model when solving tasks of processing unstructured data. The experiments conducted on test
    transaction bases allow us to refine the theoretical estimates of the time complexity of the
    proposed method, which uses the genetic algorithm to calculate the weighted support of the set
    of rules, taking into account the a priori informative content of the characteristics included
    in the dataset. The time complexity of the developed method is estimated as O(I²). A
    comparative analysis was performed on the Retail test dataset against the Apriori and Frequent
    Pattern-Growth (FP-Growth) algorithms. The results have proven the effectiveness of the search
    method on big sets of transactions: the method reduces the cardinality of an irredundant set
    of extracted association rules by more than 40 % in comparison with the popular algorithms.
    The experiments have shown that the method can be effective for knowledge discovery tasks when
    processing large volumes of data.
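
    For concreteness, a toy version (ours) of the weighted-support computation mentioned above;
    the item weights stand in for the GA-tuned importance of the characteristics:

      from itertools import combinations

      def weighted_support(itemset, transactions, weights):
          # Fraction of transactions containing the itemset, scaled by the
          # mean a priori informativeness of its items (illustrative scheme).
          hits = sum(1 for t in transactions if itemset <= t)
          w = sum(weights[i] for i in itemset) / len(itemset)
          return w * hits / len(transactions)

      transactions = [{"bread", "milk"}, {"bread", "beer"},
                      {"milk", "beer"}, {"bread", "milk", "beer"}]
      weights = {"bread": 1.0, "milk": 0.8, "beer": 0.5}

      items = sorted({i for t in transactions for i in t})
      for pair in combinations(items, 2):          # frequent weighted pairs
          s = weighted_support(set(pair), transactions, weights)
          if s >= 0.3:
              print(pair, round(s, 3))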

  • BIOINSPIRED SIMULATION METHOD FOR SCHEDULING PARALLEL APPLICATION FLOWS IN GRID SYSTEMS

    D.Y. Kravchenko, Y.A. Kravchenko, V. V. Kureichik, A.E. Saak
    Abstract

    The article is devoted to scheduling flows of parallel requests in spatially distributed
    computing systems. The relevance of the task is justified by a significant increase in demand
    for the distributed computing paradigm under conditions of information overflow and
    uncertainty. The article discusses the problems of scheduling user requests that require
    several processors at the same time, which goes beyond classical scheduling theory. The
    aspects of the efficiency of heuristic algorithms for scheduling planar resources are
    analyzed, and the reasons for their insufficiency, both in terms of effectiveness and of their
    empirical nature, are determined. The paper proposes to solve the problem of scheduling
    parallel applications through the integrated application of a coalition of intelligent agents
    and an event-driven simulation model. Incoming applications are classified using a modified
    bio-inspired cuckoo search optimization method. The joint use of a coalition of intelligent
    agents and a bio-inspired method allows for a high degree of parallelism in the calculations,
    and the subsequent determination of ways to process the classified applications on the basis
    of a simulation model makes it possible to form sets of alternative solutions, speeding up
    problem solving and optimizing the distribution of available computing resources depending on
    the sets of incoming applications. To evaluate the effectiveness of the proposed approach, a
    software product was developed and experiments were conducted with different numbers of
    incoming applications. Each incoming application has a certain set of attributes, which forms
    a vector of the application's characteristics. The degree of similarity between the
    application's feature vector and the reference feature vector of a vertex in the distributing
    simulation model is the classification criterion for the application. To improve the quality
    of the dispatching process, new procedures for duplicating unclassified applications have been
    introduced, which intensify the search for matches between feature vectors and also provide
    backup dispatching trajectories needed to handle the appearance of applications with absolute
    priority at the inputs. The quantitative estimates obtained demonstrate time savings of at
    least 10 % when solving problems of relatively large dimension (from 500,000 vertices). The
    time complexity in the considered examples was O(n²). The described studies have a high level
    of theoretical and practical significance and are directly related to classical problems of
    artificial intelligence aimed at finding hidden dependencies and patterns in large data sets.
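
    The global moves of the cuckoo agents mentioned above are Lévy flights; a standard way to draw
    such steps is Mantegna's algorithm, sketched here (ours, with an assumed step-size factor):

      import math
      import random

      def levy_step(beta=1.5):
          # One-dimensional Levy-flight step via Mantegna's algorithm, the
          # heavy-tailed global move used by cuckoo-search agents.
          num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
          den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
          sigma = (num / den) ** (1 / beta)
          u = random.gauss(0, sigma)
          v = random.gauss(0, 1)
          return u / abs(v) ** (1 / beta)

      x = 0.0                        # position of one agent in a 1-D space
      for _ in range(10):
          x += 0.01 * levy_step()    # 0.01: illustrative step-size factor
      print(x)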

SECTION II. COMPUTING AND INFORMATION-CONTROL SYSTEMS

  • SYNERGETIC SYNTHESIS OF SLIDING MODE CONTROL FOR VEHICLE’S ACTIVE SUSPENSION SYSTEM

    A.S. Sinitsyn
    Abstract

    This article discusses the problem of designing vehicle active suspension systems in which the
    actuator is not ideal and is subject to the influence of hysteresis and dead zone. The main
    goal of this work is to synthesize a control system that reduces the influence of hysteresis
    and dead zone on the efficiency of the adaptive suspension system. System parameters like
    hysteresis require significant effort to identify and, moreover, can vary widely over the life
    cycle of the system. Thus, it is very difficult to take hysteresis into account in the
    synthesis of the control system, as well as in the construction of observers. The solution to
    this problem is the use of sliding-mode control systems, which to a certain extent are robust
    to parametric and structural changes in the control object. Existing approaches to the
    synthesis of sliding-mode control systems are based on a linear or linearized model of the
    control object, so the effectiveness of such systems can vary significantly when the regulator
    operates as part of a real, nonlinear control object. The proposed sliding-mode control
    reduces the sensitivity of the system to disturbances caused by the imperfect actuator and
    also takes into account the nonlinear structure of the control object. The efficiency of the
    closed-loop system is investigated on a dynamic model built in the Simulink package, and the
    proposed controller is compared with an adaptive synergetic regulator. A class C road
    (according to the ISO 8608 classification) was selected as the disturbance. To investigate the
    effectiveness of the proposed control system, the following parameters are evaluated: weighted
    acceleration of the sprung mass, relative suspension motion, and tire reaction force. The RMS
    and maximum values are calculated for each parameter. The results of numerical simulations
    allow us to conclude that the use of sliding-mode control can improve the following adaptive
    suspension performance indicators: the maximum value of the weighted acceleration of the
    sprung mass is reduced by more than a factor of two, and the maximum load on the tire by more
    than 20 %.
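
    A minimal numerical sketch (ours) of the discontinuous law at the heart of sliding-mode
    control, u = -k*sign(s) on a sliding surface s = c*e + de/dt, applied to a double integrator
    standing in for the sprung-mass error dynamics; the gains and the bounded disturbance are
    illustrative, not the paper's suspension model:

      import math

      c, k, dt = 5.0, 20.0, 1e-3       # surface slope, switching gain, time step
      e, de = 0.05, 0.0                # initial position/velocity errors
      for step in range(5000):
          s = c * e + de                              # sliding surface
          u = -k * math.copysign(1.0, s)              # discontinuous control
          disturbance = 2.0 * math.sin(0.01 * step)   # bounded, road-like
          dde = u + disturbance                       # error dynamics
          de += dde * dt
          e += de * dt
      print(round(e, 4), round(de, 4))                # held near the origin

    Because the switching gain k exceeds the disturbance bound, trajectories reach the surface
    s = 0 in finite time and then slide along it regardless of the disturbance, which is the
    robustness property the abstract appeals to.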

  • SYNERGETIC SYNTHESIS OF CONTROL LAW FOR UAV IN THE PRESENCE OF WIND DISTURBANCES WITH INPUT CONSTRAINTS

    G.E. Veselov, Ingabire Aline
    Abstract

    This paper discusses the application of synergetic control theory (SCT) methods to the problem
    of control system synthesis for a fixed-wing unmanned aerial vehicle (UAV) in the presence of
    wind disturbances. The main purpose of this study is to develop a synergetic method for the
    synthesis of nonlinear control systems for fixed-wing UAVs that guarantees asymptotic
    stability of the closed-loop system when moving along a given trajectory, as well as stability
    and adaptability given the significant nonlinearity of the UAV mathematical models, in the
    presence of wind disturbances. Furthermore, an important task in the synthesis of control
    systems for various objects, including UAVs, is to take into account constraints on the state
    variables of the control object, which can be determined by energy-efficiency requirements,
    safety systems, and other constraints and requirements imposed on these coordinates. This
    article proposes a procedure for the synthesis of nonlinear vector control systems for
    fixed-wing UAVs by applying SCT approaches that provide invariance to external unmeasured
    disturbances, fulfillment of the specified technological control objectives, and asymptotic
    stability of the closed-loop system, while taking into account the introduced constraints on
    the UAV internal coordinates. The proposed procedure ensures the effective use of this type of
    UAV in various tasks, including the operation of such UAVs as elements of a group of
    autonomous objects solving a given group technological task. The effectiveness of the proposed
    approach is confirmed by the results of computer modeling of the synthesized nonlinear vector
    control system of a fixed-wing UAV. The proposed synergetic method of control system synthesis
    can be applied in the development of advanced flight simulation and navigation complexes that
    simulate UAV behavior in the presence of wind disturbances and serve as a basis for improving
    the flight performance of fixed-wing UAVs.
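
    For readers unfamiliar with SCT, the core of the synthesis procedure in its generic textbook
    form (the macro-variable below is generic, not the specific one used in the paper): a
    macro-variable ψ(x) encoding the control objective is driven to zero by imposing the
    functional equation

      T\,\dot{\psi} + \psi = 0, \qquad T > 0,

    which makes the manifold ψ(x) = 0 asymptotically attracting. Substituting the plant dynamics
    ẋ = f(x) + g(x)u into ψ̇ = (∂ψ/∂x)ẋ and solving for the control gives

      u = -\Bigl(\frac{\partial\psi}{\partial x}\,g(x)\Bigr)^{-1}
          \Bigl(\frac{\partial\psi}{\partial x}\,f(x) + \frac{1}{T}\,\psi(x)\Bigr),

    a control law that uses the full nonlinear model rather than a linearization.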

  • THE PROBLEM OF CHOOSING A DAMPING FACTOR IN THE EFFECTIVE CONTROL MODEL FOR DIRECTED WEIGHTED SIGNED GRAPHS

    A.N. Tselykh, V.S. Vasilev, L.A. Tselykh
    Abstract

    The paper deals with the problem of choosing a damping factor in the effective control model
    based on maximizing the transfer of impacts for fuzzy cognitive models represented by directed
    weighted signed graphs. To transfer influence, a control model is used that implements the
    development of the system. The effective control algorithm is based on solving the
    optimization problem of finding a vector of external influences that maximizes the accumulated
    growth in the increments of vertex indicators. The optimal control is considered to be one
    that provides the maximum ratio of the squared norm of the system's response vector to the
    squared norm of the control vector. The damping factor of this model governs the comparative
    scale of the direct and indirect influence of all intra-factor relationships of the system as
    a whole. The purpose of the study is to determine regions of acceptable values for the
    obtained solutions in which (i) the condition of consistency of the result is met, and (ii)
    the vertex ranks change slowly. By consistency of the result we mean compliance with the rules
    of the system as a whole; these rules can be expressed as restrictions on the status of
    vertices and on the signs of impacts and responses. A damping factor value is called resonant
    if resonance occurs at it, that is, a surge in the value of the objective function of the
    impact-maximization problem with no correspondence between the resonant response and the
    effect that caused it. The choice of the damping factor affects both the value of the
    objective function of the impact-maximization problem and the effective control vector on
    which this solution is achieved. The value of the resonant damping coefficient can be
    interpreted as the limit of the possible controllability of the system, i.e., the limit of
    potential impact on the system without harming it. The proposed solution is evaluated by the
    degree of stability of the ranks of model nodes under changes in the damping coefficient, by
    the algorithmization of determining the range of its acceptable values, and by the shape of
    the resonance within the range of damping-coefficient values.
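
    One way (our formalization, not necessarily the authors' exact notation) to state the
    optimization underlying the model: if influence propagates through the weighted digraph with
    adjacency matrix A, damped by the factor α, the cumulative response to a control vector u is
    y = Qu with Q = (I - αA)⁻¹, and the optimal control is

      u^{*} = \arg\max_{u \neq 0} \frac{\lVert Qu \rVert^{2}}{\lVert u \rVert^{2}},

    i.e. the eigenvector of Q^{T}Q with the largest eigenvalue. The quotient surges (resonance)
    as α approaches a value at which I - αA loses invertibility, which matches the interpretation
    of the resonant damping coefficient as a limit of controllability.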

  • DEVELOPMENT OF KNOCK MATHEMATICAL MODEL FOR MODERN IC ENGINE

    A.L. Beresnev, M. A. Beresnev
    Abstract

    The paper examines under-studied aspects of internal combustion engine management, such as the
    use of knock. In knock, instead of a constant flame front, a detonation wave propagating at
    supersonic speed is formed in the combustion zone, and the fuel and oxidant detonate in the
    compression wave. From the point of view of thermodynamics, this process increases the
    efficiency of the engine. In internal combustion engines there are two different modes of
    combustion propagation in the fuel-air mixture: deflagration and detonation. Engines operating
    in the detonation combustion mode are currently not in use, which makes their capabilities all
    the more interesting. Modern internal combustion engines do not tolerate the detonation mode
    well, but the possibility of short-term combustion of part of the fuel-air mixture with
    detonation is embedded in their design. Detonation is constantly being studied, but it is
    regarded as a harmful component of the combustion process that requires improvement of the
    engine, its control, and the use of modern fuel. It is proposed to use this process, usually
    considered random, to increase the torque and power of the internal combustion engine. The
    possibility of using detonation combustion of the fuel-air mixture in an internal combustion
    engine as a useful part of the working process is considered, and the possibility of
    controlling combustion of the fuel-air mixture in a mixed mode is assumed, which makes it
    possible to improve the indicated parameters. Assumptions have been made to create a model,
    and the cylinder pressure has been simulated during the combustion phase of a partially
    detonating fuel-air mixture. The method of heat-release calculation has been determined, which
    is one of the most important stages of mathematical model creation, since it determines the
    accuracy and adequacy of the calculated parameters, both for deflagration combustion modes of
    the fuel-air mixture and when detonation combustion is used. The proposed procedure for
    calculating the operating-cycle parameters of an internal combustion engine makes it possible
    to carry out real-time calculations and to account for the influence of a binary fuel
    composition on power and economy parameters, on the mechanical and dynamic loads on the parts
    of the crank mechanism, and on the thermal load on the engine.
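
    The abstract does not name the heat-release method; a common choice that fits the description
    is the Wiebe function, shown here (our assumption, not necessarily the authors' exact method)
    with a two-stage burn approximating partial detonation:

      import math

      def wiebe(theta, theta0, duration, a=5.0, m=2.0):
          # Cumulative mass fraction burned by crank angle theta (degrees).
          if theta < theta0:
              return 0.0
          x = min((theta - theta0) / duration, 1.0)
          return 1.0 - math.exp(-a * x ** (m + 1))

      def mixed_burn(theta, detonation_fraction=0.15):
          # Deflagration stage plus a much shorter detonation-like stage.
          slow = wiebe(theta, theta0=-10.0, duration=50.0)
          fast = wiebe(theta, theta0=5.0, duration=5.0, m=1.0)
          return (1 - detonation_fraction) * slow + detonation_fraction * fast

      for theta in (-10, 0, 10, 30, 50):
          print(theta, round(mixed_burn(theta), 3))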

  • MATHEMATICAL MODEL FOR CALCULATING RELIABILITY INDICES OF SCALABLE COMPUTER SYSTEMS WITH SWITCHING TIME

    V.A. Pavsky, K.V. Pavsky
    Abstract

    The main feature of scalable computer systems is modularity. Increased performance in such
    systems is achieved by increasing the number of identical elements, elementary machines (EMs;
    for example, computing nodes). As a result of failures, the system performance changes. Thus,
    the scalability of computer systems (CS) increases performance on the one hand, but on the
    other hand the growth of computing resources exacerbates the problem of reliability and
    increases the complexity of organizing effective functioning. Analysis of the reliability and
    potential capabilities of computing systems therefore remains an urgent problem. For
    quantitative analysis of the functioning of scalable computing systems, robustness indices
    related to reliability are used. For example, indices of the potential robustness of a CS take
    into account the fact that all operable elementary machines are used in solving tasks, and
    that their number changes over time as a result of failures and recoveries. When analyzing
    reliability, models based on the theory of Markov processes and queuing theory (QT) are
    popular in the theory of computing systems. Most analytical QT models do not treat the
    switching (reconfiguration) time as a separate parameter, due to the complexity of the
    solution; usually the models are simplified by combining the recovery and switching times into
    a single parameter. Here, analytical solutions of a system of differential equations with
    three parameters (failure, recovery, and switching) for calculating reliability and potential
    robustness are obtained for a QT model. This allows the user to determine whether the
    switching time should be taken into account. It is also shown that the solutions of the
    three-parameter model reduce to the solutions of the two-parameter model when the switching
    time is not taken into consideration.
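
    A minimal numerical illustration (ours) of a three-parameter Markov model of the kind
    described, for a single elementary machine; the rate values are assumptions:

      from scipy.integrate import solve_ivp

      lam, sig, mu = 0.01, 2.0, 0.5   # failure, switching, recovery rates (1/h)

      def kolmogorov(t, p):
          # states: 0 operating, 1 being switched out, 2 under repair
          p0, p1, p2 = p
          return [-lam * p0 + mu * p2,
                   lam * p0 - sig * p1,
                   sig * p1 - mu * p2]

      sol = solve_ivp(kolmogorov, (0.0, 100.0), [1.0, 0.0, 0.0])
      print("availability at t = 100 h:", round(sol.y[0, -1], 4))
      # As sig grows without bound, the switching state vanishes and the model
      # collapses to the classical two-parameter (failure/recovery) case.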

SECTION III. EVOLUTIONARY MODELING

  • EVOLUTIONARY ALGORITHM FOR PARTITIONING BY THE METHOD OF CRYSTALLIZATION OF THE ALTERNATIVES FIELD

    B.K. Lebedev, O.B. Lebedev, E. O. Lebedeva
    Abstract

    The operation of the partitioning algorithm is based on the use of collective evolutionary
    memory, i.e., information that reflects the history of the search for a solution and is stored
    independently of individuals. The algorithm seeks to memorize and reuse ways of achieving
    better results. The collective evolutionary memory of the partitioning algorithm is a set of
    statistical indicators that reflect, for each implemented alternative, the number θ of its
    occurrences in the best solutions at previous iterations of the algorithm and the number δ
    indicating the usefulness of the implemented alternative when constructing solutions at
    previous iterations. The team has no centralized management; its distinguishing feature is
    indirect information exchange: at different times one agent performs actions that change
    certain parts of the evolutionary memory, and this changed information is later used by other
    agents. First, at each iteration, a constructive algorithm generates nk solutions Qk. Each
    solution Qk is a mapping Fk: V→X, represented as a bipartite subgraph Dk, and is formed by
    sequentially assigning elements to nodes. Each solution Qk is built by the set of agents A
    through a probabilistic choice by each agent ai of a node vj. Assigning an element to a node
    involves two stages: in the first stage, agent ai is selected, and in the second stage, the
    node. The following restriction must hold: each agent of the set A corresponds to one unique
    node of the set V. The estimate ξk of the solution Qk and the utility estimate δk of the set
    of alternatives implemented by the agents in the solution Qk are calculated. At the second
    stage, the agents increase the integral utility of the set of alternatives in the integral
    placer of alternatives R* by the value δk. At the third stage, the utility estimates δk of the
    integral placer of alternatives are reduced by μ. The paper uses the cyclic method of forming
    decisions, in which the build-up of the integral utility estimates δk of the set of positions
    P is performed after the complete formation of the set of solutions Q at iteration l.
    Experimental studies were carried out on test cases with a previously obtained optimal
    solution, and the results were compared with those of other well-known algorithms for
    partitioning circuits into parts, using a set of standard benchmarks. Analysis of the results
    shows that the proposed method yields solutions 4–5 % better than its analogues.
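
    A compact sketch (ours) of the second and third stages described above, the reinforcement and
    decay of the collective evolutionary memory; the names and values are illustrative:

      def update_memory(memory, solutions, mu=0.1):
          # Stage 2: each implemented alternative r accumulates the utility
          # delta_k of every solution it appeared in.
          for alternatives, delta_k in solutions:
              for r in alternatives:
                  memory[r] = memory.get(r, 0.0) + delta_k
          # Stage 3: all utilities decay by the forgetting factor mu.
          for r in memory:
              memory[r] *= (1.0 - mu)
          return memory

      memory = {}
      iteration = [({"a1", "a3"}, 2.0), ({"a2", "a3"}, 1.0)]  # (alternatives, utility)
      print(update_memory(memory, iteration))
      # a3, reinforced by both solutions, ends at 2.7; a1 at 1.8; a2 at 0.9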

  • MODIFIED GENETIC PROJECT PLANNING ALGORITHM IMPLEMENTED WITH THE USE OF CLOUD COMPUTING

    A. A. Mogilev, V.M. Kureichik
    Abstract

    The paper proposes the structure of a modified genetic algorithm for the resource-constrained
    project scheduling problem, implemented with the use of cloud computing. A computational
    experiment was conducted in which the results of the proposed algorithm were compared with the
    currently best-known results. Based on the results of the experiment, it was concluded that
    the proposed algorithm can be used to plan real projects, since it can draw up schedules for
    projects with n = 90 jobs within an acceptable time. When planning projects with n = 30, 60,
    90, and 120 jobs, the execution time of the proposed algorithm was less than that of the
    standard genetic algorithm by factors of 2.8, 4, 5.5, and 6.8, respectively. Because the task
    of constructing a project schedule under resource constraints is NP-hard, creating new methods
    and modifying existing ones remains relevant. For planning projects with a large number of
    jobs, it is advisable to use cloud computing, since planning such projects can require a lot
    of time and computing resources. In this regard, the algorithm proposed in this paper differs
    from existing ones by using cloud computing to distribute the load between workstations on
    which the algorithm runs simultaneously. The use of modified operators in the genetic
    algorithm, as well as the use of cloud infrastructure as a service for implementing a
    distributed genetic algorithm, constitutes the scientific novelty of the study.
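
    A skeletal illustration (ours) of the load-distribution idea: independent search islands run
    in parallel worker processes, standing in for the cloud workstations. The island body is a
    stub, not the modified genetic algorithm itself:

      from multiprocessing import Pool
      import random

      def evolve_island(args):
          # Stub for one workstation: runs an independent search and returns
          # its best (toy) makespan; a real island would run the modified GA
          # on the project data and exchange migrants with the others.
          seed, generations = args
          rng = random.Random(seed)
          best = float("inf")
          for _ in range(generations):
              best = min(best, rng.random())       # placeholder fitness
          return best

      if __name__ == "__main__":
          islands = [(seed, 1000) for seed in range(4)]   # 4 "workstations"
          with Pool(len(islands)) as pool:
              results = pool.map(evolve_island, islands)
          print("best schedule over all islands:", min(results))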

  • SOLUTIONS’ ENCODING IN EVOLUTIONARY METHODS FOR INSTRUMENTAL DESIGN PLATFORM

    E.V. Kuliev, A. A. Lezhebokov, M. M. Semenova, V.A. Semenov
    Abstract

    The article considers current issues and analyzes the problems of three-dimensional
    integration and three-dimensional modeling that arise at the design stage when solving the
    problem of optimal placement of the components of large and very large integrated circuits and
    enclosures of electronic computing equipment. The main advantages of applying the principles
    of three-dimensional integration are presented and described in detail; they make it possible
    to efficiently organize the production of personalized electronics and to optimally plan the
    configuration of large and very large integrated circuits, taking thermal and energy
    characteristics into account. In the course of the research, the authors developed an approach
    to encoding solutions based on an intelligent mechanism characterized by built-in means of
    controlling the admissibility of solutions. One such tool, whose effectiveness has been proven
    experimentally, is the built-in mechanism of "deadly mutations", which takes into account the
    status of genes and the predefined restrictions on the final configuration of the housing of
    the designed device. A series of general approaches and specific algorithms for solving the
    placement problem is proposed, based on the results of research by the authors' team and on
    modern approaches to solving NP-complete problems. The most important practically significant
    result of the research is the developed software and instrumental design platform written in
    the modern cross-platform Java programming language. The selected technology makes it possible
    to use all the main advantages of modern multi-core and multi-processor architectures and to
    apply software multi-threading to implement parallel schemes for solving combinatorial
    problems. The platform has a user-friendly interface that makes it possible to effectively
    manage the process of planning the components of large and very large three-dimensionally
    integrated circuits by visualizing key performance indicators of the algorithms in graphs and
    in text statistics blocks. The developed application software made it possible to carry out a
    series of computational experiments on random data sets, as well as on openly available
    benchmarks for such tasks. The results of the experimental studies confirmed the theoretical
    estimates of the time complexity and the effectiveness of the proposed approaches and
    algorithms, including the genetic algorithm that uses the new solution-encoding mechanism
    proposed in the work.
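
    A toy sketch (ours) of the "deadly mutations" control mechanism described above: offspring
    that violate the admissibility constraints simply die. The feasibility predicate and the retry
    budget are our assumptions, not the platform's actual rules:

      import random

      def mutate_with_control(chromosome, is_feasible, attempts=20):
          # Swap-mutate; a mutant violating the placement constraints is
          # discarded ("dies") and another mutation is attempted.
          for _ in range(attempts):
              child = chromosome[:]
              i, j = random.sample(range(len(child)), 2)
              child[i], child[j] = child[j], child[i]
              if is_feasible(child):
                  return child
          return chromosome            # all mutants died: keep the parent

      # toy constraint: component 0 must stay in the first half of the layout
      feasible = lambda c: c.index(0) < len(c) // 2
      print(mutate_with_control([0, 1, 2, 3, 4, 5], feasible))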

  • CLASSIFICATION AND ANALYSIS OF EVOLUTIONARY METHODS OF EVA BLOCK LAYOUT

    Y.V. Danilchenko, V.I. Danilchenko, V. M. Kureichik
    Abstract

    Currently, there is a large increase in the need for the design and development of
    radio-electronic devices. This is due to increasing requirements for radio-electronic systems,
    as well as the emergence of new generations of semiconductor devices. In this regard, there is
    a need to develop new tools for the automated layout of EVA blocks. A number of problems
    complicate the adequate representation of knowledge in CAD, and they are probably solvable at
    the current level of cognitive-science development; the problems of stereotyping and of
    coarsening are interrelated and require the creation of hybrid representation models. The
    paper deals with the problem of EVA block layout in the design of radio-electronic equipment.
    The purpose of this work is to find ways to optimize the planning of EVA block layout using a
    genetic algorithm. The relevance of the work is that genetic algorithms can improve both the
    quality and the speed of layout planning. The scientific novelty lies in the search for and
    analysis of effective methods for composing EVA blocks using genetic algorithms; the main
    difference from known comparisons is the analysis of new, promising algorithms for composing
    EVA blocks. As a result, the paper shows the disadvantages of traditional algorithms for
    searching for a suboptimal EVA plan and gives descriptions of modern models of evolutionary
    and other computations. Genetic algorithms have a number of important advantages, such as
    adaptability to a changing environment: the evolutionary approach makes it possible to
    analyze, supplement, and change the knowledge base depending on changing conditions, as well
    as to quickly create optimal solutions. Applying genetic algorithms together with
    preprocessing heuristics that provide good initial solutions makes the use of the algorithms
    more productive. Known genetic algorithms converge quickly but lose population diversity,
    which affects the quality of the solution; to compensate, the solution is corrected using
    efficient operators or stable mutation.

SECTION IV. INFORMATION SYSTEMS AND TECHNOLOGIES

  • METHODS OF IMPROVING PERFORMANCE OF MODERN WEB-APPLICATIONS

    V. N. Gridin, V. I. Anisimov, S.A. Vasilev
    Abstract

    Existing and developing approaches to building modern web applications are considered. The
    main forms and directions of development of modern web applications are determined, as well as
    methods for increasing the productivity of data exchange in client-server systems. The
    varieties and principles of establishing communication channels in a distributed client-server
    environment are highlighted. The main advantages of combined interaction methods built on
    asynchronous full-duplex data exchange protocols are presented: a high data transfer rate,
    timely delivery of information, a reduced load on the server component, and reduced redundancy
    of the transmitted data. The technologies of state-management decentralization in single-page
    applications are indicated, along with the interconnection of modern technologies for ensuring
    a high degree of interactivity of the client component. A comparative analysis is carried out
    between the intelligent query-processing mechanism, which declares the data structure and
    access methods, and data-oriented REST APIs providing variations of the basic CRUD operations.
    The main advantages of the GraphQL approach to organizing distributed state management are
    highlighted: providing the client application with graph-like structures of arbitrary nesting
    depth and the possibility of subscribing to changes in the data set of interest. The problems
    of traditional data storage systems under modern information conditions, with the rapid
    accumulation of complex structured data, are presented. The basic approaches to data storage
    within the NoSQL concept are described. The advantages of using the key-value model in
    information systems and the operating principles of databases that use RAM as storage are
    considered. The disadvantages of these storage technologies are discussed, and possible ways
    to minimize them based on a combination of methods are proposed. Finally, a dependency diagram
    of technologies for efficient data exchange in modern web applications is provided, ensuring a
    high degree of interactivity of client-server web applications.
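
    As a small illustration of the key-value, RAM-as-storage model discussed above (our sketch,
    not any particular product's API):

      import time

      class MemoryKV:
          # Minimal in-memory key-value store with per-key expiry, in the
          # spirit of RAM-backed NoSQL caches.
          def __init__(self):
              self._data = {}

          def set(self, key, value, ttl=None):
              expires = time.monotonic() + ttl if ttl else None
              self._data[key] = (value, expires)

          def get(self, key, default=None):
              value, expires = self._data.get(key, (default, None))
              if expires is not None and time.monotonic() > expires:
                  del self._data[key]
                  return default
              return value

      cache = MemoryKV()
      cache.set("session:42", {"user": "alice"}, ttl=0.05)
      print(cache.get("session:42"))   # {'user': 'alice'}
      time.sleep(0.1)
      print(cache.get("session:42"))   # None: entry expired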

  • EVOLVABLE ADAS: H-GQM S.M.A.R.T.E.S.T. APPROACH

    D.E. Chickrin, A. A. Egorchev, D.V. Ermakov
    Abstract

    The introduction to the mass market of vehicles with ADAS level 3+ automation is expected in
    the early 2020s. Currently, the vast majority of automakers conduct research in this field,
    and a fairly large number of prototypes, pre-production, and production systems have already
    been demonstrated. ADAS (advanced driver assistance systems) are complex hardware and software
    systems whose distinguishing feature is that the core hardware platform remains unchanged for
    one or even several generations of vehicles (5–7 years). At the same time, the system should
    be able to transform and evolve to correct errors and expand functionality, especially given
    the active development of sensory peripheral systems and software algorithms. The GQM
    (Goal-Question-Metric) methodology and its modifications are used to support the development
    process of complex systems and to evaluate them. However, these methodologies are limited
    exclusively to software products, and their authors do not explicitly address applying GQM to
    analyzing and tracking the evolution of complex technical systems. This paper presents the
    H-GQM (Hardware GQM) methodology for the controllable evolution of complex automotive hardware
    and software systems. The H-GQM methodology is based on GQM and is aimed at hardware-software
    systems with a monolithic hardware core, a modifiable software core, and atomic peripherals.
    An entity harmonization process is described to prove the applicability of GQM to the analysis
    of software-and-hardware systems. The S.M.A.R.T.E.S.T. goal-setting concept is proposed for
    choosing evolutionary goals; it is based on the S.M.A.R.T. criteria for setting
    business-process objectives, extended with harmonization and evolvability restrictions. The
    formulation of the H-GQM plan framework is demonstrated using ADAS as an example, and within
    the proposed methodology an ADAS-specific scalable target template has been formed.
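
    For readers unfamiliar with GQM, the hierarchy the methodology builds on can be sketched as
    follows (our illustration; the field names and example values are not the paper's template):

      from dataclasses import dataclass, field

      @dataclass
      class Metric:
          name: str
          unit: str

      @dataclass
      class Question:
          text: str
          metrics: list = field(default_factory=list)

      @dataclass
      class Goal:
          purpose: str      # e.g. "evaluate"
          issue: str        # e.g. "detection latency"
          obj: str          # e.g. "sensor pipeline"
          viewpoint: str    # e.g. "system integrator"
          questions: list = field(default_factory=list)

      goal = Goal("evaluate", "detection latency", "sensor pipeline",
                  "system integrator",
                  [Question("How long from frame capture to object list?",
                            [Metric("end-to-end latency", "ms")])])
      print(goal.questions[0].metrics[0])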

  • ADAPTATION OF INFORMATION AND TECHNICAL CHARACTERISTICS TO THE CONSTANTLY CHANGING PARAMETERS OF IONOSPHERIC PROPAGATION

    A.I. Rybakov, R.E. Krotov, S.A. Kokin
    Abstract

    The aim of the research was to study and select among the existing options for adapting
    transmission parameters; to reduce the influence of the shortcomings of a short-wave radio
    link, it is advisable to use digital signal processing methods as efficiently as possible.
    Based on a study of the characteristics of analog-to-digital converters (ADCs) and of the
    hardware available for constructing long radio links, it was concluded that, with the
    increasing performance of the FPGAs on which digital signal processing is implemented, it is
    possible to build an active antenna array (AAR) consisting of N independent antenna modules.
    This is the key task in adapting information and technical characteristics to the constantly
    changing parameters of ionospheric propagation, and it enables a more energy-efficient
    approach to designing an ionospheric radio communication system. The performance of the radio
    system is improved by improving the communication protocols and by solving the problem of
    optimal channel loading from the moment signals are formed and received. The main idea of such
    an AAR is to digitize or generate the high-frequency signal in the immediate vicinity of the
    antenna, as part of the antenna modules. These results make it possible to replace separately
    tuned radios and transceivers built around a complex superheterodyne circuit with a limited
    number of hardware units operating under the software model of a software-defined radio
    channel. In future work, it is planned to study the passage of OFDM signals through multipath
    communication channels with Rayleigh and Rician fading. The resulting model will allow us to
    evaluate the noise immunity for different lengths of the cyclic prefix of the OFDM symbol and
    to observe the behavior of the signal constellation under the influence of various
    instabilities.
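
    A minimal sketch (ours) of the cyclic-prefix mechanism whose length the planned study would
    vary; the subcarrier count, prefix length, and QPSK mapping are illustrative:

      import numpy as np

      n_sub, cp_len = 64, 16
      rng = np.random.default_rng(1)
      bits = rng.integers(0, 2, 2 * n_sub)
      qpsk = (1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])  # QPSK map

      symbol = np.fft.ifft(qpsk)                        # to time domain
      tx = np.concatenate([symbol[-cp_len:], symbol])   # prepend cyclic prefix

      rx = np.fft.fft(tx[cp_len:])    # receiver: drop prefix, back to subcarriers
      print(np.allclose(rx, qpsk))    # True for an ideal channel

    In a multipath channel, the prefix absorbs inter-symbol interference as long as the channel's
    delay spread stays below the prefix duration, which is what makes its length a key
    noise-immunity parameter.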

  • THE LIBRARY OF FULLY HOMOMORPHIC ENCRYPTION OVER THE INTEGERS

    L.K. Babenko, I.D. Rusalovsky
    Abstract

    The article discusses one of the new directions of cryptography: homomorphic cryptography. Its
    distinctive feature is that it allows encrypted data to be processed without first decrypting
    it, in such a way that the result of operations on encrypted data is, after decryption,
    equivalent to the result of the same operations on the plaintext data. The paper describes the
    main application areas of homomorphic encryption and analyzes existing developments in the
    field. The analysis showed that existing library implementations only allow processing of bits
    or arrays of bits and do not support the division operation; however, solving applied problems
    requires support for integer operations. The analysis revealed the need to implement a
    homomorphic division operation, as well as the relevance of developing an in-house library of
    homomorphic encryption over the integers. The ability to perform four operations (addition,
    subtraction, multiplication, and division) on encrypted data will expand the field of
    application of homomorphic encryption. A method of homomorphic division is proposed that makes
    it possible to perform division on homomorphically encrypted data. A library architecture for
    fully homomorphic operations on integers is proposed. The library supports the basic
    homomorphic operations on integers, as well as the division operation, thanks to the proposed
    method of homomorphic division. Based on the proposed method and architecture, a library of
    homomorphic operations on integers was implemented. The article also reports measurements of
    the time required to perform particular operations on encrypted data and analyzes the
    effectiveness of the developed library implementation. Conclusions and possible directions of
    further development are given.
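
    To make the homomorphic property concrete, here is a toy symmetric scheme over the integers in
    the spirit of DGHV (our illustration, not the authors' library, and insecure at these toy
    parameter sizes); their homomorphic division method is not reproduced here:

      import random

      # c = m + BASE*r + p*q: ciphertexts add and multiply like plaintexts
      # as long as the accumulated noise stays far below the secret p.
      p = 10 ** 15 + 37          # secret key; must greatly exceed the noise
      BASE = 1000                # message space: integers modulo BASE

      def enc(m):
          q = random.randrange(1, 1 << 30)
          r = random.randrange(1, 10)          # small noise
          return m + BASE * r + p * q

      def dec(c):
          return (c % p) % BASE

      a, b = enc(7), enc(5)
      print(dec(a + b))   # 12: homomorphic addition
      print(dec(a * b))   # 35: homomorphic multiplication
      # Noise grows with every operation (fastest under multiplication),
      # which bounds the circuit depth of this "somewhat" homomorphic toy.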

SECTION V. NANOTECHNOLOGIES

  • STUDY OF THE MODES OF FORMING ZnO:Ga NANOCRYSTALLINE FILMS BY MAGNETRON SPUTTERING

    A. A. Geldash, L.E. Levenets, E.Y. Gusev, V.N. Dzhuplin
    Abstract

    The main goal of this work is to study the modes of formation of thin nanocrystalline ZnO:Ga
    films by direct-current magnetron sputtering. The main objective of the study is to obtain
    thin (~300 nm), transparent, conductive films with a resistivity below 5·10⁻³ Ohm·cm, which
    can be used as contacts for nanostructures of photosensitive elements, and to study the
    technological parameters of the magnetron sputtering equipment and metal-oxide targets. The
    morphology of the resulting thin ZnO:Ga films was studied. The study revealed that the surface
    of the films consists of individual crystals merged together during the deposition of the
    material; these crystals have pronounced faces and peaks. It was also revealed that with an
    increase in the power of the direct-current source, the crystals on the film surface grow
    several times larger in proportion to the increase in power, and the film thickness increases
    due to the increased sputtering rate of the target material onto the substrate. The electrical
    characteristics of the obtained films were then investigated, and the dependences of the
    carrier concentration, carrier mobility, and resistivity on the power (and hence thickness) of
    the film were derived. It was found that with an increase in the film thickness from 320 nm to
    340 nm, the mobility of the current carriers increases from 3.027 cm²/(V·s) to 3.228 cm²/(V·s),
    and with an increase in the film thickness from 800 nm to 1200 nm, the mobility increases from
    6.511 cm²/(V·s) to 6.547 cm²/(V·s). With an increase in the film thickness from 320 nm to
    340 nm, the concentration of current carriers decreases from 1.571·10²⁰ cm⁻³ to
    1.489·10²⁰ cm⁻³, and with an increase in the film thickness from 800 nm to 1200 nm, the
    concentration decreases from 2.481·10²⁰ cm⁻³ to 1.653·10²⁰ cm⁻³. As the film thickness
    increases from 320 nm to 340 nm, the resistivity increases from 1.303·10⁻² Ω·cm to
    1.38·10⁻² Ω·cm, and as the film thickness increases from 800 nm to 1200 nm, the resistivity
    increases from 3.851·10⁻² Ω·cm to 5.779·10⁻² Ω·cm.
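
    The reported values are internally consistent with the standard relation ρ = 1/(q·n·μ), as a
    quick check (ours) shows for the 320 nm film:

      q = 1.602e-19        # elementary charge, C
      n = 1.571e20         # carrier concentration, cm^-3 (320 nm film)
      mu = 3.027           # mobility, cm^2/(V*s)
      rho = 1.0 / (q * n * mu)
      print(f"{rho:.3e} Ohm*cm")   # ~1.31e-02, close to the reported 1.303e-2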

  • PHOTODETECTOR WITH CONTROLLED RELOCATION: DRIFT-DIFFUSION MODEL AND APPLICATION IN OPTICAL INTERCONNECTIONS

    I.V. Pisarenko, E.A. Ryndin
    Abstract

    Previously, we proposed an injection laser with a double AIIIBV (III–V) nanoheterostructure
    for the generation and modulation of light in optical interconnections for integrated
    circuits. To convert short optical pulses generated by the laser-modulator into electrical
    signals, a technologically compatible photodetector with subpicosecond response time is
    needed. Traditional designs of photosensitive semiconductor devices do not meet these
    requirements, so we developed the concept of a high-speed photodetector with controlled
    relocation of carrier-density peaks within specially organized quantum regions. This
    optoelectronic device includes a longitudinal photosensitive p-i-n junction and a transverse
    control heterostructure, which contains two low-temperature-grown layers and two control
    junctions. Before the trailing edge of an optical pulse, the photodetector operates as a
    classical p-i-n photodiode. The transverse electric field is activated only during the
    trailing edge of a laser pulse: it relocates the peaks of the electron and hole densities from
    the absorbing region to regions with low carrier mobility and short lifetime, which reduces
    the response time to a subpicosecond value. In our previous papers, we estimated the
    performance of the device using a combined quantum-mechanical model that did not take into
    account certain aspects of charge-carrier transport in its structure. This paper is aimed at a
    proper semiclassical analysis of transients in the photodetector with controlled relocation by
    means of a two-dimensional drift-diffusion model. For the numerical implementation of the
    model, we developed a finite-difference simulation technique based on the explicit method,
    together with application software. According to the obtained results, it is reasonable to use
    the differential connection principle in order to compensate the displacement currents in the
    supply circuit of the device. In view of this feature, we propose an optical receiver circuit
    that provides the generation of the resultant electrical signal as well as the required mode
    of applying the control voltage to the photodetector contacts, and a driver circuit for the
    laser-modulators.
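
    A one-dimensional toy version (ours) of the explicit finite-difference scheme class mentioned
    above, advancing the electron continuity equation dn/dt = D·d²n/dx² − d(μEn)/dx with a frozen
    field; the geometry, constants, sign conventions, and periodic boundaries are simplifying
    assumptions, not the paper's two-dimensional model:

      import numpy as np

      nx, dx, dt = 200, 1e-8, 1e-17    # grid step (m), time step (s)
      D, mu = 2.0e-3, 8.0e-2           # diffusivity m^2/s, mobility m^2/(V*s)
      E = np.full(nx, 1.0e6)           # frozen uniform field, V/m
      x = np.arange(nx)
      n = np.exp(-((x - 100.0) ** 2) / 50.0)   # initial carrier packet

      # explicit update: dt is kept below the diffusion stability bound dx^2/(2D)
      for _ in range(20000):
          lap = (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx ** 2
          drift = np.gradient(mu * E * n, dx)
          n = n + dt * (D * lap - drift)

      print(round(float(n.max()), 3), int(n.argmax()))  # packet spread and drifted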