No. 4 (2020)

Published: 2020-11-22

SECTION I. ARTIFICIAL INTELLIGENCE AND FUZZY SYSTEMS

  • METHOD FOR SEARCHING SEQUENTIAL PATTERNS OF USER'S BEHAVIOR ON THE INTERNET

    V.V. Kureychik, V.V. Bova, Y.A. Kravchenko
    Abstract

    One of the important tasks of data mining is to isolate patterns and detect related events in
    sequential data based on the analysis of sequential patterns. The article examines the possibility of
    using sequential patterns to analyze the events of search and cognitive activity of users when interacting
    with Internet resources of an open information and educational environment. Searching
    for sequential patterns is a complex computational task whose goal is to retrieve all frequent sequences
    representing potential relationships within elements from a transactional database of
    sequences of search activity events with a given minimum support. To solve it, the article proposes
    a method for searching for patterns in sequences of events to detect hidden patterns that indicate
    possible levels of vulnerability when performing information search tasks in the Internet space.
    A mathematical model of user behavior in a search session based on the theory of sequential patterns
    is described. To improve the computational efficiency of the method, a modified algorithm
    for generating sequential patterns has been developed: at the first stage, AprioriAll is
    performed, forming frequent candidate sequences of all possible lengths, and at the second
    stage, a genetic algorithm optimizes the input parameters of the feature space of the generated
    set to search for maximal patterns. A series of computational experiments was carried out on
    test data from the MSNBC corpus of the SPMF open-source data mining library. The comparative
    analysis was carried out with the VMSP and GSP algorithms. The research results confirmed the
    efficiency of the search for maximum sequential patterns by the proposed algorithm in terms of the
    execution time and the number of extracted patterns. The experimental studies of the method
    also showed that, to improve stability and accuracy, the sample obtained by the GA reduces
    the required number of scans of the pattern database, keeping computational costs comparable
    to those of the VMSP algorithm, while the GSP algorithm exceeded the proposed algorithm's
    search time for sequential patterns by an average of more than 150 %.

  • SOLUTION OF THE PARTIALLY REVERSAL MODELLING TASK OF THE MINIMUM COST FLOW FINDING IN FUZZY CONDITIONS

    E.M. Gerasimenko
    Abstract

    This article is devoted to the development of an algorithm for modeling a partially reversible
    flow of minimum cost in a fuzzy transportation network. The minimum cost flow problem is central
    to transportation planning and evacuation modelling. The relevance of these tasks stems from the
    need to find transportation routes that are optimal in terms of cost and to transfer the maximum
    flow along them. The problem is solved here in fuzzy conditions, since the apparatus of fuzzy set
    theory allows setting network parameters,
    such as the capacity of road sections, the cost of transportation in a fuzzy form. This method of
    assignment is convenient in situations where there is a lack of data on the modeled object, linguistic
    nature of data, measurement errors, etc. In the problems of evacuation modelling, which occur
    spontaneously, there is also a lack of accurate information about the capacity and cost of transportation.
    The contraflow concept used in the paper increases the total flow by reversing traffic. Lane
    reversal is a modern technique for increasing the transmitted traffic by raising the network's
    output capacity. Traffic reversal releases congested sections of the road and redistributes
    traffic towards unloaded roads, eliminating congestion and "traffic jams". A method of operating
    with fuzzy numbers is proposed,
    which does not lead to "blurring" of the boundaries of the resulting number and allows operating
    with fuzzy boundaries at the last iterations, while at the rest of the previous iterations, calculations
    are performed only with the centers of fuzzy numbers. A numerical example is considered that
    illustrates the operation of the proposed algorithm.
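    The described way of operating with fuzzy numbers, computing only with centers on intermediate iterations and restoring fuzzy boundaries at the last step, can be sketched with triangular fuzzy numbers (an illustrative assumption; the paper's exact fuzzy representation may differ):

```python
from dataclasses import dataclass

@dataclass
class TriFuzzy:
    """Triangular fuzzy number (left bound, center, right bound)."""
    left: float
    center: float
    right: float

    def __add__(self, other):
        # Componentwise addition of triangular fuzzy numbers.
        return TriFuzzy(self.left + other.left,
                        self.center + other.center,
                        self.right + other.right)

def path_cost_centers(costs):
    """Intermediate iterations: accumulate and compare centers only."""
    return sum(c.center for c in costs)

def path_cost_fuzzy(costs):
    """Last iteration: restore the full fuzzy boundaries once."""
    total = TriFuzzy(0.0, 0.0, 0.0)
    for c in costs:
        total = total + c
    return total
```

    Deferring fuzzy addition to the final step is what keeps the result's boundaries from progressively "blurring" across iterations.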

  • CLASSIFIER OF IMAGES OF AGRICULTURAL CROPS SEEDS USING A CONVOLUTION NEURAL NETWORK

    V.A. Derkachev, V.V. Bakhchevnikov, A.N. Bakumenko
    Abstract

    This article discusses the creation of a convolutional neural network architecture that classifies
    images of crops (in particular wheat) for subsequent use in an optical seed separator (photo
    separator). Interest in the design of neural networks for classifying images has recently increased
    significantly, which is associated both with the development of the theory of deep neural networks
    and the increased computing power of desktop computers, as well as the transfer of computing to
    graphic processors. The aim of the article is to develop the architecture of a neural network that
    allows the separation of the input flow of wheat seeds into two classes: “good” seeds and “bad”
    (with defects in shape and color) seeds. The resulting neural network is convolutional because,
    unlike a fully connected network, this class of networks is, within certain limits, immune to
    changes in the scale and rotation angle of objects in the input data. In the work,
    for the formation of training, validation and test samples, seed images obtained using a household
    camera were used, which negatively affected the results of training and testing the neural network
    regarding the possible result of application in a real photo separator. The architecture of the
    developed neural network is preliminarily optimized for use on FPGAs; however, in the considered
    case, the weighting factors have not been converted from floating-point to integer data types, a
    conversion that can reduce the accuracy of the neural network while significantly reducing the
    amount of FPGA resources consumed. Application of the proposed
    architecture allows one to obtain a fairly accurate estimate of classified wheat seeds from verification
    and test data sets.
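    The basic building block of such an architecture, a single convolution followed by a nonlinearity, can be sketched in pure Python (a minimal illustration, not the proposed network):

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation of a grayscale image with a kernel,
    the basic operation of a convolutional layer (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """Elementwise ReLU nonlinearity applied after the convolution."""
    return [[max(0.0, v) for v in row] for row in feature_map]
```

    A real seed classifier stacks many such layers with learned kernels; quantizing the kernel weights to integers is the FPGA optimization the abstract mentions.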

  • INTELLIGENT METHOD OF KNOWLEDGE EXTRACTION BASED ON SENTIMENT ANALYSIS

    E.M. Gerasimenko, V.V. Stetsenko
    Abstract

    This paper investigates the impact of age and gender in sentiment analysis, because this data can help
    e-commerce retailers to increase sales by targeting specific demographic groups, as well as increase
    the satisfaction of the needs of people of different age and gender groups. The dataset used
    is generated by collecting book reviews. A questionnaire was created containing questions about
    preferences in books (user opinions of e-books, paperbacks, hardbacks, images and audiobooks),
    as well as data on age group and gender. In addition, the questionnaire also contains information
    on a positive or negative opinion regarding preferences, which served as the basis for reliability
    for the classifiers. As a result, 900 questionnaires were received, which were divided into groups
    according to gender and age. Each group of data was divided into training and test parts.
    The segmented data were analyzed for sentiment depending on age group and gender.
    The age group “over 50 years old” showed the best results in comparison with all other age
    groups in all classifiers; data in the female group showed higher accuracy compared to data from
    the groups without gender information. The high scores shown by these groups indicate that sentiment
    analysis approaches are able to predict moods in these groups better than in others. Sentiment
    analysis was performed using a variety of machine learning (ML) approaches, including maximum
    entropy, support vector machines, convolutional neural networks, and long short-term memory networks.
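    The segmentation of the collected questionnaires by gender and age group, and the per-group division into training and test parts, can be sketched as follows (field names are illustrative, not the paper's schema):

```python
def segment(reviews, keys=("gender", "age_group")):
    """Split labelled reviews into demographic segments keyed by `keys`."""
    groups = {}
    for r in reviews:
        groups.setdefault(tuple(r[k] for k in keys), []).append(r)
    return groups

def train_test_split(items, test_ratio=0.2):
    """Deterministic split of one segment into train and test parts."""
    cut = int(len(items) * (1 - test_ratio))
    return items[:cut], items[cut:]
```

    Each resulting segment is then fed to the classifiers separately, which is what allows per-group accuracy comparisons like those reported in the abstract.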

  • EVOLUTIONARY DESIGN AS A TOOL FOR DEVELOPING MULTI-AGENT SYSTEMS

    L.A. Gladkov, N.V. Gladkova
    Abstract

    The article is devoted to the discussion of the problems of constructing evolving multi-agent systems
    based on the use of the principles of evolutionary design and hybrid models. The concept of an
    agent is considered. A set of basic properties of the agent is presented. The analogies between multiagent
    and evolutionary systems are considered. The principles of construction and organization of multi-
    agent systems are considered. The similarities between the main definitions of the theory of agents
    and the theory of evolution are noted. It is noted that the main evolution models and evolutionary
    algorithms can be successfully used in the design of multi-agent systems. An analysis of existing
    methods and methodologies for designing agents and multi-agent systems is carried out. The existing differences in
    approaches to the design of multi-agent systems are noted. The main types of models are described and
    their most important characteristics are given. A model of agent interaction, including a description of
    services, relationships and obligations existing between agents, is presented. The model of
    relations (contacts), which defines communication links between agents, is described. The importance
    and prospects of using the agent-based approach to the design of multi-agent systems are noted. The
    concept of designing agents and multi-agent systems, according to which the design process includes the
    basic components of self-organization, including the processes of interaction, crossing, adaptation to the
    environment, etc., is proposed. Various approaches to the evolutionary design of artificial systems are
    considered. An evolutionary model of the formation of agents and agencies as the main component of
    evolutionary design is proposed. Modified evolutionary crossing-over operators to implement the agent
    design process are proposed.
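    The paper's modified crossing-over operators are not reproduced here, but the classic single-point crossover they build on can be sketched for agent genotypes as:

```python
import random

def single_point_crossover(parent_a, parent_b, rng=None):
    """Classic single-point crossover on two equal-length agent genotypes:
    children swap gene tails after a randomly chosen cut point."""
    rng = rng or random.Random()
    point = rng.randrange(1, len(parent_a))  # cut strictly inside the genotype
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```

    A modified operator of the kind the abstract proposes would constrain the cut point or gene exchange using agent- and agency-level structure rather than choosing uniformly.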

  • METHOD FOR DETECTING FEATURE POINTS OF AN IMAGE USING SIGN REPRESENTATIONS

    A.N. Karkishchenko, V.B. Mnukhin
    Abstract

    The aim of the study is to develop a method for detecting feature points of a digital image
    that is stable with respect to a certain class of brightness transformations. The need for such a
    method is due to the needs of detecting feature points of images in video surveillance systems and
    face recognition, often working in a changing light environment. A feature of the proposed method
    that distinguishes it from a number of well-known approaches to the problem of distinguishing
    characteristic points is the use of the so-called sign representation of images. In contrast to the
    usual defining of a digital image by a discrete brightness function, with a sign representation, the
    image is set in the form of an oriented graph corresponding to the binary relation of the increase
    in brightness on a set of pixels. Thus, the sign representation determines not a single image, but a
    set of images, the brightness functions of which are connected by strictly monotonic brightness
    transformations. It is this property of the sign representation that determines its effectiveness for
    solving the problems caused by the goal set above. A feature of the method under consideration is
    a special approach to the interpretation of the characteristic points of the image. This concept in
    image processing theory is not strictly defined; we can say that the characteristic point is characterized
    by increased "complexity" of the image structure in its vicinity. Since the sign representation
    of the image can be represented in the form of a directed graph, in this paper, to evaluate the
    complexity measure of the local neighborhood of its vertices, it is proposed to use the ranking
    method known in the spectral theory of graphs based on the Perron-Frobenius theorem. Its essence
    lies in the fact that the value of the component of the so-called Perron eigenvector of the
    adjacency matrix of this graph acts as a measure of the complexity of the vertex. To conduct experimental
    studies of the proposed approach, a set of programs was developed, the results of
    which confirm the efficiency of the method and demonstrate that with its help it is possible to obtain
    results close to the expected ones on model examples. The paper also offers a number of recommendations
    on the use of this method.
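    The Perron-vector complexity measure can be approximated by power iteration on the adjacency matrix; the authors' programs are not public, so the following is only the textbook computation the abstract refers to:

```python
def perron_vector(adj, iters=100):
    """Approximate the Perron eigenvector of a non-negative adjacency
    matrix by power iteration; component i scores the 'complexity'
    of vertex i's neighbourhood."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iters):
        # One matrix-vector product, then L1 normalization.
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(w) or 1.0
        v = [x / norm for x in w]
    return v
```

    By the Perron-Frobenius theorem the limit vector is non-negative, and vertices with structurally richer neighbourhoods receive larger components, which is the ranking criterion described above.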

  • AN ONTOLOGICAL APPROACH TO DISTRIBUTED COMPUTING TECHNOLOGIES IMPLEMENTATION ON THE INTERNET

    V.M. Kureichik, I.B. Safronenkova
    Abstract

    The development of distributed computing technologies has made it possible to unite geographically
    distributed resources and has enabled the effective solving of resource-intensive problems
    in various fields of science and technology. At the same time, a set of problems has arisen
    that demands new approaches which take into account contemporary Internet technologies.
    In this paper, the problem of workload relocation in a distributed computer-aided design
    system (DCAD) operating in the “fog” environment is considered. The goal
    of this paper is ontological approach development to workload relocation problem-solving in
    DCAD taking into account some “fog” environment features. The ontological approach involves
    an ontological procedure implementation, which allows “filtering” the candidate-nodes, which
    have insufficient resources for workload relocation. The scientific novelty of this paper is the
    use of ontological models for workload relocation problem-solving in DCAD. This reduces the
    number of candidate nodes in the “fog” for workload relocation, thereby shortening the time of
    relocation process modeling and, consequently, the total time of workload relocation
    problem-solving. The fundamental difference of the presented approach is the application of
    domain knowledge, represented in an ontological model, to workload relocation problem-solving.
    The experimental results have shown the expediency of ontological analysis for workload
    relocation problem-solving.
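    The "filtering" of candidate nodes with insufficient resources can be sketched as a simple predicate over node descriptions (field names here are illustrative stand-ins for the paper's ontology concepts):

```python
def filter_candidates(task, nodes):
    """Ontology-style pre-filtering: drop fog nodes whose offered
    resources are insufficient for the relocated workload."""
    required = task["requires"]
    return [n for n in nodes
            if all(n["offers"].get(k, 0) >= v for k, v in required.items())]
```

    Only the nodes that survive this filter enter the (more expensive) relocation modeling step, which is where the reported time savings come from.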

  • POPULATION ALGORITHM FOR CONSTRUCTING A TREE OF SOLUTIONS BY METHOD OF CRYSTALLIZATION OF ALTERNATIVES FIELD

    B.K. Lebedev, O.B. Lebedev, V.B. Lebedev
    Abstract

    In some cases, it becomes necessary to establish a correspondence between the declared
    and actual value of a categorical variable on the basis of a set of object characteristics. In this
    case, there is a need for a classifier with an optimal sequence of the considered attributes with a
    given value of the objective function. The target variable can be: yes, no, variety number, class
    number, etc. This paper solves the problem of constructing a classification model in the form of an
    optimal sequence of the considered attributes and their values included in the route from the root
    vertex to the terminal vertex with a given value of the target variable. If a classifier is required
    that includes the possibility of alternative answers, then first, independently from each other, optimal
    routes are built for each value of the target variable, and then these routes are combined
    ("glued") into a single binary decision tree. In the algorithm for constructing a classifier based on
    the method of crystallization of a placer of alternatives, each solution Qk is interpreted as an oriented
    route Mk on a binary decision tree. Let us call the ordinal number of an element in the directed
    route Mk the position si ∈ S = {si | i = 1, 2, …, nA}. An element of the route Mk is the pair
    (xi, ui), where xi corresponds to Ai, and ui in the route Mk is an edge outgoing from xi that
    corresponds to the value of Ai chosen together with Ai. The second index of the element ui is
    determined after the choice of Ai placed in the position sj+1 adjacent to sj. The work of the
    decision tree construction algorithm
    is based on the use of collective evolutionary memory, which is understood as information
    reflecting the history of the search for a solution. The algorithm takes into account the tendency to
    use alternatives from the best solutions found. A peculiarity is the presence of an indirect
    exchange of information (stigmergy). The totality of data on alternatives and their assessments
    constitutes a placer of alternatives. The key points of the analysis of alternatives in the process
    of evolutionary collective adaptation are considered. Experimental studies have shown that the
    developed algorithm finds solutions that are not inferior in quality, and sometimes surpass their
    counterparts by an average of 3–4 %. The time complexity of the algorithm, obtained experimentally,
    lies within O(n^2)–O(n^3).

  • COMPARATIVE ANALYSIS OF MISSING DATA RECOVERY METHODS

    A.A. Sorokin, A.V. Dagaev, I.M. Borodyansky
    Abstract

    In recent decades, the methods of system analysis have been developing qualitatively. It is
    associated with an increase in the rate of technical development, the densification of time processes,
    the rapid growth of accumulated information and new capabilities of computer technology.
    These include methods for analyzing large amounts of data, methods of data mining, methods of
    analytical modeling, methods of parallel data processing, neural network methods, forecasting
    methods, and others. The presented methods make it possible to quickly and efficiently process
    heterogeneous clusters of information, accumulate and synthesize data, generalize and classify
    information. The last of the presented groups comprises methods of interpolation and extrapolation
    of lost, damaged or missing information. These methods make it possible to structure, restore and
    model information based on statistical data and mathematical and algorithmic methods. Thus, the article deals
    with the problem of recovering missing data in graphic and complex objects. A review of the
    literature on the problems under consideration is given. The sources provide extensive information
    on the topic: genetic algorithms used for spatial interpolation are presented; the solution of
    problems of heterogeneity in the interpolation of seismic data is considered; the use of
    spline approximation to calculate the characteristics of nonlinear electronic components; the
    method of constructing a model of three-dimensional parametric rational bodies using generalized
    Bezier interpolation is analyzed, which allows modeling the shape of a body and anisotropic
    space; methods using fuzzy linear equations are described, which are widespread in computer
    vision; the method of adaptive interpolation based on the gradient and taking into account the
    local gradient of the original image is investigated. The article compares several common methods
    of interpolation and data restoration, such as bilinear interpolation and Bezier surfaces.
    Each method and features of its application within the framework of the experiment are briefly
    described. The results of a series of experiments with the presented methods under different
    numbers of tests are presented. In conclusion, a summary is drawn about the rationality of
    choosing one of the proposed methods without resorting to a long field experiment in each case.
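    Bilinear interpolation, one of the compared methods, reconstructs a missing sample from its four known neighbours; a minimal sketch:

```python
def bilinear(p00, p10, p01, p11, tx, ty):
    """Bilinear interpolation between four corner samples of a grid cell;
    (tx, ty) in [0, 1] is the position of the missing point inside the cell."""
    top = p00 * (1 - tx) + p10 * tx        # interpolate along the top edge
    bottom = p01 * (1 - tx) + p11 * tx     # interpolate along the bottom edge
    return top * (1 - ty) + bottom * ty    # blend the two edges vertically
```

    Bezier-surface restoration generalizes the same idea with higher-order blending polynomials over a larger neighbourhood.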

  • NON-PARAMETRIC METHOD FOR DETECTING BREAKDOWN OF TIME SERIES USING THE RANDOM WALKS THEORY MECHANISM

    G.F. Filaretov, Z. Bouchaala
    Abstract

    The task of on-line detection of a sudden change in the probabilistic properties of a time series
    is considered, usually interpreted as the task of detecting a change point (breakdown) in the
    characteristics of the observed stochastic process. The relevance of research on this topic is
    noted, driven by the emergence of ever new applied problems where methods and algorithms for
    breakdown detection can be successfully used, in particular, when creating monitoring systems
    in industry, ecology, medicine, etc. Two main varieties of methods for breakdown detection are
    discussed: parametric and nonparametric. It is noted that, although nonparametric methods are,
    other things being equal, inferior to parametric methods in terms of efficiency (the speed of
    breakdown detection), they also have a number of advantages, in particular not requiring detailed
    information about the probabilistic properties of the controlled process. This is fundamentally
    important for building monitoring systems, where detailed information about these properties may
    either be completely absent, so that a rather laborious preliminary study is necessary, or be
    unreliable. An original sequential nonparametric algorithm for detecting discord is proposed based on
    the implementation of the random walk mechanism or, more specifically, using the theory of success
    runs. The operating principle of the control algorithm is explained and its description is given. The results
    of the study of the basic statistical characteristics of the algorithm, including the determination of
    its effectiveness, and results of comparison with known parametric methods, are given. The area of possible
    practical use of the proposed algorithm is highlighted, where its effectiveness remains quite high.
    The prospects of using the proposed algorithm as part of the software and algorithmic support of monitoring
    systems for various purposes are noted.
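    A success-runs detector of the kind described can be sketched as follows: an alarm is raised once a fixed-length run of observations exceeds a reference level (the reference and run length here are illustrative parameters, not the paper's tuned values):

```python
def detect_change(series, reference, run_length=5):
    """Success-runs change detector: signal at the first index where
    `run_length` consecutive observations exceed `reference`."""
    run = 0
    for i, x in enumerate(series):
        run = run + 1 if x > reference else 0  # extend or reset the run
        if run >= run_length:
            return i  # alarm: probable change point
    return None  # no breakdown detected
```

    Because the decision depends only on exceedances of a reference level, no parametric model of the process is needed, which is the nonparametric property the abstract emphasizes.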

SECTION II. DESIGN AUTOMATION

  • LOGICAL RESYNTHESIS OF COMBINATIONAL CIRCUITS FOR RELIABILITY INCREASE

    N.O. Vasilyev, M.A. Zapletina, G.A. Ivanova, A.N. Schelokov
    Abstract

    External influences must be taken into account for microelectronic devices operating
    in space. In these conditions, the operation of the device is hampered by the negative
    effect of radiation on the electronic components of the circuit. Exposure to heavy charged particles
    leads to single faults of logic elements, due to which the operation of a whole device can be
    disrupted. In this regard, designed spacecraft electronic circuits must meet increased requirements
    for the fault tolerance of integrated circuits (ICs). The shrinking of technological
    design standards for ICs makes the problem of fault tolerance relevant for civilian microelectronic
    products as well. This problem is usually addressed by methods of
    hardware protection, which include methods of error-correcting coding, methods of redundancy,
    as well as methods of logical protection. The paper considers the methods for assessing the
    IC tolerance to single faults in logic elements, as well as the main methods of circuits failure
    protection. The paper proposes a resynthesis technique for logical combinational circuits, using
    logical constraints derived from the resolution method to assess the IC resistance to single faults.
    During resynthesis, it is proposed to use the methods of logical protection of vulnerable parts of
    the circuit. Unlike redundancy and error-correcting coding, this does not cause a perceptible
    increase in the area occupied by the device.
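    For contrast with the proposed logical protection, the classic redundancy baseline, a triple modular redundancy (TMR) majority voter, is trivially expressed at the bit level:

```python
def tmr_vote(a, b, c):
    """Triple modular redundancy: majority vote of three copies of a
    logic signal; a single faulty copy is masked by the other two."""
    return (a & b) | (a & c) | (b & c)
```

    TMR masks any single fault but triples the protected logic plus a voter, which is exactly the area overhead the resynthesis technique above avoids.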

  • SEARCH POPULATION ALGORITHM FOR VLSI ELEMENTS PLACEMENT

    B.K. Lebedev, O.B. Lebedev, V.B. Lebedev
    Abstract

    The paper considers a population search algorithm for the placement of VLSI components.
    By analogy with the process of the emergence and formation of crystals from matter, the process
    of generating a solution by sequential manifestation and concretization of the solution based on an
    integral placer of alternatives is called the method of crystallization of a placer of alternatives.
    The solution Qk of the placement problem is represented as a bijective mapping Fk: A → P, in which
    each element of the set A corresponds to exactly one element of the set P and vice versa. The
    metaheuristic of crystallization of a placer of alternatives underlying the algorithm searches for
    solutions taking into account collective evolutionary memory, which means information reflecting
    the history of the search for a solution and the memory of the search procedure. A distinctive feature
    of the metaheuristic used is that it takes into account the tendency to use alternatives from the
    best found solutions. Compact data structures for storing solution interpretations and memory are
    proposed. An algorithm associated with evolutionary memory seeks to memorize and reuse ways
    to achieve better results. The developed algorithm belongs to the class of population algorithms. The iterative
    process of finding solutions includes three stages. At the first stage of each iteration, the constructive
    algorithm generates nq solutions Qk. The work of the constructive algorithm is based on the
    indicators of the main integral placer of alternatives – the matrix R, which stores the integral indicators
    of the solutions obtained at the previous iterations. The process of assigning an item to a
    position involves two stages. In the first stage, the element is selected, and in the second stage, the
    position pj. In this case, the restriction must be fulfilled: each element corresponds to one position
    pj. The estimate ξk of the solution Qk and the estimate of the utility δk of the set of positions Pk selected
    by the agents are calculated. The work uses a cyclical method of forming decisions.
    In this case, the accumulation of estimates of the integral utility δk in the main integral placer of
    alternatives R is performed after the complete formation of the set of solutions Q. At the second
    stage of the iteration, the estimates of the integral utility δk are increased in the main integral
    placer of alternatives − the matrix R. At the third stage of the iteration, the estimates of the utility
    δk of the integral placer of alternatives R are reduced by a priori a given value δ*. The algorithm
    ends after the specified number of iterations has been completed. Comparative analysis with other
    solution algorithms was carried out on standard test examples (benchmarks) of the IBM corporation,
    while the solutions synthesized by the CAF algorithm exceed the solution efficiency of the
    known methods by an average of 6 %. The time complexity of the algorithm is O(n^2)–O(n^3).
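    The second and third stages of each iteration, depositing utility estimates into the matrix R and then uniformly reducing them, can be sketched as follows (the data layout is an illustrative assumption, not the paper's structures):

```python
def reinforce(R, solutions, evaporation):
    """One update of the integral placer of alternatives: deposit each
    solution's utility on its (element, position) pairs, then uniformly
    reduce all entries of R by a fixed a priori amount ('evaporation')."""
    for assignment, utility in solutions:
        for element, position in assignment:
            R[element][position] += utility
    for row in R:
        for j in range(len(row)):
            row[j] = max(0.0, row[j] - evaporation)
    return R
```

    Entries of R then bias the constructive algorithm of the next iteration toward (element, position) choices that appeared in good solutions, which is the collective evolutionary memory the abstract describes.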

  • LOGIC RESYNTHESIS METHODS FOR LAYOUT DESIGN OF MICROELECTRONIC CIRCUITS

    N.O. Vasilyev, P.I. Frolova, G.A. Ivanova, A.N. Schelokov
    Abstract

    As the size of electronic components decreases, the number of design rules increases. To reduce
    design rules checking runtime for 22 nm and below technologies, regular structures are used
    in the lower layers of the layout. When designing circuits based on a regular template, it becomes
    possible to combine the logical and layout design stages. This task is also relevant for designing
    circuits on FPGAs. This paper discusses a method for structural optimization of logic circuits at
    the stage of layout design. The method is adapted for use in the design route of circuits with regular
    structures in the lower layers of the layout, as well as for resynthesis of technology mappings
    on FPGAs. When working with circuits with regular structures, logic synthesis is performed in a
    basis of elements for which compact layout templates are built. This approach simplifies the layout
    design stage, and also leads to an additional reduction in the area of the designed device. Optimization
    of logic circuits for FPGAs is carried out using a simulated annealing algorithm that performs
    logic operations on a special graph model that takes into account the features of the FPGA.
    Taking into account the features of various technologies in the proposed method allows achieving
    good results in terms of such parameters as the area occupied by the circuit.
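    The simulated annealing loop used for the FPGA-oriented optimization can be sketched generically; the special graph model of the paper is omitted, and the cost and neighbour functions below are placeholders:

```python
import math
import random

def anneal(initial, neighbour, cost, t0=10.0, cooling=0.95, steps=500, rng=None):
    """Generic simulated annealing: accept worse neighbours with
    probability exp(-delta/T) while the temperature T decays."""
    rng = rng or random.Random()
    state, t = initial, t0
    best = state
    for _ in range(steps):
        cand = neighbour(state, rng)
        delta = cost(cand) - cost(state)
        # Always accept improvements; accept degradations probabilistically.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= cooling
    return best
```

    In the paper's setting, `neighbour` would apply a logic transformation on the FPGA-aware graph model and `cost` would estimate the occupied area.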

  • HYBRID APPROACH TO THE JOINT SOLUTION OF PLACEMENT AND TRACING PROBLEMS

    L.A. Gladkov, N.V. Gladkova, Dzhabbar Yasir Yasir Mukhanad
    Abstract

    The article proposes an integrated approach to solving the problems of placing and tracing elements
    of circuits of electronic computing equipment. The approach is based on the joint solution of
    placement and tracing problems using fuzzy genetic methods. A description of the problem under
    consideration is given and a brief analysis of existing approaches to its solution is performed. The
    article discusses integrated approaches to solving optimization problems of computer-aided design of
    digital electronic computing equipment circuits. The urgency and importance of developing new
    effective methods for solving such problems is emphasized. It is noted that an important direction in
    the development of optimization methods is the development of hybrid methods and approaches that
    combine the advantages of various methods of computational intelligence. The article describes the
    following main points: the structure of the proposed algorithm and its main stages; modified genetic
    crossover operators; models for the formation of the current population; and modified heuristics,
    operators and strategies for finding optimal solutions. The results of computational experiments
    are presented. The experiments carried out confirm the effectiveness of the proposed approach.
    In conclusion, a brief analysis of the results obtained is given.

  • AUTOMATED STRUCTURAL-PARAMETRIC SYNTHESIS OF A STEPPED DIRECTIONAL COUPLER ON COUPLED LINES BASED ON A GENETIC ALGORITHM

    Y.V. Danilchenko, V.I. Danilchenko, V.M. Kureichik
    Abstract

    An automated approach to the structural-parametric synthesis of a stepped directional coupler
    on connected lines based on a genetic algorithm (GA) is described, which makes it possible to
    create an algorithmic environment in the field of genetic search for solving NP complete problems,
    in particular, the structural-parametric synthesis of a stepped directional coupler on connected
    lines. The purpose of this work is to find ways of structural-parametric synthesis of a stepped directional
    coupler on coupled lines based on bioinspiration theory. The scientific novelty lies in
    the development of a modified genetic algorithm for automated structural-parametric synthesis of
    a stepped directional coupler on connected lines. The problem statement in this work is as follows:
    to optimize the synthesis of passive and active microwave circuits by using a modified GA. A
    fundamental difference from the known approaches is the use of new modified genetic structures in
    automated structural-parametric synthesis; in addition, a new method for calculating a stepped
    directional coupler on connected lines based on a modified GA is substantiated in the work. Thus, the
    problem of creating methods, algorithms and software for automated structural synthesis of microwave
    modules is currently of particular relevance. Its solution will improve the quality characteristics
    of the designed devices, reduce design time and costs, and reduce the requirements for
    developer qualifications.

  • ANALYSIS OF CHARACTERISTICS OF CED CIRCUITS BASED ON REDUNDANT ENCODING METHODS

    D. V. Telpukhov, T. D. Zhukova, A. N. Schelokov
    Abstract

    Typically, soft errors that occur in electronic equipment under the influence of various
    destabilizing factors were the concern of memory element developers. But recent research in this
    area shows that with the development of microelectronics, the number of soft errors in combinational
    circuits is increasing, and soon their frequency of occurrence will be comparable to that in
    unprotected memory elements. Presently, to address this problem, special attention has been paid
    to methods based on control devices. These methods, by introducing additional structural redundancy,
    enable a circuit to automatically detect and/or correct errors that occur in it. However, depending
    on the initial parameters and internal structure of the protected circuit, the various methods of
    synthesizing concurrent error detection (CED) circuits yield devices with different efficiency and
    reliability characteristics. That is why it is necessary to define and develop evaluation functions
    for analysis in order to find the best method of synthesizing a CED circuit for a certain device
    without any preliminary modeling. This work is devoted to developing evaluation functions for
    structural redundancy and reliability characteristics, using as an example CED circuits based on
    spectral and low-density parity-check codes. A comparative and correlation analysis of analytical
    data against experimental values was carried out to evaluate the efficiency of the functions
    obtained as a result of the study.
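    The parity-based protection referred to above can be illustrated in miniature: a check bit is predicted independently of the protected logic and compared with the parity of the observed output, flagging any odd number of bit flips. A small Python sketch; the 2-bit adder and the inlined predictor are illustrative stand-ins, not the article's circuits:

```python
def adder2(a, b):
    """Protected combinational function: 2-bit adder (3-bit result)."""
    return (a + b) & 0b111

def parity(x):
    """Parity (XOR of all bits) of a non-negative integer."""
    return bin(x).count("1") & 1

def predicted_parity(a, b):
    # Independent parity predictor: recomputes the function here for
    # illustration; a real CED predictor is separate, cheaper logic.
    return parity(adder2(a, b))

def ced_check(a, b, observed_output):
    """Concurrent error detection: flag a mismatch between the parity of
    the observed output and the independently predicted parity."""
    return parity(observed_output) != predicted_parity(a, b)

# fault-free case: no error flagged
ok = not ced_check(2, 3, adder2(2, 3))
# inject a single-bit soft error into the output: detected
detected = ced_check(2, 3, adder2(2, 3) ^ 0b010)
```

    A single parity bit detects any odd number of flipped output bits; the spectral and LDPC-based schemes studied in the article trade more check bits for wider error coverage.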

SECTION III. CONTROL SYSTEMS AND NONLINEAR DYNAMICS

  • THE MATHEMATICAL PROBLEM OF OPTIMAL CONTROL OF A STRING

    G. V. Kupovykh, A. G. Klovo, I. A. Lyapunova
    Abstract

    It is generally accepted that optimal control problems, or system design problems, determine
    for a given object or system of control objects a law or a certain control sequence of actions
    that provides a maximum or minimum of a given set of system quality criteria. In this case, the
    time-optimal problem can be considered, i.e., the problem of bringing the system to a given state
    in the shortest time. We also study problems of minimizing a given functional for a fixed system
    control time. Optimal control is closely related to the choice of the most rational modes for
    managing complex objects. Many works have been devoted to the control problem, and well-known
    mathematical schools are currently engaged in such research. In problems with lumped parameters,
    the systems under study are described by ordinary differential equations or their systems; in
    this case, the Pontryagin maximum principle plays an important role. For partial differential
    equations, we speak of systems with distributed parameters. In this paper, we investigate the
    possibility of synthesizing optimal control for a single system with distributed parameters.
    A model of string oscillation under the influence of control functions with boundary conditions
    is considered. The role of the choice of the functional to be minimized in enabling the synthesis
    of optimal control is shown. The control action is sought at each point of the time interval,
    which makes it possible to construct it explicitly. The conditions for the existence of
    everywhere-optimal control in the corresponding functional spaces are formulated. In a specific
    statement of the problem, everywhere-optimal control is constructed explicitly.
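    As a concrete illustration of this problem class (the symbols, boundary conditions, and terminal functional below are representative assumptions, not the authors' exact statement), a controlled string with a distributed forcing term and a terminal-energy criterion reads:

```latex
\begin{aligned}
 & u_{tt}(x,t) = a^{2}\,u_{xx}(x,t) + f(x,t), && 0 < x < l,\; 0 < t \le T,\\
 & u(0,t) = u(l,t) = 0, \qquad u(x,0) = \varphi(x), \quad u_{t}(x,0) = \psi(x),\\
 & J[f] = \int_{0}^{l} \bigl( u^{2}(x,T) + u_{t}^{2}(x,T) \bigr)\, dx \;\to\; \min_{f},
\end{aligned}
```

    i.e., choose the control f(x,t) at each point of the time interval so that the string is brought as close as possible to rest at the terminal time T.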

  • STATISTICAL METHODS FOR EVALUATING THE CONNECTIVITY OF DYNAMIC SYSTEMS BASED ON A SEPARATE TIME PROJECTION

    A. S. Cherepantsev
    Abstract

    Based on the approaches of nonlinear dynamics to the estimation of invariants of a dynamical
    system, the possibility of determining the degree of coupling of various dynamical systems is
    considered. The dynamic coupling of the studied systems is understood as the number of common
    components in the systems that determine the time evolution of the observed projections. The proposed
    method has been tested on model dynamic systems and used to analyze the behavior of complex
    dynamic systems observed in geophysics – apparent electrical resistance in two orthogonal
    directions and relative vertical surface displacements. The long-term monitoring data from a
    seismically active region used in the calculations are of interest because of their documented
    sensitivity to the stress-strain state of the geophysical medium. Treating a parameter of the
    state of the medium as a common component of the observed dynamic processes of various natures,
    the number of common components of the systems is estimated with the proposed methodology. The
    paper proposes a statistical method for finding individual samples of synchronous changes in
    variations of the dynamic parameters of the observed set of geophysical fields. Given the
    non-stationary formation of a dynamic system in the presence of a large number of acting external
    factors, it is relevant to determine the time intervals over which the properties of the dynamic
    systems synchronize when a dominant effect appears. Applying the developed method leads to the
    conclusion that variations in the correlation dimension of volumetric deformation synchronize at
    different time scales during the phase of occurrence of strong seismic events.
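    The correlation dimension mentioned above is commonly estimated with the Grassberger-Procaccia procedure: delay-embed the scalar series, count close pairs of embedded points, and take the slope of log C(r) between two radii. A brute-force Python sketch; the radii, delay, embedding dimension, and test signal are illustrative choices, not the article's settings:

```python
import math

def delay_embed(x, dim=2, tau=1):
    """Takens delay embedding of a scalar series into R^dim."""
    n = len(x) - (dim - 1) * tau
    return [tuple(x[i + j * tau] for j in range(dim)) for i in range(n)]

def correlation_sum(points, r):
    """Fraction of point pairs closer than r (Grassberger-Procaccia C(r))."""
    n, close = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                close += 1
    return close / (n * (n - 1) / 2)

def correlation_dimension(x, r1, r2, dim=2, tau=1):
    """Two-radius slope of log C(r): a local estimate of D2."""
    pts = delay_embed(x, dim, tau)
    c1, c2 = correlation_sum(pts, r1), correlation_sum(pts, r2)
    return math.log(c2 / c1) / math.log(r2 / r1)

# sanity check: a linear signal embeds onto a line, so D2 should be near 1
signal = [0.01 * i for i in range(500)]
d2 = correlation_dimension(signal, r1=0.1, r2=0.5)
```

    In practice the slope is fitted over a whole scaling range of radii rather than two points, and the estimate is tracked in sliding time windows to detect the synchronization episodes discussed above.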

  • APPLICATION OF THE METHOD OF INTEGRAL ADAPTATION FOR SYNTHESIS OF ADAPTIVE LAWS OF CONTROL IN A PNEUMATIC DRIVE UNDER HARMONIC DISTURBANCES

    E.N. Obukhova
    Abstract

    The active use of electro-pneumatic systems in various areas of industrial automation is due
    to such high performance indicators of the pneumatic drive as reliability, speed, low cost, and
    usability in high-humidity conditions as well as in explosive and fire-hazardous environments.
    The article gives a brief analysis of domestic and foreign scientific works devoted to the
    development of various methods of pneumatic system control, in which the problem of synthesizing
    effective control laws that adapt to external disturbances is posed. The aim of this work is to
    develop an adaptive nonlinear synergetic control law to suppress a disturbing effect specified
    and additively introduced into the mathematical model in the form of a harmonic function. The
    adaptive control law was synthesized using the method of integral adaptation, which is part of
    the concept of synergetic control theory. The obtained computer modeling results confirm the
    adaptive properties of the derived nonlinear synergetic control laws and the achievement of the
    stated technological control goal: moving the rod to a given position under harmonic disturbance.
    An important stage in analyzing the results presented in this work is conducting experimental
    studies that test the performance of the analytically synthesized nonlinear synergetic control
    laws on the educational and experimental stand of vertical and horizontal displacement pneumatic
    drives from the Camozzi Company. For the practical implementation of the obtained synergetic
    control laws, the controller was programmed in the CoDeSys industrial automation environment
    using the IEC 61131-3 graphical language of function block diagrams (FBD).
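    The article's synergetic derivation is not reproduced here, but the core idea of integral adaptation against a harmonic disturbance can be sketched on a deliberately simplified first-order plant: the disturbance estimate is built from sine/cosine basis states whose integral update laws follow from a standard Lyapunov argument. The plant, gains, and disturbance parameters below are all illustrative assumptions, not the article's pneumatic model:

```python
import math

def simulate(k=5.0, gamma=10.0, A=1.0, w=2.0, x_ref=1.0,
             dt=1e-3, t_end=20.0):
    """First-order plant dx/dt = u + d(t) with harmonic disturbance
    d(t) = A sin(w t).  The control u = -k*e - d_hat uses the adaptive
    estimate d_hat = z1*sin(w t) + z2*cos(w t); the integral adaptation
    laws dz/dt = gamma * e * basis make V = e^2/2 + |z_err|^2/(2*gamma)
    decrease along trajectories."""
    x, z1, z2, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        e = x - x_ref
        s, c = math.sin(w * t), math.cos(w * t)
        d = A * s                       # unknown to the controller
        d_hat = z1 * s + z2 * c
        u = -k * e - d_hat
        x += dt * (u + d)               # Euler step of the plant
        z1 += dt * gamma * e * s        # integral adaptation laws
        z2 += dt * gamma * e * c
        t += dt
    return x - x_ref, z1, z2

err, z1, z2 = simulate()                # z1 should approach A, z2 -> 0
```

    As the estimate converges (z1 toward A, z2 toward 0), the harmonic disturbance is cancelled and the position error decays, mirroring the rod-positioning goal above.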

SECTION IV. INFORMATION SECURITY

  • METHOD OF IMPLEMENTING HOMOMORPHIC DIVISION

    L. K. Babenko, I. D. Rusalovsky
    Abstract

    The article deals with the problems of homomorphic cryptography, one of the young branches of
    the field. Its peculiarity is that it is possible
    to process encrypted data without preliminary decryption in such a way that the result of operations
    on encrypted data is equivalent, after decryption, to the result of operations on open data.
    The article provides a brief overview of the areas of application of homomorphic encryption. To
    solve various applied problems, support for all mathematical operations is required, including the
    division operation, and the ability to perform this operation homomorphically will expand the
    possibilities of using homomorphic encryption. The paper proposes a method of homomorphic
    division based on an abstract representation of the ciphertext in the form of an ordinary fraction.
    The paper describes the proposed method in detail and gives an example of its practical
    implementation. It is proposed to divide data processing into two levels: cryptographic and
    mathematical. At the cryptographic level, a fully homomorphic encryption algorithm is used and
    the basic homomorphic operations are performed: addition, multiplication, and subtraction. The
    mathematical level is a superstructure on top of the cryptographic level that expands its
    capabilities: there, the ciphertext is represented as an ordinary fraction, and the homomorphic
    division operation becomes possible. The paper also provides a practical example of applying the
    homomorphic division method based on the Gentry algorithm for integers. Conclusions and possible
    ways of further development are given.
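    The fraction-based mathematical level described above can be sketched as a thin wrapper: each value is a pair (numerator, denominator), and division reduces to cross-multiplication, so the cryptographic level never needs a division primitive. In this sketch plain integers stand in for ciphertexts; a real implementation would hold ciphertexts of a fully homomorphic scheme such as Gentry's, and `decrypt` would first decrypt both components:

```python
class HFrac:
    """Mathematical-level wrapper: an encrypted value held as a fraction
    (num, den) of ciphertexts.  Division becomes multiplication by the
    flipped fraction, so the cryptographic level only ever performs the
    homomorphic add, subtract, and multiply it already provides."""

    def __init__(self, num, den=1):
        self.num, self.den = num, den

    def __add__(self, o):      # a/b + c/d = (a*d + c*b) / (b*d)
        return HFrac(self.num * o.den + o.num * self.den, self.den * o.den)

    def __sub__(self, o):      # a/b - c/d = (a*d - c*b) / (b*d)
        return HFrac(self.num * o.den - o.num * self.den, self.den * o.den)

    def __mul__(self, o):      # (a/b) * (c/d) = (a*c) / (b*d)
        return HFrac(self.num * o.num, self.den * o.den)

    def __truediv__(self, o):  # (a/b) / (c/d) = (a*d) / (b*c) -- no division
        return HFrac(self.num * o.den, self.den * o.num)

    def decrypt(self):
        # after decrypting both components, one plaintext division
        # recovers the value
        return self.num / self.den

q = (HFrac(10) + HFrac(2)) / HFrac(4)   # (10 + 2) / 4
```

    Note that numerators and denominators grow with each operation, so a practical scheme must manage ciphertext noise and operand size at the cryptographic level.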

  • PROBABILISTIC CHARACTERISTICS OF THE THRESHOLD ALGORITHM FOR DETECTING SYNCHRONIZING PULSES IN THE QUANTUM KEY DISTRIBUTION SYSTEM BASED ON INFORMATION FROM AN ADJACENT PAIR OF TIME SEGMENTS

    K. E. Rumyantsev, Y. K. Mironov, P. D. Mironova
    Abstract

    Quantum key distribution (QKD) systems provide increased security of transmitted information.
    Stable operation of a QKD system requires accurate synchronization of user stations with minimal
    time costs. An algorithm for detecting a sync signal with a threshold test is
    proposed. It is assumed that the sync pulse is simultaneously in two adjacent time segments. The
    probability of detecting a pair of time segments where a sync pulse is present is determined by the
    probability of exceeding the threshold level by the total number of signal and noise pulses recorded
    in two adjacent segments. The research aims at a comparative analysis of the threshold level and
    the probabilistic characteristics of the synchronization equipment during threshold testing of
    each pair of time segments within a time frame, obtained using the Gaussian and Poisson models
    for the number of photons and dark current pulses (DCP) recorded during the analysis of a time
    segment. The probabilistic characteristics of the sync signal detection algorithm in a quantum
    key distribution system are studied based on comparing the number of photons from an adjacent
    pair of time segments with a threshold level, and the approximation of the statistical properties
    of the processes at the photodetector output by the Poisson law and the normal distribution is
    analyzed. The influence of the Poisson and Gaussian models on the choice of the threshold level
    and on the calculated synchronization efficiency during the threshold testing of each pair of
    time segments within the time frame is estimated. It was established that choosing the threshold
    level based on the normal distribution gives an underestimated value: approximating the photon
    and dark current pulse statistics by a normal law yields a threshold level lower than the
    required one, and the difference grows with stricter requirements on the false alarm probability.
    The obtained probabilistic properties of the sync signal detection algorithm based on the
    analysis of the sum of counts from an adjacent pair of segments with a threshold level allow us to
    formulate recommendations for choosing an approximation of the signal statistics: for rapid
    calculations of probabilistic characteristics, it is advisable to use the Gaussian model; if a higher
    analysis accuracy is required, it is recommended to use the Poisson model.
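    The underestimation effect described above is easy to reproduce numerically: take the smallest integer threshold whose exact Poisson tail is below the false-alarm target, and compare it with the threshold given by a normal distribution matched in mean and variance. The mean count and false-alarm target below are illustrative assumptions, not the article's system parameters:

```python
import math
from statistics import NormalDist

def poisson_threshold(lam, alpha):
    """Smallest integer k with P(N >= k) <= alpha for N ~ Poisson(lam),
    found by accumulating the CDF term by term."""
    term = math.exp(-lam)           # P(N = 0)
    cdf, k = term, 0
    while 1.0 - cdf > alpha:
        k += 1
        term *= lam / k             # P(N = k) from P(N = k - 1)
        cdf += term
    return k + 1                    # first level whose tail is <= alpha

def gaussian_threshold(lam, alpha):
    """Normal approximation with the same mean and variance as the count."""
    return NormalDist(mu=lam, sigma=math.sqrt(lam)).inv_cdf(1.0 - alpha)

lam, alpha = 20.0, 1e-3             # assumed mean count and false-alarm target
k_poisson = poisson_threshold(lam, alpha)
t_gauss = gaussian_threshold(lam, alpha)
```

    For such values the exact Poisson threshold exceeds the Gaussian one, because the Poisson right tail is heavier than that of a matched normal distribution; this is the abstract's observation that the normal approximation yields a threshold lower than required, with the gap widening as alpha shrinks.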