No. 3 (2023)

Published: 2023-08-09

SECTION I. COMPUTING AND INFORMATION MANAGEMENT SYSTEMS

  • IMPROVING THE EFFICIENCY OF MANAGING NONCONFORMING PRODUCTS BASED ON AN INTEGRATED AUTOMATED INFORMATION SYSTEM OF THE ENTERPRISE

    M.V. Nikitina, A.V. Kapitanov
    Abstract

    The aim of the study is to increase the efficiency of product manufacturing on the basis of
    an integrated automated information system for managing the enterprise's nonconforming products.
    The main objectives are to analyze the automated nonconforming product management systems
    existing on the market, to develop a model of an integrated automated nonconforming product
    management system in which information is transmitted to enterprise departments through a
    unified industry document management system and a nonconforming product management information
    system, and to assess the effectiveness of the integrated automated nonconforming product
    management information system. The article notes that automated production management
    information systems are an integral part of production management and are currently implemented
    in many enterprises of the Russian Federation. An analysis is given of existing domestic
    automated control systems such as the automated control system for discrete production "Prisma",
    "Galaktika ERP", "8D. Management of nonconformities", and the AS "Quality Management". Using the
    expert method, the lowest quality indicator of the automated system "Quality Management"
    implemented at the enterprise was identified: the interoperability of the system. It was further
    revealed that, in order to minimize the receipt of products that do not meet the established
    requirements, this automated nonconformity management system must be integrated with the unified
    industry document management system. The integration of the systems and the control of the
    volume of good products manufactured and the number of detected defective products are
    considered using the example of smart LPG pressure sensors. Models of the process of analyzing
    the detection of product nonconformities before and after the integration are presented and
    described. To assess the effectiveness of the integration process, a histogram of good-product
    output efficiency is built, Gantt charts show the reduction in the time spent on the
    nonconformity detection analysis process, and the cost-effectiveness of the integration process
    is presented.

  • DEVELOPMENT OF A QUEUING SYSTEM ON FPGA FOR PROCESSING ETHERNET PACKETS

    A. V. Mangushev, V.A. Zybin, I. D. Polukhin
    Abstract

    A scheme for buffering Ethernet packets for hardware implementation of their processing
    based on FPGA has been developed. The scheme is designed at the RTL level in the SystemVerilog
    language in the Quartus II 13.1 development environment. Verification and modeling were
    carried out in the ModelSim-Altera environment. An FPGA of the Cyclone IV family, located on the
    DE2-115 debugging board, was chosen as the target platform. Particular attention is paid to data
    reception and transmission modules, as well as the implementation of a hardware queue (FIFO)
    with the possibility of its contents being modified by the processing module. The scheme is
    parameterized: the queue depth can be changed through a single parameter without modifying
    other parts of the scheme. A feature of the scheme is the ability to add any hardware module
    that monitors, processes, or encrypts network traffic. The MII interface is used for packet
    transmission and reception, which allows any available physical-layer chips to be used. The
    input and output interfaces can easily be changed, which increases the device's versatility.
    The system does not use proprietary IP cores, which makes it as portable as possible across
    FPGAs from various manufacturers. The main feature of the scheme is the low delay between
    receiving and sending a packet, determined only by the parameters of the processing module.
    The results of the work can be applied in the design of devices that transmit data with
    preprocessing, for example network equipment (switches, routers) and monitoring and
    data-collection systems.
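
    As a language-agnostic illustration of the queue described above, the following Python sketch
    models a FIFO whose depth is set by a single parameter, with a hook that lets a processing
    module rewrite the head entry in place. It is a behavioral sketch only, not the authors'
    SystemVerilog RTL; the class and method names are hypothetical.

        # Behavioral model of a depth-parameterized packet FIFO with an
        # in-place processing hook; not the authors' SystemVerilog design.
        from collections import deque

        class PacketFifo:
            def __init__(self, depth):
                self.depth = depth               # a single parameter sets the queue depth
                self.buf = deque()

            def push(self, packet):
                if len(self.buf) >= self.depth:  # queue full: signal backpressure
                    return False
                self.buf.append(bytearray(packet))
                return True

            def pop(self):
                return bytes(self.buf.popleft()) if self.buf else None

            def process_head(self, fn):
                if self.buf:
                    fn(self.buf[0])              # e.g. monitor, filter or encrypt in place

        fifo = PacketFifo(depth=4)               # change the depth here only
        fifo.push(b"\x01\x02\x03")
        fifo.process_head(lambda p: p.__setitem__(0, 0))  # zero the first byte
        assert fifo.pop() == b"\x00\x02\x03"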

  • REVIEW OF COLLABORATIVE ROBOTIC SYSTEMS AND LEGAL-SYSTEM ASPECTS OF INTERACTION WITH THEM

    D. E. Chikrin, K. R. Smolnikova
    Abstract

    Of significant interest to the robotics industry is the study of the multidisciplinary field of
    human-robot interaction (HRI). Industry 4.0 (4IR) dictates the intensive implementation of robotic
    solutions in all sectors of the economy and human life processes. That is why the interaction between
    operator and cobot is one of the most relevant topics affecting the economy, labor market
    and society as a whole. Currently, cobotics is one of the new breakthrough areas in robotics, and
    due to the development of 4IR standards, cobots have a key advantage in automation, where full
    replacement of human labor is impossible. This combination of operator and collaborative-robot
    skills will accelerate the manufacturing process and allow companies integrating cobots to
    become more competitive and streamline manufacturing tasks. The purpose of the study is to
    describe robotic systems and analyze the legal-system aspects of the interaction between cobot
    and operator in a collaborative workspace. The objectives of the
    study are: 1) general overview of collaborative robotic systems by types: tasks to be solved, work
    to be performed, and control; 2) consideration of existing risk assessment systems for operator-cobot
    interaction. The realization of the set tasks will contribute to further research in the innovative
    field of HRI, aimed at creating an environment for safe and efficient operator-cobot collaboration.
    The practical value of this paper also lies in the systematic approach to consider the field
    of cobotics to further explore safe collaboration scenarios. In our opinion, the most effective approach
    is to analyze each specific use case of a type of robot. At the same time, we note that in the
    current realities of the rapidly growing robotics sector, it is difficult to classify and unify collaborative
    robotic systems into a single act.

  • DEVELOPING A DECISION-MAKING MECHANISM FOR AUTONOMOUS COLLISION AVOIDANCE OF UNMANNED NAVIGATION: FUZZY APPROACH

    L.A. Barakat, I.Y. Kvyatkovskaya
    Abstract

    In the near future, unmanned vessels (UVs) will become increasingly important and will act
    without any human intervention. This situation raises the risk of collision between UVs and
    conventional ships. Research on maritime accidents has shown that ship collisions caused by
    violations of the International Regulations for Preventing Collisions at Sea, 1972 (COLREGs-72),
    developed by the International Maritime Organization (IMO), remain the leading type of
    navigational accident on shipping waterways. In this respect, autonomous collision prevention
    is critical for the safety of unmanned navigation at sea. Hence, this paper addresses the
    problem of autonomous collision avoidance in the open sea under conditions of good visibility.
    To this end, a fuzzy logic system for autonomous collision avoidance of UVs according to the
    rules of COLREGs-72 is proposed. The proposed Decision-Making Mechanism (DMM) is based on a
    logical schema for implementing the strategy that is best in the sense of a selected optimality
    criterion (the optimal strategy) for unmanned navigation control. The inputs to the collision
    avoidance fuzzy logic system are the navigational parameters (speed, course, position, etc.).
    The rule base of the collision avoidance fuzzy logic system consists of 17 rules for avoiding
    collisions. The authors propose a trapezoidal membership function that allows an analytical
    representation of the collision risk between a UV and a target ship, depending on the situation
    feature (encounter sector). Various information-based collision avoidance systems developed to
    date have added a safety barrier that helps prevent collisions at sea; however, further
    research and effort by scientists from many developed countries of the world are still
    required. As part of further research, the authors plan to use the described method to develop
    an information decision-making system for the movement control of an unmanned vessel.
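
    The trapezoidal membership function mentioned above admits a compact sketch; the breakpoints
    below are arbitrary placeholders, not the authors' calibrated values, and the mapping of an
    encounter feature to risk is only an assumed illustration.

        # Generic trapezoidal membership function mu(x), breakpoints a < b <= c < d.
        # The breakpoint values below are placeholders, not the paper's calibration.
        def trapezoid(x, a, b, c, d):
            if x <= a or x >= d:
                return 0.0                        # outside the support
            if b <= x <= c:
                return 1.0                        # flat top of the trapezoid
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)

        # Assumed example: membership of an encounter-feature value in a "risky" set.
        mu = trapezoid(0.8, a=0.0, b=0.5, c=1.0, d=2.0)   # -> 1.0 on the plateau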

  • SOFTWARE-HARDWARE COMPLEX FOR OBSTACLE SEGMENTATION WITH U-NET ARCHITECTURE FOR AUTONOMOUS AGRICULTURAL MACHINERY

    I.G. Galiullin, D.E. Chikrin, A.A. Egorchev, R.F. Sabirov
    Abstract

    Agriculture plays a fundamental role in ensuring food security and meeting the population's
    needs for food products. Optimization of agricultural crop production and increasing efficiency
    are essential tasks for modern agriculture. In this regard, more attention is being given to the
    development and implementation of autonomous agricultural systems capable of automating and
    optimizing various production processes. However, the effectiveness of autonomous systems is
    limited by the insufficient development of obstacle detection systems and decision-making algorithms.
    When agricultural machinery and other autonomous vehicles encounter obstacles in their
    path, precise and rapid recognition of these obstacles plays a decisive role in making appropriate
    decisions to avoid accidents. This article presents a software-hardware complex for obstacle segmentation using the U-Net architecture, designed to overcome these limitations in autonomous
    agricultural systems. The U-Net architecture is renowned for its ability to accurately recognize
    objects in images, making it an attractive choice for machine vision systems in agricultural conditions.
    The presented complex boasts high performance and enables real-time obstacle segmentation,
    including columns, trees, and shrubbery, during the movement of agricultural machinery
    along a designated trajectory. This ensures precise decision-making and avoidance of accidents,
    significantly enhancing the efficiency and safety of autonomous systems in agricultural production.
    Field tests have confirmed the effectiveness and applicability of the proposed solutions under
    real agricultural conditions. The presented software-hardware complex with the U-Net
    architecture opens up new possibilities for autonomous agricultural technology and represents
    a significant step in the development of modern agricultural technologies, contributing to the
    use of autonomous systems to enhance production and productivity in agriculture.
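
    For orientation, a minimal U-Net-style encoder-decoder with a single skip connection can be
    sketched in PyTorch as below; the depth and channel counts are illustrative assumptions, not
    the complex's actual network.

        # Minimal U-Net-style encoder/decoder with one skip connection (PyTorch).
        # Depth and channel counts are illustrative, not the paper's configuration.
        import torch
        import torch.nn as nn

        def conv_block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

        class TinyUNet(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.enc = conv_block(3, 16)          # full-resolution features
                self.down = nn.MaxPool2d(2)
                self.mid = conv_block(16, 32)         # bottleneck features
                self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
                self.dec = conv_block(32, 16)         # 16 skip + 16 upsampled channels
                self.head = nn.Conv2d(16, n_classes, 1)

            def forward(self, x):
                e = self.enc(x)
                u = self.up(self.mid(self.down(e)))   # encode, then upsample back
                return self.head(self.dec(torch.cat([e, u], dim=1)))

        logits = TinyUNet()(torch.randn(1, 3, 64, 64))  # -> shape (1, 2, 64, 64)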

SECTION II. INFORMATION PROCESSING ALGORITHMS

  • METHODOLOGY FOR FORECASTING THE DEVELOPMENT OF TECHNOLOGICAL TRENDS AND BUILDING ROADMAPS ON THE BASIS OF CONSTRUCTING FUTURE EVENTS

    A. A. Belevtsev, A. M., V.A. Balyberdin
    Abstract

    The development of the innovation economy, particularly in the high-technology sphere,
    stimulates the search for new ways of solving strategic analysis, forecasting, and priority
    estimation tasks. These tasks underpin the formation of long-term research plans. Existing methods for assessing and
    forecasting the level of technology development are based on the development of a structural and functional concept of the subject area under study, its structural decomposition (top-down) and expert
    assessment over a given time horizon. This approach has several significant drawbacks as it
    does not consider globalization and the high dynamics of forming new trajectories of technology
    development and technological fronts from bottom to top. In addition, the task of prioritizing, timing,
    and cost estimation of technology development encounters significant difficulties as it requires experts
    to provide specific numerical assessments. Based on the results obtained by the authors through
    research and development in the considered area, this work proposes a methodology for forecasting
    technological trends and constructing roadmaps, which provides: – formation of a quantitative forecast
    assessment of the development of technological trends and constituent technologies, taking into
    account their connectivity, based on the construction of future events; – construction of roadmaps for
    the development of technological trends and technologies for a given subject area and group of subject
    areas; – conducting forecast assessments of the time required to implement technologies and
    technological trends in conditions of uncertainty and incomplete information, absence of validated
    quantitative assessments and prototypes. The methodology is based on the procedure for forecasting
    the development of technological trends and technologies based on the transition from a logical
    graph of technological trends to a dynamic graph. The methodology provides a quantitative forecast
    assessment of the development of technological trends and their constituent technologies: – in conditions
    of uncertainty and incomplete information; – in the absence of technology prototypes. At the
    same time, possible interconnections between individual technologies and technological trends
    during development are taken into account.

  • TOLERANCE SYNTHESIS BASED ON SENSITIVITY ANALYSIS

    A. V. Khludenev
    Abstract

    The tolerance allocation for passive discrete elements of analog devices is an important task
    when planning mass serial production. Permissible deviations from rated values affect the cost,
    the choice of preferred number series for parameters, and the availability of these components
    for procurement. The element tolerances, as well as temperature dependence and aging effects,
    are the key factors affecting product performance and yield. The solution of this problem has received attention in
    the scientific and technical literature for more than 40 years. During this time, the tools for designing
    electronic devices have changed. Computer-aided design (CAD) systems for electronics, known as
    Electronic Design Automation (EDA), have become widely used, providing an end-to-end design flow. Modern
    EDAs have limited capabilities for tolerance design, providing a solution to the problem of tolerance
    analysis. EDA users have to determine element tolerances based on their intuition and experience
    through time-consuming interactive optimization. Using specialized tolerance software tools without
    integration with professional grade EDA is not an acceptable solution. The purpose of the study is to
    substantiate decisions for the tolerance synthesis in the end-to-end design flow in the EDA environment.
    The article discusses methods for determining tolerances using the results of sensitivity analysis. Sensitivities
    can be obtained using standard EDA tools. The implementation of these methods in the Excel
    environment is considered. It is proposed to use the clipboard to exchange data between the
    spreadsheet and the EDA schematic databases. The proposed solutions make it possible to reduce
    the number of interactive operations and the time spent on tolerance allocation. An example of analog device tolerance
    design in the EDA environment is given.
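
    As a sketch of the sensitivity-based idea (an assumed equal-contribution worst-case
    allocation, not necessarily the article's exact procedure), tolerances can be budgeted in
    inverse proportion to the element sensitivities:

        # Worst-case budget: dy/y = sum(|S_i| * t_i) over relative tolerances t_i.
        # Giving each element an equal share of budget T yields t_i = T / (n * |S_i|).
        # Assumed illustration; the article's EDA/Excel procedure may differ.
        def allocate_tolerances(sensitivities, budget):
            n = len(sensitivities)
            return [budget / (n * abs(s)) for s in sensitivities]

        # Example: a 1 % output-deviation budget split over three elements.
        print(allocate_tolerances([0.9, -0.4, 0.1], budget=0.01))
        # Less sensitive elements receive wider, and hence cheaper, tolerances.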

  • DATA CLUSTERING ALGORITHM FOR PROTECTING CONFIDENTIAL INFORMATION ON THE INTERNET

    I.S. Bereshpolov, Y.A. Kravchenko, A. G. Sleptsov
    Abstract

    The article is devoted to solving the scientific problem of protecting confidential
    information on the Internet based on an algorithm for clustering large amounts of data. The
    protection of confidential information in computer networks is a hot research topic, especially
    in connection with the growing use of information technology and the increasing amount of
    valuable information stored on the Internet. As responsibility for information grows, the need
    for effective methods of securing information in computer networks has become critical. In this
    article, the authors propose a solution to the problem of protecting confidential information
    in computer networks based on a big data clustering algorithm. Traditional intrusion detection methods have limitations
    such as the ability to work only with one- or two-dimensional data, and also have a strong
    reliance on prior knowledge. To eliminate these limitations, the authors propose a heuristic intrusion
    detection algorithm that uses clustering based on a cloud model. The proposed algorithm
    takes advantage of both labeled and unlabeled samples for data clustering, thereby reducing reliance
    on a priori knowledge. The results of a computational experiment carried out on the proposed
    algorithm were compared with several canonical intrusion detection algorithms. The results
    showed that the proposed algorithm improved the performance of the intrusion detection system,
    increased the accuracy of detection, reduced the false alarm rate, and enhanced the reliability of
    the system. The dynamic weighting method used in the algorithm removed the complexity of
    high-level data processing and allowed the algorithm to learn on its own, resulting in a relatively stable
    cloud model. Despite the significant improvement in the performance of the proposed algorithm
    compared to the canonical clustering algorithms, the results of the study also showed that the
    algorithm has some limitations, such as a high false positive rate and sensitivity to data with certain
    types of distribution. To eliminate these shortcomings, further improvement of the algorithm is
    required. In general, the proposed heuristic clustering intrusion detection algorithm based on
    the cloud model is a promising solution for protecting confidential information in computer networks.

  • COMPILING A DIET BASED ON A GENETIC ALGORITHM

    E.E. Polupanova, A. S. Oleynik
    Abstract

    This work is devoted to solving the problem of compiling a diet using a genetic algorithm.
    Compiling a diet is a combinatorial optimization problem whose main purpose is to find a
    suitable combination of dishes distributed in accordance with a person's specific needs. The
    article provides a statement of the diet compilation problem and its mathematical model. Since
    the task is NP-hard and an exact algorithm may incur large computational costs on realistic
    input data, it is reasonable to apply a heuristic approach. The article describes in detail the
    main concepts of the theory of genetic algorithms, the sequence of steps of the developed
    genetic algorithm for compiling the diet, and the flowchart of the genetic algorithm. To study
    the genetic algorithm, a client-server application running on the Android operating system was
    developed. The result of the genetic algorithm is a seven-day menu, which is displayed and
    stored in the application. The client-server architecture was chosen in order to save the
    resources of the user's phone. The article describes the Android application's user interface,
    which allows various parameters of the algorithm to be adjusted. The efficiency of the
    algorithm is also analyzed: the accuracy and running time of the developed genetic algorithm
    are estimated for different configurations. Based on the results of the experiments, it was
    possible to determine the optimal values of the configurable parameters of the genetic
    algorithm (the number of chromosomes, the number of iterations, and the mutation probability)
    that yield good results in acceptable time. A characteristic feature of the implemented genetic
    algorithm is its relatively short running time, even on large input data. In addition, the
    developed solution has high economic value owing to the practical application of the algorithm,
    for example in the work of nutritionists and fitness trainers, as well as for ordinary
    overweight users.
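
    A minimal sketch of such a genetic algorithm is shown below; the dish table, nutrient targets,
    and GA settings are invented placeholders, not the application's data or the optimal
    parameters found in the experiments.

        # Toy GA for a one-day menu: a chromosome is a list of dish indices,
        # fitness penalizes deviation from nutrient targets. All data invented.
        import random

        DISHES = [(300, 20), (450, 30), (250, 10), (500, 25), (150, 5)]  # (kcal, protein)
        TARGET = (2000, 90)                        # daily calorie and protein goals
        GENES, POP, ITERS, PMUT = 5, 30, 200, 0.1  # dishes per day and GA settings

        def fitness(menu):                         # lower is better
            kcal = sum(DISHES[g][0] for g in menu)
            prot = sum(DISHES[g][1] for g in menu)
            return abs(kcal - TARGET[0]) + 10 * abs(prot - TARGET[1])

        def evolve():
            pop = [[random.randrange(len(DISHES)) for _ in range(GENES)]
                   for _ in range(POP)]
            for _ in range(ITERS):
                pop.sort(key=fitness)
                elite = pop[:POP // 2]             # selection: keep the best half
                children = []
                while len(elite) + len(children) < POP:
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, GENES)          # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < PMUT:                # point mutation
                        child[random.randrange(GENES)] = random.randrange(len(DISHES))
                    children.append(child)
                pop = elite + children
            return min(pop, key=fitness)           # best menu found

        best_menu = evolve()                       # indices into DISHES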

  • PROBABILISTIC CHARACTERISTICS OF THE SYNC DETECTION ALGORITHM BASED ON THE SELECTION OF AN ADJACENT PAIR OF SEGMENTS WITH THE MAXIMUM TOTAL COUNT

    K.E. Rumyantsev, P.D. Mironova
    Abstract

    An algorithm for detecting sync signals based on the selection of an adjacent pair of
    segments with the maximum total count is proposed. This algorithm addresses the shortcomings
    of an alternative sync-signal detection algorithm that compares the sum of samples from an
    adjacent pair of segments with a threshold level, namely the need to know the level of
    background and noise influence, which determines the threshold level and the probability of
    erroneous detection of a signal pair of segments. The dependence of the probability of an error
    in detecting a sync pulse on the average number of signal photons in a sync pulse is studied for
    various values of the number of segments in a time frame. It is shown that the probability of
    erroneous detection of a sync pulse during a frame decreases significantly as the average number of photons in
    a sync pulse increases. For example, by increasing the average number of signal photons in a sync
    pulse from 1 to 5, the probability of a sync detection error is reduced by a factor of 37. It should
    be noted that the number of pairs of segments has a weak effect on the probability of erroneous
    detection, which indicates a weak effect of dark current pulses on the probabilistic characteristics
    of the proposed algorithm for detecting sync signals. Analytical expressions are obtained for accurate
    and express calculation of probabilistic characteristics of detection, taking into account the
    probability of finding a sync pulse at the boundary of two adjacent segments due to the equality of
    the duration of the sync pulse and the time segment. The results of calculation using exact expressions
    for the probability of detecting a sync pulse showed that when the signal-to-noise ratio is
    equal to 10 and higher, the influence of noise pulses on the probability of detecting a sync pulse
    can be neglected. It has been noted that the greater the number of recorded events, or, in other
    words, the greater the sum of the average numbers of signal photons and dark current pulses, the
    greater the detection probability. The calculation of the probability of detecting a sync pulse using
    simplified expressions shows a slight deviation from calculations using exact formulas, which
    does not exceed 5.3%, and the calculation using approximate expressions gives an underestimated result.
    The resulting approximate analytical expressions can be used for express calculation of the
    probability of detecting a sync pulse in a pair of segments.
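
    The selection rule itself is compact: within a frame of per-segment counts, the algorithm
    picks the adjacent pair of segments with the maximum total count. The frame below is a made-up
    example; the photon and dark-count statistics analyzed in the article are not modeled.

        # Pick the adjacent pair of segments with the maximum total count.
        # The frame below is invented; photon statistics are not modeled.
        def detect_sync(counts):
            sums = [counts[i] + counts[i + 1] for i in range(len(counts) - 1)]
            return max(range(len(sums)), key=sums.__getitem__)   # pair (i, i+1)

        frame = [0, 1, 0, 4, 3, 0, 1, 0]     # sync pulse straddles segments 3 and 4
        assert detect_sync(frame) == 3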

  • SEQUENTIAL HYBRIDIZATION ALGORITHM FOR SOLVING THE TRAVELING SALESMAN PROBLEM

    E. E., A.A. Rybalko
    Abstract

    The traveling salesman problem is a combinatorial optimization problem. The article
    presents a statement of this problem and proposes a graph-based mathematical model in which
    vertices correspond to cities and edges to paths between cities; the graph is assumed to be
    weighted. Solving the traveling salesman problem consists in finding the minimum-weight
    Hamiltonian cycle in a complete weighted graph. The problem is NP-hard, so a heuristic approach
    is used to speed up its solution on large volumes of input data. The heuristic consists in
    hybridizing two algorithms for solving the traveling salesman problem: the simulated annealing
    algorithm and the nearest neighbor algorithm. A sequential hybridization scheme is used: the
    nearest neighbor method is launched on the initial set of solutions, and the best solution of
    the first stage is then fed to the simulated annealing algorithm. The article details the
    construction and flowcharts of the hybrid algorithm, the simulated annealing algorithm, and
    the nearest neighbor method. The article then describes the user interface of the application,
    written in TypeScript. The application presents the solution of the traveling salesman problem
    on an area map. In the last part of the article, a comparative analysis of the algorithms'
    performance is given: the accuracy and running time of the developed hybrid algorithm, the
    simulated annealing algorithm, and the nearest neighbor method are compared on different input
    data sets. It was established that the developed hybrid algorithm ranks second in speed and
    first in solution quality among the implemented algorithms. In addition, the developed solution
    has high economic and practical value, because an application for solving the traveling
    salesman problem, and therefore for route navigation, can replace existing analogues or be
    used in narrowly focused areas such as logistics.
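
    A compact sketch of the sequential hybridization scheme is given below. The article's
    application is written in TypeScript; this Python version, with arbitrary annealing settings,
    only illustrates the two-stage idea.

        # Stage 1: nearest neighbour builds a starting tour; stage 2: simulated
        # annealing refines it with 2-opt moves. Settings are placeholders.
        import math, random

        def tour_len(tour, d):
            return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def nearest_neighbour(d, start=0):
            left, tour = set(range(len(d))) - {start}, [start]
            while left:
                nxt = min(left, key=lambda c: d[tour[-1]][c])   # closest unvisited city
                tour.append(nxt)
                left.remove(nxt)
            return tour

        def anneal(tour, d, temp=100.0, cool=0.995, steps=20000):
            best = cur = tour[:]
            for _ in range(steps):
                i, j = sorted(random.sample(range(len(tour)), 2))
                cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]   # 2-opt reversal
                delta = tour_len(cand, d) - tour_len(cur, d)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    cur = cand
                    if tour_len(cur, d) < tour_len(best, d):
                        best = cur
                temp *= cool
            return best

        # d is a symmetric distance matrix; the NN tour seeds the annealer:
        # best_tour = anneal(nearest_neighbour(d), d)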

  • SELECTION OF MULTI-CRITERIA ANALYSIS METHODS ON THE EXAMPLE OF THE PROBLEM OF RANKING

    E.S. Podoplelova
    Abstract

    This work is devoted to the selection and comparison of popular traditional methods of
    multi-criteria decision making. The article presents an overview of recent works comparing
    these methods and highlights the main comparison criteria as well as the most significant results.
    Further, an example of implementing a DSS (decision support system) that recommends such a
    method to the user was considered; it covers not only the main methods but also their
    modifications and presents an exhaustive taxonomy of multi-criteria analysis methods in
    general. To select methods for this article, international databases of scientific
    publications were used: Science Direct, Google Scholar and IEEE Xplore. Specific search
    settings were applied to retrieve works matching the query. The next step describes the task
    of ranking alternatives to demonstrate the results of applying the selected methods. The
    analytic hierarchy process (AHP) was used as the method for distributing the criteria weights.
    The calculation results are presented in tables and graphically. The evaluation
    metric was considered to be the stability of the method to the number of alternatives and criteria,
    as well as sensitivity to the weights of the criteria. At the current stage of the study, the following
    methods were selected: TOPSIS, WASPAS, VIKOR, PROMETHEE and ELECTRE. As a result of the study,
    optimal methods (in terms of the ratio of computational complexity to stability) were
    determined for further use in DSS development. The ELECTRE method was adopted as an additional
    tool for screening out the least attractive alternatives when their number is large. PROMETHEE
    showed high sensitivity to weight changes and high computational complexity, and was therefore
    excluded from further development stages. VIKOR and TOPSIS combined the best stability with
    simplicity of calculation.
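
    For illustration, the TOPSIS scoring step fits in a few lines; the decision matrix, weights,
    and criterion directions below are placeholders, not the article's experimental data.

        # TOPSIS: rank alternatives (rows) by relative closeness to the ideal.
        # The matrix, weights and criterion directions are placeholders.
        import numpy as np

        def topsis(X, w, benefit):
            N = X / np.linalg.norm(X, axis=0)       # vector-normalize each criterion
            V = N * w                               # apply criterion weights
            ideal = np.where(benefit, V.max(0), V.min(0))
            worst = np.where(benefit, V.min(0), V.max(0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - worst, axis=1)
            return d_neg / (d_pos + d_neg)          # higher score = better alternative

        X = np.array([[250.0, 16.0, 12.0],          # alternatives x criteria
                      [200.0, 16.0,  8.0],
                      [300.0, 32.0, 16.0]])
        scores = topsis(X, w=np.array([0.3, 0.4, 0.3]),
                        benefit=np.array([False, True, True]))  # cost, benefit, benefit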

  • METHODS FOR MINING CAUSALITY FROM OBSERVATIONS IN ARTIFICIAL INTELLIGENCE

    M.Y. Georgi
    Abstract

    The article discusses the importance of capturing causal relationships in machine learning
    for decision-making and evaluating real-world impact. It is noted that most current successes in
    machine learning are based on pattern recognition and correlation analysis; however, for more
    complex tasks, extracting causal relationships is necessary. The problems of explainability of predictions
    and causal understanding, even with the use of advanced machine learning techniques
    such as LIME, SHAP, TreeSHAP, DeepSHAP, and Shapley Flow, are recognized as fundamental
    obstacles in the development of artificial intelligence. The article briefly presents the main philosophical
    and mathematical concepts and definitions of causality, including counterfactuals, Bayesian
    networks, directed acyclic graphs, and formal causal inference. It concludes that the practical
    significance of data-based causal analysis consists in answering a priori formulated questions,
    which may reflect a hypothetical relationship between an event (a cause) and a second event (an
    effect), where the second event is a direct consequence of the first. A comparative analysis of the
    methods and main scenarios for using the Causal Discovery and Causal Inference frameworks is
    also carried out. Based on this analysis, it becomes possible to make assumptions about the causal
    structure underlying the investigated dataset and to use statistical methods to evaluate the strength
    and direction of such relationships. The article also discusses methods and algorithms of causal
    analysis and their application in real-world tasks. Representative methods are mentioned, such as
    constraint-based models, estimation-based models, functional causal models, (conditional) independence
    tests, evaluation functions, and other tools that can be used to solve the problem of extracting
    causal relationships from observational data. Most of these methods are implemented in
    open-source frameworks such as Microsoft DoWhy, Uber CausalML, causal-learn, EconML, and
    many others, which facilitate causal analysis.

  • RESEARCH OF METHODS FOR EXAMINATION OF THE SIGNIFICANCE OF THE SMOOTHING POLYNOMIAL COEFFICIENTS

    I.L. Shcherbov
    Abstract

    The aim of the work is to study the procedure for checking the significance of the coefficients
    of the smoothing polynomial based on the criteria for testing statistical hypotheses in order
    to form a vector of coefficients of the smoothing polynomial. The developed methods of nonlinear
    adaptive smoothing with optimization of the degree of the smoothing polynomial with optimization
    of the structure of the smoothing polynomial were studied. The study was carried out by simulating
    the value of secondary coordinates, which, according to the formulas of simple methods, were
    converted into primary coordinates, taking into account the location and type of measuring instruments.
    Then, the values of measurement errors distributed according to the normal law were
    added to the obtained values of the primary coordinates. The primary measurement data thus obtained
    were subjected to nonlinear adaptive smoothing. The formation of the coefficient vector of
    the smoothing polynomial was carried out on the basis of the criteria for testing statistical hypotheses
    in the following sequence: formation of the corresponding statistics according to the measurement
    data; comparison of these statistics with a threshold level depending on the confidence
    level and the number of degrees of freedom; making a decision on the inclusion of this component
    in the polynomial. The formation of the coefficient vector of the smoothing polynomial was carried
    out on the basis of the Fisher criterion. Based on the results of the study, the following conclusions
    can be drawn: methods of nonlinear adaptive smoothing with optimization of the structure of the
    smoothing polynomial are superior in terms of quality and efficiency to the method with optimization
    of the degree of the smoothing polynomial; the method of non-linear adaptive smoothing with
    optimization of the structure of the smoothing polynomial (Structure 1) is superior in terms
    of quality and efficiency to the method with optimization of the structure of the smoothing
    polynomial (Structure 2); the greatest gains in quality and efficiency for all the studied
    methods are achieved in the middle part, within 3/5 of the smoothing interval; for all the
    studied methods, the quality and efficiency indicators decrease at the edges of the smoothing interval.
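
    A simplified sketch of this coefficient selection, keeping a candidate term only when an
    F-statistic exceeds the threshold for the chosen confidence level, is shown below
    (illustrative; the paper's statistics and thresholds may differ in detail).

        # Keep a polynomial term t**deg only if adding it reduces the residual
        # sum of squares significantly by an F-test. Illustrative sketch only.
        import numpy as np
        from scipy.stats import f as f_dist

        def significant_terms(t, y, max_deg=5, alpha=0.05):
            kept = [0]                                    # always keep the constant
            X = np.ones((len(t), 1))
            rss = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
            for deg in range(1, max_deg + 1):
                Xc = np.column_stack([X, t ** deg])       # trial structure
                beta = np.linalg.lstsq(Xc, y, rcond=None)[0]
                rss_c = np.sum((y - Xc @ beta) ** 2)
                dof = len(t) - Xc.shape[1]
                F = (rss - rss_c) / (rss_c / dof)         # one added regressor
                if F > f_dist.ppf(1 - alpha, 1, dof):     # significant improvement
                    kept.append(deg)
                    X, rss = Xc, rss_c
            return kept

        t = np.linspace(0.0, 1.0, 50)
        y = 2.0 + 3.0 * t + np.random.normal(0.0, 0.1, 50)   # truly linear data
        print(significant_terms(t, y))                       # typically [0, 1]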

SECTION III. MODELING OF PROCESSES AND SYSTEMS

  • A MODEL FOR PROCESSING APPLICATIONS AND DISTRIBUTING TASKS FOR A ROBOTIC WAREHOUSE

    V.V. Soloviev, A. Y.
    Abstract

    The purpose of this work is to develop a model for processing applications and distributing
    tasks between robots that serve a robotic warehouse. This research is relevant due to the
    growing amount of warehouse space, the appearance of stores without shoppers (dark stores),
    and the increasing popularity of online purchases, which require the involvement of robots to
    solve transport problems when assembling orders. To achieve this goal, the work solves the
    problem of conceptually representing a robotic warehouse as a queuing system, which allows its
    quality indicators to be used to improve transport processes. Models of the control system of
    a single robot and of order processing are presented in the form of finite state machines,
    which simplifies model experiments and further implementation in onboard robot computers. A
    criterion for evaluating the duration of order execution by robots, covering several types and
    positions of products in an order, is proposed, which allows one order to be processed
    by several robots at the same time. The route of each robot is represented by a set of sections of the path between the collection points of individual products, described in the form of ordered
    permutations. Such representation made it possible to define a system of inequalities, on the basis
    of which routes of several robots for processing one order are formed. Algorithms for the distribution
    of tasks for the robot's onboard computer and the central warehouse server have been developed.
    The greatest computational load lies on the server because all possible permutations for
    each order are calculated there. Experimental studies on the simulation model have shown the
    high efficiency of the developed models and algorithms.

  • UNIFIED MODEL OF MATURITY OF NETWORK SECURITY CENTERS OF INFORMATION AND TELECOMMUNICATION NETWORKS

    S.S. Veligodskiy, N.G. Miloslavskaya
    Abstract

    In accordance with Decree No. 250 of the President of the Russian Federation, special
    structural units created by subjects of critical information infrastructure must be able to
    counteract computer attacks on their information and telecommunication networks (ITCNs). In order to be
    effective, these units must have Network Security Centers (NSCs) of ITCN with a high level of maturity
    that meets the information security requirements for its owner organization. Currently, there
    is no single approach to assessing the NSC maturity level. Thus the article’s goal is to describe the
    developed Unified Maturity Model (UMM) of ITCN NSCs of organizations, created based on the
    generalization and development of the analyzed maturity models and authors' systematics of network
    security management (NSM) processes and services of a typical ITCN implemented in the
    NSC, as well as technologies that support the implementation of processes and the provision of
    services, supplemented by consideration of the general organization of the NSC functioning and its
    staffing. The NSC maturity model refers to a structured set of elements that combines the
    information needed to establish the NSC maturity level with their attributes (NSC properties or characteristics).
    The requirements for UMM being developed for the organization's internal NSC are formulated,
    and their implementation is demonstrated at the end of the article. A formalized representation
    of the maturity model of NSC as an object for assessing the maturity level in five assessment
    areas, namely, the organizational support for the NSC functioning, the NSM processes for
    ITCN and the NSC as its integral part, the ITCN NSM services provided by the NSC, the technologies
    used and staffing, is introduced. A method for visualizing the obtained assessment results as
    pie charts is proposed. An approach to establishing the final NSC maturity level is presented. It is
    shown that all the formulated requirements for the NSC maturity model are met in the developed
    ITCN NSC UMM. Further, a methodology for the ITCN NSC UMM application should be created.

  • AUTOMATIC CRUISE CONTROL MODEL FOR A CAR

    V.V., A.Y.
    Abstract

    The purpose of this work is to develop an automaton model of a car's cruise control and a
    model of its rectilinear motion, and to study them comprehensively. This work is relevant due
    to the lack of adaptive cruise control systems in domestic cars, high traffic congestion, and
    traffic jams that are tiring for the driver. To achieve this goal, an automaton model of the
    cruise control system was developed, comprising ten possible states and taking into account
    the interaction with the car's standard subsystems and the radar. The model takes into account
    the ranges of vehicle speeds depending on the engine speed and also estimates the duration of
    braking depending on the road situation and the condition of the vehicle subsystems. On the
    basis of the automaton model, six scenarios of cruise control system operation were obtained,
    taking into account possible errors and control actions on the vehicle subsystems depending on
    the current situation. The rectilinear motion model accounts for the forces of friction and
    air resistance, gravity, traction, and inertia. The model is supplemented by the dependence of
    torque on crankshaft rotation speed, obtained by approximation, and by the angle of inclination
    of the roadway. The research found that, to increase the adequacy of the model, the dynamics
    of the engine and transmission must be taken into account. This shortcoming is eliminated by
    introducing two additional first-order differential equations. In the study of the complete
    cruise control model, a PID controller was used as the speed controller and a P-controller as
    the obstacle controller. The parameters of the controllers were tuned using genetic algorithms
    from the MATLAB package. Experimental studies on the simulation model have shown the high
    efficiency of the developed models and algorithms.
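
    A toy version of the speed loop is easy to state; the longitudinal model and PID gains below
    are assumed placeholders, not the values tuned with MATLAB genetic algorithms in the article.

        # Toy cruise-control speed loop: PID force command on a longitudinal
        # model m*dv/dt = u - c*v**2 (drive force minus aerodynamic drag).
        # Gains and constants are placeholders, not the article's tuned values.
        def simulate(setpoint=25.0, kp=800.0, ki=60.0, kd=40.0,
                     m=1200.0, c=0.5, dt=0.01, steps=3000):
            v, integral, prev_err = 0.0, 0.0, setpoint
            for _ in range(steps):
                err = setpoint - v
                integral += err * dt
                derivative = (err - prev_err) / dt
                u = kp * err + ki * integral + kd * derivative  # control force, N
                v += (u - c * v * v) / m * dt                   # Euler step
                prev_err = err
            return v

        print(simulate())   # settles near the 25 m/s setpoint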

  • DEVELOPMENT AND RESEARCH OF THE MODEL FOR VIDEO INFORMATION CLASSIFICATION

    A.G., I.S., Y.A.
    Abstract

    The article is devoted to solving the scientific problem of classifying video content in
    the face of growing information volumes. Computer vision is a highly relevant area of
    application for artificial intelligence technologies, expanding the capabilities of various
    search and archive systems. The authors define the main terms of the studied subject area. A formalized statement of the
    problem to be solved is presented. A detailed classification of possible options for solving the problem
    is given. With the rapid development of information technology, digital content is showing an
    explosive growth trend. The classification of sports videos is of great importance for archiving digital
    content on the server. Many data mining and machine learning algorithms have made great strides in
    many application areas (such as classification, regression, and clustering). However, most of
    these algorithms share a common limitation: they assume that the training and test samples are
    in the same feature space and follow the same distribution. This article discusses the
    importance of solving the problem of video content classification and automatic annotation,
    and also develops a model based on deep learning and big data. As part of this study, the
    authors developed a model that improves the quality of video classification and thereby
    improves search results. The results of the computational
    experiment show that the proposed model can be effectively used to classify video events
    within the sports subject area based on the use of a convolutional neural network. At the same
    time, high accuracy of sports training video classification is provided. Compared with other models,
    the proposed model has the advantages of simple implementation, fast processing speed, high
    classification accuracy, and high generalization ability.

  • THRESHOLD ASSESSMENT OF THE STATE OF A TECHNICAL OBJECT BASED ON SEGMENTATION AND IDENTIFICATION OF THE CONTROLLED PARAMETER MODEL

    S.I. Klevtsov
    Abstract

    To detect jumps in the average value, a method is proposed based on segmenting the signal
    under study through the formation of cumulative sums using the Page-Hinkley criterion. The use
    of the Page-Hinkley likelihood criterion makes it possible to detect abrupt changes in the
    average value of the controlled object parameter in real time under noisy conditions. When
    using the method, it is assumed that the signal is described by a time series of values of the signal
    under study. From this series, it is possible to single out separate successive sections, which can be
    considered as some signal models limited in time. The method is based on the use of criterion statistics,
    on the basis of which two or three models estimated from different parts of the signal are compared,
    which makes it possible to detect abrupt changes in the model parameters. The method assumes
    that a piecewise constant signal with additive noise is considered. At arbitrary moments of
    time, there are jumps in the average value of this signal. Jumps in the average value of the signal can
    be different in sign (fixed on different sides of the time axis) and significantly exceed the original
    value in absolute value. The average value of the signal is a constant value close to zero. But a situation
    is possible when a repeated jump will be made from a level different from the average value
    close to zero, both in the direction of increasing and decreasing the average value of the signal and
    changing the signal polarity (the sign of the signal values). A criterion has been chosen that
    minimizes the delay in detecting a jump in the average value of the recorded signal with a
    minimum of false alarms; for this, the signal under study is segmented on the basis of
    cumulative sums formed using the Page-Hinkley criterion.
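
    A minimal form of the Page-Hinkley test for an upward jump in the mean is sketched below; the
    drift allowance delta and alarm threshold lam are illustrative settings, not the paper's
    tuned values.

        # Page-Hinkley test for upward jumps in the mean of a noisy signal.
        # delta = tolerated drift, lam = alarm threshold; both are illustrative.
        import random

        def page_hinkley(xs, delta=0.05, lam=5.0):
            mean, cum, cum_min, n = 0.0, 0.0, 0.0, 0
            for i, x in enumerate(xs):
                n += 1
                mean += (x - mean) / n          # running mean estimate
                cum += x - mean - delta         # cumulative sum of deviations
                cum_min = min(cum_min, cum)
                if cum - cum_min > lam:         # sum rose well above its minimum
                    yield i                     # report the jump position
                    mean, cum, cum_min, n = 0.0, 0.0, 0.0, 0   # restart after alarm

        sig = [random.gauss(0.0, 0.3) for _ in range(200)] + \
              [random.gauss(4.0, 0.3) for _ in range(200)]
        print(list(page_hinkley(sig)))          # one index shortly after sample 200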

  • UNMANNED AERIAL VEHICLE AERODYNAMICS PERFORMANCE OPTIMIZATION USING VARIABLE SWEEP WING ANGLE

    A.J.D. Al-Khafaji, G.S. Panatov, A. S. Boldyrev
    Abstract

    Unmanned aerial vehicles (UAVs) can take many forms depending on the type of UAV duty and
    the flight conditions. In this project, the aerodynamic performance of UAVs is optimized
    through the wing sweep angle (sweepback angle) in order to reduce wave drag and delay the
    onset of drag divergence. For this purpose, UAV models designed with five different sweepback
    wing angles (15°, 20°, 25°, 30°, and 35°) and different aspect ratios at a constant taper
    ratio of 0.2 were used. Every wing was built with the SD8020 airfoil for the root and tip
    chords, at a low Mach number of 0.058 (i.e., a velocity of 20 m/s). The wing models were drawn
    in three dimensions using the SOLIDWORKS software, and the models were then analyzed in ANSYS
    FLUENT. Lift-to-drag ratio calculations were made to determine which UAV has the optimal lift
    and the lowest drag versus the angle of attack (0°, 2°, and 4°). The results show that the
    aerodynamic performance changes with the value of the sweepback angle and the aspect ratio:
    the maximum lift-to-drag ratio is achieved by the UAV with a 15° sweepback angle at an angle
    of attack of 2°, and the minimum lift-to-drag ratio by the UAV with a 35° sweepback angle at
    an angle of attack of 0°. Because the taper ratio is fixed at 0.2, the wing area differs
    between models: the best model, with the maximum lift-to-drag ratio, has a wing area of
    1.68 m², while the model with the minimum lift-to-drag ratio has a wing area of 0.65 m².

SECTION IV. ELECTRONICS, NANOTECHNOLOGY AND INSTRUMENTATION

  • CONTROL SIGNAL GENERATOR OF A NEW GENERATION

    D.V. Belyaev, A.B. Rempe, A. N. Zikiy
    Abstract

    Control signal generators (GCS) allow rapid assessment of the condition of radio receiving
    equipment during its operation; therefore, much attention is paid to their research and
    development. Very often, noise generators or generators based on Gunn diodes with low
    frequency stability are used as control signal generators. With the appearance of affordable
    frequency synthesizer chips with a built-in voltage-controlled oscillator, an attempt has been
    made to use a frequency synthesizer as the master oscillator in a control signal generator.
    The use of a frequency synthesizer chip with sequential loading of control codes made it
    necessary to use a microcontroller. An experimental study of a two-frequency control signal
    generator has been carried out. The functional scheme of the GCS is presented, and a brief
    description of the element base is given. A two-frequency synthesizer Si4133GT was used as the
    master oscillator. The following results of the study are presented: – the output signal
    spectrum; – waveforms of the output signal; – the dependence of the output power on frequency
    for three instances of the cell; – the dependence of the output power on the attenuator
    control code. The following characteristics have been achieved: – operating frequencies of 450
    and 1200 MHz; – an output power of each channel of at least 100 mW; – relative instability of
    the carrier frequency of 10^-5; – a step attenuator attenuation range of at least 20 dB; – a
    pulse modulation depth of at least 30 dB; – a modulating pulse duration range from 10 to
    100 microseconds; – a repetition period range from 300 to 1000 microseconds; – the possibility
    of applying an external control signal. In most parameters, the developed GCS surpasses the
    GCS previously developed at the enterprise. The GCS is intended for use as part of a
    multi-channel receiver.

  • EVALUATION OF CHARACTERISTIC PARAMETERS OF OPTO-ELECTRONIC DEVICES USED IN REMOTE SENSING SYSTEMS

    B.M. Azizov, A. N. Badalova, H.N. Mammadov
    Abstract

    In the article under consideration, the main operating parameters characterizing
    optical-electronic devices and the features of the factors influencing them are investigated.
    From the indicators under consideration, the functions of sensitivity, resolution, and noise
    transmission were singled out, and theoretical issues of the relationship between the input
    parameters were analyzed. The study shows that the transition to nonlinearity causes many
    errors in equipment developed on the assumption of linear operation. The system's transition
    to a nonlinear mode should be determined by both internal and external factors and by the
    function the system performs. The main internal factor is the change in temperature and the
    system parameters associated with it, while the external factor is atmospheric change, which
    is highly dynamic. To assess the formation and quantitative
    change in the sensitivity of the resolution and noise signals of the system, the article analyzes some
    auxiliary functions that affect the transfer function and determines the optimal values for different
    systems operating in different modes. In different satellite systems, due to the difference in the
    interaction of the signal with the atmosphere obtained in the optical range (ultraviolet, visible, and
    infrared), the transfer function becomes complex. As a result, the signal-to-noise ratio changes
    over a wide range. Based on the foregoing, it is shown that the main indicator characterizing the
    system as a whole is a change in the output signal over time. Therefore, it is expedient to replace
    the function of space and time, which characterizes the object under study in terms of the observation
    area, with the function of the temporal output signal. According to the results of the study,
    the values of the internal and external factors acting on the function in time and space make
    it possible to directly evaluate the temporal function of the output signal.

  • BOUNDARY VALUE PROBLEM FOR EXCITING A ROTATING CYLINDRICAL WAVEGUIDE WITH IMPEDANCE WALLS

    D.E. Titova
    Abstract

    The aim of the paper is to study the behavior of electromagnetic field excited in rotating
    waveguides. Solution of the problem of excitation of electromagnetic waves in rotating waveguides
    is important for interpreting the experiments with electromagnetic waves in rotating interferometers
    and gyroscopes. It can also be used for development of new methods of rotation rate measurement.
    Formulation and solution of such problems in a rigorous way is complicated by the fact that
    rotating reference frames are non-inertial, and the presence of centrifugal and Coriolis
    forces makes the space curved. In this paper, formulation and solution of the problem of
    excitation of electromagnetic field in a rotating cylindrical waveguide is presented in a rigorous
    form. The rigorous solution of the problems is derived from the covariant Maxwell equations and takes
    into account the effect of an equivalent gravitational field on the electromagnetic field in rotating
    reference frames. Influence of the rotation on the main characteristics of the waveguide is studied.
    Impedance boundary problem of excitation of an electromagnetic field in a rotating cylindrical
    waveguide with constant impedance walls is solved. Frequency responses of the rotating waveguide
    are calculated on the basis of the analytical solutions. It is shown that the parameters of the
    excited electromagnetic field depend on the waveguide rotation rate. It is shown that the
    azimuthal harmonics, which propagate in the clockwise and counterclockwise directions in the
    waveguide,
    have different wavelengths and propagation constants. Calculations confirm the effect of splitting
    of the waveguide cut-off frequency into two new cut-off frequencies due to rotation. The new
    cut-off frequencies are equal to the difference between the cut-off frequency of the waveguide
    at rest and the rotation rate of the waveguide multiplied by the order of the mode excited in it.
    The dependence of the electromagnetic field parameters on the rotation rate can be used for rotation
    rate measurement. The solution derived can be used for setting up and analysis of the results
    of scientific experiments with rotating waveguides.
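
    Written out, the splitting rule quoted above reads (a direct transcription of that sentence,
    with \Omega the rotation rate, m the azimuthal mode order, and the sign convention chosen for
    illustration):

        \omega_{c}^{\pm} = \omega_{c0} \mp m\,\Omega

    so the two counter-propagating azimuthal harmonics of order m see cut-off frequencies shifted
    from the at-rest value \omega_{c0} by m\Omega in opposite directions.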

  • ANALYSIS OF THE ELECTROMAGNETIC FIELD IN CABLE SYSTEMS WITH INSULATION FROM POLYMER MATERIALS

    N. K. Poluyanovich, D.V. Burkov, M. N. Dubyago, O. V. Kachelaev
    Abstract

    The article is devoted to the calculation of the electromagnetic field strength (EMF) in the
    insulating material of a power cable (SC). The magnetic field of a single sample of the APvPu-10
    1x240/70 cable was investigated. Theoretical information is given for calculating the strength of
    an electrostatic axisymmetric field based on the solution of Fredholm integral equations in a piecewise homogeneous linear polymer insulation with inclusions. Models are constructed for
    calculating and analyzing the intensity distribution of inhomogeneous electric fields in a dielectric
    medium with inclusions of different areas and with different electrophysical parameters (filling).
    When the EMF passes through various materials filling the inclusion, the absorption of wave energy
    by these substances is observed. Based on the simulation performed using the COMSOL program,
    the analysis of the EMF at the interface of dielectric media between a spherical
    micro-inclusion and the main insulation was performed. It is shown that EMF absorption is
    significant in solid dielectrics and conductors. If a wave meets a conductor, most of its energy is absorbed by it.
    The presence of inhomogeneities in the insulation causes jumps of 1/2 and 2/3 in the electric
    field strength at the insulation-inhomogeneity interface. The simulation and analysis of the
    electric field distribution in the defect region were carried out, and it was found that with
    increasing Sdef the amplitude of the magnetic induction surge (B) at the first boundary of the
    defect increases. At the second boundary, the opposite is true. With increasing Sdef, the
    depth of the induction dip (B) increases. However, while the overall picture is preserved, the
    values of the dips with
    different types of filling inclusions are different: – the greatest gradient is observed when filling
    with water, the smallest when filling with carbon plus cross-linked polyethylene (C + SPE).
    Thus, it can serve as a diagnostic parameter of the quality of the SC insulation. The results of the work
    are of interest in solving a complex of problems related to various aspects of electromagnetic
    compatibility and reliability of functioning of electric power systems.

  • ULTRA WIDEBAND INDOOR OMNI-DIRECTIONAL 2 × 2 MIMO ANTENNA FOR 2G, 3G, 4G, AND 5G APPLICATIONS

    I.A.
    Abstract

    Multi-frequency and wideband communication systems have developed into a popular research
    topic as a result of the rising demand for high-speed data transfer and the coexistence of
    several types of communication networks. The radiation pattern of omnidirectional antennas
    allows for effective transmission and reception from a mobile unit, making them handy for a number
    of wireless communication devices as well as capable of handling additional distinct frequency
    bands. Implementing a wide bandwidth antenna, however, might be important for mobile communication
    systems supporting 2G, 3G, 4G, and upcoming 5G applications. Numerous studies on 5G
    wideband antennas were published because the 5G network allows for larger data throughput,
    greater robustness, and lower power consumption for its vast user base. MIMO has developed
    into a key technology for 5G applications because of benefits that include increased channel
    capacity, better transmit and receive performance, the ability to fit large antennas into a
    small space, and more. Recently, several varieties of 5G MIMO antennas for
    smartphones were proposed. This research proposes a wideband 2 × 2 MIMO antenna for indoor
    GSM/3G/LTE/5G communication systems. The proposed antenna produces omnidirectional radiation
    patterns by employing two antenna elements evenly spaced around the center. Concurrently, a
    large bandwidth and good omnidirectional radiation performance are attained. According to
    simulation results, the antenna provides a gain of up to 7.5 dB over an impedance bandwidth of
    0.7-7 GHz, with a return loss reaching -22 dB. The antenna is simulated in ANSYS HFSS (High
    Frequency Structure Simulator) 2020.