No. 5 (2024)

Published: 2025-03-27

SECTION I. INFORMATION PROCESSING ALGORITHMS

  • ESTIMATION OF THE EXECUTION TIME OF ENCRYPTION, DECRYPTION, AND HOMOMORPHIC CALCULATIONS USING THE DOMINGO-FERRER CRYPTOSYSTEM

    L.K. Babenko, V.S. Starodubcev
    6-15
    Abstract

    This article considers a symmetric probabilistic homomorphic Domingo-Ferrer cryptosystem based
    on the problem of number factorization. Currently, homomorphic cryptosystems of two types are relevant:
    the Gentry type and those based on the problem of factorization of numbers. A distinctive feature of the
    latter, in comparison with Gentry-type cryptosystems, is the lower complexity of performing homomorphic
    operations, which significantly expands the scope of their application in practice. However, since
    homomorphic cryptosystems based on the number factorization problem have not been widely used and
    have not been sufficiently analyzed, unlike Gentry-type cryptosystems, their thorough comprehensive study
    is required. For the considered symmetric homomorphic Domingo-Ferrer cryptosystem, descriptions of
    key generation, encryption, decryption, and homomorphic computing operations are given. For encryption,
    decryption, and homomorphic computing operations, a complexity estimate is given, expressed in the
    number of basic mathematical operations, as well as graphs illustrating the dependence of the number of
    operations on the selected parameters of the cryptosystem. The aim of the study is to assess the complexity
    of performing encryption, decryption, and homomorphic calculations in a symmetric probabilistic
    homomorphic Domingo-Ferrer cryptosystem based on the number factorization problem. The main result
    of this work is an assessment of the complexity, and a determination of the most time-consuming stages,
    of encryption, decryption, and homomorphic calculations using the Domingo-Ferrer cipher, confirmed
    by a number of experimental studies. The conducted research represents an important step in the
    development of the Domingo-Ferrer cryptographic system based on the number factorization problem;
    its practical significance lies in implementations of the algorithms that make it possible to determine
    the time costs of encryption, decryption, and homomorphic calculations. The results obtained can be
    used by researchers and programmers in the development of implementations of the Domingo-Ferrer
    cryptosystem in programming languages.
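
    For orientation, a minimal Python sketch of the textbook Domingo-Ferrer construction is given below. The parameter choices (d = 3, toy moduli) and all function names are illustrative assumptions, not the implementation evaluated in the article.

```python
import random

def keygen(m, m_prime):
    # m' must divide m (a requirement of the scheme); r must be invertible mod m.
    assert m % m_prime == 0
    while True:
        r = random.randrange(2, m)
        try:
            pow(r, -1, m)          # raises ValueError when gcd(r, m) != 1
            return r
        except ValueError:
            pass

def encrypt(a, r, m, m_prime, d=3):
    # Split a into d random shares summing to a (mod m'), mask share j by r**j.
    shares = [random.randrange(m_prime) for _ in range(d - 1)]
    shares.append((a - sum(shares)) % m_prime)
    return {j + 1: shares[j] * pow(r, j + 1, m) % m for j in range(d)}

def decrypt(c, r, m, m_prime):
    # Unmask each component with r**(-j), sum mod m, then reduce mod m'.
    r_inv = pow(r, -1, m)
    s = sum(v * pow(r_inv, j, m) for j, v in c.items()) % m
    return s % m_prime

def hom_add(c1, c2, m):
    # Homomorphic addition: add coefficients of matching powers of r.
    out = dict(c1)
    for j, v in c2.items():
        out[j] = (out.get(j, 0) + v) % m
    return out

def hom_mul(c1, c2, m):
    # Homomorphic multiplication: convolution of the two "polynomials" in r.
    out = {}
    for i, u in c1.items():
        for j, v in c2.items():
            out[i + j] = (out.get(i + j, 0) + u * v) % m
    return out
```

    In this representation addition is component-wise while multiplication is a convolution that grows the ciphertext, which is consistent with the abstract's focus on identifying the most time-consuming stages.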

  • THE OPTIMAL STATE ESTIMATION FUSION ALGORITHM IN DISCRETE-CONTINUOUS AUV SYSTEMS

    A.A. Kabanov, V.A. Kramar, K.V. Dementiev
    Abstract

    The article is focused on the optimal state estimation fusion algorithm for discrete-continuous systems.
    The aim of the study is to create an effective fusion strategy for combining data obtained from continuous
    and discrete information sources to improve the accuracy and reliability of state estimation in
    complex dynamic systems. The paper discusses in detail the theoretical foundations of the proposed method,
    including the mathematical description of the continuous and discrete system models, the optimization
    criterion formulation, and the derivation of equations for calculating the fusion weights. Particular
    attention is paid to analyzing the conditions under which the proposed algorithm improves estimation
    accuracy compared to using only a continuous or only a discrete filter.
    The authors present numerical modeling results that demonstrate the efficiency of the developed algorithm
    on the example of estimating the motion parameters of an autonomous underwater vehicle. It is shown that
    the proposed fusion method significantly reduces estimation errors compared to using separate filters,
    especially under incomplete and noisy measurements. In conclusion, it is stated that the developed
    algorithm is promising for application in various fields related to information processing in complex
    technical systems, such as navigation, motion control, and monitoring of objects and processes. It is
    noted that the proposed approach can be generalized to fusing data from a larger number of information
    sources and adapted to different types of discrete-continuous systems.
    The article will be of value to specialists in control theory, signal and information processing, as well
    as to developers of navigation and motion control systems. The research results can find practical
    application in the creation of high-precision state estimation systems in various technical applications.
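
    A scalar sketch of covariance-weighted estimate fusion may help fix the idea. This is the standard textbook form for two unbiased, uncorrelated estimates, simplified by me; it is not the article's discrete-continuous derivation.

```python
def fuse(x1, p1, x2, p2):
    # Covariance-weighted fusion of two unbiased, uncorrelated scalar estimates.
    # The weights minimize the fused variance, so the result is never worse
    # than the better of the two inputs.
    w1 = p2 / (p1 + p2)      # trust x1 more when its variance p1 is small
    w2 = p1 / (p1 + p2)
    x = w1 * x1 + w2 * x2
    p = p1 * p2 / (p1 + p2)  # equals (1/p1 + 1/p2) ** -1
    return x, p
```

    Equal variances give the plain average, and a very uncertain source is effectively ignored, which mirrors the observation that fusion helps most under incomplete and noisy measurements.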

  • IMAGE PREPROCESSING ALGORITHM TO REDUCE THE PROBABILITY OF OVERFITTING OF CONVOLUTIONAL NEURAL NETWORKS ON A NEURAL ACCELERATOR

    V.V. Kovalev
    Abstract

    In early detection systems, the bulk of the requirements falls on the performance of the digital
    image processing algorithms implemented on embedded devices with limited computing resources.
    In the early detection problem, objects in images are represented by a small number of pixels.
    Therefore, in order to ensure the required accuracy characteristics of the algorithms for searching and
    recognizing objects in images, algorithms for preliminary processing of a sequence of video frames are
    used to expand the original feature space. Processing high-resolution images by image preliminary processing
    algorithms leads to an unacceptable time delay in the execution of the algorithm and is a "bottleneck"
    of the entire algorithm. In this paper, an algorithm for preliminary processing of a sequence of video
    frames for a neural accelerator is proposed in order to expand the feature space, which allows increasing
    the speed of data processing. This is achieved by merging the image preliminary processing algorithm
    with the feature extractor of a convolutional neural network and transferring the execution of a new feature
    extractor to the computing power of the neural accelerator. The developed algorithm was tested in a
    computational experiment: on NVIDIA Jetson and Rockchip computing devices, the preliminary processing
    algorithm was implemented twice, on the central processor and, per the developed algorithm, on the
    neural accelerator. The obtained execution time estimates show that the proposed preliminary image
    processing algorithm for the neural accelerator increases the data processing speed by 1.4–4 times, depending on the bit depth of the calculations.
    However, switching the CNN model with the modified feature extractor to integer calculations reduces
    the Mean Average Precision metric, which characterizes the integral average accuracy of searching for
    and recognizing objects in images, by 5–19.4%.

  • APPLICATION OF A HYBRID AE-LSTM NEURAL NETWORK FOR ANOMALY DETECTION IN CONTAINER SYSTEMS

    I.V. Kotenko, M.V. Melnik
    Abstract

    The popularity of container systems attracts the attention of many researchers in the field of information
    technology. Containerization technology makes it possible to reduce the cost of computing resources when
    deploying and supporting complex infrastructure solutions. Ensuring the security of container systems, and
    of containerization in general, especially given malefactors' use of smart attacks based on artificial intelligence,
    is a serious problem on the way to the safe and stable operation of container systems. This article
    proposes an approach for detecting not only previously unknown individual anomalous processes, but also
    anomalous process sequences in container systems. The proposed approach and its implementation based
    on the Docker platform are based on tracing system calls, constructing histograms of running processes,
    and using the AE-LSTM neural network. The process of constructing histograms is based on accounting of
    the number of executed system calls for each individual process. This solution provides the ability not only
    to accurately identify any process in the system, but also to effectively detect anomalous process sequences
    with a high degree of accuracy. The generated sequences are used as input data for the neural network.
    After completing the training process, the neural network acquires the ability to detect anomalous sequences
    by comparing a given threshold of reconstruction error with the actual error level of the input
    data vector. When the neural network encounters a new input data vector, it calculates the reconstruction
    error level, that is, the difference between the expected and actual values. If this error exceeds a predetermined threshold, the system signals the presence of an anomaly in the sequence. Experiments show that the proposed
    approach demonstrates high accuracy in detecting anomalous processes with a low level of false
    positive detection results. Such results confirm the effectiveness of the proposed approach. The computational
    costs of training the neural network model are also quite low, which allows using less powerful hardware
    without significant performance losses. Such a solution can be trained and deployed in a new
    infrastructure in a fairly short time.
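
    The thresholding step described above can be sketched in a few lines. Here `model` is a stand-in for the trained AE-LSTM, and the function names and threshold value are hypothetical.

```python
def reconstruction_error(x, x_hat):
    # Mean squared error between an input vector and its reconstruction.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def is_anomalous(x, model, threshold):
    # `model` maps a vector to its reconstruction; inputs the model
    # reproduces poorly (error above the threshold) are flagged as anomalies.
    return reconstruction_error(x, model(x)) > threshold
```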

  • STRUCTURAL MODIFICATION OF THE HUFFMAN METHOD FOR LOSSLESS COMPRESSION OF DENSE DATA STREAMS ON AN RCS

    I.I. Levin, E.A. Dudnikov
    Abstract

    The demands of modern society require solving a whole range of computationally intensive tasks in real
    time. Such solutions require enormous computing power, broadband high-speed data transmission channels
    and impressive memory capacities. Such demands can be met by developing and implementing new
    technologies, expanding the technical infrastructure, which will require significant financial and time
    costs. Such a transition can be facilitated using the existing technical base by using real-time data compression
    algorithms. Data compression tools at the rate of receipt can increase the speed of calculations,
    data transfer, and reduce the occupied space during storage, using the existing infrastructure. Modern
    CPU-based technical platforms are not capable of providing streaming data processing at the rate of their
    receipt; the actual performance of such systems does not exceed 10% of the peak. Reconfigurable computing
    systems (RCS) based on programmable logic integrated circuits (FPGAs) can become a new platform
    for lossless data compression systems at the rate of receipt. However, for the efficient operation of such
    systems, it is necessary to develop new methods using structural calculations that allow the full potential
    of the FPGA resource to be unleashed. This paper presents the implementation of a modification of the
    dynamic Huffman coding algorithm on the RCS, which allows creating prefix codes of optimal length and
    processing dense data flows at the rate of receipt with a throughput of at least 128 Gbit/s. The performance
    of the developed modification is 5 times higher, per computing pipeline, than that of the best known
    comparable FPGA-based implementation.

  • SOME ASPECTS OF APPLICATION OF ARTIFICIAL INTELLIGENCE TECHNOLOGIES IN INFORMATION SECURITY (REVIEW)

    S.Y. Melnikov, R.V. Meshcheryakov, V.A. Peresypkin
    Abstract

    Artificial intelligence (AI) technologies are one of the most dynamically developing areas of information
    processing. AI technologies are used both to ensure the information security and to organize attacks
    on information security tools. AI systems themselves may contain vulnerabilities and be susceptible
    to various types of attacks. The article analyzes some aspects of the use of AI technologies in information
    security tasks. Within the framework of the task of biometric identification, threats of falsification of biometric
    identification characteristics in order to obtain access rights, and ways to counter such threats are
    considered. The advantages of using AI in protecting information in computer systems and networks in
    comparison with traditional means of protection are analyzed. Using the example of an acoustic channel
    of information leakage from a keyboard, the use of AI technologies for processing data from technical
    leakage channels is illustrated. Methods for increasing the information content of such channels using temporal convolutional networks and image classification models, as well as ways to counter them, are
    considered. Special attention is paid to information security issues in increasingly popular systems for
    compressing and transmitting information without significant semantic losses (Semantic Communications).
    A number of information security issues that arise when using large language models such as
    ChatGPT, capable of massively generating unique “human-like” content and using it to organize phishing
    and other social engineering attacks, are considered. An attack on AI systems using a covert channel is
    described. Attention is also paid to the need to develop trusted artificial intelligence technologies.

  • A HARDWARE-ORIENTED METHOD OF ACCELERATED SEARCH BY TEMPLATE BASED ON STRUCTURAL-PROCEDURAL COMPUTING

    E.A. Titenko, E.I. Vatutin, M.A. Titenko, A.P. Loktionov, E.V. Melnik
    Abstract

    The operation of searching for occurrences of a pattern in a text is broadly significant in modern
    computing tools for solving search-intensive tasks. Of greatest interest are hardware and software solutions
    that have a homogeneous structure and regular connections between computing blocks. The aim of
    the work is to reduce the time costs for searching for occurrences based on the use of parallel search in
    associative memory and the method of parallelization by iterations. The proposed method uses associative
    memory for parallel search for occurrences and dynamic reconfiguration of the structure of the original
    string from a one-dimensional form to a matrix form. The method is critical to such resources as the number
    of memory access channels, the volume of block memory for creating and parallel operation of an
    array of associative cells. Involvement of all elements in the reconfiguration entails excessive costs of the
    internal block memory for sequential viewing of partial entries by one set of starting positions multiples of
    the sample length (the second symbolic operand). Instead, an approach is proposed to combine in time the
    search for partial entries by two sets of substrings multiples of the sample length, with a simultaneous
    proportional reduction in the elements of the bit slice of the associative memory for each set, which allows
    processing several sample symbols at the current search step. Quantitative estimates of search time are
    determined by the number of comparison and substring writing operations in the overall work cycle, as
    well as the proportions of the time of these operations. It is shown that for samples of more than 10 elements,
    the time gain is approximately 1.8-2 times. This effect is obtained by eliminating the steps of sequential
    shift with transitions between the boundary elements of the strings. The developed method provides
    pipeline processing of a stream of string operands, combining, at the current search step, the viewing
    of a non-unit set of characters of the processed string. The search time is reduced by introducing
    a pipeline whose number of stages depends on the reduction coefficient of the bit slice size,
    which allows hardware implementation of the structural-procedural approach used in reconfigurable
    computing systems.

  • METHOD OF DEVELOPMENT OF THREAT SCENARIOS KNOWLEDGE BASE FOR INCIDENT RESPONSE PLATFORM (IRP)

    I.V. Mashkina, A.M. Urazaeva
    Abstract

    The objective of the work is to study the possibility of increasing the efficiency of response to information
    security (IS) incidents. This can be achieved by developing a system capable of quickly localizing
    an incident, providing automation of response to an IS threat, taking predetermined actions depending on
    the details of the threat scenario being implemented. An architecture for constructing an IRP system is
    proposed, the main modules of which are a response scenario knowledge base, a threat scenario
    knowledge base, modules for determining the incident status and making decisions on the formation of
    command information. The problem of developing threat scenarios for creating a scenario knowledge
    base has been solved, on the basis of which adequate response scenarios can be developed that are unique
    for each chain of the cybercriminal's actions, events and involved objects. The paper formalizes the method
    for developing a knowledge base of threat scenarios based on constructing EPC diagrams of scenarios
    that display multi-component attacks taking into account tactics, techniques, vulnerabilities used, and
    information security threats (IST) specified in regulatory documents and databases. The paper formulates
    the rules for constructing EPC diagrams of threat scenarios and the methodology for EPC modeling for
    objects of influence in ICS. An example of an attack scenario on an industrial network from a global network
    is considered in the case when a cybercriminal, having attacked a remote user's computer, first gains
    unauthorized access to the corporate segment and gains a foothold in it for further penetration beyond the
    perimeter of the process network. The paper presents the developed EPC diagram of a threat scenario
    indicating the tactics, techniques, intermediate IST, and some vulnerabilities used. The assessment of the
    probability of scenario implementation is formalized.

  • ALGORITHM FOR SEARCHING AND ACQUISITION OF KNOWLEDGE BASED ON TECHNOLOGIES FOR PROCESSING AND ANALYZING TEXTS IN NATURAL LANGUAGE

    E.M. Gerasimenko, Y.A. Kravchenko, D.A. Shanenko
    Abstract

    The article is devoted to the topical scientific problem of increasing the efficiency of processing and
    analyzing text information when solving problems of searching for and acquiring knowledge. The relevance of
    this task is related to the need to create effective means of processing the accumulated huge amount of
    poorly structured data containing important, sometimes hidden knowledge that is necessary for building
    effective control systems for complex objects of different nature. The algorithm for knowledge search and
    acquisition in processing and analyzing textual information proposed by the author is characterized by the
    use of low-level deterministic rules that enable qualitative text simplification by excluding
    meaning-invariant words from the text. The algorithm relies on domain elaboration to create lists of
    domain-specific words, which enables high-quality text simplification. In this
    task, the input data are streams of textual information (profile descriptions) extracted from online recruiting
    platforms; the output information is represented by sentences formed as "subject-verb-object" triples,
    reflecting the granules of knowledge obtained during text processing. The use of this order of
    units constituting a sentence is due to the fact that this order is the most widespread in the Russian language,
    although other variations of the order are possible in the texts themselves without losing the general
    meaning. The main idea of the algorithm is to split a large corpus of text into sentences, then filter the
    resulting sentences based on the keywords entered by the user. Subsequently, the sentences are further
    split into components and simplified depending on the type of received component (verbal, nominal).
    The field of marketing was used as an example in this work, and the keywords were "social media".
    The author has developed an algorithm for knowledge search and acquisition based on natural language
    text processing and analysis technologies, and a software implementation of the proposed algorithm
    has been performed. A number of metrics were used for efficiency evaluation: the Flesch-Kincaid index,
    the Coleman-Liau index, and the automated readability index. The conducted computational
    experiments have confirmed the effectiveness of the proposed algorithm in comparison with analogues
    that use neural networks to solve similar problems
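
    The splitting-and-filtering stage of the described pipeline can be sketched as follows. The naive regex sentence splitter and the function name are illustrative placeholders, not the author's deterministic rules.

```python
import re

def filter_sentences(text, keywords):
    # Split a corpus into sentences, then keep only the sentences that
    # mention at least one of the user-supplied keywords (case-insensitive).
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    lowered = [k.lower() for k in keywords]
    return [s for s in sentences if any(k in s.lower() for k in lowered)]
```

    The surviving sentences would then be split into verbal and nominal components and simplified, as the abstract describes.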

  • SYNTHESIS OF PSEUDO-DYNAMIC FUNCTIONS PD-sbox-ARX-32

    S.V. Polikarpov, V.A. Prudnikov, K.E. Rumyantsev
    Abstract

    The aim of the work is to develop a method for synthesizing optimal pseudo-dynamic functions
    PD-sbox-ARX-32, 32-bit in size, in accordance with conflicting requirements for cryptographic characteristics
    of the considered structure. Methods for synthesizing classical S-boxes are considered, including
    those using evolutionary and genetic methods. The requirements for cryptographic characteristics are
    presented, both for the PD-sbox functions and for their constituent elements (a classical S-box and ARX functions).
    A method for synthesizing pseudo-dynamic functions PD-sbox-ARX-32 is proposed, including two
    stages: 1) heuristic search for a structure corresponding to conflicting requirements for the resulting cryptographic
    characteristics, consumed software and hardware resources, as well as the speed of operation of the
    presented function; 2) search for optimal parameters of the main element of PD-sbox-ARX-32 – ARX functions,
    using the evolutionary method, the essence of which is to select the values of cyclic shifts in ARX functions.
    As a result, a set of four ARX functions was obtained for the pseudo-dynamic transformation of
    PD-sbox-ARX-32, having the weight of linear characteristics equal to and difference characteristics equal
    to (in this case the empirical weight is ). To determine the weights of cryptographic characteristics,
    methods based on the use of SAT solvers were used in the work. The paper concludes that the selected
    structure of the 32-bit ARX function in the PD-sbox allows for a critical path (maximum number of sequential
    addition operations modulo ) that is four times smaller than that of the 8-iteration 32-bit
    Alzette-like structure, with a twofold increase in the number of operations and comparable maximum values
    of the weights of the difference and linear characteristics. A similar result is obtained when comparing
    the 32-bit ARX function with the 8-iteration 32-bit transformation from the Speck32 block cryptographic
    algorithm. The proposed method for synthesizing the parameters of the 32-bit ARX function allows for
    minimizing the number of assembler instructions spent on cyclic shift operations when implemented on
    low-resource 8-bit microcontrollers AVR (for example ATmega328P).
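
    For readers unfamiliar with ARX designs, a generic add-rotate-xor mixing step in Python is shown below. The structure and rotation amounts are illustrative assumptions and are not the synthesized PD-sbox-ARX-32 parameters.

```python
MASK32 = 0xFFFFFFFF

def rotl(x, n):
    # 32-bit left rotation.
    return ((x << n) | (x >> (32 - n))) & MASK32

def arx_step(a, b, ra, rb):
    # One generic ARX mix: modular Addition, Rotation, Xor. The only
    # nonlinearity comes from the carry chain of the modular addition,
    # which is why the choice of rotation amounts matters so much.
    a = (a + b) & MASK32
    b = rotl(b, rb) ^ a
    a = rotl(a, ra)
    return a, b
```

    Selecting the cyclic shift values (ra, rb here) is exactly the degree of freedom the article's evolutionary method optimizes.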

  • IMAGE FUSION QUALITY ASSESSMENT USING SHANNON ENTROPY AND HARTLEY USEFUL INFORMATION COEFFICIENT

    A.A. Aleksandrov, G.S. Miziukov, M.A. Butakova
    Abstract

    The article examines methods for improving the quality of images obtained from different sources of
    information through multi-modal integration. By combining information from various modalities, features
    that cannot be accurately interpreted when analyzed separately can be utilized. To support the relevance
    of this topic, recent research in this area is discussed. The goal of the work is to enhance the information
    content of images resulting from merging data from diverse sources and create high-quality images suitable
    for accurate machine learning applications. To achieve this objective, the authors address several
    tasks. They develop an approach for measuring image quality and design algorithms to evaluate the quality
    of fused results based on multi-modal information. These algorithms are implemented within a software
    framework for validating the proposed approach. Evaluation experiments are conducted based on the
    presented calculations of the information content of images and the effect of noise and blur on the entropy
    of the combined image. The results of the experimental studies on data sets from open sources have shown
    that the proposed method allows determining the best way to merge images with maximum information
    content. The use of Shannon entropy makes it possible to calculate the amount of information transmitted
    in images, and the Hartley coefficient of useful information helps estimate the amount of noise in an image.
    Additionally, the article compares the results at different levels of noise and degrees of blur in images,
    demonstrating the effectiveness of different algorithms for evaluating image quality. To illustrate the
    proposed approach, we analyze images obtained by combining data from two devices: an infrared camera
    and a video camera capturing images in the visible range.
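
    The Shannon entropy measure of information content can be computed directly from the gray-level histogram. The minimal sketch below assumes a flat list of pixel intensities; real images would be flattened arrays.

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    # Entropy (in bits) of the intensity histogram: how much information
    # the distribution of gray levels in the image carries.
    n = len(pixels)
    probs = (count / n for count in Counter(pixels).values())
    return -sum(p * math.log2(p) for p in probs)
```

    A uniform histogram maximizes entropy, while a constant image carries no information, which is the basis for ranking candidate fusion results.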

  • BUILDINGS FIRES FREQUENCY DETERMINING METHODOLOGY BASED ON DENSITY ESTIMATION AND SIMULATED ANNEALING METHODS

    O.S. Malyutin, R.S. Habibulin
    Abstract

    Solving the problem of determining the optimal spatial location of fire departments is a rather complex
    scientific and technical task that, as previous studies have shown, involves an extensive list of factors,
    including the need to assess the expected frequencies of fires in different parts of settlements depending
    on the nature of the development. Currently, in Russia, approaches and methods for solving this problem
    are not sufficiently developed. As a rule, researchers limit themselves to stating the existence of a
    spatial distribution of fires, without delving into the causes that produced one or another character of such a
    distribution. Meanwhile, their understanding will allow us to build models for estimating the expected
    densities of fire flows in various areas. The article proposes an approach based on the method of estimating
    the spatial density of random events (KDE, Kernel Density Estimation) and a simulated annealing
    algorithm to select the values of the calculated frequencies of fires in buildings of various classes of functional
    fire hazard. The approach has been tested on the available data on fires for the period 2010-2020
    and urban development in the city of Krasnoyarsk. The study showed that the proposed approach allows
    us to obtain such values of fire occurrence frequencies at which their predicted density will be as close as
    possible to the actual one. The results obtained expand the set of research tools in the field of assessing
    both the actual and predicted fire situation and are aimed at developing methods and algorithms for determining
    the optimal locations of fire departments. The proposed approach can also be used to solve
    other problems of spatial optimization in the fields of public safety, road safety, protection of the population
    from emergency situations, as well as in urbanism and urban planning.
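
    The KDE building block of the proposed approach can be illustrated in one dimension with a Gaussian kernel. Real fire data are two-dimensional coordinates; this simplification and the function name are mine.

```python
import math

def gaussian_kde_1d(points, x, bandwidth):
    # Kernel density estimate at x: a Gaussian bump is centred on each
    # observed event, then summed and normalised. A larger bandwidth
    # produces a smoother estimated density.
    n = len(points)
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points) / norm
```

    In the article's scheme, simulated annealing would then adjust per-building-class frequencies so that the predicted density best matches the density estimated from observed fires.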

  • KEYPHRASE EXTRACTION BASED ON LARGE LANGUAGE MODELS

    Mohammad Juman Hussain
    Abstract

    The article addresses the current problem of extracting key phrases from natural language texts,
    which is a critical task in the field of natural language processing and text mining. It examines in detail
    the main approaches to extracting key phrases (keywords), including both traditional methods and modern
    approaches based on artificial intelligence. The paper discusses a set of widely used methods in this field,
    such as TF-IDF, RAKE, YAKE, and linguistic parser-based methods. These methods are based on statistical
    principles and/or graph structures, but they often face problems related to their insufficient ability to
    take into account the context of the text. The GPT-3 large language model demonstrates superior contextual
    understanding compared to traditional methods for key phrase extraction. This advanced capability
    allows GPT-3 to more accurately identify and extract relevant key phrases from text. The comparative
    analysis using the Inspec benchmark dataset reveals GPT-3's significantly higher performance in terms of
    Mean Average Precision (MAP@K). However, it should be noted that despite high accuracy and extraction
    quality, the use of large language models may be limited in real-time applications due to their longer
    response time compared to classical statistical methods. Thus, the article emphasizes the need for further
    research in this area to optimize key phrase extraction algorithms, taking into account real-time requirements
    and text context.
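
    The MAP@K comparison rests on average precision at K, which can be sketched as follows for a single document; averaging over documents gives MAP@K. The function name and denominator convention are illustrative, not necessarily the article's exact formulation.

```python
def average_precision_at_k(predicted, relevant, k):
    # AP@K for one document: mean precision at each rank (within the top k)
    # where a relevant key phrase appears in the predicted ranking.
    score, hits = 0.0, 0
    for rank, phrase in enumerate(predicted[:k], start=1):
        if phrase in relevant:
            hits += 1
            score += hits / rank
    denom = min(len(relevant), k)
    return score / denom if denom else 0.0
```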

SECTION II. DATA ANALYSIS AND MODELING

  • AN APPROACH TO BUILDING ADAPTIVE OBJECT ACCOUNTING SYSTEMS USING ARTIFICIAL INTELLIGENCE METHODS

    V.I. Voloshchuk, Ali Garyagdiyev, M.A. Kozlovskaya, Y.E. Melnik, A.N. Samoylov
    Abstract

    The use of artificial intelligence methods for object accounting is associated with a number of difficulties,
    such as the variability of objects, the influence of shooting conditions, the overlap of objects in
    complex scenes, the need to work with different scales and high accuracy, as well as the presence of noise
    distortions in the data. The paper proposes an approach based on dynamic learning and adaptation to
    input data to organize the setup and operation of adaptive object accounting systems based on artificial
    intelligence methods, which includes several consecutive stages. The first stage is the semantic analysis of
    the user's request, which is based on the use of vector-graph data structure, which provides the allocation
    of semantically important elements of the request, allowing the system to understand the context of the
    task and adapt the strategy of search and classification of objects. Then follows the stage of automatic collection and preprocessing of data from open sources, which provides the expansion of the training
    sample and increases the stability of the model. The next important step is the generation of the training
    sample. This process includes image retrieval based on query semantics, manual validation and data partitioning,
    and initial training of the system for automatic partitioning. The above steps are repeated until
    the desired system performance is achieved. The iterative pre-training process, based on alternating
    automatic markup and manual correction, reduces the time spent on forming training
    samples. The advantage of using a vector-graph structure is the formation of a more accurate semantic representation
    of information. Data augmentation including rotation, reflection, scaling, changing brightness
    and contrast, and adding noise is applied to enhance the generalization ability of the model. The proposed
    approach is designed to improve the efficiency (as the ratio of system operation time to its setup time) of
    object registration systems, ensuring their adaptability to different tasks and survey conditions.
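
    Two of the augmentations listed above, reflection and rotation, can be sketched on a toy 2D grid. Real systems would operate on image tensors, and the helper name is hypothetical.

```python
def augment(img):
    # Toy augmentations on a 2D grid (a list of rows): a horizontal flip
    # and a 90-degree clockwise rotation. Each variant is a new training
    # sample that preserves the object's identity.
    hflip = [row[::-1] for row in img]
    rot90 = [list(col) for col in zip(*img[::-1])]
    return hflip, rot90
```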

  • MODELING SIDE-CHANNEL LEAKAGES FOR THE CRYPTOGRAPHIC ALGORITHMS "MAGMA" AND "KUZNYECHIK" BASED ON THE ELMO EMULATOR

    V.O. Malyavina, E.A. Maro
    Abstract

    Analysis of the resistance of implementations of information security tools to attacks via side channels
    is a relevant task in the development of cryptographic modules. The first stage in the study of resistance
    via side channels is the assessment of the presence of statistical leaks in various parameters of the
    operation of devices during the execution of cryptographic algorithms. A universal source to assess as a
    side channel is the device's energy consumption during cryptographic operations. In this research, the
    ELMO tool was used to obtain power consumption traces for the Magma and Kuznyechik encryption
    algorithms and to identify instructions containing statistical power consumption leaks for the observed
    algorithms. To model the power consumption traces, the GOST R 34.12—2015 encryption algorithm
    (n=64 Magma and n=128 Kuznyechik) was implemented in C in ELMO. The full-round version of the
    Magma and Kuznechik encryption algorithms consists of 15,400 instructions (of which 4,450 instructions
    contain a potential leakage in energy consumption) and 7,167 instructions (of which 4,833 instructions
    contain a potential leakage in energy consumption), respectively. The side channel (corresponding to the
    processed data) can be identified using a statistical t-test. To perform this task, two independent sets of
    device energy consumption traces are formed: traces with a fixed value of the input vectors and traces
    with arbitrary (not coinciding with the fixed) values of the input vectors. Power consumption leaks were
    modeled for different numbers of Magma and Kuznyechik encryption rounds based on the statistical t-test.
    The instructions showing the maximal statistical dependence according to the conducted testing were
    determined: for the Magma cipher, adds r3,r4,r3 and ldrb r3,[r3,r1]; for the Kuznyechik cipher,
    lsls r5,r3,#0x0 and str r7,[r3,#0x20000888]. The identified instructions are optimal targets for subsequent
    differential or correlation power-consumption attacks on the encryption algorithms under research.
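    The fixed-vs-random leakage test described above is, in essence, Welch's two-sample t-statistic computed per trace sample point. A minimal sketch with synthetic traces follows; the 4.5 threshold is the customary leakage-assessment choice, not a value taken from the article:

```python
import numpy as np

def tvla_t_statistic(fixed_traces, random_traces):
    """Welch's t-statistic per sample point for two independent trace sets
    (fixed-input vs. random-input), as used in leakage assessment."""
    f = np.asarray(fixed_traces, dtype=float)
    r = np.asarray(random_traces, dtype=float)
    mf, mr = f.mean(axis=0), r.mean(axis=0)
    vf, vr = f.var(axis=0, ddof=1), r.var(axis=0, ddof=1)
    return (mf - mr) / np.sqrt(vf / len(f) + vr / len(r))

# Synthetic example: 2000 traces of 4 sample points; only point 2 depends
# on the processed data, so only it should be flagged.
rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(2000, 4)) + np.array([0.0, 0.0, 1.0, 0.0])
rand_ = rng.normal(0.0, 1.0, size=(2000, 4))
t = tvla_t_statistic(fixed, rand_)
leaky = np.abs(t) > 4.5  # customary |t| threshold for a statistical leak
```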

  • OVERVIEW OF SWITCHING SUBSYSTEM MODELS FOR DIGITAL PHOTONIC COMPUTING DEVICES

    D.A. Sorokin, A.V. Kasarkin
    Abstract

    This article examines options for organizing the switching subsystem of digital photonic computing
    devices, whose main task is to enable efficient computations in various problem domains. According to the
    authors, digital photonic computers should process information within a structural computing paradigm.
    This paradigm fundamentally differs from the classical von Neumann paradigm, as data transfer between
    functional elements is inseparable from processing. Therefore, developing a switching subsystem in digital
    photonic computing devices is a critical challenge. This subsystem must handle data dependencies between
    operations not only in time but also in space. Only under these conditions can data processing in
    photonic computing systems achieve performance that exceeds the performance of the most advanced
    electronic computing systems by two or more decimal orders of magnitude. The article addresses issues of streaming
    data exchange between functional devices in a digital photonic computer. The authors developed and analyzed
    switching device models and methods for organizing the switching subsystem for sequential data
    processing, using a basis of photonic logic. The research established that structural organization of computations
    in digital photonic computers is feasible when data exchange is achieved through spatial switching
    of input and output channels of functional devices. In implementing digital photonic computers as
    universal devices aimed at a wide range of tasks, hierarchical and hierarchical-ring variants of the
    switching subsystem organization are most suitable for forming computational structures. However, these
    variants are characterized by high overhead for constructing switches. Therefore, in problem-oriented
    photonic computers designed for solving highly interconnected tasks with high specific performance, the
    use of orthogonal or toroidal switching subsystems is preferred. In this case, direct spatial switching between
    functional devices within a group, as well as between groups, should be ensured. These variants
    have higher requirements for the quality of physical channels formed between switches and functional
    devices, as well as between the switches themselves.

  • SIMULATION OF THE ELECTRIC FIELD STRENGTH DISTRIBUTION IN AN ALL-OPTICAL LOGIC COMPARATOR BASED ON THE GaAs PHOTONIC CRYSTAL

    M. Pleninger, S.V. Balakirev, M.S. Solodovnik
    Abstract

    Photonic crystals, semiconductor structures with a photonic band gap, are of great interest to the
    scientific community. They represent a new class of optical materials with spatial periodic modulation of
    permittivity with a period close to the wavelength of radiation. Interest in these structures is explained by
    their importance for fundamental studies of the interaction of radiation with matter and the potential for
    creating next-generation optoelectronic devices. This paper presents the results of modeling a compact
    optical logic comparator based on a GaAs photonic crystal operating in the second transparency window
    of an optical fiber (wavelength of 1.3 μm). The model comparator is a medium with two input and two output optical channels. When radiation is input to one of the comparator inputs, the corresponding output
    channel transmits radiation, indicating a logical one. In the absence of signals on the input channels
    or when signals are input to both input channels, both output channels do not transmit radiation, indicating
    logical zeros. The channels in the comparator are created using intersecting waveguides formed in a
    two-dimensional GaAs photonic crystal, which consists of a set of cylindrical GaAs crystals (pillars) with
    a diameter of 130 to 170 nm, embedded in a vacuum medium with a period of 450 to 750 nm. To ensure
    attenuation of electromagnetic waves introduced into the comparator in both input channels, defective
    GaAs pillars with a smaller diameter are embedded at the intersection of the waveguides. The influence of
    the diameter and period between the GaAs photonic crystal pillars on the propagation patterns of electromagnetic
    radiation in the optical comparator medium is studied. Based on the analysis of the ratio of
    signal intensity levels at the inputs and outputs of the device, it is established that the optimal diameter of
    the GaAs pillars and the distance between them, at which the structure best meets the requirements of the
    logic comparator, are 155 nm and 600 nm, respectively.
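    The switching behavior described above (an output channel transmits only when exactly its own input carries radiation) can be captured by a simple logical model; this sketch illustrates the stated truth table only, not the electromagnetic simulation itself:

```python
def comparator(in1: bool, in2: bool) -> tuple:
    """Logical model of the described comparator: each output is a logical
    one only when its own input is active and the other input is not."""
    return (in1 and not in2, in2 and not in1)

# No input, or both inputs driven, yields logical zeros on both outputs.
table = {(a, b): comparator(a, b)
         for a in (False, True) for b in (False, True)}
```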

  • DEVELOPMENT AND RESEARCH OF A QUANTUM GRAPH MODEL FOR IMAGE COMPRESSION AND RECONSTRUCTION

    A.N. Samoilov, S.M. Gushanskiy, N.E. Sergeev, V.S. Potapov
    Abstract

    The article discusses in detail the methods and approaches to the application of quantum algorithms
    for solving optimization and image processing problems. Particular attention is paid to quantum approximate
    optimization (QAO) and the use of quantum networks for data compression and reconstruction problems.
    QAO is a hybrid algorithm that combines quantum and classical computational processes, allowing
    one to efficiently solve complex combinatorial problems. QAO is based on parameterized unitary operations
    that are optimized during iterations. This approach makes it possible to consider the unique features
    of the quantum nature of information, which in some cases allows achieving higher performance than
    when using exclusively classical methods. In the process of implementing QAO, one of the main obstacles
    remains the problem of noise, which can arise, for example, when using CNOT gates. The article discusses
    various strategies for reducing the noise level, which is an important task for ensuring the stability and
    improving the accuracy of quantum algorithms. For example, methods for isolating individual operations
    and correcting errors are considered, which allows one to minimize the impact of noise on the calculation
    results and improve the accuracy of quantum optimization. The authors also propose a graph interpretation
    of quantum models based on the use of tensor networks. This approach allows for efficient simplification
    of computational graphs, thereby optimizing the resources required to perform complex quantum
    operations. This method also demonstrates high efficiency in image compression and restoration tasks,
    which opens up new prospects for the application of quantum networks in data processing. The article
    describes the structure of quantum networks, including multilayer quantum gates, which allow for deeper
    and more detailed image processing, providing both efficient compression and high-quality data restoration.
    An analysis of various types of quantum gates, such as Hadamard, Pauli-X, Pauli-Y, and T-gates,
    was also conducted. These gates play a key role in the efficiency of quantum algorithms, since each of
    them contributes to quantum dynamics and the way quantum states are manipulated.
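    As a purely classical analogue of the compression-and-reconstruction idea, truncated SVD (the basic low-rank operation underlying tensor-network factorizations) can be sketched as follows; this is an illustration of the principle, not the quantum model proposed in the article:

```python
import numpy as np

def compress_svd(image, rank):
    """Low-rank compression and reconstruction via truncated SVD -- the
    elementary step behind tensor-network image compression schemes."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

rng = np.random.default_rng(1)
# Synthetic rank-3 "image": exactly recoverable from a rank-3 truncation.
img = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 64))
rec = compress_svd(img, rank=3)
err = np.linalg.norm(img - rec) / np.linalg.norm(img)
```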

SECTION III. ELECTRONICS, INSTRUMENTATION AND RADIO ENGINEERING

  • A DUAL-POLARIZED TAPERED SLOT ANTENNA ARRAY WITH REDUCED PROFILE HEIGHT

    I.N. Bobkov, Y.V. Yukhanov
    Abstract

    An element of a planar Vivaldi antenna array designed to operate on two linear polarizations is
    considered. The antenna array element is made of a dielectric substrate with double-sided metallization
    and consists of a tapered slot, a balun, and a short section of microstrip line. At the same time, the length of
    the balun is reduced by making it sine-shaped and, thus, the longitudinal size of the Vivaldi element and
    the profile height of the entire antenna array are reduced. The connection between adjacent elements of
    the antenna array is carried out using metal posts placed on a metal screen. The results of a numerical
    study of the matching and radiation characteristics of a unit-cell with periodic boundary conditions on the
    faces are presented. It is shown that despite the reduction of the length of the Vivaldi antennas, due to the
    miniaturization of the balun, there is no narrowing of the operating frequency band in the antenna array.
    The calculated gain of the unit-cell is close to the theoretically achievable directivity of the aperture of the
    same area as the unit-cell. The broadside radiation efficiency does not fall below 75% over the entire operating frequency range. A study of the radiation characteristics when scanning the beam of the radiation
    pattern in the E-, H-, and D-planes showed the possibility of deflecting the beam by a 60° angle without
    the appearance of antenna array blinding effects. The effect of low isolation between the nearest
    orthogonal elements of the antenna array on the matching at the input of the elements when scanning the
    beam in a diagonal plane is shown. The presented results of a study of the cross-polarization characteristics
    of the element when the beam is deflected at an angle of 45° in the D-plane show that the cross-polarization
    gain of the unit-cell is less than the co-polarization gain by 6 to 15 dB.
    The operating frequency band, determined by the VSWR ≤ 3 level, ranges from 915 to 7500 MHz.

  • HIGH-SPEED OUTPUT STAGES OF OPERATIONAL AMPLIFIERS WITH DIFFERENCING CIRCUIT CORRECTION OF TRANSITION PROCESS

    A.A. Zhuk
    Abstract

    The development and design of gallium arsenide (GaAs) analogue functional units in modern microelectronics
    (operational amplifiers, output stages, etc.) is at an early stage. This is because GaAs wide-gap
    semiconductors are currently positioned primarily for high-current and ultrahigh-frequency electronics
    (e.g., power supplies, power amplifiers, etc.). To create a micro-power analogue component base operating
    under severe conditions, for example, under high temperatures (+300...+350 °C) and radiation, it is
    necessary to develop special GaAs circuit solutions that take into
    account the parameters and limitations of the corresponding technological processes. A family of output
    stages protected by 5 patents of the Russian Federation for various modifications of GaAs micro-power
    operational amplifiers is proposed, which can be realised on the combined GaAs technological process
    allowing to create n-channel field-effect transistors with control p-n junction and GaAs bipolar p-n-p
    transistors. The considered OS circuits differ from each other by the values of input and output resistances,
    static current consumption, circuitry of static mode establishment circuits, frequency range, maximum
    amplitudes of positive and negative output voltage, etc. The results of comparative computer modeling of
    the static mode, amplitude and amplitude-frequency characteristics of the OS in LTspice simulation software
    are given. The proposed circuit solutions are recommended for application in GaAs micro-power
    operational amplifiers of new generation, as well as for use in various GaAs analog microelectronic devices,
    including those operating under severe operating conditions: exposure to penetrating radiation and
    low temperatures. For small-scale production of the proposed output stages, it is recommended to implement
    them in the GaAs technological process mastered by the Minsk Scientific Research Institute of Radio
    Materials (JSC ‘MNIIRM’, Minsk, Republic of Belarus), which allows the proposed circuits to operate at
    high temperatures (up to +300...+350 °C), as well as under penetrating radiation with an absorbed dose
    of gamma quanta of up to 1 Mrad and a neutron flux of up to 10¹³ n/cm².

  • ANTENNA ARRAY OF COMPACT VIVALDI RADIATORS WITH ELLIPTICAL CUTOUTS ON THEIR OUTER EDGE

    R.E. Kosak
    Abstract

    The influence of elliptical cutouts on the outer edge of a compact Vivaldi radiator, designed as part
    of an infinite phased array (PA) and a finite antenna array (AA), on its radiation characteristics is investigated.
    The voltage standing wave ratio (VSWR) and realized gain (RG) of the radiator are estimated. For
    the radiator as part of the infinite PA, the radiation characteristics were obtained in the sector of angles
    ±60° in the E– and H–planes. The research determined that the introduction of elliptical cutouts measuring
    3 × 2 mm on the outer edges of the Vivaldi radiator as part of the PA makes it possible to expand the
    operating frequency band in terms of VSWR ≤ 3 in both planes, and improve the average level of matching
    in wide-angle scanning mode in the E-plane. In the E–plane in scanning mode in the sector of angles
    ±60°, the overlap ratio at the level of VSWR ≤ 3 increases from 2.86 to 3.41, and in the H–plane in scanning
    mode in the sector of angles ±45°, the overlap ratio at the level of VSWR ≤ 3 (≤ 3.05 at 45°) increases
    from 2.74 up to 3.15. A 16-element array from the compact ultra-wideband (UWB) Vivaldi radiators
    with and without elliptical cutouts is researched. When using elliptical cutouts in the design of finite-size
    antenna array radiators, the overlap ratio increases from 2.07 to 2.37. It has been determined that the AA,
    as well as the radiator in the PA, is UWB and can operate in the range from 283.8 to 671.3 MHz at the
    VSWR level ≤ 3, which corresponds to an overlap ratio of 2.37. The average VSWR when all radiators
    are switched on is at the level of 4, and when one row of radiators is connected to matched loads, at the
    level of 1.6. Over almost the entire operating frequency band in this case, the VSWR value stays below
    2.3. The realized gain in the operating frequency band is in the
    range from 3.82 to 9.50 dB.
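    The overlap ratios quoted above are simply the ratio of the upper band edge to the lower one; for instance, for the 16-element array's band of 283.8 to 671.3 MHz:

```python
def overlap_ratio(f_min_mhz, f_max_mhz):
    """Band overlap ratio: upper band edge divided by the lower band edge."""
    return f_max_mhz / f_min_mhz

# Band reported for the 16-element array with elliptical cutouts.
k = overlap_ratio(283.8, 671.3)
```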

  • SOME METHODS FOR DATA FLOW SYNCHRONIZATION IN DIGITAL SIGNAL PROCESSING SYSTEMS

    I.I. Levin, D.S. Buryakov
    Abstract

    The paper proposes some methods of providing coherent data processing in radar and communication
    systems that include phased arrays. An approach is developed to collect digitized data from the antenna
    elements of phased array and transfer information between distributed nodes that perform digital
    signal processing. To ensure coherent data processing and transmission, it is proposed to use a reference
    clock frequency signal and a common machine time, which are generated in the central node and distributed
    through the channels with the same delay. All control actions in the processing nodes are based on
    these signals. To transmit the digitized data from the antenna elements of the phased array, the paper
    proposes fragment-wise transmission of operands with information integrity control and time referencing
    of the digitized data. The experiments conducted on a real directional pattern formation
    device confirmed the effectiveness of this method and its suitability for practical use. The development
    of digital signal processing systems with phased array is constantly moving forward, and new radar
    systems with high resolution and sufficient sensitivity are required. Usually, to increase the resolution, the
    number of antenna elements is increased. However, this leads to an increase in the size of the antenna and
    hence the length of the links. As the length of the link increases, differences in signal propagation paths
    may occur due to the variation in the characteristics of optical links and the influence of external factors
    on the signal as it is transmitted through longer links. This can lead to inhomogeneity in delays between
    synchronization channels and disruption of the coherent processing system. In this regard, a new method
    of dynamic compensation of delays in the channels of the common machine time system for correct operation
    with long communication lines is proposed in this paper.
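    The delay compensation discussed above rests on estimating the channel delay and the clock offset of each node. One standard way to obtain these quantities is the two-way time-transfer estimate used in NTP-style synchronization; the sketch below is that generic formula under a symmetric-channel assumption, not the specific method of the paper:

```python
def channel_delay_and_offset(t_send, t_remote_rx, t_remote_tx, t_recv):
    """Two-way time transfer: t_send/t_recv are local timestamps of the
    request departure and reply arrival; t_remote_rx/t_remote_tx are the
    remote node's timestamps. Assumes a symmetric channel."""
    delay = ((t_recv - t_send) - (t_remote_tx - t_remote_rx)) / 2
    offset = ((t_remote_rx - t_send) + (t_remote_tx - t_recv)) / 2
    return delay, offset

# Example: remote clock runs 5 time units ahead, one-way delay is 2 units.
delay, offset = channel_delay_and_offset(0.0, 7.0, 8.0, 5.0)
```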

  • METHOD FOR DETECTING OPTICAL SIGNAL IN QUANTUM NETWORKS

    A.P. Pljonkin
    Abstract

    The article presents a method for detecting an optical synchronization signal for a section of a
    quantum communications network. The objective of the article is to present a variant of implementing an
    urban quantum network. The paper considers a proposed solution to the problem of configuring a synchronization
    channel for quantum communication systems with a non-standard topology. A generalized
    operating principle of a quantum key distribution system with phase coding is described. A synchronization
    algorithm adapted for configuring an urban quantum network containing several segments is proposed.
    A feature of the proposed scheme is the presence of one receiving and transmitting station with
    which several coding stations interact. The article presents the results of analyzing the energy model of
    the proposed method and calculating the average losses in the quantum channel. In conclusion, we discuss
    possible variants of the structure of quantum networks and the applicability of synchronization processes
    in them. Quantum communications networks are actively scaling and use various quantum key distribution,
    authentication, and synchronization protocols. Quantum key distribution (QKD) solves the central
    problem of symmetric cryptography and is a secure technology for generating an identical bit sequence
    for two remote users. Theoretically, the security (resistance) of such technology does not depend on the
    computing power of hackers, who, for example, may have a quantum computer. However, the practical
    implementation of theoretical models still shows technical imperfections, which allow attackers to find
    vulnerabilities. When researching and designing various modifications of quantum key distribution systems
    (QKDS), it is necessary to pay attention not only to the issues of the stability of quantum protocols,
    but also to the components of the technical implementation of the equipment.
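    An average-loss calculation for a fiber quantum channel of the kind mentioned above typically sums the per-kilometer fiber attenuation and fixed connection losses. The sketch below uses common textbook values (0.2 dB/km near 1550 nm, 0.05 dB per splice); these figures are illustrative assumptions, not numbers from the article:

```python
def channel_loss_db(length_km, alpha_db_per_km=0.2,
                    splices=0, splice_loss_db=0.05):
    """Total attenuation of a fiber channel: distributed fiber loss plus
    fixed loss per splice (assumed typical values, see lead-in)."""
    return length_km * alpha_db_per_km + splices * splice_loss_db

def transmittance(loss_db):
    """Fraction of photons surviving a channel with the given loss in dB."""
    return 10 ** (-loss_db / 10)

# Hypothetical 25 km urban segment with 4 splices.
loss = channel_loss_db(25.0, splices=4)
eta = transmittance(loss)
```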

  • ULTRA-WIDEBAND VIVALDI ANTENNA ARRAYS WITH TEM HORN

    A.V. Gevorkyan, V.S. Savostin
    Abstract

    The paper presents the designs and characteristics of antenna arrays based on the antipodal Vivaldi
    element. Antenna arrays with TEM horns of linear and elliptical profile are studied. The parameters of the
    horns were optimized. The characteristics were studied in the frequency range from 4 to 12 GHz. The
    antenna array with a linear-profile TEM horn has the best VSWR in the range from 4 to 5 GHz (a maximum
    of 4.75 for the edge elements and 3.33 for the others). The operating frequency band of the antenna
    array is in the range from 4.90 to 12.00 GHz (overlap coefficient kп = 2.45). The frequency characteristic
    of the realized gain has dips. The antenna array with an elliptical-profile TEM horn with a narrow base
    has the smallest operating frequency band (from 7.06 to 12.00 GHz, kп = 1.70) and a smooth characteristic
    of the realized gain. The antenna array with an increased base width of the elliptical-profile TEM horn
    has the best VSWR in the range from 5.3 to 12.0 GHz (a maximum of 2.51 for the edge elements and 2.15
    for the others), but the characteristic of the realized gain is smooth only up to 9 GHz. The operating
    frequency band of this antenna array is in the range from 4.84 to 12.00 GHz (kп = 2.48). The best
    characteristics are shown by the antenna array with an elliptical-profile TEM horn with an expanded base
    and increased height. An increase in the height of the horn raises the values of the realized gain at
    frequencies above 9.25 GHz, where there were dips. The operating frequency band ranges from 4.72 to
    12.00 GHz (kп = 2.54). In the operating frequency band, the values of the realized gain are in the range
    from 11.9 to 20.6 dB. Thus, by choosing the shape and parameters of the horn, the frequency characteristics
    of the antenna array can be improved.

  • NOISE-RESISTANT LOW-ORBIT REPEATER SATELLITE IDENTIFICATION PROTOCOL

    I.A. Kalmykov, I.D. Efremenkov, D.V. Dukhovnyj
    Abstract

    The development of deposits in the Far North is one of the global projects implemented by the Russian
    Federation. Effective control and monitoring of the condition of unattended objects (UO) engaged in
    hydrocarbon extraction, and reliable communication of control commands to them, are possible only with the help
    of low-orbit satellites (LOS) combined into one grouping. However, as the number of countries involved in
    the development of deposits in the Far North expands, the number of LOS groupings will also grow. As a
    result, the receiver located on the UO will see several repeater satellites at once. In this case, a low-orbit
    intruder satellite (LOIS) may try to impose a previously intercepted control command on the receiver,
    which may lead to the failure of the UO. To prevent the imposition of such spoofing interference, one can
    use a low-orbit satellite identification system (LSIS). The effectiveness of the LSIS depends largely on the
    identification protocol. To increase the speed of authentication of the LOS, a number of works propose
    using a zero-knowledge protocol executed in modular codes (MC).
    This result is achieved by parallel execution of arithmetic operations on the basis of the code. However,
    this property of the MC can be used to increase the noise immunity of the LSIS, which must function in
    various weather conditions. The goal is to develop a noise-resistant protocol for recognizing an LOS
    repeater, executed in modular code and requiring less time for error correction.