No. 3 (2024)
Full Issue
SECTION I. COMPUTING AND INFORMATION MANAGEMENT SYSTEMS
-
NEUROCOGNITIVE ALGORITHMS FOR MANAGING MULTI-AGENT ROBOTICS SYSTEM FOR AGRICULTURAL PURPOSES
К.C. Bzhikhatlov, I.А. Pshenokova, А.R. Makoev
Abstract. The main goals of introducing robots into agriculture are to increase efficiency and productivity, to perform labor-intensive and dangerous tasks, and to address the shortage of labor. Technological advances in sensing and control, as well as in machine learning, have allowed autonomous
robots to perform more agricultural tasks. Such tasks vary at all stages of cultivation: from preparation of
land and sowing to monitoring and harvesting. Some agricultural robots are already available, and it is
expected that in the coming years there will be even more, as big data processing, machine vision and object-capture technologies become more accurate. Currently, the deployment of several interacting
robots in the field is becoming increasingly relevant, since it has good prospects in reducing
production costs and increasing operating efficiency. The purpose of this study is to develop an intelligent system for controlling a group of mobile robots based on multi-agent neurocognitive architectures. The task of the study is to develop neurocognitive algorithms for controlling a multi-agent robotic system for agricultural purposes. The work describes a multi-agent robotic complex for active plant protection
within the framework of the Smart Field system. The concept of the management system of the group of
mobile robots based on modeling multi-agent neurocognitive architectures is presented. To support the operation of a multi-agent heterogeneous group of autonomous robots, it is proposed to use a neurocognitive control model in which individual intelligent agents are implemented on each robot and at service bases or servers. At the same time, because recursion is built into the architecture itself, the task of scaling such a management system is noticeably simplified. The use of sensors and effectors for exchanging knowledge between robots and decision-making centers makes it possible to minimize the load on the communication system and to provide a fault-tolerance margin for the management system. The results obtained can be used to develop and simplify universal control systems for various groups of autonomous robots. -
HARDWARE AND SOFTWARE MEANS FOR DYNAMIC RECONFIGURATION OF A GROUP OF SMALL SPACE VEHICLES
S.N. Emelyanov, S.N. Frolov, Е.А. Titenko, D.P. Teterin, А.P. Loktionov
Abstract. The goal of the study is to automate the control of a group of nanosatellites under conditions of a variable number of them, by updating the group state based on sending and processing broadcast requests between nanosatellites and using a Transformer neural network. The neural network is needed to make predictions
about the state of the spacecraft network. The problem of ensuring connectivity of a network of
nanosatellites is studied, which comes down to the implementation of adaptive network control with
assessment and prediction of the state of communication channels between pairs of devices based on a
neural network. Procedures for dynamic reconfiguration and machine learning of the network of devices have been developed. Algorithmic tools have been defined for the initial training of a neural network and its subsequent
additional training, taking into account the preprocessing of the original sparse or fully connected
data sets about the network of devices. Upon completion of training on synthetic data, the created
neural network is able to predict the quality of communication, taking into account line of sight, signal
attenuation depending on distance and the state of the nanosatellite hardware platform. The developed
software system performs deterministic reconfiguration based on the current state of the nanosatellite
network and adaptive reconfiguration based on historical data by analyzing the hidden patterns of
nanosatellite functioning using the Transformer neural network. To predict the quality of communication,
a functional is used that relates the geodetic coordinates of pairs of satellites and their state vectors to the elements of the matrix of communication quality between nanosatellites for a given initial time, time interval, and sampling step of the measurement
process. The use of neural networks implemented on GPUs made it possible to predict possible
states of nanosatellites and carry out reconfiguration of the constellation in advance, including
removing “problematic” nanosatellites from the network. -
NEUROLINGUISTIC INFORMATION IDENTIFICATION OF INTELLIGENT SYSTEMS
L.К. Khadzhieva, V.V. Kotenko, К.Y. Rumyantsev
Abstract. The results of studies of the possibilities of using language and its components (text and speech) as
factors of neurolinguistic identification and authentication of intelligent systems (IS) of native speakers of
Russian and Chechen languages are presented. To achieve the research goals, an approach based on
information virtualization was used. As one way of increasing the efficiency of identification and authentication, it is proposed to use the factor of neurolinguistic text identification and authentication. Research shows, firstly, that when a language
changes, in the case of using an intelligent system as a speaker of several languages, there is a change in
the parameters of neurolinguistic identification, and secondly, that if all intelligent systems are native
speakers of the same language, then when moving from one intelligent system to another there is a change in
the parameters of neurolinguistic identification. Thus, the study determined that the language of an intelligent
system can be used as an identification and authentication factor. Intelligent systems that are native speakers of both the Chechen and Russian languages have been studied. At the first stage, ten intelligent systems were studied as native speakers of the Russian language, and at the second stage the same ten systems were studied, but as
native speakers of the Chechen language. The results of the dependence of the main parameters, as well as
the dependence of the derived parameters of neurolinguistic text identification of intelligent systems of
native speakers of Russian and Chechen languages are presented. The results obtained open up a fundamentally
new opportunity for research in the direction of neurolinguistic text identification and authentication.
Research in this direction is of scientific and practical interest, both for the case of identifying intelligent systems that are native speakers of one language, and for the case when one intelligent system is a native speaker of several languages. -
THE PROCEDURE FOR CALCULATING THE DRIVE OF THE WORKING BODY OF A ROBOTIC DEVICE FOR HUMANITARIAN DEMINING
S.S. Noskov, А.Y. Barannik, А.А. Lebedev, A.V. Lagutina
Abstract. The aim of the study is to develop a methodology for calculating the main parameters characterizing the ability of a robotic vehicle equipped with a striker trawl to perform humanitarian
demining operations. For this purpose, within the framework of this work, tasks were solved such as
calculating the torque on the shaft of the striker trawl, determining the power of the motor driving the
striker trawl, and calculating the power of the power plant of a robotic vehicle. During the research, the
experience of creating, and the main parameters of, foreign mine clearance equipment with striker trawls were analyzed: the Hydrema 910 MCV crewed mine clearance vehicle, the MV-4 robotic mine
clearance vehicle, the Uran-6 remote-controlled mine clearance vehicle, and the MT-2 remote-controlled
mine trawl. The main features of the working body of the considered machines, namely the striker trawl,
were also analyzed. The developed methodology is based on a method for calculating the resistance
force of destruction of the soil and of an explosive object when struck by the striker, based on the theory of
interaction of working bodies of earthmoving machines, developed by academician N.G. Dombrovsky.
Also, during the development of this methodology, the results of work by the Croatian specialists Vinkovic N., Stojkovic V. and Mikulic D. on the design calculation of the striker trawl were used. At the same time,
calculations were carried out for various soils, which, depending on the resistivity of cutting, are divided
into 4 categories: sandy clay, gravel; dense clay, coal; hard clay with gravel; medium slate, chalk, soft
gypsum stone. The obtained data effectively became an array of initial information which, together with known physical dependencies, made it possible to form a set of calculation formulas for calculating
the torque on the shaft of the striker trawl, the power of the motor driving the striker trawl, as well as
the power of the power plant of the robotic vehicle, and thereby solve the scientific problem posed at the
beginning of the study.
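As an editorial illustration of the calculation chain described above (torque on the trawl shaft, drive motor power, power plant power), a minimal Python sketch follows; all numerical values are hypothetical placeholders, not the authors' data, and the soil-resistance figure merely stands in for the Dombrovsky-based resistance calculation.

```python
# Minimal sketch of the drive calculation described above, using standard mechanics
# relations (torque = force x radius, power = torque x angular speed).
# All numerical values are hypothetical placeholders, not the authors' data.
import math

soil_resistance_force = 5e3    # N, assumed total resistance of soil/obstacle to the strikers
trawl_radius = 0.45            # m, assumed radius of the striker (flail) trawl
trawl_speed_rpm = 400.0        # rpm, assumed rotational speed of the trawl shaft
drivetrain_efficiency = 0.85   # assumed efficiency between motor and trawl shaft
auxiliary_power = 15e3         # W, assumed power for locomotion and on-board systems

torque = soil_resistance_force * trawl_radius            # N*m on the trawl shaft
omega = trawl_speed_rpm * 2.0 * math.pi / 60.0           # rad/s
motor_power = torque * omega / drivetrain_efficiency     # W required from the drive motor
plant_power = motor_power + auxiliary_power              # W required from the power plant

print(f"shaft torque:  {torque/1e3:.2f} kN*m")
print(f"motor power:   {motor_power/1e3:.1f} kW")
print(f"plant power:   {plant_power/1e3:.1f} kW")
```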
SECTION II. INFORMATION PROCESSING ALGORITHMS
-
FEATURES OF THE IMPLEMENTATION OF THE CRYPTANALYSIS SYSTEM OF HOMOMORPHIC CIPHERS BASED ON THE PROBLEM OF FACTORIZATION OF NUMBERS
L.К. Babenko, V.S. Starodubcev
Abstract. This article discusses homomorphic cryptosystems based on the problem of factorization of numbers.
In comparison with Gentry-type cryptosystems, their implementation is less laborious, but it requires careful verification of their cryptographic strength. The Domingo-Ferrer symmetric cryptosystem is considered as an example
of a homomorphic cryptosystem based on the number factorization problem. For this cryptosystem, the
processes of key generation, encryption, decryption, and performing homomorphic operations are presented.
A description of a known-plaintext attack on the Domingo-Ferrer cryptosystem is given, as well as a demonstration example of such an attack with a small value of the degree of the polynomials of
the ciphertext representation. For the system architecture under development, the basic requirements and
a general scheme are presented with a brief description of the area of responsibility of individual modules
and their interrelationships. The aim of the study is to identify approaches, techniques and tactics common
to specific cryptanalysis methods of homomorphic cryptosystems based on the problem of factorization of
numbers, and to create a system architecture that would simplify cryptanalysis by providing the cryptanalyst
with a convenient environment and tools for implementing their own cryptanalysis methods. The main
result of this work is the architecture of the cryptanalysis system, which allows for a comprehensive analysis
of vulnerabilities for various attacks and to assess the level of cryptographic strength of the cipher in
question, based on the problem of factorization of numbers, as well as the justification for the use of such
an architecture for the analysis of homomorphic ciphers using the example of the Domingo-Ferrer cryptosystem.
The implementation of a cryptanalysis system based on the proposed architecture will help researchers
and cryptography specialists to study in more detail possible weaknesses in homomorphic ciphers
based on the problem of factorization of numbers and to develop appropriate measures to strengthen their resistance to attacks. Thus, the ongoing research is important for the development of cryptographic systems
based on the problem of factorization of numbers and provides new tools for cryptanalysts in the field of
analysis of homomorphic cryptosystems. The results obtained can be used to increase the strength of existing
ciphers and develop new cryptographic methods. -
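To make the objects handled by such a cryptanalysis system more concrete, a simplified Python sketch of a Domingo-Ferrer-style privacy homomorphism follows; the toy parameters and the exact splitting/masking shown here are assumptions for illustration and may differ in detail from the published cryptosystem.

```python
# Simplified sketch of a Domingo-Ferrer-style privacy homomorphism, included only to
# illustrate the objects a cryptanalysis system has to handle (keys, split/masked
# ciphertexts, homomorphic addition). Parameters are toy-sized and the exact
# parameterization may differ from the published cryptosystem.
import random

d = 3                     # number of ciphertext components
m_small = 257             # secret small modulus defining the plaintext space
m_big = m_small * 2**32   # public large modulus (assumed here to be a multiple of m_small)
r = 123456789             # secret masking element, must be invertible mod m_big
r_inv = pow(r, -1, m_big)

def encrypt(a):
    """Split a into d random shares mod m_small and mask the j-th share with r^j mod m_big."""
    shares = [random.randrange(m_small) for _ in range(d - 1)]
    shares.append((a - sum(shares)) % m_small)
    return [(shares[j] * pow(r, j + 1, m_big)) % m_big for j in range(d)]

def decrypt(c):
    """Unmask each component with r^-j and sum the shares mod m_small."""
    total = sum((c[j] * pow(r_inv, j + 1, m_big)) % m_big for j in range(d))
    return total % m_small

def add(c1, c2):
    """Homomorphic addition: componentwise addition mod m_big."""
    return [(x + y) % m_big for x, y in zip(c1, c2)]

a, b = 41, 100
assert decrypt(encrypt(a)) == a
assert decrypt(add(encrypt(a), encrypt(b))) == (a + b) % m_small
print("additive homomorphism holds on the toy parameters")
```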
TECHNIQUE FOR CONSTRUCTING THE STRUCTURE OF A RECURSIVE FILTER WITH A FINITE IMPULSE RESPONSE IN THE FORM OF A FUNCTION APPROXIMATING THE HANN WINDOW
D.I. Bakshun, I.I. Turulin
Abstract. Filters with an impulse response (IR) in the form of a weighting (smoothing) function are used in
a wide range of digital signal processing areas, such as spectral analysis (to reduce the Gibbs effect) and the formation of amplitude distributions (to reduce the side-lobe level), including radio engineering systems with a synthesized aperture. The article considers the structure of a
recursive FIR filter (RFIR-filter) with IR in the form of an approximated Hann window with a limited fixed
number of multiplication and summation operations for any window duration. Such a structure has significantly
lower computational complexity compared to the classical structure of the FIR-filter, and it can be
used in embedded systems with limited computing resources. The function approximating the Hann window
is a third-degree polynomial whose coefficients are calculated by definite integration of the quasi-sine function. An analytical formula is obtained for the coefficients of the non-recursive part of the filter by calculating the fourth-order backward finite difference of the approximating
function of the Hann window. The coefficients of the non-recursive part are integers, the values of which
depend on the number of samples (length) of the half-cycle of the quasi-sine function, which simplifies the
implementation of such an RFIR-filter based on a programmable logic integrated circuit (FPGA).
The average absolute approximation error is calculated with an increase in the length of the window.
When the number of samples is less than 600, the error does not exceed 4.5%, which is an indicator of the
high accuracy of matching the approximating function to the Hann window. The authors propose a further
perspective for the development of the structure of the RFIR-filter with IR in the form of an approximating
function of the Hann window. This structure makes it possible to implement a RFIR-filter with a change in
the length of the Hann window in the time domain while maintaining stability by accurately performing
calculation operations using the coefficients of the non-recursive part, which are fixed-point numbers, and
their linear dependence on the half-period length of the quasi-cosine function. -
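As a rough illustration of the accuracy check described above, the following Python sketch fits a third-degree polynomial to the rising half of a Hann window and reports the mean absolute error; a generic least-squares fit is used as a stand-in for the authors' coefficients derived by integrating the quasi-sine function, so the error figure is only indicative.

```python
# Sketch of the accuracy check described above: approximate the rising half of a Hann
# window by a third-degree polynomial and measure the mean absolute error. A generic
# least-squares fit stands in for the authors' integration-based coefficients.
import numpy as np

N = 600                                   # window length (number of samples)
n = np.arange(N)
hann = 0.5 * (1.0 - np.cos(2.0 * np.pi * n / (N - 1)))

half = N // 2                             # fit only the rising half-cycle
x = n[:half] / (N - 1)
coeffs = np.polyfit(x, hann[:half], deg=3)        # cubic approximation of the half-window
approx = np.polyval(coeffs, x)

mean_abs_err = np.mean(np.abs(approx - hann[:half]))
print("cubic coefficients:", np.round(coeffs, 4))
print(f"mean absolute error over the half-window: {mean_abs_err:.4f}")
```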
TEXT SENTIMENT ANALYSIS BASED ON FUZZY RULES AND INTENSITY MODIFIERS
Е.М. Gerasimenko, V.V. Stetsenko
Abstract. Expressing emotions is an integral part of human life and communication. To create computers that can better serve humanity, computer science continues to research the development of machine learning algorithms
that can process text data and perform sentiment analysis tasks on natural language texts. Additionally,
the availability of online reviews and increased end-user expectations are driving the development
of system intelligence that can automatically categorize and share user reviews. Every year, research
in this area has discovered more and more emotions in text, but only a small part of it has been devoted to
the use of fuzzy logic. This mainly happens because the researchers often use binary classification – «positive
» and «negative», less often adding a third class – «neutral». The use of fuzzy logic helps to determine
emotions, and not just «good» and «bad», but also the degree of these emotions. The number of classes is determined by the required level of detail. Previously, we proposed a fuzzy dictionary-based sentiment model; in this paper we propose an improved text sentiment determination model based on a sentiment
dictionary (SentiWordNet) and fuzzy rules. To improve the accuracy and precision of sentiment analysis, coefficients were applied to account for the emotional load of words of different parts of speech, together with intensity modifiers that strengthen or weaken the emotional tone. The quantitative value of
the sentiment of the text is obtained by aggregating normalized data by emotional classes using fuzzy result
methods. As a result of the study, it was found that taking into account all modifiers can significantly
increase the accuracy of the method previously proposed by the authors, and also ensures the determination
of class boundaries when producing a detailed sentiment assessment over 7 classes (“very positive”, “positive”, “somewhat positive”, “neutral”, “somewhat negative”, “negative”, “very negative”). -
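The following Python sketch illustrates modifier-aware scoring and mapping onto the seven classes in the spirit of the approach above; the tiny lexicon stands in for SentiWordNet, and the modifier weights and class thresholds are illustrative assumptions rather than the authors' fuzzy rules.

```python
# Minimal sketch of modifier-aware sentiment scoring. The tiny lexicon stands in for
# SentiWordNet, the modifier weights and the 7-class thresholds are illustrative
# assumptions, and no real fuzzy-inference engine is used.
LEXICON = {"good": 0.6, "great": 0.8, "bad": -0.6, "awful": -0.8, "ok": 0.2}
INTENSIFIERS = {"very": 1.5, "extremely": 1.8, "somewhat": 0.6, "slightly": 0.5}
NEGATIONS = {"not", "never", "no"}

CLASSES = ["very negative", "negative", "somewhat negative", "neutral",
           "somewhat positive", "positive", "very positive"]

def score_text(text):
    words = text.lower().split()
    total, count = 0.0, 0
    for i, w in enumerate(words):
        if w not in LEXICON:
            continue
        s = LEXICON[w]
        prev = words[i - 1] if i > 0 else ""
        if prev in INTENSIFIERS:          # intensity modifier strengthens/weakens the tone
            s *= INTENSIFIERS[prev]
        if prev in NEGATIONS or (i > 1 and words[i - 2] in NEGATIONS):
            s = -s                        # simple negation handling
        total += s
        count += 1
    return total / count if count else 0.0

def classify(score):
    # Map the aggregated score onto 7 classes (illustrative, equal-width thresholds).
    bounds = [-0.75, -0.45, -0.15, 0.15, 0.45, 0.75]
    return CLASSES[sum(score > b for b in bounds)]

for review in ["the film was very good", "not great , somewhat bad", "extremely awful service"]:
    s = score_text(review)
    print(f"{review!r}: score={s:+.2f} -> {classify(s)}")
```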
USING A GPU FOR REAL-TIME DIGITAL SIGNAL PROCESSING
А.О. Kasyanov, М.V. Potipak
Abstract. This paper is devoted to the development of energy-efficient implementations of digital signal processing
algorithms in MIMO radar for estimating target parameters on computers with different architectures.
In accordance with the global trend, the possibility of using computers with parallel architecture for
digital processing of broadband radar signals is being considered. The authors proposed an implementation
of the procedure for processing the reflected MIMO radar signal using the technology of general-purpose computing
on graphics cards (GPGPU). The performance of the developed solution was assessed on various GPUs with different microarchitectures. A criterion for evaluating the performance of a processing algorithm is
proposed in the form of the ratio of the algorithm's throughput to the peak throughput of the computer's
memory. A numerical assessment of the efficiency of using the computer's memory bandwidth of the developed
algorithm was carried out in comparison with known implementations on the GPU. The purpose of this
work is to detect targets and estimate their parameters in real time using MIMO radar, with a commercially available computer with the minimum possible weight and size characteristics. To achieve the set research
purpose, the following problems were solved: – selection and adaptation of algorithms that allow the assessment
of target parameters in MIMO radar; – implementation of selected algorithms taking into account the
architecture of the computer, allowing for an assessment of the target background situation in real time; –
assessment of the performance of the resulting solution. In the process of developing an algorithm for digital
processing of a MIMO radar signal, several options for implementing the algorithm were analyzed taking
into account the architecture of a parallel computer, which made it possible to process a radio image frame
consisting of 8 million complex samples in less than 50 ms on an NVIDIA Jetson AGX Xavier GPU. The inverse
relationship between frame processing time and the peak GPU memory bandwidth is shown. A criterion for
evaluating the performance of the processing algorithm is proposed. A numerical assessment of the efficiency
of using the computer's memory bandwidth of the developed algorithm was carried out in comparison with
known implementations on the GPU. The gain of the developed algorithm is on average 5 times compared to
the results obtained by other authors. Compared to an FPGA, implementing 2D FFT on a GPU is 17 times
faster. The practical significance of the work is that the functional software developed by the authors does not impose any restrictions on the number of receiving and transmitting channels and can be used for signal processing in MIMO radars with a large number of channels. -
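The proposed criterion can be illustrated with a short Python sketch that relates the data volume moved per frame to the peak memory bandwidth; the frame size matches the abstract, while the per-sample traffic, timing and bandwidth figures below are assumed values, not measurements.

```python
# Sketch of the performance criterion described above: the ratio of the data throughput
# achieved by the processing algorithm to the peak memory bandwidth of the device.
# Per-sample traffic, timing and bandwidth values are hypothetical.
BYTES_PER_COMPLEX_SAMPLE = 8          # complex64: 4-byte float I + 4-byte float Q
frame_samples = 8_000_000             # radio-image frame of 8 million complex samples
reads_and_writes_per_sample = 4       # assumed memory transactions per sample over all stages

frame_time_s = 0.050                  # assumed processing time of one frame, 50 ms
peak_bandwidth_gbs = 136.5            # assumed peak memory bandwidth of the GPU, GB/s

moved_bytes = frame_samples * BYTES_PER_COMPLEX_SAMPLE * reads_and_writes_per_sample
achieved_gbs = moved_bytes / frame_time_s / 1e9
efficiency = achieved_gbs / peak_bandwidth_gbs

print(f"achieved throughput: {achieved_gbs:.1f} GB/s")
print(f"bandwidth efficiency: {efficiency:.1%}")
```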
ASSESSMENT OF THE LUBRICATION CONDITION OF ROLLING BEARINGS USING CLASSIFICATION ALGORITHMS
P.G. Krinitsin, S.V. Chentsov
Abstract. The purpose of this work is to solve the problem of unscheduled failures of rolling bearings installed
on industrial equipment as a result of their improper maintenance during operation. It is known that up to
50% of all unscheduled downtime of industrial equipment occurs due to bearing failure. In this case, the
main reason for bearing failures is violation of the lubrication regime of the rolling elements: excessive or insufficient quantities of lubricant. These causes account for up to 36% of the total number of bearing
failures. During equipment operation, it is very difficult to identify and prevent all problems with bearing
lubrication, due to the wide variety of factors influencing their occurrence. Therefore, an urgent task
for research is the development of an automated recommendation system for managing the maintenance of
industrial equipment, with control of the lubrication of bearing units. The paper discusses a method for
classifying the states of bearings depending on their diagnostic parameters: indicators of vibration velocity,
vibration acceleration and temperature. For this purpose, classical machine learning algorithms are
used: KNN, RandomForestClassifier and SVM models. For each model, hyperparameters are determined
to achieve maximum results during training. In the process of conducting the study, an analysis of the
influence of each of the diagnostic parameters (features) on the performance of the classification model was
carried out. Understanding which indicator of bearing condition is the most important makes it possible to consciously choose equipment condition monitoring devices at a manufacturing enterprise to solve specific production problems. The developed algorithm makes it possible to assess the lubrication condition of rolling bearings with 98% accuracy and to issue recommendations for timely maintenance of
equipment. The classifier model is planned to be used as part of a complex for monitoring the technical
condition of equipment, expanding diagnostic capabilities: in addition to information about the probability
of equipment failure and predicted service life, the diagnostic complex, combined with the proposed
model, will allow influencing the mileage of bearings by improving the quality of their lubrication. -
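A minimal Python sketch of the classification set-up described above follows, using scikit-learn KNN, random forest and SVM models with a hyperparameter grid search; the data are synthetic placeholders for the measured vibration-velocity, vibration-acceleration and temperature diagnostics.

```python
# Sketch of the classification set-up described above. Synthetic data stand in for the
# real measured diagnostics; the grids and thresholds are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 900
X = np.column_stack([rng.normal(4.0, 1.5, n),     # vibration velocity, mm/s
                     rng.normal(10.0, 4.0, n),    # vibration acceleration, m/s^2
                     rng.normal(55.0, 10.0, n)])  # temperature, deg C
# Synthetic lubrication states: 0 - normal, 1 - insufficient, 2 - excessive grease.
y = np.select([X[:, 0] + 0.3 * X[:, 1] > 9.0, X[:, 2] > 65.0], [1, 2], default=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "KNN": (make_pipeline(StandardScaler(), KNeighborsClassifier()),
            {"kneighborsclassifier__n_neighbors": [3, 5, 9]}),
    "RandomForest": (RandomForestClassifier(random_state=0),
                     {"n_estimators": [100, 300], "max_depth": [None, 8]}),
    "SVM": (make_pipeline(StandardScaler(), SVC()),
            {"svc__C": [1, 10], "svc__gamma": ["scale", 0.1]}),
}

for name, (model, grid) in models.items():
    search = GridSearchCV(model, grid, cv=5).fit(X_tr, y_tr)
    print(f"{name}: best params {search.best_params_}, test accuracy {search.score(X_te, y_te):.3f}")
```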
COMBINED SEARCH FOR SOLVING THE PROBLEM OF TWO-DIMENSIONAL PACKING OF GEOMETRIC FIGURES OF COMPLEX FORMS
V.V. Kureichik, А.Y. Khalenkov
Abstract. The article considers the problem of two-dimensional packing of geometric figures of complex shape. Problems of this class are classified as NP-hard problems of combinatorial optimization. In addition, the packing of figures of complex geometric shape is one of the most difficult subtypes of the two-dimensional packing problem. In this regard, it is necessary to develop effective heuristic approaches
to solving this problem. The article presents the formulation of the problem, describes its main
features, and presents the constraints and conditions characteristic of this subtype of the two-dimensional packing problem. The criterion for calculating the effectiveness of the solution is described. To solve
this problem, the article proposes a combined search architecture consisting of two metaheuristic computational
algorithms. In this architecture, a modified genetic and swarm multi-agent bioinspired algorithm
based on the behavior of a bee colony was implemented as optimization methods. These algorithms allow
us to obtain sets of quasi-optimal solutions in polynomial time. The advantages of using the proposed
approach are given. To test the effectiveness of the proposed approach, a software product was developed
that uses the proposed architecture and metaheuristic computational algorithms to solve the problem.
The software product was developed in the C++ programming language in the Microsoft Visual Studio Code development environment. A computational experiment was conducted on a set of
benchmark test cases. Based on the results of experimental studies, it is concluded that the proposed combined
search is effective for solving the problem of two-dimensional packing of geometric figures of complex shape in comparison with solutions based on classical algorithms. -
STUDY OF MODIFIED ALGORITHMS WITH SIGNAL CONSTELLATION ROTATION IN DTMB STANDARD ON THE BASIS OF SIMULINK MODEL IN MATLAB ENVIRONMENT
S.N. Meleshkin, I.B. Siles
Abstract. This paper discusses modified algorithms with signal constellation rotation applied in the digital
terrestrial television multimedia broadcasting standard adopted in Cuba. Compared to using
constellations without rotation, these algorithms give a significant increase in system performance under
challenging reception conditions, with industrial interference and a low signal-to-noise ratio. The main purpose of this paper is to analyze the effect of the angle and direction of rotation of the signal constellation on the stability of the digital terrestrial television multimedia broadcasting system. For the study, a proprietary architecture of the digital
terrestrial television multimedia broadcasting system adopted in Cuba was developed, implemented in
Simulink in Matlab environment. This Simulink model allows analyzing the dependence of the bit error
rate on the value of white Gaussian noise for different system configurations. The model of additive white
Gaussian noise, which is mixed with the generated signal, is used throughout the research. The proposed
modifications allow the reception of digital terrestrial television multimedia broadcasting in fade-free
channels with equal values of the bit error rate for all cases analyzed. In this case, in order to obtain a
significant gain from constellation rotation, in the order of seven decibels, it is proposed to transmit the
quadrature and in-phase components on different subcarriers and at different moments of time. In the
scheme with signal constellation rotation, the quadrature component should be transmitted not on the
same subcarrier, but with a delay and on a different subcarrier. One quadrature amplitude modulation then effectively becomes two amplitude modulations of the in-phase and quadrature projections, which are transmitted on independent subcarriers and are affected by interference differently, providing reliable demodulation at lower values of the signal-to-noise ratio and under industrial noise. A disadvantage
of the algorithm is that it does not sufficiently counteract Gaussian noise. -
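The rotation-plus-interleaving idea can be illustrated with a short Python sketch; the 16-QAM constellation, rotation angle and one-subcarrier delay below are illustrative assumptions, not the parameters of the DTMB standard.

```python
# Sketch of the constellation-rotation idea: rotate a 16-QAM constellation and
# interleave the quadrature component so that I and Q of one symbol travel on
# different subcarriers. Angle and delay values are illustrative only.
import numpy as np

levels = np.array([-3, -1, 1, 3])
qam16 = np.array([complex(i, q) for i in levels for q in levels]) / np.sqrt(10.0)

theta = np.deg2rad(26.57)                   # example rotation angle (not the standard's value)
rng = np.random.default_rng(1)
symbols = rng.choice(qam16, size=16)        # one OFDM symbol worth of subcarriers
rotated = symbols * np.exp(1j * theta)      # rotated constellation points

# Component interleaving: keep I in place, cyclically delay Q by one subcarrier,
# so a deep fade on one subcarrier never wipes out both projections of the same symbol.
transmitted = rotated.real + 1j * np.roll(rotated.imag, 1)

# Receiver side: undo the delay and the rotation (ideal, noise-free channel).
recovered = (transmitted.real + 1j * np.roll(transmitted.imag, -1)) * np.exp(-1j * theta)
print("max reconstruction error:", np.max(np.abs(recovered - symbols)))
```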
EVALUATION OF THE CHARACTERISTICS OF A TWO-STAGE SYNCHRONIZATION ALGORITHM BASED ON THE SELECTION OF AN ADJACENT PAIR OF SEGMENTS WITH THE MAXIMUM TOTAL COUNT IN THE QKD SYSTEM
К.Y. Rumyantsev, Y.К. Mironov, P.D. Mironova
Abstract. A two-stage synchronization algorithm is proposed based on the selection of an adjacent pair of
segments with the maximum total count in a QKD system. The algorithm is based on a well-known approach to reducing the time of entering synchronism: the analysis of adjacent pairs of time segments.
A distinctive feature of the proposed algorithm is that it ensures that the probability of successful search and testing is not worse than the required level. It should be noted that due to the testing stage, erroneous
decisions made at the search stage are rejected, which minimizes the probability of false synchronization
due to the registration of noise pulses. At the search stage, the equipment sequentially registers the total
counts from all adjacent pairs of segments. Next, the pair of segments with the maximum total count is selected, its count reliably exceeding the counts of all other pairs of segments, and the equipment proceeds to the testing stage. Testing consists of polling the
photodetector during the gating pulse to re-register the count. In case of positive testing, the process of
«rough» estimation of the moment of reception of the sync pulse is considered successfully completed,
otherwise the equipment returns to the search in the next frame. Note that the maximum allowable number
of frames and tests correspond to the search and testing stages, respectively. Analytical expressions are
obtained for calculating the time and probabilistic characteristics of the search and testing stages of the
proposed detection algorithm based on the selection of an adjacent pair of segments with the maximum
total count, including for calculating the allowable number of frames and tests while ensuring the required probabilities of successful search and testing, respectively. It has been found that with an increase in the
average number of photons in the sync pulse, the average number of frames and tests, as well as the average
time of successful search and testing, decrease significantly. For example, when the average number
of photons in a sync pulse increases by 5 times, the average number of tests for successful testing and the
average time for successful testing decrease by 1.5 times, and the permissible number of tests by 5 times. -
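The search and testing stages can be illustrated with a short Python sketch; the number of segments, photon statistics and dark-count level are hypothetical, and the reliability requirement of the real algorithm is reduced here to a simple argmax followed by a confirmation poll.

```python
# Sketch of the two-stage synchronization idea: within one frame the counts of all
# adjacent pairs of time segments are summed, the pair with the maximum total count
# is chosen, and a test poll confirms the decision. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_segments = 200
sync_segment = 137                    # true position of the sync pulse (unknown to the receiver)
dark_mean = 0.02                      # mean dark/noise counts per segment
signal_mean = 1.5                     # mean photocounts produced by the sync pulse

def measure_frame():
    counts = rng.poisson(dark_mean, n_segments)
    counts[sync_segment] += rng.poisson(signal_mean)
    return counts

# Search stage: total counts of adjacent segment pairs, pick the maximum.
counts = measure_frame()
pair_sums = counts[:-1] + counts[1:]
best_pair = int(np.argmax(pair_sums))         # candidate position of the sync pulse

# Testing stage: poll the photodetector again in the gated candidate segments.
test_counts = measure_frame()
confirmed = test_counts[best_pair] + test_counts[best_pair + 1] > 0

print(f"candidate pair: ({best_pair}, {best_pair + 1}), true segment: {sync_segment}")
print("testing", "confirms" if confirmed else "rejects", "the candidate; otherwise the next frame is searched")
```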
RESEARCH OF THE DABBAGHIAN-WU ALGORITHM FOR CONSTRUCTING NON-CYCLIC PANDIAGONAL LATIN SQUARES
А.О. Novikov, E.I. Vatutin, S.I. Egorov, V.S. Titov
Abstract. In this paper we consider a mathematical model and the Dabbaghian-Wu algorithm based on it, designed
to construct non-cyclic pandiagonal Latin squares. It is shown that due to the high computational complexity
and the fact of existence of other varieties of pandiagonal squares, the application of classical algorithms, such
as brute force and cyclic shifts, is insufficient to construct a complete list of pandiagonal Latin squares. This determines the purpose of the work: the study and experimental approbation of mathematical models and algorithms for constructing such squares in an acceptable time. We study the algorithm presented by
Dabbaghian and Wu, which is intended for constructing pandiagonal Latin squares of prime order p, defined by
the expression p=6n+1. It is a modification of the cyclic construction algorithm and makes it possible to obtain a
pandiagonal non-cyclic square from the original cyclic square. The conversion is done by cyclic shifts in specific
cells in each row of the original square. A software implementation of the Dabbaghian-Wu algorithm was developed.
The results of the experiments confirmed the correctness of the construction methodology proposed by
Dabbaghian and Wu. Thus, for order 13, 72 unique squares were found. In addition, an attempt was made to
construct squares for an order that is not an odd prime number, for example 25. In this case, it was possible to obtain 4
correct pandiagonal Latin squares. Additional transformations of the resulting sets increase the number of squares: for order 13 the collection is expanded to 1570, and for order 25 to 210. The research made it possible to
study the Dabbaghian-Wu algorithm in depth and draw conclusions about its features: its advantages include relatively low computational complexity, while its disadvantage is that the constructions are fully correct only for odd prime orders. The resulting sets of squares will be used in the future to obtain their numerical
characteristics using distributed computing. -
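For reference, the following Python sketch builds the cyclic pandiagonal Latin square of prime order that the Dabbaghian-Wu algorithm starts from and verifies pandiagonality; the row-wise cyclic shifts that produce the non-cyclic squares are not reproduced here.

```python
# Sketch of the cyclic construction that the Dabbaghian-Wu algorithm starts from:
# for a prime order p and a multiplier k with k not in {0, 1, p-1}, the square
# L[i][j] = (k*i + j) mod p is a cyclic pandiagonal Latin square. The row-wise
# shifts that turn it into a non-cyclic square are not reproduced here.
def cyclic_square(p, k):
    return [[(k * i + j) % p for j in range(p)] for i in range(p)]

def is_pandiagonal_latin(square):
    p = len(square)
    full = set(range(p))
    rows = all(set(row) == full for row in square)
    cols = all({square[i][j] for i in range(p)} == full for j in range(p))
    diags = all({square[i][(i + c) % p] for i in range(p)} == full for c in range(p))
    anti = all({square[i][(c - i) % p] for i in range(p)} == full for c in range(p))
    return rows and cols and diags and anti

p = 13                      # prime of the form 6n+1, as in the abstract
square = cyclic_square(p, k=2)
print("pandiagonal:", is_pandiagonal_latin(square))   # expected: True
```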
ALGORITHM FOR CLASSIFICATION OF FIRE HAZARDOUS SITUATIONS BASED ON NEURAL NETWORK TECHNOLOGIES
Sanni Singh, А.V. Pribylskiy
Abstract. Modern technological requirements and developing urban infrastructure pose the task of developing
methods for recognizing and classifying fire hazardous situations. Quickly and effectively recognizing the
initial signs of a fire becomes a vital aspect of ensuring the safety of people as well as property. In this
regard, systems are being developed, implemented and tested that can automatically recognize
and classify fire hazardous situations. Classification of fire hazardous situations makes it possible to determine the degree of danger of detected deviations, which contributes to making more effective decisions to prevent
the consequences of fires and their signs, such as a one-time short-term increase in temperature and
smoke level, which may indicate failure of electrical components located near the sensors. The algorithm
for classifying fire hazardous situations is developed for a complex of interconnected sensors, which in
turn, due to its structure, makes it possible to detect even the slightest sign of fire. Within the framework of this
study, an algorithm for classifying fire hazardous situations based on neural network technologies is presented.
A description of existing classes of fire hazardous situations is provided, as well as the criteria by
which data for these classes were labeled. The algorithm was modeled on training and test samples; the accuracy metrics used, the formulas for their calculation, and the results of classifying fire hazardous situations are presented. A study was carried out of the influence of the sampling step in the database
on the accuracy parameters and training time of the neural network. The developed algorithm is implemented
in the Python programming language in the PyCharm IDE. The dataset for training and testing
was obtained from real sources containing information about detected fire hazardous situations in subways in which a complex of interconnected sensors is installed. The results of modeling the algorithm
showed that the proposed algorithm has high accuracy for predictive classification of fire hazardous situations
in real objects. -
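A minimal Python sketch of the classification step follows, using a small scikit-learn MLP on synthetic sensor features; the class labels and thresholds are illustrative assumptions, not the criteria used to mark the subway dataset described in the abstract.

```python
# Sketch of a neural-network classifier for fire hazardous situations on synthetic
# sensor features (temperature, smoke level, short-term temperature increment).
# The three classes and their thresholds are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n = 1500
temp = rng.normal(24.0, 3.0, n)              # current temperature, deg C
smoke = rng.normal(0.05, 0.03, n).clip(0)    # smoke concentration, arbitrary units
d_temp = rng.normal(0.0, 1.0, n)             # short-term temperature increment
X = np.column_stack([temp, smoke, d_temp])

# Illustrative labelling: 0 - normal, 1 - sensor/equipment anomaly, 2 - fire hazard.
y = np.where((smoke > 0.08) & (temp > 26.0), 2, np.where(np.abs(d_temp) > 2.0, 1, 0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```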
METHOD OF MULTI-CRITERIA GROUP DECISION-MAKING IN AN EMERGENCY SITUATION USING FUZZY HESITANT SETS
S.I. Rodzin, А.V. Bozhenyuk, Е.V. Nuzhnov
Abstract. In case of an emergency, effective emergency measures must be taken. It is known that an emergency
event has the characteristics of limited time and information, harmfulness and uncertainty, and decision
makers are often limited in rationality in conditions of uncertainty and risk. People's psychological behavior
should be taken into account in real decision-making processes. Decision-making in emergency situations
is an urgent task and the subject of research interest. This article presents a new approach to emergency decision-making using fuzzy hesitant sets. To determine the weights of the criteria, a mathematical
model is built that makes it possible to convert the values of the criteria into a compatible scale and exclude
the influence of different scales for their measurements. In order to reflect the psychological behavior of
decision makers, a function of the degree of group satisfaction and a function of the value of the perceived
usefulness of the alternative are introduced. The usefulness of alternatives is calculated and ranked, and
an example of an emergency study is given. Compared with existing methods, the proposed method for
decision-making in an emergency situation has the following features: the possibilities for determining the
weights of decision-making criteria are expanded when the criteria have a different scale; the method
takes into account the psychology of decision makers, unlike well-known approaches that assume the rationality of their decisions; compared with prospect theory, the method does not require a subjective assessment and uses fewer parameters, which expands the scope of its application. The proposed
method also has some limitations: certain computational costs are required with a large number of alternative
solutions and their characteristic attributes. However, this limitation is overcome when using software
such as MATLAB. It is interesting to consider the possibility in the future to apply the proposed
method for risk assessment tasks when making decisions in conditions of fuzzy information, if the attribute
values are random variables. -
IMPLEMENTATION OF AN EFFICIENT SEPARABLE VECTOR DIGITAL FILTER ON FPGA
К.О. Sever, К.N. Alekseev, I.I. Turulin
Abstract. In modern video surveillance systems, in which the use of computer vision technology is widespread,
the most important information in the image is data on the contours of objects and the highlighting of small
details. The systems are subject to stringent requirements, such as: high speed of processing information
from a large number of cameras simultaneously, operation in conditions of poor lighting of the object and
under the influence of external noise (electromagnetic fields, short interference from high-voltage transmission
lines). Therefore, improving image processing methods using parallel computing devices and building a
multi-threaded system is an urgent task. In this work, a 3x3 anisotropic high-pass filter is designed and simulated
for image processing on an FPGA. An algorithm for its construction in the form of a separable vector
representation is described. A detailed description is given of the development of an effective separable two-dimensional
digital filter for sharpening and highlighting the boundaries of objects in RGB images. The filter
is based on the synthesis of the proposed 3x3 anisotropic high-pass filter and the Sobel gradient filter.
The corresponding block diagram of the filter has been designed. Based on the results of processing the distorted
image, we can conclude that the developed filter provides more uniform detailing and highlighting of objects in the image and is less susceptible to Gaussian noise compared to the Sobel gradient filter
and the Laplace high-pass filter. A filter pipeline circuit has been developed on an FPGA for processing one
plane of an RGB image. Due to the use of separable filters, the proposed implementation requires almost half as many addition/subtraction operations as the direct implementation
of a 3x3 Sobel gradient filter and a 3x3 anisotropic high-pass filter. -
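The benefit of a separable vector representation can be illustrated with a short Python sketch using the Sobel kernel, whose factorization into a column and a row vector is standard; the article's own 3x3 anisotropic high-pass kernel is not reproduced here, and the sketch only checks that two 1-D passes give the same result as the full 2-D convolution.

```python
# Sketch of the "separable vector representation" idea: the 3x3 Sobel kernel factors
# into a column vector and a row vector, so the 2-D convolution can be done as two
# 1-D passes (fewer additions/subtractions). Sobel is used only because its
# factorization is standard; the article's anisotropic kernel is not reproduced.
import numpy as np
from scipy.ndimage import convolve, convolve1d

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
col = np.array([1.0, 2.0, 1.0])      # smoothing vector
row = np.array([-1.0, 0.0, 1.0])     # differentiating vector
assert np.allclose(np.outer(col, row), sobel_x)

rng = np.random.default_rng(0)
image = rng.random((64, 64))

direct = convolve(image, sobel_x, mode="reflect")                     # full 2-D convolution
separable = convolve1d(convolve1d(image, row, axis=1, mode="reflect"),
                       col, axis=0, mode="reflect")                   # two 1-D passes

print("max difference between direct and separable results:",
      np.max(np.abs(direct - separable)))
```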
INVESTIGATION OF THE IMPACT OF POPULATION SIZE ON THE PERFORMANCE OF A GENETIC ALGORITHM
V.A. Tsygankov, О.А. Shabalina, А.V. Kataev
Abstract. The paper investigates ways to determine the population size in a genetic algorithm and studies the
relationship between the number of individuals and the speed of the algorithm. Different methods for determining the optimal number of individuals in a population are described: depending on the size of the chromosomes, for a tree-like chromosome representation, in the presence of a noise factor, and by the method of a neighboring element with maximum and minimum boundaries. The data obtained by each method differ from one another; for this reason, an assessment was made in order to verify the
accuracy of theoretical data by comparing them with experimental ones. To conduct experiments, a program was developed on the Unity graphics platform with the ability to change the number of individuals in
the population. After receiving the results, the experimental data were compared with the data obtained on
the basis of methods for determining the population size in the genetic algorithm from the first part of the
work. The experiment showed that the optimal population size lies in the range of 100-160 individuals.
With a decrease in their number, the execution time of the task begins to increase significantly, and with
an increase beyond the calculated limit, the reduction in execution time does not correspond to the computing
resources expended. The experimental data show the smallest error relative to the method based on the tree representation of chromosomes. The results of the study can be used to select the size of the population during training in order to achieve a better ratio of computing power to learning speed, and the method identified in the course of the work can help in theoretical calculations. -
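The following Python sketch shows the kind of experiment described above on a toy OneMax problem: the same simple genetic algorithm is timed for several population sizes; the Unity task from the paper is not reproduced, so the resulting numbers are only illustrative.

```python
# Sketch of a population-size experiment: run the same simple genetic algorithm
# (OneMax fitness, tournament selection, one-point crossover, bit mutation) for
# several population sizes and compare convergence time. Numbers are illustrative.
import random
import time

GENES = 64

def run_ga(pop_size, max_generations=500, seed=0):
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(GENES)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)
    for gen in range(max_generations):
        if any(fitness(ind) == GENES for ind in pop):
            return gen
        new_pop = []
        for _ in range(pop_size):
            p1 = max(rnd.sample(pop, 3), key=fitness)      # tournament selection
            p2 = max(rnd.sample(pop, 3), key=fitness)
            cut = rnd.randrange(1, GENES)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(GENES):                         # bit-flip mutation
                if rnd.random() < 1.0 / GENES:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max_generations

for pop_size in (20, 60, 120, 240):
    start = time.perf_counter()
    generations = run_ga(pop_size)
    elapsed = time.perf_counter() - start
    print(f"population {pop_size:4d}: {generations:4d} generations, {elapsed:.2f} s")
```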
UNAUTHORIZED ACCESS TO QUANTUM KEY DISTRIBUTION SYSTEM
А.P. Pljonkin
Abstract. The paper examines the latest research and trends in safeguarding data transmission through state-of-the-art cryptographic techniques. It details the encryption and decryption process using the one-time
pad method, also known as the Vernam cipher, renowned for its unparalleled security. The work showcases
common challenges addressed by quantum cryptography, which encompasses concepts like outcome
unpredictability, quantum entanglement, and the Heisenberg uncertainty principle. The paper discusses
the use of symmetric algorithms for data encryption and sets forth standards for encryption keys that ensure
the absolute confidentiality of data exchange. It provides a concise history of quantum communications
and cryptography development, highlighting the critical need for ongoing research in this domain.
A pivotal aspect of cryptographic security, the distribution of encryption keys to legitimate users, is underscored.
Quantum cryptography presents a method for generating and sharing keys derived from quantum
mechanical principles, integral to quantum key distribution (QKD) systems. Contemporary QKD systems
undergo extensive scrutiny, including their susceptibility to various attack types, with most research aimed
at identifying potential weaknesses in quantum protocols, often due to technical flaws in QKD system
components. The study addresses a technique for unauthorized access to QKD systems during detector
calibration. Furthermore, the paper explores a strategy for illicitly infiltrating the operations of a quantum
key distribution system in calibration mode and suggests a defensive approach. Field research findings
are presented, revealing that QKD systems are prone to vulnerabilities not only during quantum protocol
execution but also throughout other crucial operational phases. The identified attack method enables
the unauthorized acquisition of data from a quantum communication channel and the manipulation of
system operations. A design for auto-compensating optical communication systems is proposed to protect
the calibration process against unauthorized breaches. The impact of sync pulses, reduced to single-photon levels, on accurately detecting timing intervals with an optical signal is demonstrated. The article
concludes with experimental results that exhibit variances between theoretical expectations and the actual
performance of individual components within a quantum communication system. -
A TRANSFORMER-BASED ALGORITHM FOR CLASSIFYING LONG TEXTS
Ali Mahmoud Mansour
Abstract. The article is devoted to the urgent problem of representing and classifying long text documents using
transformers. Transformer-based text representation methods cannot effectively process long sequences
due to their self-attention process, which scales quadratically with the sequence length. This limitation
leads to high computational complexity and the inability to apply such models for processing long
documents. To eliminate this drawback, an algorithm based on the SBERT transformer was developed that makes it possible to build a vector representation of long text documents. The key idea of the algorithm
is the application of two different procedures for creating a vector representation: the first is based
on text segmentation and averaging of segment vectors, and the second is based on concatenation of segment
vectors. This combination of procedures allows preserving important information from long documents.
To verify the effectiveness of the algorithm, a computational experiment was conducted on a group
of classifiers built on the basis of the proposed algorithm and a group of well-known text vectorization
methods, such as TF-IDF, LSA, and BoWC. The results of the computational experiment showed that
transformer-based classifiers generally achieve better classification accuracy results compared to classical
methods. However, this advantage is achieved at the cost of higher computational complexity and,
accordingly, longer training and application times for such models. On the other hand, classical text
vectorization methods, such as TF-IDF, LSA, and BoWC, demonstrated higher speed, making them more
preferable in cases where pre-encoding is not allowed and real-time operation is required. The proposed
algorithm has proven its high efficiency and led to an increase in the classification accuracy on the
BBC dataset by 0.5% according to the F1 criterion.
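A minimal Python sketch of the two segment-level representations described above follows, using the sentence-transformers library; the checkpoint name, segment length and number of concatenated segments are assumptions, not necessarily those used in the article.

```python
# Sketch of the two segment-level representations: split a long document into
# fixed-size word segments, encode each segment with an SBERT model, then either
# average the segment vectors or concatenate a fixed number of them.
# Checkpoint name and segment length are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed SBERT checkpoint

def split_into_segments(text, words_per_segment=128):
    words = text.split()
    return [" ".join(words[i:i + words_per_segment])
            for i in range(0, len(words), words_per_segment)] or [""]

def represent(text, n_concat=4):
    segments = split_into_segments(text)
    vectors = model.encode(segments)                       # (n_segments, dim) array
    averaged = vectors.mean(axis=0)                        # procedure 1: averaging
    padded = np.zeros((n_concat, vectors.shape[1]))        # procedure 2: concatenation
    padded[:min(n_concat, len(vectors))] = vectors[:n_concat]
    concatenated = padded.reshape(-1)
    return averaged, concatenated

long_document = "word " * 1000                             # placeholder long text
avg_vec, cat_vec = represent(long_document)
print("averaged vector:", avg_vec.shape, "concatenated vector:", cat_vec.shape)
```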
SECTION III. PROCESS AND SYSTEM MODELING
-
MATHEMATICAL MODEL OF A QUADRATURE SAMPLING FREQUENCY CONVERTER
А.А. Maryev
Abstract. The work relates to the field of radio communications and is devoted to the analysis of the functioning
of a quadrature stroboscopic frequency converter, which is now widely used in software-defined radio
systems implemented on the direct-conversion receiver principle. This receiver structure combines a
number of advantages which are important for practical realization, such as high adaptivity, ease of
changing the demodulator configuration, simplicity of the receiver hardware and low cost of components.
Despite the relatively widespread use of quadrature stroboscopic converters, the topics of their signal-theoretic
analysis and optimization of parameters by criteria characteristic of typical radio communication
tasks are not sufficiently covered in the literature. There are also known difficulties in the choice of terminology, which is why the paper lists some of the most common names used for devices of the considered type. In the main part of the paper a rather simple mathematical model of a quadrature stroboscopic
frequency converter based on a number of simplifying assumptions is proposed. At the same time,
these assumptions do not reduce the generality of the obtained results. Based on the proposed model, the
estimation of the frequency converter gain is performed. In addition to the study of the idealized mathematical
model, in which switching is considered instantaneous, the influence of the finite switching time on
the frequency converter gain is investigated. The mathematical apparatus of signal theory is used to perform
the analysis. The model of the stroboscopic frequency converter proposed in the paper allows further elaboration and can be used to study the influence of additional factors on the characteristics of radio systems
based on this architecture. In particular, it is possible to study the influence of short-term instability (jitter)
of the switching period of the key, as well as the influence of parameter mismatch between the quadrature channels. The obtained analytical expressions and the given graphs of the investigated dependencies
can be useful in the design of radio communication systems for various purposes, in which a
quadrature stroboscopic frequency converter is used. -
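As a simplified counterpart of the analysed device, the following Python sketch performs quadrature downconversion with a continuous mixer and a crude low-pass filter; it illustrates the basic signal model only and does not reproduce the authors' stroboscopic (switching) formulation or its gain analysis, and all frequencies are assumed values.

```python
# Sketch of basic quadrature conversion: the input signal is multiplied by cosine and
# sine of the local oscillator and low-pass filtered to obtain the I/Q components.
# A continuous mixer and moving-average filter stand in for the stroboscopic model.
import numpy as np

fs = 1.0e6                 # sampling rate, Hz (assumed)
f_rf = 100.0e3             # input carrier frequency, Hz
f_lo = 98.0e3              # local oscillator frequency, Hz
t = np.arange(0, 5e-3, 1.0 / fs)

signal = np.cos(2 * np.pi * f_rf * t + 0.7)        # input with an arbitrary phase offset

i_mixed = signal * np.cos(2 * np.pi * f_lo * t)    # in-phase mixer output
q_mixed = -signal * np.sin(2 * np.pi * f_lo * t)   # quadrature mixer output

def lowpass(x, taps=101):
    return np.convolve(x, np.ones(taps) / taps, mode="same")   # crude moving-average LPF

baseband = lowpass(i_mixed) + 1j * lowpass(q_mixed)            # complex envelope at 2 kHz

freqs = np.fft.fftfreq(len(t), 1.0 / fs)
spectrum = np.abs(np.fft.fft(baseband))
print(f"dominant baseband frequency: {abs(freqs[np.argmax(spectrum)]):.0f} Hz (expected 2000 Hz)")
```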
OPTIMIZATION OF THE STRUCTURE OF THE ENERGY CONSUMPTION FORECASTING SYSTEM WITH ATYPICAL ENERGY CONSUMPTION PATTERNS
N.К. Poluyanovich, О.V. Kachelaev, М.N. Dubyago, S.B. Malkov
Abstract. The creation of an intelligent energy consumption forecasting device for consumers with atypical energy consumption is considered, depending on the required forecast accuracy and taking into account, in addition to the target parameters of the power grid (P, Q) and the technological processes of enterprises, influencing factors: socio-economic (hour of the day; day of the week; ordinal number of the day in the year; a holiday or mass-event flag) and meteorological (wind-chill index). The model belongs to the class of intelligent devices for adaptive forecasting of power consumption modes of the electric grid based on a multilayer
neural network. The article is devoted to the choice of the optimal architecture of the neural network (NN)
and the method of its training, providing forecasting with the least error. A multifactor model of power consumption based on a multilayer NN has been synthesized and tested. Within the framework of the conducted
research, an NN model was built describing the architecture of a cyber-physical system (CPS) for
forecasting power consumption. It has been established that for each consumer, due to significant differences
in the nature of energy consumption, it is necessary to experimentally select network parameters in
order to achieve a minimum prediction error. It is shown that with atypical power consumption, i.e., not
repeated over time periods (hour, day, week, etc.), artificial intelligence and deep machine learning methods
are an effective tool for solving poorly formalized or non-formalized tasks. The developed model has
acceptable accuracy (MSE deviation up to 15%). To increase the accuracy of the forecast, it is necessary
to carry out a regular refinement of the model and adjust it to the actual situation, taking into account new
additive factors affecting the electricity consumption curve. The possibility is shown of using this device, by accounting for and forecasting the active and reactive power of electricity consumers, in the technological management systems of regional grid companies, which form the basis of a hierarchical automated information-measuring system for electricity monitoring and metering. -
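A minimal Python sketch of a multilayer-NN forecaster using the influencing factors listed above follows; the data are synthetic and the network configuration is an illustrative assumption, not the architecture selected in the article.

```python
# Sketch of a multilayer-NN load forecaster on the influencing factors listed above
# (hour of day, day of week, day of year, holiday flag, wind-chill index).
# Synthetic data and an illustrative network configuration are used.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 4000
hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 7, n)
day_of_year = rng.integers(1, 366, n)
holiday = rng.integers(0, 2, n)
wind_chill = rng.normal(-5.0, 8.0, n)

# Synthetic "atypical" load: daily and weekly shape plus weather sensitivity and noise.
load = (50 + 15 * np.sin(2 * np.pi * hour / 24) - 5 * (weekday >= 5) - 8 * holiday
        - 0.6 * wind_chill + rng.normal(0, 3, n))

X = np.column_stack([hour, weekday, day_of_year, holiday, wind_chill])
X_tr, X_te, y_tr, y_te = train_test_split(X, load, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
mae = np.mean(np.abs(pred - y_te))
print(f"test MAE: {mae:.2f} (about {100 * mae / np.mean(y_te):.1f}% of the mean load)")
```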
INVESTIGATION OF ACCURACY CHARACTERISTICS OF NAVIGATION SYSTEMS USING REMOTE SENSING DATA
Т.V. Sazonova, М.S. Shelagurova
Abstract. The article considers methods of navigation for Unmanned Aerial Vehicles (UAV) based on Earth remote sensing data, i.e. specially processed high-resolution aerial or space photographs. For video navigation, orthorectified photographs of the area are used; for microrelief navigation, photographs processed by the stereophotogrammetry method are used. The methods of video navigation are based on the extraction and comparison of characteristic points in the current and reference images. Depending
on the availability of reference photographs, video navigation is divided into navigation by georeferencing of images to the terrain and odometry. Odometric navigation does not need reference data, which is its positive feature, but the principles of odometric navigation lead to an accumulation of errors in the measured navigation parameters. Video navigation by georeferencing of images to the terrain
provides more accurate characteristics, but it requires a preliminary preparation of reference data and
uses an on-board computer with a large memory capacity. The created methods of video navigation were examined by mathematical modelling. The results demonstrated that it is advantageous to combine both methods. In this case, the expected accuracy of UAV navigation using the introduced methods is comparable to the accuracy of a satellite navigation system. The implementation of the video navigation methods in an on-board computer based on the NVIDIA Jetson TX2 single-board module demonstrated their efficiency in real time. The methods of navigation by microrelief are based on a search estimation of UAV coordinates
within the limits of the confidence square. The results of mathematical modelling of microrelief navigation demonstrated that this method works with a high accuracy (3 to 8 m) both in UAV flights over a man-made environment and in UAV flights over a composition of natural objects. The implementation of navigation by microrelief in an on-board computer built with the Salut-EL24PM2 RAYaZh.441461.031 module demonstrated its serviceability in real time. The introduced methods of video navigation and navigation by microrelief were successfully verified by semi-natural modelling. Flight tests are planned for the near future. For the practical realization of the created methods of high-precision navigation, the issue of providing the user with reference data must be resolved, which requires operative processing of up-to-date high-resolution space and aerial photographs of the area. -
DEVELOPMENT OF A COMPUTER MODEL FOR IMPROVING THE SYSTEM OF PASSIVE HEAT REMOVAL FROM THE HOLDING POOL WITH A TWO-PHASE RING THERMOSIPHON
V.V. Dyadichev, S.G. Menyuk, D.S. Menyuk
Abstract. The purpose of this study is to create a computer model that will be used to improve the system of passive heat removal from the holding pool with a two-phase annular thermosiphon. This model makes it possible
to analyze the operation of the system, determine a set of quasi-optimal solutions for its parameters and
improve the efficiency of heat removal. The development of such a model can help improve heat transfer
processes and improve the efficiency of the system as a whole. Method. To solve this problem, mathematical
and computer modeling methods were used, the mechanisms of heat transfer in the system were studied
and optimal parameters for effective heat removal were determined, as well as various design options
and system parameters were compared to select the most effective solution. The use of these methods
provided an integrated approach to the development and improvement of a passive heat removal system
with a two-phase ring thermosiphon. Result. A computer model has been developed to improve the system
of passive heat removal from the holding pool with a two-phase annular thermosiphon. This model makes it possible to analyze the operation of the system, refine its parameters and improve the efficiency of heat removal.
Creating such a model is an important step in the development and improvement of the system,
allowing you to more accurately predict its performance and make the necessary improvements. Conclusion.
The developed computer model can be used for further research, improvement of heat removal processes
and increase the efficiency of the system as a whole. It allows you to study the heat removal processes
in more detail and adjust the operation of the system. The model provides an opportunity to perform
numerical calculations, analyze various scenarios and evaluate the effectiveness of changes in system
parameters. -
LARGE LANGUAGE MODELS APPLICATION IN ORGANIZATION OF REPLENISHMENT OF THE KNOWLEDGE BASE ON METHODS OF INFORMATION PROCESSING IN SYSTEMS OF APPLIED PHOTOGRAMMETRY
А.V. Kozlovskiy, E.V. Melnik, А.N. Samoylov
Abstract. The article deals with the issues related to the automation of the procedure of synthesis of applied
photogrammetry systems. Such systems serve to measure and account for objects from images and are
now widely utilized in various fields of activity, such as mapping, archaeology and aerial photography.
Increasing availability and mobility of imaging devices has also contributed to the widespread application.
All this has led to active research aimed at developing methods and algorithms for applied photogrammetry
systems. Manual tracking of new methods and algorithms of photogrammetric information
processing for a wide range of application areas is quite difficult, which makes the automation of this
procedure urgent. The solution proposed in the article is based on the use of a knowledge base of information
processing methods in applied photogrammetry systems, the main elements of which are a fuzzy
ontology of the subject area and a database, which is logical, since the information about the subject area
can be structured quite easily. As a basis for the ontology, an existing solution was taken, which was supplemented
based on the results of analyzing the current state of the subject area. The resulting ontology
was further used to search and classify information processing methods in applied photogrammetry systems
and to populate the knowledge base. Due to the intensification of the development of new methods of information processing in the systems of applied photogrammetry, there is a need to modify the ontology
and to replenish the database, i.e. to replenish the knowledge base. The Internet is an important source of
information for this purpose. To automate the search for data on information processing methods, the modification of the ontology and the population of the knowledge base, it is reasonable to use large language models, which simplify several natural language processing tasks, including clustering and the formation of new entities for classification. The corresponding method is described in the paper, and the results of testing its performance are given. As part of the problem solving, a comparative analysis of large language models was carried out, resulting in the choice of the RoBERTa model. -
REVEALING THE ELECTRICAL CHARACTERISTICS OF THE DISCHARGE CIRCUIT FOR A HIGH-VOLTAGE LIGHTNING DISCHARGE STAND
А.А. Yakovlev, М.Y. Serov, R.V. Sakhabudinov, А.S. Golosiy
Abstract. The safe passage of a launch vehicle through atmospheric layers in which electrical discharges may occur is ensured not only by design measures, but also by preliminary ground tests. Countries that launch spacecraft into orbit have special bench equipment for this purpose. A particular system of views has developed and has been implemented in standards and other documents, and the given requirements have become obligatory to meet. The current paper continues the research related to the creation of a
high-voltage lightning discharge stand which is being developed for testing rocket and space technology products.
The main task of this device is the generation of given electric (or electromagnetic) pulses that simulate the
effect of lightning discharge on the structural elements of the launch vehicle. There are four pulse current generators in the high-voltage part (of types A, B, C and D), sequentially connected to the load to create a common current pulse of a given shape. Standard loads include a high-voltage grounding table, a vertical rack and a breakdown testing device. The task of this stage of work was to check the parameters of the current pulse that occurs when the
type-A pulse current generator discharges into the calibration load, namely the high-voltage grounding table. The article presents the results of calculating the parameters of the type-A pulse in the discharge circuit over the course of the pulse generation process: before the moment of short-circuiting of the capacitive storage and from the moment of short-circuiting. The crowbar discharge device makes it possible to connect the load
according to a two-loop scheme at the time of the maximum discharge current. Analytical dependencies of both equivalent electrical circuits are covered in the article. The differential equations are solved by a numerical method, and graphs of the change of current and voltage of the oscillatory type-A pulse in the open and closed circuits are obtained.
The work performed, as well as the calculations, made it possible to evaluate the dynamic characteristics of the studied circuit during its operation in one of its fastest and most energy-intensive modes. In general, the switching of the discharge circuit to the high-voltage grounding table with the selected parameters confirms the operability of the VSMR and the achievement of satisfactory characteristics of the given current pulse generated by the type-A generator.
SECTION IV. ELECTRONICS, NANOTECHNOLOGY AND INSTRUMENTATION
-
INFLUENCE OF SURFACE STATES ON THE ELECTRIC FIELD OF THE N-P JUNCTION
N.M. Bogatov, V.S. Volodin, L.R. Grigoryan, М.S. Kovalenko
Abstract. The structure and properties of semiconductor devices largely depend on the distribution of the internal
electric field, which is created by the distribution of ionized impurities. One of the methods for the controlled
introduction of donors and acceptors is their diffusion into the bulk of the semiconductor. The existence
of surface electronic states in the forbidden energy band has an uncontrollable effect on the distribution
of the electric field in the surface region. The purpose of the study is to analyze the influence of surface
states on the distribution of the electric field in a diffusion n-p junction. Research objectives. 1 – Develop an
algorithm for the numerical solution of the Poisson equation, taking into account the general electrical neutrality
of the n-p junction and the density of surface states in the emitter. 2 – Calculate numerically the distributions
of electric potential, electric field strength, electron and hole concentrations in a diffusion n-p junction.
3 – Analyze the influence of surface states on the change in the internal electric field and the rate of
surface recombination of nonequilibrium charge carriers. As a result, the influence of surface states on the electric field distribution in a diffusion n-p junction in silicon was numerically simulated. The model is based
on a numerical solution of the Poisson equation with boundary conditions that include the condition of the
overall electrical neutrality of the sample. It is shown that the density of electronic states on the emitter surface
creates a narrow region of electric charge density distribution. The maximum value of the modulus of the
electric field strength in this region exceeds the similar value in the n-p junction by three times or more. The
electric field strength caused by the surface charge directs minority charge carriers towards the surface. This
increases the effective rate of their recombination. Reducing the surface charge density or changing its sign
is one of the tasks of semiconductor device technology. -
COMPACT ULTRA-WIDEBAND CARDIOID VIVALDI RADIATOR WITH RECTANGULAR IMPEDANCE INSERTS
R.E. Kosak, А.V. Gevorkyan
Abstract. The paper presents the design of the cardioid-shaped Vivaldi radiator with rectangular impedance
inserts along the edges of its metallization. The influence of impedance inserts, their location, and parameters
on the characteristics of the radiator is investigated. The frequency characteristics of the voltage
standing wave ratio (VSWR), realized gain, efficiency, and cross-polarization level of the radiator without
and with impedance inserts are given. The developed radiator is electrically compact (electrical height at
the upper operating frequency is equal to 0.740 λ, and at the lower operating frequency is equal to
0.127 λ) and ultra-wideband with an overlap ratio (OR) of 5.809:1 in the operating frequency band
127.3–739.5 MHz. The width of the impedance inserts varied from 5.0 mm to 25.5 mm towards the tapered
slot. At the same time, an increase in width leads to a slight expansion of the operating frequency band and an increase in OR. But the realized gain practically does not change since the radiator is weakly directional
and its realized gain depends mainly on the size of the aperture. The numerical values of efficiency
and cross-polarization characteristics also remained virtually unchanged with increasing insert width.
The optimal width of the impedance inserts is equal to 25.5 mm. The height of the impedance inserts was
measured from the top of the radiator. The influence of impedance inserts with a height of 60, 100, 140,
145, and 160 mm is considered. It has been determined that as their height increases, the width of the
operating band increases, but the average VSWR level in the frequency band 180–280 MHz gradually
increases. The realized gain, efficiency, and cross-polarization level also remain virtually unchanged with
the increasing height of the inserts. The optimal height of the impedance inserts is equal to 25.5 mm. Thus,
the introduction of impedance inserts makes it possible to expand the operating frequency band of the
radiator.