No. 1 (2025)
Full Issue
SECTION I. INFORMATION PROCESSING ALGORITHMS
BIOINSPIRED SEARCH IN A COMPLETE GRAPH FOR A PERFECT MATCHING OF MAXIMUM CARDINALITY
B. K. Lebedev, O. B. Lebedev, M. A. Ganzhur, M. I. Beskhmelnov
A reconfigurable architecture of a hybrid multi-agent decision-making system based on swarm algorithm
paradigms has been developed. The reconfigurable architecture allows implementing the following
hybridization methods by tuning: high-level and low-level hybridization by nesting, preprocessor/
postprocessor type, co-algorithmic based on one or several types of algorithms. A methodology for
synthesizing a perfect matching of minimum weight in a complete graph based on the basic principles of
hybridization of search and evolutionary procedures has been proposed. In this paper, the swarm agents are
transforming chromosomes, which are the genotypes of the solution. An ordered list of the set of graph
vertices is used as the solution code. A structure of an ordered matching code has been developed, the
main advantage of which is that one solution (matching) corresponds to one code and vice versa. The
properties of the ordered code have been determined and encoding and decoding algorithms have been
developed. The hybrid system starts its operation with a swarm of bees randomly generating an initial set
of chromosomes: an arbitrary set of solutions differing from each other. The key operation
of the bee algorithm is the study of promising solutions and their neighborhoods in the search space.
A method for forming neighborhoods of solutions with an adjustable degree of similarity and closeness
between them has been developed. At subsequent stages of the multi-agent system operation, solutions are
searched for by procedures built on the basis of hybridization of the swarm and ant algorithms. A distinctive
feature of hybridization is the preservation of the autonomy of the hybridized algorithms. Note that a
single data structure is used to represent solutions in the algorithms, which simplifies the docking of the
developed procedures. An approach to constructing a modified paradigm of a swarm of transforming
chromosomes is proposed. The search for solutions is performed in an affine space. In the process of
searching, permanent transformations (transitions) of chromosomes into states with the best value of the
objective function of the solution (gradient strategy) are carried out. The process of finding solutions is
iterative. At each iteration, the chromosomes are transformed (transitioned) into states with better values
of the objective function of the solution. The purpose of transforming a chromosome toward the current best
chromosome is to minimize the degree of difference between them by changing the mutual arrangement
of elements in the ordered list, which corresponds to an increase in the weight of the affine
connection. The chromosomes updated after the transformation serve, in turn, as the base points for subsequent
transformations. As a result of the experiments, it was found that the quality indicators of the developed
algorithms are higher than those reported in the literature.
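As an editorial illustration of the ordered-code idea described above (one matching corresponds to one code and vice versa), here is a minimal Python sketch of a canonical encoding; the concrete encoding rule is an assumption, not the authors' algorithm.

```python
# Hypothetical sketch: one canonical way to make the matching <-> code map a bijection
# is to order vertices inside each pair, then order the pairs, then flatten.
def encode_matching(pairs):
    """Encode a perfect matching (set of vertex pairs) as an ordered vertex list."""
    canon = sorted(tuple(sorted(p)) for p in pairs)   # order inside pairs, then between pairs
    return [v for pair in canon for v in pair]        # flatten to a single list

def decode_matching(code):
    """Recover the matching from the ordered list."""
    return [(code[i], code[i + 1]) for i in range(0, len(code), 2)]

m = {(3, 0), (2, 5), (4, 1)}
c = encode_matching(m)                                # [0, 3, 1, 4, 2, 5]
assert set(decode_matching(c)) == {(0, 3), (1, 4), (2, 5)}
```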
DEVELOPMENT OF AN AGENT-BASED ALGORITHM FOR SOLVING SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS OF LARGE DIMENSION
D. A. Bereza, L. A. Gladkov, N. V. Gladkova
Solving systems of linear algebraic equations (SLAE) is one of the most important fundamental tasks in
the development of a new generation of design systems in various fields of science and technology. The relevance
of this study is due to the growing volume of data and the increasing complexity of tasks. Traditional
methods for solving SLAEs, such as the Gauss method, the sweep (Thomas) method, and iterative methods (the Jacobi
method, the Seidel method, etc.), have proven themselves well when working with relatively small systems. However,
when solving large SLAEs, these methods are not efficient enough due to high computational
costs and memory requirements. One of the promising approaches to solving problems of high complexity
is the use of agent-based systems. Agent-based systems offer a new way of organizing computing processes
based on the interaction of independent agents, each of which performs a specific part of the task. This
approach allows for more flexible allocation of computing resources and efficient solution of complex tasks
in a big data environment. A method for solving equations describing a mathematical model of a circuit is
presented, taking into account the optimization of the ratio between the accuracy of calculations and the time
of their execution. In this paper, we propose an agent-based algorithm for solving systems of linear algebraic
equations of large dimension. During the development of this algorithm, an analysis of existing methods and
algorithms for solving SLAEs was carried out, and their advantages and disadvantages were identified. An
agent-oriented architecture was developed to solve large-scale SLAEs, and the organization of agent interaction
and mechanisms for distributing tasks between them were proposed. A software implementation of the developed
algorithm was performed. To evaluate the effectiveness of the proposed approach, it was tested on a
number of test tasks. The performance and scalability of the developed algorithm were also evaluated, and it
was compared with traditional methods for solving SLAEs.
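A minimal sketch of the idea, assuming a Jacobi-style iteration in which each "agent" owns a block of rows and updates its part of the solution independently (this is an illustration of why the scheme parallelizes, not the authors' agent architecture):

```python
import numpy as np

def agent_jacobi(A, b, n_agents=4, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = np.zeros(n)
    D = np.diag(A)
    blocks = np.array_split(np.arange(n), n_agents)   # each agent's row range
    for _ in range(max_iter):
        x_new = x.copy()
        for rows in blocks:                           # conceptually run by independent agents
            R = A[rows] @ x - D[rows] * x[rows]       # off-diagonal contribution
            x_new[rows] = (b[rows] - R) / D[rows]     # Jacobi update for owned rows
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1, 0], [1, 5, 2], [0, 2, 6]])     # diagonally dominant toy system
b = np.array([1.0, 2, 3])
print(agent_jacobi(A, b, n_agents=3))                 # close to np.linalg.solve(A, b)
```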
ADAPTIVE ALGORITHM FOR PROCESSING SPATIAL-TEMPORAL SIGNALS FOR DATA TRANSMISSION IN 3D WIMAX CHANNEL BASED ON SIMO-OFDM PRINCIPLES
V. P. Fedosov, Al-Musawi Wisam Mohammedtaqi M. Jawad, S. V. Kucheryavenko
The development of the telecommunications industry is focused on the use of wireless broadband
communication systems that allow increasing the speed of information transfer. New technologies with
high transmission capabilities have been developed to solve this problem. Limitation of the signal spectrum
and signal fading in Fresnel zones due to multipath components in a wireless system deployed in
densely built-up urban areas are significant problems in the design of wireless communication systems, as
well as the occurrence of the Doppler effect due to the movement of the mobile station and signal attenuation
during propagation in the channel in different frequency ranges. To increase the speed and throughput,
it is possible to use a transmit/receive procedure that forms single-input multiple-output (SIMO)
channels, providing spatial filtering when choosing
the path with the maximum signal power. The article presents the analysis and modeling of data transmission based on the SIMO system of the 3D WiMAX wireless channel. The results of comparison of signal
processing by this method with and without the adaptive algorithm, obtained by the criterion of maximum
signal-to-noise ratio (SNR), are presented as dependences of the bit error rate
(BER) on the SNR. As a result of modeling, it was concluded that for the same
system, the probability of error is sensitive to a change in the modulation type, in other words, BER
changes in accordance with a change in the type of signal modulation. It can also be concluded that SIMO
systems are sensitive to multipath signal propagation for the same modulation type, and BER increases
with an increase in the number of receivers, since the SNR decreases.
INITIALIZATION OF SOLUTIONS IN POPULATION METAHEURISTICS BASED ON THE METROPOLIS–HASTINGS METHOD
S. I. Rodzin, A. I. Dermenzhi
The most important tasks of making optimal decisions using heuristic algorithms are considered to
be improving accuracy and preventing premature convergence. Most of the research in this area focuses
on the development of new operators, tuning the parameters of population metaheuristics, and hybridization
of several solution search strategies. Much less attention is paid to initialization, an important operation
in population algorithms that involves creating an initial population of solutions. A new approach to
population initialization for heuristic algorithms is proposed. When forming a set of initial solutions, it is
proposed to use the Metropolis–Hastings method. According to this method, the initial solutions in the
population take values close to the global or local optima of the objective function. This makes it possible
to increase the accuracy of the solutions obtained. To demonstrate the possibilities of the proposed initialization
approach, it was integrated into the basic differential evolution algorithm. To assess the effectiveness
of the strategy, an experimental test was carried out by comparing it with such well-known methods
as random initialization, opposition-based and chaos-based initialization, as well as the method of diagonal
uniform distribution. The comparison was carried out on a representative set of multimodal, unimodal,
and hybrid functions, including the Rastrigin, Qing, Rosenbrock, Schwefel, quintic, step, and sphere functions.
The convergence rate of the algorithms and the accuracy of the obtained solutions were analyzed.
The average value for the best solutions, the median best solution, the standard deviation from the best
solution, the number of function calls, the success rate, and the acceleration coefficient were used as comparison
indicators. The values of the indicators were averaged based on the results of 30 separate runs of
each algorithm. The proposed algorithm works faster and shows better convergence and accuracy. The algorithm
gives the best results because the initialization strategy allows choosing promising solutions
that are close to local or global optima. Statistical verification of the results of the algorithms using the
Friedman test confirmed that the proposed approach to initializing a population of solutions provides
a better balance between convergence rate and solution accuracy.
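A sketch of the initialization idea under stated assumptions: a random-walk Metropolis-Hastings chain with target density exp(-f(x)/T), whose samples concentrate near low values of the objective f. The temperature, step size, and thinning values below are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def mh_init_population(f, dim, pop_size, bounds=(-5.12, 5.12), T=5.0, step=0.3, thin=20):
    lo, hi = bounds
    x = rng.uniform(lo, hi, dim)
    pop = []
    while len(pop) < pop_size:
        for _ in range(thin):                                 # thin the chain to decorrelate samples
            cand = np.clip(x + rng.normal(0, step, dim), lo, hi)
            if rng.random() < np.exp((f(x) - f(cand)) / T):   # MH acceptance rule
                x = cand
        pop.append(x.copy())
    return np.array(pop)

pop = mh_init_population(rastrigin, dim=10, pop_size=50)      # initial DE population
print(pop.shape, rastrigin(pop[0]))
```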
ALGORITHMS FOR GENERATING AND PROCESSING SEM IMAGES FOR IDENTIFYING THE PROPERTIES OF BIOINORGANIC MATRICES, AND METHODS OF THEIR VERIFICATION
A. V. Poltavskiy, D. S. Polyanichenko, E. R. Kolomenskaya, M. A. Butakova
Scanning electron microscopy (SEM) is one of the most common methods for analyzing the characteristics
of materials obtained through chemical synthesis. The use of this method makes it possible to
obtain images with high resolution and magnification. The article examines image-analysis algorithms
for materials with specific properties such as porosity, namely bioinorganic matrices (scaffolds). Scaffolds are a broad
class of materials with a wide range of applications, including agriculture, medicine, catalysis, and many
others. One of the important applications of such structures is tissue engineering, where such frameworks
are necessary to support the regenerative processes of body tissues. For each organism, the matrices must
be personalized, which requires a laborious process of selecting the characteristics of the framework applicable
in a particular case. This task is currently partially solved by the application of artificial intelligence
technologies to improve accuracy or support decision making during matrix fabrication or analysis.
However, some of the work in this process is still manual and represents a labor-intensive chore for the
technician. In particular, the process of analyzing SEM images and characterizing the resulting material
still involves many time-consuming steps using various tools. At the same time, such characteristics as
porosity, tortuosity, and diffusivity are very important factors for an expert in the process of making a
decision on the applicability of the fabricated bioinorganic matrix in each specific case. Accordingly, the
purpose of this research is to develop a set of algorithms for processing SEM images. Within this
goal, a number of subtasks can be distinguished: development of algorithms
for detection of objects in the image, development of a neural network model for refining the detection
results, implementation of algorithms for calculating the characteristics of porous material, as well as
design and execution of a number of verification tests to confirm the quality of the performed calculations.
As a result of our research, we drew some conclusions. In particular, we found that an approach using
synthetic data generation significantly speeds up and simplifies the learning process of neural networks,
as well as improves the quality of output models. We also found that the algorithms we developed can fully
automate the analysis of SEM images with porous structures, and their quality was confirmed through a
number of verification tests. These algorithms can be applied to other similar problems related to image
analysis and identification of features and characteristics.
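As a minimal illustration of one of the characteristics named above, a sketch of a 2D porosity estimate from a grayscale SEM image via Otsu thresholding; the assumption that pores are the dark phase is ours, and this is not the authors' pipeline.

```python
import numpy as np
from skimage import filters

def porosity_2d(gray_image):
    t = filters.threshold_otsu(gray_image)   # global intensity threshold
    pores = gray_image < t                   # assumption: pores are the dark phase
    return pores.mean()                      # pore pixels / total pixels

img = np.random.default_rng(1).integers(0, 256, (512, 512)).astype(np.uint8)  # stand-in image
print(f"porosity ~ {porosity_2d(img):.2f}")
```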
DEVELOPMENT AND RESEARCH OF ALGORITHMS FOR FORECASTING FIRE HAZARDOUS SITUATIONS
Singh Sanni, A. V. Pribylskiy, E. Y. Kosenko
Early detection of fire hazard situations is a critical aspect of ensuring safety, as it helps to minimize
the risk of material and human losses. Early detection of threats helps to preserve material assets,
reduce the time for their restoration and, more importantly, save human lives. In this regard, a new approach
to predicting fire hazard situations is proposed: an algorithm for training a model for predicting
fire hazard situations, as well as an algorithm for predicting fire hazard situations, which are developed
on machine learning models such as recurrent neural networks, random forest, optimization trees, autoregressive
neural networks, etc. The study proposes to consider algorithms for predicting fire hazard situations
developed on the basis of an analysis of existing forecasting algorithms, including methods based
on machine learning, statistical models and simulation approaches, taking into account their advantages
and disadvantages, and accuracy indicators. The results of the study of the developed algorithms show that
they are capable of predicting the temperature outside the sensor with an accuracy of 93.33%
based on the test data from a complex of interconnected fire sensors, with errors of MAE = 1.72,
MSE = 2.95 in the abnormal mode on the test data, and with an accuracy of 92.85% for the temperature
inside the sensor, errors MAE = 1.66, MSE = 2.75. The accuracy on the test data in the normal mode for
the outside temperature was 96.27%, errors MAE = 1.22, MSE = 1.48, and the accuracy of predicting the
inside temperature was 96.16%, with errors MAE = 1.24, MSE = 1.53. For the test sample of 500,000 readings,
the errors of the predicted outside temperature were MAE = 1.82 and MSE = 3.31, and the accuracy
was 91.78%. The errors of the predicted inside temperature (temp2_inside) were MAE = 1.89 and
MSE = 3.57, and the accuracy was 91.35%.
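For reference, the quoted error measures are computed in the usual way; the arrays below are illustrative, not the article's data.

```python
import numpy as np

y_true = np.array([21.0, 22.5, 23.1, 24.0])   # illustrative sensor temperatures
y_pred = np.array([20.4, 23.0, 22.0, 25.1])   # illustrative model predictions

mae = np.mean(np.abs(y_true - y_pred))        # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)         # mean squared error
print(f"MAE = {mae:.2f}, MSE = {mse:.2f}")
```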
PYTHON ANT ALGORITHM
D. Y. Zorkin, L. V. Samofalova, N. V. Asanova
This study is devoted to the analysis and optimization of the ant colony algorithm for solving the
traveling salesman problem, a classic NP-hard combinatorial optimization problem. The primary objective
of the work is to experimentally assess the impact of the algorithm’s parameters on the quality and
efficiency of the search for approximate solutions, as well as to develop recommendations for their adaptive
tuning. The standard Berlin52 graph from the TSPLIB library—containing the coordinates of 52 cities
with a known optimal route length of 7542 units—was used as the test dataset. Experiments were conducted
in a Python environment using the ACO-Pants library, which implements the ant colony algorithm.
A series of 10 runs with fixed parameters was performed: number of ants (20), number of iterations (100),
pheromone influence coefficient (α = 1.0), distance coefficient (β = 2.0), and pheromone evaporation rate
(ρ = 0.5). The results showed an average deviation from the optimum of 1.85%, with the best found solution
being 7675.23 (a deviation of 1.67%). To enhance the algorithm’s efficiency, adaptive mechanisms
for dynamic parameter tuning were explored: a linear increase of α (up to 2.0) and a decrease of β (to
3.0), a reduction of ρ (to 0.3), as well as an increase in the number of ants (up to 30). These modifications
reduced the average deviation to 1.70% and improved the stability of the solutions. Particular attention
was paid to analyzing the balance between exploring new routes and exploiting accumulated data. It was
found that increasing the number of ants improves the quality of solutions; however, beyond 30 agents, the
efficiency gains diminish. Dynamic adjustment of the parameters prevents premature convergence to local
minima and accelerates the search for globally optimal paths. Visualization of the convergence dynamics
confirmed a rapid decrease in route length during the first 20 iterations, followed by subsequent stabilization.
The practical significance of this work lies in demonstrating the flexibility of the ant colony algorithm
for routing tasks in logistics and network planning. The results indicate that ACO outperforms general-purpose
methods (for example, genetic algorithms) in computational efficiency for the TSP. The developed
recommendations for parameter tuning can be applied to scale the algorithm to larger graphs. Overall,
the study emphasizes the importance of adaptive approaches in metaheuristic optimization and opens up
prospects for further improvements through hybridization with other methods.
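Since the abstract names concrete parameter values, here is a self-contained sketch of the underlying Ant System update loop with those defaults (alpha = 1.0, beta = 2.0, rho = 0.5, 20 ants, 100 iterations). It runs on a random instance rather than Berlin52 and is not the ACO-Pants API.

```python
import numpy as np

rng = np.random.default_rng(42)

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Classic Ant System sketch; parameter defaults follow the abstract."""
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))            # heuristic visibility, diagonal padded to avoid /0
    tau = np.ones((n, n))                     # pheromone matrix
    best_len, best_tour = np.inf, None
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i, cand = tour[-1], np.array(sorted(unvisited))
                w = tau[i, cand] ** alpha * eta[i, cand] ** beta
                nxt = int(rng.choice(cand, p=w / w.sum()))    # probabilistic city choice
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        tau *= 1.0 - rho                      # pheromone evaporation
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
            for k in range(n):                # deposit pheromone along the closed tour
                tau[tour[k], tour[(k + 1) % n]] += Q / length
    return best_len, best_tour

pts = rng.random((20, 2))                     # random 20-city instance (not Berlin52)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(aco_tsp(dist)[0])
```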
PARALLELIZATION OF INFORMATION PROCESSING IN THE FORMATION OF COMPOSITE IMAGES
A. V. Kozlovskiy
This paper considers the organization of parallel information processing when solving
problems of applied photogrammetry, namely the formation of high-resolution images. The article presents
a new information processing method for obtaining high-resolution (HR) image formation for applied
photogrammetry tasks based on adaptive stitching of subframes on the basis of key point matching
and contour analysis using a low-resolution (LR) reference image as a template. One of the features of the
method is parallelization of information processing, which is achieved by working in a group of mobile
objects. The novelty of the method lies in the combination of the following key components: the use of the
reference LR image as a template is the basis for parallelizing the information processing and
makes it possible to organize joint work of the participants according to common rules, as well as to minimize
the global errors of frame stitching; the use of a complex algorithm of subframe matching by key
points for stitching the high-resolution image against the LR template significantly increases the detail
and accuracy of image reconstruction by controlling the stitching error coefficient. Experimental
results demonstrate a 25% improvement in stitching accuracy (SSIM = 0.92) and a 40% reduction in processing
time compared to traditional methods. The method is adapted for application on devices with limited
computational resources, including distributed systems based on mobile platforms, and allows
parallelization-based optimization in a group of mobile devices (mobile objects, MOs).
SECTION II. DATA ANALYSIS AND MODELING
BUILDING A MODEL AND EVALUATING ITS ROBUSTNESS IN THE TASK OF FORECASTING THE ELECTRICITY CONSUMPTION PROFILES OF CONSUMERS WITH ADDITIVE TECHNOLOGIES
N. K. Poluyanovich, O. V. Kachelaev, M. N. Dubyago
The construction of a robust model and an assessment of its accuracy in problems of forecasting
electrical loads with additive consumption profiles are considered. A study was conducted on the influence
of neural network parameters (data packet size; number of neural network layers; neuron activation functions; optimizers) on the error in predicting power consumption. Graphs comparing the profiles of actual
and projected consumption and the deviation of the forecast for electricity consumption above the average
value for the period under review are presented. Optimal parameters of the predictive neural network
model have been selected in manual mode. The study of genetic algorithm variants
revealed the optimal hybrid algorithm for training a neural network model based on the rapid convergence
of the solution. A Python-based algorithm for selecting network hyperparameters based on power
consumption data with different patterns of electricity consumption has been tested. The conducted training
and testing of the genetic algorithm confirmed the possibility of obtaining forecasts of greater accuracy
and the possibility of automating the selection of optimal hyperparameters. In the tasks of forecasting
power consumption using a neural network model, regardless of the method of creating the structure, the
optimal metric has been selected. It is revealed that for consumers with additive profiles of electricity consumption,
it is advisable to use the robust Huber loss function, at the same time, for consumers with a
unique or regular profile of electricity consumption, the use of a sliding window increases the error, unlike
additive consumers. It is shown that the use of a genetic algorithm significantly increases the accuracy
of forecasting due to the individual selection of optimal parameters for a specific consumer. A block
diagram of an intelligent device for predicting energy consumption modes has been developed. A decision-making
assistance system has been introduced that allows for the implementation of planned proactive
management based on data taken from the electricity meter and obtained as a result of the neural network
forecasting model. The decision–making assistance system calculates the deviation of the projected power
consumption values from the actual ones and, as a result, issues recommendations to the dispatcher of the
distribution power grids. Based on data from the decision-making assistance system, the distribution grid
operator can make a decision on ordering the required amount of electricity, gains the ability to monitor
possible spikes and drops in consumer electricity consumption and abnormal equipment operation,
and can additionally monitor the adequacy of the neural network model.
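For reference, the robust Huber loss mentioned above has the standard form (the threshold delta is a tuning parameter; the article's value is not given):

```latex
L_\delta(e) =
\begin{cases}
\frac{1}{2}e^{2}, & |e| \le \delta,\\
\delta\left(|e| - \frac{1}{2}\delta\right), & |e| > \delta,
\end{cases}
\qquad e = y - \hat{y}.
```

It is quadratic near zero, like MSE, and linear in the tails, which damps the influence of outlying consumption spikes.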
APPLYING DEEP LEARNING TO EXTRACT CAUSALITY FROM TEXT USING SYNTHETIC DATA
A. N. Tselykh, I. A. Valukhov, L. A. Tselykh
This article addresses the problem of developing a causal full-tuples extraction model from unstructured
texts to represent decision-making situations in complex social and humanitarian environments.
We present a causal full-tuples extraction model using a pre-trained BERT with additional feature-based
special fine-tuning. To refine the causal classification, the model uses two types of features (verb causality
and cause-and-effect quality metrics) to recognize a causal tuple and automatically extracts semantic features
from sentences, increasing the accuracy of extraction. Text preprocessing is performed using the open-source
spaCy library. The extracted cause-and-effect tuples in the format <cause phrase, verb phrase,
effect phrase, polarity> are easily transformed into the corresponding elements of the graph <outgoing
graph node, graph arc direction, incoming graph node, graph connection weight sign> and can then be
used to construct a directed weighted signed graph with deterministic causality on arcs. In order to reduce
dependence on external knowledge, synthetic generated annotated datasets are used to fine-tune and test
the BERT model. Experimental results show that the accuracy of extracting cause-and-effect relationships
on synthetic data reaches 94%, and the F1 value is 95%. The advantages of the presented technological
solution are that the model does not require high operating costs, is implemented on a computer with
standard characteristics, and uses free software, which makes it accessible to a wide variety of users. It is
expected that the proposed model can be used to automate text analysis and support decision-making under
conditions of high uncertainty, which is especially important for social and humanitarian environments.
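A sketch of the tuple-to-graph mapping described above, with illustrative tuples (not output of the authors' model):

```python
import networkx as nx

# Each extracted <cause, verb, effect, polarity> tuple becomes a directed, signed edge.
tuples = [
    ("budget cuts", "reduce", "service quality", -1),
    ("staff training", "improves", "service quality", +1),
]

G = nx.DiGraph()
for cause, verb, effect, polarity in tuples:
    G.add_edge(cause, effect, label=verb, sign=polarity)   # cause -> effect, signed weight

print(G["budget cuts"]["service quality"])                 # {'label': 'reduce', 'sign': -1}
```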
METHODOLOGY FOR DETERMINING AND ANALYZING THE TECHNICAL CHARACTERISTICS OF TECHNOLOGICAL TRENDS
M. S. Anferova, A. M. Belevtsev, V. V. Dvoretskiy
The rapid growth of scientific knowledge and the ever-increasing volume of scientific publications
pose serious challenges in identifying new trends and understanding the changing research landscape. The formation
of technological trends is necessary for building development
roadmaps at the national, sectoral, and corporate levels. The task of identifying technological trends is an
important problem in the field of data analysis and machine learning. Well-known methods of analysis,
including clustering by time factor, make it possible to form key phrases, but the task of forming trends,
studying their characteristics and dynamics of their development does not currently have a satisfactory
solution. The solution to this problem involves: creating a methodology for moving from key phrases to
directly naming new technological trends; determining the regularities of technology development in
a given subject area; and determining the directions of future research. Solving these tasks will create an
effective decision support tool, reduce the time to identify a trend, assess the dynamics of its development
and build roadmaps. In the presented work, a new approach to the formation of technological trends is
proposed. The method is based on machine learning algorithms and natural language processing methods
and aims to overcome some of the limitations of traditional methods. In particular, the technique makes it
possible to identify complex relationships between various scientific concepts and provides a more accurate
and comprehensive way to identify trends. An analysis of methods for identifying trends
in scientific and technological development and of their evolution is carried out, based on keywords identified using a
model with time clustering. An algorithm for identifying trends is proposed.
THE ANALYTIC HIERARCHY PROCESS: A SYSTEMATIC APPROACH TO DECISION MAKING UNDER UNCERTAINTY
A. A. Bognyukov, D. Y. Zorkin, I. A. Tarasova
This article provides a detailed examination of the application of the Analytic Hierarchy Process
(AHP) for evaluating investment alternatives under dynamic market conditions. The AHP methodology
enables the structuring of complex multi-criteria tasks by dividing them into hierarchical levels and then
progressively synthesizing the results to reach an optimal decision. Special emphasis is placed on how
AHP reduces subjectivity when assessing numerous investment-related factors, as the final conclusions
are based on quantitative indicators and a consistency check of expert judgments. To illustrate the advantages
of this approach, the article presents a comparative analysis of three companies: Apple Inc.,
PAO “Segezha Group,” and PAO “Aeroflot.” The evaluation criteria include stock price dynamics, dividend
yield, market capitalization, volatility (oscillation coefficient), and the influence of industry specifics
on growth prospects. Apple Inc. stands out primarily due to its high market capitalization and stable dividend
payouts, whereas PAO “Segezha Group” and PAO “Aeroflot” each have their own strengths, such
as growth potential in specific market segments and a focus on promising industries. Nevertheless, the
final results of the multi-criteria analysis indicate that Apple Inc. leads in most of the key metrics overall.
It should be noted that the significance of AHP extends well beyond academic research. In practice, this
method is widely used in the corporate sector for risk assessment, investment portfolio formation, and the
selection of strategic priorities. Its flexibility ensures universal applicability both for large multinational
corporations and for local enterprises that aim to objectively compare alternatives. The article also highlights the importance of careful data collection and systematization. Errors or inaccuracies at this stage can
significantly distort the final conclusions, which is particularly critical in making investment decisions. The
consistency check within AHP makes it possible to promptly identify conflicting evaluations and adjust the
pairwise comparison matrices. Thus, the authors demonstrate that the Analytic Hierarchy Process is a reliable
tool for the objective and transparent evaluation of investment projects. By considering a wide range of
quantitative and qualitative characteristics, AHP enables the development of balanced recommendations
regarding which assets and companies can deliver the highest returns at a reasonable level of risk.
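A worked sketch of the consistency check mentioned above, on an illustrative 3x3 pairwise-comparison matrix (not the article's data); CR < 0.1 is Saaty's usual acceptability threshold.

```python
import numpy as np

A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]], dtype=float)    # illustrative pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]                     # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority vector

n = A.shape[0]
CI = (lam_max - n) / (n - 1)                  # consistency index
RI = 0.58                                     # Saaty's random index for n = 3
CR = CI / RI                                  # consistency ratio
print(w, f"CR = {CR:.3f}")
```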
GEOINFORMATION MODELS OF EMERGENCY SITUATIONS WITH SPATIAL GENERALIZATIONS
S. L. Belyakov, L. A. Izrailev
The main problem of decision making in emergency situations is the reliability of these decisions. Emergency
situations, by virtue of their unpredictable and dynamic nature, are often described by incomplete and inaccurate information.
The use of accumulated experience makes it possible to find reliable solutions based on known precedents of
emergency situations. Geographic information systems (GIS) can act as a tool for accumulating experience and
generating solutions based on it. The cartographic basis of GIS allows analyzing emergency situations, taking
into account their spatial and temporal characteristics. However, the cartographic representation of precedents
with adopted solutions describes them too narrowly. There is no idea what properties of the situation are significant
and whether the precedent solution can be applied in other circumstances. The use of known images and
their admissible transformations, created on the basis of expert knowledge, can solve this problem. The image
generalizes a set of similar precedents. The purpose of such generalization is to expand the area of application
of information from individual observations by determining the boundaries of permissible transformations. However,
attracting experts to create the images is a difficult task, since each situation is unique in its own way.
No less problematic is the transfer of experience from one spatial and temporal domain to another. In this paper
we consider an approach to automatic image generation. We propose a method of creating a geoinformation
model of emergency situations, which includes the generalization of precedents on a common location. This
approach is aimed at improving the reliability of prediction of emergency situations. An experiment was conducted
to synthesize images based on precedents of road accidents and evaluate their effectiveness compared to
individual precedents. The use of the developed method of automatic data processing to create images is relevant,
as it significantly reduces the cost of knowledge acquisition. The use of spatial generalizations also eliminates
the need for expert knowledge, since the formation of precedent sets is performed by analyzing their geographical
location.
SIMULATION OF LIGHTNING-STRIKE-INDUCED CURRENTS IN THE TESTING OF ROCKET HARDWARE SAMPLES
A. A. Yakovlev, R. V. Sakhabudinov, A. S. Golosiy
A lightning strike (LS) to a launch vehicle (LV) is accompanied by a direct impact on the airframe
and electromagnetic (EM) fields occurring inside the airframe. The EM fields influence the extended power
lines (PL) and induce currents and voltages in them. In this case, pyrotechnic circuits of the LV might be
actuated, critically disrupting the operation of the airborne equipment and the vehicle itself. Their off-nominal
ignition may lead to a catastrophe. The amplitude-time parameters of induced EM fields reach
hundreds of kV/m and kA/m for electric and magnetic fields, respectively. Constructing a simulation facility
capable of producing EM fields with similar properties and dimensions comparable to those of the LV becomes
a tough technical challenge. The purpose of the research was to substantiate an acceptable, practically feasible method of full-scale modeling of induced currents. The research objectives were to evaluate
the possibility of generating an electromagnetic field of specified parameters, to estimate the currents and
voltages induced by lightning discharges in the LV cable lines, and to design a circuit solution for the
installation under development. The electromagnetic processes occurring in cable lines when exposed to
lightning discharge currents were calculated based on solutions to Maxwell's equations. The cable lines
were modeled by equivalent substitution schemes. In this regard, it is considered reasonable to use a combined
method of evaluation of LV tolerance to the impact of EM fields caused by lightning strikes; the
method is meant to combine both calculation and experimental techniques. At the first stage, the expected
response of extended power lines to EM fields is calculated, and the second stage implies loading the
power line consumers with estimated current (voltage) pulses provided by high voltage test bench for
lightning strike simulation. The use of this approach makes it possible to significantly simplify the requirements
for test equipment for generating electromagnetic fields, which will ultimately ensure the safe
use of pyrotechnic devices on board a launch vehicle in conditions of lightning activity.
SIMULATION MODEL OF A LOW-ALTITUDE METHOD FOR PROFILING A REFLECTIVE SURFACE
A. N. Bakumenko, V. T. Lobach
The paper is devoted to the development and study of a new method for low-altitude surface profiling
using a synthetic aperture radar (SAR), which allows obtaining high-resolution radar images both in
range and along the track line. The paper examines in detail the theoretical foundations of SAR systems,
the features of signal formation and processing, and the creation of a simulation model to test the effectiveness
of the proposed method. The paper analyzes the basic principles of SAR systems, including the use
of probing signals with linear frequency modulation (LFM). These signals play a critical role in achieving
the high resolution required for high-quality display of small details on the earth's surface. The paper
pays attention to taking into account the features of the SAR carrier's motion, such as its speed and flight
altitude. These parameters have a significant impact on the quality of the obtained images, and their correct
control can significantly improve the final result. The authors consider the influence of the wave
phase front and the Doppler effect on the shape of the received trajectory signal. Understanding these
processes is necessary for correct interpretation of data and improving the accuracy of radar images. The
paper presents a developed simulation model in the MATLAB programming language, which allows simulating
the operation of a radar and assessing the quality of the images obtained. This model is an important
tool for testing and optimizing the proposed method. The paper provides examples of simulation
results that confirm the performance and adequacy of the proposed model. These results show that the
method is able to operate effectively even in difficult conditions and provide high-quality radar images.
Thus, the article presents a new and promising method for low-altitude profiling of a reflective surface,
which can be used in a variety of fields, including scientific research, environmental monitoring, agriculture,
as well as military and civilian applications.
SECTION III. COMPUTING AND INFORMATION MANAGEMENT SYSTEMS
A REVIEW OF TRENDS IN THE DEVELOPMENT OF BIOMIMETIC UNDERWATER VEHICLES
D. A. Gritsenko, I. B. Abbasov
This paper presents an overview of some modern trends in the development and creation of biomimetic
underwater vehicles. Biomimetics as an interdisciplinary field of science draws inspiration from
natural forms, which allows developers to create original solutions for underwater research problems.
The introduction notes the relevance of the problem and the advantages of biomimetic designs, and provides
some successful examples of using these underwater objects. The purpose and objectives of the review
are indicated, and the methods for collecting and analyzing information are described. The features
of this interdisciplinary field of underwater vehicle development are noted, which are designed taking into
account not only technology, but also using knowledge from the field of biology. The designs of biomimetic
fish robots, materials for these underwater vehicles are presented, taking into account streamlining. The
varieties of technologies for creating autonomous underwater vehicles, their features of movement and
control in the aquatic environment are described: fish-like movements, jet thrust. The methods of controlling
biorobots are emphasized, developments based on the movement of the fins of the manta ray are indicated.
The importance of using deep reinforcement learning in modeling the control of an underwater vehicle is
noted. Examples of the development of biomimetic underwater vehicles based on computational analysis of
fluid dynamics, the occurrence of turbulence in various types of motion are presented in detail. Some developers
have created bionic dolphin-like robots by combining mechanical properties and underwater planning,
which has significantly improved the maneuverability and speed of these devices. Some examples of the implementation
of the bionic design method in the field of shipbuilding and aviation are considered. The problems
and prospects for the development of biomimetic technologies in relation to the development of underwater
autonomous biomimetic vehicles are noted. In conclusion, the main results of the study and the prospects
for the development of biomimetic technologies in marine engineering are indicated.
INTEGRATED INTELLIGENT UNMANNED VEHICLE CONTROL SYSTEM
A. L. Okhotnikov
The article describes the results of the development and implementation of the intelligent control
system of the «Lastochka» unmanned train on the Moscow Central Circle. The peculiarities of unmanned
control in railway transport are the relatively high speed and large mass of trains, which result in a
long braking distance. It is necessary to accurately determine the distance to an
obstacle, identify it, and determine the exact location of the train on the track. This task can
be solved by an intelligent decision-making system based on the integration of vision and high-precision
positioning systems. The main element of the control system is a specialised computer using artificial intelligence
technologies. To recognise and identify obstacles, the control system uses an artificial neural
network, which is part of the computer software. Technical vision operates in four ranges of electromagnetic
waves. The vision system can be considered as an information-measuring system that performs input
and processing of information without human participation. The structure of the integrated vision system,
which includes on-board, infrastructure and mobile systems, is presented. The experiment has shown that
the vision system reacts faster than a human on average by 14 seconds. The composition of the equipment
of the integrated high-precision positioning system is proposed, which in addition to global navigation
satellite system, platform-free inertial system and odometers, includes a digital track model. The model is
the source of the exact locations of the reference infrastructure objects, relative to which the transportation
object is positioned with high accuracy, and the basis for zeroing out the growing measurement error
of the inertial navigation system and odometer. The results of practical implementation
of the intelligent control system on the Moscow Central Circle are described.
CONTROL IN AUTONOMOUS TELECOMMUNICATION SYSTEMS USING INTENT ONTOLOGY
N. A. Zhukova, I. A. Kulikov
The article is devoted to the description of management capabilities in autonomous telecommunication
systems using ontologies of intents. In modern telecommunication networks, there are trends towards
decentralization of systems and endowing their components with the ability to operate autonomously,
while the business logic of their operation is determined at the system level, which in many cases requires
interaction between several or many system components acting as service providers or consumers.
The article considers autonomous networks managed using the TMN (Telecommunication
Management Network) model, a multi-level model
that includes the levels of business management, services, the telecommunication network, and its components.
To manage networks in the paradigm of service providers and consumers, the international association
uniting service providers and their consumers in the field of telecommunications TMForum has developed
a concept based on the use of ontologies of intents (Intent in Autonomous Networks), which allow formulating
management tasks in autonomous networks by defining criteria for managing networks and their
elements from the point of view of the intentions of the participants in the interaction to receive and provide
services. Due to the fact that the ontology of intentions is described in the OWL format, which represents
it as a semantic network of interconnected classes, the article proposes to use a telecommunication
network model in the form of a knowledge graph for managing telecommunication networks, which is
associated with both the domain ontology of telecommunication networks and the ontology of intentions,
which ensures the autonomy of network components due to management using intentions, and the use of a
domain ontology in the field of telecommunication networks facilitates integration with third-party suppliers
and consumers of the operator's services. The proposed approach to jointly using the ontology of intentions,
policies and a network model in the form of a knowledge graph for managing telecommunication
networks at the business level is new and its applicability is shown in the article using the example of implementing
the registration process and fulfilling an application for connecting a telecommunication service.
The considered example shows the possibility of jointly using a telecommunication network model in
the form of a knowledge graph built on the basis of a domain ontology and an ontology of intentions when
performing high-level business processes for managing an autonomous telecommunication network.
HARDWARE AND SOFTWARE IMPLEMENTATION OF A REMOTELY OPERATED UNMANNED UNDERWATER VEHICLE OF THE MICRO-CLASS
O. V. Shindor, P. A. Kokunin, A. A. Egorchev, L. N. Safina, Y. S. Murin
In modern underwater robotics, the tasks of control, increasing autonomy, extending the functions
performed, and import substitution are relevant. The paper considers an example of building a
remotely controlled unmanned underwater vehicle (RCUV) of the micro class, the main purpose of which is to
use for educational purposes, in particular for involving schoolchildren in engineering and programming, students
in programming microcontrollers, practical study of control systems, digital image processing using wavelet
transform. The article presents the basic principles and features of the design, hardware, algorithmic and
software implementation of a robotic construction kit based on a micro-class RCUV. The justification for the
design solution of using the RCUV for educational purposes is given, and the principles of algorithmic
movement of the underwater unit are considered. Based on the two-dimensional wavelet transform for
processing underwater images, an algorithm was developed and verified. The wavelet transform is a modern
and effective tool for identifying local features of signals and image processing. The use of two-dimensional
wavelet decomposition, which is the process of decomposing a signal into high-frequency and low-frequency
components, allows us to form four matrices of wavelet coefficients: one of approximation (low-frequency)
coefficients and three of detail (high-frequency) coefficients carrying information about the
vertical, horizontal, and diagonal features of the analyzed image. In the process of image processing, after
applying the wavelet transform, the approximation coefficients are changed to increase the image contrast, then
the RGB components are determined based on the approximation matrix of the wavelet coefficients based on
grayscale and the average and maximum values are calculated for each of the components. Then the color rendering
coefficient and improvement coefficients are calculated, on the basis of which a modified matrix of wavelet
coefficients is formed and the inverse transform is applied. As a result of applying the algorithm to test images,
the possibility of color correction was demonstrated, in particular, the reduction of the influence of green and
blue components by 8.6%. The results obtained can be used in the construction of image recognition systems in
the underwater environment and the design of autonomous unmanned underwater vehicles.
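A minimal sketch of the processing scheme described above, assuming one 2D DWT level per RGB channel and a simple gain on the approximation coefficients (the paper's coefficient formulas are more elaborate):

```python
import numpy as np
import pywt

def enhance_channel(ch, gain=1.15, wavelet="haar"):
    cA, (cH, cV, cD) = pywt.dwt2(ch, wavelet)            # approximation + 3 detail bands
    return pywt.idwt2((cA * gain, (cH, cV, cD)), wavelet)  # boost contrast, invert transform

img = np.random.default_rng(0).random((256, 256, 3))    # stand-in underwater frame
out = np.stack([enhance_channel(img[..., c]) for c in range(3)], axis=-1)
print(out.shape)
```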
SECTION IV. NANOTECHNOLOGY, ELECTRONICS AND RADIO ENGINEERING
SYNTHESIS OF THE DESIGN OF BROADBAND MATCHING OF A DIPOLE RADIATOR
V. A. Obukhovets, N. V. Samburov
The classical half-wave dipole has a rather small operating frequency band. The paper presents a
comprehensive method for extending the frequency band of a dipole radiator. The broadband matching
effect is provided based on the principle of partial compensation of the complex load. As the basis of the
matching device, a matching method using a reactive loop is used, which provides good matching quality
with a complex load at minimal geometric dimensions. A feature of the method is that matching is considered
for the single structure "matching device – radiator – reflector". For this, it is necessary to
take into account the influence of both the structural elements of the transmission line matching and the mutual
reaction of the reflector and the symmetrical dipole. The purpose of the work is to synthesize the design
of a symmetrical dipole radiator with a matching reactive loop. The paper presents a design containing a
dipole excited from a two-wire line (which is also its struts), shorted at the end. This two-wire line is connected
in the middle part to the coaxial supply line. The reflector has a complex shape in order to provide the
necessary distance from the dipole to the reflector. For this purpose, the design of the dipole radiator has
been formed, the number, nomenclature, and ranges of the variable parameters have been determined, and a
mathematical model has been formulated and verified. Based on this model, numerical studies of the design
alignment level in a range of variable parameters have been carried out. Using a mathematical model, the
possibility of broadband matching is demonstrated, and the parameters of the primary model for
electrodynamic modeling are found. Based on the formed primary model, a computational experiment was
conducted using 3D electromagnetic simulation (HFSS) software in order to determine the optimal geometry
and dimensions of the radiator structure. In one case, the maximum operating frequency band
was chosen as the optimality criterion; in the other, the maximum directivity. These cases
reflect the practical tasks of using emitters of this type. The possibility of matching in a frequency band of at
least 80% has been demonstrated. The results of verification of the mathematical model, mathematical and
electrodynamic modeling, as well as the layout of the radiator, are presented.
CHARACTERISTICS OF THE SIGNAL AND NOISE MIXTURE AT THE OUTPUT OF A LOGARITHMIC RECEIVER
A. V. Andrianov, A. N. Zikiy, A. S. Kochubey
An experimental study of the statistical parameters of a mixture of signal and noise at the output of
a logarithmic receiver was carried out: mean, standard deviation, mode, median, coefficients of asymmetry
and kurtosis. The presence of these distribution parameters makes it possible to approximate the
probability distribution function of a mixture of signal and noise by the Edgeworth series of four terms.
Logarithmic receivers are an important component of radio communication, radio navigation, radar and
electronic warfare systems. They determine important characteristics such as frequency range, dynamic
range, sensitivity, and noise immunity. The purpose of this work is to refine the model of a mixture of signal
and noise at the output of a logarithmic receiver. Most well-known publications use the assumption of
a normal distribution law of a mixture of signal and noise at the output of a logarithmic receiver.
The refinement of the signal-noise mixture model lies in the fact that this distribution is described analytically
by the Edgeworth series, and the coefficients of the Edgeworth series are measured experimentally
using a mock-up of a logarithmic receiver and a digital oscilloscope. In this case, the average value and
standard deviation are measured directly and displayed on the oscilloscope screen, and the coefficients of
asymmetry and kurtosis are obtained by processing an array of data recorded from the oscilloscope.
The MATLAB program is used as a means of processing an array of data. To illustrate the results of the
experiments, screenshots from the oscilloscope screen are shown, which depict oscillograms and histograms
of a mixture of signal and noise. The following distribution parameters are obtained: the average
value varies from 671 to 1938 mV; the RMS value varies from 23.51 mV to 0.553 mV; the coefficient of
asymmetry varies from minus 0.078 to 0.313; the kurtosis coefficient varies from 2.394 to 3.471. The
results obtained allow us to build the detection characteristics of a logarithmic receiver and estimate the
probability of a false alarm.
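For reference, a common four-term Edgeworth form for the density, written in terms of the measured skewness gamma_1 and kurtosis gamma_2 (the exact form used by the authors may differ):

```latex
f(x) \approx \varphi(x)\left[\,1
  + \frac{\gamma_1}{6}\,\mathrm{He}_3(x)
  + \frac{\gamma_2 - 3}{24}\,\mathrm{He}_4(x)
  + \frac{\gamma_1^{2}}{72}\,\mathrm{He}_6(x)\right],
\qquad x = \frac{u - \bar{u}}{\sigma},
```

where phi is the standard normal density and He_k are the probabilists' Hermite polynomials.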
LOW-PROFILE ANTENNA ARRAY FOR A BASE STATION
Vo Ba Au, Y. V. Yukhanov
The design of a low-profile antenna array for a base station is considered. The main part of the design
is a square dipole array composed of thickened vibrators. The design uses a snake-shaped balun,
which provides the formation of transmission lines and support for radiators with a square contour. The
improvement of the operating frequency band and the reduction of the height were achieved by placing a
dielectric material with εr = 2, tan(δ) = 0.002 directly between the dipole and the ground, the electrical
thickness of which was 0.16λ at the central frequency of the operating wavelength range. The results of a
numerical study of the characteristics of an elementary cell of an antenna array with periodic boundary
conditions on the edges in the ANSYS HFSS software are presented. VSWR of antenna elements and prototype
element Kathrein 739622 are shown. The dependence of VSWR of antenna element on frequency at
different values of dipole radius is shown. The influence of balun size on characteristics of antenna array
element is investigated. It was established by calculation that the choice of the dipole radius shortens the
dipole by 1.5 times, and the choice of the size of the “Snake” shaped balun ensures a lower antenna
height, without deteriorating the antenna’s characteristics. Radiation patterns in horizontal and vertical
planes are shown. Based on the proposed element, models of finite antenna arrays are developed. This
antenna usually consists of a row of 4 identical elements installed along a vertical line to form an antenna
array. VSWR and gain of antenna array and also radiation patterns in the horizontal and vertical planes
at different frequencies are shown. The results show that due to the proposed original idea of transforming
a rectilinear balun into a curvilinear one in the form of a "snake", together with thickened vibrators, it was possible
to obtain a base-station radiator design with 1.5 times smaller dimensions compared to
antennas used in practice, such as the Kathrein 739622, while preserving good characteristics.
METHODOLOGY FOR FPGA IMPLEMENTATION OF A CONTROLLED RECURSIVE FILTER WITH A FINITE IMPULSE RESPONSE IN THE FORM OF AN APPROXIMATION OF THE HANN WINDOW
D. I. Bakshun, S. P. Tarasov, I. I. Turulin
The paper considers a methodology for implementing a recursive filter with a finite impulse response
(FIR) in the form of a Hann window, with the possibility of controlling the FIR duration (in samples or clock cycles),
including in the process of filtering, based on simultaneous sequential compensation of samples from
the recursive part. A brief review of the existing solution for controlling the duration of a rectangular
impulse response is performed, and methods for realizing an impulse response of more complex
shape, exemplified by the Hann window, are proposed. The method proposed by the authors keeps
the filter stable when the impulse response duration changes in time. The structure
of a module implementing the filter on a Field-Programmable Gate Array (FPGA) is developed.
The recursive filter structure considered in this paper has significantly lower computational
complexity compared to the classical FIR filter structure, and it can be used in embedded systems with
limited computational resources. The lower computational complexity is achieved by using the function
approximating the Hann window, which is a third-degree polynomial, as the FIR. Filtering is accomplished
by using two independent filters, one tuned to the duration of the FIR before its change and the
other tuned to the duration of the FIR afterwards with the result summarized. This approach is based on
the principle of linearity of the system, which allows combining the output signals of the filters without
losing their properties. The control of the delay duration is performed based on the ability of the dual-port
RAM to write and read simultaneously. When changing the FIR duration, the calculation
of filter coefficients is performed during filtering, thus eliminating interruptions between the output signal
sections before and after changing the FIR duration. There is protection against entering a new FIR duration
value before the compensation of the transient caused by the previous change of the FIR duration
is completed. After the compensation procedure is completed, the filter tuned to the FIR duration before the
change is terminated.
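For reference, the Hann window of length N that the third-degree polynomial approximates:

```latex
w[n] = \frac{1}{2}\left(1 - \cos\frac{2\pi n}{N-1}\right)
     = \sin^{2}\frac{\pi n}{N-1},
\qquad n = 0, 1, \dots, N-1.
```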
SMALL-SIZED WELDING INVERTER FOR SEMI-AUTOMATIC WELDING WITH HIGH-FREQUENCY ALTERNATING CURRENT
V. V. Burlaka, S. V. Gulakov, A. Y. Golovin, D. S. Mironenko
The design of a small-sized high-efficiency welding inverter with high-frequency alternating current
output for semi-automatic welding is considered. The inverter is distinguished by good power density and
lowered power losses due to the absence of an output power rectifier. It is shown that when high-frequency alternating
current is supplied to the welding arc, several problems arise: the non-constant inductance of the
welding circuit presents a significant reactance at conversion frequencies of tens of kHz, limiting the arc
current; at high frequency, a surface effect (skin effect) begins to manifest itself. To solve the problem of
current limitation, a scheme with reactance compensation is proposed by connecting a capacitor in series
with the welding circuit and introducing frequency control of the current in the resulting series-resonant
circuit. The aim of the work is to develop a welding inverter for semi-automatic welding with highfrequency
alternating current, ensuring high-quality process flow. As a result of the research, a smallsized
welding inverter for semi-automatic arc welding with high-frequency alternating current was developed
and prototyped. Laboratory tests of the designed inverter have shown steady arc burning and stable
process flow. The developed inverter can be easily modified to increase the welding current. The structure
of the power section of the developed welding power supply also allows it to be used for induction heating
tasks by connecting an inductor with an inductance of 2...7 μH to the output terminals and introducing
minor adjustments into the microcontroller control program to implement inductor current control.
Thanks to the increased power factor, the current drawn from the supply grid by the developed inverter is
25...40% lower than that of widespread welding inverters without a power factor corrector. This reduces
the load on the distribution grid and allows welding operations to be carried out when powered
from a "weak" grid or with a long power cable.
DEVELOPMENT OF AN INTEGRATED APPROACH TO ELECTRICAL EQUIPMENT FAULT DETECTION USING CONVOLUTIONAL NEURAL NETWORKS
A. E. Kolodenkova, S. S. Vereshchagina
Electrical equipment (EE) is a key part of industrial electrical systems where unexpected mechanical
failures in operation can cause serious consequences (disruption of the technological process, reduction
in the quality and quantity of manufactured products and emergencies). For timely detection of such
faults, as well as to ensure normal operation of the systems, it is required to conduct regular assessment of
EE technical state using modern computer technologies under conditions of incomplete and fuzzy information.
To solve this problem, we propose an approach using quantization and convolutional neural networks
(CNNs), which differs from existing approaches by complex processing of thermograms obtained
with a thermal imaging device and of images with black-and-white and color graphs obtained from instruments
or built based on statistical data. This approach provides an opportunity to improve the accuracy of classification
of various EE malfunctions and reduce unscheduled equipment failures due to prompt decision-making
regarding the EE technical state under conditions of incomplete and fuzzy information. The review
of studies in this subject area by both Russian and foreign scientists reflects a number of successful experiments
on the use of CNNs. The CNN developed to classify faults outputs a class number to which the current
state of the equipment relates (class 1 – serviceable EE; class 2 – serviceable EE with small deviations).
This paper considers a generalized scheme and algorithm of a complex approach to EE fault detection
with their detailed description. The study results were obtained when diagnosing the asynchronous
motor АИР63А4У1 and confirm the validity and objectivity of using the proposed approach.
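An illustrative sketch only: a small CNN classifier of the kind described above, mapping a preprocessed thermogram to one of several equipment-state classes. The input size, layer sizes, and class count are assumptions, not taken from the article.

```python
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = FaultCNN()
logits = model(torch.randn(8, 1, 64, 64))   # batch of 8 single-channel thermograms
print(logits.argmax(dim=1))                 # predicted class index per image
```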
THE INFLUENCE OF SYNTHESIS CONDITIONS ON THE MORPHOLOGY OF ZnO NANORODS, OBTAINED BY CHEMICAL BATH DEPOSITION METHOD
V. A. Voronkin, E. M. Bayan, V. V. Petrov
Nanostructured materials, particularly zinc oxide (ZnO), have attracted significant attention due to
their wide range of applications, including piezoelectric devices, gas sensors, and photocatalysis. In particular,
ZnO nanorods with their one-dimensional structure possess a high surface area and tunable morphology.
This study investigates the effect of various synthesis conditions on the morphology of ZnO nanorods formed
by chemical deposition. The impact of zinc oxide precursor concentration and auxiliary substances in the
seeding solution, thermal treatment time, seed layer thickness, seed center diameter, and substrate type on
the morphology of ZnO nanorods is examined. It is found that changing the concentration of hexamethylenetetramine
(HMTA) has a minor effect on nanorod dimensions, while reducing the seeding solution concentration
results in decreasing their length from 380±28 nm to 247±41 nm. Increasing the seed layer thickness
promotes larger nanostructures and leads to an increase in average rod diameter from 86±12 nm to 102±13
nm and length from 356±29 nm to 391±46 nm. Reducing the seeding solution concentration decreases seed
center diameters from 9±1 nm to 7±1 nm; conversely, reducing thermal treatment time increases them due to
incomplete thermal decomposition of precursors. Horizontal positioning of substrates suppresses vertical
growth due to active nucleation in bulk reaction solutions followed by deposition onto substrates; vertical
positioning enhances crystal length instead. The obtained results provide valuable insights for the directed synthesis
of ZnO nanorods with specified characteristics for various applications.