No. 2 (2021)
Full Issue
SECTION I. INFORMATION PROCESSING ALGORITHMS
-
HYBRID ENCRYPTION BASED ON SYMMETRIC AND HOMOMORPHIC CIPHERS
L. K. Babenko, E. A. Tolomanenko
Abstract. The purpose of this work is to develop and research a hybrid encryption algorithm based on the joint application of the symmetric encryption algorithm Kuznyechik and homomorphic encryption (the Gentry scheme or the BGV scheme). Such an encryption algorithm can be useful in situations with limited computing resources. The point is that, with the basic operations of the symmetric encryption algorithm correctly expressed through Boolean functions, it becomes possible on the transmitting side to encrypt the data with a symmetric cipher and the secret encryption key with a homomorphic one. Manipulations can then be carried out on the receiving side so that the original message ends up encrypted only with the homomorphic cipher: the symmetric encryption is removed, but the information remains inaccessible to the node that processes it. This secrecy property makes it possible to carry out resource-intensive operations on a powerful computing node, providing homomorphically encrypted data to a low-resource node for subsequent processing in encrypted form. The article presents the developed hybrid algorithm. Kuznyechik, which is part of the GOST R 34.12-2015 standard, is used as the symmetric encryption algorithm. In order to be able to apply homomorphic encryption to data encrypted with the Kuznyechik cipher, the S-boxes of the Kuznyechik algorithm are presented in Boolean form using the Zhegalkin polynomial, and the linear transformation L is presented as a sequence of elementary addition and multiplication operations on the transformed data. The primary modeling of the developed algorithm was carried out on a simplified version of the Kuznyechik algorithm, S-KN1.
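As an illustration of the Boolean-form step, the sketch below (a minimal illustration, not the authors' code; the 3-bit S-box is a toy stand-in for the Kuznyechik substitution) recovers the Zhegalkin polynomial (algebraic normal form) of each output bit of an S-box with the binary Moebius transform.

    def anf_coefficients(truth_table):
        """Binary Moebius transform of one Boolean function given as a
        truth table of length 2**n; coeffs[m] = 1 iff the monomial whose
        variable set is the bit mask m appears in the Zhegalkin polynomial."""
        coeffs = list(truth_table)
        n = len(coeffs).bit_length() - 1
        for i in range(n):
            step = 1 << i
            for j in range(len(coeffs)):
                if j & step:
                    coeffs[j] ^= coeffs[j ^ step]
        return coeffs

    # Toy 3-bit S-box (illustrative only, not the Kuznyechik substitution):
    sbox = [3, 6, 1, 7, 0, 5, 2, 4]
    for bit in range(3):
        tt = [(sbox[x] >> bit) & 1 for x in range(8)]
        print(f"output bit {bit}: monomial masks =",
              [m for m, c in enumerate(anf_coefficients(tt)) if c])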
-
DEVELOPMENT AND RESEARCH OF THE METHOD OF VECTOR ANALYSIS OF EMG OF THE FOREARM FOR CONSTRUCTION OF HUMAN-MACHINE INTERFACES
N. A. Budko, M. Y. Medvedev, A. Y. Budko
Abstract. The paper deals with the problems of increasing the depth and the long-term stability
of communication channels in human-machine interfaces, built on the basis of data on the
electrical activity of the forearm muscles. A possible solution is to use the method of analysis of
electromyogram (EMG) signals, which combines vector and command control. In view of the possibility
of random displacement of the position of the electrodes during operation, a mathematical
model was built for vector analysis of EMG in spherical coordinates, which is invariant to the
spatial arrangement of the electrodes on the forearm. Command control is based on gesture
recognition by means of a pretrained artificial neural network (ANN). Vector control consists in
solving the problem of calibrating the channels of EMG sensors according to the spatial arrangement
of the electrodes and calculating the resulting vector of muscle forces used as an additional
information channel to set the direction of movement of the operating point of the control object.
The proposed method has been tested on actually recorded EMG signals. The influence of the
duration of the processed signal fragments on the process of extracting information about the
rotational movement of the hand was investigated. Since the position of the electrodes changes from one operating session to another, an algorithm for reassigning the EMG channels and calibrating their amplification is presented, which makes it possible to reuse a once-trained
ANN for recognition and classification of gestures in the future. Practical application of the results
of the work is possible in the development of algorithms for calibration, gesture recognition
and control of technical objects based on electromyographic human-machine interfaces.
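A minimal sketch of the resultant-vector idea (electrode angles, gains and amplitudes below are hypothetical, not the authors' data): each calibrated EMG channel is treated as a force magnitude along the direction of its electrode, and the vector sum gives the direction command; once the azimuths are calibrated, the result does not depend on how the channels are numbered.

    import numpy as np

    def resultant_vector(rms, angles_rad, gains):
        """rms: per-channel EMG amplitudes; angles_rad: calibrated electrode
        azimuths around the forearm; gains: per-channel calibration factors."""
        a = np.asarray(rms) * np.asarray(gains)
        x = np.sum(a * np.cos(angles_rad))
        y = np.sum(a * np.sin(angles_rad))
        return np.array([x, y])  # direction and magnitude of muscle effort

    # Example: 8 electrodes evenly spaced around the forearm.
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    v = resultant_vector([0.1, 0.7, 0.9, 0.4, 0.1, 0.0, 0.0, 0.1],
                         angles, np.ones(8))
    print("direction (rad):", np.arctan2(v[1], v[0]),
          "magnitude:", np.linalg.norm(v))
-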
AUTOMATED STRUCTURAL-PARAMETRIC SYNTHESIS OF A STEPPED DIRECTIONAL COUPLER ON COUPLED LINES BASED ON A GENETIC ALGORITHM
Y. V. Danilchenko, V. I. Danilchenko, V. M. Kureichik
Abstract. All major manufacturers are moving toward smaller dimensions of modern microelectronic devices. This leads to the transition to new standards for designing and manufacturing VLSI circuits. The well-known automated design algorithms cannot fully meet the new requirements that arise when designing VLSI. In this regard, when solving design problems there is a need to develop new methods for solving this class of tasks. One such technique is a hybrid multidimensional search system based on a genetic algorithm (GA). An automated approach to VLSI design based on a genetic algorithm is described, which makes it possible to create an algorithmic environment in the field of multidimensional genetic search for solving NP-complete problems, in particular the placement of VLSI elements. The purpose of this work is to find ways to place the elements of a VLSI circuit based on the genetic algorithm. The scientific novelty is the development of a modified multidimensional genetic algorithm for the automated design of very large scale integrated circuits. The formulation of the problem in this paper is as follows: optimize the placement of the elements of the VLSI using a modified multidimensional GA. The practical value of the work is the creation of a subsystem that makes it possible to use the developed multidimensional architecture, methods and algorithms to effectively solve VLSI design problems, as well as to conduct a comparative analysis with existing analogues. The fundamental difference from the well-known approaches lies in the application of new multidimensional genetic structures in the automated design of VLSI; in addition, the modified genetic algorithm has been validated. The results of the computational experiment showed the advantages of the multidimensional approach to solving the problems of placing VLSI elements compared to existing analogues. Thus, the problem of creating methods, algorithms and software for the automated placement of VLSI elements is currently of particular relevance. Its solution will improve the qualitative characteristics of the designed devices and will reduce design time and costs.
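A minimal sketch of GA-based placement under assumed conditions (one-dimensional slots, wire-length fitness, permutation encoding; this illustrates the search scheme only, not the authors' multidimensional algorithm):

    import random

    def wirelength(perm, nets):
        pos = {e: i for i, e in enumerate(perm)}
        return sum(abs(pos[a] - pos[b]) * w for a, b, w in nets)

    def order_crossover(p1, p2):
        i, j = sorted(random.sample(range(len(p1)), 2))
        child = [None] * len(p1)
        child[i:j] = p1[i:j]
        rest = [g for g in p2 if g not in child]
        for k in range(len(child)):
            if child[k] is None:
                child[k] = rest.pop(0)
        return child

    def ga_place(elements, nets, pop=30, gens=200):
        population = [random.sample(elements, len(elements)) for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=lambda p: wirelength(p, nets))
            elite = population[: pop // 2]
            children = [order_crossover(*random.sample(elite, 2))
                        for _ in range(pop - len(elite))]
            for c in children:  # swap mutation
                if random.random() < 0.3:
                    a, b = random.sample(range(len(c)), 2)
                    c[a], c[b] = c[b], c[a]
            population = elite + children
        return min(population, key=lambda p: wirelength(p, nets))

    # Toy netlist: (element_a, element_b, weight) triples.
    nets = [(0, 1, 2.0), (1, 2, 1.0), (0, 3, 1.0), (2, 3, 3.0)]
    print(ga_place(list(range(4)), nets))
-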
SOFTWARE SUBSYSTEM FOR SOLVING NP-HARD COMBINATORIAL-LOGICAL PROBLEMS ON GRAPHS
V. V. Kureichik, Vl. Vl. Kureichik
Abstract. The paper is devoted to the development of software for solving NP-complete and NP-hard combinatorial-logical problems on graphs. The paper contains a description of these problems. New multilevel search architectures, such as simple combo, parallel combo,
two-levels, integrated, and hybrid are proposed to effectively address them. These architectures
are based on methods inspired by natural systems. The key difference between these architectures
is the division of search into two or three levels and the use of various algorithms for evolutionary
modeling and bioinspired search on them. This allows obtaining sets of quasi-optimal solutions to
perform parallel processing and partially eliminate the problem of premature convergence. The article
provides a detailed description of the developed software subsystem and its modules. As modules
in the subsystem, there are five developed architectures and a set of developed algorithms for evolutionary
modeling and bioinspired search, such as evolutionary, genetic, bee, ant, firefly and monkey.
Thanks to its modular structure, the subsystem has the ability to design more than 50 different search
combinations. This makes it possible to use all the advantages of bioinspired optimization methods
for efficiently solving NP-hard combinatorial-logical problems on graphs. To confirm the effectiveness
of the developed software subsystem, a computational experiment was carried out on test
examples. The series of tests and experiments carried out have shown the advantage of using a software
product for solving combinatorial-logical problems on graphs of large dimension, in comparison
with known algorithms, which indicates the prospects of this approach. The time complexity of the developed algorithms is O(n log n) at best and O(n³) at worst.
-
AN EVOLUTIONARY ALGORITHM FOR SOLVING THE DISPATCHING PROBLEM
V. V. Kureichik, A. E. Saak, Vl. Vl. Kureichik
Abstract. The paper considers one of the most important optimization tasks, the dispatching task, which belongs to the class of NP-hard optimization problems. The paper presents the formulation of this
problem. In Grid systems the array of users' requests for computer services is modelled by an extended linear polyhedron of coordinate resource rectangles. In this case, dispatching is represented
by the localization of a linear polyhedron in the envelope of the area of computational and
time resources of the system according to the multipurpose criterion of the quality of the applied
assignment. Due to the complexity of this problem, the authors propose methods of evolutionary
modelling for its effective solution and describe a modified evolutionary search architecture.
Three additional blocks are introduced as a modification. This is a block of "external environment",
a block of evolutionary adaptation and a block of "unpromising solutions." The authors
have developed a modified evolutionary algorithm that uses the Darwin’s and Lamarck’s evolution
models. This makes it possible to significantly reduce the time for obtaining the result, partially
solve the problem of premature convergence of the algorithm, and obtain sets of quasi-optimal
solutions in polynomial time. A software module has been developed in the C# language. A computational experiment has been carried out on test examples and has shown that the quality of solutions obtained with the developed evolutionary algorithm is, on average, 5 percent higher than that of solutions obtained using the known sequential, initial-ring and level algorithms at comparable time, which indicates the effectiveness of the proposed approach.
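A hedged sketch of the Darwin/Lamarck distinction on a toy bit-string objective (a stand-in for the dispatching cost, not the authors' implementation): in the Lamarckian variant the result of local improvement is written back into the genotype, in the Darwinian one only the improved fitness guides selection.

    import random

    def improve(x):  # simple local search: greedy bit flips
        best = x[:]
        for i in range(len(x)):
            y = best[:]; y[i] ^= 1
            if sum(y) > sum(best):
                best = y
        return best

    def evolve(n=30, pop=20, gens=40, lamarck=True):
        P = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
        for _ in range(gens):
            scored = []
            for x in P:
                y = improve(x)
                # Lamarck: write the improved genotype back; Darwin: keep x.
                scored.append((sum(y), y if lamarck else x))
            scored.sort(reverse=True)
            parents = [x for _, x in scored[: pop // 2]]
            # Uniform crossover of random parent pairs fills the population.
            P = parents + [[random.choice(p) for p in
                            zip(*random.sample(parents, 2))]
                           for _ in range(pop - len(parents))]
        return max(sum(x) for x in P)

    print(evolve(lamarck=True), evolve(lamarck=False))
-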
ALGORITHM FOR TRAINING AN ARTIFICIAL NEURAL NETWORK FOR FACTOR PREDICTION OF THE LIFE OF POWER CABLE LINE INSULATING MATERIALS
N. K. Poluyanovich, M. N. Dubyago
Abstract. The article is devoted to the research of thermofluctuation processes in accordance with the theory of thermal conductivity for solving the problems of factor prediction of the residual life of insulating materials based on the non-destructive temperature method. The relevance of the task of developing an algorithm for predicting the temperature of power cable line (SCL) cores in real time based on the data of
the temperature monitoring system, taking into account the change in the current load of the line
and external heat removal conditions, is justified. The experimental method revealed the types of
artificial neural networks, their architecture and composition, which provide maximum prediction
accuracy with a minimum set of significant factors. A neural network has been developed to determine the temperature regime of the current-carrying core of the power cable. The minimum
set of significant factors and the dimension of the input training vector are determined, which provides
the versatility of the neural network prediction method. A neural network for determining the
temperature mode of the current-carrying core is designed to diagnose and predict the electrical
insulation (EI) life of a power cable. The model allows assessing the current insulation state and
predicting the residual resource of the SCL. Comparative analysis of experimental and calculated
characteristics of the learning algorithms of artificial neural networks is carried out. It has been found that the
proposed algorithm of artificial neural network can be used for prediction of current-carrying
core temperature mode, three hours in advance with accuracy up to 2.5% of actual value of core
temperature. The main field of application of the developed neural network for determining the
temperature mode of the current-carrying core is in diagnostics and prediction of the electrical
insulation (EI) life of the power cable. The development of an intelligent system for predicting the
temperature of the SCL core contributes to the planning of the operation modes of the electric
network in order to increase the reliability and energy efficiency of their interaction with the integrated
energy system.
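A hedged sketch of the factor-prediction idea (the feature set, horizon and model below are assumptions and the data are synthetic; the article's ANN and training algorithm differ): a small regressor maps recent load-current and ambient-temperature readings to the core temperature several steps ahead.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: core temperature follows I^2 heating plus ambient.
    t = np.arange(2000)
    load = 100 + 30 * np.sin(t / 150) + rng.normal(0, 2, t.size)
    ambient = 20 + 5 * np.sin(t / 400)
    core = ambient + 0.004 * load**2 + rng.normal(0, 0.5, t.size)

    H, AHEAD = 6, 3  # history window, prediction horizon
    X = np.stack([np.concatenate([load[i - H:i], ambient[i - H:i]])
                  for i in range(H, t.size - AHEAD)])
    y = core[H + AHEAD:]

    # One hidden layer fitted by ridge-regularized least squares on random
    # features (an extreme-learning-machine stand-in for full backpropagation).
    W = rng.normal(0, 0.01, (X.shape[1], 32))
    Z = np.tanh(X @ W)
    beta = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(32), Z.T @ y)
    pred = Z @ beta
    print("mean abs error, deg C:", np.mean(np.abs(pred - y)))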
SECTION II. COMMUNICATION, NAVIGATION AND GUIDANCE
-
DIRECTIONAL AND POLARISATION PROPERTIES OF A MICROSTRIP RECONFIGURABLE ANTENNA WITH TUNABLE FREQUENCY AND POLARISATION
A. A. Vaganova, N. N. Kisel, A. I. Panychev
Abstract. A reconfigurable antenna is an antenna whose parameters can be varied according to the
requirements of a particular situation. Under variable parameters we can understand the operating
frequency range, radiation pattern, polarization, as well as various combinations of these parameters.
In this paper, we propose the design of a reconfigurable microstrip antenna with tunable
frequency and polarization, and investigate its radiation pattern and polarisation properties. The
antenna has compact dimensions and can be used in wireless communication systems operating in
the 2–7 GHz range. Five PIN diodes are included in the antenna design to change the resonant frequency
and polarization of the antenna by turning the diodes on and off. The simulation of the
proposed antenna was performed in the FEKO program and the main parameters of the antenna
were obtained. The analysis of the simulation results showed that for the lower part of the studied
frequency range (2.05, 2.45 and 3.7 GHz), the polarization is linear. When operating in the higher
sub-range (5.4, 5.6 and 5.75 GHz), the antenna is circularly polarized, and the direction of the
polarisation vector rotation depends on the connection of the diodes. This ability to switch polarization
to the orthogonal one at the same frequency makes it possible to perform efficient signal reception under multipath propagation conditions.
-
DECIMETER RANGE GENERATOR
A. N. Zikiy, A. S. Kochubey
Abstract. The receiver's local oscillator (heterodyne) and the transmitter's master oscillator are the most important components of communication equipment, as they determine its stability and range properties. In recent years, the Meteor
plant has created a number of new voltage-controlled generator chips with high electrical parameters.
However, the advertising materials of this company do not contain a number of parameters
that are important from the point of view of the consumer. Therefore, the purpose of this work
is to study the main characteristics of the generator, including those not declared by the supplier:
the width of the signal spectrum, the average steepness of the modulation characteristic, the level
of harmonics. The object of the study is the GUN382 microchip in its typical application circuit.
The results of an experimental study of a VCO operating in the 1200 MHz region are presented.
The estimation of parasitic products in the spectrum of the output signal is given. Photos of the
spectrum of the output signal showing a small width of the spectral line are presented. The modulation
characteristics are measured when the control voltage and supply voltage change, and their
average steepness is calculated. These data allow us to make reasonable requirements for the
stability of the control and supply voltages. The results obtained can be used in receiving and
transmitting equipment for communication, navigation, and electronic warfare. The article expands
the understanding of the Meteor plant's line of generators and demonstrates their high electrical
characteristics: operating frequency range 1200 ± 16 MHz; output power of at least 1 dBm; supply
voltage +5 V; control voltage from 0 to 8 V; the level of the second and third harmonics does not exceed minus 22 dB relative to the useful signal.
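A quick check of the average modulation steepness implied by the reported figures (assuming the full 1200 ± 16 MHz swing is reached over the 0 to 8 V control range; the measured curve in the article may differ):

    # Average tuning steepness = frequency swing / control-voltage span.
    f_min, f_max = 1184e6, 1216e6   # Hz (1200 MHz +/- 16 MHz)
    v_min, v_max = 0.0, 8.0         # V
    steepness = (f_max - f_min) / (v_max - v_min)
    print(steepness / 1e6, "MHz/V")  # -> 4.0 MHz/V on average
-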
RESULTS OF NUMERICAL STUDY OF SCATTERING CHARACTERISTICS IN ANTENNA RADOMES BASED ON METAL-DIELECTRIC GRATINGS
A. O. Kasyanov
Abstract. A mathematical model of multilayered printed frequency selective surfaces with dielectric covers is presented in this paper. The model is built on the assumptions of an infinite array and perfect conductivity of the microstrip elements. Such printed structures can be used as frequency selective surfaces and as covers with controllable characteristics (for example, tunable filters, adaptive radar covers, electronically switched polarizers). Full-wave analysis is carried out by the integral equation method of electromagnetics. The numerical solution of the integral equation has been obtained by Galerkin's method. The unknown distribution of surface magnetic currents has been approximated by rooftop basis functions. The generalized scattering matrix method was used for the simulation of multilayered printed frequency selective surfaces. The paper presents a compound algorithm which combines the integral equation method with the generalized scattering matrix method. Numerous numerical examples are presented proving the effectiveness of the algorithm. By means of this model, multilayer frequency selective surfaces were synthesized as periodic arrays of printed elements with arbitrary reradiator shapes. It is known that printed elements of special shape can provide both rejection and transmission of electromagnetic waves at given frequencies and have negligible angular sensitivity. The results of the constructive synthesis of printed frequency selective surfaces as rejecting or transmitting filters with negligible angular sensitivity are presented in the paper. The algorithm is rather flexible: it reuses the basic problem solution many times, which makes the preparation of the computer code much more efficient and does not require changing the solution of the base problem itself.
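A hedged sketch of the generalized scattering matrix step (scalar two-port case for brevity; in the article the blocks are matrices over Floquet harmonics): two layers with scattering parameters SA and SB are cascaded with the Redheffer star product.

    def cascade(SA, SB):
        """Star product of two two-ports given as (S11, S12, S21, S22)."""
        A11, A12, A21, A22 = SA
        B11, B12, B21, B22 = SB
        d = 1.0 - A22 * B11  # accounts for multiple reflections between layers
        S11 = A11 + A12 * B11 * A21 / d
        S12 = A12 * B12 / d
        S21 = B21 * A21 / d
        S22 = B22 + B21 * A22 * B12 / d
        return (S11, S12, S21, S22)

    # Example: two identical partially reflecting sheets.
    sheet = (0.3, 0.95, 0.95, 0.3)
    print(cascade(sheet, sheet))
-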
HYBRID EXECUTION OF QUERIES TO ANALYTICAL DATABASES
P. A. Kurapov
Abstract. Analytical database engines should benefit from evolving heterogeneous distributed architectures
and utilize their resources efficiently: various accelerators, complex memory hierarchy,
and distributed nature of systems bring performance improvement opportunities. The article reviews
existing approaches for in-memory DBMS query executor implementation using hardware
accelerators, especially GPUs. Massive parallelism and high on-device memory bandwidth make
graphics processors a promising alternative as a core query evaluation device. Existing methods
do not utilize all modern hardware capabilities and usually are bound, performance-wise, by relatively
slow PCIe data transfer in a GPU-as-a-co-processor model for each kernel representing a
single relational algebra operator. Another issue of existing approaches is the explicit code base separation for relational algebra operator code generation (for CPU and GPU), which significantly limits the possibilities of joint device usage for performance gains and makes it less feasible. The
article presents an efficient query execution method using an example of two device classes (CPU
and GPU) by compiling queries into a single, device-agnostic intermediate representation (SPIR-V)
and an approach for the corresponding hybrid physical query plan optimization based on an extended
classical “Exchange” operator with explicit control over heterogeneous resources and parallelism
level available. A cost model composition process is proposed that uses benchmarking of basic DBMS compute patterns and bus bandwidth data for both relational and auxiliary operators. Potential
processing speedup from holistic query optimization is estimated empirically with the commercial open-source DBMS OmniSciDB. Preliminary results show a significant possible speedup (3-8x, depending on device choice) even from just using the right device for the job.
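A hedged sketch of the kind of per-operator cost model described (the throughput and bus figures below are placeholders, not measured values): a device is chosen for an operator by comparing estimated compute time plus any required transfer over the interconnect.

    def op_time(rows, bytes_per_row, throughput_rows_s, bus_gb_s, data_on_device):
        compute = rows / throughput_rows_s
        transfer = 0.0 if data_on_device else \
            rows * bytes_per_row / (bus_gb_s * 1e9)
        return compute + transfer

    rows = 100_000_000
    for device, thr, on_dev in [("CPU", 2e8, True), ("GPU", 4e9, False)]:
        t = op_time(rows, 16, thr, 12.0, on_dev)  # ~12 GB/s effective PCIe
        print(device, f"{t:.3f} s")
    # An exchange-style planner would insert transfers only where the faster
    # device wins by more than the added data-movement cost.
-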
OUTLIER DETECTION IN STEREO-VISUAL ODOMETRY BASED ON HIERARCHICAL CLUSTERING
P. A. Pantelyuk
Abstract. This paper presents an approach to stereo-visual odometry without explicitly calculating optical flow. Visual odometry is a method of obtaining navigation information by processing a sequence of frames from onboard cameras. There are two approaches to processing video information: using well-localized areas of the image (feature points) or using all high-contrast pixels (a direct method). The direct method works by using the intensities of all high-contrast pixels in
the image, making it possible to reduce the computational cost of searching for, describing, and matching feature points and to increase the accuracy of motion estimation. However, methods
of this class have a drawback - the presence of moving objects in the frame significantly reduces
the accuracy of the estimation of motion parameters. Outlier detection techniques are used to
avoid this problem. Classical methods for detecting outliers in input data, such as RANSAC, are
poorly applicable and have high computational costs due to the computationally complex function
of rating hypotheses. This work aims to describe and demonstrate an outlier detection approach based on the hierarchical clustering algorithm, which selects the statistically most probable solution,
bypassing the stage of rating each hypothesis, which significantly reduces the computational
complexity. For hierarchical clustering, a measure of the distance between hypotheses with low
sensitivity to errors in estimating motion parameters is proposed. An extension of the stereo-visual
odometry algorithm is also proposed to work in more complex visibility conditions due to the transition
from the intensity representation of the image to a multichannel binary one. Transforming
an image to a multichannel binary representation gives invariance to changes in image brightness.
However, it requires modification of nonlinear optimization algorithms to work with binary descriptors.
As a result of the work, it has been shown that the proposed outlier detection algorithm
can operate in real-time on mobile devices and can serve as a less resource-intensive replacement
for the RANSAC algorithm in problems of visual odometry and optical flow estimation. Qualitative
metrics of the proposed solution are demonstrated on the KITTI dataset. The dependence of the algorithm's performance on its parameters is given.
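A hedged sketch of the consensus idea (the distance measure is simplified to Euclidean over motion-parameter vectors, whereas the paper proposes a measure with low sensitivity to estimation errors): motion hypotheses computed from small pixel subsets are clustered hierarchically, and the largest cluster is taken as the statistically most probable solution, without RANSAC-style scoring of every hypothesis.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def consensus_motion(hypotheses, tol):
        """hypotheses: (N, 6) array of motion-parameter vectors."""
        Z = linkage(hypotheses, method="single")
        labels = fcluster(Z, t=tol, criterion="distance")
        biggest = np.bincount(labels).argmax()
        inliers = hypotheses[labels == biggest]
        return inliers.mean(axis=0), labels == biggest

    rng = np.random.default_rng(1)
    good = rng.normal([0.1, 0, 0, 0, 0, 1.0], 0.01, (80, 6))  # static scene
    bad = rng.normal([0.4, 0, 0, 0, 0, 2.0], 0.05, (20, 6))   # moving object
    motion, mask = consensus_motion(np.vstack([good, bad]), tol=0.1)
    print(motion.round(2), int(mask.sum()), "inliers")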
SECTION III. MODELING OF PROCESSES AND SYSTEMS
-
MODEL OF SCATTERING OF RADAR SIGNALS FROM UAV
V. A. Derkachev
Abstract. In this article, a model of scattering of radar signals from unmanned aerial vehicles (UAVs)
of a multi-rotor type is considered for the formation of training data for a neural network classifier.
Recently, there has been an increased interest in studying the issue of detecting and classifying
small unmanned aerial vehicles (UAVs), which is associated with the expansion of the range of UAVs on sale and in production. In addition to the development of UAVs, an increase in the performance
of computers made it possible to create classifiers using new neural network algorithms.
This model generates radar images obtained as a result of the reflection of a chirp radar signal
from an unmanned aerial vehicle, taking into account the configuration, characteristics, current
location and flight parameters of the observed object. When calculating the reflected signal, the
angles of rotation of the UAV (pitch, roll and yaw), flight speed, size and location of propellers in
the current UAV configuration are taken into account. The resulting model can be useful for the
formation of a training set for a classifier of multi-rotor unmanned aerial vehicles built using convolutional neural networks. The need to use a model that generates data for a neural
network is due to the requirement for a large number of training and verification samples, as well
as a wide variety of configurations of unmanned aerial vehicles, which greatly increases the complexity
and cost of creating a training dataset using experimental measurements. In addition to
training the neural network itself, this model can be used to assess the detection and classification
of various types of multi-rotor UAVs in the development of a specialized radar station for detecting this type of object.
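A hedged sketch of the point-scatterer reflection idea (carrier, rotation rate and geometry below are arbitrary; the article's model additionally accounts for UAV orientation, speed and configuration): the echo is summed over scatterers on rotating propeller blades, so the phase history carries the micro-Doppler signature used as classifier input.

    import numpy as np

    c, fc = 3e8, 10e9                 # propagation speed, carrier (assumed)
    prf, n_pulses = 10e3, 512
    rot_hz, blade_len, r0 = 80.0, 0.12, 500.0

    t = np.arange(n_pulses) / prf
    echo = np.zeros(n_pulses, dtype=complex)
    for blade_phase in (0.0, np.pi):  # two-blade rotor
        for r in np.linspace(0.02, blade_len, 8):  # scatterers along a blade
            # Radial (line-of-sight) component of the rotating scatterer.
            radial = r0 + r * np.cos(2 * np.pi * rot_hz * t + blade_phase)
            echo += np.exp(-1j * 4 * np.pi * fc * radial / c)

    # Doppler spectrum: blade flashes appear as broad micro-Doppler sidebands.
    spec = np.abs(np.fft.fftshift(np.fft.fft(echo)))
    print("peak bin:", int(np.argmax(spec)), "of", n_pulses)
-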
RESEARCH OF THE ELECTROMECHANICAL TRANSDUCER'S FASTENING INFLUENCE ON THE ACCELERATION PIEZOSENSOR'S CHARACTERISTICS
V. V. Yanchich
Abstract. The research is carried out to obtain data necessary to improve calculation accuracy and for design optimization of piezosensors of mechanical quantities. These sensors are widely used
for control, monitoring and diagnostics of complex equipment and engineering structures. The
task of the research is to study the features of working deformations in the electromechanical piezotransducer in the area of its fastening to the sensor base and to estimate their influence on the main metrological characteristics. The object of the research is an electromechanical transducer in the form of a cylindrical monolithic block made of piezoelectric ceramics with a height-to-diameter ratio of
0.33 to 2. The transducer is fixed on the sensor's base, which is subjected to the translational acceleration of vibration from the controlled object. Using the ANSYS Multiphysics software package, a mathematical model of the transducer with two fundamentally different types of fastening, “free sliding” and “rigid”, has been investigated. At the same time, the mechanism of
transverse mechanical shunting of the transducer's deformation in the boundary region of the rigid fastening was revealed. A “fastening effect coefficient” and a formula for determining it for various transducer height-to-diameter ratios are proposed for the quantitative estimation of the influence of fastening conditions on the transducer's characteristics. Taking into account the properties of structural materials
used in practice and the most frequently used elastic compression of elements, a method was developed
and experimental studies were carried out to determine the effect of transducer’s fastening in
real sensor designs. It was found that base material properties and transducer dimensions ratio in
real sensor design can cause changes in voltage conversion coefficient up to 15%, the charge
conversion coefficient up to 22%, electric capacity up to 9% and longitudinal resonance frequency
up to 16%. The influence of boundary fastening conditions decreases simultaneously with the increase
of transducer’s relative height. The calculating data were obtained experimentally for the
fastening effect coefficient when making the sensor’s base of metals with elastic modulus of
74300 GPa and density of 270017700 kg/m3. The results of research carried out can be taken
into account when designing piezosensors of mechanical quantities.
SECTION IV. INFORMATION ANALYSIS AND PATTERN RECOGNITION
-
ERROR ESTIMATION FOR MULTIPLE COMPARISON OF NOISY IMAGES
A. N. Karkishchenko, V. B. Mnukhin
Abstract. The aim of this work is to study the effect of image noise on the quality of comparison of a
finite set of images of the same shape and size. This task inevitably arises when analyzing scenes, detecting
individual objects, detecting symmetry, etc. The noise factor must be taken into account, since the
difference between digital objects can be caused not only by the mismatch of the compared images of
real objects, but also by distortions due to noise, which almost always takes place. This difference turns out to be proportional to the level of the noise component. The main result of this article is an
analytical estimate for the probability of a given level of error, which may arise in the multiple comparison
of a finite set of commensurate digital images. This estimate is based on a low-level comparison,
which is a pixel-by-pixel calculation of image differences using the Euclidean metric. In this case, a
standard assumption is made about the independent normal noise of image intensities with zero mathematical
expectation and a priori established standard deviation in each pixel. The evidence presented in
the article allows us to assert that the obtained estimate should be regarded as sufficiently "cautious"
and it can be expected that in reality the scatter of the measure caused by noise in the image will be
significantly less than the theoretically found boundary. The estimates obtained in this work are also
useful for detecting various types of symmetry in images, which, as a rule, lead to the need to calculate
the difference of an arbitrary number of commensurate digital areas. In addition, they can be used as
theoretically grounded threshold values in tasks requiring a decision on the coincidence or difference of
images. Such threshold values inevitably appear at various stages of processing noisy images, and the
question of their specific values, as a rule, remains open; at best, heuristic considerations are proposed
for their selection.
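A hedged sketch of the threshold logic (the article derives a sharper analytical bound for the multiple-comparison case; only the standard chi-square argument for a pair of images is shown here): for two images differing only by independent N(0, sigma^2) noise in each of n pixels, the squared Euclidean difference is distributed as 2 sigma^2 chi2(n), which yields a threshold for a given error level.

    import numpy as np
    from scipy.stats import chi2

    def match_threshold(n_pixels, sigma, alpha=0.01):
        """Largest squared distance consistent with pure noise at level alpha."""
        return 2.0 * sigma**2 * chi2.ppf(1.0 - alpha, df=n_pixels)

    rng = np.random.default_rng(0)
    img = rng.uniform(0, 255, (64, 64))
    a = img + rng.normal(0, 3.0, img.shape)
    b = img + rng.normal(0, 3.0, img.shape)
    d2 = np.sum((a - b) ** 2)
    # Expected True here: both are noisy copies of the same underlying image.
    print(d2 <= match_threshold(img.size, 3.0))
-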
TEXT VECTORIZATION USING DATA MINING METHODS
Ali Mahmoud Mansour, Juman Hussain Mohammad, Y. A. Kravchenko
Abstract. In text mining tasks, the textual representation should be not only efficient but also interpretable,
as this enables an understanding of the operational logic underlying the data mining
models. Traditional text vectorization methods such as TF-IDF and bag-of-words are effective and
characterized by intuitive interpretability, but suffer from the “curse of dimensionality”, and they
are unable to capture the meanings of words. On the other hand, modern distributed methods effectively
capture the hidden semantics, but they are computationally intensive, time-consuming,
and uninterpretable. This article proposes a new text vectorization method called Bag of Weighted Concepts (BoWC) that represents a document according to the information of the concepts it contains. The
proposed method creates concepts by clustering word vectors (i.e. word embedding) then uses the
frequencies of these concept clusters to represent document vectors. To enrich the resulting document
representation, a new modified weighting function is proposed for weighting concepts based
on statistics extracted from word embedding information. The generated vectors are characterized
by interpretability, low dimensionality, high accuracy, and low computational costs when used in
data mining tasks. The proposed method has been tested on five different benchmark datasets in
two data mining tasks, document clustering and classification, and compared with several baselines,
including Bag-of-words, TF-IDF, Averaged GloVe, Bag-of-Concepts, and VLAC. The results
indicate that BoWC outperforms most baselines and gives 7% better accuracy on average.
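A minimal sketch of the BoWC idea (toy 2-D "embeddings" instead of real word vectors, and plain concept frequency instead of the article's modified weighting function): words are clustered into concepts, and a document becomes a vector over its concept frequencies.

    import numpy as np
    from sklearn.cluster import KMeans

    vocab = ["cat", "dog", "pet", "car", "bus", "road"]
    # Toy stand-in for pretrained word embeddings.
    emb = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
                    [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])

    # Concepts = clusters of word vectors.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(emb)
    concept = dict(zip(vocab, km.labels_))

    def bowc_vector(tokens, n_concepts=2):
        v = np.zeros(n_concepts)
        for w in tokens:
            if w in concept:
                v[concept[w]] += 1.0
        return v / max(v.sum(), 1.0)  # normalized concept frequencies

    print(bowc_vector("the cat and the dog crossed the road".split()))
-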
LIMITING THE NUMBER OF DIFFERENT TEST VECTORS TO OBTAIN ALL SOLUTIONS OF A SYSTEM OF LINEAR EQUATIONS OF THE SECOND MULTIPLICITY ON A MULTIPROCESSOR COMPUTER SYSTEM
A. K. Melnikov
Abstract. In the paper we consider the calculation of all integer nonnegative solutions of a linear equation system (LES) of the second type order by a method of sequential vector testing. The method
checks whether a vector is a solution of the LES. We consider different vectors and test if they
belong to the set of the LES solutions. As a result, after such testing we obtain all solutions of the
LES. The LES testing vector consists of the elements which are the numbers of some alphabet signs
with the same number of occurrences in the sample. The LES unites the number of occurrences of
the elements of all types in the sample under consideration, the power of the alphabet, the size of the sample, and the limitation on the maximum number of occurrences of the alphabet signs in the
sample. The LES solution is the base for calculation of exact statistics probability distributions
and their exact approximations by the method of the second type order. Here, the exact approximations are Δ-exact distributions. The difference between the Δ-exact distributions and the exact distributions does not exceed a predefined, arbitrarily small value Δ. The number of test vectors is one of the quantities that define the algorithmic complexity of the method of the second type order. Without it,
it is impossible to define the parameters of samples, and to calculate exact distributions and their
exact approximations for limited hardware resource. We consider various test vectors for the limited
maximum number of occurrences of the alphabet signs in the sample, and for the unlimited
one. We have obtained formulas to calculate the number of tests for various vectors. Here, the
values of the power of the alphabet, the size of the sample, and the limitations for the maximum
number of occurrences of the alphabet signs in the sample can be arbitrary. Using the obtained
formulas, we can get all integer nonnegative solutions of the LES of the second type order. We
can use the obtained formula for analysis of algorithmic complexity of calculations of exact distributions
and their exact approximations with the predefined accuracy Δ.
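For illustration, the standard inclusion-exclusion count of bounded nonnegative solutions is sketched below (the article derives formulas for its specific second-type-order setting): the number of solutions of x1 + ... + xk = n with every xi <= r is the sum over j of (-1)^j C(k, j) C(n - j(r+1) + k - 1, k - 1).

    from math import comb

    def bounded_solutions(n, k, r):
        """Nonnegative integer solutions of x1+...+xk = n with each xi <= r."""
        total = 0
        for j in range(k + 1):
            rest = n - j * (r + 1)
            if rest < 0:
                break
            total += (-1) ** j * comb(k, j) * comb(rest + k - 1, k - 1)
        return total

    print(bounded_solutions(10, 4, 3))    # bounded case: 10
    print(bounded_solutions(10, 4, 10))   # effectively unbounded: C(13, 3) = 286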
SECTION V. MANAGEMENT SYSTEMS
-
GENERALIZED CIRCLE CRITERION FOR THE ABSOLUTE STABILITY OF DISTRIBUTED SYSTEMS
Z. R. Mayransaev, A. B. Chernyshev
Abstract. Control of systems with distributed parameters is one of the complex and important branches of cybernetics, the science of control, information and systems. The need to study and develop this scientific discipline is due to the fact that, to control many objects, one has to take into account their geometric parameters, that is, their spatial extent. So far, many results have been achieved in the field of distributed system theory, but for the most part these results are aimed at the study of linear systems. In the course of researching nonlinear automatic systems, one of the main tasks to be solved is finding the possible equilibrium states of the system under study. Research into the stability of such systems is also a major challenge. Using the technique of expanding the functions that describe distributed signals into series, according to the general theory of Fourier series, a class of distributed systems is singled out in which expansion in eigen vector functions is admissible. Due to this capability, the transfer function that describes an object with distributed parameters appears as a combination of the transfer functions of separate spatial modes. The concept of "generalized coordinates" is introduced to take the spatial coordinates into account. For systems with distributed parameters, the spatial gain factor is adopted as the slope of the nonlinearity sector. A cylindrical criterion for the absolute stability of nonlinear distributed systems has been developed and formulated, based on a generalization of the circle criterion. An illustration of the spatial sector of nonlinearity is given. For the first time, a generalized circle criterion for the stability of distributed systems has been developed that takes into account the dependence of the nonlinear characteristics on the spatial coordinates. A graphic illustration of this criterion is presented.
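For reference, a sketch of the classical circle criterion test that the article generalizes to spatial modes (the plant below is an arbitrary example, not one from the paper): for a nonlinearity confined to the sector [k1, k2], absolute stability holds if the Nyquist plot of the linear part stays outside the disk whose diameter lies on the real axis between -1/k1 and -1/k2.

    import numpy as np

    k1, k2 = 0.5, 2.0
    center = complex(-(1 / k1 + 1 / k2) / 2, 0.0)
    radius = (1 / k1 - 1 / k2) / 2

    w = np.logspace(-2, 3, 4000)
    s = 1j * w
    W = 1.0 / ((s + 1) * (0.1 * s + 1) * (0.01 * s + 1))  # example plant

    # Criterion check: the frequency response must avoid the critical disk.
    outside = np.all(np.abs(W - center) > radius)
    print("circle criterion satisfied:", bool(outside))
    # In the distributed case, the same check is repeated for the transfer
    # function of every spatial mode, with the sector allowed to vary over
    # the spatial coordinate.
-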
TOP-DOWN VS BOTTOM-UP METHODOLOGIES FOR ADAS SYSTEM DESIGN
D. E. Chickrin, A. A. Egorchev
Abstract. Selection of the principal design methodology has a significant impact on final product
quality, including its evolvability and scalability. The article discusses the features of traditional
bottom-up and top-down design methodologies in the context of ADAS (driver assistance and automated
driving systems). The necessity of a combined design methodology is shown, since “pure” methodologies are unsuitable for the design of this kind of system. For this purpose, the features
and limitations of the top-down approach are considered: commitment to maximum compliance
of the developed system with its requirements; methodological rigor of the approach; difficulty
of system testing in the process of the development; sensitivity to changes in requirements.
The features and limitations of the bottom-up approach are considered: possibility of iterative
development with obtaining intermediate results; possibility of using standard components; scalability
and flexibility of the developed system; possibility of discrepancy of functions of subsystems
to requirements, which may appear only at later stages of development; possible inconsistency in
development of separate subsystems and elements. The features and factors of ADAS system development
are considered: increased requirements for reliability and safety of the system; heterogeneity
of used components. Two stages of ADAS-systems development are distinguished: the
stage of intensive development and the stage of extensive evolution. The applicability of one or
another methodology to various aspects of ADAS system development and evolution (such as:
requirements definition; compositional morphism; scalability and extensibility; stability and sustainability;
cost and development time; development capability) is considered. A comparison of the methodologies concludes that there are aspects of technical system design and development in which one or the other methodology has a significant advantage. Only the bottom-up approach can ensure the proper evolution of the system. However, for complex systems, it is critical to define the initial requirements for the system, which can only be achieved using the top-down methodology.