University proceedings. Volga region. Technical sciences
ISSN (print): 2072-3059
Founder: Penza State University
Editor-in-Chief: Volchikhin Vladimir Ivanovich, Doctor of Engineering Sciences, Professor
Frequency / Access: 4 issues per year / Open
Included in: RSCI (Russian Science Citation Index), Higher Attestation Commission (VAK) List
Registration: the journal is registered with the Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor).
Registration certificate: ПИ № ФС77-26983, dated 19.01.2007.
Periodicity: 4 issues per year. Print run: 1000 copies.
Scientific areas (subject groups):
2.2.4. Measuring instruments and methods (by type of measurement)
2.2.9. Design and technology of instrumentation and electronic equipment
2.2.11. Information-measuring and control systems
2.3.1. System analysis, management and information processing
2.3.2. Computer systems and their elements
2.5.5. Technology and equipment of mechanical and physical-technical processing
2.5.6. Technology of mechanical engineering
2.5.9. Methods and devices for monitoring and diagnostics of materials, products, substances and the natural environment
The journal publishes original articles describing results of fundamental and applied research in engineering sciences, analysis of advanced technologies and accomplishments of sciences and engineering practice, as well as survey articles by leading experts in the journal’s subject area.
Articles in "University proceedings. Volga region. Technical sciences" are published under the Creative Commons Attribution License (CC BY 4.0).
The journal provides free permanent full-text access to all published materials. Readers may use the works without restriction, provided authorship is attributed.
Current Issue
No 4 (2025)
COMPUTER SCIENCE, COMPUTER ENGINEERING AND CONTROL
Model and algorithm for forming spatially distributed groups of aerial objects to optimize their energy supply
Abstract
Background. The object of the research is the energy supply system for spatially distributed groups of aerial objects. The subject of the research is algorithms for forming optimal clusters for simultaneous energy transmission to multiple unmanned aerial vehicles. The purpose of the work is to develop a new hybrid algorithm that optimizes the energy supply of aerial object groups, taking into account their current energy state and spatial distribution. Materials and methods. The research develops and analyzes a hybrid optimization algorithm for energy supply of spatially distributed groups of aerial objects. The algorithm is based on an adapted greedy method for solving the set covering problem with the application of a specialized objective function that includes three key criteria: maximizing the number of simultaneously serviced objects, prioritizing objects with critical charge levels, and minimizing angular movements of the energy support complex. The proposed strategy takes into account the spatial-energy characteristics of objects to optimize the distribution of energy resources. A mathematical model of spatial intersections of energy beam radiation patterns has been developed for the formation of optimal clusters. Results. A new hybrid algorithm has been proposed that provides priority service to objects with critically low charge levels and minimizes the number of necessary rotations of the energy support complex. The results of numerical modeling demonstrated the superiority of the developed algorithm over classical clustering methods (k-means, DBSCAN, hierarchical clustering) according to key metrics: coverage coefficient, efficiency of servicing aerial objects with critical charge, and angular efficiency. Conclusions. The developed hybrid algorithm for forming spatially distributed groups of aerial objects is more effective than existing methods for optimizing the energy supply of unmanned aerial vehicles. 
The proposed approach has a wide range of practical applications in energy supply systems for group flights of unmanned aerial vehicles for various purposes.
5-17
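The greedy cluster-formation step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the beam model, the weight values, and the 20 % critical-charge threshold are all assumptions.

```python
# Hypothetical sketch of a greedy set-cover step with the three criteria
# named in the abstract: serviced-object count, critical-charge priority,
# and angular movement of the energy complex. All parameters are invented.

def greedy_energy_clusters(objects, beam_width, start_angle=0.0,
                           w_count=1.0, w_critical=2.0, w_turn=0.01):
    """objects: list of (angle_deg, charge) pairs, charge in [0, 1].
    Returns the chosen beam center angles in service order."""
    uncovered = set(range(len(objects)))
    current = start_angle
    plan = []
    while uncovered:
        best = None
        # candidate beams are centered on each still-uncovered object
        for i in uncovered:
            center = objects[i][0]
            covered = {j for j in uncovered
                       if abs(objects[j][0] - center) <= beam_width / 2}
            critical = sum(1 for j in covered if objects[j][1] < 0.2)
            turn = abs(center - current)
            score = (w_count * len(covered) + w_critical * critical
                     - w_turn * turn)
            if best is None or score > best[0]:
                best = (score, center, covered)
        _, center, covered = best
        plan.append(center)
        uncovered -= covered
        current = center
    return plan
```

The critical-charge weight makes the cluster containing a nearly discharged object win over an equally large cluster of healthy objects, while the turn penalty breaks ties in favor of smaller rotations.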
A simple algorithm for suppressing noise growth due to sample size reduction when assessing the impact of artillery barrel wear on projectile dispersion
Abstract
Background. The purpose of the work is to suppress the noise that arises in statistical calculations when small samples are used during control tests of artillery barrels. Materials and methods. Simulation modeling is used to reproduce the growth of the projectile dispersion ellipse due to wear of the artillery barrel. In addition to estimates based on the standard deviation of the data, estimates of the minimum and maximum deviations of the impact points from the center of dispersion of the projectiles are used. Results. It was shown that the controlled statistical parameters in a sample of 21 experiments have distributions close to normal and are essentially independent. This allows them to be converted to a single probability scale using linear transformations and the data to then be accumulated by averaging. As a result, the probability of errors caused by using a small sample in control tests of barrels can be reduced to a value corresponding to doubling the sample size.
18-28
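The idea of bringing several dispersion statistics to a single scale by linear transformation and then averaging can be illustrated with a hedged Monte Carlo sketch; the calibration procedure, nominal sigma, seeds, and trial counts below are assumptions for illustration, not the paper's method.

```python
import random
import statistics

# Sketch: standardize two dispersion statistics (sample std and half-range
# of deviations) against a Monte Carlo calibration under a nominal sigma,
# then average the standardized values to suppress estimator noise.

def calibrated_score(sample, n=21, sigma0=1.0, trials=2000, seed=1):
    rng = random.Random(seed)

    def stats_of(xs):
        return (statistics.stdev(xs), (max(xs) - min(xs)) / 2)

    # Monte Carlo calibration under the nominal dispersion sigma0
    cal = [stats_of([rng.gauss(0, sigma0) for _ in range(n)])
           for _ in range(trials)]
    means = [statistics.mean(c[i] for c in cal) for i in (0, 1)]
    sds = [statistics.stdev(c[i] for c in cal) for i in (0, 1)]
    s, halfrange = stats_of(sample)
    # linear transformation of each statistic to a common (z-score) scale
    z = [(obs - m) / sd for obs, m, sd in zip((s, halfrange), means, sds)]
    return sum(z) / len(z)  # averaging accumulates the two indicators
```

A sample drawn with the nominal dispersion scores near zero, while a sample with inflated dispersion scores clearly positive on the common scale.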
Analytical assessment of the probabilistic-temporal characteristics of the start and end of activities in an information and telecommunication system
Abstract
Background. The object of this study is mathematical models used to calculate the temporal characteristics of the start and end of actions in an information and telecommunications system under the influence of random factors. The purpose of this work is to derive formulas for calculating the probabilistic-temporal characteristics of an information and telecommunications system, taking into account uniform and normal distributions of the random factors affecting the start and end times of activities and actions. Materials and methods. The impact of independent random variables on the start and end times of actions and activities in a system is analyzed for both bounded (uniformly distributed) and unbounded (normally distributed) random factors. Results. Formulas are derived for calculating the impact of random deviations in the start and end times of actions and activities under uniform and jointly uniform and normal distributions of deviation values from their ideal positions. Probability theory methods are employed, using the convolution integral for independent random variables. A comparison of the calculated and well-known experimental data revealed their qualitative agreement. Conclusions. The results can be applied in the design and operation of information and telecommunications systems to assess and minimize the impact of random factors on their operation.
29-38
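The key probabilistic fact behind the derived formulas, that the density of a sum of independent deviations is the convolution of the component densities, can be checked numerically: for a sum of a uniform and a normal deviation the variances add. The interval and sigma below are arbitrary example values, not the paper's data.

```python
import random
import statistics

# Illustrative check (not the paper's derivation): a start-time deviation
# that is the sum of an independent uniform component on [a, b] and a
# normal component with std sigma has variance (b-a)^2/12 + sigma^2,
# as follows from the convolution of the two densities.

def simulate_start_deviation(a=-1.0, b=1.0, sigma=0.5, n=200_000, seed=3):
    rng = random.Random(seed)
    total = [rng.uniform(a, b) + rng.gauss(0, sigma) for _ in range(n)]
    return statistics.pvariance(total)

analytic = (1.0 - (-1.0)) ** 2 / 12 + 0.5 ** 2  # (b-a)^2/12 + sigma^2
```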
Target assignments based on tuple representation of data
Abstract
Background. The study considers the formulation and solution of a balanced target assignment problem in a military air defense system using a combinatorial algorithm. The purpose of the study is to develop a combinatorial algorithm for assigning targets based on representing the subsets of targets assigned to weapons in tuple form. Materials and methods. The set of tracked targets is pre-divided into tuples, each of which corresponds to one multi-channel weapon. The preliminary division into tuples is performed automatically, with or without a constructive algorithm of asymptotic complexity O(n). Each tuple is assigned to one operational weapon, and the number of targets in a tuple is determined by the number of channels of that weapon. The possibility of reassigning targets and changing the contents of tuples is assessed using a local objective function. Results. A local objective function is defined to evaluate the possibility of transforming tuples of targets. A global objective function is defined to evaluate the effectiveness of the target allocation. The possibility of simultaneous parallel transformation of the assignments of several targets in one, two or more pairs of tuples is investigated. Conclusions. The formalized statement of the problem is similar to that of a linear discrete programming problem.
39-47
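A simplified, hypothetical sketch of the tuple mechanics described above: a pairwise swap of targets between two tuples is accepted when it improves an additive local objective. The effectiveness matrix and the form of the objective are invented for illustration and are not the paper's model.

```python
import itertools

# Sketch: each weapon holds a tuple (list) of target ids sized to its
# channel count; eff[w][t] is an invented effectiveness of weapon w
# against target t. Swaps between pairs of tuples are applied greedily
# while they increase the summed effectiveness.

def improve_by_swaps(tuples_, eff):
    """tuples_: list of target-id lists, one per weapon; mutated in place."""
    improved = True
    while improved:
        improved = False
        for w1, w2 in itertools.combinations(range(len(tuples_)), 2):
            for i, t1 in enumerate(tuples_[w1]):
                for j, t2 in enumerate(tuples_[w2]):
                    gain = (eff[w1][t2] + eff[w2][t1]
                            - eff[w1][t1] - eff[w2][t2])
                    if gain > 1e-12:  # swap strictly improves the objective
                        tuples_[w1][i], tuples_[w2][j] = t2, t1
                        improved = True
    return tuples_
```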
Conceptual representation of the properties of autonomous reactive agents in intelligent computing systems
Abstract
Background. Conceptual models of automata-type autonomous reactive agents can be constructed using the automata-based programming paradigm in conjunction with conceptual models of knowledge- and rule-based artificial intelligence. Reactive agents are considered as entities underlying an intelligent system; in addition, the concept of a reactive agent is proposed to be used as a basis for agent-oriented reactive programming as a combination of the agent-oriented approach to programming with the automata and reactive approaches. The purpose of the work is to enhance the intellectual capabilities of finite automata and Petri nets through their integration with conceptual knowledge graphs, which will enable more active implementation of hybrid integrated models in interdisciplinary fields of artificial intelligence and enhance the intellectual capabilities of agent-based systems. Materials and methods. The operation of autonomous reactive agents is formalized based on first-order predicate logic, deductive inference rules, and discrete-event (automata and reactive) approaches. The concepts of reactive programming also extend to reactive agent-based programming. Results and conclusions. The concept of a reactive agent is proposed as the basis for agent-oriented reactive programming, combining agent-oriented programming with automata-based and reactive approaches. The effectiveness of the proposed approach is illustrated by program fragments in Python and SWI-Prolog.
58-76
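An automata-type reactive agent of the kind discussed above can be sketched in a few lines of Python (the abstract mentions Python program fragments); the states, events, and transition table here are invented for illustration, not taken from the paper.

```python
# Minimal sketch of an automata-type reactive agent: its behavior is a
# finite-state transition table that reacts to discrete events, in the
# spirit of automata-based programming. All names below are illustrative.

class ReactiveAgent:
    def __init__(self, transitions, state):
        # transitions: {(state, event): next_state}
        self.transitions = transitions
        self.state = state

    def react(self, event):
        # events with no transition from the current state are ignored
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

agent = ReactiveAgent(
    {("idle", "obstacle"): "avoiding",
     ("avoiding", "clear"): "idle"},
    state="idle",
)
```

In a conceptual-graph setting, the transition table would be derived from knowledge-base rules rather than written by hand, which is the integration the abstract argues for.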
Methodology for assessing the effectiveness of neural network quantization methods for implementation on devices with limited computing power
Abstract
Background. This study examines the problem of neural network inference on devices with limited computing capabilities. A method for reducing the computational resource requirements of such devices through quantization of neural network weights is presented, and the effectiveness of various quantization methods is studied on two target platforms. Materials and methods. The quantization methods investigated in this work were post-training quantization (PTQ) and quantization-aware training (QAT). Results. Neural network models solving an image classification problem were trained and quantized, and the efficiency of quantization was evaluated on a personal computer and on an embedded Jetson Nano computer. A performance evaluation metric is introduced and validated. Conclusions. QAT reduces model size at least as effectively as PTQ while better preserving model accuracy, and it accelerates inference on devices with limited computing power.
48-57
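One ingredient of PTQ, symmetric per-tensor int8 quantization of weights, can be sketched as follows; the scaling rule and example weights are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch of symmetric per-tensor int8 weight quantization, a common
# building block of post-training quantization (PTQ). The scale maps the
# largest-magnitude weight to 127; values are invented for illustration.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 0.07]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# round-to-nearest keeps the per-weight error within half a quantum
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

Each weight shrinks from a 32-bit float to one byte (about a 4x size reduction); QAT differs in that the rounding is simulated during training so the network learns to tolerate it.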
ELECTRONICS, MEASURING EQUIPMENT AND RADIO ENGINEERING
Algorithm for testing radio-electronic equipment under conditions of ionizing radiation
Abstract
Background. Active exploration of near-Earth space requires reducing the cost and accelerating the development of electronic equipment (EE) resistant to the factors affecting a spacecraft during its operation in orbit. Materials and methods. Today, the resistance of EE to ionizing radiation is confirmed by tests with physical radiation sources, which entails high cost, long duration, and the risk of unsatisfactory test results. Results. To improve the quality of development of space-grade electronic equipment, a technique is proposed for testing the measurement channel while simulating the effect of ionizing radiation (IR) by software methods; it is implemented with an information and measuring system during real temperature exposure of the object under study. Conclusions. The proposed solution will improve the quality of confirmation and forecasting of the parameters of space technology products by supplementing existing approaches with measurements of changes in EE parameters during preliminary tests at the manufacturer's site while simulating exposure to ionizing radiation.
77-83
Extreme signal filtering using amplitude modulation
Abstract
Background. It is proposed to use amplitude modulation in tasks of determining the parameters of complex-shaped signals based on decomposition. The purpose of the study is to eliminate the effect of low-frequency components being pushed into the high-frequency (HF) region when the HF components are attenuated and, as a result, to increase the accuracy of determining the parameters of complex-shaped signals from the parameters of their components. Materials and methods. The research was carried out in the Matlab environment using models of complex-shaped signals with known parameters, as well as on real signals. Results. It is confirmed that applying AM before decomposition avoids mixing of components and improves the accuracy of determining signal parameters. Conclusions. The proposed solution, the use of amplitude modulation, makes it possible to avoid pushing high-frequency components (modes) into the low-frequency region, since the low-frequency components are the first to be extracted. The decomposition of the original signal can be obtained from the decomposition of the modulated signal.
84-91
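The spectral-shift property that motivates applying AM before decomposition can be verified numerically: multiplying a low-frequency component by a carrier relocates its energy to the sum and difference frequencies, away from the low-frequency region. The frequencies below are arbitrary examples, not the study's signals.

```python
import math

# Numerical illustration: cos(2*pi*f0*t) * cos(2*pi*fc*t) equals
# 0.5 * [cos(2*pi*(fc-f0)*t) + cos(2*pi*(fc+f0)*t)], so a component at
# f0 reappears at fc-f0 and fc+f0 after amplitude modulation.

f0, fc = 2.0, 50.0  # example component and carrier frequencies, Hz

def modulated(t):
    return math.cos(2 * math.pi * f0 * t) * math.cos(2 * math.pi * fc * t)

def two_tone(t):
    return 0.5 * (math.cos(2 * math.pi * (fc - f0) * t)
                  + math.cos(2 * math.pi * (fc + f0) * t))

# the two expressions agree to floating-point precision on a 1 s grid
max_diff = max(abs(modulated(k / 1000) - two_tone(k / 1000))
               for k in range(1000))
```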
Minimizing the error of dynamic pressure measurement with piezoelectric sensors in autonomous devices
Abstract
Background. The study examines the temperature and time dependencies of the properties of piezoelements included in dynamic pressure piezosensors, with a focus on conditions close to real-world operation, such as in power plants operating at high pressures and temperatures. The purpose of this work is to analyze mechanisms behind measurement errors, including thermosensitivity, material aging, thermoelastic stresses, pyroelectric effects, and discrepancies in the coefficients of linear thermal expansion of structural materials. Materials and methods. A modified bismuth titanate, NFI-TV-3, which is used in high-temperature piezosensors, was selected as the primary object of study. Results. The results of field tests of NFI-TV-3 piezoelements under cyclic temperature and pressure exposure, as well as the temporal stability of their electrophysical characteristics, are presented. It is shown that the main changes in the piezomodule occur within the first five cycles of thermomechanical treatment, reaching up to 15 %, after which stabilization comparable to artificial aging processes is achieved. Aging coefficients were determined for the NFI-TV-3 material, making it possible to predict changes in characteristics over time. Conclusions. Based on the obtained data, a hybrid method for compensating temperature and temporal error is proposed, employing physical-mathematical modeling and elements of machine learning; this approach makes it possible to account for individual nonlinear effects, enhance measurement accuracy, increase the calibration interval, and reduce requirements for the amount of experimental data. The obtained results and the developed method have practical value for the creation of highly accurate and reliable piezosensors for power plants and autonomous devices.
92-107
Electrical signal generator for testing and checking neurointerfaces
Abstract
Background. Brain–computer interfaces (BCIs) are widely used in rehabilitation, cognitive training, and virtual reality. However, their implementation is limited by the lack of reproducible test methods that allow for the evaluation of the entire signal path, from recording of the brain's electrical activity (EEG) to adaptation of rehabilitation procedures. The purpose of this study is to develop a hardware and software system for testing a neural interface with an autonomous generator of EEG-like signals, providing reproducible scenarios, regression validation of algorithms, and results monitoring. Materials and methods. A test complex connected to the neural interface input instead of EEG electrodes is proposed. The key element is a multichannel signal generator based on the Waveform 4 Click board (AD9106 microcircuit) controlled by an ESP32 microcontroller. The generated signals are described by a vector of parameters (amplitudes, frequencies, and phases of rhythms, noise level, artifact parameters). The implemented modes include playback of recorded sessions, rhythm simulation with an adjustable signal-to-noise ratio, and calibration of the measuring circuit. Results. The developed system provides an output signal frequency range of 0.05–200 Hz, amplitude resolution down to microvolts, simulated electrode output impedance in the range of 22–47 kOhm, and intrinsic noise of no more than 10 μV in the operating band. The system supports a cascade testing methodology: from hardware verification to assessing the stability of recognition algorithms and adaptation mechanisms for rehabilitation measures. Conclusions. The proposed test suite with an autonomous EEG-like signal generator enables debugging and verification of neural interface algorithms without subject participation, improves research reproducibility, and facilitates regression control during software and hardware modifications.
108-117
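The parameter-vector signal model the abstract describes (amplitudes, frequencies, and phases of rhythms plus a noise level) can be sketched as follows; the rhythm values are illustrative, not the device's presets.

```python
import math
import random

# Sketch of an EEG-like test signal built from a parameter vector:
# a sum of sinusoidal "rhythms" (amplitude, frequency, phase) plus
# Gaussian noise with a set level. All numbers are invented examples.

def eeg_like_sample(t, rhythms, noise_uv, rng):
    """rhythms: list of (amplitude_uV, frequency_Hz, phase_rad) triples."""
    s = sum(a * math.sin(2 * math.pi * f * t + ph) for a, f, ph in rhythms)
    return s + rng.gauss(0, noise_uv)

rng = random.Random(0)
rhythms = [(50.0, 10.0, 0.0),   # alpha-like component
           (20.0, 20.0, 1.0)]   # beta-like component
signal = [eeg_like_sample(k / 500, rhythms, noise_uv=5.0, rng=rng)
          for k in range(500)]  # 1 s at a 500 Hz sampling rate
```

Varying the noise level against the rhythm amplitudes gives the adjustable signal-to-noise ratio mentioned for the rhythm-simulation mode.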
MACHINE SCIENCE AND MACHINE BUILDING
Analysing the technological parameters of the high-temperature thermomechanical treatment of metallic products
Abstract
Background. The implementation of new technologies in the manufacturing processes of structural steel products allows for the improvement of their operational properties. One of the highly effective methods for enhancing the complex properties of steels and alloys is high-temperature thermomechanical treatment. The complexity of the process necessitates the analysis of technological parameters using informational models that enable the identification of the main parameters that have the greatest impact on the physical and mechanical properties of the products. Materials and methods. The analysis of the technological parameters of the process (deformation temperature, tempering temperature, deformation degree) was carried out based on an informational model. The degree of influence of the model factors on the objective function was determined using statistical data processing methods. Results. An assessment was made of the degree of influence of the technological parameters of the process (deformation temperature, tempering temperature, deformation degree) on the values of yield strength, elongation, impact toughness, and hardness of the processed samples. Conclusions. Using statistical data processing methods for informational models, along with computational devices and specialized software, allows for the analysis and assessment of the degree of influence of the technological parameters of the high-temperature thermomechanical treatment process on the mechanical properties of the products.
118-129
Using regression model and transport problem methods in planning the operation of CNC machines
Abstract
Background. The relevance of the study is determined by the fact that changeover and idle time of CNC machines is one of the causes of lost efficiency in the use of high-technology equipment, which is considered in this article from the standpoint of planning the operation of CNC machines. The purpose of the study is to use a regression model and transport problem methods to effectively plan the operation of CNC machines. Materials and methods. The planning of CNC machine operation in this article is based on transport problem methods using a regression model and the Overall Equipment Effectiveness (OEE) factor. Results. The transport problem methods have been adapted to plan the operation of CNC machines using the regression model and the Overall Equipment Effectiveness (OEE) factor. Conclusions. The results of the computational experiments confirmed the promise of applying a regression model and transport problem methods to plan the operation of CNC machines.
130-143
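The planning idea can be illustrated on a toy balanced instance: regression-predicted processing times form a cost matrix, and the minimum-total-time plan is selected. Brute force over assignments stands in for a transport-problem solver here, and all numbers are invented.

```python
import itertools

# Toy illustration: cost[machine][job] holds processing times that would,
# in the article's setting, come from a regression model; the plan
# minimizing total time is chosen. For this small balanced case, brute
# force over one-to-one assignments replaces a transport-problem solver.

cost = [  # predicted minutes, invented for illustration
    [12, 30, 25],
    [18, 22, 31],
    [20, 28, 16],
]

def best_plan(cost):
    jobs = range(len(cost[0]))
    return min(itertools.permutations(jobs),
               key=lambda p: sum(cost[m][j] for m, j in enumerate(p)))

plan = best_plan(cost)                                 # job for each machine
total = sum(cost[m][j] for m, j in enumerate(plan))    # total planned minutes
```

In a real transport-problem formulation, supplies (machine capacities) and demands (batch sizes) need not be one-to-one, and a linear programming solver replaces the enumeration.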
Improving the quality of mating surfaces of complex-shaped parts using glow discharge plasma
Abstract
Background. Ensuring the quality of mating surfaces of complex-shaped parts is an important task in mechanical engineering. Traditional finishing methods often fail to guarantee uniform layer modification without distorting the geometry. The purpose of this work is to substantiate the use of glow discharge plasma for precision quality improvement of such surfaces. Materials and methods. The study was conducted on samples made of 40Kh and 12Kh18N10T steels. A setup with a glow discharge plasma generator equipped with parameter control (voltage 350–550 V, current 80–200 mA, pressure 10–80 Pa, Ar + C3H8 medium) was used. Microstructural analysis, microhardness measurements (HV 0.1/10), and profilometry were applied. Models of ion implantation and ion flux density distribution were developed. Results. The treatment enabled the formation of a strengthened layer up to 35 μm deep. Microhardness increased by a factor of 1.4–1.6, and roughness (Ra) decreased by 30–40 %. The dependence of modification depth and roughness on processing parameters was established, showing good agreement between experimental data and calculated models. Conclusions. The glow discharge processing technology is effective for the finishing modification of complex-shaped mating surfaces. The developed scheme with automated control ensures reproducibility and allows for the integration of the method into serial production to enhance the wear resistance and reliability of machine assembly units.
144-158
Experimental and analytical evaluation of the relationship between machining accuracy and the thermal field of a precision lathe
Abstract
Background. Known relationships for estimating thermal deformations of machine tools contain stochastic components in the values of individual parameters, which reduces the accuracy of calculations. In finishing operations of machining workpieces on precision lathes, thermal deformations change the relative position of the tool and the workpiece to such an extent that dimensions within tolerance grades 1–2 (IT grades) cannot be obtained stably. It is therefore necessary to identify the relationship between processing accuracy and thermal processes and to minimize the effect of temperature on the accuracy diagram in order to predict and control processing accuracy. Materials and methods. The effect of changes in the thermal field on processing accuracy was studied on the basis of a systematic analysis of the heat-release sources of the precision turning module TPARM-100M, taking into account its design features, in particular the aerostatic supports of the spindle and slide guides. Multipoint monitoring of the temperature of the module units and of the dimensional errors of the processed parts was carried out until the machine reached a steady-state thermal mode. An experimental-analytical method based on calculating partial correlation coefficients between processing errors and temperature changes at eight control points is used to assess the relationship between processing accuracy and the thermal field of the module. The values of the partial correlation coefficients are used to determine the informative control points whose temperature changes have the greatest effect on processing accuracy, which makes it possible to identify ways to minimize processing errors. Results. Four informative points of the thermal field of the turning module were determined from the values of the partial correlation coefficients: the temperature of the ambient air, the spindle assembly, the frame, and the air supplied to the module supports. Conclusions. 
The analysis of the influence of thermal processes on processing accuracy on a precision turning module made it possible to form a methodology for experimental and analytical studies of the relationship between the accuracy diagram and changes in the thermal field of the module, taking into account its design features. The calculation of partial correlation coefficients made it possible to establish the most influential sources of thermal perturbation and to determine the requirements for stabilizing the temperature of the module units within 0.5 °C in order to machine parts to tolerance grades 1–2.
159-173
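The partial-correlation computation at the heart of the method can be sketched as follows: the partial correlation of machining error with the temperature at one control point, controlling for another point, is the ordinary correlation of the two regression residuals. The data below are synthetic illustration values, not the study's measurements.

```python
# Sketch of a partial correlation via residuals of simple least-squares
# fits: partial_corr(y, x, z) is corr(y, x) after the linear influence of
# z has been removed from both y and x. Pure-Python, illustrative data.

def mean(xs):
    return sum(xs) / len(xs)

def residuals(y, x):
    """Residuals of y after a least-squares straight-line fit on x."""
    mx, my = mean(x), mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def corr(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def partial_corr(y, x, z):
    """corr(y, x) with the linear influence of z removed from both."""
    return corr(residuals(y, z), residuals(x, z))
```

Computed over the eight control points in turn, such coefficients rank the points by their direct influence on the processing error, which is how the four informative points above were selected.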