Vol 12, No 4 (2025)
MATHEMATICAL MODELING, NUMERICAL METHODS AND COMPLEX PROGRAMS
Triplet-based knowledge mining using pretrained large language models
Abstract
Extracting structured information from text is a key task in natural language processing. Large language models achieve high accuracy in information extraction tasks thanks to pre-training on huge volumes of data. However, such models require significant computational resources and are unavailable for local use due to their dependence on cloud infrastructure. Therefore, compact, open-source large language models that can be fine-tuned locally are increasingly being used to address this problem. This paper evaluates the effectiveness of fine-tuning compact large language models for automated extraction of triplets from unstructured text. The Mistral model with seven billion parameters was used in the study. The model was fine-tuned on a custom dataset of 650 examples, each containing an instruction, an input text and an expected output. The results confirm the effectiveness of fine-tuning: the F1-score increased several-fold compared to the baseline model. The fine-tuned version of the model is competitive with the large-scale DeepSeek language model with 685 billion parameters. The obtained results highlight the potential of compact open large language models for knowledge extraction tasks under resource constraints, such as knowledge graph construction.
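As an illustration of the dataset structure described above (an instruction, an input text and an expected output), a minimal sketch in Python is given below; the field names, prompt template and example text are assumptions, not the authors' actual data.

```python
import json

# Hypothetical record layout for the 650-example fine-tuning set described
# in the abstract: an instruction, an input text, and the expected triplets.
record = {
    "instruction": "Extract (subject, relation, object) triplets from the text.",
    "input": "Alan Turing was born in London and worked at the University of Manchester.",
    "output": [
        ["Alan Turing", "born in", "London"],
        ["Alan Turing", "worked at", "University of Manchester"],
    ],
}

def build_prompt(rec: dict) -> str:
    """Assemble a single training prompt; the tag names are illustrative only."""
    target = json.dumps(rec["output"], ensure_ascii=False)
    return (
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Input:\n{rec['input']}\n\n"
        f"### Response:\n{target}"
    )

if __name__ == "__main__":
    print(build_prompt(record))
```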
13-19
Analysis of Mathematical Models of Memristors for Use in Logical Nanoelectronic Memory Circuits of Artificial Intelligence Systems
Abstract
In the article, memristor devices capable of changing their conductivity depending on the degree of their participation in the signal transmission process are considered as the basis of micro- and nanoelectronic devices. At its core, a memristor is a resistor endowed with a memory function, whose current-voltage characteristic is nonlinear. Its operation is based on the dependence of resistance on the integral of the charge flowing through the device, which acts as a state variable. These unique properties open the way to the design of fundamentally new electronic systems characterized by exceptional energy efficiency and high performance. Moreover, they serve as the basis for creating self-learning machines capable of adapting to dynamic changes in the external environment. The scope of practical application of memristors is extensive: non-volatile memory for storing information, including binary and multilevel cells; active switching elements in logic integrated circuits; and plastic synapses that emulate the work of neurons in neuromorphic artificial intelligence systems built on a nanoelectronic element base.
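One widely cited example of the memristor models discussed is the linear dopant-drift (HP) model, sketched below; here w is the doped-region width, D the device thickness and mu_v the dopant mobility. Whether the article relies on this particular model is not stated in the abstract.

```latex
% Linear dopant-drift (HP) memristor model (Strukov et al.), one common textbook form
\[
v(t) = \left( R_{\mathrm{ON}}\,\frac{w(t)}{D} + R_{\mathrm{OFF}}\left(1 - \frac{w(t)}{D}\right) \right) i(t),
\qquad
\frac{dw}{dt} = \mu_{v}\,\frac{R_{\mathrm{ON}}}{D}\, i(t),
\]
\[
M(q) \approx R_{\mathrm{OFF}}\left(1 - \frac{\mu_{v}\,R_{\mathrm{ON}}}{D^{2}}\, q(t)\right)
\quad\text{(memristance as a function of charge, for } R_{\mathrm{ON}} \ll R_{\mathrm{OFF}}\text{).}
\]
```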
20-28
THEORETICAL INFORMATICS, CYBERNETICS
Automatic identification of modal parameters of dynamic systems based on vibration response
Abstract
This work presents an algorithm for identifying the modal parameters of engineering structures and buildings described as linear time-invariant dynamic systems in state-space form. Modal parameters are estimated from the recorded vibration response under the assumption that the disturbing forces are random in nature. The paper describes the features of the algorithm and provides references to relevant sources that allow a deeper understanding of its details. An approach is proposed for determining the values of modal parameters over a series of consecutive identifications, which can be further applied to automate the process for real-time operation or for processing the results of repeated testing. The algorithm yields a stable model. Time invariance of the system is a key factor enabling the synthesis of mathematical models used to ensure the information security of the objects under consideration.
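The abstract does not reproduce the algorithm itself; as a hedged illustration of one standard post-processing step in output-only modal identification, the sketch below converts the eigenvalues of an identified discrete-time state matrix into natural frequencies and damping ratios (the function name and the toy example are assumptions).

```python
import numpy as np

def modal_parameters(eigvals: np.ndarray, dt: float):
    """Convert discrete-time state-matrix eigenvalues (from an identified
    LTI model) into natural frequencies [Hz] and damping ratios.
    A standard post-processing step in output-only modal identification."""
    lam = np.log(eigvals) / dt          # continuous-time poles
    wn = np.abs(lam)                    # natural circular frequencies [rad/s]
    zeta = -np.real(lam) / wn           # damping ratios
    return wn / (2 * np.pi), zeta

# Toy example: a single mode at ~2.5 Hz with 2% damping, sampled at 100 Hz
dt = 0.01
wn, z = 2 * np.pi * 2.5, 0.02
pole = np.exp((-z * wn + 1j * wn * np.sqrt(1 - z**2)) * dt)
print(modal_parameters(np.array([pole]), dt))
```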
29-39
SYSTEM ANALYSIS, INFORMATION MANAGEMENT AND PROCESSING, STATISTICS
Intelligent information processing technologies for managing small and medium-sized enterprises based on a regularizing Bayesian approach
Abstract
Purpose of the study. To develop and theoretically substantiate a model of intelligent information processing technology designed to support management decision-making in small and medium-sized enterprises (SMEs) under conditions of uncertainty and data incompleteness, based on the application of a regularizing Bayesian approach (RBP).
Methods of research: systems analysis, decision theory, artificial intelligence and machine learning methods, in particular, Bayesian belief networks and neural networks, as well as probability theory and mathematical statistics. The core of the methodology is the regularizing Bayesian approach, which allows for the formalization and consideration of prior information to improve the robustness of models on small samples.
Results. Based on the conducted analysis, a structural and functional model of an intelligent SME management technology is proposed. The model integrates data collection and preprocessing modules and a Bayesian inference kernel implementing regularization procedures. It is shown that the use of the RBP reduces the risk of model overfitting with the limited statistical data typical of SMEs and improves the quality of management forecasts and decisions. Recommendations for the application of the technology for demand forecasting, risk assessment, and personnel management are developed.
Scientific novelty: adaptation and development of the regularizing Bayesian approach methodology for solving semi-structured management problems in small and medium-sized enterprises. Unlike standard machine learning methods, the proposed technology formally incorporates prior expert information and industry knowledge for decision regularization, which is critical in the highly volatile and data-poor environments typical of the SME sector.
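The abstract gives no formulas; a minimal sketch of how a prior can regularize estimation on a small sample is Bayesian linear regression with a Gaussian prior, whose MAP estimate is computed below (illustrative only, not the authors' model; all parameter names are assumptions).

```python
import numpy as np

def map_linear_regression(X, y, prior_mean, tau2, sigma2):
    """MAP estimate for Bayesian linear regression with prior
    w ~ N(prior_mean, tau2 * I) and noise variance sigma2.
    The prior shrinks the estimate toward expert knowledge on small samples."""
    d = X.shape[1]
    A = X.T @ X / sigma2 + np.eye(d) / tau2
    b = X.T @ y / sigma2 + prior_mean / tau2
    return np.linalg.solve(A, b)

# Tiny demand-forecasting toy: 8 observations, 3 features, expert prior on weights
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
true_w = np.array([1.0, -0.5, 0.2])
y = X @ true_w + rng.normal(scale=0.3, size=8)
w_map = map_linear_regression(X, y, prior_mean=np.array([0.8, -0.4, 0.0]),
                              tau2=0.5, sigma2=0.09)
print(w_map)
```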
40-50
Intelligent information and measuring systems based on digital twins for predictive maintenance of industrial equipment
Abstract
The article discusses the concept of using intelligent information and measuring systems (IIMS) built on the basis of digital twin technology to solve problems of predictive maintenance of industrial equipment. The architectural features, operating principles and key components of such systems are analyzed. The essence of a digital twin is revealed as a virtual copy of a physical object capable of reflecting its state and predicting behavior in real time. Particular attention is paid to the methods of collecting, processing and analyzing data, as well as the use of machine learning algorithms to build accurate predictive models. The article presents the main performance metrics and quality indicators used to evaluate models for predicting failures and the remaining life of equipment. Practical examples and industry cases of successful implementation of IIMS based on digital twins in such areas as mechanical engineering, energy and transport are considered. As a practical implementation, the concept of a hardware and software complex for monitoring and collecting statistical data on technological processes is proposed. The article demonstrates that the integration of digital twins into information and measuring systems is a promising direction for increasing the reliability, efficiency and economic feasibility of industrial equipment operation due to the transition from reactive and planned preventive maintenance strategies to a proactive, predictive approach.
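As a hedged illustration of the kind of performance metrics mentioned in the abstract, the sketch below computes precision, recall and F1 for binary failure prediction and an RMSE for remaining-useful-life estimates; the article's actual metric set may differ, and all names here are assumptions.

```python
import numpy as np

def failure_prediction_metrics(y_true, y_pred):
    """Precision, recall and F1 for binary failure prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def rul_rmse(rul_true, rul_pred):
    """Root-mean-square error of remaining-useful-life estimates (e.g. in hours)."""
    e = np.asarray(rul_pred, float) - np.asarray(rul_true, float)
    return float(np.sqrt(np.mean(e ** 2)))

print(failure_prediction_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
print(rul_rmse([120, 80, 40], [110, 95, 35]))
```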
51-60
The procedure for aggregating initial data on the required quality level of complex data processing systems
Abstract
The aim of the research is to develop a procedure for aggregating data on the required quality level of complex data processing systems (CDPS) through the integration of multicriteria analysis with machine learning methods. Existing approaches based on GOST R 59797–2021 and ISO/IEC 25010 demonstrate limited efficiency due to the absence of a unified aggregation procedure. A three-stage hybrid procedure has been devised: collection and normalization of quality indicators using a modified z-transformation; calculation of adaptive weights via a synthesis of the AHP method with a random forest algorithm; and formation of an integrated criterion for the required quality level. Validation was carried out on two industrial systems with scales of 50–80 TB/day.
Results include an increase in forecast accuracy from 82.1 to 92.4%, a 3.4-fold reduction in decision-making time, and a decrease in critical incidents by 34–45%. The algorithmic complexity is O(n²m + n log nk), with execution time under 30 seconds. The procedure is applicable to CDPS with data volumes exceeding 10 TB/day and requires at least 500 historical observations. The findings are valuable for architects and specialists in quality management of critically important information systems.
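A simplified sketch of the first and third stages (normalization and weighted aggregation into an integral criterion) is given below; the robust "modified z-score" and the fixed weights are assumptions standing in for the authors' modified z-transformation and their AHP/random-forest weighting.

```python
import numpy as np

def robust_z(x):
    """A robust variant of z-normalization (median/MAD); the exact
    'modified z-transformation' used by the authors may differ."""
    x = np.asarray(x, float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1.0
    return 0.6745 * (x - med) / mad

def integral_quality(indicators, weights):
    """Weighted aggregation of normalized quality indicators into one score."""
    z = np.array([robust_z(col) for col in indicators.T]).T  # normalize each indicator
    w = np.asarray(weights, float)
    return z @ (w / w.sum())

# 5 observation windows x 3 quality indicators (latency, error rate, throughput)
data = np.array([[120, 0.02, 950], [180, 0.05, 800], [110, 0.01, 990],
                 [300, 0.09, 620], [125, 0.02, 940]], float)
print(integral_quality(data, weights=[0.5, 0.3, 0.2]))
```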
61-70
AUTOMATION OF MANUFACTURING AND TECHNOLOGICAL PROCESSES
Development of a software and laboratory complex for studying cryptography on elliptic curves
Abstract
This article presents a software and laboratory suite for studying the mathematical foundations and practical applications of elliptic curve cryptography (ECC). The suite is implemented in Python using the PyQt6 framework and the sympy library for cryptographic computations. The program provides an interactive interface for entering elliptic curve parameters, visualizing points on the curve, constructing Cayley tables for point addition, and checking group properties. Key features of the suite include the implementation of the Tonelli–Shanks algorithm for finding modular square roots, the ability to work with curves over finite fields of large order, and a bilingual interface (Russian/English). The developed suite can be used in educational settings to teach the fundamentals of elliptic curve cryptography.
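For reference, the Tonelli–Shanks algorithm mentioned above computes square roots modulo an odd prime, which is needed, for example, when recovering the y-coordinate of a curve point from its x-coordinate. The self-contained sketch below is a textbook version; the suite's own implementation (built on sympy) may differ.

```python
def tonelli_shanks(n: int, p: int):
    """Return r with r*r == n (mod p) for an odd prime p, or None if n is a
    non-residue. Classic Tonelli-Shanks, as used when decompressing curve points."""
    n %= p
    if n == 0:
        return 0
    if pow(n, (p - 1) // 2, p) != 1:          # Euler criterion: no square root
        return None
    if p % 4 == 3:                            # easy case
        return pow(n, (p + 1) // 4, p)
    q, s = p - 1, 0                           # write p - 1 = q * 2^s with q odd
    while q % 2 == 0:
        q //= 2
        s += 1
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:   # find a quadratic non-residue
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(n, q, p), pow(n, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t
        while t2 != 1:                        # least i with t^(2^i) = 1
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r

# Example over a small prime field: sqrt(10) mod 13 -> 7 (since 7*7 = 49 = 10 mod 13)
print(tonelli_shanks(10, 13))
```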
71-80
MANAGEMENT IN ORGANIZATIONAL SYSTEMS
Improved algorithm for notifying the population about emergency situations caused by major fires
Abstract
The article proposes an algorithm for alerting the population in case of fires using feedback via SMS and mapping services. The mechanism integrates distributed ledger technology and artificial intelligence to improve the accuracy, coverage and adaptability of the system. The article also develops a model that takes into account the distribution of population density during alerting in a given zone, as well as the optimal radius of the alert zone. The model is implemented as the software package “Model of the effective radius of alerting in case of fire”. Results. The results obtained can be used to adjust the existing model of public notification in the case of large fires, both natural and man-made. This work is intended for decision-makers who manage firefighting forces and resources.
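The abstract does not disclose the alerting-radius model itself; purely as a toy illustration of how population density can enter such a model, the sketch below counts the people inside an alert circle for an assumed Gaussian radial density profile (all parameters are hypothetical, not the authors' model).

```python
import math

def people_within_radius(radius_m, peak_density, sigma_m):
    """Toy estimate of people inside an alert circle of a given radius, assuming
    a Gaussian radial density rho(r) = peak_density * exp(-r^2 / (2 sigma^2))
    in people per square meter; closed-form integral over the disc."""
    return (2 * math.pi * sigma_m ** 2 * peak_density
            * (1 - math.exp(-radius_m ** 2 / (2 * sigma_m ** 2))))

# Coverage vs. alert radius for an assumed density profile
for radius in (500, 1000, 2000, 4000):
    print(radius, round(people_within_radius(radius, peak_density=0.004, sigma_m=1500)))
```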
81-90
Neuroinformatics methodology in a decision support system for industrial process management
Abstract
Modern organizational and technical systems, understood as a set of interconnected technical means and the personnel responsible for their operation and intended use, reflect all the trends in digitalization and automation of human activity occurring in the era of the fourth industrial revolution. The complexity of the interrelations between system components and influencing factors determines the complexity of the functions implemented by such systems, while simultaneously increasing the cost of erroneous design decisions. The purpose of this work is to illustrate current trends and to give an example of a solution that overcomes the problem of combinatorial (exponential) explosion when many factors must be taken into account, using neuroinformatics tools to improve efficiency and minimize errors in making optimal decisions on managing multidimensional production processes in organizational and technical systems. An analysis of the subject area revealed the feasibility of solving optimization problems using dynamic neural networks with feedback. In particular, dynamic-static networks have been identified as the most appropriate architectures for solving linear programming problems, due to the clear interpretation of neural network solutions and the ease of implementing inequality constraints. A software implementation of the solution to this problem is described. Experimental dependencies of the performance indicators for classifying the states of production processes, which are subsequently used in the control loop of the technological process of petrochemical production, are presented.
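The abstract does not describe the network dynamics; as a hedged illustration of how a continuous-time dynamical system can solve a linear programming problem with inequality constraints, the sketch below integrates a simple penalty-based gradient flow (not the authors' dynamic-static network; all names and the toy problem are assumptions).

```python
import numpy as np

def lp_penalty_dynamics(c, A, b, steps=20000, dt=1e-3, penalty=50.0):
    """Euler integration of dx/dt = -(c + penalty * A^T max(Ax - b, 0)),
    with projection onto x >= 0: a crude continuous-time analogue of
    recurrent-network LP solvers, for illustration only."""
    x = np.zeros(len(c))
    for _ in range(steps):
        viol = np.maximum(A @ x - b, 0.0)          # inequality-constraint violations
        x -= dt * (c + penalty * A.T @ viol)       # gradient of the penalized objective
        x = np.maximum(x, 0.0)                     # keep x non-negative
    return x

# Toy LP: minimize -x1 - 2*x2 subject to x1 + x2 <= 4, x2 <= 3, x >= 0
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
print(lp_penalty_dynamics(c, A, b))   # approaches (1, 3) for a large penalty
```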
91-104
MATHEMATICAL AND SOFTWARE SUPPORT OF COMPUTERS, COMPLEXES AND COMPUTER NETWORKS
GPU-accelerated quantification of atomic orderliness in amorphous alloys via HRTEM image processing
Abstract
Understanding the structural characteristics of amorphous alloys at the atomic scale is crucial for elucidating their unique mechanical, thermal, and magnetic properties. However, the absence of long-range order in these materials poses significant challenges for conventional structural analysis techniques. This work presents a GPU-accelerated software framework designed for high-throughput processing and quantitative analysis of High-Resolution Transmission Electron Microscopy (HRTEM) images to reveal hidden atomic orderliness in amorphous alloys. The proposed system integrates parallelized image preprocessing, processing, atom detection, radius-based clustering, and graph-theoretical and entropy-based metrics to quantify short- and medium-range order. A modular architecture enables efficient GPU computation using CUDA, CuPy, and optimized memory strategies, achieving speedups of up to 220× compared to CPU implementations. Validation was conducted on both simulated datasets (FeB, CoNiFeSiB) and real HRTEM images of amorphous alloys (CoP, NiW, Fe-based 71КНСР). Results demonstrate strong correlations between cluster size, bond angle distributions, and entropy metrics with macroscopic material properties such as hardness and thermal stability. Larger clusters and obtuse bond angles were found to indicate increased local structural order, while entropy measures provided sensitive discrimination of disorder.
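As a CPU-side illustration of two of the quantities mentioned (radius-based neighborhoods and an entropy measure of local order), the sketch below computes the Shannon entropy of the bond-angle distribution for detected atomic coordinates; the GPU (CUDA/CuPy) pipeline and the paper's exact metrics are not reproduced, and the cutoff radius and example lattice are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def bond_angle_entropy(points, r_cut, bins=36):
    """Shannon entropy of the bond-angle distribution for atoms within r_cut
    of each other (2D coordinates of detected atomic columns).
    Lower entropy suggests more pronounced short/medium-range order."""
    tree = cKDTree(points)
    angles = []
    for i, nbrs in enumerate(tree.query_ball_point(points, r_cut)):
        nbrs = [j for j in nbrs if j != i]
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                u = points[nbrs[a]] - points[i]
                v = points[nbrs[b]] - points[i]
                cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
                angles.append(np.degrees(np.arccos(np.clip(cosang, -1, 1))))
    hist, _ = np.histogram(angles, bins=bins, range=(0, 180))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy check: a perfect square lattice should give a low bond-angle entropy
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
lattice = np.column_stack([xs.ravel(), ys.ravel()])
print(bond_angle_entropy(lattice, r_cut=1.1))
```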
105-115
METHODS AND SYSTEMS OF INFORMATION PROTECTION, INFORMATION SECURITY
Development of a digital twin prototype for an automated smart grid control system for cybersecurity threat analysis
Abstract
This paper presents a prototype of a digital twin (DT) for an automated control system (ACS) of a smart power grid, developed for the analysis of cybersecurity threats. The proposed architecture replicates the behavior of the control layers of the energy system and incorporates modules for synthetic data generation, attack simulation, anomaly detection, and threat assessment. Experimental validation was carried out in a laboratory environment through the execution of typical cyberattack scenarios (DoS, malware injection, control signal compromise). A comparative evaluation of two configurations – one based solely on real data and the other incorporating synthetic data – demonstrated an increase in F1-score metrics when using extended datasets. The study discusses the limitations of the prototype, including simplified modeling of physical processes and the need for manual verification of generated data. The results suggest the applicability of the proposed approach for testing threat detection mechanisms in smart grid environments.
116-123
Ensuring security of data processing and transmission in promising wireless communication systems at the design stage using deep machine learning based on artificial intelligence
Abstract
Ensuring the security of data processing and transmission in advanced high-speed wireless communication systems is one of the priority tasks. This paper demonstrates that machine learning is widely used at the upper layers when designing such systems. However, its application at the physical layer is hampered by complex channel environments and the limited learning capabilities of algorithms that describe the changing data transmission channel. This paper presents examples of applying deep learning methods at the physical layer of various wireless communication systems. Methods for creating a new architecture for remote access systems based on machine learning using an autoencoder are proposed. It is shown that applying deep learning at the physical layer of wireless communication systems can facilitate the design of complex scenarios with unknown channel models. Deep learning algorithms demonstrate competitive performance with lower complexity or latency and may find potential application in advanced secure, high-speed, interference-resistant communication systems.
124-130
Development of a two-factor authentication system in WEB applications based on fingerprinting
Abstract
This article proposes a two-factor authentication system for WEB applications based on the fingerprint method. It provides an overview of known two-factor authentication methods in WEB applications and shows the disadvantages of such systems. The architecture of a two-factor authentication system is proposed, cryptographic algorithms for a digital signature are selected, and algorithms for the interaction between system components during the registration of new devices are developed. An algorithm for the interaction between system components during authentication is developed, and a software implementation in C# is proposed, with the choice of this programming language justified. The program components and their code are presented, along with the server and client parts of the system during the registration of a new user. The quality of the developed system is assessed, and the false rejection rates for five fingers of the right hand are reported.
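The article's implementation is in C#; purely for illustration, the Python sketch below shows the challenge-response signature check that such device registration and authentication flows typically rely on, using Ed25519 from the cryptography package. Key storage, transport and the biometric factor are omitted, and this flow is an assumption rather than the authors' protocol.

```python
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

# Device side: key pair generated once at registration; public key sent to the server
device_key = Ed25519PrivateKey.generate()
registered_public_bytes = device_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# Server side: issues a random challenge for the second factor
challenge = os.urandom(32)

# Device side: signs the challenge with its private key
signature = device_key.sign(challenge)

# Server side: verifies the signature against the registered public key
server_copy = Ed25519PublicKey.from_public_bytes(registered_public_bytes)
try:
    server_copy.verify(signature, challenge)
    print("second factor accepted")
except InvalidSignature:
    print("second factor rejected")
```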
131-142
Infrastructural conflict of critical information infrastructure software in the context of destructive effects of cyber attacks
Abstract
The paper considers the phenomenon of infrastructural conflict of critical information infrastructure software, which occurs under the influence of cyber attacks. It is shown that the software environment of the critical information infrastructure is one of the most vulnerable levels, since it combines technological, architectural and organizational dependencies, forming a complex of vulnerabilities of infrastructural origin. The author’s concept of infrastructural conflict is presented as a state of uncoordinated functioning of software, hardware and protective components, leading to degradation, cascading failures and loss of controllability of technological processes. A structural model of an infrastructure software conflict is formulated, including the interaction of three key actors: the source of destructive influences, the software of critical information infrastructure, and information security systems. A classification of destructive influences on software components is performed, the mechanisms of conflict occurrence are described, and the factors of its escalation in SCADA/PLC environments are highlighted. The infrastructure conflict theorem for critical information infrastructure software has been developed, which makes it possible to analyze the dynamics of attacks, the likelihood of transition to a conflict state, the sensitivity of the software infrastructure to loads, and the response efficiency of the information security management system. The influence of infrastructure dependencies on the cyberimmunity of the software environment and the survivability of critical information infrastructure is shown. The results can be used to build monitoring systems, develop response strategies, assess the resilience of software architecture, and model cyberattack scenarios based on the presented model of infrastructural conflictology.
143-154
INFORMATICS AND INFORMATION PROCESSING
On the applicability of acoustic sensors to the problem of road surface defect detection
Abstract
The article is devoted to the development and substantiation of a multimodal methodology for non-destructive monitoring of the condition of the road surface based on acoustic data supplemented by visual observations. At the level of feature analysis, the study demonstrates that the spectrograms of signals recorded when driving over smooth and damaged surfaces contain stable differences in the time-frequency structure suitable for automatic classification and mapping of defects. The requirements for the type of microphone (sensitivity, bandwidth over 8 kHz, directivity) and placement conditions (height, distance to the source, shielding objects) are justified. A two-sensor architecture with time synchronization combining stereo video and acoustics is proposed; it is shown that the correlation of modalities increases the reliability of localization and the quality of defect classification in real traffic conditions. Taken together, the presented approach forms the basis for long-term, fault-tolerant monitoring systems for road infrastructure with early detection of risks, even with increased noise levels and reduced visibility.
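As a hedged illustration of the time-frequency features referred to above, the sketch below computes per-frame log band energies from a spectrogram; the band boundaries, window parameters and synthetic signal are assumptions, not the paper's feature set.

```python
import numpy as np
from scipy.signal import spectrogram

def band_energy_features(signal, fs, bands=((0, 1000), (1000, 4000), (4000, 8000))):
    """Log energy in several frequency bands of a spectrogram: a simple feature
    set for separating smooth vs. damaged road-surface segments (illustrative)."""
    f, t, S = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)
    feats = []
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        feats.append(np.log10(S[mask].sum(axis=0) + 1e-12))  # one value per frame
    return np.stack(feats, axis=1)   # shape: frames x bands

# Synthetic 1-second example at 16 kHz: quiet rolling plus one broadband "impact"
fs = 16000
t = np.arange(fs) / fs
quiet = 0.01 * np.random.randn(fs)
impact = quiet + (np.abs(t - 0.5) < 0.01) * np.random.randn(fs)
print(band_energy_features(impact, fs).shape)
```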
155-163
Asynchronous electric motor rotor slip determination in the absence of a rotation speed sensor
Abstract
Rotor slip of the induction motor (IM) is one of the key indicators used in analyzing the technical condition of the asynchronous motor and detecting its defects. In practice, built-in tachometers are extremely rare in IMs, and their additional installation is a complex and expensive procedure. The paper compares non-invasive methods for determining the slip frequency of the IM rotor from vibration acceleration spectra, current spectra, and consumed active power at various motor loads. The effects of a belt drive and of its tensioning methods (spring-loaded and fixed) on the distortion of the spectra of signals measured on the motor are determined. The obtained results showed good agreement between the indirect methods of determining the rotor slip frequency and tachometer readings. The method of determining slip from the consumed active power showed good accuracy with minimal requirements for computing power.
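For reference, the standard relations behind such indirect slip estimates are the definition s = (n_s - n)/n_s and the current-spectrum sidebands at f_supply(1 ± 2s); the small sketch below applies them. The sideband-based variant is one common choice and not necessarily the exact method of the paper.

```python
def synchronous_speed_rpm(f_supply_hz: float, pole_pairs: int) -> float:
    """Synchronous speed n_s = 60 * f / p for an induction machine."""
    return 60.0 * f_supply_hz / pole_pairs

def slip_from_speed(n_rotor_rpm: float, f_supply_hz: float, pole_pairs: int) -> float:
    """Definition of slip: s = (n_s - n) / n_s."""
    n_s = synchronous_speed_rpm(f_supply_hz, pole_pairs)
    return (n_s - n_rotor_rpm) / n_s

def slip_from_current_sidebands(f_supply_hz: float, f_sideband_hz: float) -> float:
    """Estimate slip from the lower sideband of the stator current spectrum:
    f_sb = f_supply * (1 - 2s)  =>  s = (f_supply - f_sb) / (2 * f_supply)."""
    return (f_supply_hz - f_sideband_hz) / (2.0 * f_supply_hz)

# Example: 50 Hz supply, 2 pole pairs, rotor at 1470 rpm  ->  s = 0.02
print(slip_from_speed(1470, 50, 2))
# A lower sideband observed at 48 Hz corresponds to the same slip of 0.02
print(slip_from_current_sidebands(50, 48))
```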
164-177
Decomposition of complex systems represented by Petri nets
Abstract
This paper addresses the problem of decomposition of complex systems represented as Petri nets, with the aim of optimizing the synthesis procedure for new parallel systems. An improved approach is proposed, combining Johnson’s algorithm with the depth-first search method. Johnson’s algorithm is applied for the efficient detection of elementary cycles, which makes it possible to overcome the limitations of the traditional matrix-based method, characterized by high computational and spatial complexity, as well as the inability to guarantee the extraction of elementary cycles of a given length. An adapted depth-first search algorithm is used to identify linear fragments of acyclic Petri nets. An analytical and experimental comparison of the proposed algorithms with well-known counterparts has been carried out on both complete and sparse Petri nets. The experimental data indicates a significant advantage of Johnson’s algorithm when applied to sparse Petri net structures, manifested in increased execution speed and completeness in detecting elementary cycles. The proposed approach demonstrates a substantial reduction in time costs at the decomposition stage and contributes to the creation of conditions (a knowledge base) that restrict the set of synthesized structures and simplify subsequent stages of parallel system design. The obtained results confirm the practical efficiency and applicability of the proposed Petri net decomposition methods in the synthesis of complex system structures.
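Johnson's algorithm for enumerating elementary cycles is available off the shelf, for example in networkx, whose simple_cycles routine implements it for directed graphs. The sketch below applies it to the directed graph underlying a tiny Petri net (places and transitions as nodes, arcs as edges); the example net is illustrative only.

```python
import networkx as nx

# A tiny Petri net as a directed bipartite graph: places p1..p3, transitions t1..t3
G = nx.DiGraph()
G.add_edges_from([
    ("p1", "t1"), ("t1", "p2"),
    ("p2", "t2"), ("t2", "p1"),   # elementary cycle p1 -> t1 -> p2 -> t2 -> p1
    ("p2", "t3"), ("t3", "p3"),   # acyclic branch
])

# networkx.simple_cycles enumerates elementary cycles (Johnson's algorithm for
# directed graphs), avoiding the explicit cycle matrix of the classical method.
for cycle in nx.simple_cycles(G):
    print(cycle)
```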
178-186
Analysis of storage formats for multidimensional data models in the context of multidimensional cubes
Abstract
The article considers the issues of efficient storage of multidimensional data models in the context of modern analytical systems. Particular attention is paid to the architecture of multidimensional cubes, which involves storing aggregated facts at the intersection of many dimensions. A review of modern data storage formats is provided – Parquet, ORC, Iceberg, Delta Lake, Hudi – from the standpoint of their applicability to multidimensional analytics tasks. It is shown that existing solutions are focused mainly on tabular structures and do not provide full support for multidimensional relationships, hierarchies and aggregations. The difficulties of integration between different storage formats and the lack of a unified approach to describing metadata are analyzed. Based on the identified limitations, the design tasks facing a multidimensional cube storage format are formulated. A conceptual storage model is proposed that combines the principles of relational and multidimensional data organization. The multidimensional model comprises a fact table and dimensions, as well as a metadata level and an API.
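A minimal sketch of the proposed conceptual split (a fact table, dimensions and a separate metadata level) is given below using pandas and a plain dictionary; the field names and the aggregation rule are assumptions, and the actual on-disk format and API remain design questions posed by the article.

```python
import pandas as pd

# Fact table: one row per combination of dimension members, with additive measures
facts = pd.DataFrame({
    "date":    ["2025-01", "2025-01", "2025-02", "2025-02"],
    "region":  ["North", "South", "North", "South"],
    "product": ["A", "A", "B", "B"],
    "sales":   [120, 80, 150, 95],
})

# Metadata level: dimensions, their hierarchies, and measure aggregation rules
metadata = {
    "dimensions": {
        "date":    {"hierarchy": ["year", "month"]},
        "region":  {"hierarchy": ["country", "region"]},
        "product": {"hierarchy": ["category", "product"]},
    },
    "measures": {"sales": {"aggregation": "sum"}},
}

# A two-dimensional slice of the cube: aggregate sales by date x region
cube_slice = facts.pivot_table(values="sales", index="date", columns="region",
                               aggfunc=metadata["measures"]["sales"]["aggregation"])
print(cube_slice)
```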
187-194
Fuzzy cognitive maps in reliability analysis of complex human-machine systems: Theoretical and applied aspect
Abstract
The aim of the study is to analyze the applicability of fuzzy cognitive maps (FCM) to the reliability analysis of complex human-machine systems (HMS), as well as to develop algorithms for evaluating the factors influencing system reliability, taking expert assessments into account. The paper highlights the limitations of classical probabilistic and regression methods, which are difficult to apply to HMS because of the interdependence of qualitative assessments and the need to take the human factor into account. As an alternative, the use of fuzzy cognitive maps is considered, which makes it possible to represent the dynamics of the system as a directed weighted graph, where the vertices are key concepts and the arcs are cause-and-effect relationships assessed by experts. Using the example of a reliability analysis of an intelligent video monitoring system of a protected object, the construction of a fuzzy cognitive map is demonstrated, and an algorithm for calculating importance indices and coefficients of the combined influence of factors is given in order to determine the integral indicator of system reliability. A computational algorithm has been developed, and the results of its software implementation are presented. The factors that have the greatest impact on the target variable are highlighted. The prospects of using graph knowledge bases for organizing the collection and storage of information forming fuzzy cognitive maps are noted. The advantages of the considered approach include the possibility of using the method when working with expert information, the integration of heterogeneous factors within a single model, and its adaptability and scalability.
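For reference, the standard fuzzy-cognitive-map update rule propagates concept activations along the expert-weighted arcs through a sigmoid; the sketch below iterates it on a toy four-concept map (the concepts and weights are assumptions, not the article's map).

```python
import numpy as np

def fcm_simulate(W, a0, steps=30, lam=1.0):
    """Iterate the standard fuzzy-cognitive-map update
        a_i(t+1) = f( a_i(t) + sum_j W[j, i] * a_j(t) ),  f(x) = 1 / (1 + exp(-lam*x)),
    where W[j, i] is the expert-assessed influence of concept j on concept i."""
    a = np.asarray(a0, float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))
    return a

# Toy 4-concept map: operator fatigue raises error rate, which lowers overall
# system reliability; maintenance quality lowers errors and raises reliability.
#              fatigue  errors  maint.  reliability
W = np.array([[0.0,     0.7,    0.0,    0.0],
              [0.0,     0.0,    0.0,   -0.8],
              [0.0,    -0.3,    0.0,    0.6],
              [0.0,     0.0,    0.0,    0.0]])
initial = [0.6, 0.2, 0.8, 0.5]
print(fcm_simulate(W, initial))   # steady-state activations of the four concepts
```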
195-204
