


No 2 (2024)
Information processing and data analysis
Processing of Data for Inductive Inference Based on Non-Strict Probability
Abstract
Based on methods of inductive logic, an approach to identifying implication relationships “If A, then B” in Big Data is considered. The approach is designed for conditions of low reliability and inconsistency of the data. To work under these conditions, logics with vector semantics in the form of VTF logics are used. The presence or absence of phenomena in tables of their joint occurrence is formalized by truth vectors with components v+ and v-, where v+ is a measure of the truth of the statement that a phenomenon is present and v- is a measure of its falsity. Based on the principle of statistical induction, the indicator of the validity of a causal relationship is calculated as the average value of the truth vectors of the corresponding non-strict propositions. The resulting value is interpreted as a non-strict probability of the relationship, which acts as a vector indicator of its validity. The applicability of the approach to processing qualitative and quantitative data, as well as data containing artifacts, is shown.
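As an illustration only (the article's VTF formalism is richer than this), the sketch below computes the vector-valued validity indicator as the component-wise average of hypothetical truth vectors attached to observations of the rule:

```python
# Minimal sketch, not the authors' exact formalism: the validity of the rule
# "If A, then B" is estimated as the component-wise average of the truth
# vectors (v_plus, v_minus) assigned to the observations that instantiate it.
import numpy as np

def nonstrict_probability(truth_vectors):
    """truth_vectors: array of shape (n, 2) with rows (v_plus, v_minus),
    where v_plus measures the truth and v_minus the falsity of the
    non-strict proposition for each observation."""
    v = np.asarray(truth_vectors, dtype=float)
    return v.mean(axis=0)  # vector-valued indicator (p_plus, p_minus)

# Hypothetical co-occurrence records for "if A is present, then B is present":
records = [(0.9, 0.1), (0.8, 0.1), (0.6, 0.3), (1.0, 0.0)]
p_plus, p_minus = nonstrict_probability(records)
print(f"non-strict probability of the rule: ({p_plus:.2f}, {p_minus:.2f})")
```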



Methodological Support of Information Systems in Assessing Reliability
Abstract
The article discusses the development of methodological support for assessing the reliability of information systems (IS), as well as for assessing the effectiveness of IS application on the basis of the reliability indicator. The advantage of the proposed methodological support is its simplicity, which makes it possible for IS technical personnel to master it. The presented methodological support includes the selection of reliability indicators, the definition of IS failures and faults, the development of a reliability scheme, the development of a mathematical model for assessing reliability, and rules for collecting and processing statistical data and for evaluating efficiency. The presented methodological support has been tested within the framework of scientific and methodological support for the modernization of the State Automated System “Elections”, as well as in a number of other projects, and has been developed into an operational methodology.
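For illustration only (the article's own indicator set and model may differ), standard reliability indicators of the kind selected at the first step can be computed from collected failure statistics as follows:

```python
# Illustrative only: textbook reliability indicators computed from collected
# failure statistics; the article's own model and indicators may differ.
def mtbf(total_operating_hours, n_failures):
    """Mean time between failures."""
    return total_operating_hours / n_failures if n_failures else float("inf")

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from MTBF and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical statistics gathered by IS technical personnel over one year:
A = availability(mtbf(8760.0, 4), mttr_hours=6.0)
print(f"estimated availability: {A:.4f}")
```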



Conceptual Model of Web Page Loading
Abstract
The article discusses a conceptual model of web page loading developed by the authors. The model consists of three entities: the backend (characterized by the metrics page creation time, number of SQL queries, SQL query execution time, cache size, page size, and number of errors), the environment (characterized by the metrics DNS, TLS, connection, and load speed), and the frontend (characterized by the metrics FCP, LCP, TBT, CLS, and SI). The model proposes to consider the page loading process as including all stages of page formation, from the user requesting the page to rendering the received content. Using it when optimizing web page loading speed offers a new perspective on possible problems of a web project. The paper presents an example of applying the model to the practical problem of optimizing page loading speed.
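A minimal data-structure sketch of the three entities and the metrics listed in the abstract (field names and units are illustrative, not the authors'):

```python
# Sketch of the conceptual model: three entities, each carrying the metrics
# named in the abstract. Units and field names are illustrative.
from dataclasses import dataclass

@dataclass
class Backend:
    page_creation_time_ms: float
    sql_query_count: int
    sql_query_time_ms: float
    cache_size_kb: float
    page_size_kb: float
    error_count: int

@dataclass
class Environment:
    dns_ms: float
    tls_ms: float
    connection_ms: float
    load_speed_kbps: float

@dataclass
class Frontend:
    fcp_ms: float   # First Contentful Paint
    lcp_ms: float   # Largest Contentful Paint
    tbt_ms: float   # Total Blocking Time
    cls: float      # Cumulative Layout Shift
    si_ms: float    # Speed Index

@dataclass
class PageLoad:
    backend: Backend
    environment: Environment
    frontend: Frontend
```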



The Method of Decomposition of Biological Objects on MRI Images with a Similar Background
Abstract
The results of developing a method for recognizing and measuring objects in MRI brain images are presented. The method is based on introducing a mathematical description of the transition area between biological objects and then decomposing the overall image of the brain into separate biological objects. A distinctive feature of the developed approach is that neural networks are used only to simplify the search for key features of a given biological object, whereas the main part of the method is implemented as procedures that compute features of the transitions between biological structures by means of an introduced mathematical function and then filter out falsely detected transitions.
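The abstract does not give the transition function itself, so the sketch below uses a generic gradient-based stand-in to show the overall shape of the step: compute a transition-strength map between tissues, then discard weak (likely false) transitions.

```python
# Illustration only: a generic transition indicator between tissues based on
# the local intensity gradient, followed by filtering of weak transitions.
# The article introduces its own mathematical transition function instead.
import numpy as np

def transition_map(slice_2d, threshold=0.2):
    img = np.asarray(slice_2d, dtype=float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # normalize to [0, 1]
    gy, gx = np.gradient(img)                  # local intensity change
    strength = np.hypot(gx, gy)                # transition strength
    return strength > threshold * strength.max()  # keep only strong transitions

mask = transition_map(np.random.rand(64, 64))  # stand-in for an MRI slice
print("candidate transition pixels:", int(mask.sum()))
```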



Development of a Multi-Aspect Ontology for Decision Support in Production Systems
Abstract
The development of new data-driven concepts based on artificial intelligence methods leads to the appearance of new approaches to decision support in production systems aimed at increasing their efficiency. However, using existing uncoordinated data and knowledge to improve the quality of decision-making processes remains a challenging task due to the diversity of their terminologies and cognitive models. The paper proposes an approach to developing a multi-aspect ontology for decision support in production maintenance. The multi-aspect ontology is based on a layered approach to integrating knowledge about various aspects of a complex problem domain (its constituents or subdomains) while preserving the autonomy of the original ontologies. The developed multi-aspect ontology supports interaction between aspects using inference mechanisms, which increases the efficiency of information flows and the degree of automation of related processes. The given example shows that the proposed approach can significantly reduce the involvement of human workers in maintenance processes in an enterprise, as well as the cognitive load on operators and maintenance technicians.
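A schematic sketch of the multi-aspect idea, with purely illustrative names: each aspect keeps its own vocabulary (autonomy of the original ontologies), and an explicit cross-aspect mapping lets simple inference traverse from one aspect to another.

```python
# Illustrative sketch only, not the paper's ontology: two autonomous aspects
# plus a bridge mapping that lets a query cross from one aspect to the other.
equipment = {"Pump-7": {"condition": "vibration_high"}}      # equipment aspect
maintenance = {"HighVibration": "inspect_bearings"}          # maintenance aspect
bridge = {"vibration_high": "HighVibration"}                 # cross-aspect mapping

def recommend(asset):
    condition = equipment[asset]["condition"]   # term of the equipment aspect
    return maintenance[bridge[condition]]       # translated into the maintenance aspect

print(recommend("Pump-7"))   # -> inspect_bearings
```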



Intelligent systems and technologies
Detection of Tears on Document Page Using Analysis of Infrared Image
Abstract
This paper examines the problem of detecting tears on a protected document page. We present an approach based on analyzing the document image in the infrared range. It is assumed that in this case the damage can be separated from the protective elements applied with IR-transparent inks, so the problem of tear detection reduces to a search for thin lines of a certain length adjacent to the border of the document page. We therefore developed a tear-search algorithm based on detecting "ridge"-type lines and then checking whether each line satisfies the specified properties. To test the algorithm, we created and published the VIUR dataset with Russian banknotes. The recall of the proposed algorithm is 0.87 and the precision is 0.94.
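A rough sketch of such a pipeline under stated assumptions: a ridge filter highlights thin dark lines in the IR image, and connected components are then kept only if they are long enough and touch the page border. The filter choice, thresholds, and parameters are illustrative, not those of the article.

```python
# Sketch of a ridge-based tear search; parameters are illustrative.
import numpy as np
from skimage.filters import sato
from skimage.measure import label, regionprops

def find_tears(ir_image, min_length=40, border_margin=5, ridge_threshold=0.1):
    """ir_image: 2D grayscale IR image of the document page."""
    ridges = sato(ir_image, sigmas=range(1, 4), black_ridges=True)
    mask = ridges > ridge_threshold * ridges.max()
    h, w = ir_image.shape
    tears = []
    for region in regionprops(label(mask)):
        r0, c0, r1, c1 = region.bbox
        long_enough = max(r1 - r0, c1 - c0) >= min_length
        touches_border = (r0 <= border_margin or c0 <= border_margin or
                          r1 >= h - border_margin or c1 >= w - border_margin)
        if long_enough and touches_border:
            tears.append(region.bbox)
    return tears
```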



Some Features of Literary Texts When Comparing Them to Determine Their Authorship
Abstract
A method for analyzing literary texts has been developed, based on selecting the most frequent auxiliary parts of speech characteristic of a particular author's style and calculating their weighting coefficients. This linguistic analysis of natural-language text is based on counting the most frequently used prepositions, conjunctions, and particles in literary works. The procedure for calculating the weighting coefficients, defined as the ratio of the number of occurrences of auxiliary parts of speech in the text to its total volume, is analyzed in detail. Experimental results on establishing the authorship of literary texts for two authors are presented. The results were obtained by comparing the numerical values of the same-type weighting coefficients, expressed as percentages. The theoretical and practical results obtained can be used to analyze and identify linguistic features and differences not only in literary texts but, in the future, in texts of any genre and style.
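A minimal sketch of the described weighting scheme: the weight of each function word is its share of the total number of words in the text, expressed as a percentage, and authorship profiles are compared by the differences of same-type weights. The word list here is a short illustrative sample, not the article's.

```python
# Sketch of function-word weighting for authorship comparison.
import re
from collections import Counter

# Short illustrative sample of auxiliary parts of speech (prepositions,
# conjunctions, particles); the article builds such lists for its corpus.
FUNCTION_WORDS = {"and", "but", "or", "in", "on", "of", "to", "with", "not"}

def weight_coefficients(text):
    """Weight of each function word = its count / total word count, in percent."""
    words = re.findall(r"[\w']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return {w: 100.0 * counts[w] / total for w in FUNCTION_WORDS}

def style_distance(profile_a, profile_b):
    """Sum of differences of same-type weights: smaller means closer styles."""
    return sum(abs(profile_a[w] - profile_b[w]) for w in FUNCTION_WORDS)
```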



Mathematical modeling
Asymptotic Numerical Method for Multidimensional Integrals in Forecasting of Thermokarst Lakes
Abstract
We develop an analytical method for the approximate calculation of multidimensional integrals, aimed at solving balance equations in Randomized Machine Learning procedures, which are used to forecast the evolution of the area of thermokarst lakes. The method is based on the series expansion of an analytical function, the exponential, and on the transformation of multidimensional integrals into products of simple one-dimensional integrals over interval sets.
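A schematic illustration of the stated idea, in illustrative notation rather than the article's own:

```latex
% For a separable exponent on a box $\Omega=\prod_{i}[a_i,b_i]$, the
% multidimensional integral factorizes into one-dimensional integrals
% (assuming $\theta_i \neq 0$):
\[
\int_{\Omega} \exp\Big(\sum_{i=1}^{n}\theta_i x_i\Big)\,dx
  \;=\; \prod_{i=1}^{n}\int_{a_i}^{b_i} e^{\theta_i x_i}\,dx_i
  \;=\; \prod_{i=1}^{n}\frac{e^{\theta_i b_i}-e^{\theta_i a_i}}{\theta_i},
\]
% while a non-separable analytic exponent is handled through the truncated series
\[
e^{g(x)} \;\approx\; \sum_{k=0}^{K}\frac{g(x)^{k}}{k!},
\]
% whose terms again reduce to products of one-dimensional integrals over the
% interval sets.
```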



On Some Properties of Nonlinear Integral Models of Dynamic Processes
Abstract
The paper presents algorithms for constructing dynamic models of technical (energy) systems under noisy data. We consider a class of nonlinear systems of Volterra-type integral equations of the first kind with an input signal consisting of two components. The problem of identifying the input signal of a linear system is well studied when reduction to a system of equations of the second kind is performed by differentiating the Volterra integral equations of the first kind. When constructing the models, a control input action is formed that provides a specified response of the dynamic system. Identification algorithms based on the theory of Volterra polynomial equations are used. The paper considers the case of noisy initial data, including the case when the non-degeneracy condition for the matrices multiplying the main part is violated at some fixed points in time.
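The classical differentiation-based reduction mentioned above can be written, in the scalar case and in illustrative notation, as follows:

```latex
% A Volterra equation of the first kind
\[
\int_{0}^{t} K(t,s)\,x(s)\,ds = f(t), \qquad f(0)=0,
\]
% is differentiated with respect to $t$, giving an equation of the second kind
\[
K(t,t)\,x(t) + \int_{0}^{t}\frac{\partial K(t,s)}{\partial t}\,x(s)\,ds = f'(t),
\]
% which is well posed as long as $K(t,t)$ (in the vector case, the matrix
% multiplying the main part) does not degenerate; the paper studies what
% happens when this condition fails at some fixed points in time.
```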



Optimizing Costs for Digital Transformation of the Region
Abstract
The article is devoted to the financial problems of the digital transformation of regions of the Russian Federation. Principles and criteria for cost optimization are proposed for forming a state order for the digitalization of public administration in the regions and the budgetary sector (health care, education, the social sphere, and urban services), taking into account the current geopolitical situation in the country and the world, the tasks of digital transformation of the region, and the directives of federal authorities aimed at integrating regional systems into the federal information infrastructure and at using domestic software and hardware products.



Software engineering
Development of Simulation Controls Based on the driftFluxFoam Solver of the OpenFOAM Platform
Abstract
The paper covers the issues of expanding the capabilities of a graphical shell for the OpenFOAM software package by connecting a module for controlling numerical modeling in the field of continuum mechanics using the driftFluxFoam solver. Existing approaches to solving the problem of the lack of a graphical user interface for OpenFOAM are analyzed, their shortcomings are identified, and the relevance of the research is substantiated. The main components of the work are outlined: the research topic, goals and objectives, results, the novelty of the work, and the expected practical value. The technology stack by means of which the research goal is achieved is presented and justified, and the features and advantages of each technology are highlighted. A diagram is provided showing the step-by-step process of a user working with the product. The stages of conducting a numerical experiment using the proposed graphical shell and the driftFluxFoam solver are described. The key techniques proposed by the author that distinguish the current product from its closest analogues are highlighted. The possibility of using the selected stack to achieve the development goals is confirmed, and the main directions for further research on the topic under consideration are formulated.
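The article's GUI internals are not described here, so the following is only a minimal sketch of how such a shell might drive a solver run in a prepared OpenFOAM case directory; it assumes the OpenFOAM environment is sourced and the standard blockMesh and driftFluxFoam executables are on PATH.

```python
# Minimal sketch, not the article's implementation: launching a driftFluxFoam
# run from a graphical shell via subprocess in a prepared case directory.
import subprocess
from pathlib import Path

def run_case(case_dir):
    case = Path(case_dir)
    for tool in ("blockMesh", "driftFluxFoam"):
        log = case / f"log.{tool}"
        with open(log, "w") as out:
            subprocess.run([tool, "-case", str(case)], stdout=out,
                           stderr=subprocess.STDOUT, check=True)
    return case / "log.driftFluxFoam"   # the shell can parse this log for progress
```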


