The Transactions of the Korean Institute of Electrical Engineers

  1. (School of Information Technology and Engineering, Kazakh-British Technical University, Kazakhstan.)



Keywords: Artificial Intelligence, Financial Decision-Making, Predictive Modeling, Business Process Model and Notation.

1. Introduction

Financial decision-making remains critical to both personal and industrial success. However, with the rapid growth of global markets, increasingly complex financial instruments, and an enormous increase in data, traditional decision-making approaches are becoming less effective. To address these challenges, intelligent financial systems that use artificial intelligence (AI), technical analysis, and process automation to provide adaptive, real-time analytic insights are gaining popularity[1].

Decades of historical asset data, technical indicators such as the relative strength index (RSI), and automated decision-making logic based on Business Process Model and Notation (BPMN) are used to identify unusual market behavior and potential anomalies in the financial sector. Integrating the RSI into the analytic module allows the system to more accurately assess the momentum of financial assets and recognize overbought or oversold conditions in the market.

There are many ways to forecast stock markets, and machine learning methods for forecasting share prices are often divided into three broad categories: support vector machines (SVM), neural networks (NN), and random forests. Linear regression, one of the simplest and oldest machine learning methods, serves as a starting point and describes the relationship between share prices and a variety of explanatory factors. Despite its simplicity and clarity, this approach is limited by its linear nature and is not always able to effectively reflect the complex relationships in financial markets.

To overcome these constraints, more complex methods have been introduced, such as decision trees, among which Random Forest stands out. These methods outperform linear models by capturing non-linear relationships between features, improving prediction accuracy.

Another important market forecasting method is the SVM, which transforms the input data into a higher-dimensional space for optimal separation of the various classes. This ability makes SVMs especially useful for market classification or price prediction based on multiple input parameters. However, despite their advantages, SVMs can be resource-intensive and require careful tuning to achieve the best results.

The advent of neural networks has significantly advanced the field of stock market forecasting. Simple feedforward neural networks (FNNs), as well as more complex architectures such as recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, successfully model the temporal dependencies and non-linear patterns inherent in stock market data.

This study presents a detailed comparative analysis of various machine learning algorithms for stock market time series forecasting. Both basic algorithms widely used in practice, such as linear regression and support vector machines, and modern methods, including convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and transformer-based architectures, are considered.

2. Literature Review and Algorithms Used

2.1 XGBoost (eXtreme Gradient Boosting)

XGBoost is an advanced version of the gradient-boosted decision tree (GBDT) algorithm. It is based on similar principles but uses second derivatives of the loss function to increase accuracy, regularization to prevent overfitting, and data batching to boost processing speed. Due to its efficiency, flexibility, and optimized memory usage, XGBoost has become popular in fields such as economics, data intelligence, and recommendation systems.

The XGBoost algorithm is widely used for stock price prediction based on high-frequency trading data and has been considered one of the leading machine learning methods. The high accuracy of daily stock price predictions using XGBoost shows that economic management can be done effectively with modern algorithms, provided that overfitting and underfitting are avoided during model parameter tuning. Thus, real-time economic analysis based on complex algorithms is likely to become more widespread, driven by a deeper understanding of market processes and the strategies developed from them[2].
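The additive principle underlying GBDT can be sketched with depth-1 regression trees (stumps) fitted to residuals. This is a minimal illustration of gradient boosting, not XGBoost itself, which adds second-order loss terms, regularisation, and optimised batching; the toy series is an illustrative assumption.

```python
def fit_stump(xs, residuals):
    """Find the single split that minimises squared error on the residuals."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, lm, rm = best
    return lambda x: lm if x <= split else rm

def boost(xs, ys, n_rounds=50, lr=0.3):
    """Each round fits a stump to the current residuals (the negative
    gradient of squared loss) and adds it to the ensemble, scaled by lr."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for x, p in zip(xs, pred)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Toy "price" target: the ensemble converges to the step shape.
xs = [1, 2, 3, 4, 5, 6]
ys = [10.0, 10.0, 10.0, 20.0, 20.0, 20.0]
model = boost(xs, ys)
print(round(model(2), 2), round(model(5), 2))  # 10.0 20.0
```

With each round the residual shrinks by a factor of (1 - lr), which is the sense in which boosting "corrects" the mistakes of earlier trees.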

2.2 SVM

SVMs have long been a focus of attention in the machine learning community. In a study by Yang et al.[3], SVMs were used to estimate market volatility through deviations in prediction boundaries and their reduction. It was also investigated whether using only an asymmetric boundary for downward movements can reduce downside risk.

The proposed methodology showed the highest prediction accuracy for daily closing prices of a stock index. The use of the SVM model provided significant advantages for both investors and regulators in evolving markets such as the Indian stock market. Further research in this area could enhance this approach by expanding the model to include macroeconomic variables other than share prices that also have a significant impact, such as interest rates, exchange rates or the consumer price index (CPI)[3].
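The higher-dimensional separation idea behind SVMs can be illustrated with an explicit feature map rather than a true kernelised SVM; the toy data and the threshold below are illustrative assumptions, not part of the cited study.

```python
# Points inside vs. outside an interval are not linearly separable in 1-D,
# but after the mapping x -> (x, x^2) a single linear rule separates them.
def feature_map(x):
    return (x, x * x)

inside = [-1.0, 0.0, 1.0]          # class +1: |x| <= 1.5
outside = [-3.0, -2.0, 2.0, 3.0]   # class -1: |x| >  1.5

# In the mapped space the decision rule is linear in the second coordinate.
threshold = 1.5 ** 2

def classify(x):
    _, x2 = feature_map(x)
    return +1 if x2 <= threshold else -1

print([classify(x) for x in inside])   # [1, 1, 1]
print([classify(x) for x in outside])  # [-1, -1, -1, -1]
```

A real SVM never computes the mapped coordinates explicitly; a kernel function evaluates inner products in the mapped space directly, which is what makes high-dimensional separation computationally feasible.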

2.3 LSTM

Previous researchers have studied the effectiveness of LSTM models in dealing with the non-stationarity of financial time series, including the noise, volatility, and non-linearity that are characteristic of the stock market. The LSTM model was trained on historical stock price data, and the prediction accuracy was estimated against real market price movements[4].

The results confirmed that LSTM models work well for forecasting stock price dynamics, as they are capable of forming forecast distributions and effectively managing time dependencies while capturing long-term patterns in the data. The new LSTM model outperformed traditional models in terms of forecast accuracy and consistency, making it more useful for practical applications[5]. Nevertheless, the authors also emphasised the inherent unpredictability of the stock market, noting that, while LSTMs can substantially improve forecast accuracy, their application cannot fully eliminate the inherent uncertainty in financial forecasting[6].
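As a sketch of how an LSTM manages the long-term dependencies discussed above, the forward pass of a single-unit LSTM cell can be written out directly; the gate weights and the input sequence are illustrative assumptions, not trained values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell.
    w maps each gate name to (input weight, recurrent weight, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g    # cell state: the long-term memory path
    h = o * math.tanh(c)      # hidden state: the step's output
    return h, c

# Run the cell over a short, scaled "price" sequence with placeholder weights.
weights = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [0.1, 0.2, 0.15, 0.3]:
    h, c = lstm_step(x, h, c, weights)
print(round(h, 4), round(c, 4))
```

The additive update of the cell state c is what lets gradients (and hence long-range information) survive across many time steps, in contrast to the purely multiplicative recurrence of a plain RNN.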

2.4 GARCH (Generalized Autoregressive Conditional Heteroskedasticity)

An extensive scientific literature has formed on GARCH models used to predict the volatility of company shares, national stock indices, and commodity prices. Various GARCH model variants, such as linear and nonlinear or asymmetric and symmetric models, have been assessed in these markets.

The use of artificial neural networks for volatility forecasting is a fairly new and modern research area, in contrast to the statistical models traditionally used for this purpose. However, artificial neural networks have shown high efficiency in forecasting, classification, and anomaly detection tasks.
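The conditional-variance recursion at the core of a GARCH(1,1) model can be sketched in a few lines; the parameter values and the return series below are illustrative assumptions chosen to make the clustering behaviour visible.

```python
def garch_variance(returns, omega=0.1, alpha=0.1, beta=0.8):
    """GARCH(1,1): sigma^2_t = omega + alpha * r^2_{t-1} + beta * sigma^2_{t-1}.
    Initialised at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# A single large return shock raises next-period variance, which then decays
# geometrically back toward the long-run level -- volatility clustering.
rets = [0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
var_path = garch_variance(rets)
print([round(v, 3) for v in var_path])  # [1.0, 0.9, 0.82, 3.256, 2.705, 2.264]
```

The persistence of the spike is governed by alpha + beta (here 0.9): the closer the sum is to 1, the longer the market "remembers" a shock.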

2.5 Random Forest and Artificial Neural Networks

The stock market is simulated with the Random Forest method, an ensemble machine learning technique that constructs many decision trees and outputs the majority prediction among them. In modern machine learning research, stock market forecasting is considered not only an academic problem but also a challenging practical one due to its non-linearity and volatility. These properties are better reflected by modern techniques.

Although algorithms such as artificial neural networks (ANN) and SVM have been actively studied, ensemble methods like Random Forest are not often used in e-commerce systems. However, recent studies, such as that by Xiong et al.[7], show that integrating knowledge graphs and graph-based neural models can significantly improve the accuracy of stock market forecasting.

The results revealed that integrating a Random Forest classifier into the task achieves accuracy ranging from 85% to 95%. The model was tested on metrics such as recall, precision, accuracy, specificity, and the ROC curve. The work emphasises the many advantages of non-linear models over linear models in predicting stock market trends[7, 8].
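The bootstrap-and-vote principle behind Random Forest can be sketched with deliberately simplified "trees" (single-threshold rules on one feature); a real forest grows full decision trees over random feature subsets. The data and the seed below are illustrative assumptions.

```python
import random

def fit_threshold_rule(sample):
    """Pick the threshold that best separates the two classes in the sample."""
    best = None
    for x, _ in sample:
        errors = sum(1 for xi, yi in sample if (1 if xi > x else 0) != yi)
        if best is None or errors < best[0]:
            best = (errors, x)
    t = best[1]
    return lambda v: 1 if v > t else 0

def random_forest(data, n_trees=25, seed=0):
    """Fit each 'tree' on a bootstrap sample and predict by majority vote."""
    rng = random.Random(seed)
    trees = [fit_threshold_rule([rng.choice(data) for _ in data])
             for _ in range(n_trees)]
    def predict(v):
        votes = sum(tree(v) for tree in trees)
        return 1 if votes * 2 > len(trees) else 0
    return predict

# Label 1 when the feature exceeds 5; the vote recovers the rule.
data = [(x, 1 if x > 5 else 0) for x in range(10)]
forest = random_forest(data)
print(forest(2), forest(8))  # 0 1
```

Averaging over bootstrap resamples is what reduces the variance of the individual high-variance trees, which is the property the cited studies exploit for noisy market data.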

3. Methodology

3.1 Model Structure

Two types of data sets are used in this analysis: the source data set and the training data set. The source data set is a large data set based on an exchange-traded fund (ETF) and is used to train the model. The training data set is then used to solve new problems. During training, the source data is pre-processed and fed into the pre-trained model. The financial decision-making system architecture is designed to be scalable, modular, and adaptable[9]. It combines historical data analysis, machine learning models, technical indicators such as the RSI and moving average convergence divergence (MACD), and automated process execution using BPMN. This multi-layered design ensures that the system not only provides accurate financial forecasts but also responds intelligently to dynamic market conditions[10].

Fig. 1. The architecture of the Machine Learning-Based Forecasting Framework.

../../Resources/kiee/KIEE.2026.75.3.658/fig1.png

3.2 Selecting a Model

This section shows how future market prices are predicted. The proposed method is shown in Fig. 1, which illustrates the multi-stage structure of the stock market trend forecasting system based on machine learning methods. The process starts with the data collection stage, where historical data on the stock market, including stock prices, technical indicators, trading volumes, and macroeconomic indicators, are gathered[11]. This is followed by the data pre-processing stage, which includes cleaning, normalisation, feature generation, and training sample formation. At the model selection step, a suitable prediction algorithm is selected based on data characteristics and research objectives[12, 13]. The models covered include Random Forest, LSTM, GARCH, SVM, and XGBoost, each of which represents a different class of statistical and intelligent analysis methods[14]. After selecting a model, training is performed, where the model is fitted to historical data to reveal patterns in price dynamics. The model evaluation phase involves measuring its predictive accuracy using metrics such as Accuracy, RMSE, AUC (area under the curve), and others. At the comparative analysis stage, the performance of different models is compared to identify the most suitable approach depending on market conditions. Then, robustness testing is conducted to ensure the stability of models in stressful or unstable market situations. The process ends with the formulation of conclusions and recommendations, where the results are summarized and suggestions are made on how to use the model in applied tasks of automated decision-making in the stock market[15~17].
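The staged flow described above can be sketched end to end. The stage functions below are illustrative placeholders (a lag-window mean stands in for the trained model), not the study's actual implementation; the toy price series is also an assumption.

```python
def preprocess(prices):
    """Clean/normalise, then build (lag-window, next-value) training samples."""
    lo, hi = min(prices), max(prices)
    norm = [(p - lo) / (hi - lo) for p in prices]   # min-max normalisation
    window = 3
    return [(norm[i - window:i], norm[i]) for i in range(window, len(norm))]

def train(samples):
    """Placeholder 'model': predict the mean of the lag window."""
    return lambda lags: sum(lags) / len(lags)

def evaluate(model, samples):
    """RMSE over held-out samples."""
    errs = [(model(w) - y) ** 2 for w, y in samples]
    return (sum(errs) / len(errs)) ** 0.5

prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110]
samples = preprocess(prices)
split = int(len(samples) * 0.7)     # chronological split avoids look-ahead
model = train(samples[:split])
score = evaluate(model, samples[split:])
print(round(score, 4))
```

The chronological (rather than shuffled) train/test split mirrors the robustness-testing stage: a forecasting model must be scored only on data that lies strictly after its training window.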

4. Data Collection

This study uses two key datasets curated to support the quantitative analysis of stock price dynamics and technical indicators. The main goal of the data collection process is to ensure high-quality, time-aligned, and analytically robust financial data suitable for empirical modeling and anomaly detection.

4.1 Historical Stock Market Data

The first dataset consists of multi-year historical trading data for selected equities, including daily open, high, low, and close prices, trading volume, and turnover. These indicators serve as foundational inputs for time series modeling and trend analysis. The data was gathered via automated extraction from verified financial data providers, ensuring consistency, accuracy, and replicability of results[18].

4.2 Technical Indicator Series

The second dataset contains pre-calculated values of the RSI, a widely used momentum oscillator in technical analysis. RSI values were computed using a 14-day rolling window applied to the historical price data, allowing for the identification of overbought or oversold market conditions. This dataset enables the integration of momentum-based signals into the modeling framework[19].

4.3 Temporal Scope

The combined data spans the period from November 29, 2018 to November 29, 2023, covering various market phases, including bullish expansions and bearish corrections. This five-year time frame captures diverse financial conditions, making it suitable for evaluating the stability and adaptability of forecasting models across different economic cycles[20~22].

The dataset contains essential variables required for a comprehensive analysis of stock dynamics. These variables are presented in Table 1.

For the purposes of this research, a structured dataset was collected containing daily trading indicators for the shares of leading American technology companies such as Apple, Microsoft, NVIDIA, Alphabet, Amazon, Meta, Tesla, and others[23, 26]. The data was extracted from the Bloomberg platform, which guarantees its accuracy and financial reliability, as illustrated in Fig. 2.

Each row of the dataset in Fig. 2 contains the trading session date, the opening price, daily maximum and minimum prices, the closing price, the total volume of trades, information about dividends paid, data on stock splits, and the name of the corresponding company[26~27].

Table 1. Identified variables for modeling

Variable             | Description
---------------------|------------------------------------------------------------------------
Date                 | Date corresponding to the trading session
Previous close price | Stock's closing price recorded on the preceding trading day
Opening price        | Initial transaction price at market open on the specified trading day
Highest price        | Peak price reached by the stock during the trading session
Lowest price         | Minimum price at which the stock was traded throughout the day
Closing price        | Final transaction price recorded before the market closed on the given day
Average price        | Arithmetic mean of the daily high and low prices
Traded volume        | Number of shares exchanged throughout the trading session

Fig. 2. A fragment of the original dataset from the Bloomberg platform: daily market indicators for the largest US technology companies, including prices and trading volumes.

../../Resources/kiee/KIEE.2026.75.3.658/fig2.png

5. Results and Discussion

In this study, one of the main technical indicators is the RSI, which is used to assess the strength and direction of price momentum. RSI helps determine whether an asset is overbought or oversold based on recent price changes. The RSI formula is based on the ratio of the average positive and negative price changes over a given period (usually 14 days),

(1)
$RSI = 100 - \frac{100}{1 + \frac{\text{Average Gain}}{\text{Average Loss}}},$

where Average Gain is the average value of positive price changes over a period and Average Loss is the average value of negative price changes over a period.

The RSI indicator ranges from 0 to 100. Values above 70 are traditionally interpreted as an overbought signal, while values below 30 are interpreted as an oversold signal. To ensure data integrity, preprocessing steps included filtering out incomplete records, harmonizing date formats, and aligning the price and RSI series[28, 29]. All operations were performed programmatically to support reproducibility and ensure methodological transparency. At the heart of the research lies a robust data management pipeline that gathers, processes, and structures financial time series data from various sources, including public APIs, stock exchange records, and institutional datasets.
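Equation (1) can be implemented directly using a simple (non-smoothed) 14-day average of gains and losses; the sample price series below are illustrative assumptions, not the study's data.

```python
def rsi(closes, period=14):
    """RSI per equation (1); the first `period` entries are None
    because there is not yet enough history to fill the window."""
    out = [None] * len(closes)
    for i in range(period, len(closes)):
        gains, losses = 0.0, 0.0
        for j in range(i - period + 1, i + 1):
            change = closes[j] - closes[j - 1]
            if change > 0:
                gains += change
            else:
                losses -= change
        avg_gain, avg_loss = gains / period, losses / period
        if avg_loss == 0:   # only gains in the window: RSI saturates at 100
            out[i] = 100.0
        else:
            out[i] = 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
    return out

# Monotonically rising prices have no losses, so RSI saturates deep
# in the overbought zone (> 70).
print(rsi([100 + k for k in range(20)])[-1])  # 100.0

# Alternating +2 / -1 moves: avg gain 1.0, avg loss 0.5, RS = 2,
# so RSI = 100 - 100/3.
mixed = [100, 102, 101, 103, 102, 104, 103, 105,
         104, 106, 105, 107, 106, 108, 107]
print(round(rsi(mixed)[-1], 2))  # 66.67
```

Production implementations usually apply Wilder's exponential smoothing to the gain/loss averages rather than the plain window mean used here; the thresholds at 70 and 30 are interpreted the same way in both variants.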

Figure 3 shows a visualization of stock price dynamics with overbought and oversold signals based on the RSI. The upper graph displays the daily closing prices of the selected share over a five-year period, where the blue bars indicate closing prices, red markers indicate overbought conditions (RSI > 70), and green markers indicate oversold periods (RSI < 30). This visualisation helps to clearly identify key turning points in the market and potentially significant trend reversal signals. The lower chart shows the RSI time series with dotted lines representing the 70 and 30 levels. These thresholds are commonly used in technical analysis to assess extreme values of market impulsiveness. The RSI is calculated over a 14-day moving window and synchronized with real market conditions.

Fig. 3. Stock closing price chart with RSI indicators

../../Resources/kiee/KIEE.2026.75.3.658/fig3.png

This figure illustrates one of the key components of the data processing pipeline under investigation, where raw financial time series are enhanced with technical indicators. Through normalization, threshold filtering, and feature engineering, these signals are transformed into structured data ready for use in machine learning models to predict market dynamics and identify anomalies.

5.1 R-squared

R-squared evaluates how well the predicted values approximate the actual data. It is defined as:

(2)
$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2},$

where $y_i$ is the actual value, $\hat{y}_i$ the predicted value, $\bar{y}$ the mean of the actual values, and $n$ the number of observations. An $R^2$ value closer to 1 indicates better explanatory power of the model.
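Equation (2) translates directly into code; the toy values below are illustrative.

```python
def r_squared(y, y_hat):
    """Coefficient of determination per equation (2)."""
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual sum
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)               # total sum
    return 1.0 - ss_res / ss_tot

actual = [10.0, 12.0, 14.0, 16.0]
print(r_squared(actual, [10.0, 12.0, 14.0, 16.0]))  # 1.0 (perfect fit)
print(r_squared(actual, [13.0, 13.0, 13.0, 13.0]))  # 0.0 (no better than the mean)
```

The second call illustrates the baseline meaning of the metric: always predicting the mean of the actual values yields exactly $R^2 = 0$, and a model can score below zero if it does worse than that baseline.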

5.2 Root Mean Square Error

RMSE evaluates the square root of the average squared differences between actual and predicted values, penalizing large deviations more severely:

(3)
$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}.$

This metric is sensitive to outliers and often used when large errors are particularly undesirable.

5.3 Mean Absolute Error

MAE measures the average of absolute differences between predicted and actual values, offering a linear and more interpretable error metric:

(4)
$MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|.$

It is less sensitive to outliers than RMSE and is thus suitable for applications where all errors are equally weighted.

5.4 Accuracy

For classification-based models, prediction Accuracy is defined as the proportion of correct predictions among total predictions made:

(5)
$Accuracy = \frac{\text{Number of correct predictions}}{\text{Total number of predictions}}$

$= \frac{TP + TN}{TP + TN + FP + FN},$

where TP = True Positives, TN = True Negatives, FP = False Positives, and FN = False Negatives. This metric is effective for balanced datasets but should be supplemented with precision, recall, and F1-score in imbalanced scenarios.
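Equations (3)-(5) can be implemented in a few lines; the toy values below are illustrative.

```python
import math

def rmse(y, y_hat):
    """Equation (3): root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def mae(y, y_hat):
    """Equation (4): mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def accuracy(y_true, y_pred):
    """Equation (5): share of correct classifications."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

y, y_hat = [100.0, 102.0, 101.0, 103.0], [101.0, 101.0, 102.0, 103.0]
print(round(rmse(y, y_hat), 4))  # sqrt(3/4) = 0.866
print(mae(y, y_hat))             # 0.75
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

Note that RMSE is never smaller than MAE on the same data (squaring weights large errors more), which is why the two are reported together when large deviations matter.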

To clearly assess the effectiveness of the machine learning models in time series forecasting, graphs comparing actual closing prices with predicted results were constructed. Below are visualisations for each of the tested models, including XGBoost, Random Forest, LSTM, and others. These graphs provide a visual illustration of prediction accuracy and model stability. Each visualization is followed by a brief description reflecting the behavior of the model on the test sample.

Fig. 4. Comparison of actual and predicted closing prices of shares using the XGBoost, LSTM, SVM, and Random Forest models.

../../Resources/kiee/KIEE.2026.75.3.658/fig4.png

Figure 4 shows a graph comparing the actual and predicted closing prices of shares using the XGBoost, LSTM, SVM, and Random Forest models. Despite the volatility, the predictions show a close approximation to the actual market trend, indicating the model's potential ability to capture the overall market movement, but with some systematic deviations and underestimation of the amplitude of fluctuations.

Figure 5 shows a comparison of the actual closing price and Random Forest model predictions on the training (green) and test (pink) samples. The orange line shows the real closing prices; the model predictions on the training data are almost identical to the real prices, indicating a close fit to historical patterns. The test sample shows a slight increase in dispersion and noise, but the overall trend continues, indicating Random Forest's ability to capture nonlinear time series patterns without excessive overfitting.

Figure 6 shows the predicted 10-day forward volatility of TCS shares, estimated using the XGBoost, LSTM, SVM, Random Forest, and GARCH models. The volatility clustering typical of GARCH is clearly visible: periods of calm behaviour are followed by spikes representing market shocks, after which volatility returns to its average level. This structure validates the model's ability to account for heteroskedasticity and retain memory of recent changes in dispersion, which is critical for an adequate assessment of future risk.

Fig. 5. Comparison of actual and predicted closing prices for (A) XGBoost, (B) SVM, (C) LSTM, (D) GARCH, and (E) Random Forest models.

../../Resources/kiee/KIEE.2026.75.3.658/fig5.png

Fig. 6. 10-day volatility forecast for TCS shares obtained using the XGBoost, LSTM, SVM, Random Forest, and GARCH models.

../../Resources/kiee/KIEE.2026.75.3.658/fig6.png

Table 2. Comparative Model Evaluation

Model         | $R^2$   | RMSE   | MAE
--------------|---------|--------|-------
XGBoost       | 0.89198 | 101.69 | 74.30
LSTM          | 0.89265 | 10.18  | 17.50
SVM           | 0.75126 | 150.39 | 115.97
Random Forest | 0.71907 | 203.07 | 308.08
GARCH         | 0.88141 | 23.25  | 23.91

A summary of all evaluations of these models across the different metrics (RMSE, R-squared, MAE) is provided in Table 2. These quality indicators can be used to determine which model is best suited to the actual time series data.

Fig. 7. Comparison of models in terms of forecast accuracy.

../../Resources/kiee/KIEE.2026.75.3.658/fig7.png

Figure 7 shows a comparative analysis of models based on accuracy metrics for market signal forecasting. The LSTM model demonstrated the highest accuracy (93.54%), which reflects its ability to account for long-term and seasonal dependencies in time series. It is followed by GARCH (91.28%), showing strong results due to its modelling of conditional heteroskedasticity, and then XGBoost (88.14%), which accurately captures non-linear patterns. The SVM (80.11%) and Random Forest (77.41%) models performed less well, which may indicate their limited ability to adapt to the complex time series structure of the data or a need for more careful hyperparameter tuning. This distribution of accuracies provides useful guidance for choosing priority models for further integration into trading strategies and decision-making systems.

However, this implementation has some limitations. The structure assumed stationarity within the selected time windows and did not include explicit detection of market regime changes or consideration of macroeconomic covariates, which can reduce the model's stability during structural changes and sudden shifts in market dynamics.

6. Conclusion

This study presented a comparative analysis of statistical volatility models and machine learning techniques for financial time series prediction. The models were tested to determine which gives the best results, so that the best-performing model can be used in further work to obtain more accurate forecasts. Among the evaluated models, the LSTM achieved the highest forecasting accuracy, while the GARCH model complemented the analysis by capturing conditional heteroskedasticity and identifying characteristic volatility clustering patterns. The XGBoost model demonstrated an ability to detect nonlinear dependencies; however, systematic amplitude underestimation was observed in several modes. Despite yielding relatively lower accuracy metrics, both the Random Forest and SVM models remain valuable from a comparative perspective. Notably, a trading strategy derived from SVM-generated signals consistently outperformed a passive asset-preservation benchmark, as evidenced by the cumulative return profiles.

A visual comparison of actual and predicted values not only validated the behavior of the models but also identified their strengths and systematic deviations. Analysis of cumulative returns showed that turning predictions into trading decisions can bring extra profits compared to the basic hold strategy, confirming the practical value of integrating predictive signals into the trading logic. A comparative analysis of model accuracy formed a clear hierarchy of priorities for their use in future workflows, with LSTM and GARCH demonstrating themselves as the most reliable components of the basic structure.

Acknowledgments

This research has been funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. BR28712579).

References

1. M. Xie, 2019, Development of Artificial Intelligence and Effects on Financial System, Journal of Physics: Conference Series, Vol. 1187, No. 3, pp. 032084.
2. T. Chen, C. Guestrin, 2016, XGBoost: A Scalable Tree Boosting System, pp. 785-794.
3. Y. Yang, C. H. Liu, S. Chen, 2014, Stock market prediction based on SVM optimized by genetic algorithm, Procedia Computer Science, Vol. 31, pp. 1130-1135.
4. T. Fischer, C. Krauss, 2018, Deep learning with long short-term memory networks for financial market predictions, European Journal of Operational Research, Vol. 270, No. 2, pp. 654-669.
5. W. Bao, J. Yue, Y. Rao, 2017, A deep learning framework for financial time series using stacked autoencoders and long-short term memory, PLOS ONE, Vol. 12, No. 7, pp. e0180944.
6. Y. Bao, Z. Yue, Y. Rao, 2019, A deep learning framework for financial time series using stacked LSTM, Procedia Computer Science, Vol. 147, pp. 632-638.
7. X. Ding, Y. Zhang, T. Liu, J. Duan, 2016, Knowledge-Driven Event Embedding for Stock Prediction, pp. 2133-2142.
8. C. Krauss, X. A. Do, N. Huck, 2017, Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500, European Journal of Operational Research, Vol. 259, No. 2, pp. 689-702.
9. G. E. P. Box, G. M. Jenkins, G. C. Reinsel, G. M. Ljung, 2015, Time Series Analysis: Forecasting and Control.
10. O. Shobayo, S. Adeyemi-Longe, O. Popoola, O. Okoyeigbo, 2025, A Comparative Analysis of Machine Learning and Deep Learning Techniques for Accurate Market Price Forecasting, Analytics, Vol. 4, No. 1, pp. 5.
11. M. Ballings, D. Van den Poel, N. Hespeels, R. Gryp, 2015, Evaluating multiple classifiers for stock price direction prediction, Expert Systems with Applications, Vol. 42, No. 20, pp. 7046-7056.
12. E. Chong, C. Han, F. C. Park, 2017, Deep learning networks for stock market analysis and prediction: Methodology, data representations, and case studies, Expert Systems with Applications, Vol. 83, pp. 187-205.
13. J. Patel, S. Shah, P. Thakkar, K. Kotecha, 2015, Predicting stock market index using fusion of machine learning techniques, Expert Systems with Applications, Vol. 42, No. 4, pp. 2162-2172.
14. L. Zhang, C. Aggarwal, G. J. Qi, 2017, Stock Price Prediction via Discovering Multi-Frequency Trading Patterns, pp. 2141-2149.
15. A. Bansal, A. Singh, S. Roy, K. Agarwal, 2024, Price Wise: A Deep Learning Approach to Stock Price Prediction, pp. 1-6.
16. L. Mochurad, A. Dereviannyi, 2024, An ensemble approach integrating LSTM and ARIMA models for enhanced financial market predictions, Royal Society Open Science, Vol. 11, No. 9, pp. 240699.
17. X. Duan, M. Pan, 2024, An intelligent financial data mining system using a fuzzy clustering multimedia approach, Journal of Control and Decision, Vol. 12, No. 4, pp. 1-10.
18. Z. Li, J. Wang, 2017, An optimization application of artificial intelligence technology in enterprise financial management, Boletin Tecnico/Technical Bulletin, Vol. 55, No. 11, pp. 83-89.
19. D. Jahed Armaghani, E. Tonnizam Mohamad, M. Hajihassani, S. V. Alavi Nezhad Khalil Abad, A. Marto, M. R. Moghaddam, 2015, Evaluation and prediction of flyrock resulting from blasting operations using empirical and computational methods, Engineering with Computers, Vol. 32, No. 1, pp. 109-121.
20. X. Gong, Z. Wang, L. Wang, 2018, Research on financial early warning model for papermaking enterprise based on particle swarm K-means algorithm, Paper Asia, Vol. 34, No. 6, pp. 41-45.
21. S. Zhai, 2017, Research on enterprise financial management and decision making based on decision tree algorithm, Boletin Tecnico/Technical Bulletin, Vol. 55, No. 15, pp. 166-173.
22. L. Wang, Y. Liu, J. Wu, 2018, Research on financial advertisement personalised recommendation method based on customer segmentation, International Journal of Wireless and Mobile Computing, Vol. 14, No. 1, pp. 97.
23. H. Sun, Z. Yao, Q. Miao, 2021, Design of Macroeconomic Growth Prediction Algorithm Based on Data Mining, Mobile Information Systems, Vol. 2021, pp. 1-8.
24. Y. Kang, 2022, Fusion analysis of management accounting and financial accounting based on data mining, Vol. 12330, pp. 375-380.
25. D. Koishiyeva, A. Bissembayev, T. Iliev, J. W. Kang, A. Mukasheva, 2024, Classification of Skin Lesions using PyQt5 and Deep Learning Methods, pp. 1-7.
26. S. Lessmann, B. Baesens, H. V. Seow, L. C. Thomas, 2015, Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research, European Journal of Operational Research, Vol. 247, No. 1, pp. 124-136.
27. W. Jin, N. Wang, L. Zhang, X. Tian, B. Shi, B. Zhao, 2025, A Review of AI-Driven Automation Technologies: Latest Taxonomies, Existing Challenges, and Future Prospects, Computers, Materials and Continua, Vol. 84, No. 3, pp. 3961-4018.
28. A. Tolkynbekova, D. Koishiyeva, A. Bissembayev, D. Mukhammejanova, A. Mukasheva, J. W. Kang, 2025, Comparative Analysis of the Predictive Risk Assessment Modeling Technique Using Artificial Intelligence, Journal of Electrical Engineering & Technology, Vol. 20, No. 6, pp. 4509-4526.
29. O. Bayazov, A. Aidos, J. W. Kang, A. Mukasheva, 2025, Voice Biometric Authentication Using AI: A Comparative Study on Neural Network Robustness to Noise and Spoofing, The Transactions of the Korean Institute of Electrical Engineers, Vol. 74, No. 10, pp. 1731-1739.

About the Authors

Almas Saduakas
../../Resources/kiee/KIEE.2026.75.3.658/au1.png

He received the B.S. degree in Mathematical and Computer Modeling from the International Information Technologies University, Almaty, Kazakhstan, and the M.S. degree in Data Science from Kazakh–British Technical University (KBTU). He is currently pursuing the Ph.D. degree in Computer Science and Artificial Intelligence at KBTU, where he also serves as a Senior Lecturer at the School of Information Technology and Engineering.

Assel Mukasheva
../../Resources/kiee/KIEE.2026.75.3.658/au2.png

She received the B.S., M.S., and Ph.D. degrees from Satbayev University, Almaty, Kazakhstan, in 2004, 2014, and 2020, respectively. In September 2023, she joined Kazakh-British Technical University, where she is currently a professor in the School of Information Technology and Engineering. Her research interests include Big Data, cyber security, machine learning, and comparative studies of deep learning methods.

Alibek Bisembayev
../../Resources/kiee/KIEE.2026.75.3.658/au3.png

He holds a PhD in Economics and is a distinguished academic and industry expert with over 21 years of multidisciplinary experience spanning higher education, finance, government, IT, and retail sectors. He currently holds the position of Associate Professor at the School of IT and Engineering at Kazakh-British Technical University, where he has been contributing for the past five years.

Dina Koishiyeva
../../Resources/kiee/KIEE.2026.75.3.658/au4.png

She received the B.S. and M.S. degrees from Almaty University of Energy and Telecommunications, Kazakhstan, in 2015 and 2024, respectively. She has authored six peer-reviewed international publications and is the first author of several of them. Her research interests include medical image analysis, multimodal learning, deep learning for healthcare, and segmentation of biomedical data.

Jeong Won Kang
../../Resources/kiee/KIEE.2026.75.3.658/au5.png

He received his B.S., M.S., and Ph.D. degrees in electronic engineering from Chung-Ang University, Seoul, Korea, in 1995, 1997, and 2002, respectively. In March 2008, he joined the Korea National University of Transportation, Republic of Korea, where he currently holds the position of Professor in the Department of Transportation System Engineering, the Department of SMART Railway System, and the Department of Smart Railway and Transportation Engineering.