Predictive Analytics for Production Line Downtime: A Comprehensive Study Using Advanced Machine Learning Models
Abstract
This study applies advanced machine learning models to predict downtime in production lines. The research follows a detailed methodology, beginning with an exploration of the production lines and their operating conditions. The focal point is Overall Equipment Effectiveness (OEE), which combines availability, performance, and quality, with particular attention to the impact of downtime. Through an OEE tracking system, downtime is categorized as planned or unplanned, offering insight into the critical components affecting production line availability. The datasets span three years and include runtime, unplanned downtime due to equipment failures, and idle time, with emphasis on periods prone to frequent equipment failures.
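For reference, OEE is conventionally computed as the product of its three factors (Nakajima, 1988). The sketch below illustrates the calculation with hypothetical shift figures; the variable names and values are illustrative assumptions, not data from this study.

```python
# Minimal sketch of the conventional OEE calculation (Nakajima, 1988).
# All figures below are hypothetical, for illustration only.

planned_time_min = 480        # planned production time for one shift (min)
downtime_min = 60             # planned + unplanned stops (min)
ideal_cycle_time_min = 1.0    # ideal time to produce one unit (min)
total_units = 380
good_units = 361

run_time_min = planned_time_min - downtime_min                     # 420 min

availability = run_time_min / planned_time_min                     # 0.875
performance = ideal_cycle_time_min * total_units / run_time_min    # ~0.905
quality = good_units / total_units                                 # 0.95

oee = availability * performance * quality
print(f"OEE = {oee:.2%}")                                          # ~75%
```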
Examining relationships through a heatmap reveals the correlations most relevant to accurate prediction. We outline the feature selection process and the fundamental data preprocessing steps taken to maintain dataset integrity. The machine learning models introduced are XGBoost, Prophet, the Multi-Layer Perceptron (MLP), and Long Short-Term Memory (LSTM); each model is configured with a past-window parameter and a forecast horizon. To prevent overfitting, the study employs K-fold cross-validation and early stopping, ensuring robust model performance.
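To make the past-window and forecast-horizon configuration concrete, the following is a minimal sketch of how a downtime series can be cut into supervised samples and split with K-fold cross-validation. The window sizes, fold count, and placeholder series are assumptions for illustration, not the study's actual settings.

```python
# Sketch: turn a downtime series into (past window -> forecast horizon)
# supervised samples, then split them with K-fold cross-validation.
import numpy as np
from sklearn.model_selection import KFold

def make_windows(series, past_window=30, horizon=7):
    """Each sample uses the previous `past_window` observations
    to predict the next `horizon` observations."""
    X, y = [], []
    for i in range(len(series) - past_window - horizon + 1):
        X.append(series[i : i + past_window])
        y.append(series[i + past_window : i + past_window + horizon])
    return np.array(X), np.array(y)

daily_downtime = np.random.rand(3 * 365)  # placeholder for 3 years of data
X, y = make_windows(daily_downtime)

for fold, (train_idx, val_idx) in enumerate(KFold(n_splits=5).split(X)):
    X_tr, X_val = X[train_idx], X[val_idx]
    y_tr, y_val = y[train_idx], y[val_idx]
    # ... fit XGBoost / MLP / LSTM on (X_tr, y_tr), monitoring (X_val, y_val)
    #     with early stopping to halt training once validation error stalls
```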
Two datasets, comprising runtime, equipment-induced downtime, and idle time, are used for the analysis. Preprocessing includes a heatmap of feature-target correlations and basic steps such as mean-based imputation of null values. Results, reported as Mean Absolute Error (MAE), show that LSTM performs best. The discussion interprets these findings, confirming the potential of LSTM for downtime prediction, supported by visualizations comparing actual and predicted downtime. Future directions include more comprehensive model comparisons, additional evaluation metrics, and larger datasets, building on the foundation established by this study.
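As a minimal illustration of the preprocessing and evaluation steps named above, the sketch below shows mean-based null imputation, a feature-target correlation heatmap, and an MAE computation; the file name and its columns are hypothetical placeholders.

```python
# Sketch of the preprocessing and evaluation steps described above.
# "production_line.csv" and its columns are hypothetical placeholders.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("production_line.csv")
df = df.fillna(df.mean(numeric_only=True))    # mean-based null imputation

# Heatmap of pairwise correlations, including feature-target relationships
sns.heatmap(df.corr(numeric_only=True), annot=True, cmap="coolwarm")
plt.show()

# MAE between actual and predicted downtime, with toy values
y_true = [30.0, 45.0, 12.0]   # actual downtime (minutes), illustrative
y_pred = [28.0, 50.0, 15.0]   # model predictions, illustrative
print(mean_absolute_error(y_true, y_pred))    # (2 + 5 + 3) / 3 ≈ 3.33
```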
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
References
Nakajima, S. (1988). Introduction to TPM: Total Productive Maintenance. Productivity Press, Portland, OR.
Next Generation Digital Factory Platform. SCW.AI. (2023, September 1). https://scw.ai
Chen, T. & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785-794.
Taylor, S. J. & Letham, B. (2017). Forecasting at scale. PeerJ Preprints 5:e3190v2. DOI: https://doi.org/10.7287/peerj.preprints.3190v2
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature 323(6088), 533-536. DOI: https://doi.org/10.1038/323533a0
Hochreiter, S. & Schmidhuber, J. (1997). Long short-term memory. Neural Computation 9(8), 1735-1780. DOI: https://doi.org/10.1162/neco.1997.9.8.1735