Enhancing Multi-Class Classification of Non-Functional Requirements Using a BERT-DBN Hybrid Model

Authors

DOI:

https://doi.org/10.29407/intensif.v9i2.24637

Keywords:

Natural Language Processing, BERT-DBN, Data Augmentation, Bayesian Optimization, Non-Functional Requirements

Abstract

Background: Software requirements classification is essential for grouping Non-Functional Requirements (NFR) into aspects such as security, usability, performance, and operability. The main challenges in NFR classification are limited data, text complexity, and the need for strong generalization. Objective: This research seeks to create a classification model using a hybrid of BERT and DBN, optimize its hyperparameters, and improve data representation. Methods: A BERT- and DBN-based approach is used, in which the DBN enhances BERT's ability to extract hierarchical features. Bayesian Optimization determines the optimal hyperparameters, and data augmentation is applied to enrich dataset variation. The model is tested on the PROMISE dataset, which consists of 625 samples. Results: The BERT-DBN model achieves 95% accuracy in the baseline configuration and 94% in the extensive configuration, outperforming the previous BERT-CNN model. The model is stable, with no indication of overfitting. Conclusion: The combination of data augmentation, hyperparameter optimization, and the DBN's ability to capture hierarchical patterns improves the accuracy of NFR classification, making it more effective than existing methods, and is expected to enhance text-based classification of software requirements.
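The hybrid described in the abstract can be pictured as BERT sentence embeddings fed through a stack of greedily pre-trained Restricted Boltzmann Machines (the DBN) before a classifier head. The following is a minimal sketch of that idea, not the authors' implementation: random vectors stand in for BERT embeddings so it runs without the transformers library, the layer sizes and learning rate are illustrative, and the RBM uses single-step contrastive divergence (CD-1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for BERT sentence embeddings: in the paper each requirement text
# would be encoded by BERT (e.g. a 768-dim [CLS] vector). Here we use random
# vectors in [0, 1] so the sketch is self-contained.
n_samples, embed_dim, hidden_dim = 64, 32, 16
X = rng.random((n_samples, embed_dim))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    """One Restricted Boltzmann Machine layer, trained with CD-1.
    Stacking such layers greedily yields a Deep Belief Network."""
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def transform(self, v):
        # Hidden-unit activation probabilities given visible input.
        return sigmoid(v @ self.W + self.b_hid)

    def fit(self, V, epochs=5):
        for _ in range(epochs):
            h0 = self.transform(V)                    # positive phase
            v1 = sigmoid(h0 @ self.W.T + self.b_vis)  # reconstruction
            h1 = self.transform(v1)                   # negative phase
            self.W += self.lr * (V.T @ h0 - v1.T @ h1) / len(V)
            self.b_vis += self.lr * (V - v1).mean(axis=0)
            self.b_hid += self.lr * (h0 - h1).mean(axis=0)
        return self

# Greedy layer-wise pre-training of a two-layer DBN on the embeddings.
rbm1 = RBM(embed_dim, hidden_dim).fit(X)
H1 = rbm1.transform(X)
rbm2 = RBM(hidden_dim, hidden_dim // 2).fit(H1)
features = rbm2.transform(H1)  # hierarchical features for a classifier head

print(features.shape)  # (64, 8)
```

In a full pipeline these DBN features would feed a softmax layer over the NFR classes, and choices such as layer widths, learning rate, and epochs would be selected by Bayesian Optimization rather than fixed by hand as above.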


Author Biographies

  • Badzliana Aqmar Suris, Universitas Ahmad Dahlan

    Mathematics, Universitas Ahmad Dahlan

  • Aris Thobirin, Universitas Ahmad Dahlan

    Mathematics, Universitas Ahmad Dahlan

  • Sugiyarto Surono, Universitas Ahmad Dahlan

    Mathematics, Universitas Ahmad Dahlan

  • Mohamed Naeem Antharathara Abdulnazar, ISMA University

    Dept. Computer Systems, ISMA University

References

[1] A. Badawi, “The Effectiveness of Natural Language Processing (NLP) as a Processing Solution and Semantic Improvement,” Int. J. Econ. Technol. Soc. Sci., vol. 2, no. 1, pp. 36–44, 2021, doi: 10.53695/injects.v2i1.194.

[2] V. Dogra et al., “A Complete Process of Text Classification System Using State-of-the-Art NLP Models,” Comput. Intell. Neurosci., vol. 2022, 2022, doi: 10.1155/2022/1883698.

[3] A. Jarzebowicz and P. Weichbroth, “A Qualitative Study on Non-Functional Requirements in Agile Software Development,” IEEE Access, vol. 9, pp. 40458–40475, 2021, doi: 10.1109/ACCESS.2021.3064424.

[4] T. Hey, J. Keim, A. Koziolek, and W. F. Tichy, “NoRBERT: Transfer Learning for Requirements Classification,” Proc. IEEE Int. Conf. Requir. Eng., vol. 2020-August, pp. 169–179, 2020, doi: 10.1109/RE48521.2020.00028.

[5] S. Saleem, M. N. Asim, L. Van Elst, and A. Dengel, “FNReq-Net: A hybrid computational framework for functional and non-functional requirements classification,” J. King Saud Univ. - Comput. Inf. Sci., vol. 35, no. 8, p. 101665, 2023, doi: 10.1016/j.jksuci.2023.101665.

[6] E. D. Canedo and B. C. Mendes, “Software requirements classification using machine learning algorithms,” Entropy, vol. 22, no. 9, 2020, doi: 10.3390/E22091057.

[7] S. Tiun, U. A. Mokhtar, S. H. Bakar, and S. Saad, “Classification of functional and non-functional requirement in software requirement using Word2vec and fast Text,” J. Phys. Conf. Ser., vol. 1529, no. 4, 2020, doi: 10.1088/1742-6596/1529/4/042077.

[8] R. K. Gnanasekaran, S. Chakraborty, J. Dehlinger, and L. Deng, “Using recurrent neural networks for classification of natural language-based non-functional requirements,” CEUR Workshop Proc., vol. 2857, 2021.

[9] K. Kaur and P. Kaur, “BERT-CNN: Improving BERT for Requirements Classification using CNN,” Procedia Comput. Sci., vol. 218, pp. 2604–2611, 2023, doi: 10.1016/j.procs.2023.01.234.

[10] J. Joo, “Predicting medical specialty from text based on a domain-specific,” Int. J. Med. Inform., vol. 170, 2023, doi: 10.1016/j.ijmedinf.2022.104956.

[11] B. Yang, B. Zhang, K. Cutsforth, S. Yu, and X. Yu, “Emerging industry classification based on BERT model,” Inf. Syst., vol. 128, no. October 2024, p. 102484, 2025, doi: 10.1016/j.is.2024.102484.

[12] M. Escarda, C. Eiras-Franco, B. Cancela, B. Guijarro-Berdiñas, and A. Alonso-Betanzos, “Performance and sustainability of BERT derivatives in dyadic data,” Expert Syst. Appl., vol. 262, no. October 2023, p. 125647, 2025, doi: 10.1016/j.eswa.2024.125647.

[13] D. Wu, J. Yang, and K. Wang, “Exploring the reversal curse and other deductive logical reasoning in BERT and GPT-based large language models,” Patterns, p. 101030, 2024, doi: 10.1016/j.patter.2024.101030.

[14] M. A. Hamza et al., “Computational Linguistics with Optimal Deep Belief Network Based Irony Detection in Social Media,” 2023, doi: 10.32604/cmc.2023.035237.

[15] A. Loganthan, “DeepSecure watermarking: Hybrid Attention on Attention Net and Deep Belief Net based robust video authentication using Quaternion Curvelet Transform domain,” Egypt. Informatics J., vol. 27, p. 100514, 2024, doi: 10.1016/j.eij.2024.100514.

[16] B. K. Sethi, D. Singh, S. K. Rout, and S. K. Panda, “Long Short-Term Memory-Deep Belief Network-Based Gene Expression Data Analysis for Prostate Cancer Detection and Classification,” IEEE Access, vol. 12, 2024, doi: 10.1109/ACCESS.2023.3346925.

[17] N. Mahendran and D. R. Vincent P M, “Deep belief network-based approach for detecting Alzheimer’s disease using the multi-omics data,” Comput. Struct. Biotechnol. J., vol. 21, pp. 1651–1660, 2023, doi: 10.1016/j.csbj.2023.02.021.

[18] A. K. Shukla and P. K. Muhuri, “Deep belief network with fuzzy parameters and its membership function sensitivity analysis,” Neurocomputing, vol. 614, no. November 2022, p. 128716, 2025, doi: 10.1016/j.neucom.2024.128716.

[19] X. Li, W. Zhang, and Q. Ding, “Intelligent rotating machinery fault diagnosis based on deep learning using data augmentation,” J. Intell. Manuf., vol. 31, no. 2, pp. 433–452, 2020, doi: 10.1007/s10845-018-1456-1.

[20] Sugiyarto, J. Eliyanto, N. Irsalinda, and M. Fitrianawati, “Fuzzy sentiment analysis using convolutional neural network,” AIP Conf. Proc., vol. 2329, no. February, 2021, doi: 10.1063/5.0042144.

[21] S. Ünalan, O. Günay, I. Akkurt, K. Gunoglu, and H. O. Tekin, “A comparative study on breast cancer classification with stratified shuffle split and K-fold cross validation via ensembled machine learning,” J. Radiat. Res. Appl. Sci., vol. 17, no. 4, p. 101080, 2024, doi: 10.1016/j.jrras.2024.101080.

[22] A. Kousar et al., “MLHS-CGCapNet: A Lightweight Model for Multilingual Hate Speech Detection,” IEEE Access, vol. 12, no. August, pp. 106631–106644, 2024, doi: 10.1109/ACCESS.2024.3434664.

[23] M. Akram et al., “Uncertainty-aware diabetic retinopathy detection using deep learning enhanced by Bayesian approaches,” Sci. Rep., vol. 15, no. 1, p. 1342, 2025, doi: 10.1038/s41598-024-84478-x.

[24] A. Hossein, S. Das, J. Liu, and M. A. Rahman, “Using Bidirectional Encoder Representations from Transformers (BERT) to classify traffic crash severity types,” Nat. Lang. Process. J., vol. 3, p. 100007, 2023, doi: 10.1016/j.nlp.2023.100007.

[25] A. Subakti, H. Murfi, and N. Hariadi, “The performance of BERT as data representation of text clustering,” J. Big Data, 2022, doi: 10.1186/s40537-022-00564-9.

[26] Y. Kim et al., “A pre-trained BERT for Korean medical natural language processing,” Sci. Rep., pp. 1–10, 2022, doi: 10.1038/s41598-022-17806-8.

[27] H. Kang, S. Goo, H. Lee, J. Chae, and H. Yun, “Fine-tuning of BERT Model to Accurately Predict Drug–Target Interactions,” pp. 1–15, 2022, doi: 10.3390/pharmaceutics14081710.

[28] D. Argade, V. Khairnar, D. Vora, S. Patil, K. Kotecha, and S. Alfarhood, “Multimodal Abstractive Summarization using bidirectional encoder representations from transformers with attention mechanism,” Heliyon, vol. 10, no. 4, p. e26162, 2024, doi: 10.1016/j.heliyon.2024.e26162.

[29] S. Kula, R. Kozik, and M. Choraś, “Implementation of the BERT-derived architectures to tackle disinformation challenges,” Neural Comput. Appl., vol. 34, no. 23, pp. 20449–20461, 2022, doi: 10.1007/s00521-021-06276-0.

[30] N. Q. K. Le, Q. T. Ho, T. T. D. Nguyen, and Y. Y. Ou, “A transformer architecture based on BERT and 2D convolutional neural network to identify DNA enhancers from sequence information,” Brief. Bioinform., vol. 22, no. 5, pp. 1–7, 2021, doi: 10.1093/bib/bbab005.

[31] M. T R, V. K. V, D. K. V, O. Geman, M. Margala, and M. Guduri, “The stratified K-folds cross-validation and class-balancing methods with high-performance ensemble classifiers for breast cancer classification,” Healthc. Anal., vol. 4, no. July, p. 100247, 2023, doi: 10.1016/j.health.2023.100247.

[32] N. F. Idris, M. A. Ismail, M. I. M. Jaya, A. O. Ibrahim, A. W. Abulfaraj, and F. Binzagr, “Stacking with Recursive Feature Elimination-Isolation Forest for classification of diabetes mellitus,” PLoS One, vol. 19, no. 5 May, pp. 1–18, 2024, doi: 10.1371/journal.pone.0302595.

[33] S. Surono, M. Yahya Firza Afitian, A. Setyawan, D. K. E. Arofah, and A. Thobirin, “Comparison of CNN Classification Model using Machine Learning with Bayesian Optimizer,” HighTech Innov. J., vol. 4, no. 3, pp. 531–542, 2023, doi: 10.28991/HIJ-2023-04-03-05.

[34] D. Xing, W. Chen, C. Tatsuoka, and X. Lu, “Ba-ZebraConf: A Three-Dimension Bayesian Framework for Efficient System Troubleshooting,” pp. 1–12, 2024, doi: 10.48550/arXiv.2412.11073.

[35] G. T. Ribeiro, J. G. Sauer, N. Fraccanabbia, V. C. Mariani, and L. dos Santos Coelho, “Bayesian optimized echo state network applied to short-term load forecasting,” Energies, vol. 13, no. 9, 2020, doi: 10.3390/en13092390.

[36] G. Coulson and D. Ferrari, Context-Aware Systems and Applications, and Nature of Computation and Communication. 2020.

[37] A. K. Shukla and P. K. Muhuri, “A novel deep belief network architecture with interval type-2 fuzzy set based uncertain parameters towards enhanced learning,” Fuzzy Sets Syst., vol. 477, no. October 2023, p. 108744, 2024, doi: 10.1016/j.fss.2023.108744.

[38] J. Tang, J. Wu, and J. Qing, “A feature learning method for rotating machinery fault diagnosis via mixed pooling deep belief network and wavelet transform,” Results Phys., vol. 39, no. May, p. 105781, 2022, doi: 10.1016/j.rinp.2022.105781.

[39] A. Punitha and V. Geetha, “Automated climate prediction using pelican optimization based hybrid deep belief network for Smart Agriculture,” Meas. Sensors, vol. 27, no. February, p. 100714, 2023, doi: 10.1016/j.measen.2023.100714.

[40] R. Savitha, A. Ambikapathi, and K. Rajaraman, “Online RBM: Growing Restricted Boltzmann Machine on the fly for unsupervised representation,” Appl. Soft Comput. J., vol. 92, p. 106278, 2020, doi: 10.1016/j.asoc.2020.106278.

[41] J. Pei et al., “Two-Phase Virtual Network Function Selection and Chaining Algorithm Based on Deep Learning in SDN/NFV-Enabled Networks,” vol. 38, no. 6, pp. 1102–1117, 2020, doi: 10.1109/JSAC.2020.2986592.

[42] A. Albasir, Q. Hu, K. Naik, and N. Naik, “Unsupervised detection of security threats in cyber-physical system and IoT devices based on power fingerprints and RBM autoencoders,” pp. 1–25, 2021, doi: 10.20517/jsss.2020.19.

Published

2025-07-12

How to Cite

[1]
B. A. Suris, A. Thobirin, S. Surono, and M. N. A. Abdulnazar, “Enhancing Multi-Class Classification of Non-Functional Requirements Using a BERT-DBN Hybrid Model”, INTENSIF: J. Ilm. Penelit. dan Penerap. Tek. Sist. Inf., vol. 9, no. 2, pp. 195–210, Jul. 2025, doi: 10.29407/intensif.v9i2.24637.