Comparative Analysis of Transformer-Based Method In A Question Answering System for Campus Orientation Guides

Keywords: Question Answering, NLP, Transformer, IndoBERT, RoBERTa

Abstract

The campus introduction process is the stage in which new students acquire information about the campus through a series of activities and interactions with current students. However, the delivery of campus introduction information is still limited to conventional methods, such as guidebooks. This limitation can leave students with an incomplete understanding of the information they need during their academic period. One solution to this problem is to implement a knowledge-based deep learning system. This research aims to develop a Question Answering System (QAS) as a campus orientation guide by comparing two transformer architectures, RoBERTa and IndoBERT. The dataset is processed in SQuAD format in the Indonesian language and consists of 5,046 annotated entries. The results show that IndoBERT outperforms RoBERTa, achieving EM and F1-scores of 81.17 and 91.32, respectively, compared to RoBERTa's 79.53 and 90.18.
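The EM and F1 figures reported above follow the standard SQuAD evaluation: Exact Match checks whether the normalized predicted span equals the normalized gold answer, while F1 measures token overlap between the two. A minimal sketch of these two metrics (the normalization here lowercases and strips punctuation; the English-article-removal step of the original SQuAD script is omitted since the dataset is in Indonesian, and the example strings are illustrative, not taken from the paper's dataset):

```python
import string
from collections import Counter

def normalize(text: str) -> str:
    # Lowercase, drop punctuation, and collapse whitespace (SQuAD-style).
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over
    # the multiset of shared tokens.
    pred_tokens = normalize(pred).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A partially correct predicted span scores 0 on EM but partial credit on F1.
print(exact_match("Universitas Bengkulu", "universitas bengkulu"))          # 1.0
print(f1_score("kampus Universitas Bengkulu", "Universitas Bengkulu"))      # 0.8
```

In corpus-level evaluation, both metrics are averaged over all question-answer pairs (taking the maximum score over gold answers when a question has several), which yields percentage-scale values such as the 81.17 EM and 91.32 F1 reported for IndoBERT.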


Author Biographies

Fedryanto Dartiko, Universitas Bengkulu

Department of Informatics, Universitas Bengkulu

Mochammad Yusa, Universitas Bengkulu

Department of Informatics, Universitas Bengkulu

Aan Erlansari, Universitas Bengkulu

Department of Information Systems, Universitas Bengkulu

Shaikh Ameer Basha, Bearys Institute of Technology

Department of Mathematics, Bearys Institute of Technology, Mangalore 574199, India



Published
2024-02-10
How to Cite
[1]
F. Dartiko, M. Yusa, A. Erlansari, and S. A. Basha, “Comparative Analysis of Transformer-Based Method In A Question Answering System for Campus Orientation Guides”, intensif, vol. 8, no. 1, pp. 126-143, Feb. 2024.