An Ensemble Method Builds a Predictive Model by Integrating Several Models for Accurate Answer Prediction on a Chatbot


Merry Anggraeni
Hillman Akhyar Damanik

Abstract

Chatbots are one technological advance that has had a clearly positive impact: a chat service, already widely used by tech-savvy people, in which the replies come from a robot or virtual character rather than a person. A chatbot is expected to understand the context of the question a user asks and then reply with an answer appropriate to that context. Because each context can be expressed in many different ways and human language is highly flexible, the answers a chatbot predicts are often inaccurate; this can stem from an unsuitable choice of algorithm for classifying the context or from insufficient training data. To address this, this study focuses on strengthening chatbot answer prediction with a machine-learning ensemble of five heterogeneous classification techniques, combining base classifiers with a meta-algorithm and using majority (hard) voting as the ensemble rule. Classification is the process of finding a model or pattern that describes and differentiates the classes in a dataset, so that the model can be used to predict the class labels of unseen objects. The ensemble achieved 86% accuracy on a dataset with 6 classes; the macro-averaged precision and recall were each 92% and the macro-averaged F1-score was 89%, while the weighted-averaged precision was 93% and the weighted-averaged recall and F1-score were each 86%.
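
Below is a minimal sketch, in Python with scikit-learn, of the kind of hard-voting (majority vote) ensemble the abstract describes. The five base classifiers (naive Bayes, logistic regression, linear SVM, decision tree, random forest), the TF-IDF features, and the toy intent data are illustrative assumptions only; the abstract states just that five heterogeneous classifiers are combined by majority vote over a 6-class dataset, and classification_report is used here to show how accuracy and the macro- and weighted-averaged precision/recall/F1 figures are obtained.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical intent-labelled utterances standing in for the 6-class chatbot dataset.
data = [
    ("what time do you open", "hours"),
    ("are you open on sunday", "hours"),
    ("i forgot my password", "account"),
    ("how do i reset my password", "account"),
    ("where is my order", "shipping"),
    ("track my package", "shipping"),
    ("i want a refund", "refund"),
    ("how do i return an item", "refund"),
    ("what payment methods do you accept", "payment"),
    ("can i pay on delivery", "payment"),
    ("do you ship abroad", "international"),
    ("is international shipping available", "international"),
]
X_train = [text for text, label in data]
y_train = [label for text, label in data]

# Five heterogeneous base classifiers combined by a hard-voting meta-estimator:
# each model predicts a context label and the majority label is returned.
ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("nb", MultinomialNB()),
            ("lr", LogisticRegression(max_iter=1000)),
            ("svm", LinearSVC()),
            ("dt", DecisionTreeClassifier()),
            ("rf", RandomForestClassifier()),
        ],
        voting="hard",  # majority vote over the five predicted labels
    ),
)
ensemble.fit(X_train, y_train)

# Prints per-class precision/recall/F1 plus accuracy, macro avg and weighted avg;
# in practice these would be computed on a held-out test split, not the training data.
print(classification_report(y_train, ensemble.predict(X_train)))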

Article Details

Section
Informatics
