Virtual Assistant for Thesis Technical Guide Using Artificial Neural Network

Mohammad Ovi Sanjaya, Saiful Bukhori, Muhammad `Ariful Furqon

Abstract


This study focuses on finding the best practice for implementing an Artificial Neural Network (ANN) in an information system that provides technical instructions for students' theses. The machine learning model uses a sequential architecture, meaning the ANN has only one input layer, one hidden (dense) layer, and one output layer. The Stochastic Gradient Descent (SGD) method was applied during the training process. The results of this study are a chatbot application and a model evaluation using a confusion matrix, yielding 99.49% accuracy and a 91% F1-score.
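
The abstract does not give implementation details, so the following is a minimal sketch of such a setup, assuming a Keras/TensorFlow stack with bag-of-words intent classification; the vocabulary size, hidden-layer width, number of intent classes, learning rate, and the random dummy data are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the sequential ANN described in the abstract: one input
# layer, one hidden (dense) layer and one output layer, trained with SGD and
# evaluated with a confusion matrix, accuracy and F1-score.
# Vocabulary size, hidden units, class count, hyperparameters and the random
# dummy data below are illustrative assumptions, not values from the paper.
import numpy as np
from tensorflow import keras
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score

VOCAB_SIZE = 500    # assumed bag-of-words feature length
NUM_INTENTS = 20    # assumed number of intent classes

# Dummy data standing in for bag-of-words vectors and one-hot intent labels.
rng = np.random.default_rng(0)
X = (rng.random((400, VOCAB_SIZE)) > 0.9).astype("float32")
y = keras.utils.to_categorical(rng.integers(0, NUM_INTENTS, 400), NUM_INTENTS)

model = keras.Sequential([
    keras.layers.Input(shape=(VOCAB_SIZE,)),               # input layer
    keras.layers.Dense(128, activation="relu"),             # single hidden/dense layer
    keras.layers.Dense(NUM_INTENTS, activation="softmax"),  # output layer
])

# Stochastic Gradient Descent optimizer, as stated in the abstract.
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=8, verbose=0)

# Confusion-matrix-based evaluation: accuracy and (macro-averaged) F1-score.
y_pred = np.argmax(model.predict(X, verbose=0), axis=1)
y_true = np.argmax(y, axis=1)
print(confusion_matrix(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred, average="macro"))
```

With real intent data in place of the dummy arrays, the confusion-matrix step at the end produces the kind of accuracy and F1-score reporting summarized in the abstract.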

Keywords


Chatbot; Artificial Neural Network; Confusion Matrix; Machine Learning Model; Stochastic Gradient Descent


DOI: http://dx.doi.org/10.24014/ijaidm.v6i2.23473
