Fine-Tuning of Language Models for Automation of Humor Detection
Abstract
In this paper, we propose a novel approach for humor identification using ALBERT, together with automated selection of the best-fit loss function and optimizer. We use two configurations of ALBERT, ALBERT-base and ALBERT-large, and compare their results under different hyper-parameters to obtain the best performance on the binary classification problem of distinguishing humorous texts from non-humorous ones. We also determine the optimizer and loss function that achieve state-of-the-art performance. The proposed system is evaluated using accuracy, precision, recall, F1-score, and training time. Among the configurations tested, the Adafactor optimizer with the ALBERT-base model showed the most promising results, reaching 99% accuracy. The paper also compares the proposed approach with other language models such as BERT and RoBERTa, observing a reduction of roughly one-third in the time taken to train the model on 160K sentences.
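The abstract does not include training code; the sketch below is a minimal illustration of the setup it describes (ALBERT-base fine-tuned for binary humor classification with the Adafactor optimizer), assuming the HuggingFace transformers library. The checkpoint name, learning rate, and example sentences are illustrative assumptions, not the authors' actual configuration.

import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification
from transformers.optimization import Adafactor

# Load a pretrained ALBERT-base checkpoint with a 2-way classification head
# (label 1 = humorous, label 0 = not humorous). Requires the sentencepiece package.
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)

# Toy batch; the paper's 160K-sentence dataset would be streamed in mini-batches.
texts = ["Why did the chicken cross the road? To get to the other side.",
         "The meeting has been moved to 3 pm on Tuesday."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Adafactor with a fixed learning rate (relative-step updates disabled);
# the learning rate is an assumed placeholder value.
optimizer = Adafactor(model.parameters(), lr=1e-4,
                      relative_step=False, scale_parameter=False, warmup_init=False)

# One training step: the model computes cross-entropy loss internally
# when labels are supplied.
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()

In a full run this step would loop over the training set for several epochs, with evaluation on a held-out split using accuracy, precision, recall, and F1-score as in the paper.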
Article Details
Upon receipt of accepted manuscripts, authors will be invited to complete a copyright license to publish the paper. At a minimum, the corresponding author must submit the signed copyright form for publication. It is a condition of publication that authors grant an exclusive license to the INFOCOMP Journal of Computer Science. This ensures that requests from third parties to reproduce articles are handled efficiently and consistently, and also allows the article to be disseminated as widely as possible. In assigning the copyright license, authors may still use their own material in other publications, provided that the INFOCOMP Journal of Computer Science is acknowledged as the original place of publication.