CLASSIFICATION OF INDONESIAN HOAX NEWS USING FINE-TUNED INDOBERT WITH A FOCAL LOSS APPROACH ON IMBALANCED DATA
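As background for the approach named in the title, the sketch below shows the standard binary focal loss of Lin et al. (2017), which down-weights easy, well-classified examples so that training on an imbalanced hoax/non-hoax corpus focuses on hard cases. The `alpha` and `gamma` defaults are the commonly used illustrative values, not parameters reported by this paper.

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p     : predicted probability of the positive (hoax) class
    y     : true label, 0 or 1
    alpha : class-balancing weight for the positive class (illustrative default)
    gamma : focusing parameter; gamma = 0 recovers weighted cross-entropy
    """
    # p_t is the model's probability for the *true* class
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma shrinks the loss of confident, correct predictions
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With `gamma = 0` and `alpha = 1` the expression reduces to plain cross-entropy; increasing `gamma` suppresses the contribution of easy examples, which is why the loss is attractive when one class (e.g. hoax) is rare.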
References
B. Juarto and Yulianto, “Indonesian News Classification Using IndoBert,” Int. J. Intell. Syst. Appl. Eng., vol. 11, no. 2, pp. 454–460, 2023.
S. Vosoughi, D. Roy, and S. Aral, “The spread of true and false news online,” Science, vol. 359, no. 6380, pp. 1146–1151, 2018, doi: 10.1126/science.aap9559.
C. J. L. Tobing, I. G. N. L. Wijayakusuma, and L. P. I. Harini, “Detection of Political Hoax News Using Fine-Tuning IndoBERT,” vol. 9, no. 2, pp. 354–360, 2025.
M. Y. Ridho and E. Yulianti, “From Text to Truth: Leveraging IndoBERT and Machine Learning Models for Hoax Detection in Indonesian News,” vol. 10, no. 3, pp. 544–555, 2024, doi: 10.26555/jiteki.v10i3.29450.
M. A. Fathin, Y. Sibaroni, and S. S. Prasetyowati, “Handling Imbalance Dataset on Hoax Indonesian Political News Classification using IndoBERT and Random Sampling,” J. Media Inform. Budidarma, vol. 8, no. 1, p. 352, 2024, doi: 10.30865/mib.v8i1.7099.
R. Yang et al., “CNN-LSTM deep learning architecture for computer vision-based modal frequency detection,” Mech. Syst. Signal Process., vol. 144, p. 106885, 2020, doi: 10.1016/j.ymssp.2020.106885.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proc. NAACL-HLT 2019, pp. 4171–4186, 2019. [Online]. Available: https://aclanthology.org/N19-1423.pdf
P. K. Verma, P. Agrawal, V. Madaan, and R. Prodan, “MCred: multi-modal message credibility for fake news detection using BERT and CNN,” J. Ambient Intell. Humaniz. Comput., vol. 14, no. 8, pp. 10617–10629, 2023, doi: 10.1007/s12652-022-04338-2.
S. Raza, D. Paulen-Patterson, and C. Ding, “Fake news detection: comparative evaluation of BERT-like models and large language models with generative AI-annotated data,” Knowl. Inf. Syst., pp. 1–30, 2025, doi: 10.1007/s10115-024-02321-1.
A. R. Hanum et al., “Analisis Kinerja Algoritma Klasifikasi Teks BERT Dalam Mendeteksi Berita Hoaks,” vol. 11, no. 3, pp. 537–546, 2024, doi: 10.25126/jtiik938093.
L. Geni, E. Yulianti, and D. I. Sensuse, “Sentiment Analysis of Tweets Before the 2024 Elections in Indonesia Using IndoBERT Language Models,” J. Ilm. Tek. Elektro Komput. dan Inform., vol. 9, no. 3, pp. 746–757, 2023, doi: 10.26555/jiteki.v9i3.26490.
F. Koto, A. Rahimi, J. H. Lau, and T. Baldwin, “IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP,” COLING 2020 - 28th Int. Conf. Comput. Linguist. Proc. Conf., pp. 757–770, 2020, doi: 10.18653/v1/2020.coling-main.66.
D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–15, 2015.
A. Rogers, O. Kovaleva, and A. Rumshisky, “A Primer in BERTology: What We Know About How BERT Works,” Trans. Assoc. Comput. Linguist., vol. 8, pp. 842–866, 2020, doi: 10.1162/tacl_a_00349.