Muhammad Faris
Endro Ariyanto
Yogi Anggun Saloko Yudo



A known limitation of Arduino microcontroller-based fire detection systems that rely on fire and smoke sensors is detection distance: a previous study reported a maximum detection distance of only 140 cm for two burning sheets of paper. If the fire source is farther away, such a system cannot detect the fire early, which is problematic in larger rooms. Based on this problem, a system is needed that can detect fires in large rooms, and image classification is one method that can be used. MobileNetV2 is a lightweight model for classifying or detecting objects in images in real time. In this study, a real-time model was built on the TensorFlow and Keras libraries. The system uses a laptop with an Nvidia GeForce MX130 GPU, a 48 MP smartphone camera, and the OpenCV library for the image classification process, with fire notifications sent to Telegram via the Requests library. In tests on a burning 80/90 motorcycle tire, the optimal detection distance was 7 meters with an accuracy of 99.91%; for two burning sheets of paper, the optimal detection distance was 3 meters with an accuracy of 99.75%. The average response time varied from 74.5 ms to 117.1 ms, depending on the internet connection.
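The pipeline the abstract describes (OpenCV frame capture, MobileNetV2 classification, Telegram alert via the Requests library) could be sketched as below. This is a minimal illustration, not the authors' code: the model filename, alert threshold, bot token, and chat ID are placeholder assumptions; only the Telegram `sendMessage` endpoint is a real, documented API.

```python
import requests

# Real Telegram Bot API endpoint; token and chat ID are placeholders.
TELEGRAM_SEND_URL = "https://api.telegram.org/bot{token}/sendMessage"


def should_alert(fire_prob: float, threshold: float = 0.99) -> bool:
    """Return True when the classifier's fire probability crosses the alert
    threshold. The 0.99 value is an illustrative assumption; the paper does
    not specify the threshold it used."""
    return fire_prob >= threshold


def send_fire_alert(token: str, chat_id: str, text: str) -> int:
    """POST a fire notification to a Telegram chat and return the HTTP status."""
    resp = requests.post(
        TELEGRAM_SEND_URL.format(token=token),
        data={"chat_id": chat_id, "text": text},
        timeout=5,
    )
    return resp.status_code


# Per-frame loop (sketch): capture a frame with OpenCV, classify it with a
# trained Keras MobileNetV2 model, and alert when fire is detected. Guarded
# so the heavy TensorFlow/OpenCV imports run only when executed as a script.
if __name__ == "__main__":
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("fire_mobilenetv2.h5")  # hypothetical trained model file
    cap = cv2.VideoCapture(0)                  # phone camera exposed as a webcam
    ok, frame = cap.read()
    if ok:
        # MobileNetV2 expects 224x224 inputs scaled to [-1, 1]
        x = cv2.resize(frame, (224, 224)).astype(np.float32) / 127.5 - 1.0
        fire_prob = float(model.predict(x[np.newaxis])[0][0])
        if should_alert(fire_prob):
            send_fire_alert("<BOT_TOKEN>", "<CHAT_ID>",
                            f"Fire detected (p={fire_prob:.4f})")
    cap.release()
```

The network round trip to Telegram is what makes the reported response time depend on the internet connection, which is consistent with the 74.5-117.1 ms range in the abstract.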


classification; image; detection; fire; mobilenetv2





Unit Pengelola Statistik, “Kejadian Kebakaran di DKI Jakarta Tahun 2020,” 2021. [online] Available: [Accessed 28 November 2021].

Simbolon, C.G., Hanuranto, A.T. and Novianti, A., 2020. Desain dan Implementasi Prototipe Pendeteksi Dini Kebakaran Gedung Menggunakan Algoritma Fuzzy Logic Berbasis Internet of Things (IoT). eProceedings of Engineering, 7(2), p.3532.

Madhar, M., 2018. Rancang Bangun Sistem Monitoring Deteksi Dini Kebakaran Dengan Fitur GPS Berbasis Website. JATI (Jurnal Mahasiswa Teknik Informatika), 2(1), pp.367-372.

Edel, G. and Kapustin, V., 2022, July. Exploring of the MobileNet V1 and MobileNet V2 models on NVIDIA Jetson Nano microcomputer. In Journal of Physics: Conference Series (Vol. 2291, No. 1, p. 012008). IOP Publishing.

Hussain, D., Ismail, M., Hussain, I., Alroobaea, R., Hussain, S. and Ullah, S.S., 2022. Face Mask Detection Using Deep Convolutional Neural Network and MobileNetV2-Based Transfer Learning. Wireless Communications and Mobile Computing, 2022.

Dai, W., Dai, Y., Hirota, K. and Jia, Z., 2020. A Flower Classification Approach with MobileNetV2 and Transfer Learning. In Proceedings of the 9th International Symposium on Computational Intelligence and Industrial Applications (ISCIIA2020), Beijing, China (Vol. 31).

Indraswari, R., Rokhana, R. and Herulambang, W., 2022. Melanoma image classification based on MobileNetV2 network. Procedia Computer Science, 197, pp.198-207.

Akay, M., Du, Y., Sershen, C.L., Wu, M., Chen, T.Y., Assassi, S., Mohan, C. and Akay, Y.M., 2021. Deep learning classification of systemic sclerosis skin using the MobileNetV2 model. IEEE Open Journal of Engineering in Medicine and Biology, 2, pp.104-110.

Shahi, T.B., Sitaula, C., Neupane, A. and Guo, W., 2022. Fruit classification using attention-based MobileNetV2 for industrial applications. PLoS ONE, 17(2), p.e0264586.

Rahman, S., Titania, A., Sembiring, A., Khairani, M. and Lubis, Y.F.A., 2022. Analisis Klasifikasi Mobil Pada Gardu Tol Otomatis (GTO) Menggunakan Convolutional Neural Network (CNN). Explorer, 2(2), pp.54-60.

Nufus, N., Ariffin, D.M., Satyawan, A.S., Nugraha, R.A.S., Asysyakuur, M.I., Marlina, N.N.A., Parangin, C.H. and Ema, E., 2021, December. Sistem Pendeteksi Pejalan Kaki Di Lingkungan Terbatas Berbasis SSD MobileNet V2 Dengan Menggunakan Gambar 360° Ternormalisasi. In Prosiding Seminar Nasional Sains Teknologi dan Inovasi Indonesia (SENASTINDO) (Vol. 3, pp. 123-134).

Sakti, W.W., Abhiyaksa, M. and Arif, R., 2022. FOD Detection Camera Pada Object Landasan Bandara. SKYHAWK: Jurnal Aviasi Indonesia, 2(1), pp.11-14.

Dewi, I.A., 2019. Deteksi Manusia menggunakan Pre-Trained MobileNet untuk Segmentasi Citra Menentukan Bentuk Tubuh. MIND (Multimedia Artificial Intelligent Networking Database) Journal, 4(1), pp.65-79.

Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M. and Adam, H., 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.

Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. and Chen, L.C., 2018. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).

Mihigo, I.N., Zennaro, M., Uwitonze, A., Rwigema, J. and Rovai, M., 2022. On-Device IoT-Based Predictive Maintenance Analytics Model: Comparing TinyLSTM and TinyModel from Edge Impulse. Sensors, 22(14), p.5174.

Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M. and Ghemawat, S., 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.

Moolayil, J., 2019. Learn Keras for Deep Neural Networks (pp. 33-35). Apress.

Culjak, I., Abram, D., Pribanic, T., Dzapo, H. and Cifrek, M., 2012, May. A brief introduction to OpenCV. In 2012 proceedings of the 35th international convention MIPRO (pp. 1725-1730). IEEE.

Młodzianowski, P., 2022. Weather Classification with Transfer Learning-InceptionV3, MobileNetV2 and ResNet50. In Conference on Multimedia, Interaction, Design and Innovation (pp. 3-11). Springer, Cham.

Kaya, A., Keceli, A.S., Catal, C., Yalic, H.Y., Temucin, H. and Tekinerdogan, B., 2019. Analysis of transfer learning for deep neural network based plant classification models. Computers and Electronics in Agriculture, 158, pp.20-29.

A. Saied, “FIRE Dataset,” 2020. [online] Available: [Accessed 14 January 2023].

C. Ganteng, “Fire Detection Dataset,” 2020. [online] Available: [Accessed 14 January 2023].