DEVELOPMENT OF A CAPSULEGAN MODEL FOR SINGLE-IMAGE RAIN REMOVAL

Haryanto Hidayat
Munawir Munawir
Muhammad Taufik Dwi Putra
Arief Suryadi Satyawan


DOI: https://doi.org/10.29100/jipi.v9i2.5534

Abstract


Rainy weather degrades image quality and is a recurring problem in Computer Vision (CV), because important information needed by CV algorithms is lost. Researchers have proposed a wide range of solutions, from traditional filters to Deep Learning methods. Deep Learning models such as the Deep Convolutional Generative Adversarial Network (DCGAN) are attractive because they produce high-quality images, but they still have a shortcoming: spatial information between rain components is lost, so the network cannot identify where the rain streaks are located and residual streaks remain in the output. The Capsule Network (CapsNet) addresses this problem; by modeling the relationship between partial details and the global object, it preserves spatial information such as the position and rotation of objects, so incorporating CapsNet into the architecture has a significant effect. Combining the two methods yields a de-raining model that produces sharper images while removing rain streaks effectively. We integrate CapsNet into the Discriminator architecture to obtain better classification. Comparisons with other models show that combining the two architectures produces better images than most other Deep Learning models. Nevertheless, a limitation remains: the generated images still exhibit blur and residual rain caused by unstable training.
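As a rough illustration of the idea described in the abstract, the sketch below shows what a capsule-based discriminator for a de-raining GAN could look like in PyTorch. It is not the authors' exact architecture: the 3x64x64 patch size, layer widths, the number of primary and class capsules, and the three routing iterations are all illustrative assumptions. It only demonstrates the general mechanism of grouping convolutional features into primary capsules and routing them by agreement (Sabour et al., 2017) into class capsules whose lengths act as the real/fake score.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    def squash(s, dim=-1, eps=1e-8):
        # Squashing non-linearity: preserves orientation, maps length into (0, 1).
        sq = (s ** 2).sum(dim=dim, keepdim=True)
        return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)


    class CapsuleDiscriminator(nn.Module):
        """Hypothetical capsule-based discriminator sketch for a de-raining GAN.

        Conv features -> primary capsules -> two 16-D class capsules
        ("fake", "real") via dynamic routing; capsule lengths act as scores.
        Assumes 3x64x64 input patches.
        """

        def __init__(self, in_ch=3, n_primary_maps=32, primary_dim=8,
                     out_caps=2, out_dim=16, routing_iters=3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, 256, kernel_size=9, stride=2)   # -> 256 x 28 x 28
            self.primary = nn.Conv2d(256, n_primary_maps * primary_dim,
                                     kernel_size=9, stride=2)            # -> (32*8) x 10 x 10
            n_primary = n_primary_maps * 10 * 10                         # 3200 primary capsules
            # Transformation matrices from 8-D primary capsules to 16-D class capsules.
            self.W = nn.Parameter(0.01 * torch.randn(1, n_primary, out_caps,
                                                     out_dim, primary_dim))
            self.primary_dim = primary_dim
            self.routing_iters = routing_iters

        def forward(self, x):
            h = F.relu(self.conv(x))
            u = self.primary(h)                                          # (B, 256, 10, 10)
            B = u.size(0)
            u = u.view(B, -1, self.primary_dim, u.size(2), u.size(3))
            u = u.permute(0, 1, 3, 4, 2).reshape(B, -1, self.primary_dim)
            u = squash(u)                                                # (B, 3200, 8)
            # Each primary capsule predicts each class capsule.
            u_hat = torch.matmul(self.W, u[:, :, None, :, None]).squeeze(-1)  # (B, 3200, 2, 16)
            # Dynamic routing-by-agreement.
            b = torch.zeros(B, u_hat.size(1), u_hat.size(2), 1, device=x.device)
            for _ in range(self.routing_iters):
                c = torch.softmax(b, dim=2)                              # coupling coefficients
                v = squash((c * u_hat).sum(dim=1))                       # (B, 2, 16)
                b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1, keepdim=True)
            return v.norm(dim=-1)                                        # (B, 2): ["fake", "real"] lengths

For example, CapsuleDiscriminator()(torch.randn(4, 3, 64, 64)) would return a (4, 2) tensor of capsule lengths. During GAN training, a margin loss over the two capsule lengths (or a binary cross-entropy on the "real" capsule length) would take the place of the usual sigmoid discriminator loss; the generator side is left out here, since the abstract does not specify it.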

