
A Comparison of Pre-Trained CNNs: Classification for Hijaiyah Letter Sign Language Recognition

*Yulrio Brianorman  -  Fakultas Teknik dan Ilmu Komputer, Universitas Muhammadiyah Pontianak, Jl. Ahmad Yani No. 111, Pontianak, Indonesia
Rinaldi Munir  -  Sekolah Teknik Elektro dan Informatika, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung, Indonesia
Open Access Copyright (c) 2023 JSINBIS (Jurnal Sistem Informasi Bisnis)

Abstract

The number of documented deaf people continues to increase. The deaf communicate with each other using sign language. A problem arises when Muslims with hearing impairment or deafness need to recite the Al-Quran: recitation is normally performed with the voice, and no equivalent means of reciting exists for the deaf. Learning hijaiyah letters through finger gestures is therefore considered important to develop. In this study, we train a model that recognizes hijaiyah letters from images and then use that model for real-time recognition. Four pre-trained CNN models are compared: MnetV2, VGG16, ResNet50, and Xception. During training, MnetV2, VGG16, and Xception reach the accuracy limit of 99.85% in 2, 3, and 11 s, respectively, while ResNet50 fails to reach the limit even after 100 s of training, stopping at 82.12% accuracy. In testing, MnetV2, VGG16, and Xception achieve 100% precision, recall, f1-score, and accuracy; ResNet50 reaches 81.55%, 86.04%, 82.04%, and 82.58%, respectively. Deploying the trained MnetV2 model shows good performance for recognizing finger shapes in real time.
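The abstract evaluates each model with precision, recall, f1-score, and accuracy. As a minimal illustration of how these four figures relate, the sketch below computes them from binary confusion-matrix counts; the counts used are hypothetical, not taken from the study.

```python
# Hypothetical sketch: computing the four evaluation metrics named in the
# abstract from confusion-matrix counts (binary case for simplicity).
# The counts tp/fp/fn/tn below are invented for demonstration only.

def classification_metrics(tp, fp, fn, tn):
    """Return (precision, recall, f1, accuracy) from confusion-matrix counts."""
    precision = tp / (tp + fp)          # of predicted positives, how many were right
    recall = tp / (tp + fn)             # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    accuracy = (tp + tn) / (tp + fp + fn + tn)          # overall fraction correct
    return precision, recall, f1, accuracy

# Example with made-up counts: 90 true positives, 5 false positives,
# 10 false negatives, 95 true negatives.
p, r, f1, acc = classification_metrics(tp=90, fp=5, fn=10, tn=95)
```

A model that, like three of the pre-trained networks in the study, classifies every test sample correctly has fp = fn = 0, which drives all four metrics to 100%. In the multi-class setting of the paper, per-class scores computed this way are typically averaged across the classes.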

Keywords: Sign Language; Hijaiyah Letters; Pre-Trained Model; CNN




Last update: 2024-04-28 01:53:43
