BibTeX Citation Data:
@article{TEKNIK46439,
  author   = {Ichsan Arsyi Putra and Oky Dwi Nurhayati and Dania Eridani},
  title    = {Human Action Recognition (HAR) Classification Using MediaPipe and Long Short-Term Memory (LSTM)},
  journal  = {TEKNIK},
  volume   = {43},
  number   = {2},
  year     = {2022},
  keywords = {Classification; Deep Learning; Human Action Recognition; MediaPipe; Long Short-Term Memory},
  abstract = {Human Action Recognition (HAR) is an important research topic in the Machine Learning and Computer Vision domains. One proposed method combines the MediaPipe library with Long Short-Term Memory (LSTM) networks, using testing accuracy and training duration as indicators of model performance. This research adapted proposed LSTM models to implement HAR with image features extracted by the MediaPipe library, and compared the LSTM models by their testing accuracy and training duration. The research was conducted under the OSEMN methodology (Obtain, Scrub, Explore, Model, and iNterpret). The dataset was the Weizmann dataset, to which data preprocessing and data augmentation were applied. Video features extracted by MediaPipe Pose were used to train and validate neural network models built around Long Short-Term Memory layers. The process concluded with model performance evaluation based on confusion matrix interpretation and calculations of accuracy, error rate, precision, recall, and F1-score. This research yielded seven LSTM model variants, with the highest testing accuracy at 82% and a training duration of 10 minutes and 50 seconds.},
  issn     = {2460-9919},
  pages    = {190--201},
  doi      = {10.14710/teknik.v43i2.46439},
  url      = {https://ejournal.undip.ac.id/index.php/teknik/article/view/46439}
}
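The abstract lists the evaluation metrics derived from the confusion matrices: accuracy, error rate, precision, recall, and F1-score. As a minimal sketch (not the paper's code), these can be computed from a multi-class confusion matrix as follows; the matrix values and class names below are hypothetical, not the paper's results.

```python
# Illustrative sketch: accuracy, error rate, and macro-averaged
# precision/recall/F1 from a multi-class confusion matrix.
# matrix[i][j] = number of samples with true class i predicted as class j.

def metrics_from_confusion(matrix):
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))  # diagonal = correct predictions
    accuracy = correct / total
    error_rate = 1.0 - accuracy

    precisions, recalls, f1s = [], [], []
    for c in range(n):
        tp = matrix[c][c]
        fp = sum(matrix[r][c] for r in range(n)) - tp  # predicted c, true class differs
        fn = sum(matrix[c]) - tp                       # true class c, predicted differently
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f1)

    return {
        "accuracy": accuracy,
        "error_rate": error_rate,
        "precision_macro": sum(precisions) / n,
        "recall_macro": sum(recalls) / n,
        "f1_macro": sum(f1s) / n,
    }

if __name__ == "__main__":
    # Hypothetical 3-class confusion matrix (e.g. walk / run / jump).
    cm = [
        [8, 1, 1],
        [2, 7, 1],
        [0, 1, 9],
    ]
    print(metrics_from_confusion(cm))  # accuracy 0.8, error rate 0.2
```

Macro averaging (the unweighted mean over classes) is one common convention; the paper does not state which averaging it used, so this choice is an assumption.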
Last update: 2024-11-22 01:53:37
The Authors submitting a manuscript do so on the understanding that if accepted for publication, copyright of the article shall be assigned to jurnal TEKNIK and Faculty of Engineering, Diponegoro University as publisher of the journal.
Copyright transfer agreement can be found here: [Copyright transfer agreement in doc] and [Copyright transfer agreement in pdf].