Myanmar Continuous Speech Recognition System Using Convolutional Neural Network

Authors: Yin Win Chit, Win Ei Hlaing, Myo Myo Khaing

Journal: International Journal of Image, Graphics and Signal Processing (IJIGSP)

Issue: Vol. 13, No. 2, 2021.

Free access

Translating the human speech signal into text, known as Automatic Speech Recognition (ASR), still faces many challenges in continuous speech recognition. A continuous speech recognition system is built from four stages: segmentation, feature extraction, classification, and recognition. Because of frequent changes in weather conditions, weather news has become important for everybody, yet deaf people cannot hear the news when it is broadcast over radio and television channels, although they also need that information. This system is designed to classify and recognize the words of Myanmar weather news reports and transcribe them as Myanmar text. Two types of input features derived from the Mel Frequency Cepstral Coefficient (MFCC) extraction method are used: MFCC features and MFCC feature images. These features are used to train the acoustic model and are classified with Convolutional Neural Network (CNN) classifiers. In the experimental results, the Word Error Rate (WER) of the entire system is 18.75% on the MFCC features and 11.2% on the MFCC feature images.
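The abstract describes a two-stage pipeline: MFCC feature extraction followed by CNN classification of segmented words. The snippet below is a minimal sketch of that idea, assuming librosa for MFCC extraction and PyTorch for a small 2-D CNN; the layer sizes, vocabulary size, and file name are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (not the paper's exact architecture): extract MFCCs from a
# segmented word and score it with a small 2-D CNN that treats the MFCC
# matrix as a single-channel image. Library choices and sizes are assumptions.
import librosa
import torch
import torch.nn as nn

def extract_mfcc(wav_path, sr=16000, n_mfcc=13):
    """Load a speech segment and return its MFCC matrix (n_mfcc x frames)."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

class WordCNN(nn.Module):
    """Toy CNN word classifier over MFCC 'images'."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mfcc, frames)
        return self.classifier(self.features(x).flatten(1))

# Usage on one segmented word (file name and class count are hypothetical).
mfcc = extract_mfcc("weather_word.wav")
inp = torch.tensor(mfcc, dtype=torch.float32)[None, None]  # add batch/channel dims
logits = WordCNN(n_classes=50)(inp)  # 50 = assumed weather-news vocabulary size
```

The reported WER follows the standard definition WER = (S + D + I) / N, where S, D, and I are the word substitutions, deletions, and insertions measured against a reference transcript of N words.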


Keywords: Automatic Speech Recognition, Convolutional Neural Network, Mel Frequency Cepstral Coefficient, Continuous Speech, Speech Segmentation

Short address: https://sciup.org/15017390

IDR: 15017390   |   DOI: 10.5815/ijigsp.2021.02.04

Research article