Using Closely-related Language to Build an ASR for a Very Under-resourced Language: Iban

Bibliographic Details
Main Authors: Juan, Sarah Samson, Besacier, Laurent, Lecouteux, Benjamin, Tan, Tien-Ping
Format: Proceeding
Language: English
Published: 2014
Subjects:
Online Access:http://ir.unimas.my/id/eprint/8881/
http://ir.unimas.my/id/eprint/8881/1/COCOSDA-sarahsamsonjuan.pdf
Description
Summary: This paper describes our work on an automatic speech recognition (ASR) system for an under-resourced language, namely the Iban language, which is spoken in Sarawak, a Malaysian Borneo state. To begin this study, we collected eight hours of speech data, since no ASR resources yet existed for this language. Given this lack of resources, we employed bootstrapping techniques based on a closely related language to build the Iban system. In this case, we used Malay data to bootstrap the grapheme-to-phoneme (G2P) system for the target language. We also developed several G2P systems to produce Iban pronunciation dictionaries, which were later evaluated on the Iban ASR to select the best version. Subsequently, we conducted cross-lingual ASR experiments using subspace Gaussian mixture models (SGMMs), where the shared parameters were obtained in either a monolingual or a multilingual fashion. Our observations show that using out-of-language data as the source language yields a lower WER when the Iban data is very limited.
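
The bootstrapping idea summarized above can be illustrated with a small sketch. The following toy Python example is not taken from the paper: the seed rule table, phone symbols, and word list are assumptions chosen for illustration. It shows how grapheme-to-phoneme rules borrowed from a closely related language such as Malay could generate a first-pass pronunciation lexicon for Iban words, which would then be hand-corrected and used to train a proper statistical G2P for the target language.

    # Toy sketch (not the authors' system): bootstrapping an Iban pronunciation
    # dictionary from Malay-style grapheme-to-phoneme rules. The rule table and
    # word list below are illustrative assumptions, not data from the paper.

    # Hypothetical seed rules borrowed from a closely related language (Malay);
    # multi-character graphemes are matched before single characters.
    SEED_RULES = {
        "ng": "N",   # velar nasal
        "ny": "J",   # palatal nasal
        "sy": "S",   # post-alveolar fricative
        "a": "a", "b": "b", "c": "tS", "d": "d", "e": "@",
        "g": "g", "h": "h", "i": "i", "j": "dZ", "k": "k",
        "l": "l", "m": "m", "n": "n", "o": "o", "p": "p",
        "r": "r", "s": "s", "t": "t", "u": "u", "w": "w", "y": "j",
    }

    def g2p(word: str, rules: dict) -> list:
        """Greedy longest-match grapheme-to-phoneme conversion."""
        phones, i = [], 0
        graphemes = sorted(rules, key=len, reverse=True)
        while i < len(word):
            for g in graphemes:
                if word.startswith(g, i):
                    phones.append(rules[g])
                    i += len(g)
                    break
            else:
                phones.append(word[i])  # pass unknown letters through unchanged
                i += 1
        return phones

    # Hypothetical Iban word list; a first-pass lexicon produced this way would
    # be hand-corrected and then used to retrain a statistical G2P for Iban.
    for word in ["nuan", "bejalai", "rumah", "panjai"]:
        print(word, " ".join(g2p(word, SEED_RULES)))

A seed lexicon produced by such rules is only a starting point; the paper's evaluation of several G2P variants on the Iban ASR reflects the need to measure which bootstrapped dictionary actually works best downstream.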