There are around 7,000 languages in the world, but current speech recognition models cover only about 100 of them comprehensively. That's because these kinds of models tend to require huge amounts of labeled training data, which is available for only a small number of languages, including English, Spanish, and Chinese.
Meta researchers got around this problem by retraining an existing AI model developed by the company in 2020 that is able to learn speech patterns from audio without requiring large amounts of labeled data, such as transcripts.
They trained it on two new data sets: one that contains audio recordings of the New Testament Bible and its corresponding text taken from the internet in 1,107 languages, and another containing unlabeled New Testament audio recordings in 3,809 languages. The team processed the speech audio and the text data to improve its quality before running an algorithm designed to align audio recordings with the accompanying text. They then repeated this process with a second algorithm trained on the newly aligned data. With this method, the researchers were able to teach the algorithm to learn a new language more easily, even without the accompanying text.
“We can use what that model learned to then quickly build speech systems with very, very little data,” says Michael Auli, a research scientist at Meta who worked on the project.
“For English, we have lots and lots of good data sets, and we have that for a few more languages, but we just don’t have that for languages that are spoken by, say, 1,000 people.”
The researchers say their models can converse in over 1,000 languages but recognize more than 4,000.
They compared the models with those from rival companies, including OpenAI’s Whisper, and claim theirs had half the error rate, despite covering 11 times more languages.
However, the team warns that the model is still prone to mistranscribing certain words or phrases, which could result in inaccurate or potentially offensive labels. They also acknowledge that their speech recognition models produced more biased words than other models, albeit only 0.7% more.
While the scope of the research is impressive, the use of religious texts to train AI models can be controversial, says Chris Emezue, a researcher at Masakhane, an organization working on natural-language processing for African languages, who was not involved in the project.
“The Bible has a lot of bias and misrepresentations,” he says.
