Meta wants to build a universal language translator

Deepak Gupta February 23, 2022
Updated 2022/02/23 at 8:46 PM

During Wednesday’s “Inside the Lab: Building for the Metaverse with AI” livestream event, Meta CEO Mark Zuckerberg not only laid out his company’s unwavering vision for the future, dubbed the Metaverse, but also revealed that Meta’s research division is working on a universal speech translation system that could streamline user interactions with AI within the company’s digital universe.

“The big goal here is to build a universal model that can incorporate knowledge in all modalities… all the information that is captured through advanced sensors,” said Zuckerberg. “This will allow for a vast scale of predictions, decisions and generation, as well as entirely new architecture training methods and algorithms that can learn from a vast and diverse range of different inputs.”

Zuckerberg noted that Facebook has continually strived to develop technologies that allow more people around the world to access the Internet and is confident that these efforts will also translate to the Metaverse.

“This will be especially important as people start teleporting through virtual worlds and experiencing things with people from different backgrounds,” he continued. “Now, we have a chance to improve the internet and set a new standard where we can all communicate with each other, no matter what language we speak or where we come from. And if we get it right, this is just one example of how AI can help bring people together on a global scale.”

Meta’s plan is twofold. First, Meta is developing No Language Left Behind, a translation system capable of learning “all languages, even if there isn’t much text available to learn,” according to Zuckerberg. “We are creating a single model that can translate hundreds of languages with state-of-the-art results for most language pairs – everything from Austrian to Uganda to Urdu.”

Second, Meta wants to create a Babelfish AI. “The goal here is instant speech-to-speech translation in all languages, even those that are primarily spoken; the ability to communicate with anyone in any language,” promised Zuckerberg. “This is a superpower that people have always dreamed of, and AI will deliver that in our lives.”

These are big claims from a company whose machine-generated avatars famously don’t extend below the waist. However, Facebook-cum-Meta has a long and broad history of AI development. In the past year alone, the company has announced advances in self-supervised learning techniques, natural language processing, multimodal learning, text-based generation, and AI-powered understanding of social norms, and has even built a supercomputer to aid in its machine learning research.

The company still faces the big hurdle of data scarcity. “Machine translation (MT) systems for text translations often rely on learning from millions of sentences of annotated data,” Facebook AI Research wrote in a blog post on Wednesday. “Because of this, MT systems capable of high-quality translations have been developed for only the handful of languages that dominate the web.”

Translating between two non-English languages is even more challenging, according to the FAIR team. Most MT systems first convert speech in one language to text, translate that text into the second language, and then convert the result back to speech. This slows down the translation process and creates an over-reliance on the written word, limiting the effectiveness of these systems for primarily spoken languages. Direct speech-to-speech systems like the one Meta is working on would avoid this bottleneck, resulting in a faster and more efficient translation process.
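To make the contrast concrete, here is a minimal illustrative sketch of the two architectures described above. None of this is Meta’s actual code; every function is a hypothetical stand-in for a real ASR, MT, or TTS model, used only to show how a cascaded pipeline chains three models while a direct system uses one.

```python
# Hypothetical stand-ins for real models; they pass labeled strings
# around instead of audio and text tensors.

def speech_to_text(audio: str, lang: str) -> str:
    """Stand-in for an automatic speech recognition (ASR) model."""
    return f"[{lang} transcript of {audio}]"

def translate_text(text: str, src: str, tgt: str) -> str:
    """Stand-in for a text-to-text machine translation model."""
    return f"[{tgt} translation of: {text}]"

def text_to_speech(text: str, lang: str) -> str:
    """Stand-in for a text-to-speech (TTS) model."""
    return f"[{lang} audio of {text}]"

def cascaded_translate(audio: str, src: str, tgt: str) -> str:
    # Three sequential models: each stage adds latency, and the whole
    # chain depends on the source language having a written form.
    transcript = speech_to_text(audio, src)
    translated = translate_text(transcript, src, tgt)
    return text_to_speech(translated, tgt)

def direct_translate(audio: str, src: str, tgt: str) -> str:
    # A single speech-to-speech model skips the text bottleneck,
    # which is what makes it viable for primarily spoken languages.
    return f"[{tgt} audio translated directly from {audio}]"
```

The point of the sketch is structural: the cascaded path compounds the errors and latency of three models, while the direct path removes the intermediate text representation entirely.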

