Hello, readers, and thanks for reading the inaugural post of my blog on language and business. I’ll be covering how language works, and how it works for you.
With Seoul in a frenzy over the recently concluded AlphaGo matches, the language industry is wondering: will AI come for us next? Guest poster Darren Lewis seemed the right person to ask.
AlphaGo and the AI Revolution: Is Natural Language Understanding Next?
Darren Lewis studied computer science at Stanford University and then worked at Google on the Gmail team, helping it grow into the world’s largest email service. He also worked with the Google Translate team to integrate their services into Gmail and other Google products, connecting hundreds of millions of people around the globe who speak over a hundred different languages. His current research interests focus on the intersection of natural language processing, computer vision, and artificial intelligence.
The historic victory of Google DeepMind’s AlphaGo over South Korean Go master Lee Sedol has pushed popular interest in artificial intelligence (AI) to an all-time high. No longer just the stuff of Hollywood movies, AI is improving at a remarkable pace and can now do things that were considered impossible just a few years ago. Can the same techniques that gave AlphaGo its “human-like intuition” in the game of Go be used to build software with human-like language understanding? Or is the domain of language safe from the machines? Let’s take a look.
Until just a few months ago, computer scientists thought that a professional-level Go program was at least a decade away. AI had already conquered chess back in 1997, when IBM’s Deep Blue used raw computational power to defeat chess legend Garry Kasparov. But Go is exponentially more complicated than chess, with the number of possible games far exceeding the number of atoms in the observable universe. Perhaps more importantly, unlike in chess, it is incredibly difficult in Go to evaluate how “good” a board position is, yet professionals develop a remarkable intuition for precisely this skill. AI couldn’t simply compute its way out of this problem; it needed to learn and behave more like a human, looking for patterns and using intuition to focus only on promising moves. The result is the brilliant AlphaGo, a system that uses general-purpose “deep neural networks,” or “deep learning,” to model the intricate game of Go and develop its own playing strategy.
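To get a feel for the scale involved, here’s a rough back-of-the-envelope sketch in Python. The branching factors and game lengths are standard ballpark figures rather than exact counts, and the “pruned” line is only meant to suggest how a learned policy narrows the search:

```python
import math

# Ballpark figures: chess offers ~35 legal moves per turn over ~80 plies;
# Go offers ~250 legal moves per turn over ~150 plies.
def log10_game_count(branching_factor, plies):
    """Order of magnitude of branching_factor ** plies possible games."""
    return plies * math.log10(branching_factor)

print(f"chess: ~10^{log10_game_count(35, 80):.0f} possible games")    # ~10^124
print(f"go:    ~10^{log10_game_count(250, 150):.0f} possible games")  # ~10^360
print("atoms in the observable universe: ~10^80")

# A policy network that proposes, say, only ~10 promising moves per turn
# shrinks the effective search space enormously:
print(f"pruned go search: ~10^{log10_game_count(10, 150):.0f}")       # ~10^150
```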
For those of us who work with language, the most interesting part is that the same deep learning techniques that power AlphaGo are now being applied directly to the problems of natural language understanding and machine translation. Six decades ago, in the heady early days of AI, computer scientists optimistically believed that a small, dedicated team working over a single summer could develop software to model human cognition and to understand and translate between languages. It turned out not to be quite that easy. They quickly discovered that languages are incredibly difficult to model. Words have different meanings in different contexts, grammar has a certain fluidity despite its definite structure, and concepts like tone and nuance layer additional complexity on top of the raw meaning of the words themselves. Today’s state-of-the-art translation systems, like Google Translate, are trained by processing massive numbers of “parallel texts” in two languages, such as United Nations transcripts, with the system gradually learning to identify statistical relationships between phrases in one language and phrases in the other. Between languages that are grammatically similar, such as Spanish and Italian, or Korean and Japanese, this approach often works quite well. But structurally different pairs, like English and Korean, fare much worse. And there is no one-to-one correspondence between languages: as any translator or interpreter knows, there are endless ways to translate a sentence, many of which can be considered “good” translations.
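As a toy illustration of that statistical idea (a minimal sketch, not Google Translate’s actual pipeline; the tiny English–Spanish “corpus” below is made up), here’s how simple co-occurrence counts over aligned sentences begin to reveal which words tend to translate which:

```python
from collections import Counter, defaultdict

# A tiny, made-up English-Spanish parallel "corpus".
parallel_corpus = [
    ("the house is red", "la casa es roja"),
    ("the house is big", "la casa es grande"),
    ("the car is red", "el coche es rojo"),
]

# Count how often each English word co-occurs with each Spanish word
# in aligned sentence pairs.
cooccurrence = defaultdict(Counter)
for english, spanish in parallel_corpus:
    for en_word in english.split():
        for es_word in spanish.split():
            cooccurrence[en_word][es_word] += 1

# Words that consistently appear together are likely translations:
# "house" co-occurs with "casa" in every sentence that contains it.
print(cooccurrence["house"].most_common(3))
# e.g. [('la', 2), ('casa', 2), ('es', 2)]
```

Raw counts like these confuse genuine translations with common filler words; real statistical systems add probabilistic machinery (alignment models, phrase tables) to tease the two apart.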
To tackle this challenge, computer scientists are now moving beyond purely statistical translation, hoping that massive neural networks of the kind employed by AlphaGo can model the complexities of language more effectively. This has already proved extremely successful in the closely related field of speech recognition, pushing accuracy to levels unimaginable just a few years ago. But translation is a much harder problem. A perfect machine translator would not only need a human-level abstract model of language; it would also need to understand cultural context, history, intent, audience, and countless other aspects of language that drive our speech but are usually taken for granted. Language is arguably the most complicated thing the human mind does, and a perfect translation system would need to model aspects of cognition that we aren’t even close to understanding.
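To make “massive neural networks for translation” a little less abstract, here is a minimal sketch of the encoder-decoder (“sequence-to-sequence”) family of architectures that neural translation systems build on, written in PyTorch with toy dimensions and made-up token IDs. It shows the shape of the idea, not any production system:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, source_ids):
        # Compress the source sentence into a single hidden state.
        _, hidden = self.rnn(self.embed(source_ids))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, target_ids, hidden):
        # Produce target-language word scores, conditioned on the
        # encoder's summary of the source sentence.
        output, _ = self.rnn(self.embed(target_ids), hidden)
        return self.out(output)

# Toy forward pass: a "sentence" of 5 source tokens and 4 target tokens.
encoder, decoder = Encoder(), Decoder()
source = torch.randint(0, 1000, (1, 5))
target = torch.randint(0, 1000, (1, 4))
scores = decoder(target, encoder(source))
print(scores.shape)  # torch.Size([1, 4, 1000]): one score per vocabulary word
```

One network reads the source sentence and compresses it into a vector; the other unrolls that vector into the target language, one word at a time. Training then adjusts millions of weights until the output scores favour good translations.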
But as we saw with AlphaGo, advances in AI often arrive suddenly, in large jumps rather than incremental improvements. Given the steep trajectory of deep learning, we can expect big gains in automatic translation in the coming years. Still, even a system with 99% accuracy might leave around ten errors in an article of this length alone. If perfection is what we’re striving for, then we’re nowhere close, and as scientists we are largely limited by our incomplete understanding of how we humans process language. But even imperfect translation systems can have immense value. When we integrated Google Translate into Gmail, for instance, we immediately received thank-you emails from users who suddenly had a much smoother way to communicate with relatives who speak a different language. The translations were nowhere near perfect, but they were more than enough to facilitate real human connections. When it comes to automatic translation, every domain has its own quality requirements, and the real question always comes down to this: How good is good enough?
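For the curious, the arithmetic behind that “ten errors” figure is simple. Assuming a post of roughly 1,000 words and treating “accuracy” as a per-word figure (a simplification):

```python
words_in_article = 1000      # rough length of a post like this one
word_level_accuracy = 0.99   # 99% of words handled correctly

expected_errors = words_in_article * (1 - word_level_accuracy)
print(f"~{expected_errors:.0f} errors")  # ~10 errors in a single article
```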
Only a human can answer that one.
—————————–
For more information on how Meridian Linguistics Ltd leverages progress in computational linguistics to deliver quality translations, click here.