It’s both strange and fascinating that with all the smart technology in our lives today, machines still haven’t conquered the complex art of language translation.
There have been major strides forward, of course, particularly in the past few years, but language remains a new frontier. Just as the weather defies meteorologists despite their best models, language, with its constantly evolving conditions, is no easy thing to conquer.
One might imagine that, with such advanced technology, doing so would be a piece of cake. But language, like the wind, is not easily predicted.
Despite several companies claiming to have achieved parity with human translation (you can read more on that subject here), this particular frontier at present remains wild and unconquered.
A Brief Overview of Machine Translation History
Peter Troyanskii was a Soviet scientist who presented his colleagues with the world’s first translation machine in 1933. It was a simple device consisting of a typewriter, a camera, and cards in four different languages. While remarkable, it was largely overlooked, and Troyanskii died 20 years later still trying to perfect his invention.
In 1954, what became known as the Georgetown-IBM experiment began. The clunky IBM 701 became the first computer to tackle translation. Its translation of 60 Russian sentences into English was rudimentary, but impressive enough to spur the funding of machine translation projects all over the world. In addition to the US, countries like Japan, Canada, and France jumped aboard the machine translation bandwagon.
Interestingly, enthusiasm for the new field fizzled out quickly when researchers ran into semantic barriers they couldn't overcome. In the US, the work essentially went dormant due to a lack of funding. While Canada and a handful of European countries continued the research, it wasn't until the early 1990s, with the rise of the internet, that the concept truly took root.
At that point, a statistical approach to machine translation was adopted. The World Wide Web was connecting the world like never before, and it was time to revisit translation.
Neural Machine Translation
In September 2016, Google made an exciting announcement: it had rolled out a game-changing technology with the catchy name of the Google Neural Machine Translation system (GNMT). By translating entire sentences at a time instead of single words, it can capture the meaning of a sentence rather than just its parts. Using a huge simulated neural network with multiple layers of processing, it effectively teaches itself: drawing on data from previous translations, it continually adds layer upon layer of information, tweaking and improving each translation over time.
This type of learning, referred to as “deep learning,” is the most significant breakthrough that machine translation has ever seen. It loosely simulates the way the human brain works. Imagine a child first learning to walk. Data from each previous attempt fuels the next try until, over time, the child is walking. He is perhaps a bit shaky at first, but with each attempt his gait improves. Soon he is running. Nobody taught him any of this. Although his parents might like to take the credit, it was the child’s internal neural networks that guided him. In short, he taught himself to walk. And so it is with neural machine translation.
This deep learning system uses millions upon millions of examples to deduce the best translation. Initially, GNMT was enabled for eight languages; within two years, it supported more than 80.
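As a loose illustration only (this is not GNMT, and every name and number here is invented for the example), the “layer upon layer” idea can be sketched as a stack of simple transformations, each one re-encoding the output of the layer before it:

```python
import math
import random

random.seed(0)

def layer(x, weights, biases):
    """One layer: weighted sums of the inputs followed by a tanh nonlinearity."""
    return [math.tanh(sum(xi * wij for xi, wij in zip(x, row)) + b)
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    """Random weights for a layer mapping n_in values to n_out values."""
    weights = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

# A toy three-layer stack: each layer re-encodes the previous layer's output,
# loosely mirroring the stacked processing described above.
dims = [8, 16, 16, 8]  # input size -> hidden sizes -> output size
layers = [make_layer(a, b) for a, b in zip(dims, dims[1:])]

x = [random.gauss(0, 1) for _ in range(8)]  # stand-in for an encoded sentence
for w, b in layers:
    x = layer(x, w, b)

print(len(x))  # 8
```

In a real system, the weights are not random: they are adjusted over millions of training examples so that the final layer's output is useful, which is the "learning" part of deep learning.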
The Challenges of Machine Translation
While neural machine translation has significantly improved translation quality, it still presents many challenges. The most obvious is that artificial intelligence is just that: artificial. AI simply cannot replicate certain qualities that humans possess (at least, not yet).
Every language has thousands of words that can have different meanings or implications based on their context. It isn’t yet possible for a machine to understand the context in which something is written. Gestures, emotions, and culture are all complexities that affect a word’s contextual meaning.
These are all factors that translation software cannot understand in the way that a human can. In many cases, the only way to adequately translate content is through the use of professional translators or translation services.
Another problem is the lack of meaningful feedback. Although translation software may seem to think, it does not; it is programmed to find solutions. That is not the same as collaborating with a professional translator or localization expert who can provide you with valuable feedback. Someone familiar with the language and culture you are translating into will catch errors in style and tone that could otherwise be disastrous.
The Future of Machine Translation
For neural machine translation to succeed, it needs a clearly defined set of parameters. AI has been able to beat humans at chess, for example, because the game has fixed rules grounded in logic and reasoning. But language is very often neither logical nor reasonable. Because language can take infinite directions, programming a machine to choose the correct one is daunting, and in all likelihood impossible.
In spite of the many advances we’ve seen in machine translation, it is still not ready to replace human translators. While professional translators regularly use software to aid their work, that software remains a tool in its infancy. And although some companies have claimed total parity with human translation, those claims are proving very hard to substantiate.
For translation to be relevant and accurate, nothing can replace a professional, experienced human translator, who is far better at delivering creative solutions that cater to the values and expectations of your target audience.