Google app adds 20 languages to visual translation service.

Google's instant visual translation feature has expanded from seven to 27 languages, now that the search giant has further improved its neural network technology.

The feature is part of the Google Translate app, but integrates technology from Quest Visual, which Google acquired in May 2014. Quest Visual made the Word Lens app, which automatically translated any text on signs viewed through a smartphone camera, without the need for an internet connection.

Until now, the app's visual feature has supported the seven languages at the core of the Google Translate service -- meaning you could swiftly translate road signs, menus, or any other written word to and from English, French, German, Italian, Portuguese, Russian and Spanish. Those languages and their translations were refined over time using some unlikely sources -- including official United Nations documents and EU policy papers, both of which have to be written up with the exact same wording in multiple designated languages.

Languages added to the visual translate feature include: Bulgarian, Catalan, Croatian, Czech, Danish, Dutch, Filipino, Finnish, Hungarian, Indonesian, Lithuanian, Norwegian, Polish, Romanian, Slovak, Swedish, Turkish and Ukrainian. Each can be translated to and from English, but not between each other. One-way translations from English to Hindi or Thai are also available. For each language, users will need to download a separate pack, with files under 2MB each.

Google Translate software engineer Otavio Good explained a bit about the tech behind this transition in a blog post, focussing on the leaps made in language processing with the advent of neural networks.

"Five years ago, if you gave a computer an image of a cat or a dog, it had trouble telling which was which. Thanks to convolutional neural networks, not only can computers tell the difference between cats and dogs, they can even recognise different breeds of dogs." We've seen the fruits of that advancement with the weird and wonderful artistic renderings of Google's Deep Dream, most recently. 

Good goes on to explain how the system is trained to tell letters apart from non-letters, using examples that include dirt, smudges and scruffy penmanship, so that real-world noise doesn't interfere with the translation. The system then looks up the word it thinks it's seeing in a dictionary, and is able to account for the odd error in its letter recognition.
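Good doesn't detail the lookup step, but the idea can be approximated with a fuzzy string match: the recogniser's best guess is compared against a word list, so a misread character still resolves to a real word. The dictionary and similarity cutoff below are invented for illustration, using Python's standard difflib.

import difflib

# Tiny stand-in dictionary; the real app ships a per-language pack.
DICTIONARY = ["menu", "exit", "stop", "open", "closed", "danger"]

def correct_word(guess):
    # Return the closest dictionary entry, tolerating the odd misread letter.
    matches = difflib.get_close_matches(guess.lower(), DICTIONARY, n=1, cutoff=0.7)
    return matches[0] if matches else guess

print(correct_word("menv"))  # -> "menu" (one letter misread)
print(correct_word("st0p"))  # -> "stop" (digit mistaken for a letter)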

This is all pretty standard fare. Where it gets interesting is in the use of a mini neural net to allow offline real-time translations.
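Good's post doesn't give the mini network's specifics, but one standard way to squeeze a neural net onto a phone is to quantise its 32-bit float weights to 8-bit integers, shrinking the stored model roughly fourfold at a small cost in precision. A toy sketch of that idea (the matrix size here is arbitrary):

import numpy as np

def quantize(weights):
    # Map the float range onto signed 8-bit integers.
    scale = np.abs(weights).max() / 127.0
    return np.round(weights / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)   # one toy weight matrix
q, scale = quantize(w)
print(w.nbytes, "->", q.nbytes, "bytes")           # 262144 -> 65536
print("max error:", np.abs(w - dequantize(q, scale)).max())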
