Brad Miller - Director of Operations

The Evolution of Machine Translation: A Timeline

Machine translation sometimes gets a bad reputation for inaccurate, low-quality results. In reality, that reputation is undeserved: language service providers routinely use machine translation to increase consistency in translations while saving time and reducing costs.


The idea of machine translation dates back to the 17th century, and the field has made enormous progress since World War II and the Cold War. Here’s a brief history of how machine translation evolved into the valuable resource it is today.


17th Century

The idea of machine translation first appeared in the 17th century, when the philosophers Leibniz and Descartes put forward proposals for codes that would relate words between languages, making it possible to decode one language into another. These proposals were purely theoretical, however, and no actual machine was ever built from their ideas.


1933

Soviet scientist Peter Troyanskii created a machine for “the selection and printing of words when translating from one language to another.” The invention was very simple: a set of cards in four different languages, a typewriter, and a film camera.


The machine operator would take the first word from the text, find the corresponding card, take a photo, and type its morphological characteristics (noun, plural, genitive, and so on) on the typewriter. The typewriter’s tape and the camera’s film were used simultaneously, producing a set of frames that paired words with their morphology. In linguistics, morphology is the study of words, how they are formed, and their relationships with other words in the same language.


Troyanskii presented his idea to the Academy of Sciences of the USSR, but the invention was deemed useless. Troyanskii died without finishing the machine, and no one in the world knew about his invention until two Soviet scientists found his patents more than 20 years later, in 1956.


1949

The first serious discussion of machine translation in the Western world came via a memorandum written by American mathematician Warren Weaver in 1949. Weaver, then a director at the Rockefeller Foundation, discussed the possibility of texts being translated entirely by a computer, with no human involvement. He believed machine translation could build on the cryptographic methods used to decode secret messages during the Second World War.


Weaver’s memorandum became widely regarded as the spark that set off machine translation research throughout the United States.


1954

Georgetown University and IBM conducted the first public demonstration of machine translation. On January 7, 1954, the IBM 701 computer automatically translated 60 Russian sentences into English. The experiment was small: the operator, who did not speak Russian, used punch cards to input 60 sentences built from a vocabulary of just 250 words. The results were nonetheless promising enough to inspire Germany, France, Canada, and Japan to compete with the United States in the race for a workable machine translation system to use in the Cold War against the Soviet Union.


1962

The Association for Machine Translation and Computational Linguistics was formed in the United States.


1964

The National Academy of Sciences formed the Automatic Language Processing Advisory Committee (ALPAC) to study machine translation.


1970

The French Textile Institute used machine translation to translate abstracts from and into French, English, German, and Spanish.


1984-1989

Trados was founded and became the first company to develop and market translation memory technology for commercial use.


1996

Machine translation went online, with the company Systran offering free translations of short texts. At this time, machine translation systems translated each word individually, without considering the meaning and context of the sentence. As a result, the translated material was often clunky and rudimentary.
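
To see why this word-by-word approach reads so awkwardly, here is a toy sketch in Python. It is purely illustrative (the mini glossary and example sentence are made up, and this is not how Systran actually worked): each word is looked up in a bilingual dictionary with no sentence-level context, so word senses and grammatical agreement easily go wrong.

```python
# Toy word-for-word "translation" in the style of mid-1990s online tools.
# The glossary and example below are invented purely for illustration.

glossary = {
    "the": "le",
    "bank": "banque",   # always "banque", even when "bank" means a river bank ("rive")
    "is": "est",
    "closed": "fermée",
}

def word_for_word(sentence: str) -> str:
    # Each word is looked up on its own; no sentence-level meaning or context is used.
    return " ".join(glossary.get(word, word) for word in sentence.lower().split())

print(word_for_word("The bank is closed"))
# -> "le banque est fermée": understandable, but the article does not agree with the
#    noun's gender ("la banque"), and the wrong word sense can slip through unnoticed.
```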


Early 2000s

The world’s leading technology companies, including Google, Microsoft, and Facebook, started investing in machine translation research.


2006

Google Translate launched. A year later, the program began offering automatic translations for Polish. Despite the low quality of its early translations, Google Translate became so popular that it began forcing its commercial counterparts out of the industry.


2014

Researchers began proposing machine translation systems based on neural networks. Instead of the earlier statistical, phrase-based approach, neural translation uses artificial neural networks loosely modeled on the way the human brain processes information. The Google Neural Machine Translation (NMT) system translates whole sentences instead of individual words or phrases. It uses an encoder to break down the source sentence and extract its meaning, and a decoder to generate the most relevant equivalent sentence in the target language.


This method results in more accurate translations than previous tools.
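
As a rough illustration of the encoder-decoder idea, here is a minimal sketch in Python, assuming the PyTorch library is available. The class name, dimensions, and toy inputs are all hypothetical; this is a conceptual outline of a generic sequence-to-sequence model, not Google's actual NMT system.

```python
# Minimal encoder-decoder sketch: the encoder reads the whole source sentence into a
# single state, and the decoder generates the target sentence from that representation.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 64, 128  # toy sizes, chosen arbitrarily

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the entire source sentence into one sentence-level state.
        _, sentence_state = self.encoder(self.src_emb(src_ids))
        # Decode the target sentence conditioned on that state, one position at a time.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), sentence_state)
        return self.out(dec_out)  # scores over the target vocabulary at each position

model = TinySeq2Seq()
src = torch.randint(0, SRC_VOCAB, (1, 7))  # a 7-token "source sentence"
tgt = torch.randint(0, TGT_VOCAB, (1, 6))  # a 6-token "target prefix"
print(model(src, tgt).shape)               # torch.Size([1, 6, 1000])
```

The key design point is that the decoder never looks up source words one at a time; it works from a representation of the whole sentence, which is what lets neural systems handle word order and context far better than word-by-word lookup.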


2020

Facebook announced that its machine translation system was exceeding expectations: it had taught itself to translate between language pairs beyond those it was explicitly built to handle. Facebook described it as the first multilingual machine translation model able to translate between any pair of 100 languages without relying on English data.


This breakthrough opens up new possibilities for machine translation research in the modern age.


While the evolution of machine translation has been remarkable, there is still a need for human linguists during the translation and localization process. Human translation and localization accomplish what a machine cannot: adapting material to the culture and language of a region.


We’d love to talk more about how our team at Language Intelligence uses machine translation within our end-to-end translation and localization services. Give us a call today.

