Neural machine translation promises high-quality translation results for technical editorial teams. But without the company's own terminology, NMT can be tricky. TermSolutions Managing Director Prof. Dr. Rachel Herwartz explains why in her latest article in the trade journal for technical communication.
Technology with a lot of potential
The first attempts at machine translation were made in the 1950s, but it took almost 70 years for computer translation to deliver usable results. Deep learning technologies made this possible. With neural machine translation, a new type of translation technology has been available since 2016 that achieves an acceptable level of productivity: time savings of up to 30 percent and cost savings of 50 percent.
It doesn’t work without post-editing
The output keeps getting better. However, every type of machine translation that goes beyond simply conveying the gist of the content requires post-processing. According to Prof. Dr. Rachel Herwartz, post-editing in NMT is more than just correcting mistakes. It demands far more of translators today, because their work is shifting towards “augmented translation”. The author uses practical examples to explain why this is so and what exactly lies behind it, taking a very close look at the interplay of translation memory, MT system and terminology database.
No NMT process without a terminology database
The latter is particularly crucial in neural machine translation, Herwartz writes, because it addresses the terminology problems that frequently arise here. Connecting a terminology database is therefore mandatory. The author draws her conclusion accordingly: a prerequisite for the meaningful use of neural machine translation is prescriptive terminology work, together with the smooth interaction of all the components that make it easier for humans to post-process a machine translation during pre-editing, post-editing and quality assurance.
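To illustrate what such a terminology connection might look like in practice, here is a minimal sketch (not taken from the article) of a quality-assurance step that checks MT output against a prescriptive termbase. The termbase entries, example segments and the naive string matching are purely hypothetical assumptions for illustration.

```python
# Illustrative sketch only: check MT output against a prescriptive termbase.
# The entries and the simple substring matching are hypothetical assumptions,
# not the method described in the article.

termbase = {
    "Schutzhaube": "protective cover",      # hypothetical prescribed term pair
    "Drehmomentschlüssel": "torque wrench",
}

def check_terminology(source: str, mt_output: str) -> list[str]:
    """Flag prescribed target terms that are missing from the MT output."""
    issues = []
    for src_term, required_target in termbase.items():
        if src_term.lower() in source.lower() and required_target.lower() not in mt_output.lower():
            issues.append(f"Expected '{required_target}' for '{src_term}' in: {mt_output!r}")
    return issues

if __name__ == "__main__":
    src = "Die Schutzhaube vor der Inbetriebnahme montieren."
    mt = "Install the safety hood before commissioning."  # MT chose a non-prescribed term
    for issue in check_terminology(src, mt):
        print(issue)
```

In a real workflow, a check of this kind would sit alongside the translation memory and the MT system as one of the components supporting the human post-editor, not replace post-editing itself.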
The full article will soon be available to tekom members on the tk website.