Writing, speaking, and translating the future

Tools

I break the tools into two categories: content lifecycle tools and translation management tools. I briefly touched on translation tooling in the discussion on production. The tools of localization exist to speed up and improve translations and to simplify the overall localization process for vendors, translators, and localization specialists. These tools bleed into the metrics and reporting for overall launches, but they are complex and rarely helpful for people outside the production of localized content.

CAT

Computer-assisted translation tools were introduced to localization to reduce the time and effort of translation while improving consistency and adherence to style guides. These client-side tools were once used only by translators. Many CAT tools are becoming deeply integrated into TMS products in SaaS offerings (Memsource, XTM), and they add productivity metrics for individual translators and locales. The newest wave of CAT tools is focused more on machine translation and post-editing activities than on pure translation, and these tools have webhooks or APIs (Matecat, Lilt) so they can be integrated with multiple TMS products. A separate set of CAT tools (albeit less sophisticated) is integrated with string repository solutions (Transifex, Phraseapp, Pontoon, Qordoba). There is still not a great solution on the market to track the development of strings, mobile application content, and marketing/help content.
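As a loose illustration of what that webhook/API integration looks like, here is a minimal sketch of a receiver that forwards confirmed segments from a CAT tool to a TMS. The payload fields and the TMS endpoint are hypothetical; real CAT and TMS APIs have their own schemas and authentication.

```python
# A minimal sketch of the kind of glue a webhook-capable CAT tool enables.
# The payload fields and the TMS endpoint below are hypothetical; consult
# the actual CAT/TMS documentation for real field names and auth.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
TMS_ENDPOINT = "https://tms.example.com/api/v1/segments"  # hypothetical endpoint

@app.route("/cat-webhook", methods=["POST"])
def cat_webhook():
    event = request.get_json(force=True)
    # Only forward segments once the translator confirms them.
    if event.get("type") == "segment.confirmed":
        segment = {
            "source": event["source"],
            "target": event["target"],
            "locale": event["locale"],
            "project_id": event["project_id"],
        }
        requests.post(TMS_ENDPOINT, json=segment, timeout=10)
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=5000)
```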

TMS

TMS tools are for large-scale translation management and leverage. For large enterprises and LSPs they are the lifeblood of the operation. But until recently these were transactional tools only: they tracked the progress and leverage of content during the process, but the finished documents themselves were often discarded after a short time. Most new TMS products give much more granular control over how much is saved and when it is discarded, but TMS products are not designed to track the content lifecycle. For that a CMS is necessary, and the interaction between the two systems presents a lot of interesting challenges.
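To make "leverage" concrete, here is a minimal sketch of scoring a new segment against a translation memory with a fuzzy-match percentage. The TM entries and the threshold mentioned in the comment are illustrative; real TMS engines use more sophisticated, tokenized matching.

```python
# A minimal sketch of "leverage": scoring a new source segment against a
# translation memory with a fuzzy-match percentage.
from difflib import SequenceMatcher

translation_memory = {
    "Click Save to keep your changes.": "Cliquez sur Enregistrer pour conserver vos modifications.",
    "Your session has expired.": "Votre session a expiré.",
}

def best_leverage(new_segment: str, tm: dict) -> tuple:
    """Return the closest TM source, its translation, and the match score (0-100)."""
    best = max(
        tm.items(),
        key=lambda item: SequenceMatcher(None, new_segment, item[0]).ratio(),
    )
    score = SequenceMatcher(None, new_segment, best[0]).ratio() * 100
    return best[0], best[1], round(score, 1)

print(best_leverage("Click Save to keep all your changes.", translation_memory))
# A score above roughly 75% is typically offered to the translator as a fuzzy match.
```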

Quality tools

There may be quality tools, restricted vocabularies, or business English used in the creation of content, but I will restrict my discussion here to the quality of localized content because it is a huge, poorly defined area of localization. TMS and CAT products often have a quality component built in, but I think it is important to distinguish between objective and subjective quality in any discussion of quality.

Objective quality is quality that is programmatically measurable. Spelling is a good example of this. Most tools integrate spell checkers based on hunspell or proprietary dictionaries. Anything else becomes a bit more subjective (though some toolmakers will claim terminology, grammar, and style can be measured objectively, I’m not convinced the claim is anything more than marketing hyperbole).
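As an example of a purely objective check, here is a minimal sketch using hunspell through the pyhunspell bindings. The dictionary paths and the example sentence are assumptions; adjust them to your installation and target locale.

```python
# A minimal sketch of objective, programmatic quality checking with a
# hunspell dictionary (via the pyhunspell bindings; dictionary paths
# vary by system, so adjust them to your installation).
import hunspell

checker = hunspell.HunSpell(
    "/usr/share/hunspell/de_DE.dic",  # assumed path to the target-locale dictionary
    "/usr/share/hunspell/de_DE.aff",
)

translation = "Bitte speichern Sie Ihre Änderungne vor dem Beenden."
misspelled = [word for word in translation.split() if not checker.spell(word.strip(".,"))]

for word in misspelled:
    print(word, "->", checker.suggest(word))
# Anything beyond this kind of check starts drifting into subjective territory.
```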

Subjective quality is not measurable programmatically; quality is in the eye of the beholder. Toolmakers claim statistical relevance, readability, and edit distance can measure quality at scale, but these measurements do not equal quality. Quality is also not yet measurable by artificial intelligence or machine learning, though some toolmakers will claim otherwise.
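For context, this is the kind of edit-distance measurement toolmakers report: a character-level post-edit distance between raw MT output and the edited text. The example strings are made up, and the number tells you how much was changed, not whether the result is good.

```python
# A minimal sketch of the post-edit distance that toolmakers often report.
# It measures how much an editor changed the raw MT output, which is a
# proxy for effort, not a direct measure of quality.
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

mt_output = "The button save your change."
post_edit = "The button saves your changes."
distance = levenshtein(mt_output, post_edit)
print(distance, "edits;", round(distance / max(len(post_edit), 1) * 100, 1), "% of the edited text")
```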

“Quality” is measured by humans, and if humans are not given the proper tools to do so, they are left to their own devices. Let me elaborate on this point. If I ask for a linguistic QA from an internal resource but don’t give them a yardstick for the exercise, how is quality measured? Or to ask a more basic question, how do I know my reviewer actually reviewed the material? The answer is that they find errors. The errors are not prioritized or categorized, but they do prove the reviewer did something. Without a yardstick each error carries the same weight, and a second review needs to be done to prioritize the errors based on the company or group’s priorities.

This situation reminds me of my days as a tenured English teacher at a community college. We measured the quality of writing through norming processes. A group of tenured faculty would grade the same essays using a rubric (our yardstick). The rubric used prescriptive language to help graders decide on subjective elements of an essay, so fluency, research, and logic were basic categories for an argumentative essay. Through successive rounds of norming we reached similar conclusions, and we had to keep discussing our weights until graders scored within a margin of error of one another.

What is MQM or DQF if not an industry norming exercise using a quality rubric that is optimized for each content type and each group’s tolerance for different categories of errors? This is currently the best way to measure quality because the errors are categorized and weights can be assigned based on the content type. Unfortunately, it is also one of the most arduous processes in localization, and as the ephemerality of content reduces the demand for high-quality content, so too will the demand for “quality” shrink.
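Here is a minimal sketch of what such a rubric-based score can look like: weighted error counts normalized per 1,000 words. The categories and weights are illustrative only, not the actual MQM or DQF specification values; a real program sets them per content type and per group's tolerance for each error category.

```python
# A minimal sketch of an MQM/DQF-style weighted error score. The categories
# and weights here are illustrative only.
ERROR_WEIGHTS = {
    "accuracy": 5.0,      # mistranslation, omission
    "terminology": 3.0,   # wrong or inconsistent term
    "fluency": 2.0,       # grammar, spelling, punctuation
    "style": 1.0,         # tone, register, style-guide deviations
}

def quality_score(error_counts: dict, word_count: int, per_words: int = 1000) -> float:
    """Weighted error points per N words; lower is better."""
    penalty = sum(ERROR_WEIGHTS.get(cat, 1.0) * count for cat, count in error_counts.items())
    return round(penalty / word_count * per_words, 2)

review = {"accuracy": 2, "terminology": 1, "fluency": 4, "style": 3}
print(quality_score(review, word_count=2500))  # 9.6 points per 1,000 words
```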

Enter MT and automated quality

There are dreams of reaching an ML-based process to measure the quality of translations at scale. BLEU, METEOR, WER, TER, and many more acronyms are behind the theories. But suffice it to say this is an uncracked nut. No one has a great method for measuring quality at scale yet, especially not across more than one language pair. I’m just touching on this here because I’ll need a much larger set of articles to explain MT, its evolution, and the theories behind measuring quality. Because I find it interesting to explain, I may write a few 10,000-foot articles that completely oversimplify the science, effort, and research that goes into MT.
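To give a flavor of these metrics, here is a minimal sketch of word error rate (WER) against a single reference. The sentences are made up, and like BLEU and TER this is a rough proxy for similarity, not a measure of quality.

```python
# A minimal sketch of word error rate (WER), one of the automatic metrics
# mentioned above. It compares a hypothesis to a single reference.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level edit distance via dynamic programming.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (r != h)))
        prev = curr
    return prev[-1] / max(len(ref), 1)

reference = "the cat sat on the mat"
hypothesis = "the cat is on the mat"
print(round(wer(reference, hypothesis), 3))  # 0.167: one substitution over six words
```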

Business process tools

Plunet and XTRF dominate this category, but Protemos is an up-and-coming startup in the field of managing the business of translations. These are tools that sit somewhere between business accounting and business process. The value is evident if translations are your business, but they really need to integrate with the CAT, TMS, terminology, and other production tools (or at least with the data from those tools) to be valuable. I mention them here because some enterprises choose to integrate them into the overall localization process.

Term database

Term databases can be integrated into TMS or CMS tools, or they can be standalone or even homegrown. These tools are a part of the overall content creation process if they are present, and they are essential to consistency across your business and content. They also help to save money and lower cycle time. However, a term database is usually one of the last tools integrated into a business, and when it is added there is a lot of cruft that must be fixed before the company can reap the benefits in its localized content.
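Below is a minimal sketch of the kind of consistency check a term database enables: flag a translation that contains an approved source term but not its approved target term. The entries are illustrative; real term bases also handle inflection, casing, and forbidden terms.

```python
# A minimal sketch of a term-base consistency check with illustrative entries.
term_base = {  # source term -> approved French target term
    "dashboard": "tableau de bord",
    "sign in": "se connecter",
}

def check_terms(source: str, target: str, terms: dict) -> list:
    """Flag approved source terms whose approved target term is missing."""
    issues = []
    for src_term, tgt_term in terms.items():
        if src_term in source.lower() and tgt_term not in target.lower():
            issues.append(f"'{src_term}' should be translated as '{tgt_term}'")
    return issues

src = "Sign in to view your dashboard."
tgt = "Connectez-vous pour voir votre tableau de bord."
print(check_terms(src, tgt, term_base))  # flags 'sign in' -> 'se connecter'
```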

CMS

The standard CMS begins to do more heavy lifting when coupled with a TMS. It may push or pull content to and from the TMS; track versions, locales, and customizations; or serve as a rendering engine for in-context review, among many other things. If you are working at scale you’ll need a CMS, and if you are working to create an extensible and agile platform you’ll want integrations between your CMS and TMS via APIs.
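As a rough sketch of that API integration, here is the push/poll/pull shape of a CMS-to-TMS round trip. Every endpoint, field name, and header below is hypothetical; real CMS and TMS APIs differ, but the flow is typically the same.

```python
# A minimal sketch of the CMS-to-TMS round trip via APIs. The endpoints,
# field names, and auth scheme below are hypothetical placeholders.
import requests

CMS_API = "https://cms.example.com/api"   # hypothetical
TMS_API = "https://tms.example.com/api"   # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}

def push_for_translation(doc_id: str, locales: list) -> str:
    """Pull the source document from the CMS and create a TMS job."""
    doc = requests.get(f"{CMS_API}/documents/{doc_id}", headers=HEADERS, timeout=10).json()
    job = requests.post(
        f"{TMS_API}/jobs",
        headers=HEADERS,
        json={"content": doc["body"], "source_locale": doc["locale"], "target_locales": locales},
        timeout=10,
    ).json()
    return job["id"]

def pull_when_done(job_id: str, doc_id: str) -> None:
    """Write finished translations back to the CMS as localized versions."""
    job = requests.get(f"{TMS_API}/jobs/{job_id}", headers=HEADERS, timeout=10).json()
    if job["status"] == "completed":
        for locale, body in job["translations"].items():
            requests.put(
                f"{CMS_API}/documents/{doc_id}/versions/{locale}",
                headers=HEADERS,
                json={"body": body},
                timeout=10,
            )
```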

I am not addressing MT in this post because I consider it a technology rather than a tool, though some services like Lilt and SDL’s adaptive product are taking MT and creating full-fledged tools and services from the technology, and some TMS products are integrating it. But I’ll save that for another post.
