Production Localization: The Tip of the Iceberg

Localization as we know it

The most obvious area of localization, and the one most organizations are familiar with, is the production side. Traditionally, companies accepted the costs of localization as a part of doing global business. At first there wasn’t much thought given to strategy, and ROI was not really tracked. That has changed in some interesting ways. In this post I’ll examine how localization processes and tools are changing and where I think they are headed.

Localization Always

Certain standard localization production tasks form the backbone of translation, and they will remain a part of localization for the foreseeable future.

Translate, edit, proof

Translate, edit, proof (TEP) processes are still a large part of localization. They may morph over time, but as long as there is a need for human discernment, humans will perform translation, editing, proofing, post-editing, and training roles.

File management

Many of the file management tasks in localization are being automated, but a decent amount of file management and version control still requires humans.

Data munging

Localization engineering work is becoming more important as the speed and volume of content grow. Though some of the well-defined work can be automated, there will always be new work here, especially as companies try to create visibility across all of their data and data creation processes.
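
To make this concrete, here is a minimal sketch of the kind of munging work involved, assuming a simple JSON resource file format and hypothetical file names (en.json, de.json): it compares a source file against a translated one and flags missing keys and placeholder mismatches before they break a build.

```python
import json
import re
from pathlib import Path

# Matches common placeholder styles, e.g. {name}, %(count)d, %s
PLACEHOLDER_RE = re.compile(r"\{[^}]+\}|%\([^)]+\)[sd]|%[sd]")

def placeholders(text: str) -> list[str]:
    """Return the placeholders found in a string."""
    return PLACEHOLDER_RE.findall(text)

def check_resource_files(source_path: Path, target_path: Path) -> list[str]:
    """Compare a source and a translated JSON resource file and report
    keys that are missing or whose placeholders no longer match."""
    source = json.loads(source_path.read_text(encoding="utf-8"))
    target = json.loads(target_path.read_text(encoding="utf-8"))
    issues = []
    for key, src_text in source.items():
        if key not in target:
            issues.append(f"{key}: missing translation")
        elif sorted(placeholders(src_text)) != sorted(placeholders(target[key])):
            issues.append(f"{key}: placeholder mismatch")
    return issues

if __name__ == "__main__":
    for issue in check_resource_files(Path("en.json"), Path("de.json")):
        print(issue)
```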

Localization Past

Boxed Localization

During this earliest era of localization, the box mentality prevailed (in more ways than one). Boxed products needed to be shipped and sold in X number of countries, so localization had to happen, and it usually happened after the initial release. I’m sure some analysis went into the right number of localized boxes to produce; returned products, for example, might have been used to justify reducing production for a particular locale. But the final decisions were often driven by marketing requests rather than data analysis.

Sim-shipping was a big innovation in the industry at this time: organizations developed and localized their products simultaneously. This caused some churn and waste, but it is now common practice in most localization. Pseudo-localization and code freezes lower the churn and help reduce time to market, but they present new wrinkles around deployment and version control that localization tools are still trying to address.
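
As an illustration of pseudo-localization, here is a minimal sketch (the accent mapping and padding ratio are my own assumptions, not any particular tool’s behavior): it accents each character and pads the string so truncation and hard-coded text surface before translation even starts.

```python
# Accent each ASCII letter and pad the string so layout problems and
# hard-coded text show up during testing, before translation begins.
ACCENT_MAP = str.maketrans(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ",
    "àbçdéfghîjklmñöpqrstûvwxÿzÀBÇDÉFGHÎJKLMÑÖPQRSTÛVWXŸZ",
)

def pseudo_localize(text: str, expansion: float = 0.3) -> str:
    """Return an accented, padded copy of `text` wrapped in brackets."""
    padded = text.translate(ACCENT_MAP) + "·" * int(len(text) * expansion)
    return f"[{padded}]"

print(pseudo_localize("Save changes"))  # e.g. [Sàvé çhàñgés···]
```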

Localization Present

Can you hear me now?

The “now” of localization is all about speech and social network localization. The ephemerality and volume of social network content have altered the speed and quality required of localization, and voice assistants have scaled up the need for data to support AUIs (audio user interfaces). Facebook and Instagram don’t need the quality of a boxed product, nor do they have time for a long product release cycle. Amazon and Baidu demand billions of sound bites in hundreds of languages to harden their AUIs, but, as with social networks, there is little demand for long-form translation. It is all declarations and commands.

Tools for producing ephemera and web content

The needs of social networks and new internet companies have drastically changed the localization industry. They have redefined localization and many of the tools used to do the work.

The new internet companies have attempted to redefine localization as a product concern. This is great because it raises localization to an essential component of the product, and they examine it with A/B testing and journey mapping as a core user experience issue. As I would say in Amazonese, they’ve made localization a “customer obsession” issue. But this too is a form of atomizing localization: as a product concern it becomes even more of a production issue.

That framing is reflected in the tooling many of these companies are creating. Mojito from Box, Polyglot.js from Airbnb, and Pontoon from Mozilla are all great production tools for localization. They are utilitarian tools that get the job done. However, they don’t help anyone understand the metrics or the value of an enterprise’s localization efforts against the overall goals or mission of the company. At this point, company-wide concerns and the broader impact of localization are not part of the new tooling, though perhaps these companies are leveraging the data from these tools elsewhere to link the production of content to the generation of revenue and the strategy of the company.

Localization Industry Tools for these New Models

Some localization startups are trying to solve the same issues (e.g. Transifex and Phraseapp), but their tools also lack the holistic view, precisely because they are third-party tools focused on localization as a pure production issue. Over time these companies tend to move from proxy offerings to full TMS products; Smartling is a well-funded and more mature example of this metamorphosis. Again, all of these are good tools, but they are purely operational: project management tools that help manage the process of localizing content.

Machine Translation (S and N) and Machine Translation Post-Editing

Machine translation has become essential to meeting the demands of ephemeral content and other large-scale localization efforts. Statistical (S) MT long ago replaced most rule-based (RB) MT, and while it is far from perfect, the effort and linguistic expertise required to create the engines is much less than what RBMT demanded. Both statistical and neural MT still require a discerning human to train, review, and post-edit the output. Neural (N) MT is better at capturing the rhythms of speech and meaning, but at times it suffers from a more profound kind of nonsense. See this article on recurrent neural networks and Shakespeare for an interesting analysis. I’ve excerpted a small portion of an RNN-generated passage to show how neural models can produce fluent, nonsensical sentences. Though the passage is not a translation, it illustrates that a full corpus of content cannot always generate new content that is usable.

KING LEAR:

O, if you were a feeble sight, the courtesy of your law,
Your sight and several breath, will wear the gods
With his heads, and my hands are wonder’d at the deeds,
So drop upon your lordship’s head, and your opinion
Shall be against your honour.

It will be a long time before humans are not needed to evaluate quality, train MT engines, or post-edit the output. As lower quality translation has become acceptable for some use cases, raw NMT and SMT use has grown, but it will not fully supplant post-edited or human translations any time soon.

In another post I’ll take up MT training processes. I consider MT use in production a separate, ongoing process that takes trained personnel. The job of localization professionals is to adapt the MT output and provide a feedback loop for retraining the MT engines.
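
One way that feedback loop could be quantified, sketched below on the assumption that raw MT output and its post-edited form are both stored per segment: compare them with a simple edit-distance ratio (a crude stand-in for token-level metrics such as TER) and route heavily edited segments back into the retraining corpus.

```python
from difflib import SequenceMatcher

def post_edit_distance(mt_output: str, post_edited: str) -> float:
    """Rough post-editing effort: 0.0 means the MT output was kept as-is,
    values near 1.0 mean it was largely rewritten."""
    return 1.0 - SequenceMatcher(None, mt_output, post_edited).ratio()

def retraining_candidates(segments, threshold: float = 0.4):
    """Yield (source, post_edited) pairs whose MT output needed heavy
    editing; these are candidates for the retraining corpus."""
    for source, mt_output, post_edited in segments:
        if post_edit_distance(mt_output, post_edited) >= threshold:
            yield source, post_edited

# Illustrative segment triples: (source, raw MT, post-edited translation).
segments = [
    ("Bitte speichern Sie Ihre Änderungen.",
     "Please store your changes.",
     "Please save your changes."),
]
print(list(retraining_candidates(segments, threshold=0.05)))
```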

Quality review at scale

Multiple tool makers have claimed that they can evaluate translation quality at scale through artificial intelligence and machine learning. To evaluate those claims, we need to divide quality into what I call subjective and objective quality. Subjective quality is judgment-based: it is the appropriate word choice and tone for the content, validated because others would agree in principle with the choices a translator made. It is not easily measurable and is usually assessed across multiple reviewers with a matrix or rubric.

Spelling, on the other hand, is an objective quality issue (unless you’re dealing with historic content and the orthographic reforms of Portuguese or French); it is usually clear whether a word in a given language is spelled incorrectly. Grammar is less objective, and in translations, style, form, tone (formal vs. informal), and even meaning are less objective still. I don’t think there is truly a way to measure subjective quality. The MQM and DQF frameworks address this by treating quality as dimensional or spectrum-based, but that does not translate well to machine learning tasks. So what exactly is being measured as quality? These scores are really aggregates of human decisions made within similar parameters: “in the aggregate, human reviewers in these situations assessed the quality as Y.”
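
To illustrate the objective side of that split, here is a minimal sketch of the kind of mechanical checks that do scale (the specific checks and thresholds are my own assumptions); the subjective questions above are deliberately left out because they don’t reduce to rules like these.

```python
def objective_checks(source: str, target: str, max_ratio: float = 2.5) -> list[str]:
    """Run a few mechanical, objective quality checks on a segment pair.
    Subjective questions (tone, register, word choice) are out of scope;
    they still require human judgment."""
    issues = []
    if target.strip() == "":
        issues.append("empty translation")
    elif target.strip() == source.strip():
        issues.append("possibly untranslated (identical to source)")
    if source and len(target) > max_ratio * len(source):
        issues.append("target far longer than source")
    if "  " in target:
        issues.append("double space")
    if source.rstrip().endswith(".") != target.rstrip().endswith("."):
        issues.append("trailing punctuation differs")
    return issues

print(objective_checks("Cancel order.", "Cancel order."))
# ['possibly untranslated (identical to source)']
```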

Localization Tools

At the production level, localization tools are essential to large-scale translation processes. APIs, connectors, and integrated CAT tools have altered the industry, but the requirements to manage process, move and munge content, translate, and manage the content lifecycle are still essential to the translation process.

The overall enterprise tooling and development processes must be integrated into the localization process, but I will take these up in a separate post.

TMS/CAT

Startups in the localization space have collapsed the CAT/TMS paradigm. CAT tools are often integrated into TMS products; TMS components are stripped down to microservices that move content for rapid development; or TMS products take on connection, analytics, and string repository duties. However, these tools still stop short of connectors for full business integration. There is no easy way to take data from a TMS and integrate it into financial systems or enterprise strategy, a gap that is especially obvious to large enterprises.
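
As a sketch of what bridging that gap could look like, the snippet below joins a hypothetical TMS word-count export with hypothetical per-word rates to produce a per-locale spend summary that a finance system could ingest. The column names and rates are illustrative assumptions, not any vendor’s actual schema.

```python
import csv
from collections import defaultdict

# Hypothetical per-word rates; a real integration would pull these from
# the finance system rather than hard-coding them.
RATES_PER_WORD = {"de-DE": 0.18, "ja-JP": 0.22, "pt-BR": 0.12}

def spend_by_locale(tms_export_path: str) -> dict[str, float]:
    """Aggregate a TMS word-count export (columns assumed to be
    'project', 'locale', 'new_words') into spend per locale."""
    spend = defaultdict(float)
    with open(tms_export_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rate = RATES_PER_WORD.get(row["locale"], 0.0)
            spend[row["locale"]] += int(row["new_words"]) * rate
    return dict(spend)

if __name__ == "__main__":
    for locale, amount in spend_by_locale("tms_wordcounts.csv").items():
        print(f"{locale}: ${amount:,.2f}")
```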

Terminology

Terminology is another obvious source of savings, but few enterprises have integrated terminology tasks with localization tasks. Those that have hire full-time staff to manage their terminology, yet even these companies are hard pressed to rapidly change the name of a product or offering across all of their markets and locales.
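
Here is a minimal sketch of what automated terminology enforcement could look like, assuming a hypothetical termbase of approved translations: it flags segments where an approved term was not used, which is the kind of check that would make a cross-locale product rename tractable.

```python
# Hypothetical termbase: approved target-language terms per source term.
TERMBASE = {
    "de-DE": {"Cloud Drive": "Cloud Drive", "checkout": "Kasse"},
}

def term_violations(source: str, target: str, locale: str) -> list[str]:
    """Report source terms whose approved translation is missing from the
    target segment. Deliberately naive: simple lowercase matching only,
    no stemming or morphology."""
    violations = []
    for src_term, tgt_term in TERMBASE.get(locale, {}).items():
        if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
            violations.append(f"'{src_term}' should be rendered as '{tgt_term}'")
    return violations

print(term_violations("Proceed to checkout", "Weiter zum Warenkorb", "de-DE"))
# ["'checkout' should be rendered as 'Kasse'"]
```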

Production Metrics

Localization metrics are important for adjusting mid-course and for providing data for continual improvement of production processes. Questions about cost, time, and quality in localization are the starting point for those improvements. Evaluating localization metrics in isolation does not help the product or the vision of the company, yet many companies limit localization production metrics to improving localization processes. That is changing, and in future posts I’ll dive into broader use cases where the data is used to direct and inform company strategy.
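
As a small illustration, and assuming hypothetical job records from a TMS or workflow tool, the sketch below rolls job-level data up into per-locale turnaround and review-score averages, the kind of mid-course signal described above.

```python
from statistics import mean

# Hypothetical job records, as a TMS or workflow tool might report them.
jobs = [
    {"locale": "de-DE", "turnaround_hours": 18, "review_score": 0.96},
    {"locale": "de-DE", "turnaround_hours": 30, "review_score": 0.91},
    {"locale": "ja-JP", "turnaround_hours": 44, "review_score": 0.88},
]

def metrics_by_locale(jobs):
    """Roll job-level records up into per-locale averages for turnaround
    time and review score."""
    locales = {j["locale"] for j in jobs}
    return {
        loc: {
            "avg_turnaround_hours": mean(j["turnaround_hours"] for j in jobs if j["locale"] == loc),
            "avg_review_score": mean(j["review_score"] for j in jobs if j["locale"] == loc),
        }
        for loc in locales
    }

print(metrics_by_locale(jobs))
```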

Localization Future

Magic Leap, to my mind, may present interesting localization challenges in the future. I’ll write a separate article on Magic Leap and localization because the issues it raises are separate from the trends in the industry.

Conclusion

Production localization is complex. It requires expert practitioners and well-established processes for continuous improvement, but it is really the tip of the iceberg for localization. If an enterprise accepts the premise that localization is a linchpin of international commerce and global success, then it must be a C-level concern. That is the core premise of Whole Enterprise Localization Design. In future posts I’ll further elaborate on these concepts and illustrate why large enterprises and fast-growing startups need to incorporate localization and globalization into every aspect of their business.
