You can’t manage what you can’t measure. In this three-part series, we’ll explore the data that’s desperately needed to improve modern translation management strategies.
How do you verify the quality of content written in a language you do not speak? That’s one riddle the translation industry has still never quite solved for frustrated global marketers. Sure, most providers can point to a proficiency test their translators passed and a multi-step review process that governs every project. But beyond the opinion of the last editor to touch your content, what assurances can they really offer you?
According to SDL’s Translation Technology Insights report, 59% of translation professionals either don’t measure quality at all or rely on purely qualitative criteria. Or, to put it another way, many are praying to the gods of proofreading and hoping no errors are detected by their client’s customers or colleagues.
But errors are detected, aren’t they? We know it objectively: 64% of translation professionals say they regularly rework content based on third-party feedback. But we also know it intuitively when we hear whispers that our foreign language content feels clumsy, awkward, or robotic.
So going forward, how can we derive a measure of translation quality that everyone trusts?
The Road to Quantified Translation Quality
You don’t need translation project experience to recognize the pitfalls of qualitative feedback. Whether it was music or a meal, we can all remember a time when our creative output was met with wildly different opinions. Some praised our performance, others scolded our incompetence, and we were left without a clear idea of how or if we needed to improve.
While it’s always important to weigh the balance of personal opinions, valid quality ratings must ultimately be infused with some set of inarguable, objective standards as well. And everyone from food critics to figure skating judges can attest to the fact that art is not necessarily exempt from quantification.
Translation professionals are slowly coming to the same conclusion. The (now defunct) Localization Industry Standards Association (LISA) was the first to popularize a quality assurance framework that assessed various grammatical, stylistic, and formatting elements on simple 1-10 scales. The Translation Automation User Society (TAUS) has since improved upon this approach with a more dynamic, comprehensive model of its own.
Despite the debt of gratitude we owe these organizations for moving the conversation in the right direction, their solutions share the same fundamental flaw: a quality assessment protocol that’s applied after the fact. Reviewers survey a sample of published content and retroactively count the errors.
So while it’s nice to quantify a translator’s accuracy, learning about their poor ratings after the content went live is akin to learning about your taxi driver’s history of traffic violations after he’s driven you into a ditch.
Yes, you’ll have an objective reason to switch service providers in the future. But the damage was already done.
The Power of Predictive Assessment
Poor creative outputs are rarely a surprise. A closer analysis of their production will almost always reveal that corners were cut at a time when careful preparation was required instead. Success, then, is really just the result of good habits applied at key moments.
Some translation success factors are already fairly obvious. Referencing visual context, reusing strings stored in translation memory, and spending ample time on review are all behaviors that correlate well with translation quality. But even then, most companies can still only see these activities in hindsight, if at all.
That’s all starting to change, however, with the arrival of cloud-based translation management systems that track behavior as it happens. Every action can be captured as a real-time data point. As a result, we can now objectively analyze linguistic habits prior to publication and make an informed prediction about the resulting translation quality.
That’s the logic behind Smartling’s new Quality Confidence Score™ (QCS), a percentage-based prediction of expected accuracy, derived from the analysis of more than 75 behavior-based success factors.
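To make the idea concrete, here’s a minimal sketch of how behavior signals might be rolled up into a single percentage. The factor names, weights, and values below are purely illustrative assumptions; Smartling’s actual QCS model and its 75+ factors are its own.

```python
# Illustrative sketch only: combine normalized behavior signals (0.0-1.0)
# into a weighted percentage. Factor names and weights are hypothetical,
# not Smartling's actual model.

def quality_confidence_score(factors, weights):
    """Return a 0-100 score: the weighted average of observed factors."""
    total_weight = sum(weights[name] for name in factors)
    weighted_sum = sum(weights[name] * value for name, value in factors.items())
    return round(100 * weighted_sum / total_weight, 1)

weights = {
    "visual_context_used": 3.0,       # translator saw strings in context
    "translation_memory_reuse": 2.0,  # leveraged previously approved strings
    "review_time_adequate": 2.5,      # editor spent sufficient time on review
    "glossary_adherence": 1.5,        # terminology matched the glossary
}

observed = {
    "visual_context_used": 1.0,
    "translation_memory_reuse": 0.6,
    "review_time_adequate": 0.9,
    "glossary_adherence": 0.8,
}

print(quality_confidence_score(observed, weights))  # → 85.0
```

The key point is that every input here is observable *before* publication, so the score is a forecast rather than a post-mortem.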
This dynamic metric gives considerable power to translation customers. The unparalleled transparency made possible by data enables them to make shrewd assessments of Smartling translators and outside agencies alike. As a result, careless behaviors can be identified and addressed long before they have a chance to endanger brand reputations.
In addition to holding your translation providers accountable, the QCS can also inspire strategic workflow adjustments. If the ratings associated with technical content are comparatively low, for example, that might be the reminder you need to create a glossary translators can reference. At the same time, consistently high ratings associated with a certain translator may give you the luxury of scaling back an expensive review protocol.
So when asked how we think marketers should go about assessing the quality of their foreign language content, the answer these days is actually quite simple.
Decide with data.
Contact us today to learn exactly how data could be improving your translation outcomes.