Finally. The first welcome email to arrive in your inbox all afternoon. Your translations have returned from the agency, and you’re ready to share them with the world.
There’s one question to consider before clicking publish, though: How do you, as someone who doesn’t speak the target language, know whether the translations are actually good?
Determining Translation Quality is Imperative
Most vendors will address this concern in your initial vetting process, assuring you that their translators are accomplished academics with decades of certified experience. They may even elaborate on the exact quality control measures they apply throughout the translation process.
But it’s clear that past promises don’t always predict future satisfaction. In fact, SDL’s Translation Technology Insights report revealed that 64% of translation professionals regularly perform rework in response to third-party feedback.
So how might the industry help customers feel more confident in its services?
Well, one option is to shift the translation quality conversation into a language everyone understands.
Embracing Quantified Translation Quality
The advancement of objective translation quality criteria began in earnest in the 1990s with the introduction of the SAE J2450 metric and the LISA QA Model. These frameworks functioned as error-counting scorecards, enabling reviewers to quantify and categorize translation issues in a standardized fashion for the first time.
Modern iterations like the TAUS DQF have continued this good work, further refining quality calculations and even generating valuable industry benchmarks. But error-counting scorecards of any kind share similar limitations.
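To make the scorecard idea concrete, here is a minimal sketch of an error-counting quality metric. The categories and severity weights below are illustrative only, loosely inspired by J2450-style frameworks; the actual standards define their own categories and values. The resulting score is weighted error points per 1,000 source words, where lower is better.

```python
# Illustrative error-counting scorecard, loosely modeled on
# SAE J2450 / LISA-style QA frameworks. The categories and weights
# here are hypothetical examples, not the official values.

SEVERITY_WEIGHTS = {
    ("wrong_term", "serious"): 5,
    ("wrong_term", "minor"): 2,
    ("omission", "serious"): 4,
    ("omission", "minor"): 2,
    ("misspelling", "serious"): 3,
    ("misspelling", "minor"): 1,
    ("punctuation", "serious"): 2,
    ("punctuation", "minor"): 1,
}

def quality_score(errors, word_count):
    """Weighted error points per 1,000 source words (lower is better)."""
    points = sum(SEVERITY_WEIGHTS[(category, severity)]
                 for category, severity in errors)
    return points / word_count * 1000

# A reviewer logs each issue as a (category, severity) pair.
errors = [
    ("wrong_term", "serious"),   # 5 points
    ("omission", "minor"),       # 2 points
    ("punctuation", "minor"),    # 1 point
]
print(round(quality_score(errors, word_count=1500), 2))  # 8 points / 1500 words → 5.33
```

Note that a score like this only exists after a human reviewer has found and classified every error, which is exactly why the approach is reactive and expensive to scale.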
First and foremost, they’re designed for reactive analysis. Translations are published first, consumed by your audience second, and graded by your assessors last. As a result, any damage caused by low-quality content will already be done by the time errors are recorded and resolved.
And since these translation quality assessments are typically set up to sample only a portion of what’s produced, the applicability of their conclusions will always be limited.
Conducting more immediate or exhaustive tests isn’t the answer, though. These error-counting exercises already require significant resources to administer, and expecting them to cover more content in more languages as your translation strategy matures simply isn’t feasible.
Instead, what we need to do is move quality assurance further upstream in the translation process.
To Ensure Translation Quality, Pivot From Reactive to Predictive
You don’t need to wait until a project is published to gauge how well it will perform. Just as there are known tactics I can apply to improve the future search engine ranking of this blog post, there are known tactics linguists can apply to improve the eventual quality of their translations.
The process predicts the result.
You probably already know some of these positive predictors. Recruiting in-country linguists, pairing text with visual context, and spending more time on proofreading, for example, are three behaviors that clearly correlate with high translation quality.
But not all success factors are so obvious. And unless you can compile, comprehend, and respond to all these variables prior to publication, you’ll be no better off than someone using reactive scorecards.
Here’s where Smartling is uniquely positioned to intervene. Our cloud-based translation management platform has processed billions of jobs over the last eight years, generating a massive pool of data that we’ve used to derive a new translation quality metric that:
Quantifies and weighs dozens of translation variables, automatically and in real time
Unifies this data into one actionable number (the Quality Confidence Score™)
Predicts the likelihood of achieving a professional-quality output
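As an illustration of what a predictive metric like this might look like under the hood, a handful of process signals can be weighted and squashed into a single likelihood with a logistic function. To be clear, the features, weights, and formula below are invented for illustration; Smartling’s actual Quality Confidence Score™ model is its own.

```python
import math

# Hypothetical process signals, each normalized to the range 0..1.
# These features and weights are illustrative assumptions only --
# they are not Smartling's actual model.
WEIGHTS = {
    "in_country_linguist": 1.2,
    "visual_context_coverage": 0.9,
    "proofreading_time_norm": 0.7,
    "translation_memory_match": 0.5,
}
BIAS = -1.0

def confidence_score(signals):
    """Combine weighted process signals into a 0-100 likelihood
    of a professional-quality output (logistic squash)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in signals.items())
    return 100 / (1 + math.exp(-z))

job = {
    "in_country_linguist": 1.0,
    "visual_context_coverage": 0.8,
    "proofreading_time_norm": 0.6,
    "translation_memory_match": 0.4,
}
print(f"{confidence_score(job):.1f}")
```

The key design point is that every input is known *before* publication: the score reflects how the translation was produced, not an after-the-fact error count.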
We hope this holistic, predictive metric ultimately empowers customers to take more control of the translation quality conversation. Because the truth is, you no longer need to resign yourself to unfavorable or unpredictable outcomes.
Data is here to reveal exactly which behaviors are (and are not) contributing value and fueling success.