
Control Shift: How LLMs Are Changing Localization in a Way Machine Translation Never Could

By Vasso Pouli, Founder of NVLoc

Since OpenAI released ChatGPT to the world, and with tech companies racing to train and release better, faster, and more specialized language models, LLMs and AI have become household names.


With the debate on human vs. machine once again at the forefront of the language industry, we find ourselves having the same discussions as when machine translation was introduced, covering topics such as:


  • Control

  • Use cases

  • Specialized expertise and skillsets

  • Naming and self-identification

  • Pricing

  • Processes and standards


But here’s what I think has indeed changed this time round.


Control Has Changed Sides

The most impactful change brought about by LLMs—and the primary driver of disruption—is that control has shifted from the language service provider (LSP) side to the buyer side. By releasing ChatGPT to the public, OpenAI handed buyers a sense of power and autonomy that had previously resided with service providers.


Use Cases Are Crowdsourced, Not Imposed

Unlike machine translation, which was cloaked in science and came with a pre-defined use case (embedded in its very name), LLMs were launched without a specific application in mind. No sandbox environment, no constraints—just a playground for free experimentation. As a result, their potential applications were effectively crowdsourced to millions of users across industries, in what amounts to one of the largest user research projects ever conducted.


Technology Can No Longer Be Separated from the Offering

Like many service providers across industries, LSPs have traditionally outsourced their technology in a transactional rather than strategic manner, relying almost entirely on tech providers’ roadmaps. When these roadmaps were forward-looking, LSPs benefited; when they stagnated, LSPs’ capabilities were limited. The focus has largely been on internal efficiencies rather than on building a customer-centric infrastructure.


LLMs and AI have abruptly made us realize that technology cannot be an afterthought, something layered on top of the service offering. Instead, technology must be the spinal cord of a competitive, future-proof service. It should not only support operations; it should define and measure how services are rendered and how they evolve, enabling greater speed, quality, and scalability.


Expertise Is Still Being Acquired, and the Matching Skill Set Is Still Being Defined

As new use cases emerge, we witness a familiar pattern: buyers drive change, while suppliers initially struggle to justify their value and then scramble to adapt their solutions to the new reality. Understanding these advanced tools is easier for those with a technical background, so both buyer and supplier teams are seeking engineering expertise. Meanwhile, non-technical professionals are either upskilling or resisting change. The players best positioned to gain exponential value from the latest technology relatively early are tech providers, both within and outside the language industry.


Naming as a Self-Identification Stronghold Is No Longer Justified

New job titles are emerging as new teams form and new skills are required. Yet, translators continue to safeguard their title as their last stronghold of self-identification. Historically, job titles were tied to academic degrees, but that has not been the case in many industries for years. Translators have long performed various tasks—editing, proofreading, post-editing, research, terminology management, localization, culturalization, trans-adaptation, layout checks, linguistic testing—but these shifts never sparked debates on self-identification. So why has the rise of LLMs triggered such discussions? Even if translators need to rethink their title, their value can now extend to new areas: terminology validation, register adjustment, fact-checking, source validation, and relevance assessment.
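
To make one of these new value areas concrete, here is a minimal sketch of a terminology validation check driven by a general-purpose LLM. It assumes the OpenAI Python SDK with an API key in the environment; the model name, glossary, and prompt below are illustrative placeholders, not a recommended setup.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical approved terms for a source -> target language pair
glossary = {"cloud computing": "informatique en nuage"}

def check_terminology(source: str, target: str) -> str:
    """Ask the model whether the target segment respects the approved glossary."""
    glossary_lines = "\n".join(f"- {s} -> {t}" for s, t in glossary.items())
    prompt = (
        "You are a terminology reviewer. Approved glossary (source -> target):\n"
        f"{glossary_lines}\n\n"
        f"Source: {source}\nTarget: {target}\n"
        "Flag any glossary term translated inconsistently and explain briefly."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(check_terminology(
    "Our cloud computing platform is secure.",
    "Notre plateforme d'informatique dématérialisée est sécurisée.",
))

The point is less the code itself than who runs it: a check like this can sit with the linguist or the buyer rather than the technology provider, which is precisely the control shift described above.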


Discussion Should Be About Offerings, Not Just Pricing

Buyers, LSPs, and freelance linguists are once again negotiating pricing, but what’s the point of discussing the price of an obsolete offering? Translation is already free. If translation is free, any bundle that previously included translation needs to be reevaluated, along with the associated pricing models.


Take the legacy TEP (Translation, Editing, Proofreading) workflow, for example. The proofreading step has already become obsolete for most content types, with the exception of document-based content. Yet we still refer to TEP instead of TE. Traditionally, translation (T) accounted for 70–80% of the rate, while editing (E) made up 20–30%. But now that LLMs provide the translation step for free, the real value in the service offering shifts from translation to editing. It follows, then, that editing LLM-generated content should carry its own independent price—potentially equivalent to what was previously considered the “new word rate” for TEP. New product, new price.
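
To make this concrete with purely illustrative numbers: under a hypothetical legacy TEP rate of $0.10 per word, roughly $0.07–0.08 would have covered translation and $0.02–0.03 editing. If the translation step now costs next to nothing, simply carrying the old editing share forward undervalues the work; the honest exercise is to price the editing of LLM output on its own terms, reflecting the effort, expertise, and accountability it actually requires.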


Current Processes and Standards Are Hardly Relevant

Just as naming conventions, service offerings, and pricing models are being challenged, so too are industry processes and standards.


LSPs have long justified their value through vendor management and project management functions. But now, what kind of talent should vendor managers source or train if not traditional ‘translators’? How will localization project managers run projects involving LLMs across various tasks? Are ISO standards for quality, translation workflows, and machine translation post-editing (ISO 9001, 17100, 18587) still relevant when LLM-generated content is replacing machine translation, or when workflows demand different skill sets? Management teams now need to make significant new strategic decisions.



A Win-Win Future: Maximizing the Potential of LLMs

While localization professionals can leverage LLMs for tasks such as quality control, terminology management, and workflow automation, the broader industry must consider deeper structural shifts.


Key areas of focus include:


  • Talent Mobility—Greater movement between the provider and buyer sides can unlock value, allowing professionals to shift into roles related to AI governance, content strategy, asset qualification, and data-informed decision-making.

  • Limitations of Current LLMs—Existing LLMs are either too broad—large-scale models with limited predictability for specific use cases—or too narrow, prioritizing English over true multilingual enablement. The industry must advocate for AI models that better serve global linguistic diversity, an area where localization professionals can contribute real value.

  • Interdisciplinary Work Groups—To keep up with AI advancements, industry stakeholders should establish interdisciplinary work groups to redefine standards, workflows and quality expectations that align with evolving AI capabilities.

  • Academia as a Bridge—Universities and research institutions can play a greater role in bridging the gap between legacy market standards and the latest technological advancements. By updating their curricula and fostering collaboration between academia and industry, they can help train a future-ready workforce with the relevant skills needed to navigate this evolving landscape.


Some of these ideas are not new, and yet we find ourselves, once again, pondering them in the realm of theory, because it’s hard to move from idea to implementation, because it’s hard to be the first to change, and because the status quo is really good at sticking around.

So, where do we start? Share your thoughts in the comments!




 
 
 
