Why should we train AI to “reason” in multilingual, multicultural, multimodal mode for built-in global-fit outcomes?
- Talia Baruch
- Sep 17
By Talia Zur Baruch, GlobalSaké & LocLearn Founder

“The limits of my language mean the limits of my world.” (Ludwig Wittgenstein)
Dear GlobalSaké Community,
As we move deeper into September's autumn season here in the Northern Hemisphere, this is my cue to go visit the Ancient Wise Giants, the Sequoias and Redwoods of Northern California and Oregon (some of which are over 2,500 years old!), for some perspective.
This is also a good time for reflection and observation. As we explore the limits and opportunities of Agentic AI in our industry today, it's also helpful to reflect on the limits our own language places on how we perceive the world.
Language doesn't just describe reality; it shapes how we view it. The concepts available to us are limited by the words we use. Learning a new language expands our world of concepts and stretches our perceived possibilities. Multilingual people who've lived across different cultures and regions are often better attuned to multiple perspectives, and develop a broader, more adaptive outlook on creative problem-solving.
Multilingual and Creative Outlook
A 2009 study published in the Journal of Personality and Social Psychology (American Psychological Association) showed that living abroad enhances creative problem-solving: adapting to new cultural contexts increases cognitive flexibility and expands one's ability to approach problems from multiple perspectives.
What does that have to do with training AI?
Today, AI reasoning is mostly language-agnostic. Large language models (LLMs) use vector-based mathematical representations that map meaning across languages. However, since English dominates training data, AI often appears to “think” in English. To unlock richer, more creative reasoning, we should train AI to operate multilingually, multiculturally, and multimodally.
A common misconception is that multilingual AI is limited primarily by insufficient non-English training data. In practice, LLMs can leverage smaller sample datasets, cross-lingual transfer learning, and machine translation to bridge these gaps. Still, expanding high-quality non-English corpora improves accuracy and contextual appropriateness for the world's diverse use cases.
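To make the vector-representation idea above concrete, here is a minimal sketch in TypeScript. The phrases and four-dimensional toy embeddings are hypothetical, made up purely for illustration; a real system would obtain high-dimensional vectors from a multilingual embedding model. The point is that semantically equivalent phrases in different languages can land close together in the shared vector space, which is what lets meaning transfer across languages.

```typescript
// Minimal sketch: cosine similarity over (hypothetical) multilingual embeddings.
// The 4-dimensional vectors below are made up for illustration only; in practice
// they would come from a multilingual embedding model with hundreds of dimensions.

type Embedding = number[];

const toyEmbeddings: Record<string, Embedding> = {
  "the cat sleeps":    [0.91, 0.10, 0.05, 0.02], // English
  "le chat dort":      [0.89, 0.12, 0.07, 0.03], // French, same meaning
  "quarterly revenue": [0.04, 0.08, 0.95, 0.11], // unrelated concept
};

// Standard cosine similarity: 1.0 means identical direction, ~0 means unrelated.
function cosineSimilarity(a: Embedding, b: Embedding): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: Embedding) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Cross-lingual pair scores high; unrelated pair scores low.
console.log(cosineSimilarity(toyEmbeddings["the cat sleeps"], toyEmbeddings["le chat dort"]));      // ~0.999
console.log(cosineSimilarity(toyEmbeddings["the cat sleeps"], toyEmbeddings["quarterly revenue"])); // ~0.11
```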
Targeting Global-Fit AI
As agentic AI evolves, systems will increasingly:
• Specialize in expertise domains such as finance, law, or creative writing.
• Operate multimodally, e.g., compressing a long-form video into short, platform-specific reels, or adapting images for cultural fit (e.g., Google's Gemini 2.5 Flash Image "Nano Banana" model).
• Optimize for proactive outcomes, not just prompted responses.
For global users, this means AI must integrate contextual metadata to understand cultural nuance, regional norms, and user intent, ensuring relevant, global-fit performance across markets.
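What might that contextual metadata look like in practice? Below is a minimal sketch in TypeScript, assuming a hypothetical request schema; the CulturalContext fields such as locale, formality, and culturalNotes are illustrative, not part of any particular platform's API.

```typescript
// Hypothetical sketch: contextual metadata attached to an AI request so the
// system can adapt tone, formats, and references to the target market.
// Field names here are illustrative, not a standard schema.

interface CulturalContext {
  locale: string;                              // BCP 47 tag, e.g. "ja-JP"
  region: string;                              // market or regulatory region
  formality: "formal" | "neutral" | "casual";  // target register
  units: "metric" | "imperial";                // measurement preferences
  dateFormat: string;                          // e.g. "YYYY/MM/DD"
  culturalNotes: string[];                     // free-form guardrail hints
}

interface GlobalFitRequest {
  prompt: string;
  context: CulturalContext;
}

const request: GlobalFitRequest = {
  prompt: "Draft a product launch announcement for our new app.",
  context: {
    locale: "ja-JP",
    region: "APAC",
    formality: "formal",
    units: "metric",
    dateFormat: "YYYY/MM/DD",
    culturalNotes: [
      "use polite keigo register",
      "avoid idioms that assume a US cultural context",
    ],
  },
};

console.log(JSON.stringify(request, null, 2));
```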
Human expertise will remain crucial. I see the role of expert linguists and cultural specialists gradually shifting toward designing guardrails, crafting effective multilingual prompts, and building feedback loops. Over the next three years, they will increasingly serve as validation checkpoints that keep AI aligned with cultural expectations and regional ethical boundaries.
Code Vibing: No-/Low-Code Agentic AI Platforms
A wave of no- and low-code AI platforms now enables users to build full-stack web and mobile applications with conversational input. Here are some examples of dominant players in the market today:
Base44. Clever Agentic AI platform for all-in-one solutions with no-code conversational input.
• Founded in late 2024 by Israeli entrepreneur Maor Shlomo.
• Acquired by Wix.com in June 2025.
• Enables users to build complete web and mobile apps (front-end, back-end, databases, authentication, deployment) through natural language.
Lovable AI. As far as I can tell, Lovable is currently the most sophisticated Agentic AI platform.
• A text-to-app builder designed for non-technical founders.
• Generates full-stack applications from plain-language descriptions.
• Recently introduced Agent Mode (beta), which allows autonomous planning, debugging, web searches, and even image generation, reducing build errors.
Cursor. This is the most i18n-ready Agentic AI platform I’ve seen so far.
• An AI-powered IDE (a fork of Visual Studio Code) for developers.
• Provides natural-language code generation, in-editor chat, smart refactoring, and project-aware code understanding.
• Stands out for internationalization (i18n) support, including translation key extraction, multilingual file handling (e.g., en.json, fr.json), and Composer workflows (see the sketch after this list).
• Can operate in non-English languages (e.g., "always respond in German"), illustrating adaptability to mixed-language development environments.
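To make the translation-key handling mentioned above more tangible, here is a minimal sketch in TypeScript. It is not Cursor's implementation; the keys, messages, and t() helper are hypothetical, showing en.json/fr.json-style resources with a lookup that falls back to English when a key or locale is missing.

```typescript
// Minimal sketch of translation-key handling with en.json / fr.json-style resources.
// The keys, messages, and t() helper are hypothetical, for illustration only.

type Messages = Record<string, string>;

// In a real project these objects would be loaded from en.json, fr.json, etc.
const resources: Record<string, Messages> = {
  en: { "checkout.title": "Checkout", "checkout.pay": "Pay now" },
  fr: { "checkout.title": "Paiement", "checkout.pay": "Payer maintenant" },
};

const FALLBACK_LOCALE = "en";

// Look up a translation key for a locale, falling back to English, then to the key itself.
function t(key: string, locale: string): string {
  return resources[locale]?.[key] ?? resources[FALLBACK_LOCALE]?.[key] ?? key;
}

console.log(t("checkout.pay", "fr")); // "Payer maintenant"
console.log(t("checkout.pay", "de")); // no German resource, falls back to "Pay now"
```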
Key Takeaway
To achieve global-fit AI, training must go beyond raw data. It requires designing systems that integrate linguistic, cultural, and multimodal layers. Just as we humans expand our perspectives and creative problem-solving through exposure to multilingual, multicultural, multiregional environments, AI must be designed with built-in exposure to diverse languages, cultural contexts, and modalities to expand its conceptual "world of wisdom."