Introduction: India just pressed the turbo button
Tech Mahindra’s CEO Mohit Joshi just dropped a banger on the Q2 FY26 earnings call: the company is building a 1-trillion-parameter sovereign LLM under the IndiaAI Mission. Big claim. Big number. Bigger implications. And yes, the goal is India-controlled governance and deployment, exactly the right tone for a nation-scale model.
But before we sprint, let’s set the scene.
The IndiaAI Mission (₹10,371.92 crore) and the 8-team model push
The IndiaAI Mission, approved in March 2024 with a ₹10,371.92 crore outlay (~$1.25B), is India’s moonshot to make AI in India and make AI work for India. Moreover, the government is explicitly funding foundational models tailored to Indian languages, contexts, and public-sector needs.
In September 2025, the government named eight entities to build these models, including Tech Mahindra and IIT Bombay’s BharatGen consortium. Notably, BharatGen is tasked with a trillion-parameter effort too, so this is very much a multi-front push.
Meanwhile, India is scaling the pipes: the IndiaAI Compute pillar now cites tens of thousands of GPUs onboarded for affordable access, far beyond the original 10,000-GPU target, so researchers and startups aren’t throttled by compute scarcity.
Why a trillion? The scale story (and its limits)
Let’s talk scale. Historically, larger models have delivered better performance, up to a point. Google’s Switch Transformer crossed 1.6T parameters (sparse MoE), proving trillion-class models are technically feasible, though not always apples-to-apples with dense models. Additionally, Microsoft/NVIDIA’s MT-NLG 530B set a high-water mark for dense architectures. However, parameter count alone isn’t destiny.
Furthermore, recent releases like Llama 3.1 (405B) show that smart training, data quality, and optimization can close gaps without brute-forcing size. And crucially, compute-optimal scaling laws (DeepMind’s Chinchilla work and its follow-ups) emphasize the right balance of parameters vs tokens vs training budget, rather than “just make it bigger.”
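To make the “parameters vs tokens vs budget” point concrete, here’s a minimal back-of-envelope sketch, assuming the Chinchilla rule of thumb of roughly 20 training tokens per parameter and the standard C ≈ 6ND estimate of training FLOPs. None of these numbers come from Tech Mahindra or IndiaAI disclosures; they’re illustrative only.

```python
# Back-of-envelope compute-optimal math for a hypothetical 1T dense model,
# using the Chinchilla rule of thumb (~20 training tokens per parameter)
# and the standard estimate of C ~= 6 * N * D training FLOPs.

PARAMS = 1e12            # N: 1 trillion parameters (hypothetical)
TOKENS_PER_PARAM = 20    # Chinchilla-style ratio (Hoffmann et al., 2022)

optimal_tokens = PARAMS * TOKENS_PER_PARAM    # D: ~2e13 (20 trillion) tokens
train_flops = 6 * PARAMS * optimal_tokens     # C: ~1.2e26 FLOPs

print(f"compute-optimal tokens: {optimal_tokens:.1e}")
print(f"training compute:       {train_flops:.1e} FLOPs")
```

Twenty trillion tokens, heavy on Indic languages, is itself a daunting collection target, which is exactly why the data sections below matter as much as the hardware.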
Bottom line: 1T parameters is headline-worthy. Yet real-world gains will come from Indian-specific data, evaluation on Indic tasks, and responsible deployment, not just a bigger number.
Control, compliance, and confidence
“Sovereign” isn’t just patriotic branding. It signals data governance, residency, and policy control that match India’s public-sector workflows and regulatory priorities. Consequently, a sovereign LLM can be tuned for Indic languages, government services, health, agriculture, and more, without shipping sensitive data offshore. Tech Mahindra explicitly framed the model under the IndiaAI Mission as an indigenous build for India’s needs.
Additionally, IndiaAI’s pillars (datasets via AIKosh, safe & trusted AI, skilling, and startup financing) are built to keep capability and accountability at home.
10k+ GPUs? Try many tens of thousands.
Training any frontier model is a hardware marathon. The good news: the IndiaAI GPU program is ramping fast. Round-2 tenders drew bids for ~18,000 GPUs, while subsequent rounds added ~3,850 more devices and even Google Trillium TPUs. Nevertheless, provisioning contiguous, high-bandwidth clusters for multi-month training runs is still non-trivial.
And yes, the PIB’s latest update boasts 38,000 GPUs onboarded across the ecosystem. However, scheduling, interconnects, storage bandwidth, and reliability still define whether a trillion-class dense run is practical, or whether MoE routing and curriculum tricks become essential.
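To see why, here’s a rough wall-clock estimate for a dense, compute-optimal 1T run. The GPU count, per-device throughput, and utilization figures are placeholder assumptions on my part, not tender specifications:

```python
# Rough wall-clock estimate for a hypothetical dense 1T training run, reusing
# the ~1.2e26 FLOPs figure from the scaling sketch above. All inputs here are
# illustrative assumptions, not details from the IndiaAI tenders.

TRAIN_FLOPS = 1.2e26    # C ~= 6 * N * D with N = 1e12, D = 2e13
GPUS = 10_000           # assume a contiguous slice of the onboarded fleet
PEAK_FLOPS = 1e15       # ~1 PFLOP/s per accelerator at low precision (assumed)
MFU = 0.35              # optimistic-but-realistic model FLOPs utilization

sustained = GPUS * PEAK_FLOPS * MFU    # cluster-wide sustained FLOP/s
days = TRAIN_FLOPS / sustained / 86_400
print(f"wall-clock: ~{days:.0f} days")  # ~400 days under these assumptions
```

On these assumptions, even 10,000 well-utilized GPUs imply a year-plus run, which is why sparse MoE routing, smaller token budgets, or much larger contiguous clusters enter the conversation.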
Will it be the “second-largest” model on Earth?
Short answer: maybe, depending on definitions. For disclosed dense models, 1T would leapfrog MT-NLG’s 530B by a mile. But for sparse MoE, Google has published a 1.6T design; and several frontier labs run undisclosed MoE counts where “total parameters” and “active parameters” differ. In other words, any “global rank” is speculative and shifts with each release. So, celebrate the ambition but keep the asterisks.
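A toy parameter count shows how far apart “total” and “active” can sit. The architecture below (layers, widths, expert counts) is invented purely for illustration and matches no real model:

```python
# Total vs. active parameters in a sparse mixture-of-experts transformer.
# The config is made up for illustration; embeddings and norms are ignored.

def moe_params(layers: int, d_model: int, d_ff: int,
               n_experts: int, top_k: int) -> tuple[int, int]:
    """Rough count: attention projections plus expert FFNs."""
    attn = layers * 4 * d_model * d_model        # Q, K, V, O projections
    ffn_one = 2 * d_model * d_ff                 # one expert's up + down projections
    total = attn + layers * n_experts * ffn_one  # every expert is stored
    active = attn + layers * top_k * ffn_one     # only top-k experts run per token
    return total, active

total, active = moe_params(layers=64, d_model=8192, d_ff=32768,
                           n_experts=64, top_k=2)
print(f"total:  {total / 1e12:.2f}T parameters")   # the headline number (~2.2T)
print(f"active: {active / 1e12:.2f}T parameters")  # compute per token (~0.09T)
```

A model marketed at ~2.2T here only runs ~0.09T parameters per token, so dense-vs-MoE size rankings really are apples-to-oranges.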
The India advantage: Language depth, public datasets, citizen scale
Here’s where India can win: data flywheels that match reality on the ground. With AIKosh (dataset platform) and sectoral initiatives, IndiaAI is curating Indian-centric corpora for governance, healthcare, agriculture, and more. Therefore, if Tech Mahindra and BharatGen feed the model the right multilingual, culturally grounded data, and test it on Indian tasks, we’ll see impact that raw parameter counts can’t predict.
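For a feel of what “curating corpora” means at its most basic, here’s a first-pass filter doing exact deduplication plus a length gate. Real pipelines layer on language ID, fuzzy dedup, PII scrubbing, and toxicity filters; treat this purely as a sketch:

```python
# First-pass corpus filter: drop exact duplicates and very short fragments.
# Real multilingual pipelines go much further; this is illustrative only.

import hashlib

def curate(docs: list[str], min_chars: int = 200) -> list[str]:
    seen: set[str] = set()
    kept: list[str] = []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()  # exact-dup key
        if len(doc) >= min_chars and digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

print(len(curate(["नमस्ते दुनिया " * 30, "नमस्ते दुनिया " * 30, "short"])))  # -> 1
```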
Moreover, an India-governed model can standardize evaluation benchmarks for Indic use-cases (translation in low-resource dialects, public-scheme Q&A, court filings search, code-mixed chat, etc.). That, frankly, is where utility is won.
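To make “standardized evaluation” concrete, here’s the skeleton of such a harness. The task names, JSONL layout, and model callable are hypothetical placeholders, not an existing IndiaAI benchmark:

```python
# Skeleton of an Indic eval harness: score a model callable (prompt -> str)
# on JSONL task files of {"prompt": ..., "answer": ...} records.
# Task names and file layout are hypothetical.

import json
from pathlib import Path

INDIC_TASKS = ["hi_translation", "scheme_qa", "codemix_chat"]  # assumed names

def exact_match(pred: str, gold: str) -> float:
    return float(pred.strip().lower() == gold.strip().lower())

def evaluate(model, task_dir: Path) -> dict[str, float]:
    scores = {}
    for task in INDIC_TASKS:
        lines = (task_dir / f"{task}.jsonl").read_text(encoding="utf-8").splitlines()
        examples = [json.loads(line) for line in lines]
        hits = [exact_match(model(ex["prompt"]), ex["answer"]) for ex in examples]
        scores[task] = sum(hits) / len(hits)
    return scores

# Usage: scores = evaluate(my_model, Path("indic_evals")); publish every number.
```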
The hype is well placed; execution will decide the headlines
I love the ambition. India needs a sovereign, multilingual, policy-aligned model. Nevertheless, success hinges on four gritty things:
- Compute at scale (and the ops muscle to keep it humming). The tenders are promising; cluster-level engineering will matter even more.
- Data curation that’s diverse, de-biased, and deeply Indian. AIKosh helps, but quality control will be king.
- Evaluation on Indian tasks, not just Western leaderboards. Benchmarks must be public and merciless.
- Safety & governance baked in from day one so the model is trusted where it matters most: public service delivery.
If those boxes get ticked, the trillion tag won’t just be marketing; it’ll translate to measurable wins for citizens and businesses.
Quick facts (so you can win the chai-point debate)
- What was announced? Tech Mahindra says it’s building a 1T-parameter sovereign LLM under IndiaAI.
- Who else is in? Eight entities total, including IIT Bombay’s BharatGen, Fractal, Avataar AI, Zeinteiq, Genloop, NeuroDX, and Shodh AI.
- Mission size? ₹10,371.92 crore approved in March 2024; IndiaAI pillars span compute, datasets, skills, safety, startups, and more.
- Compute today? Government updates cite 38,000 GPUs onboarded across the ecosystem; tenders continue to expand capacity (including TPUs).
- How big is 1T globally? Huge but comparisons are messy (dense vs MoE; disclosed vs undisclosed counts).
Conclusion: Build big. Tune local. Ship value.
This is India thinking scale + sovereignty. And that combo, if paired with ruthless engineering and India-first evaluation, can change how citizens access services, how startups build, and how we govern data.
So yes, shout about the 1T. But measure the mission by Indic benchmarks, public outcomes, and trust. Because that’s how India wins the AI decade.
Sources & further reading
- Tech Mahindra’s 1T sovereign LLM announcement (earnings call coverage). (Moneycontrol)
- IndiaAI Mission: budget, pillars, GPU capacity, AIKosh datasets (official). (Press Information Bureau)
- Eight consortia, including IIT Bombay’s BharatGen (1T target). (India Today)
- GPU tender progress (bids, TPUs added). (The Economic Times)
- Scaling laws & model sizes: Switch Transformer (1.6T), MT-NLG 530B, Llama 3.1 (405B), compute-optimal research. (jmlr.org)