Southeast Asia leads the world in AI optimism. Its governance frameworks are nowhere near ready.
- Southeast Asia leads the world in AI optimism, but its governance frameworks haven’t caught up.
- Malaysia posted the largest AI optimism increase of any country surveyed. The region’s responsible AI maturity hasn’t kept pace.
Southeast Asia trusts AI more than anywhere else in the world. Stanford University's 2026 AI Index Report, published this week, documents exactly how wide the gap between that optimism and the region's governance infrastructure has become.
In Malaysia, Thailand, Indonesia, and Singapore, more than 80% of respondents say AI will profoundly change their lives in the next three to five years. Malaysia recorded the largest increase of any country surveyed from 2024 to 2025, up nine percentage points on that measure.
North American and European respondents sit at the opposite end: lower excitement, higher nervousness. The United States ranks 24th in generative AI adoption at 28.3%, while Singapore is at 61% and the UAE is at 54%.

The issue is what sits on the other side of that optimism.
High AI optimism, uneven governance
The Stanford report’s responsible AI chapter draws on a survey conducted jointly with McKinsey across multiple regions and industries. In 2025, the global average responsible AI maturity score was 2.3 on a four-point scale, meaning most organisations are still integrating responsible AI practices rather than having them fully operational.
Asia-Pacific registered 2.5, the highest regional score, but that still places the region firmly in the “integrating” band rather than anything approaching comprehensive operational control. The maturity scale matters because it captures not just whether organisations have responsible AI policies on paper, but whether those policies are embedded into how decisions actually get made.
At 2.5, the region is ahead of North America at 2.2 and Latin America at 2.2, but the gap between a score of 2.5 and the full operational standard of 4.0 is substantial. Knowledge and training gaps were the top obstacle to responsible AI implementation globally in 2025, cited by 59% of respondents, up from 51% in 2024.
For a region where public AI enthusiasm is outpacing enterprise governance infrastructure, that skills gap is the most immediate vulnerability.
The trust picture and what it reveals

Southeast Asian countries report not just optimism about AI, but high trust in their governments to regulate it. Singapore leads all 30 surveyed countries at 81%, followed by Indonesia at 76%, Malaysia at 73%, and Thailand at 70%. The global average is 54%. The United States sits last at 31%.
That trust advantage is significant because government credibility creates the conditions for meaningful AI governance to be implemented and accepted. Countries where the public distrusts their government’s ability to regulate AI face a harder path to building the oversight infrastructure that responsible deployment requires.
But trust in government and the existence of adequate governance frameworks are not the same thing. The Stanford report notes that national AI strategies are expanding, particularly among developing economies, and that state-backed investments in AI supercomputing are rising. Yet model production remains concentrated in the US and China.
Southeast Asian countries are largely consumers and deployers of AI systems built elsewhere, which means the governance frameworks they develop need to account for dependency on foreign AI infrastructure, not just domestic deployment.
Where the optimism-governance tension is sharpest
The report's workplace data adds texture. In India, China, Nigeria, the UAE, and Saudi Arabia, over 80% of employees reported using AI at work on a semi-regular or regular basis. For India specifically, which posted the sharpest rise in AI nervousness of any country surveyed (up 14 percentage points, alongside a modest two-point increase in excitement), the picture is one of usage accelerating faster than comfort.
Organisational support for responsible AI governance also shows a regional pattern worth noting. According to the University of Melbourne and KPMG global workplace survey cited in the report, India scored among the highest for organisational support of AI strategy, literacy, and governance, with around 85 to 90% of respondents saying their organisation supports all three.
Countries at the other end, Japan and South Korea among them, reported the lowest levels of support for AI literacy and the least confidence in responsible AI governance. That split matters for anyone building or deploying AI products across the region. The same market that shows the highest workplace AI usage and organisational commitment to governance is also the one experiencing the fastest-rising anxiety.
High adoption and deep concern are running simultaneously, which is precisely the condition that tends to produce regulatory action.
Malaysia’s position
There is no national mandate for AI in school curricula, a gap covered separately in the Stanford report's education chapter. That chapter finds over 80% of Malaysian respondents expecting AI to profoundly change their lives, against a policy environment that has not formalised AI literacy at the school level.
Public enthusiasm outpacing policy infrastructure is not unusual in fast-moving markets. What makes Malaysia's position distinctive is the scale of the infrastructure investment arriving alongside that enthusiasm. The governance question (who oversees AI systems operating on Malaysian data, in Malaysian data centres, under Malaysian regulatory frameworks that are still being shaped) will not wait for the enthusiasm to mature into policy.
What the numbers actually say
Stanford’s report is measured in its framing. It does not characterise Southeast Asia as unprepared. It documents where sentiment, adoption, and governance infrastructure each stand and lets the gaps speak for themselves.
What those gaps say is that a region with some of the world's strongest AI confidence scores is building its governance capacity from a position of genuine institutional trust, which is an advantage. But it is doing so on a compressed timeline, against systems and models it largely did not build, and in a global regulatory environment where the EU is trusted more than any single national government to set the standards.
The optimism is real. The work ahead of it is too.
Source: Stanford HAI 2026 AI Index Report, published April 2026. Survey data from Ipsos AI Monitor 2025, University of Melbourne and KPMG International 2025, and McKinsey & Company 2025.