Sam Altman said AI requires immediate safeguards and international coordination, proposing a body similar to the International Atomic Energy Agency.
Altman emphasises AI must be democratised to avoid dangerous centralisation, highlighting resilience as a core safety strategy.
He acknowledged that while AI will disrupt jobs, it could also boost economic growth and access to services.
Speaking at the AI Summit 2026, Sam Altman, CEO of OpenAI, said artificial intelligence urgently needs regulations and safeguards, adding that, as with many other powerful technologies, international coordination on AI regulation is necessary.
“In particular, we expect the world may need something like an IAEA (International Atomic Energy Agency) for international coordination of AI and especially for it to have the ability to rapidly respond to change and circumstances,” Altman said.
Altman, while speaking at the public forum, also offered a view of the near-term future of artificial intelligence, predicting that early forms of superintelligence could appear within just a few years. On the current trajectory, he said, we could be only a couple of years away from early versions of true superintelligence.
“By the end of 2028, more of the world's intellectual capacity could reside inside data centres than outside them,” he said, acknowledging that this is an extraordinary statement to make but one that bears serious consideration.
He added that superintelligence, at some point on its development curve, would be capable of running a major company better than any executive and conducting better research than any scientist.
“We understand that with technology this powerful, people want answers. But it is important to be humble about what we don’t know, and to remember that sometimes our best guesses could be wrong,” he said, adding that as a society-wide debate is convened around AI, society needs to use each successive level of the technology and have time to integrate and understand it.
Altman stated that navigating this future should be guided by three beliefs, the first being that democratisation of AI is the only fair way forward.
“Centralisation of this technology in one company or country could lead to ruin,” he said, adding that a desirable future decades from now should feature liberty, democracy and increased human agency.
Secondly, AI resilience is a core safety strategy. Lastly, he said the future of AI is not going to unfold exactly as anyone predicts, adding that many people have a stake in shaping the outcome. “The development of AI has already held many surprises and I assume there are bigger ones to come.”
Altman also said that AI could improve the economics of many sectors. “AI can make a lot of things much cheaper and drive much faster economic growth,” he said, adding that it has already improved access to high-quality healthcare and education. In the coming years, he noted, robots will manufacture many physical goods, making supply chains more automated and cheaper.
Talking about India, Altman mentioned that the country has as many as 100 million weekly users, more than a third of whom are students.
He also addressed concerns that AI might take away jobs. The other side of this progress, he said, is that current jobs are going to get disrupted, as is always the case with new technology. “AI can do more and more of the things that drive our economy today.”
However, he added that each generation is built on the work of the generation before, and with new tools, the scaffolding gets a little taller.
In the spirit of growth, he said that AI and technology are the only way forward. He added that there is a need for collective agency over AI, “to democratise AI”, and to provide people with agency and power, not just tools and wealth.
“The next few years will test global society as this technology continues to improve at a rapid pace. We can choose to either empower people or concentrate power,” he said.