The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the United States, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.
"Singapore is one of the few countries on the planet that gets along well with both East and West," says Max Tegmark, an MIT scientist who helped convene last month's meeting of AI luminaries. "They know they're not going to build [artificial general intelligence] themselves, it will be done to them, so it is very much in their interest to have the countries that are going to build it talk to each other."
The countries most likely to build AGI are, of course, the United States and China, and yet those nations seem more intent on outmaneuvering each other than on working together. In January, after the Chinese startup DeepSeek released a cutting-edge model, President Trump called it "a wake-up call for our industries" and said the United States needed to be "laser-focused on competing to win."
The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways of building those models, and developing methods for controlling the behavior of the most advanced AI systems.
The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a leading AI event held in Singapore this year.
Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the United States, the United Kingdom, France, Canada, China, Japan, and South Korea also participated.
"In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future," said Xue Lan, dean of Tsinghua University, in a statement.
The development of increasingly capable AI models, some of which have surprising abilities, has led researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to exploit the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans in more and more domains. These researchers, sometimes called "AI doomers," worry that models may deceive and manipulate humans in pursuit of their own goals.
The potential of AI has also stoked talk of an arms race between the United States, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.