- The initiatives include a new Global AI Assurance Pilot, a testbed for technical testing of deployed GenAI applications
PARIS, FRANCE – 11 FEB 2025
1. Singapore has introduced new AI governance initiatives to enhance the safety of AI for both Singaporeans and global citizens, given the transboundary nature of AI products and services. These are: (i) the Global AI Assurance Pilot, which will establish best practices around technical testing of GenAI applications; (ii) a Joint Testing Report with Japan; and (iii) the publication of the Singapore AI Safety Red Teaming Challenge Evaluation Report. The announcement was made by Minister for Digital Development and Information, Josephine Teo, at the AI Action Summit (AIAS) held in Paris, France from 10 to 11 February 2025.
2. The AIAS built on the advances made at the Bletchley Park Summit in November 2023 and the Seoul Summit in May 2024, and brought together political, business and civil society leaders — including Heads of State, international organisations, and academics — to foster international cooperation in areas such as AI governance, innovation, and safety. Singapore endorsed the Leaders’ Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the conclusion of the AIAS on 11 February 2025.
New initiatives in AI Safety
3. Speaking at the AIAS panel on “Monitoring AI Risks at the Frontier” on 10 February, Minister Teo announced the following three new initiatives, which reflect Singapore’s commitment to rallying industry and international partners towards concrete actions that advance AI safety:
- The launch of the Global AI Assurance Pilot by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA), a testbed to establish global best practices around technical testing of GenAI applications. The Pilot will convene leading AI assurance and testing vendors with firms deploying real-life GenAI applications. It will shape future AI assurance standards and services, grow the local and international third-party AI assurance markets, and provide practical input to AI governance frameworks. [Refer to Annex A (65.85KB) for more details.]
- The release of a Joint Testing Report in collaboration with Japan under the AI Safety Institute (AISI) Network, which aims to make Large Language Models (LLMs) safer in different linguistic environments by assessing whether guardrails hold up in non-English settings. As co-lead of the Testing and Evaluation Track under the AISI Network, Singapore brought together global linguistic and technical experts from the network to conduct tests across 10 languages (Cantonese, English, Farsi, French, Japanese, Kiswahili, Korean, Malay, Mandarin Chinese, Telugu) and five harm categories (violent crime, non-violent crime, intellectual property, privacy, jailbreaking) to build up evaluation capabilities and methodological standards. The joint testing exercise expands on global efforts to make models safer across languages, given that current English-centric training and testing potentially leaves gaps in non-English safeguards. Refer to Improving Methodologies for AI Model Evaluations Across Global Languages for more details.
- The publication of the Singapore AI Safety Red Teaming Challenge Evaluation Report 2025 (1.41MB), to understand how LLMs perform across different languages and cultures in the Asia Pacific region, and whether their safeguards hold up in these contexts. The report also sets out a consistent methodology for testing across diverse languages and cultures, as no one party can accomplish that alone. It is based on findings from the AI Safety Red Teaming Challenge, organised by the IMDA and Humane Intelligence, a non-profit testing organisation, in November 2024. Over 50 participants from nine countries across the Asia Pacific came together and red-teamed four LLMs (Aya, Claude, Llama, SEA-LION) for cultural bias and stereotypes in non-English languages, compared to English. The Challenge aimed to advance the science of AI testing, a nascent space globally. The data collected will be used to develop benchmarks and automate testing for regional safety concerns. [Refer to Annex B (99.01KB) for more details.]
4. Minister Teo also participated in a Tony Blair Institute (TBI) panel on “Global Leadership in an Age of AI Opportunities” on the sidelines of the AIAS, as well as other closed-door roundtables on 9 February and at the AIAS on 10 February. She spoke about the need to balance AI’s transformative potential with safeguards, citing Singapore’s National AI Strategy 2.0 (NAIS 2.0) as an example of how governments can build a trusted AI ecosystem. She highlighted Singapore’s commitment to working closely with international partners to ensure that AI development remains inclusive, transparent, and accountable.
Strengthening global AI partnerships through bilateral engagements
5. Beyond her speaking engagements, Minister Teo also met with policymakers, industry leaders, and academics on the sidelines of the AIAS, exchanging insights on AI safety, regulatory frameworks, and emerging AI trends. These engagements reinforce Singapore’s role in shaping international AI standards and ensuring that AI governance remains adaptable to technological advancements.