About Trust Technologies
The digital landscape is growing. As technology becomes more prevalent in our daily lives, its trustworthiness has become essential to ensuring that the systems we use and the transactions we make remain secure and fair, and that user privacy is protected. IMDA is looking ahead at nascent technologies that help guarantee privacy, verify trust, and give businesses and individuals confidence as they transact digitally.
Digital Trust Centre
The Digital Trust Centre (DTC), launched at AsiaTechX (ATx) Singapore 2022 on 1 June 2022, will lead Singapore’s research and development efforts in trust technologies and support talent development in this space. Funded by IMDA and the National Research Foundation (NRF) under the Research, Innovation and Enterprise 2025 (RIE 2025) plan, and hosted by Nanyang Technological University (NTU), the centre is a national effort focused on key areas of Trust Technologies, such as Privacy Enhancing Technologies (PETs) for data sharing and solutions to evaluate the trustworthiness of AI systems in Singapore.
This national effort to reinforce Singapore’s position as a trusted digital innovation hub will comprise:
- Trust Tech Research – Enable Institutes of Higher Learning (IHLs) and Research Institutes (RIs) to pursue research excellence in Trust Technologies and drive local and international collaborations.
- Trust Tech Innovations – Encourage academia and enterprises to co-develop and mature research ideas into market-ready solutions.
- New sandbox environment – Enable businesses to experiment with Trust Technologies, such as PETs, to address data-sharing challenges (a simple illustration follows this list).
- Deepen local capabilities – Nurture 100 R&D talents in digital trust.
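To give a concrete flavour of what experimenting with a PET might look like, the sketch below applies the Laplace mechanism, a standard differential-privacy technique, so that an aggregate statistic can be shared without exposing any individual record. The dataset, function name and epsilon value are illustrative assumptions for this article, not part of any DTC or sandbox specification.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace(0, 1/epsilon) noise
    suffices. The epsilon privacy budget here is an illustrative choice.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical scenario: report how many transactions exceed $10,000
# without letting the shared figure pinpoint any single record.
transactions = [4_200, 15_000, 9_800, 22_500, 11_300]
print(dp_count(transactions, lambda t: t > 10_000, epsilon=0.5))
```

A production deployment would also need to track the cumulative privacy budget spent across queries; a sandbox lets businesses trial exactly these kinds of design decisions before committing to them.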
Singapore AI Safety Institute
Designated as Singapore’s AI Safety Institute (AISI) in 2024, the DTC will also address gaps in global AI safety science, leveraging Singapore’s work in AI evaluation and testing. It will bring together Singapore’s research ecosystem, collaborate internationally with other AISIs to advance the science of AI safety, and provide science-based input to Singapore’s work in AI governance.
The Singapore AISI will focus on the following research areas:
- Testing & Evaluation methodologies for AI models.
- Safe Model Design, Development and Deployment practices throughout the AI lifecycle.
- Content Assurance to mitigate risks associated with AI-generated content.
- Governance & Policy to inform AI governance frameworks.
Recognising the global nature of AI development and the need for international collaboration to advance AI safety, the Singapore AISI is in discussions with counterparts such as the US and UK AISIs, and is actively seeking collaboration in the following areas:
- Joint Research Initiatives: Partnering on research projects and challenges that address open problems in AI safety.
- Sharing Best Practices: Exchanging knowledge and expertise on AI safety methodologies and governance frameworks.
- Talent Development: Collaborating on programmes to cultivate a global pool of AI safety experts.
For more information on the Singapore AISI, please refer to the factsheet “Digital Trust Centre designated as Singapore’s AI Safety Institute”.
For enquiries, contact the Digital Trust Centre and Singapore AISI, hosted by Nanyang Technological University, at dtc@ntu.edu.sg.
A.I. Verify – AI governance testing framework & toolkit
IMDA and the Personal Data Protection Commission (PDPC) developed A.I. Verify, an AI governance testing framework and toolkit that provides AI system owners with an objective and verifiable way to demonstrate responsible AI. The toolkit aims to be a “one-stop” tool for conducting technical tests, offering a guided interface that walks organisations in Singapore through the testing process. A.I. Verify is currently released as a Minimum Viable Product (MVP), and we welcome organisations to pilot the MVP with us.
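As an illustration of the kind of technical test such a toolkit automates, the sketch below computes the demographic parity difference of a binary classifier, one common fairness metric. The function name, data and two-group assumption are illustrative only and do not reflect A.I. Verify’s actual interface or test suite.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between two groups.

    Computes |P(pred = 1 | group A) - P(pred = 1 | group B)|; values
    near 0 suggest the model treats the groups similarly on this axis.
    Illustrative metric only, not A.I. Verify's API.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    a, b = sorted(rates)  # assumes exactly two groups for simplicity
    return abs(rates[a] - rates[b])

# Hypothetical loan-approval predictions (1 = approve) by applicant group.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # |0.75 - 0.25| = 0.5
```

In practice, a toolkit pairs metrics like this with a guided workflow, documented thresholds and process checks, which is what makes the results objective and verifiable rather than ad hoc.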