Singapore embraces AI to solve everyday problems

As the city-state shuns heavy regulation and works with Big Tech to build AI tools, some say it could become too reliant on tech firms.


Booking a badminton court at one of Singapore’s 100-odd community centres can be a workout in itself, with residents forced to type in times and venues repeatedly on a website until they find a free slot. Thanks to AI, it could soon be easier.

The People’s Association, which runs the community centres, worked with a government tech agency to build a chatbot powered by generative artificial intelligence to help residents find free courts in the city-state’s four official languages.

The booking chatbot, which could be rolled out shortly, is among more than 100 generative AI-based solutions spurred by the AI Trailblazers project, launched last year to find AI-based solutions to everyday problems.

The project, backed by Singapore government agencies and Google, has also led to the development of tools to scan job applicants’ CVs, develop customised teaching curriculums, and generate transcripts of customer service calls.

It is part of the Southeast Asian nation’s AI strategy that is light on regulation and keen on “AI for all”, said Josephine Teo, minister for communications and information.

“Regulations are certainly part of good governance, but in AI, we have to make sure there is good infrastructure to support the activities,” she said at a briefing last month at Google’s Singapore office where some of the new tools were demonstrated.


“Another very important aspect is building capabilities …(and) making sure that people not only have access to the tools, but are provided with opportunities to grow the skills that will enable them to use these tools well,” Teo said.

With an explosion in the use of generative AI globally, governments are racing to curb its harms - from election disinformation to deepfakes - without throttling innovation or the potential economic benefits.

In Singapore, the focus is on AI adoption in the public sector and industry, and building an enabling environment of research, skills and collaboration, said Denise Wong, an assistant chief executive at Infocomm Media Development Authority (IMDA), which oversees the country’s digital strategy. 

“We are not looking at regulation - we see a trusted ecosystem as critical for the public to use AI confidently,” she told Context.

“So we need an ecosystem where companies are comfortable, that allows for innovation and to deploy in a way that is safe and responsible, which in turn brings trust,” she said.

Responsible AI

With its stable business environment, Singapore consistently ranks near the top of the Global Innovation Index, climbing to fifth place last year on the strength of its institutions, human capital and infrastructure.

On AI, Singapore was an early adopter, releasing its first national AI strategy in 2019 with the aim of individuals, businesses, and communities using AI “with confidence, discernment, and trust”.

It began testing generative AI tools in its courts last year and uses them in schools and government agencies. In December, it released its second national strategy, with the mission “AI for the public good, for Singapore and the world”.

Also last year, Singapore set up the AI Verify Foundation to develop testing tools for responsible use, and a generative AI sandbox for trialling products. IMDA, along with technology companies IBM, Microsoft, Google and Salesforce, is among its primary members.

The toolkit, on code-sharing platform GitHub, has drawn the interest of dozens of local and global companies, Wong said. 

“It provides users the means to test on parameters they care about, like gender representation or cultural representation, and nudges them toward the desired outcome,” she added.

In tests by tech firm Huawei, the toolkit highlighted racial bias in the data, while tests by UBS bank prompted reminders that certain attributes in the data could affect the model’s fairness, according to IMDA.
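AI Verify’s actual APIs are not described in this story, but the kind of check such toolkits automate, comparing a model’s outcomes across demographic groups and flagging disparities, can be sketched in a few lines. The following is purely illustrative: the function names, data and threshold are hypothetical, not AI Verify’s interface.

```python
# Illustrative demographic-parity check: compare a model's positive-outcome
# rate across groups and flag gaps beyond a tolerance. Hypothetical sketch,
# not AI Verify's actual API.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(predictions_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(preds) for preds in predictions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model predictions (1 = e.g. CV shortlisted), split by group.
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 positive
}

gap = demographic_parity_gap(predictions)
if gap > 0.2:  # tolerance chosen for illustration only
    print(f"Fairness flag: parity gap of {gap:.3f} exceeds threshold")
```

A real toolkit layers many such metrics (and robustness and explainability tests) over a model and its dataset, then reports where results deviate from the user’s chosen targets.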

“We want to enable everyone to use AI responsibly. But governments cannot do this on their own,” Wong said.

Goldilocks model 

Worldwide, there are more than 1,600 AI policies and strategies from 169 countries, according to the Organisation for Economic Co-operation and Development (OECD).

The United States has opted for a market-based model with minimal regulation, while Europe has embraced a rights-based approach, and China has prioritised sovereignty and security, said Simon Chesterman, a senior director at AI Singapore, the lead government programme. 

Singapore has taken a different path. 

“For small jurisdictions like Singapore, the challenge is how to avoid under-regulating – meaning you expose your citizens to risk – or over-regulating, meaning you might drive innovation elsewhere and miss out on the opportunities,” he said.

“In addition to this Goldilocks idea of regulation, there is a real willingness to partner with industry … because industry standards and choices will always be the first line of defence against problems associated with AI,” he said. 

“It also increases the chances that Singapore can reap the benefits of the new knowledge economy.”

The 10-member Association of Southeast Asian Nations’ guide to AI governance and ethics, released earlier this month, recommends principles of transparency, fairness and equity, accountability and integrity, and “human-centricity”.

Yet member countries including Singapore, Cambodia and Myanmar have been criticised for using AI to enhance surveillance, including with facial recognition and crowd analytics systems, and patrol robots.

A second edition of the AI Trailblazers project will be launched in Singapore this year and will help up to 150 more organisations build generative AI solutions for everyday challenges, Teo said.

While these collaborations between the government, industry and academia can accelerate technological progress, there are risks, warned Ausma Bernot, a researcher at Griffith University in Australia.

“There is the possibility of becoming overly reliant on these corporations in the medium- to long-term,” she said.

“The challenge is striking a balance between cooperation and maintaining sovereign control over critical AI infrastructure.”    

At the Trailblazers event, a short film on the People’s Association’s booking chatbot created a buzz of excitement. 

There were more than 140,000 badminton court bookings in 2022, so a tool that simplifies the process is welcome, said Weng Wanyi, director of the National AI Office.

“It will save time and effort,” she said. “At the end of the day, it’s about solving real problems with technology.”

This story was published with permission from the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers humanitarian news, climate change, resilience, women’s rights, trafficking and property rights. Visit https://www.context.news/.
