Address artificial intelligence threats, politicians told

‘We must first ensure trust and transparency’ – privacy commissioner
Governments’ increasing use of artificial intelligence (AI) technology, combined with people’s inability to opt out of official computer services, presents threats politicians must address with law, privacy watchdogs say.

“Regulatory intervention is necessary,” the B.C. ombudsperson and the information and privacy commissioners of B.C. and Yukon said in a report released June 17.

“The regulatory challenge is deciding how to adapt or modernize existing regulatory instruments to account for the new and emerging challenges brought on by government’s use of AI. The increasing automation of government decision-making undermines the applicability or utility of existing regulations or common law rules that would otherwise apply to and sufficiently address those decisions.”

Just as commercial facial recognition systems have been shown to exhibit bias and infringe people’s privacy rights, government use of AI can have serious, long-lasting impacts on people’s lives and could create tension with the fairness and privacy obligations of democratic institutions, the report said.

And that, they said, undermines trust in government.

“While we recognize that delivering public services through artificially intelligent machine-based systems can be appealing to public bodies for cost reasons, we are concerned that, if not done right, this perceived efficiency may come at the expense of important rights to fair treatment,” said B.C. Ombudsperson Jay Chalke.

The late Prof. John McCarthy of the Massachusetts Institute of Technology and Stanford University coined the term AI. The report said McCarthy’s definition frames AI in terms of the development of machines that can perform tasks normally requiring human intelligence, such as visual perception, speech recognition, language translation and decision-making. It’s a capacity to respond to challenges and opportunities based on inputs and goals. 

What’s happening, the officials said, is that AI is replacing the judgment of human decision-makers in governments around the world. Such computer judgments could include predicting criminals’ recidivism rates, approving building permits, determining government program eligibility and deciding car insurance premiums.

“There is much good that comes from advancing AI technologies but if the public is to have confidence in its use we must first ensure trust and transparency is built into its development,” said B.C. Information and Privacy Commissioner Michael McEvoy.

The report said global spending on AI was US$12.4 billion in 2019 and is expected to reach US$232 billion by 2025. As part of Canada’s national AI strategy, Ottawa has invested $355 million to develop synergies between retail, manufacturing, infrastructure and information and communications technology sectors to build intelligent supply chains through AI and robotics.

However, another challenge comes from people themselves and a desire for fast service that could put highly personal and private data at risk.

The report said Peter Tyndall, former president of the International Ombudsman Institute and the ombudsman of the Republic of Ireland, has argued one of the biggest challenges facing independent oversight offices and core government alike is people’s expectation of speedy results and high levels of interactivity.

“They expect to interact with public services as they do with Amazon or Facebook, to communicate as they do on WhatsApp,” Tyndall said.

Other concerns highlighted in the report include the challenge of explaining to the public how decisions are made if algorithms are used, a lack of notice provided to people that these systems will be used in decision-making that impacts them and the absence of effective appeals from AI-generated decisions. 

“When our offices reviewed how AI is being used, we saw there is a real gap in uniform guidance, regulation and oversight that governs the use of AI,” said Ombudsman and Information and Privacy Commissioner for Yukon Diane McLeod-McKay. “We are hopeful that public bodies will carefully consider the guidance we are providing when they are using AI.” 

The report makes several recommendations aimed at public bodies delivering public services, including:

• the need for public bodies to commit to guiding principles for AI use;
• the need for public bodies to notify an individual when an AI system is used to make a decision about them and describe how the AI system operates in a way that is understandable to the individual;
• the need for government to promote capacity-building, co-operation and public engagement on AI;
• a requirement for all public bodies to complete and submit an artificial intelligence fairness and privacy impact assessment for all existing and future AI programs for review by the relevant oversight body; and
• the establishment of special rules or restrictions for the use of highly sensitive information by AI.