There is global consensus among civil society, academia and industry that artificial intelligence adoption comes with risks and harms. Yet addressing these concerns has been marginal in Canada’s national AI strategy. The federal government’s major response — the Artificial Intelligence and Data Act (AIDA) — is flawed and does not address AI’s current and tangible impacts on our society.
Our research demonstrates key gaps in Canada’s approach to AI governance. The first is that AIDA, as presently drafted, does not address government use of AI, despite its widespread use across the public sector.
The 91原创 Tracking Automated Government (TAG) register lists 303 applications of AI within government agencies in Canada. The fact that AIDA as presently drafted will not apply to government use means this legislation is out of step with AI governance in other AI-leading nations and with the interests of 91原创s.
That we know so little about how the 91原创 government uses AI is just one shortcoming identified in a second report being released today. Our team has also identified key gaps spanning the last decade of AI governance in Canada. The report is part of a larger international project comprising research teams from Germany, the United Kingdom, Canada and France. It documents a lack of critical discussion of AI and its risks by all levels of government, alongside a failure to conduct public consultations.
Need for transparency
AIDA is Canada’s first focused attempt at regulating AI. The act was tacked onto the end of Bill C-27 and is currently being reviewed by a parliamentary committee. It has been widely criticized for falling short of the protections 91原创s need.
Even as Parliament debates AIDA, the government is accelerating AI adoption.
On April 7, the prime minister announced plans to spend $2.4 billion to increase AI adoption and use in Canada. Surprisingly, only four per cent of that funding is devoted to AI’s social impacts. This includes vague commitments to responsible AI, support for workers who might lose their jobs and a paltry amount for a forthcoming AI and data commissioner.
Making government and business uses of AI more transparent, and engaging in meaningful consultation to strengthen oversight and accountability, would demonstrate that the government is genuinely interested in taking public concerns seriously.
Our research shows a gap between the hopes and realities of AI that AIDA must address.
AI registries
We developed the 91原创 TAG register in collaboration with a U.K.-based partner organization.
Researchers and civil society organizations have been calling for public registries of AI and automated decision-making systems.
AI registries are already produced by a number of cities, including Amsterdam and Helsinki.
Our 91原创 TAG register is a start, but it is limited given the lack of publicly available information about where and how AI and automated systems are being used.
Documenting impacts
The argument for registries is based on the idea that in order to develop effective oversight, policymakers and the public need to be able to see how government agencies and businesses are already making use of AI.
Maintaining this registry — or a similar one — should be delegated to an independent and well-resourced public authority. This would enable more widespread and meaningful debate about if, where and how AI should be used, and the kind of oversight we need.
There is an extensive record documenting the ways government and corporate uses of AI and automated systems have already led to harm. Previous research has also documented the strain placed on individuals, communities and review bodies working to identify and redress these harms.
The aims of the 91原创 TAG register are to:
- advance discussion about the need for resourced, maintained and public registries of government and business uses of AI and automated decision systems (ADS);
- enable more widespread discussion about if, where and how AI and ADS should be used;
- stimulate more research and debate about the kinds of systems in use and their impacts;
- demonstrate the very limited information presently available about systems piloted or in use.
Maintaining and archiving these registries would require that government agencies track and document their uses of AI and automated systems. Government agencies would also need to make procurement details and company processes more transparent, explain the intentions and uses of AI and automated decision systems, and respond to citizens’ requests for information.
Advocates propose that registers should include detailed information about each system, its purpose and its uses.
AI governance in Canada
The federal government introduced its Directive on Automated Decision-Making in 2019. This was supposed to make government uses of AI and algorithmic systems more transparent through mandated impact assessments. At the time of writing, only 18 of these assessments have been published.
The need for a registry is just one of our findings. Our report documents notable silences that AIDA has not addressed surrounding data sovereignty, as well as an absence of input from creative and cultural sectors and on the environment.
Government policies, instead, have narrowly focused on AI as economic and industrial policy. Consultations have been largely theatrical, letting AI adoption continue despite deep concerns from the public, especially over facial recognition technologies.
91原创s’ trust suffers as a result. 91原创s report some of the lowest levels of trust in AI, even though Canada was among the first countries to launch a national AI strategy.
Even the government’s own procurement policies have largely sidelined effective consideration of AI’s social impacts. Instead, AI is seen as a remedy for the service delivery shortcomings of Canada’s public sector, and yet these changes have been made with little public consultation.
AI has profound social implications, yet it continues to be treated primarily as a technical and economic matter.
Withdrawal of AIDA
Our research supports calls for the withdrawal of Canada’s latest effort to regulate AI, pointing to two significant problems:
1) AIDA will not apply to public sector uses of AI, despite the widespread use of AI and automated systems across government. This runs counter to the expressed concerns of public sector workers. Public sector workers and civil society organizations have called for AIDA to apply to government departments, agencies and Crown corporations.
2) AIDA was drafted without adequate consultation, and there has been no meaningful engagement with the public.
Given these limitations, AIDA is already out of step with the needs of 91原创s. 91原创 legislation also falls short of the regulatory approaches taken by other nations.
Examples include the European Union’s AI Act and the White House’s executive order on artificial intelligence, which apply to AI uses by government institutions.
Canada remains behind the curve. The prime minister’s $2.4-billion investment will not address the problems and challenges of regulating AI. AIDA should be split from the rest of Bill C-27 and sent back for the public consultations and redrafting it so clearly requires.
Joanna Redden receives funding from Social Sciences and Humanities Research Council of Canada and the Natural Sciences and Engineering Research Council of Canada.
Fenwick McKelvey receives funding from Social Sciences and Humanities Research Council of Canada.