
Opinion: Let's base AI debates on reality, not extreme fears about the future

The letter assumes AI is becoming, or could become, “powerful digital minds” — a longtermist interpretation of AI’s development that ignores important debates about AI today in lieu of future concerns.
A group of prominent computer scientists and other tech industry notables are calling for a six-month pause on artificial intelligence technology.

A recent open letter by computer scientists and tech industry leaders has called for a six-month pause on the development of powerful artificial intelligence systems.

The letter, published by the non-profit Future of Life Institute, asks all AI labs to stop training AI systems more powerful than GPT-4, the model behind ChatGPT. The letter argues that AI has been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

The letter assumes AI is becoming, or could become, “powerful digital minds” — a longtermist interpretation of AI’s development that ignores important debates about AI today in lieu of future concerns.

Longtermism and AI

Longtermism is the belief that safeguarding humanity’s long-term future, above all against existential risks such as an out-of-control superintelligence, should be a central moral priority.

Worries about superintelligent AI are usually the stuff of science fiction, fantasies that can lead to dark prophecies about the technology’s future. But these worries translate into major investment, not caution: most major technology firms continue to pour resources into ever more powerful AI systems.

ChatGPT is obviously not a path to superintelligence. The open letter sees AI language technology like ChatGPT as a cognitive breakthrough — something that allows an AI to compete with humans at general tasks. But that’s only one opinion.

Many others see ChatGPT, its underlying GPT-4 model and other large language models as statistical systems that merely repeat what they learn online so that they appear intelligent to humans.

Superintelligence’s blind spots

Longtermism has direct policy implications that prioritize superintelligence over more pressing matters. Some of its proponents even consider regulation to stop superintelligence more urgent than addressing the climate emergency.

AI policy implications are immediate, not far-off, matters. Because GPT-4 is trained on the entire internet and has expressly commercial ends, it raises pressing questions about the fair use of the material it was trained on.

Who owns AI-generated works is also unsettled, since machines cannot hold copyright.

And when it comes to privacy, ChatGPT’s approach is hard to distinguish from that of other controversial AI applications: these models are trained on massive amounts of personal information collected from the open internet.

These immediate risks are left unmentioned in the open letter, which swings between wild philosophy and technical solutions, ignoring the issues that are right in front of us.

Drowning out pragmatism

The letter follows an old dynamic that my co-author and I have identified in our research: a tendency to view AI as either an existential risk or something mundane and technical.

The tension between these two extremes is on display in the open letter. The letter begins by claiming “advanced AI could represent a profound change in the history of life on Earth” before calling for “robust public funding for technical AI safety research.” The latter suggests the social harms of AI are merely technical projects to be solved.

The focus on these two extremes crowds out important voices trying to pragmatically discuss the immediate risks of AI mentioned above.

The attention being given to the open letter is especially problematic in Canada because two other letters have not received the same amount of attention. These letters call for reforms and a more robust approach to AI governance to protect those being affected by the technology.

An unneeded rush toward AI legislation

Government responses to the open letter have stressed that Canada does have AI legislation in progress: the proposed Artificial Intelligence and Data Act (AIDA). The long-term risks of AI are now being used to rush legislation like AIDA.

AIDA is an important step toward a proper AI governance regime, but it needs further consultation before being implemented. It cannot be rushed in response to perceived long-term fears.

The letter’s calls to rush AI legislation might end up advantaging the same few firms driving AI research today. Without time to consult, enhance public literacy and listen to those being affected by AI, AIDA risks passing on AI’s accountability and auditing to institutions already well positioned to benefit from the technology, creating a market for a new AI auditing industry.

Humanity’s fate might not be on the line, but AI’s good governance certainly is.

The Conversation

Fenwick McKelvey receives funding from the Social Sciences and Humanities Research Council and Les Fonds de recherche du Québec - Société et Culture (FRQSC). He is co-director of Concordia University's Applied AI Institute.