TORONTO — An artificial intelligence researcher at Google says fixing fairness in the technology "isn't a simple thing," but the world has an opportunity to get it right as the software explodes in popularity.
"It's as though we havebeen given a second chance on how we use this technology to dismantle some of the biases we see in society," Komal Singh, a senior product manager at the tech goliath, said in a Tuesday media briefing in Toronto.
Her optimism comes as the tech sector – and nearly every other industry – has spent the year abuzz about AI’s promise and perils.
Some see the technology as a game changer bound to disrupt everyday life and bring expediency, efficiency and solutions to some of the globe’s biggest challenges.
Others, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, warn advances in the technology are moving too fast and guardrails are needed before wide-scale deployments.
Geoffrey Hinton, the British-Canadian computer scientist widely considered the ‘godfather of AI’, is so concerned about the technology that he left Google so he could more freely discuss its dangers, which he has said include bias and discrimination, joblessness, echo chambers, fake news, battle robots and existential risk.
Singh acknowledged AI is not without risks.
She's even seen them firsthand.
For example, she said that if you ask an AI model to generate an image of a nurse, it will often return a woman, while a request for an image of a CEO typically brings up a white man and a query for a software engineer delivers racialized men.
"In a nutshell, I think the takeaway is a lot needs to happen to fix the fairness problem," Singh said.
At Google, some of that work has come from studying skin tones because dark and medium tones often can't be deciphered by computer vision systems, which allow computers to “see and understand” images of people and environments.
"This is problematic because a lot of the social disparities around colourism then get populated into social technical systems, which then end up reinforcing them," Singh said.
To break the cycle, Google partnered with Harvard University professor Dr. Ellis Monk to design an open-source skin tone scale that can better detect darker tones and be used by AI models to reduce biases.
The scale has already been deployed in Google's camera technology on Pixel phones and in its search tools, helping to diversify results so that keywords bring up a range of skin tones and hair textures.
The company also has a project called Media Understanding for Social Exploration, which studied 12 years of American television shows to analyze representation across skin tones.
Their findings were "stark," Singh said.
Over those 12 years, screen time for people with dark and medium skin tones increased only slightly, while screen time for those with lighter skin declined slightly.
Such work comes at a pivotal time for AI. Top tech companies are racing to develop and launch the most advanced technology, competing with one another for billions of dollars pouring into the industry.
In Canada, Google estimates generative AI could add $210 billion to the economy and save the average worker more than 100 hours a year.
Google leaders accompanying Singh said they're already using AI to improve breast cancer detection, sequence genomes and develop systems that medical clinicians can query to generate health guidance.
E-commerce giant Shopify Inc. is using the technology to reduce search abandonment, while Canadian National Railway Co. is building a digital supply chain platform with automated shipment tracking, Google workers said.
At Google, AI underpins the company’s search engine, maps and cloud storage, and is the basis for Bard, its rival to ChatGPT, which has yet to launch in Canada.
Sam Sebastian, the head of Google Cloud’s Canadian operations, promised it would make its Canadian debut "very soon," but offered no specific timeline.
Instead, he discussed his overall outlook on AI, saying it was a technological shift unlike anything he's seen in his 25 years working in the industry.
"The only thing close to it is the advent of the mobile phone," he said.
The transformative nature means anyone in the space needs to balance AI's potential with its risks, he said.
For its part, Google has committed to abide by a series of principles as it delves deeper into AI.
It has so far promised not to design or deploy AI technologies that cause or are likely to cause harm, including weapons or systems used to inflict injuries. It also says it will not work on AI-based “surveillance violating internationally accepted norms” or systems that contravene laws and human rights.
This report by The Canadian Press was first published Sept. 26, 2023.
Companies in this story: (TSX:SHOP, TSX:CNR)
Tara Deschamps, The Canadian Press