Feds set rules on use of AI in government services
By Jordan Press
The department hopes to have people interact directly with bots instead of humans, including in online chats.
OTTAWA — The department that oversees the federal social safety net has quietly started testing artificial-intelligence systems that could one day make it faster and easier to get answers about benefits and services.
A small team inside Employment and Social Development Canada is experimenting with ways to simplify navigating one of the largest service organizations in the country, handling public pensions, employment insurance, family benefits and disability supports. The department has a mix of offices, call centres and correspondence centres.
It’s hoping to have people directly interact with bots instead of humans, including in online chats for people seeking information about government programs.
An early draft of the department’s artificial-intelligence strategy, obtained by The Canadian Press under access-to-information laws, suggests the risks of chatbots, in particular, include “providing incorrect information to Canadians,” “producing incoherent content,” “or the reproduction of undesirable behaviour.”
The worst-case scenario? Internally, a chatbot could tell an employee to “explore a ‘catastrophic’ action” (think a Terminator-like order to kill all humans), while externally a bot could start replicating hate speech. A Microsoft-created Twitter bot did just that three years ago after interacting with enough users and “learning” to mimic what they wrote. The bot’s decline was driven by people deliberately feeding it offensive content, and it took only a day.
The strategy notes that ESDC needs to manage legal risks, ethical questions and logistical issues, not to mention “public perception”—and rapid technological advances that mean “the answers to these risks are moving targets.”
“We have a higher standard. We must meet a higher standard. Like it or not, I can decide not to go buy stuff (at one store) and go (to another). That’s not an option with government,” Sandy Kyriakatos, the department’s chief data officer, said in a recent interview.
On Monday, the government set the ground rules for how departments and agencies can use artificial intelligence to make decisions about benefits and services, or find new uses for the technology in long-term projects like preserving and teaching Indigenous languages.
Speaking at a conference of government workers Monday morning, hours before she resigned from cabinet, Treasury Board President Jane Philpott said anyone who thinks the era of artificial intelligence is just on the horizon is mistaken, and that Canadians are ready to get answers from machines about government services.
“When people go online and do their Christmas shopping, AI is influencing how they do their Christmas shopping,” she said after her morning talk. “We in government want to make sure we take advantage of the same kinds of tools to provide good services for people, but be extremely open in how it’s done.”
In setting new rules, Philpott said departments will have to be able to explain why a decision was made on a particular file, beyond saying it was up to a computer. Human beings won’t be eliminated from the process entirely, Philpott said, and will make sure decisions spewed out by machines are fair, consistent and just.
At ESDC, Kyriakatos’s team has started small by trying to create services used by government employees themselves, so they can be tested carefully before going to the public. The first test targets new hires with a chatbot that offers help with questions about working at the sprawling department.
Her team also considered whether virtual-assistant bots could automate some tasks, “freeing up time for higher value work,” and is continuing to learn what works, what doesn’t and how services can be improved.
There has also been outreach to, as the strategy says, demystify AI for the department, considering the anxiety some workers feel about the potential for being replaced by a machine. Kyriakatos said department workers who deal with her team have become more data-literate and “aware of what AI can and can’t do.”
“There is no plan to automate decision-making, there is no plan to put AI into people’s work,” she said.
“There is this idea, that Hollywood idea of AI, that you’re going to build the thing and suddenly it’s going to get up and do something else. That’s not how AI works. It’s very task-specific. You have to train it to do something very particular, and when people see how it can work, they get very excited about it.”