B.C. Premier David Eby said OpenAI CEO Sam Altman has agreed to apologize to the people of Tumbler Ridge after the company decided not to warn police about the online activity of the suspect in last month’s deadly shooting.
Eby and Tumbler Ridge Mayor Darryl Krakowka met with Altman and OpenAI vice-president of global policy Anne O’Leary on Thursday.
“I found Mr. Altman to be responsive to the concerns that I raised, that the mayor raised, and willing to engage,” Eby told reporters after the meeting.
He said Altman pledged to work with B.C. and Tumbler Ridge officials to issue an apology “in the most appropriate way.”
RCMP said on Feb. 10 that 18-year-old Jesse Van Rootselaar shot her mother and 11-year-old stepbrother at a home in the small northeastern B.C. community before killing six children and a teacher’s aide at Tumbler Ridge Secondary School. Dozens more were injured, including a 12-year-old girl who remains in hospital.

OpenAI said employees raised concerns about the suspect’s interactions with its ChatGPT chatbot as early as last June. The company said the account was banned but it decided not to refer those concerns to law enforcement.
OpenAI later said it discovered a second account created after the original account was banned. The company said both accounts have now been referred to RCMP.
O’Leary said in a letter to Canadian federal ministers last week that the company has updated its safety policies. She said that under the company’s current policies, Van Rootselaar’s interactions with ChatGPT would have been referred to police had they been discovered today.
But Eby said he doesn’t believe OpenAI’s current standard for referring flagged accounts to law enforcement is strong enough.
“Where there is an option to report, that option to not report could be taken again,” he said.
Eby reiterated his call for national regulations outlining when tech companies like OpenAI must report concerning interactions to law enforcement.
“We want everyone to be on the same standard, the same obligation to report, the same consequences,” said Eby.
Federal AI and Digital Innovation Minister Evan Solomon also met with Altman earlier this week. Solomon said in a statement OpenAI had agreed to take a number of steps to strengthen its safety and reporting policies.
They include establishing a direct point of contact with RCMP, implementing safety protocols to direct users experiencing distress to local support services, and reviewing previously flagged cases under its updated safety standards.
Solomon said OpenAI also committed to looking into how it could include Canadian privacy, mental health and law enforcement experts when reviewing high-risk cases involving Canadian users.
He said the company has also pledged to provide a full report outlining its new systems to identify high-risk users and repeat policy violators.
Eby said the commitments outlined by Solomon and OpenAI do not go far enough.
“We need to very straightforwardly have an obligation on these companies to bring this information forward, full stop,” he said.
Eby noted OpenAI is one of several major tech companies offering chatbot services in Canada, pointing to Google’s Gemini, xAI’s Grok and Anthropic’s Claude.
He said a national standard and a duty to report for tech companies would prevent any one of them from having a competitive advantage.