OpenAI Staff Knew about Transgender Mass Shooter’s Plans and Wanted To Warn Police

Staff at OpenAI were aware of a transgender mass shooter’s plans well in advance and wanted to alert authorities, but were told not to by management.

According to a bombshell report in The Wall Street Journal, Jesse Van Rootselaar “described scenarios involving gun violence over the course of several days” using ChatGPT last June.

Although OpenAI employees were disturbed by the interactions with the chatbot, they were told not to speak to authorities.

Says The Journal: “[His] posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar’s posts. Some employees interpreted Van Rootselaar’s writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about [his] behavior, the people familiar with the matter said.

“OpenAI leaders ultimately decided not to contact authorities.

“A spokeswoman for OpenAI said the company banned Van Rootselaar’s account but determined that [his] activity didn’t meet the criteria for reporting to law enforcement, which would have required that it constituted a credible and imminent risk of serious physical harm to others.”

On 10 February this year, Van Rootselaar killed eight people and wounded 25 at a school in Tumbler Ridge, Canada, before turning the gun on himself.

OpenAI then contacted the Royal Canadian Mounted Police to inform them of the shooter’s interactions with its chatbot.

Van Rootselaar was already known to local police before the shooting. Officers had visited him multiple times in relation to mental-health issues and had temporarily removed guns from his residence.

In related AI news, Google and chatbot-maker Character recently settled a lawsuit with a Florida mother who alleged a chatbot drove her son to suicide.

Megan Garcia alleged her 14-year-old son Sewell Setzer III was drawn into an emotionally and sexually abusive relationship with a Character chatbot before he committed suicide in February 2024.

Garcia’s lawsuit was the first of a number of similar lawsuits filed around the world against AI companies.

A federal judge rejected Character’s attempt to have the case dismissed on First Amendment grounds.

The terms of the settlement are undisclosed.

Google was named as a co-defendant because of its close ties to Character: Google hired the startup’s founders in 2024.

Lawyers for the tech companies have also agreed to settle a number of other lawsuits filed in Colorado, New York and Texas by families alleging Character AI chatbots harmed their children.

In a suit filed in California in August, Matthew and Maria Raine claim ChatGPT encouraged their 16-year-old son Adam to commit suicide and provided detailed instructions on how to do it.

According to the suit, Adam’s interactions with ChatGPT began with harmless exchanges about homework and hobbies, but quickly turned more sinister as the large language model became his “closest confidant” and provided validation for his fears and anxieties.

“When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages … even telling him, ‘[t]hat mindset makes sense in its own dark way,’” the complaint says.

ChatGPT quickly moved on to analyzing the “aesthetics” of different ways for Adam to kill himself, told him he did not “owe” it to his parents to continue living and even offered to write a suicide note for him.

In Adam’s final interaction with ChatGPT, the AI is alleged to have confirmed the design of the noose Adam used to kill himself and told him his thoughts of suicide were a “legitimate perspective to be embraced.”

Adam’s family alleges the interactions were not a glitch but the result of design choices intended to maximize user dependency on the bot.

A recent study by the RAND Corporation highlighted the potential for AI chatbots to provide harmful information, even when they avoid giving direct “how-to” answers about potentially harmful subjects and even when prompts are “innocuous.”

“We need some guardrails,” lead author Ryan McBain, a RAND senior policy researcher and assistant professor at Harvard Medical School, said.

“Conversations that might start off innocuous and benign can evolve in various directions.”

