Florida attorney general launches criminal investigation into ChatGPT maker OpenAI after deadly FSU shooting
By Hadas Gold, CNN
(CNN) — Florida Attorney General James Uthmeier opened an investigation into OpenAI over whether the company “bears criminal responsibility” for a shooting at Florida State University last year.
The attorney general’s office said it is investigating whether OpenAI’s ChatGPT helped the suspect, Phoenix Ikner, carry out the crime.
“If that bot were a person, they would be charged with a principal in first-degree murder,” Uthmeier said at a press conference on Tuesday. “ChatGPT offered significant advice to the shooter before he committed such heinous crimes.”
Ikner is accused of killing two people and injuring six others on FSU’s campus on April 17, 2025. He has pleaded not guilty, and his trial is set to begin in October.
Uthmeier said that Ikner submitted multiple queries to ChatGPT prior to the shooting, and that the chatbot “advised” the shooter on weapons and ammunition, “what time of day would be appropriate for the shooting to interact with more people, and where on campus would be the place to encounter a higher population.”
While there have been several lawsuits against AI companies, a criminal investigation is extremely rare.
Uthmeier said OpenAI has been subpoenaed for information about “policies and internal training materials regarding user threats of harm to others” and self-harm, as well as policies for reporting possible crimes.
“We’re going to look at who knew what, designed what or should have known what and if it is clear that individuals knew that this type of dangerous behavior might take place,” Uthmeier said.
An OpenAI spokesperson said in a statement to CNN that the shooting “was a tragedy, but ChatGPT is not responsible for this terrible crime.” OpenAI “proactively” shared the account believed to be linked to Ikner with law enforcement after the shooting, the spokesperson added.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” the spokesperson said.
This is not the first time ChatGPT has been accused of helping a suspect plan a mass shooting. After a shooting in British Columbia, Canada, this year, OpenAI said it had “taken steps to strengthen our safeguards,” including changing when the company chooses to alert law enforcement about potentially violent activities.
“We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise,” the spokesperson told CNN.
The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
