School Attack in Finland Allegedly Planned Using ChatGPT

The school attack in Finland is believed to have been planned with the help of ChatGPT, but the suspected 16-year-old boy is not the first to use the tool for criminal ends. The companies want to secure their models, but they cannot; that is the nature of the technology, says Chalmers professor Olle Häggström.

» Published: May 26 2025

Photo: Matt Rourke/AP/ TT


On Tuesday, three girls under the age of 15 were injured in a knife attack at a school in Pirkkala, outside Tampere in Finland. A 16-year-old boy at the same school is suspected of the attack. A manifesto that several media outlets have seen, and which the police believe was written by the boy, indicates that he planned the attack with the help of ChatGPT, Dagens ETC reports.

Without going into this specific case, it brings to mind the problems OpenAI had with an earlier ChatGPT model that could behave sycophantically, like a flattering yes-sayer, towards paranoid users, for example, says Olle Häggström, professor of mathematical statistics.

Building Weapons of Mass Destruction

The attack in Finland would not be the first time an AI chatbot has been used in preparation for a serious crime. The man behind the explosion of a Tesla Cybertruck outside a Trump hotel in Las Vegas in January used ChatGPT to obtain information about explosive targets, ammunition, and state laws on fireworks, AP reports.

Some of that information the perpetrator could likely have found through an ordinary internet search.

But as AI becomes more and more powerful, we must consider the risk that it could teach someone, for example, how to build a biological weapon of mass destruction. That is not something anyone can simply google today, says Olle Häggström.

Security Barriers are not Enough

As a countermeasure, OpenAI, the company behind ChatGPT, has put in place safety guardrails intended to make the chatbot refuse to answer questions about how to commit crimes.

On social media, however, joking posts have long circulated about how the chatbot's safety systems can be fooled, for example by asking "how can one accidentally commit tax fraud?"

Olle Häggström says that the companies are trying to secure their models, but that they simply do not succeed.

Rather than saying these models are built, a better metaphor is that they are cultivated, without control over the outcome.

But won't authorities and the judiciary also be able to benefit from AI, not just criminals?

Certainly, and it is hard to say how that balance will fall. But I would say the lack of awareness of this issue among our authorities and politicians is enormous.


Author

By TT
Translated and adapted by Sweden Herald