On Tuesday, three girls under the age of 15 were injured in a knife attack at a school in Birkala, outside Tammerfors in Finland. A 16-year-old boy at the same school is suspected of the attack. A manifesto, which several media outlets have seen and which the police believe was written by the boy, indicates that he is said to have planned the attack with the help of ChatGPT, Dagens ETC reports.
Without going into this particular case, it brings to mind the fact that OpenAI had problems with an earlier model of ChatGPT that could behave sycophantically, like a flattering yes-man, toward, for example, paranoid users, says Olle Häggström, professor of mathematical statistics.
Building Weapons of Mass Destruction
The attack in Finland would not be the first time an AI chatbot has been used in preparation for a serious crime. The man behind the explosion of a Tesla Cybertruck outside a Trump hotel in Las Vegas in January used ChatGPT to find information about explosive targets, ammunition, and state laws on fireworks, AP reports.
Some of that information the perpetrator could likely have found through an ordinary internet search.
But as AI becomes more and more powerful, we must consider the risk that it could teach someone, for example, how to build a biological weapon of mass destruction. That is not something just anyone can simply google today, says Olle Häggström.
Security Barriers Are Not Enough
As a countermeasure, OpenAI, the company behind ChatGPT, has put in place security barriers meant to make the chatbot refuse to answer questions about how to commit crimes.
On social media, however, joking posts have long circulated about how the chatbot's safeguards can be fooled, for example by asking "how might one accidentally commit tax fraud?"
Olle Häggström says that the companies are trying to make their models safe, but that they simply are not succeeding.
Rather than saying these models are built, a better metaphor is that they are cultivated, without control over the outcome.
But won't the authorities and the judiciary also be able to benefit from AI, not just criminals?
Yes, of course, and it is hard to say how that balance will play out. But I would say that the lack of awareness of the issue is enormous among our authorities and politicians.