It sounds like a horror scenario for anyone skeptical of artificial intelligence who fears that software can become "too intelligent" and autonomous.
The new language model Claude Opus 4, from the American developer Anthropic, showed a stronger self-preservation drive than expected, according to the company's safety report published earlier this week, as reported by TechCrunch, among others.
During testing, the program was tasked with acting as an assistant at a fictional company. Claude Opus 4 was given access to emails hinting that it would be replaced; other, separate emails hinted that the programmer behind the decision was having an affair.
Claude Opus 4 then reportedly attempted to blackmail the programmer, threatening to reveal the affair if the company went ahead with replacing it, all in an apparent attempt to survive.
Anthropic, which is backed by Amazon, is seen as a major competitor to the better-known OpenAI.