Several AI-generated or AI-manipulated images and videos have circulated ahead of the US presidential election and received significant attention.
In one of the most widely shared videos, an AI-generated voice is used to make it seem as if presidential candidate Kamala Harris is speaking. In the clip, the fake Harris thanks current President Joe Biden for ruining the country. The voice goes on to say that she herself cannot be criticized because she is "the ultimate diversity recruit". The clip was shared, among others, by tech entrepreneur and Trump supporter Elon Musk on his X account earlier this year.
California's governor, Democrat Gavin Newsom, was among the critics when the clip spread. He has now signed several state AI laws, one of which prohibits the deliberate spreading of fake content linked to state or national elections.
"You can no longer deliberately share an ad or other communication related to an election that contains deliberately misleading material – including deepfakes", the governor wrote on X.
Criticism of the law
The criticism was immediate. The person behind the Harris video has already filed a lawsuit, arguing that the law constitutes "an infringement on freedom of speech".
A legal expert who spoke to the AP news agency questions whether the law, even if it survives legal scrutiny, will be able to get fake content removed quickly enough:
"In the best of worlds, the content would be taken down the second it's uploaded. The faster you can take something down, the fewer people see it, and the fewer share it further," says Ilana Beller of the civil rights organization Public Citizen.
Mocked by Musk
Elon Musk has also commented on the law. In a post on his X platform, he writes:
"California's governor has just made this parody video illegal in violation of the constitution. It would be a shame if it went viral."
The post, in which he re-shares the fake Harris video, had been seen by over 50 million users before the weekend.
The other AI laws Newsom has signed require AI-generated content to be clearly labeled, ban so-called deepfake nude images, and regulate how AI technology can be used to impersonate actors. When the laws take effect varies, from soon to next year, if they come into effect at all.