In a UNICEF survey spanning eleven countries, at least 1.2 million children reported that, over the past year, images of them had been manipulated into sexual material using generative AI.
"That so many children are affected in just these eleven countries shows how big this problem is, and it will only grow if we do not quickly find solutions," says Daniel Kardefelt Winther, Head of Research for Children and Digital Media at UNICEF.
Even though the images are not real, they constitute severe abuse, he explains.
"They are extremely realistic, and it can be very hard to tell the difference. The shame, guilt and all the difficult feelings that come with sexual abuse are present here too. Emotionally, the consequences for the children can be severe."
Extortion
The images can also be used to commit further abuse, for example through blackmail, where someone threatens to spread the material.
"In countries where sex is even more taboo, this becomes very shameful. We have seen examples where children who have sought help have been ostracized and bullied," says Daniel Kardefelt Winther.
The problem has several dimensions, he explains. The perpetrators may be adults intent on sexually abusing children, but they can also be other children: a classmate who creates nude images to bully someone, or just for fun.
"But it is definitely not a fun thing for the person being exposed. It is extremely serious and some children and young people may not always understand how serious this is."
Calls for a ban
Daniel Kardefelt Winther points out that there is likely no simple solution, but UNICEF has proposed three measures to combat the problem.
First, parents need to be aware of the technology and know how to support children who have been abused. The same applies to schools, which must understand what this kind of abuse entails.
"Then we need some form of legislation that either bans these kinds of apps or prohibits the use of AI to create abusive material," says Kardefelt Winther.
"And the tech industry needs to take far greater responsibility than it has so far. Those who develop AI apps or AI support for apps need to really think about how they can be misused."