
Kids want social media apps to do more to prevent deepfake nude photos

It’s problematic enough that some kids are sending nude photos of themselves to friends and even strangers online. But artificial intelligence has taken the problem to a whole new level.

About 1 in 10 children say their friends or peers have used generative AI to create nude photos of other children, according to a new report from Thorn. The nonprofit, which fights child sexual abuse, surveyed more than 1,000 children ages 9 to 17 in late 2023 for its annual survey.

Thorn found that 12% of 9- to 12-year-olds knew of friends or classmates who had used AI to create nude photos of their peers, with 8% preferring not to answer the question. Of the 13- to 17-year-olds surveyed, 10% said they knew of peers who had used AI to create nude photos of other children, with 11% preferring not to answer. This was the first Thorn survey to ask children about the use of generative AI to create deepfake nude photos.

“While the motivation behind these events is likely to be driven by adolescent behavior rather than intent to sexually abuse, the harm suffered by victims is real and should not be minimized in an attempt to deflect responsibility,” the Thorn report said.

Sexting culture is tough enough to tackle without adding AI to the mix. Thorn found that 25% of minors consider it “normal” to share nude photos of themselves, a slight decrease from surveys going back to 2019, and 13% of respondents said they have done so at some point, also down slightly from 2022.

The nonprofit says sharing nude photos can lead to sextortion, in which malicious actors use the images to blackmail or exploit the sender. Respondents who considered sharing nude photos but ultimately decided against it cited fears of leaks or exploitation.

This year, for the first time, Thorn asked young people whether they had been paid for nude photos. Thirteen percent of the children surveyed said they knew a friend who had been paid for their nude photos, while 7% declined to answer.

Kids want social media companies to help them

Generative AI makes it possible to “create highly realistic images of abuse from benign sources such as school photos and social media posts,” Thorn’s report said. As a result, even victims whose original abuse was reported to authorities can easily be re-victimized with new, altered material. Actress Jenna Ortega, for example, recently said she was sent AI-generated nude photos of herself as a child on X, formerly Twitter. In response, she deleted her account entirely.

Her reaction is not much different from how most kids respond in similar situations, Thorn reported.

The nonprofit found that children, a third of whom have had some form of sexual interaction online, “consistently prefer online safety tools over offline support networks such as family or friends.”

Children often simply block malicious users instead of reporting them to the platform or telling an adult.

Thorn found that children want to learn “how to better use online safety tools to defend against such threats,” tools they regard as a normal, unobtrusive part of life in the age of social media.

“Children are showing us that these are the tools of choice in their online safety kit and they expect more from platforms in how they use them. There is a clear opportunity to better support young people through these mechanisms,” Thorn’s analysis found.

In addition to wanting information and guides on how to block and report someone, over a third of respondents said they want apps to check in with users about how safe they feel. A similar share said they want platforms to offer support or advice after a bad experience.