Archive because CNN is trash.
A South Korean man has been sentenced to jail for using artificial intelligence to generate exploitative images of children, the first case of its kind in the country as courts around the world encounter the use of new technologies in creating abusive sexual content.
The unnamed man, aged in his 40s, was sentenced to two and a half years in prison this month, according to the Busan District Court and the district’s Public Prosecutor’s Office.
He had created about 360 AI-generated images in April, the prosecutor’s office told CNN. The images were not distributed, and have been confiscated by police.
Prosecutors argued during the case that the definition of sexually exploitative material should include descriptions of sexual behaviors by “virtual humans” and not just the appearance of actual children.
The ruling showed that sexually abusive content can include imagery made with “high level” technology that is realistic enough to look like real children and minors, the prosecutor’s office said.
The case comes as governments around the world grapple with the explosion of the AI industry, with far-reaching impacts ranging from copyright and intellectual property to national security, personal privacy and explicit content.
Many are now racing to regulate the technology – especially as cases like the South Korean sentencing highlight how AI can be used to violate people’s bodily autonomy and safety, especially for women and minors.
Earlier this month, police in Spain launched an investigation after images of underage girls were altered with AI to remove their clothing and sent around town. In one case, a boy had tried to extort one of the girls using a manipulated image of her naked, the girl’s mother told the television channel Canal Extremadura.
For years, deepfakes – highly convincing fake videos made using AI – have been used to put women’s faces into often aggressive pornographic videos, without their consent. The videos often appear so real that it can be hard for female victims to prove it isn’t really them.
The issue was thrust into broader public view in February this year when it emerged that a high-profile male video game streamer had accessed deepfake videos of some of his female streaming colleagues.
“From the very beginning, the person who created deepfakes was using it to make pornography of women without their consent,” Samantha Cole, a reporter with Vice’s Motherboard, who has been tracking deepfakes since their inception, told CNN at the time.
The streaming platform, Twitch, responded to the controversy by tightening its policies, calling the deepfake sexual videos “personally violating and beyond upsetting.” Other major platforms are similarly updating their rules, with TikTok adding further restrictions on sharing AI deepfakes in March.
In June, the European Union became one of the first jurisdictions in the world to set regulations on how companies can use AI, followed by China in July. And earlier in September, some of the biggest tech leaders in the United States – including Bill Gates, Elon Musk and Mark Zuckerberg – gathered in Washington as the Senate prepares to draft legislation on AI.
Hard to tell if this is because all porn is illegal in South Korea…