Disappointed in the BBC

They’re usually pretty good about researching before they publish, but this article leaves a lot to be desired. It provides zero evidence that the people engaged in these activities actually have the attractions they claim, and its only evidence that this content is harmful is a quote from some law enforcement officer. No actual research in sight.

Yeah, there are concerns about whether realistic fictional content could make it harder to detect real CSAM, but treating the two as equivalent is absurd. The language in this article makes them sound equally bad.

3 Likes

Reddit had some interesting thoughts

1 Like

I was just reading the UK reddit angle on this one…

Don’t expect any rationality from a country that believes banning anime drawings is a good thing, though…

Edits:

Reading that stupid BBC article…

He warned that a paedophile could “move along that scale of offending from thought, to synthetic, to actually the abuse of a live child”.

Yeah, they’re going to go from a world of endless possibilities to… something that isn’t.

4 Likes

Lol that reminds me, I particularly disliked how the article tried to shame Japan for not banning fiction. God forbid a country focus on protecting real kids.

2 Likes

I knew it, I knew it.

Didn’t even take a few days to pull the “BUT JAPAN!” card.

2 Likes

All the while parroting Chicom propaganda, like calling Falun Gong a “cult”, which I assume is the definition with the negative connotation. Otherwise, sect would be a better word choice.

Ah, so guilty until proven innocent.

2 Likes

@Gilian posted about this on Twitter and I, once again, shared my thoughts on the matter (as I eagerly await her response to my email about community research).

But anyway, the only thing of value to glean from the article, aside from its reliance on unproven and emotionally charged assertions of escalation (i.e. that fantasy increases risk and leads to contact offending), is the way they put Pixiv on blast for what I’ve only been able to describe as a failure to moderate all of their features equally.

I didn’t wanna bring this up, since talking about this stuff sorta has a Streisand effect that exacerbates it, but certain parts of Pixiv have been less moderated overall, which has seen the entire feature fall prey to spam links for illegal CSAM websites.

The Japanese population wants nothing to do with CSAM. They hate it, in fact, and avoid it, and this type of inaction hasn’t gone unnoticed by Pixiv’s JP userbase.

They’re mad that their groups were taken from them by foreign criminals whose nefarious practices act as a self-fulfilling prophecy. Compare this against the actions taken by those on Allthefallen and other communities to proactively address and report this stuff, and it becomes clear what the problem is, and it’s not the fictional/fantasy content.

No, not at all.

Pixiv makes so much money from premium subscriptions and other services. They owe it not only to their community but also to the fiction/fantasy community to invest properly in moderation across the board, not just in posts for works/illustrations/content and the comments there.

@elliot please feel free to edit down this message if you feel the information contained herein constitutes a risk.

This is sobering to read about, after I’d gone out of my way to contact Japanese law enforcement and even solicit the services of an interpreter just so I could make sure they understood what’s been going on.

It almost feels conspiratorial with how flagrant the abuse is, like it’s an operation by government actors, but that’s not something I have evidence to even suggest.

2 Likes

That’s not really new, though. The #ロリ tag (and related tags) were getting hit with CSAM photo spam for a while last year.

Couldn’t do much other than report the damn things.

Personally I was more taken aback by the fact that pixiv doesn’t have any kind of PhotoDNA system in place.
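
For anyone unfamiliar: PhotoDNA itself is proprietary, but the general family it belongs to (perceptual hashing of known images) can be sketched in a few lines. The toy average-hash below is an illustrative assumption of how such matching works, not PhotoDNA’s actual algorithm; the function names and distance threshold are made up for the example.

```python
# Toy sketch of perceptual hashing, the general technique behind
# systems like PhotoDNA. Known images are reduced to compact
# fingerprints, and new uploads are flagged when their fingerprint
# is within a small Hamming distance of a known one, so minor edits
# (recompression, small tweaks) don't defeat the match.

def average_hash(pixels):
    """Hash an 8x8 grayscale grid: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches_known(upload, known_hashes, threshold=5):
    """Flag an upload whose hash is near any known-bad hash."""
    h = average_hash(upload)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

A real deployment hashes at upload time against a vetted database of known material and uses a far more robust hash that tolerates rotation, cropping, and color shifts, which is exactly what the spammers described below were trying to defeat with filters.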

Edit: Actually, looking under one of the ロリ-related tags, there seem to be tags for “Tor” and “CP”.

Yeah, but they eventually relented and actually took action against it all after both US and JP users found it so abhorrent that they reported it to their local law enforcement.

The spammers were taking photos of known CSAM and running them through filters to try to pass them off as CGI or simulations; to anyone with a brain, it was very clear what it was.

They do. When I inquired about it via email, they didn’t want to specify exactly what they had, but did confirm that they have adopted something.

Those tags are empty of relevant criminality, as I would imagine. To their credit, they do remove spam in comments when reported.

2 Likes

Might be a new thing then. One person was posting CSAM under the guise of it being “AI generated.” (This was before StableDiffusion was a thing.)

I actually reported this person to the FBI in the end because I’d reported it to pixiv about eight times and they did absolutely nothing about it.

1 Like

Everyone could tell what it was.

It was mind-boggling and confounding how Pixiv’s own mods just ignored it for so long. I’m guessing they reviewed it, didn’t look at it long enough, assumed it was lawful, and marked it off so any later reports auto-resolve.

I hope whoever was responsible for that was fired. Such gross incompetence is borderline complicity.

I did the same.

1 Like

This is one of the problems with people equating fictional content with abuse and claiming that people who use fictional content are also interested in abusive content: it gives criminals the false idea that spaces for fictional content will be supportive of their behavior, and it encourages them to be more blatant about sharing it.

3 Likes

Literally a self-fulfilling prophecy, almost.

2 Likes

Getting a bit into the behind-the-scenes stuff here, but after the Shoe0nhead video, Prostasia started getting people attempting to solicit illegal content through our contact form. The Pixiv thing is the same, imo, just on a public social media platform rather than a private contact form. If you don’t want abusers in a certain space, don’t tell them they’ll be supported in that space.

I believe something similar happened with MSC getting more pro-c and wannabe-abuser applicants after the video. Thankfully they have a pretty robust screening process in place.

That is interesting to know, actually. These communities never seemed to rely on their obscurity as a security measure, but maybe it’s the presentation.

Almost no one attempts to solicit CSAM on Allthefallen anymore, because they know they’ll be reported. This circles right back around to perceived tolerance. Pixiv, by not having the resources to counteract this influx of criminal users, reinforces that perception.

It’s not about normalization at all, but opportunism.

3 Likes

I am pleasantly shocked at the amount of common sense in this Reddit thread.

3 Likes

I guess it depends on the subreddit.

50% of the subreddits will be sane, the other 50% of the subreddits will be kneejerk reactions.

1 Like

It was never about protecting people. It was always about making that which the hard lot rabble society labels “evil” suffer. Read juiceboy’s comment here:

“I’m not asking for anybody to make me feel safe, only that we stop trying to make predators and those adjacent to them feel safe.”
Like I said, it’s no longer about “greatest good to the greatest number”. It’s about “hero” fantasies about “slaying the monster”. And that’s why I always say that I would rather make all the people in the world feel unsafe than let anyone in the world make me feel unsafe. And that’s why I love stories, like Overlord, where the “monsters” win without remorse. “Society” is about as respectable as “God”, which is to say none, as both are just spooks.

1 Like

At least in the US, offering images that are products of abuse is illegal. That should be enough to deter one from offering such content.

The rest of this is a continuation of the same old thing.

1 Like

Some sexual violence prevention researchers commented on the article, and that seems to be getting a lot of positive attention.

1 Like