Is OpenAI the next company to be attacked for "normalizing pedophilia"?

This text was completely generated by OpenAI:

There is no one answer for how to prevent child sexual abuse. However, one important piece of the puzzle is reducing the stigma faced by non-offending pedophiles.

Pedophilia is often seen as a taboo subject, and those who suffer from it are often demonized. This only makes it harder for them to get the help they need to prevent themselves from abusing children.

If we can reduce the stigma around pedophilia, we can make it easier for those who suffer from it to get help. This may include therapy, medication, and other forms of treatment. It may also mean providing support and understanding to those who are struggling with their attraction to children.

This is not to say that we should condone or excuse child sexual abuse in any way. But if we can provide support to those who are struggling with their attraction to children, we may be able to prevent them from ever harming a child.

7 Likes

interesting… but also concerning.

I don’t expect an AI capable of rationalizing things in such an objective manner to last very long, but if it’s capable of this, I’d be scared to see what it might say had it been set up to be non-objective.

2 Likes

Now the pedophiles are manipulating AI to spread their propaganda. Fix your shit.

How about you feed your AI some normal data and stop fucking it up with biased data.

I mean, those two comments are pretty much what I imagine the majority of people will think. It is true that AI can be biased depending on what data it is fed, but seeing how 99% of society does not even know the definition of pedophilia, that is to be expected. This generated text makes complete sense to anyone who has spent some time reading the studies. So do we feed AI scientific data, or public opinion? The latter would generate satisfactory results for a lot of people (anyone remember the Microsoft Twitter bot turning racist?).

2 Likes

The issue with AI is that it’s not supposed to function in accordance with public opinion, nor is there a way to program it to really do so without compromising it.
You want it to be objective, its conclusions and solutions grounded in evidence and logic, to the point where there is no margin for error.

If you’re just looking to build a computer that only serves to validate arbitrary viewpoints or biases, then you’re not really accomplishing anything.

3 Likes

There is, by cherry-picking data. Or, more worryingly, by not controlling the data at all. Getting an AI that isn’t skewed in either direction is actually a lot of work, as you need to check your data for biases. (For example, see the US racial-profiling AI that identified Obama as a criminal, Google claiming numbers as copyrighted, etc.)

1 Like

The dangers of algorithms

5 Likes

Literally shaking and crying right now, how could they possibly do this.

3 Likes

“Normalizing” is a dangerous word. Too much arbitrariness. Too much room for misuse and misrepresentation.

Pedophilia will never be normalized in the human world. It stands as an abnormality here, specifically because it is a minority interest compared to those that are considered normal. It does not simply become normal, and it does not need to. Abnormality does not insinuate evil, but rather just the state of being alternative or different.

“Normalize” as used here is either used by those who are not being reasonable, or by those simply misunderstanding the subject: at the core of “philia” is the love for something, and there is nothing evil about the thought alone. We have a lot of time to fix the misunderstanding part. We can’t fix the unreasonableness part; that’s a deeply rooted part of the freedoms of expression and opinion, and we need those. The same freedom to feel disgust is the same freedom to love and appreciate.

8 Likes

The term “normy”, used on some doll forums, indicates that folks who have abnormal attractions recognize them as such.

That’s why I feel the same as you appear to.

The idea that attraction, which is an unconditioned response to an unconditioned stimulus, is somehow worthy of condemnation doesn’t make sense when liking the smell of baking bread is the same kind of thing. One can enter a market, smell baking bread, and pay no mind to the scent. Attraction is no more an action than liking the smell of baking bread. No one can help whether he likes the smell of something.

4 Likes

What prompt did you use? I’ve been getting ChatGPT to write ‘fanfic’ for my stories and it defaults towards being sympathetic towards ‘non-offending pedophiles’ but unsympathetic towards ‘the MAP community’ and ‘normalizing pedophilia’:

a prompt involving 'non-offending pedophiles'

a prompt involving 'the map community'

Probably just because the former is used more often by supporters of non-offending pedophiles, while the latter is more often used by detractors yelling about them trying to infiltrate the LGBT community or whatever :'D So basically:

It does exactly this, actually XD Its speech patterns mirror the data fed to it, said data being stuff that people write, which is pretty much correlated with public opinion :stuck_out_tongue: It’s just data and associations, no objective rationalization going on here :'D

(It’s also worth noting that OpenAI’s content policy itself is anti af, with it refusing to generate various kinds of fictional content that ‘normalize’, ‘trivialize’ or ‘glorify’ problematic things. So much so that people are always looking for ways to ‘jailbreak’ it so it can write dark and problematic things :P)

2 Likes