This text was completely generated by OpenAI:
There is no one answer for how to prevent child sexual abuse. However, one important piece of the puzzle is reducing the stigma faced by non-offending pedophiles.
Pedophilia is often seen as a taboo subject, and those who suffer from it are often demonized. This only makes it harder for them to get the help they need to prevent themselves from abusing children.
If we can reduce the stigma around pedophilia, we can make it easier for those who suffer from it to get help. This may include therapy, medication, and other forms of treatment. It may also mean providing support and understanding to those who are struggling with their attraction to children.
This is not to say that we should condone or excuse child sexual abuse in any way. But if we can provide support to those who are struggling with their attraction to children, we may be able to prevent them from ever harming a child.
interesting… but also concerning.
I don’t expect an AI capable of rationalizing things in such an objective manner to last very long, but if it’s capable of this, I’d be scared to see what it might say had it been set up to be non-objective.
Now the pedophiles are manipulating AI to spread their propaganda. Fix your shit.
How about you feed your AI some normal data and stop fucking it up with biased data.
I mean, those two comments are pretty much what I imagine the majority of people will think. It is true that AI can be biased depending on what data it is fed, but seeing how 99% of society does not even know the definition of pedophilia, that is to be expected. This generated text makes complete sense to anyone who has spent some time reading studies. So do we feed AI scientific data, or public opinion? The latter would generate satisfactory results for a lot of people (anyone remember the Microsoft Twitter bot turning racist?)
The issue with AI is that it’s not supposed to function in accordance with public opinion, nor is there a way to program it to really do so without compromising it.
You want it to be objective, its conclusions and solutions grounded in evidence and logic, to a point with no margin for error.
If you’re just looking to build a computer that only serves to validate arbitrary viewpoints or biases, then you’re not really accomplishing anything.
There is: by cherry-picking data. Or, more worryingly, by not controlling data at all. Getting an AI that isn’t skewed in either direction is actually a lot of work, as you need to check your data for biases. (For example, see the US racial-profiling AI that identified Obama as a criminal, Google claiming numbers as copyrighted, etc.)
The dangers of algorithms
Literally shaking and crying right now, how could they possibly do this.
“Normalizing” is a dangerous word. Too much arbitration. Too much room for misuse and misrepresentation.
Pedophilia will never be normalized in the human world. It stands as an abnormality here, specifically because it is a minority interest compared to those considered normal. It does not simply become normal, and does not need to. Abnormality does not insinuate evil, but rather just the state of being alternative or different.
“Normalize” as used here is either used by those who are not being reasonable, or by those who are simply misunderstanding the subject: at its core, “philia” means the love for something, and there is nothing evil about the thought alone. We have a lot of time to fix the misunderstanding part. We can’t fix the unreasonableness part; that’s a deeply rooted part of our freedoms of expression and opinion, and we need those. The same freedom to feel disgust is the same freedom to love and appreciate.