Draft best practice principles for sexual content moderation and child protection - master thread

If you are reading this post, you have probably volunteered or been invited to collaborate on a set of best practice principles for sexual content moderation and child protection that are being developed as part of our Multi-Stakeholder Dialogue on Sex, Human Rights, and CSA Prevention. Version 0.1 of the draft was developed from the discussions at our San Francisco event on May 23, and we are working towards presenting a subsequent draft (whichever version number we can iterate to between now and then) at our workshop at RightsCon on June 13.

Here are the two most important documents that you may wish to refer to when providing your feedback:

And here are links to separate discussion threads on each of the individual principles. Please provide your comments on the individual principles in the threads below.

  1. Prevention of harm
  2. Evaluation of impact
  3. Transparency
  4. Proportionality
  5. Context
  6. Non-discrimination
  7. Human decision
  8. Notice
  9. Remedy

You can use this master thread for general discussion about the principles, such as:

  • Should we give them a more memorable name?
  • Are the Access Now principles a flexible enough basis for what we want to cover?
  • How operationally specific do we want to get in these principles?
  • Should we also be working on a set of template or model terms of service related to child protection?

Here is a consolidated version of the current text:

Best practice principles for sexual content moderation and child protection
Version 0.3—July 3, 2019

1. Prevention of harm

Sexual content should be restricted where it causes direct harm to a child. Indirect harms should not be the basis for blanket content restriction policies unless those harms are substantiated by evidence, and adequate measures are taken to avoid human rights infringements.

2. Evaluation of impact

Companies should evaluate the human rights impacts of their restriction of sexual content, meaningfully consult with potentially affected groups and other stakeholders, and conduct appropriate follow-up action that mitigates or prevents these impacts.

3. Transparency

Companies and others involved in maintaining sexual content policies, databases or blocklists should describe the criteria for assessing such content in detail, especially when those policies would prohibit content that is lawful in any of the countries where such policies are applied.

4. Proportionality

Users whose lawful sexual conduct infringes platform policies should not be referred to law enforcement, and their lawful content should not be added to shared industry hash databases, blocklists, or facial recognition databases.

5. Context

The context in which lawful sexual content is posted, and whether there are reasonable grounds to believe that the persons depicted in it have consented to be depicted in that context, should be considered before making a decision to restrict or to promote it.

6. Non-discrimination

Content moderation decisions should be applied to users based on what they do, not who they are.

7. Human decision

Content should not be added to a hash database or blocklist without human review. Automated content restriction should be limited to the case of confirmed illegal images as identified by a content hash.

8. Notice

Users should be notified when their content is added to a hash database or blocklist, or is subject to context-based restrictions.

9. Remedy

Internet platforms should give priority to content removal requests made by persons depicted in images that were taken of them as children, and provide users with the means of filtering out unwanted sexual content.

We will launch the final version of these #SexContentDialogue principles at the Internet Governance Forum in Berlin. If you can get to Berlin, please register to attend.