Question

On a multi-lingual community consisting almost entirely of user-generated content, is there a commonly used way to handle flagged content (profanity, racism, generally illegal material, etc.)?

Since there will be a lot of non-English content, the only realistic way to handle the flagging itself is to crowdsource it to the community and automatically hide or delete flagged content once it passes a threshold. But what method could be used to stop abuse, e.g. "I don't like him, let's all report this and get it deleted"?


Solution

Another option you might consider is to allow your users to "hide" other users, i.e. not see the content of hidden users.

This allows people to "remove" other users that they don't feel contribute to the community.
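To make the idea concrete, here is a minimal sketch, assuming posts are plain dictionaries and each viewer keeps a set of hidden user IDs; all names here are illustrative, not a specific platform's API:

```python
# Minimal sketch (hypothetical names): filter a feed so a viewer never
# sees content from users they have chosen to hide.

def visible_posts(posts, hidden_user_ids):
    """Return only the posts whose author is not on the viewer's hidden list."""
    return [post for post in posts if post["author_id"] not in hidden_user_ids]

posts = [
    {"id": 1, "author_id": "alice", "text": "Welcome!"},
    {"id": 2, "author_id": "bob", "text": "Unwanted rant"},
]

# A viewer who has hidden "bob" only sees Alice's post.
print(visible_posts(posts, hidden_user_ids={"bob"}))
```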

You could also let users report bad posts and have a human decide whether to hide or delete them. You would need clear community rules for this to be effective.
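One simple way to model this is a report queue that collects flags but leaves the final decision to a moderator. The sketch below assumes reports are kept in memory and the threshold is arbitrary; the names are placeholders:

```python
from collections import defaultdict

# Sketch (hypothetical names): collect user reports per post and let a human
# moderator decide the outcome instead of acting on reports automatically.

reports = defaultdict(set)  # post_id -> set of reporting user IDs

def report_post(post_id, reporter_id):
    """Record a report; duplicate reports from the same user are ignored."""
    reports[post_id].add(reporter_id)

def moderation_queue(min_reports=3):
    """Posts with enough distinct reporters, most-reported first, for human review."""
    flagged = [(pid, len(users)) for pid, users in reports.items()
               if len(users) >= min_reports]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

# Moderators work through moderation_queue() and hide or delete posts
# according to the community rules.
```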

OTHER TIPS

First of all, it depends on your content.

But in general, I would start by hiding or deleting flagged content once it passes a threshold.

As the community grows, I would add crowdsourced moderation and strike a balance between the two.
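One way to combine an automatic threshold with some protection against coordinated "let's all report him" brigades is to count only distinct reporters and weight each report by the reporter's standing. This is a sketch under those assumptions; the field names, weights, and threshold are all illustrative:

```python
# Sketch (illustrative names and numbers): auto-hide a post only when the
# weighted count of *distinct* reporters crosses a threshold, which blunts
# report brigades by discounting throwaway or untrusted accounts.

HIDE_THRESHOLD = 5.0

def report_weight(reporter):
    """Weight a report by the reporter's standing in the community."""
    if reporter["account_age_days"] < 7:
        return 0.2  # brand-new accounts count for very little
    return min(2.0, 0.5 + reporter["accepted_flags"] * 0.1)

def should_auto_hide(reporters):
    """Hide when the summed weight of distinct reporters passes the threshold."""
    seen = set()
    total = 0.0
    for reporter in reporters:
        if reporter["id"] in seen:
            continue
        seen.add(reporter["id"])
        total += report_weight(reporter)
    return total >= HIDE_THRESHOLD

reporters = [
    {"id": "u1", "account_age_days": 400, "accepted_flags": 20},
    {"id": "u2", "account_age_days": 2,   "accepted_flags": 0},
]
print(should_auto_hide(reporters))  # False: only two reports, one from a new account
```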

I would also run a general scan over all posts for keywords that might indicate or link to bad content.

Also, you will need to build in some tolerance, as some posts may reference illegal things for good reasons.

e.g. "don't take drugs"
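Because a keyword match alone cannot tell "don't take drugs" apart from actual promotion, a scan like the one below would only queue posts for human review rather than delete them automatically. The word list and function names are placeholders:

```python
import re

# Sketch (placeholder word list): a keyword scan that only *queues* matching
# posts for human review, so "don't take drugs" is not auto-deleted.

SUSPICIOUS_KEYWORDS = {"drugs", "weapons"}  # per-language lists in practice

def needs_review(text):
    """True if the post mentions any watched keyword (case-insensitive, whole words)."""
    words = set(re.findall(r"\w+", text.lower()))
    return bool(words & SUSPICIOUS_KEYWORDS)

print(needs_review("don't take drugs"))    # True -> sent to a reviewer
print(needs_review("nice weather today"))  # False
```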

If the community develops well, I would mostly rely on it.

Licensed under: CC-BY-SA with attribution