Question

I host a few Source game servers and run a plugin that dumps player chat to a MySQL database. I have quite a bit of chat history and was looking for something interesting to do with it. I'd like to build a system that allows members of my community to determine what is and isn't 'acceptable'.

My thought is that it'd work something like this: somehow, I allow my community members to view chat logs (without identifying who said what) and they mark the logs as 'acceptable' or 'unacceptable'. I'd have to figure out whether to show a block of text from a time frame, only a specific user's messages within a time frame, or individual lines (which could be good, but could also mean the grader completely misses the context of the chat).

This would work somewhat like a CAPTCHA system, where multiple users end up grading the same series of chat logs. From there, I'd get values for groups of words. The theory is that this would create a threshold where certain things are acceptable and others are not. After a set amount of my existing logs has been graded, I'd have a meaningful way of determining whether a message meets the standards my community has defined.

My questions are these:

  1. What would you recommend I show the users who are grading the logs? Should I show them a set of X chat lines? Should I show all chat lines in 5-minute intervals? Should I narrow either of those windows down to messages from a single user? Or should users grade each line individually? I am planning to place a limit on how many lines/groups a given community member can grade per day.
  2. What would be an appropriate way to design the database storing all of this data? Currently, each individual chat line is stored as its own row in MySQL. Each has a unique ID as well as the full text of the chat message sent in game. I've also got the player name and the server it was received from, but I don't see those as necessary.
  3. I'd like to create this in such a way that it becomes self-sufficient and adaptive to the community and what they consider acceptable. Over time, more lines would be graded and added to the thresholds/calculations that determine whether a message is 'good' or 'bad'. If anyone has built something like this, can you point out pitfalls I should avoid while building it?

Solution

I'd first consider letting users mark messages as inappropriate in real time, if possible. Ordinary players can do it themselves, rather than you having to find people to review logs offline. If you can't or don't want to go that route: a message can probably be judged inappropriate without the context of any other messages, but looking at a continuous stream of messages, in the order they appeared in real time, could still be helpful, so I'd probably go for giving graders X consecutive messages. With real-time flagging, I'd suggest showing a few messages before and after, with the flagged message in red or something similar.
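
If it helps to picture it, here's a minimal sketch (in Python, using the mysql-connector-python package) of pulling a flagged message plus a few lines of surrounding context for a reviewer. The table and column names (chat_log, id, message) and the connection details are placeholders for whatever your plugin actually writes:

```python
import mysql.connector

def get_context(flagged_id, window=3):
    """Return the flagged message plus `window` messages on either side."""
    # Connection details are placeholders.
    conn = mysql.connector.connect(
        host="localhost", user="chatuser", password="secret", database="chatlogs"
    )
    cur = conn.cursor()
    cur.execute(
        "SELECT id, message FROM chat_log "
        "WHERE id BETWEEN %s AND %s ORDER BY id",
        (flagged_id - window, flagged_id + window),
    )
    rows = cur.fetchall()
    conn.close()
    # Prefix the flagged line so the reviewer can spot it in the stream.
    return [("> " if row_id == flagged_id else "  ") + text for row_id, text in rows]
```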

You could put some sort of rewards system in place for users who review a certain number of messages. If you allow real-time flagging of messages, you can reward people for reviewing flagged messages to confirm the flagged status.

Knowing which player it is could be useful: if a player posts a few inappropriate messages, you can issue a warning or a ban. The server is probably not so useful, but I'm all for storing a bit of extra info that you may find a use for later on.

I wouldn't really get hung up on the database storage. A table with columns time (or simply an auto-increment ID, or both), player, server, message, and isInappropriate should be fine, depending on what types of operations you want to perform.
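
As a rough sketch of that layout, assuming MySQL and the mysql-connector-python package; the names and connection details are illustrative, not prescriptive:

```python
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="chatuser", password="secret", database="chatlogs"
)
cur = conn.cursor()
# is_inappropriate left NULL until the community has graded the line.
cur.execute(
    """
    CREATE TABLE IF NOT EXISTS chat_log (
        id BIGINT AUTO_INCREMENT PRIMARY KEY,
        sent_at DATETIME NOT NULL,
        player VARCHAR(64) NOT NULL,
        server VARCHAR(64) NOT NULL,
        message TEXT NOT NULL,
        is_inappropriate BOOLEAN DEFAULT NULL
    )
    """
)
conn.commit()
conn.close()
```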

An approach you could take (once you have some messages marked as inappropriate) would be pretty similar to a spam filter (you should be able to find more than enough material on that).
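
To make that concrete, here's a minimal spam-filter-style sketch using scikit-learn's bag-of-words vectorizer and a naive Bayes classifier. The training messages below are made up; in practice you'd load the community-graded rows from your database:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Made-up examples; replace with (message, grade) pairs from the graded logs.
messages = [
    "nice shot, well played",
    "gg everyone",
    "you are garbage, uninstall the game",
    "get lost you idiot",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = inappropriate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

classifier = MultinomialNB()
classifier.fit(X, labels)

# Probability that a new message is inappropriate.
new = vectorizer.transform(["well played, nice round"])
print(classifier.predict_proba(new)[0][1])
```

You'd then pick a probability threshold for flagging, which is exactly the lenient-versus-strict trade-off below.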

The general consideration is whether you want to be lenient or strict when marking messages as inappropriate: would you rather some inappropriate messages be missed, or some acceptable messages be flagged? Look into precision and recall to get a feel for this trade-off.
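
For concreteness, here's what those two numbers look like once you compare automatic flags against the community's grades on a set of messages (a sketch with made-up values):

```python
# Precision: of the messages we flagged, how many really were inappropriate?
# Recall: of the truly inappropriate messages, how many did we flag?
def precision_recall(predicted, actual):
    true_pos = sum(1 for p, a in zip(predicted, actual) if p and a)
    flagged = sum(predicted)
    inappropriate = sum(actual)
    precision = true_pos / flagged if flagged else 0.0
    recall = true_pos / inappropriate if inappropriate else 0.0
    return precision, recall

# 1 = flagged/inappropriate, 0 = not.
print(precision_recall([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # roughly (0.67, 0.67)
```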

I suspect that in a chat environment it might, for the most part, be enough to simply look for (and possibly try to automatically identify) specific words that appear in inappropriate messages.
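
At its simplest that's just a word-list check. A small sketch, where the word list itself is a placeholder you might seed by hand and then grow from words that show up disproportionately in community-flagged messages:

```python
import re

# Placeholder list; grow it from words common in messages graded inappropriate.
BLACKLIST = {"idiot", "garbage", "uninstall"}

def is_flagged(message):
    words = set(re.findall(r"[a-z']+", message.lower()))
    return bool(words & BLACKLIST)

print(is_flagged("nice shot"))               # False
print(is_flagged("you are garbage, leave"))  # True
```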
