Is Content Moderation Even Possible?

March 2, 2021

Not according to a post in the Techdirt blog – at least not moderation “at scale” – because neither human moderators nor AI are up to the task of recognizing mockery and other subtle and not-so-subtle varieties of what editor Mike Masnick refers to generically as context. “Low level content moderators tend to only have a few seconds to make decisions on content, or the entire process slows to a crawl, and then the media will slam those companies for leaving ‘dangerous’ content up too long. So tradeoffs are made, and often that means that understanding context is a casualty of the process.

“Yes, it would be great if every content moderator had the time and resources to understand the context of every tweet or Facebook post, but the reality is that we’d then need to employ basically every human being alive to be researching context.”

One example cited by Masnick – close to home, he says, because it’s from one of Techdirt’s prolific commenters – is a reply to a tweet berating Facebook on the grounds that it’s “controlled by Liberals.” The sarcastic rejoinder: These are people who “believe the world is flat, the vaccine is a way to chip them so someone can dial their death number via 5G … [and] that Trump is the messiah.” Twitter demanded that tweet be removed, on the grounds it spread “misleading and potentially harmful information related to COVID-19.”

At that point the commenter appealed and, “surprisingly quickly,” says Masnick, his case was reviewed. The appeal was rejected.

Today’s General Counsel / DR
