
Block Bot, #ReportAbuse and a Twitter ‘panic mode’

One thing I didn’t make clear, and should have, in my previous post on the #ReportAbuse campaign on Twitter is that even though I had concerns about the effectiveness of a “report abuse” button, I did agree that Twitter should do something about the trolls. It’s not clear to me why having a particularly thick skin, staying away from topics that invite abuse, or, crudely, simply possessing the property of not-being-a-woman should pretty much guarantee a friendlier time on Twitter.

A concern about ‘false positives’ in being identified as a troll (i.e. people being labelled as abusers too readily, on too subjective a set of criteria, or for entirely personal reasons) is that the reputational harm could be fairly costly, even if you aren’t aware of what it has cost you. For example, in the skeptic/atheist/humanist community, you might be passed over as a potential speaker at a conference, or as a contributor to a book or journal, because someone points out that you’re a dodgy character, as evidenced by your inclusion on some list or other.

However, I’ve got no issue with people maintaining such lists, so long as they are clear about what the lists are for, don’t add people to them capriciously or arbitrarily, and provide some mechanism for being removed from the list.

One such list is James Billingham’s (or oolOn’s, to use the name some will be more familiar with) Block Bot, which has recently enjoyed more widespread attention than usual thanks to the BBC clip embedded below. The Block Bot is clear on what the list is for, but to my mind errs in including a publicly-readable list of offenders. I have no problem with a self-selected community deciding to universally pay no attention to a list of Tweeters, but when these lists are available publicly, passers-by might conclude that there is broad consensus that those listed are abusers, rather than this being the decision of a specific community. I’d also suggest that the explanatory notes on what the levels of abuse mean should be linked on the same page as the list of abusers, to make the context clearer.
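
For readers unfamiliar with how such a bot operates, here is a rough sketch of the general shared-blocklist mechanism. To be clear, the names, entries, and tiering here are mine, for illustration only; the Bot’s actual source is freely available (see the postscript below).

```python
# A sketch of the shared-blocklist idea, not the Block Bot's actual code.
from dataclasses import dataclass

@dataclass
class Listing:
    handle: str
    level: int  # 1 = most serious, 3 = least, mirroring the Bot's tiers

# Hypothetical entries, purely for illustration.
SHARED_LIST = [
    Listing("@example_troll", 1),
    Listing("@example_irritant", 3),
]

def accounts_to_hide(shared_list, max_level):
    """Handles a subscriber opts to hide, up to their chosen severity tier."""
    return [e.handle for e in shared_list if e.level <= max_level]

# A subscriber can take only the worst tier, or the whole list:
print(accounts_to_hide(SHARED_LIST, max_level=1))  # ['@example_troll']
print(accounts_to_hide(SHARED_LIST, max_level=3))  # both entries
```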

One can, and should, also raise concerns about how people get added to and taken off the list. It looks like a single report from an authorised blocker is sufficient to get you added, even as a Level 1 (i.e. maximally offensive) abuser. That seems too easy, allowing for a label to be applied in a moment of hasty or capricious judgement, rather than as a result of careful consideration. Being removed from the list, meanwhile, seems to require posting on the Atheism+ forums, and that seems a high bar, seeing as the sorts of people who might get listed on the Block Bot overlap substantially with people who won’t believe they’ll get a fair hearing there. It also smacks of a guilty-until-proved-innocent model.
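
To make the concern concrete, here is one way a listing flow could demand more deliberation than a single report: require several authorised blockers to independently concur before the harshest label sticks. This is a minimal sketch under thresholds and names I’ve invented; it is not how the Block Bot actually works.

```python
# Illustrative only: a consensus rule for adding someone to the list.
from collections import namedtuple

Report = namedtuple("Report", "reporter handle level")

# Demand the most corroboration for the harshest label (Level 1).
REQUIRED_REPORTS = {1: 3, 2: 2, 3: 2}

def should_list(handle, level, reports):
    """List `handle` at `level` only once enough *distinct* authorised
    blockers have independently reported it at that level."""
    reporters = {r.reporter for r in reports
                 if r.handle == handle and r.level == level}
    return len(reporters) >= REQUIRED_REPORTS[level]

reports = [Report("blocker_a", "@example", 1),
           Report("blocker_b", "@example", 1)]
print(should_list("@example", 1, reports))  # False: two reports, three needed
```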

There are some folk on the list, especially at Level 3, who I don’t think belong there at all. This wouldn’t be a concern were it not for the fact that, as I say above, the list isn’t viewable on an opt-in basis, and also might be construed as indicating worse ‘crimes’ than it does, given the lazy way in which many of us make judgements. You’re told that there’s a list of abusers, and you go and see person X listed as one; especially when Levels 1, 2 and 3 aren’t clarified alongside the list, you might leave with a false impression of the severity of the abuse perpetrated by certain individuals on it.

This is a central point in Damion’s post on the BBC Newsnight story, where he rightly criticises the BBC journalist for not making the distinctions mentioned above, of there being different levels of abuse, perceived or otherwise. The (undoubtedly many) people who visit the Block Bot site subsequent to that programme airing will be primed to make exactly the mistake mentioned above, namely thinking people are worse abusers than they are. Especially in light of the exceedingly vile forms of abuse that are on people’s minds right now (think Criado-Perez or Lindy West), it doesn’t seem fair that someone who has tweeted cynically about Atheism+ stands a chance of being perceived as aligned with the sort of troll who would tweet rape threats.

I’ve gone on too long, so to conclude: in case you didn’t spot it, Flay has made a very interesting suggestion with regard to controlling abuse on Twitter, and I encourage you to read his post detailing what he calls “panic mode” for Twitter. In summary, enabling panic mode would only allow mentions from people you follow to appear in your feed, allowing for a respite, while simultaneously flagging your mentions for monitoring by Twitter, so as to highlight people who might be violating Twitter’s terms of service.
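
As I read it (and Flay’s post remains the authoritative description), the filtering half of the idea amounts to something like the sketch below. None of these names are a real Twitter API; they are assumptions for illustration.

```python
# A toy sketch of "panic mode": surface only mentions from accounts the
# user follows, and flag everything else for monitoring.

def panic_mode_filter(mentions, following, flag_for_review):
    """Show only mentions from followed accounts; hide the rest and queue
    them for review, so possible ToS violations get highlighted."""
    visible = []
    for mention in mentions:
        if mention["author"] in following:
            visible.append(mention)
        else:
            flag_for_review(mention)  # hidden from the user, queued for review
    return visible

# Toy usage: one friendly mention gets through, one stranger's is flagged.
following = {"@friend"}
mentions = [{"author": "@friend", "text": "lunch?"},
            {"author": "@stranger", "text": "abusive nonsense"}]
flagged = []
print(panic_mode_filter(mentions, following, flagged.append))
print(flagged)
```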

As I said at the top, something does need to be done, and Twitter has now introduced a button to report abuse on their iOS app, with the same functionality to follow for other platforms. But hopefully, they’ll keep thinking about what mechanism might be best for controlling abuse, and the ‘panic mode’ idea seems worthy of further consideration.

P.S. Two notes, after checking some details with oolOn via Twitter:

  1. A misconception that has sometimes arisen around the Block Bot is that people who are added to it are reported for spam, and could therefore stand a greater chance of being ~~blocked~~ suspended by Twitter. This is false: listed accounts are hidden from your timeline (though you can still follow them on an individual basis if you like), but you would still need to manually report them as spammers, or block them, if you so choose (see the sketch after this list).
  2. The source code for the bot is freely available, in case you wanted to bake your own list of people to hide.
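
Since point 1 is easy to misread, here is a toy sketch of the distinction, again under names I’ve made up: subscribing hides accounts locally, and nothing is reported to Twitter on your behalf.

```python
# Illustrates point 1: hiding is local; reporting stays a manual choice.
hidden = set()                 # accounts the bot hides on your behalf
individually_followed = set()  # exceptions you still want to see

def subscribe(shared_list):
    """Adopt the community's list. Note there is no report-for-spam call
    here or anywhere else; reporting remains a manual, personal choice."""
    hidden.update(shared_list)

def timeline(tweets):
    """Filter your timeline, honouring individual follows as overrides."""
    return [t for t in tweets
            if t["author"] not in hidden or t["author"] in individually_followed]

subscribe({"@example_troll"})
individually_followed.add("@example_troll")  # you can still opt back in
print(timeline([{"author": "@example_troll", "text": "visible again"}]))
```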

[EDIT] re. point 1 above and the strikethrough, oolOn just said:

https://twitter.com/ool0n/status/362870482984370177

[EDIT 2] In paragraph 4 above, I point out that the Block Bot is most useful to a specific community, rather than as a general tool (in the latter case, it could mislead, and itself lead to abuse). Tim Farley expands on this and other issues with the Bot in this very worthwhile post.

http://www.youtube.com/watch?v=R0UqtZMqxT8