Last year, Twitter’s then-CEO Dick Costolo took “personal responsibility” for the continuing abuse problems on his site.
One bot, called @Assbot, recombined tweets from its human creator’s archive into random statements, then used these to respond to tweets from Donald Trump.
The result was a torrent of angry Trump supporters engaging with a bot spouting nonsense.
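The article doesn’t describe @Assbot’s internals. One common way to “recombine” an archive of tweets into random-but-plausible statements is a simple word-pair Markov chain; the sketch below illustrates that general technique (the tiny archive and all names are invented, and this is not @Assbot’s actual code):

```python
import random

def build_chains(tweets):
    """Map each word to the list of words that follow it anywhere in the archive."""
    chains = {}
    for tweet in tweets:
        words = tweet.split()
        for prev, nxt in zip(words, words[1:]):
            chains.setdefault(prev, []).append(nxt)
    return chains

def recombine(chains, max_words=20, seed=None):
    """Walk the chain from a random starting word to produce a new 'statement'."""
    rng = random.Random(seed)
    word = rng.choice(list(chains))
    out = [word]
    for _ in range(max_words - 1):
        followers = chains.get(word)
        if not followers:  # dead end: no word ever followed this one
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# Hypothetical mini-archive standing in for a real user's tweet history.
archive = [
    "the weather today is wild",
    "today is a good day for bots",
    "bots reply to everything",
]
chains = build_chains(archive)
print(recombine(chains, seed=1))
```

Because every generated word genuinely appeared in the archive, the output sounds vaguely like its author while meaning nothing, which is roughly the effect described above.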
The simplest way bots can help is with block lists, which specify the accounts you don’t want to see in your feed. But reporting abusers to prevent harassment of others is a hassle: a form must be filled out for each abusive tweet. @Assbot simply deployed a mishmash of existing tweets.

Kevin Munger at New York University is interested in group identity on the internet. Offline, we signal which social groups we belong to with things like in-jokes, insider knowledge, clothes, mannerisms and so on. “So it’s really hard to train computers to detect it.” Enter the argue-bots.

“Group dynamics and norms of behaviour have already been established there.” So Munger wondered if he could create a bot to manipulate a troll’s sense of group dynamics online. The idea was to create bots that would admonish people who tweeted racist comments – by impersonating a higher-status individual from their in-group. He identified Twitter accounts that had recently issued a racist tweet, then combed through their previous 1,000 tweets to check that the user met his standards for abuse and racism.
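Munger’s exact screening criteria aren’t spelled out here. In spirit, the step amounts to counting how many of an account’s recent tweets contain watched terms and keeping only accounts above some floor. A minimal sketch of that idea (the sample tweets, term set, and threshold are all hypothetical, and the real tweets would come from the Twitter API, e.g. the user’s previous 1,000 tweets):

```python
def meets_abuse_threshold(tweets, watched_terms, min_hits=3):
    """Count tweets containing any watched term; screen in accounts at or above the floor."""
    hits = sum(
        1 for tweet in tweets
        if any(term in tweet.lower() for term in watched_terms)
    )
    return hits >= min_hits

# Hypothetical recent history for one account (stand-in for an API fetch).
recent = ["some ordinary tweet", "SLUR here", "another slur tweet", "slur again"]
print(meets_abuse_threshold(recent, watched_terms={"slur"}, min_hits=3))
```

A keyword count like this is crude, which is consistent with the quote above about how hard abusive language is for computers to detect; it only screens candidates, leaving the final judgment to a human.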