What happens when biased robots with unknown agendas join the online conversation?
Public engagement is key to democracy: people discussing the issues of the day with each other openly, honestly and without outside influence. But what happens when large numbers of participants in that conversation are biased robots created by unseen groups with unknown agendas? As my research has found, that's what has happened this election season.
Since 2012, I have been studying how people discuss social, political, ideological and policy issues online. In particular, I have looked at how social media are abused for manipulative purposes.
It turns out that much of the political content Americans see on social media every day is not produced by human users. Rather, about one in every five election-related tweets from September 16 to October 21 was generated by computer software programs called “social bots.”
These artificial intelligence systems can be rather simple or very sophisticated, but they share a common trait: they are set to automatically produce content following a specific political agenda determined by their controllers, who are nearly impossible to identify. These bots have affected the online discussion around the presidential election, shaping which topics led the conversation and how the media and the public perceived online activity.
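To see how low the barrier to entry is at the simple end of that spectrum, consider a minimal sketch of an automated posting loop, written here in Python against Twitter's posting API via the tweepy library. The credentials and talking points are hypothetical placeholders, and the bots described in the research are considerably more sophisticated than this:

    import random
    import time

    import tweepy

    # Placeholder credentials -- hypothetical values for illustration only.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    # A fixed pool of slanted talking points chosen by the bot's controller.
    TALKING_POINTS = [
        "Candidate X is the only one with a real economic plan #Election2016",
        "Why won't the media cover Candidate Y's latest scandal? #rigged",
    ]

    while True:
        api.update_status(random.choice(TALKING_POINTS))  # post one canned message
        time.sleep(random.randint(600, 3600))             # pause 10-60 minutes to seem human

Even a loop this crude, cloned across thousands of accounts, can flood a hashtag with its controller's chosen message.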
How active are they?
The operators of these systems could be political parties, foreign governments, third-party organisations, or even individuals with vested interests in a particular election outcome. Their work amounts to at least four million election-related tweets during the period we studied, posted by more than 400,000 social bots.
That's at least 15 percent of all the users discussing election-related issues, roughly two to three times the overall concentration of bots on Twitter, which the company estimates at 5 to 8.5 percent of all accounts.
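Readers who want to check those figures can reproduce the ratios from the counts reported above; this short sketch uses only numbers stated in this article:

    bot_accounts = 400_000    # social bots identified in the study
    bot_share = 0.15          # bots as a share of users in the election conversation
    total_accounts = bot_accounts / bot_share   # ~2.67 million accounts discussing the election
    tweets_per_bot = 4_000_000 / bot_accounts   # ~10 election-related tweets per bot over five weeks
    baseline_low, baseline_high = 0.05, 0.085   # Twitter's own platform-wide bot estimate
    print(bot_share / baseline_high, bot_share / baseline_low)  # ~1.8x to 3.0x the baseline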
Via: Ars Technica
http://arstechnica.co.uk/information-technology/2016/11/trump-twitter-bots-us-presidential-election/