With each passing day, we see more evidence that Twitter has become the social media platform that’s in the biggest trouble today.
The news is replete with articles about how some people are signing off from Twitter, having “had it” with the politicization of the platform. (To be fair, that’s a knock on Facebook as well these days.)
Then there are reports of how Twitter has stumbled in its efforts to monetize the platform, with advertising strategies that have failed to generate the kind of growth needed to match the company’s optimistic forecasts. That bit of bad news has hurt Twitter’s share price pretty significantly.
And now, courtesy of a new analysis published by researchers at Indiana University and the University of Southern California, comes word that Twitter is delivering misleading analytics on audience “true engagement” with tweets. The information is contained in a peer-reviewed article titled “Online Human-Bot Interactions: Detection, Estimation and Characterization.”
According to findings from Indiana University’s Center for Complex Networks & Systems Research (CNetS) and the Information Sciences Institute at the University of Southern California, approximately 15% of Twitter accounts are “bots” rather than people.
That sort of news can’t be good for a platform that is struggling to elevate its user base in the face of growing competition.
But it’s even more troubling for marketers who rely on Twitter’s engagement data to determine the effectiveness of their campaigns. How can they evaluate social media marketing performance if the engagement data is artificially inflated?
Fifteen percent of all accounts may seem like a rather small proportion, but in the case of Twitter that represents nearly 50 million accounts.
To add insult to injury, the report notes that even the 15% figure is likely too low, because more sophisticated and complex bots could have appeared as “humans” in the researchers’ analytical model, even though they aren’t.
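The arithmetic behind that headline number is simple to check. A minimal sketch, assuming Twitter’s publicly reported figure of roughly 319 million monthly active users from around that time (the user count is my assumption, not a number from the study itself):

```python
# Back-of-the-envelope check of the "nearly 50 million" figure.
# The ~319M monthly-active-user count is an assumption (Twitter's
# publicly reported figure around early 2017), not from the study.

monthly_active_users = 319_000_000  # assumed MAU, not from the paper
bot_share = 0.15                    # the study's reported bot fraction

bot_estimate = monthly_active_users * bot_share
print(f"Estimated bot accounts: {bot_estimate / 1e6:.0f} million")

# For marketers: if bots engaged at roughly the same rate as humans,
# observed engagement would overstate human engagement by that same ~15%.
human_share_of_engagement = 1 - bot_share
print(f"Human share of engagement (same-rate assumption): "
      f"{human_share_of_engagement:.0%}")
```

The same-rate assumption is a simplification; automated accounts may engage far more (or less) than typical humans, which is exactly why opaque engagement figures are so hard for marketers to trust.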
Social media bots do have legitimate uses, such as automatic alerts about natural disasters or automated customer service responses. But there’s also growing evidence of nefarious applications.
Here’s one that’s unsurprising even if irritating: bots that emulate human behavior to manufacture “faux” grassroots political support. More alarming is the use of bot “armies” to deliver dangerous or inciting propaganda.
The latest Twitter-bot news is more confirmation of the deep challenges faced by this particular social media platform. What’s next, I wonder?