Social media platforms navigate the delicate balance between free speech and censorship.

Everyday Americans weigh in with their views.

The past few months have seen people like Mark Zuckerberg doing mental acrobatics attempting to explain how social media platforms like Facebook intend to control the spate of “fake news” and “hate speech” posts, comments and tweets that are so often the currency of interactive “discourse” online.

And now the First Amendment Center of the Freedom Forum Institute is weighing in with the results of a survey in which ~1,000 Americans were asked for their opinions about the challenges of monitoring and controlling what gets published for the world to see.

The survey, which has been conducted annually since 1997, gives us insights into Americans’ current attitudes about censoring objectionable content balanced against free speech rights.

Asked whether social media companies should remove certain types of content from their pages, sizable majorities agreed that companies should do so in the following cases:

  • Social media companies should remove false information: ~83% agree
  • Social media companies should remove hate speech: ~72% agree
  • Social media companies should remove personal attacks: ~68% agree

At the same time, however, when asked whether the government should require social media sites to monitor and remove objectionable content, those opinions were decidedly mixed:

  • Strongly agree with having government involved in these activities: ~27%
  • Somewhat agree: ~21%
  • Somewhat disagree: ~20%
  • Strongly disagree: ~29%
  • Don’t know/not sure: ~3%

So the key takeaway is that Americans dislike objectionable content and think that the social platforms should take on the responsibility for monitoring and removing such content. But many don’t want the government doing the honors.

A mixed result for sure — and one in which governmental authorities could well be d*mned if they do and d*mned if they don’t.

More information about the survey findings can be accessed here.

More raps for Google on the “fake reviews” front.

Google is trying not to let its local search initiative devolve into charges and counter-charges of “fake news” à la the most recent U.S. presidential election campaign – but is it trying hard enough?

It’s becoming harder for the reviews that show up on Google’s local search function to be considered anything other than “suspect.”

The latest salvo comes from search expert and author Mike Blumenthal, whose recent blog posts on the subject question Google’s willingness to level with its customers.

Mr. Blumenthal could be considered one of the premier experts on local search, and he’s been studying the phenomenon of fake information online for nearly a decade.

The gist of Blumenthal’s argument is that Google isn’t taking sufficient action to clean up fake reviews (and related service industry and affiliate spam) that appear on Google Maps search results, which is one of the most important utilities for local businesses and their customers.

Not only that, but Blumenthal also contends that Google is publishing reports which represent “weak research” that “misleads the public” about the extent of the fake reviews problem.
Google contends that the problem isn’t a large one. Blumenthal feels differently – in fact, he claims the problem is growing worse, not better.

In a blog article published this week, Blumenthal outlines how he’s built out spreadsheets of reviewers and the businesses on which they have commented.

From this exercise, he sees a pattern of fake reviews being written for overlapping businesses, and that somehow these telltale signs have been missed by Google’s algorithms.

A case in point: three “reviewers” — “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” — have all “reviewed” three very different businesses in completely different areas of the United States:  Bedoy Brothers Lawn & Maintenance (Nevada), Texas Car Mechanics (Texas), and The Joint Chiropractic (Arizona, California, Colorado, Florida, Minnesota, North Carolina).

They’re all 5-star reviews, of course.

It doesn’t take a genius to figure out that “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” won’t be found in the local telephone directories where these businesses are located. That’s because they’re figments of some spammer-for-hire’s imagination.
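The kind of cross-matching Blumenthal describes is straightforward to automate. As a minimal sketch – using toy data modeled on the three reviewers above, and not Blumenthal’s actual spreadsheets or Google’s algorithms – flagging reviewer accounts whose reviewed-business sets overlap heavily across distant markets takes only a few lines:

```python
from collections import defaultdict

# Toy review data modeled on the pattern described above:
# (reviewer, business, state, star rating)
reviews = [
    ("Charlz Alexon", "Bedoy Brothers Lawn & Maintenance", "NV", 5),
    ("Charlz Alexon", "Texas Car Mechanics", "TX", 5),
    ("Charlz Alexon", "The Joint Chiropractic", "AZ", 5),
    ("Ginger Karime", "Bedoy Brothers Lawn & Maintenance", "NV", 5),
    ("Ginger Karime", "Texas Car Mechanics", "TX", 5),
    ("Ginger Karime", "The Joint Chiropractic", "AZ", 5),
    ("Jen Mathieu", "Bedoy Brothers Lawn & Maintenance", "NV", 5),
    ("Jen Mathieu", "Texas Car Mechanics", "TX", 5),
    ("Jen Mathieu", "The Joint Chiropractic", "AZ", 5),
    ("Pat Honest", "Texas Car Mechanics", "TX", 4),  # an ordinary reviewer
]

def suspicious_pairs(reviews, min_shared=3):
    """Flag reviewer pairs whose reviewed-business sets overlap heavily."""
    by_reviewer = defaultdict(set)
    for reviewer, business, state, stars in reviews:
        by_reviewer[reviewer].add(business)
    names = sorted(by_reviewer)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = by_reviewer[a] & by_reviewer[b]
            if len(shared) >= min_shared:
                pairs.append((a, b, sorted(shared)))
    return pairs

for a, b, shared in suspicious_pairs(reviews):
    print(f"{a} / {b} share {len(shared)} businesses: {shared}")
```

In practice Google has far richer signals to draw on (account age, review velocity, device and location data), but even this naive set-intersection test catches the pattern in the example above – which is Blumenthal’s point.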

The question is, why doesn’t Google develop procedures to figure out the same obvious answers Blumenthal can see plain as day?

And the follow-up question: How soon will Google get serious about banning reviewers who post fake reviews on local search results?  (And not just targeting the “usual suspect” types of businesses, but also professional sites such as physicians and attorneys.)

“If their advanced verification [technology] is what it takes to solve the problem, then stop testing it and start using it,” Blumenthal concludes.

To my mind, it would be in Google’s own interest to get to the bottom of these nefarious practices. If the general public comes to view reviews as “fake, faux and phony,” it’s just one step away from ceasing to use local search results at all – which would hurt Google in the pocketbook.

Might it get Google’s attention then?

Is the Phenomenon of “Fake News” Overhyped?

In the wake of recent election campaigns and referenda in places like the United States, the United Kingdom, France, Austria and the Philippines, it seems that everyone’s talking about “fake news” these days.

People all across the political and socio-economic spectrum are questioning whether the publishing and sharing of “faux” news items is having a deleterious impact on public opinion and actually changing the outcome of consequential events.

The exact definition of the term is difficult to discern, as some people are inclined to level the “fake news” charge against anyone with whom they disagree.

Beyond this, I’ve noticed that some people assign nefarious motives – political or otherwise – to the dissemination of all such news stories. Often the motive is different, however: over-hyped headlines – many of them having nothing to do with politics or public policy but instead focusing on celebrities or “freak” news events – serve as catnip-like clickbait for viewers who can’t resist clicking through to find out more.

From the news consumer’s perspective, the vast majority of people think they can spot “fake” news stories when they encounter them. A recent Pew survey found that ~40% of respondents felt “very confident” they could tell whether a news story is authentic, and another ~45% felt “somewhat confident” in that ability.

But how accurate are those self-assessments? A recent survey from BuzzFeed and Ipsos Public Affairs found that people who use Facebook as their primary source of news rated fake news headlines as accurate more than eight out of ten times.

That’s hardly reassuring.

And to underscore how many people are using Facebook versus more traditional news outlets as a “major” source for their news, this BuzzFeed chart showing the Top 15 information sources says it all:

  • CNN: ~27% of respondents use as a “major source” of news
  • Fox News: ~27%
  • Facebook: ~23%
  • New York Times: ~18%
  • Google News: ~17%
  • Yahoo News: ~16%
  • Washington Post: ~12%
  • Huffington Post: ~11%
  • Twitter: ~10%
  • BuzzFeed News: ~8%
  • Business Insider: ~7%
  • Snapchat: ~6%
  • Drudge Report: ~5%
  • Vice: ~5%
  • Vox: ~4%

Facebook’s algorithm change in 2016 to emphasize friends’ posts over publishers’ has turned that social platform into a pretty big hotbed of fake news activity, as people can’t resist sharing even the most outlandish stories to their network of friends.

Never mind Facebook’s recent steps to change the dynamics by sponsoring fact-checking initiatives and banning fraudulent websites from its ad network; by the accounts I’ve read, those moves haven’t done all that much to curb the orgy of misinformation.

Automated ad buying isn’t helping at all either, as it’s enabling the fake news “ecosystem” big-time. As Digiday senior editor Lucia Moses explains it:

“One popular method … is tapping the competitive market for native ad widgets. Taboola, Revcontent, Adblade and Content.ad are prominently displayed on sites identified with fake news, while there are a few retargeted and programmatic ads sprinkled in. Publishers install these native ad widgets with a simple snippet of code — typically after an approval process — and when readers click on paid links in the widget, the host publisher makes money.  The ads are made to appear like related-content suggestions and often promote sensational headlines and direct-marketing offers.”

So attempting to solve the “fake news” problem is a lot more complicated than some people might realize – and it certainly isn’t going to improve because of any sort of “political” change of heart. Forrester market analyst Susan Bidel sums it up thus:

“While steps taken by … entities to curb fake news are admirable, as long as fake news generators can make money from their efforts, the problem won’t go away.”

So there we are. Bottom line: fake news is going to be with us for the duration – whether people like it or not.

What about you? Do you think you can spot every fake news story? Or do you think at least a few of them come in under the radar?