Search quality slips in people’s perceptions … or is it just that we’ve moved the goalposts?

Recently, the American Customer Satisfaction Index (ACSI) reported that the perceived quality of Google and other search platforms is on a downward trajectory. In particular, Google’s satisfaction score has declined two percentage points to 82 out of a possible 100.

Related to this trend, search advertising ROI is also declining. According to a report published recently by Analytic Partners, the return on investment from paid search dropped by more than 25% between 2010 and 2016.

In all likelihood, a falling ROI can be linked to lower satisfaction with search results.  But let’s look at things a little more closely.

First of all, Google’s customer satisfaction score of 82 is actually better than the 77 score it received as recently as 2015. In any case, a score of 82 out of 100 isn’t too shabby in customer satisfaction surveys of this kind.

Moreover, Google has been in the business of search for a solid two decades now – an eternity in the world of the Internet. Google has always kept a laser focus on optimizing the quality of its search results, seeing as how search is far and away the company’s “golden egg” revenue-generating product.

Obviously, Google hasn’t been out there with a static product. Far from it: Google’s search algorithms have been steadily evolving, to the degree that search results stand head-and-shoulders above where they were even five years ago. Back then, search queries typically returned generic results that weren’t nearly as well-matched to the actual intent of the searcher.

That sort of improvement is no accident.

But one thing has changed pretty dramatically – the types of devices consumers are using to conduct their searches. Just a few years back, chances are a search was being conducted on a desktop or laptop computer, where viewing SERPs containing 20 results was perfectly acceptable – even desirable for quick comparison purposes.

Today, a user is far more likely to be initiating a search query from a smartphone. In that environment, searchers don’t want 20 plausible results — they want one really good one.

You could say that “back then” it was a browsing environment, whereas today it’s a task environment, which creates a different mental framework within which people receive and view the results.

So, what we really have is a product – search – that has become increasingly better over the years, but the ground has shifted in terms of customer expectations.

Simply put, people are increasingly intolerant of results that are even a little off-base from the contextual intent of their search. And then it becomes easy to “blame the messenger” for coming up short – even if that messenger is actually doing a much better job than in the past.

It’s like so much else in one’s life and career: The reward for success is … a bar that’s set even higher.

More raps for Google on the “fake reviews” front.

Google is trying to keep its local search initiative from devolving into charges and counter-charges of “fake news” à la the most recent U.S. presidential election campaign – but is it trying hard enough?

It’s becoming harder for the reviews that show up on Google’s local search function to be considered anything other than “suspect.”

The latest salvo comes from search expert and author Mike Blumenthal, whose recent blog posts on the subject question Google’s willingness to level with its customers.

Mr. Blumenthal could be considered one of the premier experts on local search, and he’s been studying the phenomenon of fake information online for nearly a decade.

The gist of Blumenthal’s argument is that Google isn’t taking sufficient action to clean up fake reviews (and related service industry and affiliate spam) that appear on Google Maps search results, which is one of the most important utilities for local businesses and their customers.

Not only that, but Blumenthal also contends that Google is publishing reports which represent “weak research” that “misleads the public” about the extent of the fake reviews problem.

Google contends that the problem isn’t a large one. Blumenthal feels differently – in fact, he claims the problem is growing worse, not better.

In a blog article published this week, Blumenthal outlines how he’s built out spreadsheets of reviewers and the businesses on which they have commented.

From this exercise, he sees a pattern of fake reviews being written for overlapping businesses, and that somehow these telltale signs have been missed by Google’s algorithms.

A case in point: three “reviewers” — “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” — have all “reviewed” three very different businesses in completely different areas of the United States:  Bedoy Brothers Lawn & Maintenance (Nevada), Texas Car Mechanics (Texas), and The Joint Chiropractic (Arizona, California, Colorado, Florida, Minnesota, North Carolina).

They’re all 5-star reviews, of course.

It doesn’t take a genius to figure out that “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” won’t be found in the local telephone directories where these businesses are located. That’s because they’re figments of some spammer-for-hire’s imagination.
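The pattern Blumenthal spotted by hand amounts to a simple cross-referencing exercise. Here’s a minimal Python sketch of the idea — the data rows mirror the example above, but the input format and the overlap threshold are illustrative assumptions, not Blumenthal’s actual method or Google’s:

```python
from collections import defaultdict
from itertools import combinations

# Assumed input: (reviewer, business) rows scraped from local listings.
reviews = [
    ("Charlz Alexon", "Bedoy Brothers Lawn & Maintenance"),
    ("Charlz Alexon", "Texas Car Mechanics"),
    ("Charlz Alexon", "The Joint Chiropractic"),
    ("Ginger Karime", "Bedoy Brothers Lawn & Maintenance"),
    ("Ginger Karime", "Texas Car Mechanics"),
    ("Ginger Karime", "The Joint Chiropractic"),
    ("Jen Mathieu", "Bedoy Brothers Lawn & Maintenance"),
    ("Jen Mathieu", "Texas Car Mechanics"),
    ("Jen Mathieu", "The Joint Chiropractic"),
]

# Map each reviewer to the set of businesses they have reviewed.
by_reviewer = defaultdict(set)
for reviewer, business in reviews:
    by_reviewer[reviewer].add(business)

# Flag reviewer pairs whose histories overlap on two or more businesses --
# unrelated firms scattered across the country that genuine customers
# would almost never have in common.
for (a, a_biz), (b, b_biz) in combinations(by_reviewer.items(), 2):
    shared = a_biz & b_biz
    if len(shared) >= 2:
        print(f"Suspicious overlap: {a} / {b} -> {sorted(shared)}")
```

If a few dozen lines of script can surface the pattern, it’s hard to believe Google’s algorithms couldn’t.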

The question is, why doesn’t Google develop procedures to figure out the same obvious answers Blumenthal can see plain as day?

And the follow-up question: How soon will Google get serious about banning reviewers who post fake reviews on local search results? (And not just targeting the “usual suspect” types of businesses, but also professional listings such as those for physicians and attorneys.)

“If their advanced verification [technology] is what it takes to solve the problem, then stop testing it and start using it,” Blumenthal concludes.

To my mind, it would be in Google’s own interest to get to the bottom of these nefarious practices. If the general public comes to view reviews as “fake, faux and phony,” that’s just one step short of ceasing to use local search results at all – which would hurt Google in the pocketbook.

Might it get Google’s attention then?

When people think “search,” they still think “Google.”

… And they might say it, too — thanks to the rise of voice search.

Over the years, many things have changed in the world of cyberspace. But one thing seems to be pretty much a constant:  When people are in “search” mode online, most of them are playing in Google’s ballpark.

This behavior has been underscored yet again in a new survey of ~800 consumers conducted by Fivesight Research.

Take a look at these two statistics that show how strong Google’s search popularity remains today:

  • Desktop users: ~79% of searches are on Google
  • Smartphone users: ~86% use Google

The smartphone figure above is even more telling in that it holds whether users are on an iPhone or an Android device.

But here’s another very interesting finding from the Fivesight survey: Google’s biggest competition isn’t other search engines like Bing or Yahoo.  Instead, it’s Siri, which now accounts for ~6% of mobile search market share.

So what we’re seeing in search isn’t a shift to other providers, but rather a shift into new technologies. To illustrate, nearly three in four consumers are using voice technologies such as Siri, Google Now and Microsoft Cortana to supplement their traditional search activities.

Some marketing specialists contend that “voice search is the new search” – and it’s hard not to agree with them. Certainly, voice search has become easier in the past year or so as more mobile devices as well as personal home assistants like Amazon Alexa have been adopted by the marketplace.

It also helps that voice recognition technology continues to improve in quality, dramatically reducing the incidence of “machine mistakes” in understanding the meaning of voice search queries.

But whether it’s traditional or voice-activated, I suspect Google will continue to dominate the search segment for years to come.

That may or may not be a good thing for consumers. But it’s certainly a good thing for Google – seeing as how woefully ineffective the company has been in coming up with any other business endeavor even remotely as financially lucrative as its search business.

Who are the World’s Most Reputable Companies in 2016?

I’ve blogged before about the international reputation of leading companies and brands as calculated by various survey firms such as Harris Interactive.

One of these ratings studies is conducted by market research firm Reputation Institute, which collected nearly 250,000 ratings during the first quarter of 2016 from members of the public in 15 major countries throughout the world.

The nations included in the company reputation evaluation were the United States, Canada, Mexico and Brazil in the Americas … France, Germany, Italy, Spain, the United Kingdom and Russia in Europe … India, China, South Korea and Japan in Asia … as well as Australia.

Approximately 200 leading companies were rated by respondents on seven key dimensions of reputation:

  • Products and services
  • Innovation
  • Workplace
  • Governance
  • Citizenship
  • Leadership
  • Performance

In the 2016 evaluation, the top-rated companies scored “excellent” (a rating of 80 or higher on a 100-point scale) or “strong” (a rating of 70-79) in all seven reputation categories. 2016’s “Top 10” most reputable firms turned out to be these (ranked in order of their score):

#1 Rolex
#2 The Walt Disney Company
#3 Google
#4 BMW Group
#5 Daimler
#6 LEGO Group
#7 Microsoft
#8 Canon
#9 Sony
#10 Apple

Different companies scored highest on specific attributes, however:

  • Apple: #1 in Innovation and in Leadership
  • Google: #1 in Performance and in Workplace
  • Rolex: #1 in Products & Services
  • The Walt Disney Company: #1 in Citizenship and in Governance

At the other end of the scale, which company do you suppose suffered the worst year-over-year performance?

That dubious honor goes to Volkswagen.  In the wake of an emissions scandal affecting the brand internationally, VW’s reputation score plummeted nearly 14 points, which was enough to drop it out of the Top 100 brand listing altogether.

It’s quite a decline from VW’s #14 position last year.

The complete list of this year’s Top 100 Reputable Companies can be accessed via this link. You may see some surprises …

The “100% ad viewability” gambit: Gimmick or game-changer?

Say hello to the ad industry’s newest acronym: vCPM (viewable cost-per-thousand).

A few weeks back, Google announced that it will be introducing 100% viewable ads in the coming months, bringing all online ad campaigns bought on a CPM basis into view across its Google Display Network.

The news comes as a relief to advertisers, who have long complained about the high percentage of ads that never have a chance to be viewed by “real people.”

The statistic that Google likes to reference is that approximately 55% of all display ads are never viewed, due to a myriad of factors — such as appearing below the fold, being scrolled out of view, or showing up in a background tab.

And the problem is only growing larger with the increased adoption of ad blocker software tools.

Google isn’t the only one coming up with in-view advertising guarantees. Facebook recently announced that it will begin selling 100% viewable ads in its News Feed area.

But some are questioning how much of a benefit 100% viewability will deliver in actuality. For one thing, ad rates for these programs are sure to be higher than for conventional ad buying contracts.

For another, neither Facebook nor Google has stated how long an ad would need to remain in view before an advertiser gets charged. Whether it’s 1 second, 2 seconds or 5 seconds makes a huge difference in the real worth of that exposure to the consumer.
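To see just how much the threshold matters, here’s a quick Python sketch — the in-view durations are invented numbers, purely to illustrate how the same traffic yields very different billable counts under a 1-, 2- or 5-second cutoff:

```python
# Hypothetical in-view durations (in seconds) for ten ad impressions --
# made-up figures to show how the dwell threshold moves the bill.
in_view_seconds = [0.4, 0.8, 1.1, 1.3, 1.9, 2.4, 3.0, 4.2, 5.5, 7.0]

def billable(durations, threshold_s):
    """Count impressions that stayed in view for at least threshold_s seconds."""
    return sum(1 for d in durations if d >= threshold_s)

for threshold in (1, 2, 5):
    count = billable(in_view_seconds, threshold)
    print(f"{threshold}-second threshold: {count}/10 impressions billable")
# Output: 8/10, 5/10 and 2/10 -- same ad traffic, very different invoices.
```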

Then there’s the realm of mobile advertising. In a startling analysis, The New York Times measured the mix of advertising and editorial content on the mobile home pages of the top 50 news sites. What it found was that mobile airtime is being chewed up by advertising far more than by the editorial content people are actually tuning in to view.

Boston.com mobile readers are a case in point. The analysis found that its readers spend an average of ~31 seconds waiting for ads to load versus ~8 seconds waiting for the editorial content to load. That translates into a home page visitor paying about $9.50 per month in data charges — just to load the ads.
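For a sense of where a figure like that comes from, here’s a back-of-envelope reconstruction in Python. Only the ~$9.50 result comes from the Times’ analysis; the payload size, visit frequency and per-megabyte data rate below are assumptions picked to illustrate the arithmetic:

```python
# Back-of-envelope: monthly data cost of ads for a daily home page visitor.
ad_payload_mb   = 10.5   # assumed ad data downloaded per visit
visits_per_day  = 1      # assumed one home page visit per day
days_per_month  = 30
cost_per_mb_usd = 0.03   # assumed carrier data rate

monthly_cost = ad_payload_mb * visits_per_day * days_per_month * cost_per_mb_usd
print(f"~${monthly_cost:.2f} per month, just for the ads")  # ~$9.45 with these inputs
```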

When there’s suddenly a cost implication on top of the basic “irritation factor,” expect smartphone and tablet users to avail themselves of ad blockers even more than they do today.

As if on cue, Apple is now allowing ad blockers on the iPhone, giving consumers the ability to conserve data, make websites load faster, and save on usage charges all in one fell swoop.

Sounds like a pretty sweet deal all-around.

In the “right to be forgotten” battles, Google’s on the defensive again.

Suddenly, the conflict between Google and the European Union countries regarding the censoring of search results has taken on wider — indeed, worldwide — proportions.

This past week, the courts upheld an order from the French government’s data protection office (CNIL) requiring Google to broaden the “right to be forgotten” by censoring search results worldwide — not just in Europe.

Google had appealed the initial CNIL ruling.

The CNIL rejected Google’s argument that a worldwide implementation of the European standard of censoring search results would mean that the Internet would be only as free as the “least free place.” (Think Belarus or Syria.)  But in its ruling, the CNIL noted that a country-by-country implementation of the “right to be forgotten” would mean that the right could be circumvented too easily.

While it’s true that more than 95% of Google searches in Europe are performed via European versions of the company’s search engine, such as google.fr and google.co.uk, identical searches can be performed using google.com — meaning that anyone trying to find “forgotten” information about an individual can do so easily, irrespective of the European standard.

As I blogged back in May, the European Court of Justice’s 2014 ruling means that Google is required to allow residents of EU countries to request the removal of links to certain harmful or embarrassing information about themselves that may appear in Google search results.

The directive has turned into a real thicket of challenges for Google.

The definition of “harmful or embarrassing” is somewhat amorphous, as the court’s ruling encompassed links to information ranging from excessive and harmful at one end of the scale all the way down to links that are merely outdated, inadequate or irrelevant.

Since the ruling went into effect, Google has had to field requests to remove more than one million links from European search results.

Link removal isn’t accomplished via some sort of “bot” procedure.  Instead, each request is considered on a case-by-case basis by a panel of arbiters made up of attorneys, paralegals and search engineers.

Approximately one-third of the links in question have been removed following panel review, while about half have remained in search results.

The rest – the real toughies – are still under review, their status as yet unresolved.

Obviously, for this activity to spread from covering just European search engines to include potentially the entire world isn’t what Google has in mind at all.  (If Google could have its way, doubtless the whole notion of “the right to be forgotten” would be off the table.)

But the situation is getting pretty hot now. French authorities imposed a 15-day compliance deadline, after which Google could be fined nearly US$350,000.

Of course, the amount of that penalty pales in comparison to the cost Google would incur to comply with the directive.

But that fine is just the opening salvo; there’s no telling what the full degree of financial penalties might turn out to be for continued non-compliance.

I wrote before that it’s difficult to know where the world will eventually end up on the issue of censoring search engine results.  Today, I don’t think we’re anywhere closer to knowing.

Social media data mining: Garbage-in, garbage-out?

It’s human nature for people to strive for the most flattering public persona … while confining the “true reality” only to those who have the opportunity (or misfortune) to see them in their most private moments.

It goes far beyond just the closed doors of a family’s household. I know a recording producer who speaks about having to “wipe the bottoms” of music stars — an unpleasant thought if ever there was one.

In today’s world of interactivity and social platforms, things are amplified even more — and it’s a lot more public.

Accordingly, there are more granular data than ever about people, their interests and their proclivities.

The opportunities for marketers seem almost endless. At last we’re able to go beyond basic demographics and other conventional classifications, to now pinpoint and target marketing messages based on psychographics.

And to do so using the very terms and phrases people are using in their own social interactions.

The problem is … a good deal of social media is one giant head-fake.

Don’t just take my word for it. Consider remarks made recently by Rudi Anggono, one of Google’s senior creative staff leaders. He refers to data collected in the social media space as “a two-faced, insincere, duplicitous, lying sack of sh*t.”

Anggono is talking about information he dubs “declared data.” It isn’t information that’s factual and vetted, but rather data that’s influenced by people’s moods, insecurities, social agenda … and any other set of factors that shape someone’s carefully crafted public image.

In other words, it’s information that’s made up of half-truths.

This is nothing new, actually. It’s been going on forever.  Cultural anthropologist Genevieve Bell put her finger on it years ago when she observed that people lie because they want to tell better stories and to project better versions of themselves.

What’s changed in the past decade is social media, of course.  What better way to “tell better stories and project better versions of ourselves” than through social media platforms?

Instead of the once-a-year Holiday Letter of yore, any of us can now provide an endless parade of breathless superlatives about our great, wonderful lives and the equally fabulous experiences of our families, children, parents, A-list friends, and whoever else we wish to associate with our excellent selves.

Between Facebook, Instagram, Pinterest and even LinkedIn, reams of granular data are being collected on individuals — data which these platforms then seek to monetize by selling access to advertisers.

In theory, it’s a whole lot better-targeted than frumpy, old-fashioned demographic selects like location, age, income level and ethnicity.

But in reality, the information extracted from social is suspect data.

This has set up a big debate between Google — which promotes its search engine marketing and advertising programs based on the “intent” of people searching for information online — and Facebook and others who are promoting their robust repositories of psychographic and attitudinal data.

There are clear signs that some of the social platforms recognize the drawbacks of the ad programs they’re promoting — to the extent that they’re now trying to convince advertisers that they deserve consideration for search advertising dollars, not just social.

In an article published this week in The Wall Street Journal’s CMO Today blog, Tim Kendall, Pinterest’s head of monetization, contends that far from being merely a place where people connect with friends and family, Pinterest is more like a “catalogue of ideas,” where people “go through the catalogue and do searches.”

Pinterest has every monetary reason to present itself in this manner, of course.  According to eMarketer, in 2014 search advertising accounted for more than 45% of all digital ad spending — far more than ad spending on social media.

This year, the projections are for more than $26 billion to be spent on U.S. search ads, compared to only about $10 billion in the social sphere.

The sweet spot, of course, is being able to use declared data in concert with intent and behavior. And that’s why so much effort and energy is going into developing improved algorithms for generating data-driven predictive information that can accomplish those twin goals.

In the meantime, Anggono’s admonition about data mined from social media is worth repeating:

“You have to prod, extrapolate, look for the intent, play good-cop/bad-cop, get the full story, get the context, get the real insights. Use all the available analytical tools at your disposal. Or if not, get access to those tools. Only then can you trust this data.”

What are your thoughts? Do you agree with Anggono’s position? Please share your perspectives with other readers here.