Search quality slips in people’s perceptions … or is it just that we’ve moved the goalposts?

Recently, the American Customer Satisfaction Index (ACSI) reported that the perceived quality of Google and other search platforms is on a downward trajectory. In particular, Google’s satisfaction score has declined two percentage points to 82 out of a possible 100.

Related to this trend, search advertising ROI is also declining. According to a report published recently by Analytic Partners, the return on investment from paid search dropped by more than 25% between 2010 and 2016.

In all likelihood, a falling ROI can be linked to lower satisfaction with search results.  But let’s look at things a little more closely.

First of all, Google’s customer satisfaction score of 82 is actually better than the 77 it received as recently as 2015. In any case, a score of 82 out of 100 isn’t too shabby in customer satisfaction surveys of this kind.

Moreover, Google has been in the business of search for a solid two decades now – an eternity in the world of the Internet. Google has always had a laser focus on optimizing the quality of its search results, since search is by far the company’s biggest revenue-generating product.

Obviously, Google hasn’t been out there with a static product. Far from it: Google’s search algorithms have been steadily evolving, to the degree that search results stand head-and-shoulders above where they were even five years ago. Back then, search queries typically returned generic results that weren’t nearly as well-matched to the actual intent of the searcher.

That sort of improvement is no accident.

But one thing has changed pretty dramatically – the types of devices consumers are using to conduct their searches. Just a few years back, chances are someone would be using a desktop or laptop computer where viewing SERPs containing 20 results was perfectly acceptable – and even desired for quick comparison purposes.

Today, a user is far more likely to be initiating a search query from a smartphone. In that environment, searchers don’t want 20 plausible results — they want one really good one.

You could say that “back then” it was a browsing environment, whereas today it’s a task environment, which creates a different mental framework within which people receive and view the results.

So, what we really have is a product – search – that has become increasingly better over the years, but the ground has shifted in terms of customer expectations.

Simply put, people are increasingly intolerant of results that are even a little off-base from the contextual intent of their search. And then it becomes easy to “blame the messenger” for coming up short – even if that messenger is actually doing a much better job than in the past.

It’s like so much else in one’s life and career: The reward for success is … a bar that’s set even higher.

In the “right to be forgotten” battles, Google’s on the defensive again.

Suddenly, the conflict between Google and the European Union countries regarding the censoring of search results has taken on worldwide proportions.

This past week, France’s data protection authority (CNIL) upheld its order for Google to broaden the “right to be forgotten” by censoring search results worldwide — not just in Europe.

Google had appealed the initial CNIL ruling.

The CNIL rejected Google’s argument that a worldwide implementation of the European standard of censoring search results would mean that the Internet would be only as free as the “least free place.” (Think Belarus or Syria.) In its ruling, the CNIL noted that a country-by-country implementation of the “right to be forgotten” would allow the right to be circumvented too easily.

While it’s true that more than 95% of Google searches in Europe are performed via European versions of the company’s search engine, such as google.fr and google.co.uk, identical searches can be performed using google.com, meaning that anyone trying to find “forgotten” information about an individual can do so easily, irrespective of the European standard.

As I blogged back in May, the European Court of Justice’s 2014 ruling means that Google is required to allow residents of EU countries to request the removal of links to certain harmful or embarrassing information that may appear about them in Google search results.

The directive has turned into a real thicket of challenges for Google.

The definition of “harmful or embarrassing” is somewhat amorphous, as the court’s ruling encompasses links to information ranging from excessive and harmful at one end of the scale all the way down to links that are merely outdated, inadequate or irrelevant.

Since the ruling went into effect, Google has had to field requests to remove more than one million links from European search results.

Link removal isn’t accomplished via some sort of “bot” procedure.  Instead, each request is considered on a case-by-case basis by a panel of arbiters made up of attorneys, paralegals and search engineers.

Approximately one-third of the links in question have been removed following panel review, while about half have remained in search results.

The rest – the real toughies – are still under review, their status as yet unresolved.

Obviously, having this activity spread from European search engines to potentially the entire world isn’t what Google has in mind at all.  (If Google could have its way, doubtless the whole notion of “the right to be forgotten” would be off the table.)

But the situation is getting pretty hot now. French authorities imposed a 15-day compliance deadline, after which Google could be fined nearly US$350,000.

Of course, the amount of that penalty pales in comparison to the cost Google would incur to comply with the directive.

But that fine is just the opening salvo; there’s no telling what the full degree of financial penalties might turn out to be for continued non-compliance.

I wrote before that it’s difficult to know where the world will eventually end up on the issue of censoring search engine results.  Today, I don’t think we’re anywhere closer to knowing.

Information, advertising and eye-tracking: Continuity among the chaos.

In the Western world, humans have been viewing and processing information in the same basic ways for hundreds of years. It’s a subconscious process aimed at making the most judicious use of time and effort when foraging for information.

Because of how we Westerners read and write – left-to-right and top-to-bottom – the way we search for information has evolved to mirror that same behavior.

And today we have eye-tracking research and mouse-click studies that prove the point.

In conducting online searches, the area where our attention lands first is known variously as the “area of greatest promise,” the “golden triangle,” or the “F-scan” diagram.

However you wish to name it, it generally looks like this on a Google search engine results page (SERP):

[Figure: F-scan diagram of a Google SERP]

It’s easy to see why the “area of greatest promise” exists. We generally look for information by scanning down the beginnings of the first listings on a page, and then continue viewing to the right if something seems to be a good match for our information needs, ultimately resulting in a clickthrough if our suspicions are correct.

Heat maps also show that quick judgments of information relevance on a SERP occur within this same “F-scan” zone; if we determine nothing is particularly relevant, we’re off to do a different keyword search.
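
To make the mechanics concrete, here’s a minimal sketch (in Python, with made-up fixation data, not figures from any of the studies cited) of how eye-tracking heat maps are typically built: each gaze fixation is binned into a grid cell over the page, weighted by how long the eye lingered there, and the densest cells mark the “area of greatest promise.”

```python
from collections import Counter

# Made-up gaze fixations as (x, y, duration): page coordinates in pixels,
# plus how long the eye lingered at that spot, in milliseconds.
fixations = [(120, 80, 310), (140, 95, 280), (480, 90, 120),
             (130, 260, 240), (150, 430, 90), (135, 610, 60)]

CELL = 100  # grid cell size in pixels; smaller cells give a finer-grained map

heat = Counter()
for x, y, duration_ms in fixations:
    # Accumulate total gaze time per grid cell.
    heat[(x // CELL, y // CELL)] += duration_ms

# The cells with the most accumulated gaze time approximate the
# "golden triangle," which sits at the upper left on a Western-language SERP.
for cell, total_ms in heat.most_common(3):
    print(f"cell {cell}: {total_ms} ms of gaze time")
```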

This is why it’s so important for websites to appear within the top 5-7 organic listings on a SERP – or within the top 1-3 paid search results in the right-hand column of Google’s SERP.

In recent years, Google and other search engines have been offering enhancements to the traditional SERP, ranging from showing images across the top of the page to presenting geographic information, including maps.

To what degree is this changing the “conditioning” of people who are seeking out information today compared to before?

What new eye-tracking and heat map studies are showing is that we’re evolving to look at “chunks” of information first for clues as to the promising areas of results. But then, within those areas, we revert to the same “F-scan” behaviors.

Here’s one example:

[Figure: updated eye-tracking heat map]

And there’s more:  The same eye-tracking and heat map studies are showing that this two-step process is actually more time-efficient than before.

We’re covering more of the page (at least on the first scan), but are also able to zero in on the most promising information bits on the page.  Once we find them, we’re quicker to click on the result, too.

So while Google and other search engines may be “conditioning” us to change time-honored information-viewing habits, it’s just as much that we’re “conditioning” Google to organize its SERPs in ways that are easiest and most beneficial to us in the way we seek out and find relevant information.

Bottom line, it’s continuity among the chaos. And it proves yet again that the same “prime positioning” on the page favored for decades by advertisers and users alike – above the fold and to the left – still holds true today.

Bing, Blekko, and more new developments in search.

When it comes to the evolution of online search, as one wag put it, “If you drop your pencil, you miss a week.”

It does seem that significant new developments in search crop up almost monthly – each one having the potential to up-end the tactics and techniques that harried companies attempt to put in place to keep up with the latest methods to target and influence customers. It’s simply not possible to bury your head in the sand, even if you wanted to.

Two of the newest developments in search include the introduction of a beta version of the new Blekko search engine with its built-in focus on SEO analytics — I’ll save that topic for a future blog post — along with a joint press conference held last week by Facebook and Microsoft, where they announced new functionality for the Bing search engine. More specifically, Bing will now display search results based on the experiences and preferences of people’s Facebook friends.

What makes the Bing/Facebook development particularly intriguing is that it adds a dimension to search that is genuinely new and different. Up until now, every consumer had his or her “search engine of choice” based on any number of reasons or preferences. But generally speaking, that preference wouldn’t be based on the content of the search results. That’s because it has been very difficult for search engines to deliver truly distinctive results, since they’re all built on essentially the same kinds of search algorithms.

[To prove the point, run the same search term on Yahoo and Google, and you’ll likely see that the natural search results are pretty similar to one another. There might be a different mix of image and video results, but generally speaking, the results rest on the same “crawling” capabilities of search bots.]
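
If you want to quantify that similarity rather than just eyeball it, one rough approach (purely illustrative; the URL sets below are placeholders for results you’d collect yourself) is to compute the overlap of the two engines’ top organic results:

```python
# Purely illustrative: the URL sets are placeholders for the top organic
# results you'd collect from each engine for the same query.
google_top = {"site-a.com", "site-b.com", "site-c.com", "site-d.com"}
yahoo_top = {"site-a.com", "site-b.com", "site-c.com", "site-e.com"}

# Jaccard similarity: shared results divided by all distinct results.
overlap = len(google_top & yahoo_top) / len(google_top | yahoo_top)
print(f"Result overlap: {overlap:.0%}")  # 60% for these placeholder sets
```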

The Bing/Facebook deal changes the paradigm in that new information heretofore residing behind Facebook’s wall will now be visible to selected searchers.

The implications of this are pretty interesting to contemplate. It’s one thing for people to read reviews or ratings written by total strangers about a restaurant or store on a site like Yelp. But now, if someone sees “likes,” ratings or comments from their Facebook friends, those will presumably carry more weight. With this new fount of information, the number of products, brands and services that people rate will surely rise as time goes on.

The implications are potentially enormous. Brands like Zappos have grown in popularity, and in consumer loyalty, because of their “authenticity.” The new Bing/Facebook module will provide ways for smaller brands to engender similar fierce loyalty on a smaller scale … without having to make the same huge brand-building commitment.

Of course, there’s a flip side to this. A company’s product had better be good … or else all of those hoped-for positive ratings and reviews could turn out to be the exact opposite!

What’s Happening with Web Search Behaviors?

More than 460 million searches are performed every day on the Internet by U.S. consumers. A new report titled 2010 SERP Insights Study from Performics, an arm of Publicis Groupe, gives us interesting clues as to what’s happening in the world of web search these days.

The survey, fielded by Lancaster, PA-based ROI Research, queried 500 U.S. consumers who use a search engine at least once per week. It found that people who search the Internet regularly are a persistent lot.

Nine out of ten respondents reported that they will modify their search and try again if they aren’t successful in their quest. Nearly as many will try an alternate search engine if they don’t succeed.

As for search engine preference, despite earnest efforts recently to knock Google down a notch or two, it remains fully ensconced on the top perch; three-fourths of the respondents in this survey identify Google as their primary search engine. Moreover, Google users are less likely to stray from their primary search engine and try elsewhere.

But interestingly, Google is the “search engine of choice” for seasoned searchers more than it is for newbies. The Performics study found that Google is the leading search engine for only ~57% of novice users, whereas Yahoo does much better among novices than regular users (~36% versus ~18% overall).

What about Bing? It’s continuing to look pretty weak across the board, with only ~7% preferring Bing.

The Performics 2010 study gives us a clear indication as to what searchers are typically seeking when they use search engines:

 Find a specific manufacturer or product web site: ~83%
 Gather information before making a purchase online: ~80%
 Find the best price for a product or service: ~78%
 Learn more about a product or service after seeing an ad elsewhere: ~78%
 Gather information before purchasing in-store or via a catalog: ~76%
 Find a location for purchasing a product offline: ~74%
 Find coupons, specials, or sales: ~63%

As for what types of listings are more likely to attract clickthroughs, brand visibility on the search engine results page turns out to be more important than you might think. Here’s how respondents rated the likelihood to click on a search result:

 … If it includes the exact words searched for: ~88%
 … If it includes an image: ~53%
 … If the brand appears multiple times on the SERP: ~48%
 … If it includes a video: ~26%

The takeaway message here: Spend more energy on achieving multiple high SERP rankings than in creating catchy video content!

And what about paid or sponsored links – the program that’s contributing so much to Google’s sky-high stock price? As more searchers come to understand the difference between paid and “natural” search rankings … fewer are drawn to them. While over 90% of the respondents in this research study reported having clicked on paid or sponsored listings at some point, only about one in five do so on a frequent basis.

Are “News Hound” Behaviors Changing?

Most of the people I know who are eager consumers of news tend to spend far more time on the Internet than they do offline with their nose in the newspaper.

So I was surprised to read the results of a new study published by Gather, Inc., a Boston-based online media company, which found that self-described “news junkies” are more likely to rely on traditional media sources like television, newspapers and radio than online ones.

In fact, the survey, which was fielded in March 2010 and queried the news consumption habits of some 1,450 respondents representing a cross-section of age and income demographics, found that more than half of the “news hounds” cited newspapers as their primary source of news.

By comparison, younger respondents (below age 25) are far more likely to utilize the Internet for reading news (~70% do so).

Another interesting finding in the Gather study – though not terribly surprising – is that younger respondents describe themselves as “interest-based,” meaning that apart from breaking news, they focus only on stories of interest to them. This pick-and-choose “cafeteria-style” approach to news consumption may partially explain the great gaps in knowledge that the “over 40” population segment perceives in the younger generations (those observations being reported with accompanying grunts of displeasure, no doubt).

As for sharing news online, there are distinct differences in the behavior of older versus younger respondents. Two findings are telling:

 More than two-thirds of respondents age 45 and older share news items with others primarily through e-mail communiqués.

 ~55% of respondents under age 45 share news primarily through social networking.

Also, more than 80% of the respondents in Gather’s study revealed that they have personally posted online comments about news stories. This suggests that people have now become more “active” in the news by weighing in with their own opinions, rather than just passively reading the stories. This is an interesting development that may be rendering the 90-9-1 principle moot.

[For those who are unfamiliar with the 90-9-1 rule, it contends that for every 100 people interacting with online content, one creates the content … nine edit, modify or comment on that content … and the remaining 90 passively read/review the content without undertaking any further action. It’s long been a tenet in discussions about online behavior.]
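
As a quick worked example of what the 90-9-1 rule predicts for a hypothetical audience:

```python
# Worked example of the 90-9-1 rule for a hypothetical audience size.
audience = 10_000  # say, monthly visitors to a story's comment section

creators = int(audience * 0.01)      # originate new content
contributors = int(audience * 0.09)  # edit, modify or comment on it
lurkers = int(audience * 0.90)       # read without taking further action

print(f"{creators} create, {contributors} contribute, {lurkers} lurk")
```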

What types of news stories are most likely to generate reader comments? Well, politics and world events are right up there, but local news stories are also a pretty important source for comments:

 Political stories: 28%
 National/international news stories: 27%
 Local news stories: 22%
 Celebrity news: 13%
 Sports stories: 5%
 Business and financial news: 5%

And what about the propensity for news seekers to use search engines to find multiple perspectives on a news story? More than one-third of respondents report that they “click on multiple [search engine] results to get a variety of perspectives,” while less than half of that number click on just the first one or two search result entries.

And why wouldn’t people hunt around more? In today’s world, it’s possible to find all sorts of perspectives and “slants” on a news story, whereas just a few years ago, you’d have to be content with the same AP or UPI wire story that you’d find republished in dozens of papers — often word-for-word.

B-to-B e-Newsletters: Just How Engaged are Recipients?

In the B-to-B world, marketers are sometimes disappointed with the open rates for the e-newsletters they deploy to their customers and prospects. While some are opened by a large proportion of recipients, it’s common for e-newsletter open rates to hover around 20%-25%.
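
For reference, the open rate cited here is conventionally calculated as unique opens divided by delivered (not sent) messages. A minimal sketch, with hypothetical figures:

```python
# Minimal sketch of the conventional open-rate calculation
# (all figures hypothetical).
sent = 10_000
bounced = 400                  # undeliverable addresses
delivered = sent - bounced     # the denominator uses delivered, not sent
unique_opens = 2_200           # each recipient counted at most once

open_rate = unique_opens / delivered
print(f"Open rate: {open_rate:.1%}")  # about 22.9%, within the 20%-25% range
```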

Does this mean that e-newsletters are a poor substitute for B-to-B print media? Unfortunately, it’s difficult to know how these results compare. After all, just because trade magazines are delivered to recipients doesn’t mean that they’re ever read.

It would be nice to compare B-to-B reader dynamics between print and online media, but with quantifiable statistics available for only one side of the equation, that’s pretty difficult.

However, GlobalSpec, the technology services company that operates a vertical search engine for engineering and industrial products, is able to provide us with a few additional clues. It has just published the results of its 2010 Economic Outlook Survey, which queried more than 2,000 U.S. technical, engineering, manufacturing and industrial professionals on a variety of business topics.

As part of the GlobalSpec survey, respondents were asked about their e-newsletter reading habits. And it turns out that more than half of the respondents (~55%) reported that they read work-related e-newsletters daily or several times a week.

Another 30% of respondents reported that they read e-newsletters once a week or several times per month. That leaves only 15% reporting that they rarely or never read e-newsletters.

What’s more, the readership of e-newsletters appears to be increasing. In GlobalSpec’s 2009 survey, only ~40% of respondents reported reading e-newsletters daily or several times per week. So the increase in activity over just the past year is substantial.

The takeaway news is that more people in the B-to-B segment are “engaged” with e-newsletters than ever before. Whether you’re achieving above or below the 20%-25% open rate threshold is likely a function of the quality of your content … along with how well you’re targeting the right names in your database.

Search Engine Rankings: Page 1 is Where It’s At

All the hype you continually hear about how important it is to get on Page 1 of search engine result pages turns out to be … right on the money.

In a just-released study from digital marketing company iCrossing, nearly 9 million “non-branded” search queries conducted on Google, Yahoo and Bing were analyzed, with clickthroughs tallied from the first, second and third search engine results pages (SERPs).

It turned out that more than 8.5 million clickthroughs were made from the first page of results – a whopping 95% of the total. The rest was just crumbs: Clicks off the second page came in under 250,000, while third-page clicks clocked in at a paltry ~180,000.
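
That arithmetic is easy to verify from the rounded figures:

```python
# Back-of-the-envelope check using the rounded click totals reported above.
clicks_by_page = {1: 8_500_000, 2: 250_000, 3: 180_000}
total = sum(clicks_by_page.values())

for page, clicks in clicks_by_page.items():
    print(f"Page {page}: {clicks / total:.1%} of clicks")
# Page 1 works out to roughly 95%, matching the study's headline figure.
```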

The results were essentially the same for all three major search engines (each at 95% or 96%) – a clean sweep across the board.

What this suggests is that when searching on generic or descriptive terms, most people will not go past the first page of results if they can’t find a site link that interests them. If they don’t hit paydirt on the first page, they’re far more likely to try another search using different keywords or phrases until they find a site on Page 1 that does the trick.

Comparing this newest iCrossing study with earlier research reveals that Page 1 clicks represent an even higher proportion today; studies from a few years back pegged the figure at 80% to 90%.

The implications of this study are clear: if you’re looking to attract visitors to your site via generic or descriptive subject searches, you’d better make sure your site achieves first-page ranking … or your web efforts will be for naught.

That being said, the recipe for success in ranking hasn’t changed much at all. Despite all of the tempting “link juice” tips and tricks out there, the main keys to getting high rankings continue to be creating loads of good web content … speaking the same “language” as searchers (however inaccurate that might be) … and maintaining lots of good links to and from your site to increase its “relevance” to search engines.

No doubt, it’s getting tougher to achieve Page 1 ranking when there’s so much competition out there, but it’s well worth the effort.

Bing Search: Pike’s Peak … or Halley’s Comet?

Well, it didn’t take long for the marketplace to render its verdict on the Bing search engine phenomenon. Fueled by a multimillion-dollar advertising rollout plus an aggressive PR push, Bing actually vaulted past Yahoo to become the #2 search engine … for one day, according to web tracking service StatCounter.

That’s right. According to StatCounter’s data, on June 4th Bing captured over 15% of the U.S. search market, while Yahoo had only around 10%. By the next day, Bing’s share had dropped below 10% while Yahoo notched up a point to 11%. And by Day 3, Bing’s share had fallen still further, to just under 7%.

Think it couldn’t get worse? The day after that, Bing was mired below 6% share.

Similar results were recorded worldwide.

What’s behind the primal shrug that Bing seems to have met in the marketplace? Certainly, all the PR hype was successful in getting people curious enough to click through and do a bit of tire-kicking. But it’s obvious that most weren’t particularly impressed by what they experienced, despite the fact that Bing does provide some user-friendly features not available over at Google.

But that’s not nearly enough for success. Google’s users are, by and large, quite satisfied with the search experience. It’s what they know. It’s comfortable. And unless there’s a compelling reason to switch — to change deep-seated habits — most people simply aren’t going to play ball … whether you put millions of dollars in advertising behind your pitch or not.

The folks at Google might have been shaken at least a bit on June 4th, when their market share of search dropped to 72%. But they needn’t have worried. Four days later, Google’s share was back up to 80% — where it had been to begin with.

Next case, please?

More Action on the Search Engine Front

Despite the fact that Google has proven itself to be all but immune from threats posed by competing search engines, hope springs eternal. Within the past couple of weeks alone, two new challengers have emerged, accompanied by much fanfare in the business press.

Microsoft takes yet another swipe at Google with its new Bing search engine. Bing is based on an earlier engine called “Kumo,” and some industry observers — though not all — believe it is a pretty good competitor. Reviewers are particularly pleased with the presentation of refined versions of search queries. Bing also features a rollover display of each link’s content, allowing you to see how useful it will be before clicking through to the site.

The search engine also appears to index more recent “breaking news” items, whereas with Google, those results are not shown unless you click through to Google News — an extra step.

The big question is whether Bing will be able to wean web users away from their habit of searching on Google as their default choice. Certainly, Microsoft is putting some serious promotional dollars behind the launch — upwards of $100 million, according to Advertising Age magazine. But based on the tea leaves, a wholesale change in search behavior seems unlikely. Search habits aren’t going to change dramatically unless there is a dramatic improvement in the effectiveness and speed of search activity. From what we see of Bing so far, we’re talking about improvements nibbling around the margins rather than big sweeping change.

But “big sweeping change” just might be the recipe for Wolfram/Alpha, the other new entrant in the search engine sweepstakes. That’s because W/A isn’t actually a search engine in the classic sense. Instead, its developers refer to it as a “computational knowledge engine” that uses complex algorithms to search databases and come up with answers to questions, rather than presenting a list of sources where the answer might be found. It can report some really cool factual results based simply on the user typing in, for example, a date range, several city names, or an animal species.

The key difference between Wolfram/Alpha and Google is that W/A does not index web pages. Instead, it draws answers from a wide range of information-packed databases. So if you want to know the number and magnitude of hurricanes hitting North America in the past 15 years, you’ll get a specific answer rather than a series of web links in which the answer might be hiding.
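
To illustrate the architectural difference in miniature, here’s a toy sketch (invented data and function names; not W/A’s actual implementation): a conventional engine returns documents that might contain the answer, while a computational engine computes the answer itself.

```python
# Toy contrast (invented data; not W/A's actual implementation).
# A structured dataset a computational engine might query directly:
hurricanes_per_year = {1995: 11, 1996: 9, 1997: 3, 1998: 10, 1999: 8}

def search_engine(query: str) -> list[str]:
    # A traditional engine returns links to documents that may hold the answer.
    return ["weather-example.com/hurricanes", "news-example.com/storm-archive"]

def knowledge_engine(start: int, end: int) -> str:
    # A computational engine computes the answer from structured data.
    count = sum(n for year, n in hurricanes_per_year.items() if start <= year <= end)
    return f"{count} hurricanes recorded between {start} and {end}"

print(search_engine("hurricanes 1995-1999"))  # a list of places to go hunting
print(knowledge_engine(1995, 1999))           # a direct, computed answer
```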

Some observers see the potential for W/A and Google to team up rather than compete against one another. After all, what they do isn’t directly competitive, but in more respects complementary. And in an interesting twist, it turns out that Stephen Wolfram, the ~50-year-old computer scientist and developer who created the software platform upon which W/A is based (called “Mathematica”), once supervised a summer intern by the name of Sergey Brin — who would go on to develop Google with partner Larry Page.

Sergey and Stephen teaming up once again would be quite the coincidence … or would it really?