The disappearing attention spans of consumers.

Today I was talking with one of my company’s longtime clients about how much of a challenge it is to attract the attention of people in target marketing campaigns.

Her view is that it’s become progressively more difficult over the past dozen years or so.

Empirical research bears this out, too. Using data from a variety of sources including Twitter, Google+, Pinterest, Facebook and Google, Statistic Brain Research Institute’s Attention Span Statistics show that the average attention span for an “event” on one of these platforms was 8.25 seconds in 2015.

Compare that to 15 years earlier, when the average attention span for similar events was 12.0 seconds.

That’s a reduction in attention span time of nearly one-third.
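As a quick sanity check on that fraction, here's the arithmetic (a minimal sketch using only the figures cited above):

```python
# Attention-span figures cited above, in seconds
span_2000 = 12.0   # average attention span, circa 2000
span_2015 = 8.25   # average attention span, 2015

# Relative reduction: (12.0 - 8.25) / 12.0 = 0.3125, i.e. just over 31%
reduction = (span_2000 - span_2015) / span_2000
print(f"{reduction:.1%}")
```

So "nearly one-third" checks out.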

Considering Internet browsing statistics more specifically, an analysis of ~60,000 web page views found these behaviors:

  • Page views lasting more than 10 minutes: ~4%
  • Page views lasting fewer than 4 seconds: ~17%
  • Words read on pages containing ~100 words or fewer: ~49%
  • Words read on an average page (~600 words): ~28%

The same study discovered what surely must be an important reason why attention spans have been contracting. Consider this tidy statistic: the average office worker checks his or her e-mail inbox 30 times per hour.

Stats like the ones above help explain why my client – and so many others just like her – find it harder than ever to attract and engage their prospects.

Fortunately, factors like good content and good design can help surmount these difficulties. It’s just that marketers have to try harder than ever to achieve a level of engagement that used to come so easily.

More results from the Statistic Brain Research Institute study can be found here.

Facebook attempts to shake the “Fakebook” mantle.

There are a growing number of reasons why more marketers these days are referring to the largest social media platform as “Fakebook.”

Back last year, it came to light that Facebook’s video view volumes were being significantly overstated – and the outcry was big enough that the famously tightly controlled social platform finally agreed to submit its metrics reporting to outside oversight.

To be sure, that decision was “helped along” by certain big brands threatening to significantly cut back their Facebook advertising or cease it altogether.

Now comes another interesting wrinkle. According to its own advertising statistics, the social network claims it can reach millions of Americans across several important age demographics:

  • 18-24 year-olds: ~41 million people
  • 25-34 year-olds: ~60 million people
  • 35-49 year-olds: ~61 million people

There’s one problem with these stats: U.S. Census Bureau data indicates that the total number of people living in the United States in the 18-49 age group is 137 million.

That’s 25 million fewer than the 162 million people counted by Facebook – in other words, Facebook’s combined figure overshoots the Census count by roughly 18 percent.
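The discrepancy is easy to verify from the numbers above (a quick sketch; the figures are the ones cited in this post):

```python
# Facebook's claimed U.S. reach by age bracket, in millions (figures cited above)
facebook_reach = {"18-24": 41, "25-34": 60, "35-49": 61}
census_18_to_49 = 137  # U.S. Census Bureau count for ages 18-49, in millions

claimed = sum(facebook_reach.values())   # 162 million
overcount = claimed - census_18_to_49    # 25 million
pct_over = overcount / census_18_to_49   # ~0.18, i.e. an 18% overcount
print(claimed, overcount, f"{pct_over:.0%}")
```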

What could be the reason(s) for the overcount? As reported by Business Insider journalist Alex Heath, a Facebook spokesperson has attributed the “over-counting” to foreign tourists engaging with Facebook’s platform while they’re in the United States.

That seems like a pretty lame explanation – particularly since Americans touring abroad presumably offset the foreign tourists visiting the United States.

Some people do maintain multiple Facebook accounts. But it stretches credulity to think that duplicate accounts explain more than a small portion of the differential.

Facebook rightly points out that its audience reach stats are designed to estimate how many people in a given geographic area are eligible to see an ad that a business might choose to run, and that this projected reach has no bearing on the actual delivery and billing of ads in a campaign.

In other words, the advertising would be reaching “real” people in any case.

Still, such discrepancies aren’t good to have in an environment where many marketers already believe that social media advertising promises more than it actually delivers.  After all, “reality check” information like this is just a click away in cyberspace …

Free-speech “confusion-in-advertising” continues unabated.

Sparring over the guarantees and limits of free speech seems to be growing rather than abating.


The most recent indication of just how much confusion there is on the topic of free speech comes in the form of a recently filed lawsuit brought by the American Civil Liberties Union against the Washington Metropolitan Area Transit Authority (WMATA) – a public agency popularly known as the DC Metro.

The issue sparking the lawsuit relates to a number of ads that WMATA refused to display over concerns that their content was “too political for public display.”

Countering WMATA’s efforts to avoid “offending” its customers, the ACLU chose to sue on behalf of itself as well as three companies and organizations:

  • Carafem – a healthcare network specializing in birth control and medication abortion
  • Milo Worldwide, LLC – the corporate entity behind the libertarian political advocate and “extreme commentator” Milo Yiannopoulos
  • PETA Foundation (aka FSAP – Foundation to Support Animal Protection) – an animal rights/welfare organization

The lawsuit claims that WMATA refused to display advertising from these organizations for fear of offending some of the people who use its transportation services.

In announcing its intention to defend itself against the ACLU suit, a WMATA spokesperson stated:

“In 2015, WMATA’s board of directors changed its advertising forum to a nonpublic forum and adopted commercial advertising guidelines that prohibit issue-oriented ads, including political, religious and advocacy ads. WMATA intends to vigorously defend its commercial advertising guidelines, which are reasonable and viewpoint-neutral.”

On the point of whether the advertising in question is “issue-oriented,” there is sharp disagreement.

Gabe Walters, manager of legislative affairs for the PETA Foundation, emphasizes that “the government cannot pick and choose who gets to speak based on their viewpoint – no matter how controversial.”

A spokesperson for Milo Yiannopoulos echoed the PETA Foundation statement: “On this issue we are united:  It is not for the government to chase so-called ‘controversial’ content out of the public square.”

Considering the ads that were rejected, a case could be made that they’re hardly “controversial” on their face:

  • The Milo Worldwide ads featured a photo of Milo Yiannopoulos.
  • The Carafem ad copy stated simply “for abortion up to 10 weeks.”
  • The PETA ad showed a pig with the caption, “I’m ME, not MEAT. See the Individual. Go Vegan.”
  • The ACLU ad stated the First Amendment language verbatim.

The ACLU suit contends that nothing in the advertising places it outside the protection of free speech. Moreover, the abortion pill provided by Carafem is FDA-approved as well as accepted by the American Medical Association.

Even more problematic for the WMATA’s defense, at the same time the agency was rejecting the PETA ad, it approved one from Chipotle promoting a menu item made with pork.

The only difference between them according to the ACLU? The Chipotle ad sends the message that it’s good to eat pork, whereas the PETA ad says the opposite.

Looking at the contours of the lawsuit and the facts of the case, I think WMATA’s defense is on pretty shaky ground – and for this reason, I suspect the ACLU lawsuit is going to succeed.

Indeed, it’s somewhat distressing that such a suit had to be filed at all, because protecting everyone’s speech is precisely what the First Amendment is all about.

That people are having to re-litigate the issue of free speech in 2017 speaks volumes about the level of confusion that has been introduced into the public sphere in recent years.

It’s time to clear the air.

Consumers continue to grapple with what to do about spam e-mail.

Over the past decade or so, consumers have been faced with basically two options regarding unwanted e-mail that comes into their often-groaning inboxes. And neither one seems particularly effective.

One option is to unsubscribe from unwanted e-mails. But many experts caution against doing this, warning that it can invite even more spam rather than stopping the unwanted mail. Worse, clicking an unsubscribe link might trigger something far more nefarious on the user’s computer.

On the other hand, ignoring junk e-mail or sending it to the spam folder doesn’t seem to be a very effective response, either. Both Google and Microsoft are famously ineffective at determining which e-mails actually constitute “spam.” It isn’t uncommon for e-mail replies to the person who originated a discussion to get sent to the spam folder.

How can that be? Google and Microsoft might not even know the answer (and even if they did, they’re not saying a whole lot about how those determinations are made).

Even more irritating – at least for me personally – is finding that far too many e-mails from colleagues in my own company are being sent to spam, even though the e-mails in question don’t contain attachments.

How are consumers handling the crossed signals being telegraphed about how to handle spam e-mail? A recent survey conducted by digital marketing firm Adestra found that nearly three-fourths of consumers are using the unsubscribe button – up from two-thirds of respondents in the 2016 survey.

What this result tells us is that the unsubscribe button may be working more often than not. If the unwanted e-mails stop arriving, that’s a small victory for the consumer.

[To access a summary report of Adestra’s 2017 field research, click here.]

What’s been your personal experience with employing “ignore” versus “unsubscribe” strategies? Please share your thoughts with other readers.

Today’s most expensive keywords in search engine marketing.

I’ve blogged before about the most expensive keywords in search engine marketing. Back in 2009, it was “mesothelioma.”

Of course, that was eight years and a lifetime ago in the world of cyberspace. In the meantime, asbestos poisoning has become a much less lucrative target of ambulance-chasing attorneys looking for multi-million dollar court settlements.

Today, we have a different set of “super-competitive” keyword terms vying for the notoriety of being the “most expensive” ones out there.  And while none of them are flirting with the $100 per-click pricing that mesothelioma once commanded, the pricing is still pretty stratospheric.

According to recent research conducted by online advertising software services provider WordStream, the most expensive keyword categories in Google AdWords today are these:

  • “Business services”: $58.64 average cost-per-click
  • “Bail bonds”: $58.48
  • “Casino”: $55.48
  • “Lawyer”: $54.86
  • “Asset management”: $49.86

Generally, what makes these and other terms so expensive is the “immediacy” of the needs or challenges people are looking to solve.

Indeed, other terms with high-end pricing include “plumber,” “termites,” and “emergency room near me.”

Amusingly, one of the most expensive keywords on Google AdWords is … “Google” itself.  That term ranks 25th on the list of the most expensive keywords.

[To see the complete listing of the 25 most expensive keywords found in WordStream’s research, click here.]

WordStream also conducted some interesting ancillary research during the same study. It analyzed the best-performing ad copy associated with the most expensive keywords to determine which words were most successful in driving clickthroughs.

This textual analysis found that the most lucrative calls-to-action included ad copy containing the following terms:

    • Build
    • Buy
    • Click
    • Discover
    • Get
    • Learn
    • Show
    • Sign up
    • Try

Are there keyword terms in your own business category or industry that you feel are way overpriced in relation to the value they deliver for the promotional dollar? If so, which ones?

Why are online map locations so sucky so often?

How many times have you noticed location data on Google Maps and other online mapping services that are out-of-date or just plain wrong? I encounter it quite often.

It hits close to home, too. While most of my company’s clients don’t usually have reason to visit our office (because they’re from out of state or otherwise situated pretty far from our location in Chestertown, MD), for the longest while the Google Maps pin for our company pointed viewers to … a stretch of weeds in an empty lot.

It turns out the situation isn’t uncommon. Recently, the Wawa gas-and-food chain hired an outside firm to verify its location data on Google, Facebook and Foursquare. What Wawa found was that some 2,000 address entries had been created by users, including duplicate entries and ones with incorrect information.

Unlike my company, which doesn’t rely on foot traffic for business, Wawa depends on it as the lifeblood of its operations. As such, Wawa is a high-volume advertiser with numerous campaigns and promotions going at once — including ones on crowdsourced driving and traffic apps like Google’s Waze.

With so much misleading location data swirling around, the last thing a company needs is a scathing review appearing on social media because someone was left staring at a patch of weeds in an empty lot instead of being able to redeem a new digital coupon for a gourmet cookie or whatever.

Problems with incorrect mapping don’t happen just because of user-generated bad data, either. As in my own company’s case, the address information can be completely accurate – and yet somehow the map pin associated with it is misplaced.

Companies such as MomentFeed and Ignite Technologies exist specifically to identify and clean up bad map data like this. It can’t be a one-and-done effort, either; most companies find that it’s yet another task needing continuing attention – much like e-mail database list hygiene.

Perhaps the worst online map data clanger I’ve read about was a retail store whose pin location placed it 800 miles east of the New Jersey coastline in the middle of the Atlantic Ocean.  What’s the most spectacular mapping fail you’ve come across personally?

Search quality slips in people’s perceptions … or is it just that we’ve moved the goalpost?

Recently, the American Customer Satisfaction Index reported that the perceived quality of Google and other search platforms is on a downward trajectory. In particular, Google’s satisfaction score has declined two percentage points to 82 out of a possible high score of 100, according to the ACSI.

Related to this trend, search advertising ROI is also declining. According to a report published recently by Analytic Partners, the return on investment from paid search dropped by more than 25% between 2010 and 2016.

In all likelihood, a falling ROI can be linked to lower satisfaction with search results.  But let’s look at things a little more closely.

First of all, Google’s customer satisfaction score of 82 is actually better than the 77 score it had received as recently as 2015. In any case, attaining a score of 82 out of 100 isn’t too shabby in such customer satisfaction surveys.

Moreover, Google has been in the business of search for a solid two decades now – an eternity in the world of the Internet. Google has always had a laser-focus on optimizing the quality of its search results, seeing as how search is the biggest “golden egg” revenue-generating product the company has (by far).

Obviously, Google hasn’t been out there with a static product. Far from it: Google’s search algorithms have been steadily evolving, to the degree that search results stand head-and-shoulders above where they were even five years ago. Back then, search queries typically returned generic results that weren’t nearly as well-matched to the actual intent of the searcher.

That sort of improvement is no accident.

But one thing has changed pretty dramatically – the types of devices consumers are using to conduct their searches. Just a few years back, chances are someone would be using a desktop or laptop computer where viewing SERPs containing 20 results was perfectly acceptable – and even desired for quick comparison purposes.

Today, a user is far more likely to be initiating a search query from a smartphone. In that environment, searchers don’t want 20 plausible results — they want one really good one.

You could say that “back then” it was a browsing environment, whereas today it’s a task environment, which creates a different mental framework within which people receive and view the results.

So, what we really have is a product – search – that has become increasingly better over the years, but the ground has shifted in terms of customer expectations.

Simply put, people are increasingly intolerant of results that are even a little off-base from the contextual intent of their search. And then it becomes easy to “blame the messenger” for coming up short – even if that messenger is actually doing a much better job than in the past.

It’s like so much else in one’s life and career: The reward for success is … a bar that’s set even higher.