Today I was talking with one of my company’s longtime clients about how challenging it is to capture people’s attention in targeted marketing campaigns.
Her view is that it’s become progressively more difficult over the past dozen years or so.
Empirical research bears this out, too. Using data from a variety of sources including Twitter, Google+, Pinterest, Facebook and Google, Statistic Brain Research Institute’s Attention Span Statistics show that the average attention span for an “event” on one of these platforms was 8.25 seconds in 2015.
Compare that to 15 years earlier, when the average attention span for similar events was 12.0 seconds.
That’s a reduction in attention span time of nearly one-third.
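For those keeping score at home, here’s a quick back-of-the-envelope check of that “nearly one-third” figure, using the spans cited above:

```python
# Average attention spans cited above: 12.0 seconds in 2000, 8.25 seconds in 2015.
span_then, span_now = 12.0, 8.25

reduction = (span_then - span_now) / span_then
print(f"Reduction: {reduction:.1%}")  # -> Reduction: 31.2%, i.e. nearly one-third
```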
Considering Internet browsing statistics more specifically, an analysis of ~60,000 web page views found these behaviors:
Percent of page views lasting more than 10 minutes: ~4%
Percent of page views lasting fewer than 4 seconds: ~17%
Percent of words read on web pages of ~100 words or fewer: ~49%
Percent of words read on an average web page (~600 words): ~28%
The same study discovered what surely must be an important reason why attention spans have been contracting. How’s this for a tidy statistic: the average office worker checks his or her e-mail inbox 30 times every hour.
Stats like the ones above help explain why my client – and so many others just like her – are finding it harder than ever to attract and engage their prospects.
Fortunately, factors like good content and good design can help surmount these difficulties. It’s just that marketers have to try harder than ever to achieve a level of engagement that used to come so easily.
More results from the Statistic Brain Research Institute study can be found here.
There are a growing number of reasons why more marketers these days are referring to the largest social media platform as “Fakebook.”
Back last year, it came to light that Facebook’s video view volumes were being significantly overstated – and the outcry was big enough that the famously tightly controlled social platform finally agreed to submit its metrics reporting to outside oversight.
To be sure, that decision was “helped along” by certain big brands threatening to significantly cut back their Facebook advertising or cease it altogether.
Now comes another interesting wrinkle. According to its own statistics, Facebook claims it can reach millions of Americans across several important age demographics, as follows:
18-24 year-olds: ~41 million people
25-34 year-olds: ~60 million people
35-49 year-olds: ~61 million people
There’s one slight problem with these stats: U.S. Census Bureau data indicates that the total number of people living in the United States in the 18-49 age group is 137 million.
That’s 25 million fewer than the 162 million people counted by Facebook – in other words, Facebook’s figure overshoots the Census count by roughly 18%.
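To make the arithmetic explicit, here’s a quick check using the figures cited above:

```python
# Facebook's claimed U.S. reach by age bracket, in millions (figures cited above).
facebook_reach = 41 + 60 + 61  # 18-24, 25-34 and 35-49 year-olds

census_total = 137  # U.S. Census Bureau count of residents aged 18-49, in millions

gap = facebook_reach - census_total
print(f"Facebook: {facebook_reach}M vs. Census: {census_total}M")
print(f"Overcount: {gap}M ({gap / census_total:.0%} above the Census figure)")
```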
What could be the reason(s) for the overcount? As reported by Business Insider journalist Alex Heath, a Facebook spokesperson has attributed the “over-counting” to foreign tourists engaging with Facebook’s platform while they’re in the United States.
That seems like a pretty lame explanation – particularly since Americans touring abroad presumably offset foreign visitors here in roughly equal measure.
There’s also the fact that some people maintain multiple Facebook accounts. But it stretches credulity to think that multiple accounts explain more than a small portion of the differential.
Facebook rightly points out that its audience reach stats are designed to estimate how many people in a given geographic area are eligible to see an ad that a business might choose to run, and that this projected reach has no bearing on the actual delivery and billing of ads in a campaign.
In other words, the advertising would be reaching “real” people in any case.
Still, such discrepancies aren’t good to have in an environment where many marketers already believe that social media advertising promises more than it actually delivers. After all, “reality check” information like this is just a click away in cyberspace …
Sparring over the guarantees and limits of free speech seems to be growing rather than abating.
[Pictured: advertising rejected by the Washington Metropolitan Area Transit Authority as being too political for public display.]
The most recent indication of just how much confusion there is on the topic of free speech comes in the form of a newly filed lawsuit brought by the American Civil Liberties Union against the Washington Metropolitan Area Transit Authority (WMATA) – the public agency popularly known as the DC Metro.
The lawsuit was sparked by a number of ads that WMATA refused to display due to concerns that the advertising content was “too political for public display.”
Countering WMATA’s efforts to avoid “offending” its customers, the ACLU chose to sue on behalf of itself as well as three companies and organizations that include:
Carafem – a healthcare network specializing in birth control and medication abortion
Milo Worldwide, LLC – the corporate entity behind the libertarian political advocate and “extreme commentator” Milo Yiannopoulos
PETA Foundation (aka FSAP – Foundation to Support Animal Protection) – an animal rights/welfare organization
The lawsuit claims that WMATA refused to display advertising from these organizations for fear of offending some of the people who use its transportation services.
In announcing its intention to defend itself against the ACLU suit, a WMATA spokesperson stated:
“In 2015, WMATA’s board of directors changed its advertising forum to a nonpublic forum and adopted commercial advertising guidelines that prohibit issue-oriented ads, including political, religious and advocacy ads. WMATA intends to vigorously defend its commercial advertising guidelines, which are reasonable and viewpoint-neutral.”
On the point of whether the advertising in question is “issue-oriented,” there is sharp disagreement.
Gabe Walters, manager of legislative affairs for the PETA Foundation, emphasizes that “the government cannot pick and choose who gets to speak based on their viewpoint – no matter how controversial.”
A spokesperson for Milo Yiannopoulos echoed the PETA Foundation statement: “On this issue we are united: It is not for the government to chase so-called ‘controversial’ content out of the public square.”
Considering the ads that were rejected, a case could be made that they’re hardly “controversial” on their face:
The Milo Worldwide ads featured a photo of Milo Yiannopoulos.
The Carafem ad copy stated simply “for abortion up to 10 weeks.”
The PETA ad showed a pig with the caption, “I’m ME, not MEAT. See the Individual. Go Vegan.”
The ACLU ad quoted the First Amendment’s language verbatim.
The ACLU suit contends that none of the advertising in question falls outside fundamental free-speech protections. Moreover, the abortion pill provided by Carafem is FDA-approved as well as accepted by the American Medical Association.
Even more problematic for the WMATA’s defense, at the same time the agency was rejecting the PETA ad, it approved one from Chipotle promoting a menu item made with pork.
The only difference between them according to the ACLU? The Chipotle ad sends the message that it’s good to eat pork, whereas the PETA ad says the opposite.
Looking at the contours of the lawsuit and the facts of the case, I think WMATA’s defense is on pretty shaky ground – and for this reason, I suspect the ACLU lawsuit is going to succeed.
Indeed, it’s somewhat distressing that such a suit had to be filed at all, because its point is precisely what the First Amendment is all about: protecting everyone’s speech.
That people are having to re-litigate the issue of free speech in 2017 speaks volumes about the level of confusion that has been introduced into the public sphere in recent years.
Over the past decade or so, consumers have been faced with basically two options regarding unwanted e-mail that comes into their often-groaning inboxes. And neither one seems particularly effective.
One option is to unsubscribe from unwanted e-mails. But many experts caution against doing this, claiming that it risks attracting even more spam instead of stopping the delivery of unwanted mail. Or it could be even worse: clicking the unsubscribe link might trigger something far more nefarious on one’s computer.
On the other hand, ignoring junk e-mail or sending it to the spam folder doesn’t seem to be a very effective response, either. Both Google and Microsoft are famously ineffective at determining which e-mails actually constitute “spam.” It isn’t uncommon for e-mail replies to the person who originated a discussion to get sent to the spam folder.
How can that be? Google and Microsoft might not even know the answer (and even if they did, they’re not saying a whole lot about how those determinations are made).
Even more irritating – at least for me personally – is finding that far too many e-mails from colleagues in my own company are being sent to spam – and the e-mails in question don’t even contain attachments.
How are consumers responding to the crossed signals being telegraphed about spam e-mail? A recent survey conducted by digital marketing firm Adestra found that nearly three-fourths of consumers are using the unsubscribe button – up from two-thirds of respondents in the 2016 survey.
What this result tells us is that the unsubscribe button may be working more often than not. If that means the unwanted e-mails stop arriving, then that’s a small victory for the consumer.
[To access a summary report of Adestra’s 2017 field research, click here.]
What’s been your personal experience with employing “ignore” versus “unsubscribe” strategies? Please share your thoughts with other readers.
I’ve blogged before about the most expensive keywords in search engine marketing. Back in 2009, it was “mesothelioma.”
Of course, that was eight years and a lifetime ago in the world of cyberspace. In the meantime, asbestos poisoning has become a much less lucrative target of ambulance-chasing attorneys looking for multi-million dollar court settlements.
Today, we have a different set of “super-competitive” keyword terms vying for the notoriety of being the “most expensive” ones out there. And while none of them are flirting with the $100 per-click pricing that mesothelioma once commanded, the pricing is still pretty stratospheric.
According to recent research conducted by online advertising software services provider WordStream, the most expensive keyword categories in Google AdWords today are these:
“Business services”: $58.64 average cost-per-click
“Bail bonds”: $58.48
“Casino”: $55.48
“Lawyer”: $54.86
“Asset management”: $49.86
Generally, what makes these and other terms so expensive is the “immediacy” of the needs or challenges that searchers are looking to solve.
Indeed, other terms commanding high-end pricing include “plumber,” “termites,” and “emergency room near me.”
Amusingly, one of the most expensive keywords on Google AdWords is … “Google” itself. That term ranks 25th on the list of the most expensive keywords.
[To see the complete listing of the 25 most expensive keywords found in WordStream’s research, click here.]
WordStream also conducted some interesting ancillary research during the same study. It analyzed the best-performing ad copy associated with the most expensive keywords to determine which words were the most successful in driving clickthroughs.
This textual analysis found that the most lucrative calls-to-action included ad copy containing the following terms (a rough sketch of this kind of analysis follows the list):
Build
Buy
Click
Discover
Get
Learn
Show
Sign up
Try
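For the curious, here’s a minimal sketch of how this kind of call-to-action analysis might be run. The ad copy, click and impression figures below are invented for illustration – this is not WordStream’s actual data or methodology:

```python
# Compare average clickthrough rates (CTR) for ads whose copy contains each CTA term.
CTA_TERMS = ["build", "buy", "click", "discover", "get", "learn", "show", "sign up", "try"]

# (ad copy, clicks, impressions) -- entirely made-up sample data
ads = [
    ("Get a free quote today", 120, 2_000),
    ("Discover our asset management services", 45, 1_500),
    ("Sign up now and try it free for 30 days", 200, 2_500),
    ("Industry-leading business services", 30, 1_800),
]

for term in CTA_TERMS:
    matches = [(clicks, imps) for text, clicks, imps in ads if term in text.lower()]
    if matches:
        total_clicks = sum(c for c, _ in matches)
        total_imps = sum(i for _, i in matches)
        print(f"{term!r}: {total_clicks / total_imps:.2%} CTR across {len(matches)} ad(s)")
```

Run against a real campaign export, the same loop would surface which call-to-action words actually earn their keep.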
Are there keyword terms in your own business category or industry that you feel are way overpriced in relation to the value they deliver for the promotional dollar? If so, which ones?
How many times have you noticed location data on Google Maps and other online mapping services that is out of date or just plain wrong? I encounter it quite often.
It hits close to home, too. While most of my company’s clients don’t usually have reason to visit our office (because they’re from out of state or otherwise situated pretty far away from our location in Chestertown, MD), for the longest while the Google Maps pin for our company pointed viewers to … a stretch of weeds in an empty lot.
It turns out, the situation isn’t uncommon. Recently, the Wawa gas-and-food chain hired an outside firm to verify its location data on Google, Facebook and Foursquare. What Wawa found was that some 2,000 address entries had been created by users, including duplicate entries and ones with incorrect information.
Unlike my company, which doesn’t rely on foot traffic for business, a company like Wawa depends on foot traffic as the lifeblood of its operations. As such, Wawa is a high-volume advertiser with numerous campaigns and promotions going at once – including ones on crowdsourced driving and traffic apps like Google’s Waze.
With so much misleading location data swirling around, the last thing a company needs is a scathing review on social media because someone was left staring at a patch of weeds in an empty lot instead of being able to redeem a new digital coupon for a gourmet cookie or whatever.
Problems with incorrect mapping don’t happen just because of user-generated bad data, either. As in my own company’s case, the address information can be completely accurate – and yet somehow the map pin associated with it is misplaced.
Companies such as MomentFeed and Ignite Technologies have been established for the very purpose of identifying and cleaning up bad map data such as this. It can’t be a one-and-done effort, either; most companies find that it’s yet another task needing continuing attention – much like e-mail database list hygiene activities.
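As a rough illustration, here’s a minimal sketch of one such automated hygiene check: comparing where a geocoder says an address is against where the published map pin actually sits, and flagging pins that have drifted too far. The coordinates and the 100-meter tolerance are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius, in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical listing: geocoded street address vs. the published map pin.
geocoded_location = (39.2090, -76.0663)  # illustrative coordinates near Chestertown, MD
published_pin     = (39.2150, -76.0590)

drift = haversine_m(*geocoded_location, *published_pin)
if drift > 100:  # arbitrary 100 m tolerance
    print(f"Pin sits {drift:.0f} m from the geocoded address -- flag for review")
```

Scheduled across every listing a brand maintains, a check like this turns “continuing attention” into a routine report rather than an unpleasant surprise.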
Perhaps the worst online map data clanger I’ve read about was a retail store whose pin location placed it 800 miles east of the New Jersey coastline in the middle of the Atlantic Ocean. What’s the most spectacular mapping fail you’ve come across personally?
Recently, the American Customer Satisfaction Index (ACSI) reported that the perceived quality of Google and other search platforms is on a downward trajectory. In particular, Google’s satisfaction score has declined two points to 82 out of a possible 100.
Related to this trend, search advertising ROI is also declining. According to a report published recently by Analytic Partners, the return on investment from paid search dropped by more than 25% between 2010 and 2016.
In all likelihood, a falling ROI can be linked to lower satisfaction with search results. But let’s look at things a little more closely.
First of all, Google’s customer satisfaction score of 82 is actually better than the 77 score it had received as recently as 2015. In any case, attaining a score of 82 out of 100 isn’t too shabby in such customer satisfaction surveys.
Moreover, Google has been in the business of search for nearly two decades now – an eternity in the world of the Internet. Google has always had a laser focus on optimizing the quality of its search results, seeing as how search is (by far) the company’s biggest revenue-generating “golden egg.”
Obviously, Google hasn’t been out there with a static product. Far from it: Google’s search algorithms have evolved steadily, to the degree that search results stand head-and-shoulders above where they were even five years ago. Back then, search queries typically returned generic results that weren’t nearly as well-matched to the actual intent of the searcher.
That sort of improvement is no accident.
But one thing has changed pretty dramatically – the types of devices consumers are using to conduct their searches. Just a few years back, chances are someone would be using a desktop or laptop computer where viewing SERPs containing 20 results was perfectly acceptable – and even desired for quick comparison purposes.
Today, a user is far more likely to be initiating a search query from a smartphone. In that environment, searchers don’t want 20 plausible results — they want one really good one.
You could say that “back then” it was a browsing environment, whereas today it’s a task environment, which creates a different mental framework within which people receive and view the results.
So, what we really have is a product – search – that has become increasingly better over the years, but the ground has shifted in terms of customer expectations.
Simply put, people are increasingly intolerant of results that are even a little off-base from the contextual intent of their search. And then it becomes easy to “blame the messenger” for coming up short – even if that messenger is actually doing a much better job than in the past.
It’s like so much else in one’s life and career: The reward for success is … a bar that’s set even higher.
There are some interesting new trends we’re now seeing in programmatic ad buying. For years, purchasing online ads programmatically instead of directly with specific publishers or media companies has been on a steady increase. No more.
MediaRadar has just released its latest Consumer Advertising Report covering ad spending, formats and buying patterns. The new report states that programmatic ad buying declined ~12% when comparing the first quarter of 2017 to the same period in 2016.
More specifically, whereas ~45,000 advertisers purchased advertising programmatically in Q1 2016, that figure dropped to ~39,500 for the same quarter this year.
This change in fortunes may come as a surprise to some. The market has generally been bullish on programmatic ad buying because it is far less labor-intensive to administer those types of programs compared to direct advertising programs.
There have been ongoing concerns about the potential for fraud, the lack of transparency in ad pricing, and advertisers’ limited control over where their placements actually appear – but up until now, those concerns weren’t strong enough to reverse the steady migration to programmatic buying.
Todd Krizelman, CEO of MediaRadar, had this to say about the new findings:
“For many years, the transition of dollars from direct ad buying to programmatic seemed inevitable, and impossible to roll back. But the near-constant drumbeat of concern over brand safety and fraud in the first six months of 2017 has slowed the tide. There’s more buying of direct advertising, especially sponsored editorial, and programmatically there is a ‘flight to quality’.”
Krizelman touches on another major new finding from the MediaRadar report: how much better native advertising performs compared with traditional ad units. Audiences tend to look at advertorials more frequently than display ads, and the clickthrough rates on mobile native advertising, in particular, are running four times higher than what mobile display ads garner.
Not surprisingly, the top market categories for native advertising are ones which lend themselves well to short, pithy stories. Travel, entertainment, home, food and apparel categories score well, as do financial and real estate stories.
The MediaRadar report is based on some pretty exhaustive statistics, with data analyzed from more than 265,000 advertisers covering the buying of digital, native, mobile, video, e-mail and print advertising. For more detailed findings, follow this link.
If it seems to you that chief marketing officers last only a relatively short time in their positions, you aren’t imagining things.
The reality is that of all the senior management positions at many companies, the chief marketing officer’s seat is the one that turns over most often.
To understand why, think of the four key aspects of marketing you learned in business school: Product-Place-Price-Promotion.
Now, think about what’s been happening in recent times to the “4 Ps” of the marketing discipline. In companies where there are a number of “chief” positions – chief innovation officers, chief growth officers, chief technology officers, chief revenue officers and the like – those other positions have encroached on traditional marketing roles to the extent that in many instances, the CMO no longer has clear authority over them.
It’s fair to say that of the 4 Ps, the only one that’s still the clear purview of the CMO is “Promotion.”
… Which means that the chief marketing officer is more accurately operating as a chief advertising officer.
Except … when it comes to assigning responsibility (or blame, depending on how things are going), the chief marketing officer still gets the brunt of that attention.
“All the responsibility with none of the authority” might be overstating it a bit, but one can see how the beleaguered marketing officer could be excused for thinking precisely that when he or she is in the crosshairs of negative attention.
Researcher Debbie Qaqish at The Pedowitz Group, who is also author of the book The Rise of the Revenue Marketer, reports that as many as five C-suite members typically share growth and revenue responsibility inside a company … but the CMO is often the one held responsible for any missed targets.
With organizational characteristics like these, it’s no wonder the average CMO tenure is half that of a CEO (four years versus eight) – a statistic reported by Neil Morgan and Kimberly Whitler in the July 2017 issue of the Harvard Business Review.
What to do about these issues is a tough nut. There are good reasons why many traditional marketing activities have migrated into different areas of the organization. But it would be nice if company organizational structures and operational processes would keep pace with that evolution instead of staying stuck in the paradigm of how the business world operated 10 or 20 years ago.
Rapid change is a constant in the business world, and it’s always a challenge for companies to incorporate changing responsibilities into an existing organizational structure. But if companies want to have CMOs stick around long enough to do some good, a little more honesty and fairness about where true authority and true responsibility exist would seem to be in order.
Ad spending continues its quite-healthy growth, forecast to increase by about 10% in 2017 according to a study released this month by the Association of National Advertisers.
At the same time, there’s similarly positive news from digital advertising security firm White Ops on the ad fraud front. Its Bot Baseline Report, which analyzes the digital advertising activities of ANA members, is forecasting that economic losses due to bot fraud will decline by approximately 10% this year.
And yet … even with the expected decline, bot fraud is still expected to amount to a whopping $6.5 billion in economic losses.
The White Ops report found that traffic sourcing — that is, purchasing traffic from inorganic sources — remains the single biggest risk factor for fraud.
On the other hand, mobile fraud was considerably lower than expected. Moreover, fraud in programmatic media buys is no longer particularly riskier than general market buys, thanks to improved filtration controls and procedures at media agencies.
Meanwhile, a new study conducted by Fraudlogix, a fraud detection company that monitors ad traffic for sell-side companies, finds that the majority of ad fraud is concentrated within a very small percentage of sources in the real-time bidding programmatic market.
The Fraudlogix study analyzed ~1.3 billion impressions from nearly 60,000 sources over a month-long period earlier this year. Interestingly, sites with more than 90% fraudulent impressions represented only about 1% of publishers, even while they contributed ~11% of the market’s impressions.
While Fraudlogix found nearly 19% of all impressions to be “fake,” that fraudulent behavior is not spread evenly across the industry: according to its analysis, just 3% of sources are causing more than two-thirds of the ad fraud. [Fraudlogix defines a fake impression as one that generates ad traffic through means such as bots, scripts, click farms or hijacked devices.]
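As a rough illustration of how this kind of concentration analysis works, here’s a minimal sketch. The source IDs and impression counts below are invented for illustration – they are not Fraudlogix’s data:

```python
# Rank traffic sources by fraudulent impression volume and show how quickly
# a handful of sources accounts for most of the fraud.
sources = {  # source_id: (total_impressions, fraudulent_impressions) -- made-up numbers
    "src-001": (9_000_000, 8_500_000),
    "src-002": (4_000_000, 3_600_000),
    "src-003": (60_000_000, 1_200_000),
    "src-004": (50_000_000, 400_000),
    "src-005": (45_000_000, 150_000),
}

total_fraud = sum(fraud for _, fraud in sources.values())
ranked = sorted(sources.items(), key=lambda item: item[1][1], reverse=True)

cumulative = 0
for source_id, (total, fraud) in ranked:
    cumulative += fraud
    print(f"{source_id}: {fraud / total:.0%} fraudulent; "
          f"cumulative share of all fraud: {cumulative / total_fraud:.0%}")
```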
As Fraudlogix CEO Hagai Schechter has remarked, “Our industry has a 3% fraud problem, and if we can clamp down on that, everyone but the criminals will be much better for it.”
That’s probably easier said than done, however. Many of the culprits are “ghost” newsfeed sites. These sites are often used for nefarious purposes because they’re programmed to update automatically, making the sites seem “content-fresh” without publishers having to maintain them via human labor.
Characteristics of these “ghost sites” include cookie-cutter design templates … private domain registrations … and Alexa rankings way down in the doldrums. And yet they generate millions of impressions each day.
The bottom line is that the fraud problem remains huge. Three percent of sources might be a small percentage figure, but against the nearly 60,000 sources in the Fraudlogix study, it still means on the order of 1,800 sources causing a ton of ad fraud.
What would be interesting to consider is having traffic providers submit to periodic random tests to determine the authenticity of their traffic. Such testing could then establish ratings – some sort of real/faux ranking.
And just like in the old print publications world, traffic providers that won’t consent to be audited would immediately become suspect in the eyes of those paying for the advertising. Wouldn’t that development be a nice one …