Fewer brands are engaging in programmatic online advertising in 2017.

Why are we not surprised?

The persistent “drip-drip-drip” of brand safety concerns with programmatic advertising, along with the heightened perception that online ads have been showing up in the most unseemly of places, has finally caught up with the once-steady growth of economically priced programmatic advertising relative to higher-priced digital formats such as native advertising and video.

In fact, ad tracking firm MediaRadar is now reporting that the number of major brands running programmatic ads through the first nine months of 2017 has actually dropped compared to the same period a year ago.

The decline isn’t huge (2%, to be precise). But growing reports that leading brands’ ads have mistakenly appeared next to ISIS or neo-Nazi content on YouTube and elsewhere on the web have shaken advertisers’ faith in programmatic platforms’ ability to prevent such embarrassments from occurring.

For Procter & Gamble, for instance, the number of product brands shifted away from programmatic advertising to higher-priced formats jumped from 49 to 62 over the course of 2017.

For Unilever, the shift has been even greater, going from 25 product brands at the beginning of the year to 53 by the end of July.

The “flight to safety” by these and other brand leaders is easy to understand. Because they can be controlled, direct ad sales are viewed as far more brand-safe compared to programmatic and other automated ad-buying programs.

In the past, the substantial price differential between the two options was enough to convince many brands that the rewards of “going programmatic” outweighed the inherent risks.  No longer.

What this also means is that advertisers are looking at even more diverse media formats in an effort to find alternatives to programmatic advertising that can accomplish their marketing objectives without the attendant risks (and headaches).

We’ll see how that goes.

Facebook attempts to shake the “Fakebook” label.

There are a growing number of reasons why more marketers these days are referring to the largest social media platform as “Fakebook.”

Last year it came to light that Facebook’s video view counts were being significantly overstated, and the outcry was big enough that the famously tightly controlled social platform finally agreed to submit its metrics reporting to outside oversight.

To be sure, that decision was “helped along” by certain big brands threatening to significantly cut back their Facebook advertising or cease it altogether.

Now comes another interesting wrinkle: according to its own statistics, Facebook can reach millions of Americans across several important age demographics, as follows:

  • 18-24 year-olds: ~41 million people
  • 25-34 year-olds: ~60 million people
  • 35-49 year-olds: ~61 million people

There’s one slight problem with these stats: U.S. Census Bureau data indicates that the total number of people living in the United States in the 18-49 age group is 137 million.

That’s substantially lower than the 162 million people counted by Facebook: 25 million (18%) smaller, to be precise.
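
The arithmetic behind that comparison is easy to verify. Here’s a quick sketch using the rounded figures cited above:

```python
# Facebook's claimed U.S. ad reach by age bracket (rounded, in millions)
facebook_reach = {"18-24": 41, "25-34": 60, "35-49": 61}

census_18_to_49 = 137  # U.S. Census Bureau figure for ages 18-49, in millions

claimed = sum(facebook_reach.values())   # 41 + 60 + 61 = 162
gap = claimed - census_18_to_49          # 162 - 137 = 25

print(f"Facebook claims {claimed}M; the Census counts {census_18_to_49}M")
print(f"Overcount: {gap}M, or {gap / census_18_to_49:.0%} above the Census figure")
```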

What could be the reason(s) for the overcount? As reported by Business Insider journalist Alex Heath, a Facebook spokesperson has attributed the “over-counting” to foreign tourists engaging with Facebook’s platform while they’re in the United States.

That seems like a pretty lame explanation, particularly since tourism is reciprocal: Americans traveling abroad drop out of the U.S. figures in roughly the same numbers as foreign tourists enter them, so the two effects should largely cancel out.

There’s also the fact that some people maintain multiple Facebook accounts. But it stretches credulity to think that duplicate accounts explain more than a small portion of the differential.

Facebook rightly points out that its audience reach stats are designed to estimate how many people in a given geographic area are eligible to see an ad that a business might choose to run, and that this projected reach has no bearing on the actual delivery and billing of ads in a campaign.

In other words, the advertising would be reaching “real” people in any case.

Still, such discrepancies aren’t good to have in an environment where many marketers already believe that social media advertising promises more than it actually delivers.  After all, “reality check” information like this is just a click away in cyberspace …

Today’s most expensive keywords in search engine marketing.

I’ve blogged before about the most expensive keywords in search engine marketing. Back in 2009, it was “mesothelioma.”

Of course, that was eight years and a lifetime ago in the world of cyberspace. In the meantime, asbestos poisoning has become a much less lucrative target of ambulance-chasing attorneys looking for multi-million dollar court settlements.

Today, we have a different set of “super-competitive” keyword terms vying for the notoriety of being the “most expensive” ones out there.  And while none of them are flirting with the $100 per-click pricing that mesothelioma once commanded, the pricing is still pretty stratospheric.

According to recent research conducted by online advertising software services provider WordStream, the most expensive keyword categories in Google AdWords today are these:

  • “Business services”: $58.64 average cost-per-click
  • “Bail bonds”: $58.48
  • “Casino”: $55.48
  • “Lawyer”: $54.86
  • “Asset management”: $49.86

Generally, what makes these and other terms so expensive is the “immediacy” of the needs or challenges that the people searching on them are looking to solve.

Indeed, other terms commanding high-end pricing include “plumber,” “termites,” and “emergency room near me.”

Amusingly, one of the most expensive keywords on Google AdWords is … “Google” itself.  That term ranks 25th on the list of the most expensive keywords.

[To see the complete listing of the 25 most expensive keywords found in WordStream’s research, click here.]

WordStream also conducted some interesting ancillary research during the same study. It analyzed the best-performing ad copy associated with the most expensive keywords to determine which words were the most successful in driving clickthroughs.

This textual analysis found that the best-performing ad copy contained calls-to-action built around the following terms (a rough sketch of this kind of analysis appears after the list):

    • Build
    • Buy
    • Click
    • Discover
    • Get
    • Learn
    • Show
    • Sign up
    • Try
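
WordStream hasn’t published its exact methodology, but the underlying idea (correlating the words in ad copy with clickthrough performance) is simple to illustrate. Here’s a minimal, hypothetical sketch; the sample ads and the CTR cutoff are invented for illustration and are not WordStream’s actual data or method:

```python
from collections import Counter

# Invented sample of (ad copy, clickthrough rate) pairs -- not WordStream's data
ads = [
    ("Get a free quote today", 0.062),
    ("Sign up and save 20% now", 0.055),
    ("Try our planning tool free", 0.058),
    ("Discover smarter asset management", 0.044),
    ("Learn more about our services", 0.021),
    ("Contact our office for details", 0.017),
]

HIGH_CTR = 0.04  # illustrative cutoff separating the "best performers"

def word_counts(copies):
    """Tally lowercase word frequencies across a collection of ad copy."""
    counts = Counter()
    for text in copies:
        counts.update(text.lower().split())
    return counts

high = word_counts(text for text, ctr in ads if ctr >= HIGH_CTR)
low = word_counts(text for text, ctr in ads if ctr < HIGH_CTR)

# Words that show up more often in high-CTR copy than in low-CTR copy
for word, count in high.most_common():
    if count > low.get(word, 0):
        print(word, count)
```

A real analysis would control for industry, ad position and bid levels, but the counting logic at its core looks much like this.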

Are there keyword terms in your own business category or industry that you feel are way overpriced in relation to the value they deliver for the promotional dollar? If so, which ones?

Why are online map locations so sucky so often?

How many times have you noticed location data on Google Maps and other online mapping services that’s out of date or just plain wrong? I encounter it quite often.

It hits close to home, too. While most of my company’s clients don’t usually have reason to visit our office (because they’re from out of state or otherwise situated pretty far from our location in Chestertown, MD), for the longest while Google Maps’ pin for our company pointed viewers to … a stretch of weeds in an empty lot.

It turns out the situation isn’t uncommon. Recently, the Wawa gas-and-food chain hired an outside firm to verify its location data on Google, Facebook and Foursquare. What Wawa found was that some 2,000 address entries had been created by users, including duplicates and entries with incorrect information.

Unlike my company, which doesn’t rely on foot traffic, Wawa depends on it as the lifeblood of its operations. Accordingly, Wawa is a high-volume advertiser with numerous campaigns and promotions running at once, including ones on crowdsourced driving and traffic apps like Google’s Waze.

With so much misleading location data swirling around, the last thing a company needs is a scathing review on social media because someone was left staring at a patch of weeds in an empty lot instead of being able to redeem a new digital coupon for a gourmet cookie or whatever.

Problems with incorrect mapping don’t happen just because of user-generated bad data, either. As in my own company’s case, the address information can be completely accurate, and yet somehow the map pin associated with it is misplaced.
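
One simple way to catch that class of error is to compare the pin’s coordinates against independently geocoded coordinates for the street address, and flag any listing where the two disagree by more than some tolerance. Here’s a minimal sketch of the idea; the coordinates and threshold below are illustrative stand-ins (a real audit would pull them from a geocoding service):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# Illustrative data: where the pin sits vs. where the address geocodes to
pin_lat, pin_lon = 39.2190, -76.0660
addr_lat, addr_lon = 39.2091, -76.0663

TOLERANCE_M = 150  # flag pins more than ~150 meters from the address

drift = haversine_m(pin_lat, pin_lon, addr_lat, addr_lon)
if drift > TOLERANCE_M:
    print(f"Pin sits {drift:.0f} m from the listed address -- needs review")
```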

Companies such as MomentFeed and Ignite Technologies have been established specifically to identify and clean up bad map data like this. It can’t be a one-and-done effort, either; most companies find that it’s yet another task needing continuing attention, much like e-mail database list hygiene.

Perhaps the worst online map data clanger I’ve read about was a retail store whose pin location placed it 800 miles east of the New Jersey coastline in the middle of the Atlantic Ocean.  What’s the most spectacular mapping fail you’ve come across personally?

Programmatic ad buying takes a hit.

There are some interesting new trends we’re now seeing in programmatic ad buying. For years, purchasing online ads programmatically instead of directly from specific publishers or media companies has been steadily increasing. No more.

MediaRadar has just released its latest Consumer Advertising Report covering ad spending, formats and buying patterns. The new report states that programmatic ad buying declined ~12% when comparing the first quarter of 2017 to the same period in 2016.

More specifically, whereas ~45,000 advertisers purchased advertising programmatically in Q1 2016, that figure dropped to ~39,500 for the same quarter this year.

This change in fortunes may come as a surprise to some. The market has generally been bullish on programmatic ad buying because it is far less labor-intensive to administer such programs compared to direct advertising programs.

There have been ongoing concerns about the potential for fraud, the lack of transparency on ad pricing, and the lack of control over where advertisers’ placements actually appear. But up until now, those concerns weren’t strong enough to reverse the steady migration to programmatic buying.

Todd Krizelman, CEO of MediaRadar, had this to say about the new findings:

“For many years, the transition of dollars from direct ad buying to programmatic seemed inevitable, and impossible to roll back. But the near-constant drumbeat of concern over brand safety and fraud in the first six months of 2017 has slowed the tide.  There’s more buying of direct advertising, especially sponsored editorial, and programmatically there is a ‘flight to quality’.”

Krizelman touches on another major new finding from the MediaRadar report: how much better native advertising performs compared with traditional ad units. Audiences tend to look at advertorials more frequently than display ads, and clickthrough rates on mobile native advertising in particular are running four times higher than those of mobile display ads.

Not surprisingly, the top market categories for native advertising are ones which lend themselves well to short, pithy stories. Travel, entertainment, home, food and apparel categories score well, as do financial and real estate stories.

The MediaRadar report is based on some pretty exhaustive statistics, with data analyzed from more than 265,000 advertisers covering the buying of digital, native, mobile, video, e-mail and print advertising. For more detailed findings, follow this link.

Good news: Online advertising “bot” fraud is down 10%. Bad news: It still amounts to $6.5 billion annually.

Ad spending continues its quite-healthy growth, forecast to increase by about 10% in 2017 according to a study released this month by the Association of National Advertisers (ANA).

At the same time, there’s similarly positive news from digital advertising security firm White Ops on the ad fraud front. Its Bot Baseline Report, which analyzes the digital advertising activities of ANA members, is forecasting that economic losses due to bot fraud will decline by approximately 10% this year.

And yet … even with the expected decline, bot fraud is still expected to amount to a whopping $6.5 billion in economic losses.

The White Ops report found that traffic sourcing — that is, purchasing traffic from inorganic sources — remains the single biggest risk factor for fraud.

On the other hand, mobile fraud was considerably lower than expected. Moreover, programmatic media buys are no longer appreciably riskier than general-market buys, thanks to improved filtration controls and procedures at media agencies.

Meanwhile, a new study conducted by Fraudlogix, a fraud detection company that monitors ad traffic for sell-side companies, finds that the majority of ad fraud is concentrated within a very small percentage of sources in the real-time bidding programmatic market.

The Fraudlogix study analyzed ~1.3 billion impressions from nearly 60,000 sources over a month-long period earlier this year. Interestingly, sites with more than 90% fraudulent impressions represented only about 1% of publishers, even while they contributed ~11% of the market’s impressions.

While Fraudlogix found nearly 19% of all impressions to be “fake,” that fraudulent behavior isn’t spread evenly across the industry: according to its analysis, just 3% of sources are causing more than two-thirds of the ad fraud. [Fraudlogix defines a fake impression as one which generates ad traffic through means such as bots, scripts, click-farms or hijacked devices.]
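
Findings like these fall out of a simple Pareto-style analysis: rank every source by its fraudulent-impression volume and measure what share of total fraud the worst offenders account for. Here’s a minimal sketch with invented per-source numbers (not Fraudlogix’s actual data):

```python
# Invented per-source data: (source_id, total_impressions, fraudulent_impressions)
sources = [
    ("src-001", 9_000_000, 8_600_000),    # a "ghost" site: >90% fake traffic
    ("src-002", 5_000_000, 4_800_000),
    ("src-003", 40_000_000, 1_200_000),   # large, mostly legitimate publisher
    ("src-004", 25_000_000, 500_000),
    ("src-005", 30_000_000, 300_000),
]

total_fraud = sum(fraud for _, _, fraud in sources)

# Rank sources by fraud volume, worst offenders first
ranked = sorted(sources, key=lambda s: s[2], reverse=True)

cumulative = 0
for source_id, impressions, fraud in ranked:
    cumulative += fraud
    print(f"{source_id}: {fraud / impressions:.0%} fake; "
          f"cumulative share of all fraud: {cumulative / total_fraud:.0%}")
```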

As Fraudlogix CEO Hagai Schechter has remarked, “Our industry has a 3% fraud problem, and if we can clamp down on that, everyone but the criminals will be much better for it.”

That’s probably easier said than done, however. Many of the culprits are “ghost” newsfeed sites.  These sites are often used for nefarious purposes because they’re programmed to update automatically, making the sites seem “content-fresh” without publishers having to maintain them via human labor.

Characteristics of these “ghost sites” include cookie-cutter design templates … private domain registrations … and Alexa rankings way down in the doldrums. And yet they generate millions of impressions each day.
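
Those traits suggest a crude screening heuristic: count the red flags on each source and take a closer look at any high-volume source that trips several of them. The sketch below is a toy version of that idea; every field and threshold in it is an illustrative assumption, not an industry-standard detection rule:

```python
from dataclasses import dataclass

@dataclass
class Source:
    domain: str
    daily_impressions: int
    traffic_rank: int          # Alexa-style rank; higher number = less visited
    private_registration: bool
    template_fingerprint: str  # hash of page layout, to spot cookie-cutter designs

# Fingerprints already seen across many unrelated domains (illustrative)
KNOWN_TEMPLATES = {"a41f9c"}

def ghost_score(src: Source) -> int:
    """Count red flags; high ad volume with few real-audience signals scores high."""
    score = 0
    if src.private_registration:
        score += 1
    if src.traffic_rank > 1_000_000:        # barely any organic audience
        score += 1
    if src.template_fingerprint in KNOWN_TEMPLATES:
        score += 1
    if src.daily_impressions > 1_000_000:   # yet millions of daily impressions
        score += 1
    return score

src = Source("example-newswire.biz", 2_500_000, 4_200_000, True, "a41f9c")
if ghost_score(src) >= 3:
    print(f"{src.domain} looks like a ghost site (score {ghost_score(src)})")
```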

The bottom line is that the fraud problem remains huge. Three percent of sources may sound small, but even within Fraudlogix’s sample of nearly 60,000 sources, 3% works out to roughly 1,800 sources generating a ton of ad fraud.

What would be interesting to consider is having traffic providers submit to periodic random tests to determine the authenticity of their traffic. Such testing could then establish ratings: some sort of real/faux ranking.

And just as in the old print publishing world, traffic providers that won’t consent to being audited would immediately become suspect in the eyes of those paying for the advertising. Wouldn’t that be a welcome development …

More raps for Google on the “fake reviews” front.

Google is trying to keep its local search initiative from devolving into charges and counter-charges of “fake news” à la the most recent U.S. presidential election campaign. But is it trying hard enough?

It’s becoming harder for the reviews that show up on Google’s local search function to be considered anything other than “suspect.”

The latest salvo comes from search expert and author Mike Blumenthal, whose recent blog posts on the subject question Google’s willingness to level with its customers.

Mr. Blumenthal could be considered one of the premier experts on local search, and he’s been studying the phenomenon of fake information online for nearly a decade.

The gist of Blumenthal’s argument is that Google isn’t taking sufficient action to clean up fake reviews (and related service industry and affiliate spam) that appear on Google Maps search results, which is one of the most important utilities for local businesses and their customers.

Not only that, but Blumenthal also contends that Google is publishing reports which represent “weak research” that “misleads the public” about the extent of the fake reviews problem.

Google contends that the problem isn’t a large one. Blumenthal feels differently; in fact, he claims the problem is growing worse, not getting better.

In a blog article published this week, Blumenthal outlines how he’s built out spreadsheets of reviewers and the businesses on which they have commented.

From this exercise, he sees a pattern of the same reviewers writing fake reviews for overlapping sets of businesses, telltale signs that have somehow been missed by Google’s algorithms.

A case in point: three “reviewers” — “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” — have all “reviewed” three very different businesses in completely different areas of the United States:  Bedoy Brothers Lawn & Maintenance (Nevada), Texas Car Mechanics (Texas), and The Joint Chiropractic (Arizona, California, Colorado, Florida, Minnesota, North Carolina).

They’re all 5-star reviews, of course.

It doesn’t take a genius to figure out that “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” won’t be found in the local telephone directories where these businesses are located. That’s because they’re figments of some spammer-for-hire’s imagination.
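
Blumenthal’s spreadsheet exercise boils down to a co-occurrence check: if the same names keep appearing together on unrelated, geographically scattered businesses, something is off. Here’s a minimal sketch of that idea; the data and threshold are illustrative, and this is not Google’s or Blumenthal’s actual tooling:

```python
from collections import defaultdict
from itertools import combinations

# Illustrative data: business -> reviewer names seen on its listing
reviews = {
    "Bedoy Brothers Lawn & Maintenance (NV)": {"Charlz Alexon", "Ginger Karime", "Jen Mathieu", "Al Smith"},
    "Texas Car Mechanics (TX)": {"Charlz Alexon", "Ginger Karime", "Jen Mathieu", "Pat Lee"},
    "The Joint Chiropractic (AZ)": {"Charlz Alexon", "Ginger Karime", "Jen Mathieu"},
    "Main Street Bakery (MD)": {"Al Smith", "Dana Fox"},
}

SUSPICIOUS_OVERLAP = 3  # the same trio on 3+ unrelated businesses is a red flag

# Count how many businesses each trio of reviewers appears on together
trio_counts = defaultdict(int)
for business, reviewers in reviews.items():
    for trio in combinations(sorted(reviewers), 3):
        trio_counts[trio] += 1

for trio, count in trio_counts.items():
    if count >= SUSPICIOUS_OVERLAP:
        print(f"Flag: {', '.join(trio)} appear together on {count} different businesses")
```

Google operates at vastly larger scale, of course, but the pattern Blumenthal spotted by hand is precisely this kind of signal.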

The question is, why doesn’t Google develop procedures to figure out the same obvious answers Blumenthal can see plain as day?

And the follow-up question: How soon will Google get serious about banning reviewers who post fake reviews on local search results? (And not just targeting the “usual suspect” types of businesses, but also professional categories such as physicians and attorneys.)

“If their advanced verification [technology] is what it takes to solve the problem, then stop testing it and start using it,” Blumenthal concludes.

To my mind, it would be in Google’s own interest to get to the bottom of these nefarious practices. If the general public comes to view reviews as “fake, faux and phony,” that’s just one step away from ceasing to use local search results at all, which would hurt Google in the pocketbook.

Might it get Google’s attention then?