At the same time, there’s similarly positive news from White Ops on the ad fraud front. Its Bot Baseline Report, which analyzes the digital advertising activities of ANA members, is forecasting that economic losses due to bot fraud will decline by approximately 10% this year.
And yet … even with the expected decline, bot fraud is still expected to amount to a whopping $6.5 billion in economic losses.
The White Ops report found that traffic sourcing — that is, purchasing traffic from inorganic sources — remains the single biggest risk factor for fraud.
On the other hand, mobile fraud was considerably lower than expected. Moreover, fraud in programmatic media buys is no longer significantly riskier than in general market buys, thanks to improved filtration controls and procedures at media agencies.
Meanwhile, a new study conducted by Fraudlogix, a fraud detection company that monitors ad traffic for sell-side companies, finds that the majority of ad fraud is concentrated within a very small percentage of sources in the real-time bidding programmatic market.
The Fraudlogix study analyzed ~1.3 billion impressions from nearly 60,000 sources over a month-long period earlier this year. Interestingly, sites with more than 90% fraudulent impressions represented only about 1% of publishers, even while they contributed ~11% of the market’s impressions.
While Fraudlogix found nearly 19% of all impressions overall to be “fake,” that fraudulent behavior is far from evenly distributed across the industry. According to its analysis, just 3% of sources are causing more than two-thirds of the ad fraud. [Fraudlogix defines a fake impression as one that generates ad traffic through means such as bots, scripts, click farms or hijacked devices.]
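Running some back-of-the-envelope math on the rounded figures reported above shows just how concentrated the problem is (these are the publicly quoted numbers, not the study’s raw data):

```python
# Rough arithmetic using the rounded figures from the Fraudlogix study.
total_impressions = 1_300_000_000   # ~1.3 billion impressions analyzed
total_sources = 60_000              # nearly 60,000 sources
fake_rate = 0.19                    # ~19% of impressions found to be "fake"

fake_impressions = total_impressions * fake_rate        # total fake impressions
bad_sources = total_sources * 0.03                      # the "3% fraud problem"
fraud_from_bad_sources = fake_impressions * (2 / 3)     # "more than two-thirds"

print(f"Fake impressions: ~{fake_impressions / 1e6:.0f} million")
print(f"Sources behind most of it: ~{bad_sources:.0f}")
print(f"Fraud from those sources: >{fraud_from_bad_sources / 1e6:.0f} million impressions")
```

In other words, on the order of 1,800 sources would account for well over 160 million fake impressions in a single month of monitored traffic.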
As Fraudlogix CEO Hagai Schechter has remarked, “Our industry has a 3% fraud problem, and if we can clamp down on that, everyone but the criminals will be much better for it.”
That’s probably easier said than done, however. Many of the culprits are “ghost” newsfeed sites. These sites are often used for nefarious purposes because they’re programmed to update automatically, making the sites seem “content-fresh” without publishers having to maintain them via human labor.
Characteristics of these “ghost sites” include cookie-cutter design templates … private domain registrations … and Alexa rankings way down in the doldrums. And yet they generate millions of impressions each day.
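Those characteristics suggest a simple screening heuristic. The sketch below is purely illustrative — the field names, thresholds, and scoring are assumptions of mine, not anything published by White Ops or Fraudlogix:

```python
# Hypothetical heuristic screen for "ghost" newsfeed sites.
# Field names and thresholds are illustrative assumptions only.

def ghost_site_score(site: dict) -> int:
    """Count how many ghost-site red flags a site exhibits."""
    flags = 0
    if site.get("template") == "cookie-cutter":   # reused design template
        flags += 1
    if site.get("private_registration"):          # domain ownership hidden
        flags += 1
    if site.get("traffic_rank", 0) > 1_000_000:   # ranking "in the doldrums"
        flags += 1
    if site.get("auto_updated_feed"):             # no human editorial labor
        flags += 1
    return flags

# A site matching the profile described above trips all four flags.
suspect = {
    "template": "cookie-cutter",
    "private_registration": True,
    "traffic_rank": 4_500_000,
    "auto_updated_feed": True,
}
print(ghost_site_score(suspect))  # prints 4
```

A real detection system would weight and validate these signals against labeled traffic, but even a crude checklist like this captures why ghost sites are easy to spot individually yet numerous enough to generate millions of impressions.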
The bottom line is that the fraud problem remains huge. Three percent of sources might be a small percentage figure, but that still means thousands of sources causing a ton of ad fraud.
It would be interesting to consider having traffic providers submit to periodic random tests to determine the authenticity of their traffic. Such testing could then establish ratings – some sort of real/faux ranking.
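Mechanically, such a rating could be as simple as sampling a provider’s impressions at random and mapping the observed fraud rate to a letter grade. The sketch below is hypothetical — the tier cutoffs and data shape are my assumptions, not any existing industry standard:

```python
# Hypothetical periodic random-audit rating for a traffic provider.
# Tier cutoffs and data format are illustrative assumptions only.
import random

def audit_rating(impressions, sample_size=1000, seed=None):
    """Randomly sample impressions and map the observed fraud rate to a tier."""
    rng = random.Random(seed)
    sample = rng.sample(impressions, min(sample_size, len(impressions)))
    fraud_rate = sum(1 for imp in sample if imp["fraudulent"]) / len(sample)
    if fraud_rate < 0.02:
        tier = "A"   # effectively clean traffic
    elif fraud_rate < 0.10:
        tier = "B"   # acceptable, worth monitoring
    else:
        tier = "F"   # predominantly faux
    return fraud_rate, tier

clean_provider = [{"fraudulent": False} for _ in range(5000)]
dirty_provider = [{"fraudulent": True} for _ in range(5000)]
print(audit_rating(clean_provider))  # (0.0, 'A')
print(audit_rating(dirty_provider))  # (1.0, 'F')
```

The random sampling matters: because a provider cannot predict which impressions will be audited, it cannot cheaply stage clean traffic just for the test.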
And just like in the old print publications world, traffic providers that won’t consent to be audited would immediately become suspect in the eyes of those paying for the advertising. Wouldn’t that development be a nice one …