Good news: Online advertising “bot” fraud is down 10%. Bad news: It still amounts to $6.5 billion annually.

Ad spending continues to grow at a healthy clip, forecast to increase by about 10% in 2017 according to a study released this month by the Association of National Advertisers.

At the same time, there’s similarly positive news from White Ops on the ad fraud front. Its Bot Baseline Report, which analyzes the digital advertising activities of ANA members, is forecasting that economic losses due to bot fraud will decline by approximately 10% this year.

And yet … even with the expected decline, bot fraud is still expected to amount to a whopping $6.5 billion in economic losses.

The White Ops report found that traffic sourcing — that is, purchasing traffic from inorganic sources — remains the single biggest risk factor for fraud.

On the other hand, mobile fraud was considerably lower than expected.  Moreover, fraud in programmatic media buys is no longer significantly riskier than in general-market buys, thanks to improved filtration controls and procedures at media agencies.

Meanwhile, a new study conducted by Fraudlogix, a fraud detection company that monitors ad traffic for sell-side companies, finds that the majority of ad fraud is concentrated within a very small percentage of sources within the real-time bidding programmatic market.

The Fraudlogix study analyzed ~1.3 billion impressions from nearly 60,000 sources over a month-long period earlier this year. Interestingly, sites with more than 90% fraudulent impressions represented only about 1% of publishers, even while they contributed ~11% of the market’s impressions.

While Fraudlogix found nearly 19% of all impressions to be “fake,” that fraudulent activity is not spread evenly across the industry. According to its analysis, just 3% of sources are causing more than two-thirds of the ad fraud.  [Fraudlogix defines a fake impression as one generated through means such as bots, scripts, click farms or hijacked devices.]
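
A quick back-of-the-envelope calculation shows just how concentrated that is. The figures below are the rounded numbers reported above; the per-source breakdown isn’t public, so treat the results as rough estimates:

```python
# Rough arithmetic using the rounded Fraudlogix figures cited above.
total_impressions = 1_300_000_000  # ~1.3 billion impressions analyzed
total_sources = 60_000             # from nearly 60,000 sources
fake_share = 0.19                  # ~19% of impressions deemed "fake"

fake_impressions = total_impressions * fake_share
print(f"Fake impressions: ~{fake_impressions / 1e6:.0f} million")  # ~247 million

# "Just 3% of sources are causing more than two-thirds of the ad fraud"
bad_sources = total_sources * 0.03          # ~1,800 sources
fraud_from_bad = fake_impressions * 2 / 3   # ~165 million impressions
print(f"~{bad_sources:,.0f} sources account for ~{fraud_from_bad / 1e6:.0f} million fake impressions")
```

In other words, in this one-month sample alone, roughly 1,800 sources generated something on the order of 165 million fake impressions.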

As Fraudlogix CEO Hagai Schechter has remarked, “Our industry has a 3% fraud problem, and if we can clamp down on that, everyone but the criminals will be much better for it.”

That’s probably easier said than done, however. Many of the culprits are “ghost” newsfeed sites.  These sites are often used for nefarious purposes because they’re programmed to update automatically, making the sites seem “content-fresh” without publishers having to maintain them via human labor.

Characteristics of these “ghost sites” include cookie-cutter design templates … private domain registrations … and Alexa rankings way down in the doldrums. And yet they generate millions of impressions each day.

The bottom line is that the fraud problem remains huge.  Three percent of sources might be a small percentage figure, but that still means thousands of sources causing a ton of ad fraud.

It would be interesting to consider having traffic providers submit to periodic random tests to determine the authenticity of their traffic. Such testing could then establish ratings – some sort of real/faux ranking.
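
Here’s a minimal sketch of how such an audit-based rating might work. Everything in it is hypothetical: the sampling approach, the grading thresholds, and the is_fraudulent checker (which stands in for whatever detection methodology a third-party auditor would actually apply):

```python
import random

def authenticity_rating(impressions, is_fraudulent, sample_size=10_000):
    """Grade a traffic provider by auditing a random sample of its impressions.

    `is_fraudulent` is a placeholder for an auditor's detection logic
    (bot signatures, behavioral analysis, and the like).
    """
    sample = random.sample(impressions, min(sample_size, len(impressions)))
    fraud_rate = sum(1 for imp in sample if is_fraudulent(imp)) / len(sample)

    # Hypothetical rating bands -- an industry body would set real thresholds.
    if fraud_rate < 0.02:
        grade = "A (authentic)"
    elif fraud_rate < 0.10:
        grade = "B (acceptable)"
    else:
        grade = "F (faux)"
    return fraud_rate, grade

# Example: audit a provider whose traffic is ~15% bot-generated.
impressions = [{"is_bot": random.random() < 0.15} for _ in range(100_000)]
rate, grade = authenticity_rating(impressions, lambda imp: imp["is_bot"])
print(f"Estimated fraud rate: {rate:.1%} -> grade {grade}")
```

Run periodically and at random, spot checks like these would be hard for a provider to game, and the resulting grades could feed exactly the kind of real/faux ranking described above.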

And just like in the old print publications world, traffic providers that won’t consent to be audited would immediately become suspect in the eyes of those paying for the advertising.  Wouldn’t that development be a nice one …

If there’s a drumbeat among B-to-B marketing professionals, it’s grousing about cross-channel marketing attribution.

If there’s one common complaint among business-to-business marketing professionals, it’s about how difficult it is to measure and attribute the results of their campaigns across marketing channels.

Now, a new survey of marketing professionals conducted by Demand Gen (sponsored by marketing forecasting firm BrightFunnel) shows that nothing much has changed in recent times.

The survey sample isn’t large (around 200 respondents), but the findings are quite clear.  Only around 4 in 10 respondents believe that they can measure marketing’s influence on the pipeline. As to why this is the case, the following issues were cited most often:

  • Inability to measure and track activity between buyer stages: ~51% of respondents
  • The data is a mess: ~42%
  • Lack of good reporting: ~42%
  • Not sure which key performance indicators are the important ones to measure: ~15%

And in turn, nearly half of the respondents cited a lack of resources as the reason they face the problems above and can’t seem to tackle them properly.

As for how B-to-B marketers are attempting to track and report their campaign results these days, it’s the usual practices we’ve been working with for a decade or more:

  • Tracking web traffic: ~95%
  • E-mail open/clickthrough rates: ~94%
  • Contact acquisition and web query forms completed: ~86%
  • Organic search results: ~77%
  • Paid search results: ~76%
  • Social media engagements/shares: ~60%

None of these hit the bullseye when it comes to marketing attribution, and that’s what makes it particularly difficult to find out what marketers really want to know:

  • Marketing ROI by channel
  • Cross-channel engagement
  • Customer lifetime value
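
To make the first of those wants concrete, here’s a minimal sketch of channel-level ROI under an equal-credit (“linear”) multi-touch model (just one of many possible attribution approaches, and not one the survey prescribes). All of the journeys, revenue figures and spend numbers are invented for illustration:

```python
from collections import defaultdict

# Each closed deal lists the marketing touches on its path, in order.
# (All figures below are made up for illustration.)
deals = [
    {"revenue": 50_000, "touches": ["email", "webinar", "paid_search"]},
    {"revenue": 30_000, "touches": ["organic_search", "email"]},
    {"revenue": 20_000, "touches": ["social", "email", "webinar", "email"]},
]

channel_spend = {"email": 10_000, "webinar": 25_000, "paid_search": 15_000,
                 "organic_search": 5_000, "social": 8_000}

# Equal-credit ("linear") attribution: every touch on a deal's path
# receives the same share of that deal's revenue.
attributed = defaultdict(float)
for deal in deals:
    credit = deal["revenue"] / len(deal["touches"])
    for channel in deal["touches"]:
        attributed[channel] += credit

for channel, revenue in sorted(attributed.items()):
    spend = channel_spend[channel]
    roi = (revenue - spend) / spend
    print(f"{channel:>14}: attributed ${revenue:>9,.0f}  ROI {roi:+.0%}")
```

Swap the equal-credit rule for first-touch, last-touch or time-decay weighting and the channel ROI figures shift, which is precisely why cross-channel attribution remains so contentious.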

It seems that for many B-to-B marketers, much of this remains a matter of waiting and wishing …

The full report from Demand Gen, which contains additional research data, is available to download here.

For job seekers in America, the compass points south and west.

Downtown Miami

Many factors go into determining what may be the best cities for job seekers to find employment. There are any number of measures – not least qualitative ones such as where friends and family members reside, and what kind of family safety net exists.

But there are other measures, too – factors that are a little easier to apply across all workers:

  • How favorable is the local labor market to job seekers?
  • What are salary levels after adjusting for cost-of-living factors?
  • What is the “work-life” balance that the community offers?
  • What are the prospects for job security and advancement opportunities?

Seeking clues as to which metro areas represent the best environments for job seekers, job posting website Indeed analyzed data gathered via its review database from respondents who live in the 50 largest U.S. metro areas.

Indeed’s research methodology is explained here. Its analysis began by posing the four questions above and applying a percentile score for each one based on the feedback it received, followed by additional analytical calculations to come up with a consolidated score for each of the 50 metro areas.
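
Indeed’s post doesn’t spell out the consolidation math, but the general shape of percentile-then-aggregate scoring might look something like this sketch (the raw values and the equal weighting of the four categories are our assumptions):

```python
from statistics import mean

def percentile_scores(values):
    """Map each raw value to a 0-100 percentile rank among its peers."""
    ranked = sorted(values)
    n = len(values)
    return [100 * ranked.index(v) / (n - 1) for v in values]

# Hypothetical raw measures for three metros (higher = better for job seekers).
metros = ["Miami", "Orlando", "Raleigh"]
measures = {
    "labor_market":         [0.72, 0.68, 0.80],
    "adjusted_salary":      [0.55, 0.60, 0.75],
    "work_life":            [0.90, 0.85, 0.70],
    "security_advancement": [0.88, 0.74, 0.78],
}

per_metro = {m: [] for m in metros}
for raw_values in measures.values():
    for metro, score in zip(metros, percentile_scores(raw_values)):
        per_metro[metro].append(score)

# Consolidate with an unweighted mean across categories (our assumption).
for metro, scores in per_metro.items():
    print(f"{metro}: consolidated score {mean(scores):.0f}")
```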

The resulting list shows a definite skew towards the south and west. In order of rank, here are the ten metro areas that scored as the most attractive places for job seekers:

#1. Miami, FL

#2. Orlando, FL

#3. Raleigh, NC

#4. Austin, TX

#5. Sacramento, CA

#6. San Jose, CA

#7. Jacksonville, FL

#8. San Diego, CA

#9. Houston, TX

#10. Memphis, TN

Not all metro areas ranked equally strongly across the four measurement categories. Overall leader Miami scored very highly for work-life balance as well as job security and advancement, but its cost-of-living factors were decidedly less impressive.

“Where are the cities of the Northeast and the Midwest?”, you might ask. Not only are they nowhere to be found in the Top 10, they’re hardly represented in the second group of ten in Indeed’s ranking, either:

#11. Las Vegas, NV

#12. San Francisco, CA

#13. Riverside, CA

#14. Atlanta, GA

#15. Los Angeles, CA

#16. San Antonio, TX

#17. Seattle, WA

#18. Hartford, CT

#19. Charlotte, NC

#20. Tampa, FL

… except for one: Hartford (#18 on Indeed’s list).

Likely, the scarcity of Northeastern and Midwestern cities correlates with the loss of manufacturing jobs, which have typically been so important to those metro areas.  Many of these markets have struggled to become more diversified.

If there’s a common characteristic among the top-scoring cities besides geography, it’s that they’re high-tech bastions, highly diversified economies or – very significantly – seats of state government.

In fact, if you look at the Top 10 metro areas, three of them are state capitals; in the next group of ten, there are two more.  Not surprisingly, those cities ranked higher than others for job security.  And salary levels compared to the local cost of living were also quite attractive.

So much for the adage that a government paycheck is low even if the job security is high; it turns out, both are high.

For more details on the Indeed listing, how the ranking was derived, and individual scores by metro area for the four criteria shown above, click here.

Does social media actually depress people? A new study says yes — sort of.

For some time now, we’ve been hearing the contention that social media causes people to become angry or depressed.

One aspect of this phenomenon, the argument goes, is the “politicization” of social media — most recently exhibited in the 2016 U.S. presidential election.

Another aspect is the notion that since so many people engage in never-ending “happy talk” on social media — presenting their activities and their lives as a constant stream of oh-so-fabulous experiences — it’s only natural that those who encounter those posts become depressed when their own, drearier lives come up wanting by comparison.

But much of this line of thought has been mere conjecture, awaiting analysis by social scientists.

One other question I’ve had is one of causation:  Even if you believe that social media contributes to feelings of depression and/or anger, is using social media what makes people feel depressed … or are people who are prone to depression or anger the very ones who are more likely to use social media in the first place?

Recently, we’ve begun to see research that points to causation — and to the finding that social media does actually contribute to negative mental health outcomes for some of its users.

One such study appeared in the February 2017 issue of the American Journal of Epidemiology. Titled “Association of Facebook Use with Compromised Well-Being:  A Longitudinal Study,” the paper presents findings from three sets of data collected from ~5,200 subjects in Gallup’s Social Network panel.

The researchers — Drs. Holly Shakya and Nicholas Christakis — studied the relationships between Facebook activity over time with self-reported measures such as physical health, mental health and overall life satisfaction. There were other, more objective measures that were part of the analysis as well, such as weight and BMI information.

The study detected a correlation between increased Facebook activity and negative impacts on the well-being of the research subjects.  More specifically, users who engaged in the following behaviors one standard deviation more often than average …

  • Liking social posts
  • Following links on Facebook
  • Updating their own social status frequently

… showed a decrease of 5% to 8% of a standard deviation in their emotional well-being.
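
For readers unused to standardized effect sizes, a tiny worked example helps. All of the scale values below are invented; only the -0.08 coefficient is taken from the reported 5%-to-8% range:

```python
# What "a 1 SD increase in activity predicts a drop of 5-8% of an SD
# in well-being" means in practice. Scale values are illustrative only.
wellbeing_sd = 1.5              # SD of a hypothetical life-satisfaction scale
likes_mean, likes_sd = 20, 15   # hypothetical "likes per month" distribution

beta_standardized = -0.08       # upper end of the reported range

heavy_liker = likes_mean + likes_sd                     # 1 SD above average
predicted_drop = abs(beta_standardized) * wellbeing_sd  # in raw scale points

print(f"At {heavy_liker} likes/month (1 SD above the mean of {likes_mean}), "
      f"predicted well-being is ~{predicted_drop:.2f} points lower "
      f"on a scale whose SD is {wellbeing_sd}.")
```

A real but small effect, in other words: roughly a tenth of a point on this hypothetical scale.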

As it turns out, the same correlation also applied when tracking people who migrated from light to moderate Facebook usage; these individuals were prone to suffer negative mental health impacts similar to the subjects who gravitated from moderate to heavy Facebook usage.

The Shakya/Christakis study presented several hypotheses seeking to explain the findings, including:

  • Social media usage comes at the expense of “real world,” face-to-face interactions.
  • Social media usage undermines self-esteem by triggering users to compare their own lives with the carefully constructed pictures presented by their social media contacts.

But here’s another thought: heavy social media users are spending a good deal more time engaged in an activity that is, by definition, a pretty sedentary one.  Might the decreased physical activity of heavy social media users have a negative impact on mental health and well-being, too?

We won’t know anything much more definitive until the Shakya/Christakis study can be replicated in another longitudinal research study. However, it’s often quite difficult to replicate such findings in subsequent research, where results can be affected by how the questions are asked, how random the sample really is, and so forth.

I’m sure there are many social scientists who are itching to settle these fundamental questions about social media, but we might be waiting a bit longer; these research endeavors aren’t as tidy a process as one might think.

The U.S. Postal Service unveils its Informed Delivery notification service – about two decades too late.

Earlier this year, the U.S. Postal Service decided to get into the business of e-mail. But the effort is seemingly a day late and a dollar short.

Here’s how the scheme works: The USPS sends customers an e-mail containing scanned images of the postal mail that will be delivered to them that day.

It’s called Informed Delivery, and it’s being offered as a free service.

Exactly what is this intended to accomplish?

It isn’t as if receiving an e-mail notification of postal mail that’s going to be delivered within hours is particularly valuable.  If the information were that time-sensitive, why not receive the actual original item via e-mail to begin with?  That would have saved the sender 49 cents on the front end as well.

So the notion that this service would somehow stem the tide of mass migration to e-mail communications seems pretty far-fetched.

And here’s another thing: The USPS is offering the service free of charge – so it isn’t even going to reap any monetary income to recoup the cost of running the program.

That doesn’t seem to make very good business sense for an organization that’s already flooded with red ink.

Actually, I can think of one constituency that might benefit from Informed Delivery – rural residents who aren’t on regular delivery routes and who must travel a distance to pick up their mail at a post office. For those customers, I can see how they might choose to forgo a trip to town if the day’s mail isn’t anything to write home about — if you’ll pardon the expression.

But what portion of the population is made up of people like that? I’m not sure, but it’s likely well under 5%.

And because the USPS is a quasi-governmental entity, it’s compelled to offer the same services to everyone.  So even the notion of offering Informed Delivery as a “niche” product to just certain customers isn’t an option.

I suppose the USPS deserves credit just for trying to come up with new ways to stay relevant in the changing communications world. But it’s very difficult to come up with anything worthwhile when the entire foundation of the USPS’s mission has been so eroded over the past generation.

Suddenly, smartphones are looking like a mature market.

The smartphone diffusion curve. (Source: Business Insider)

In the consumer technology world, the cycle from product innovation to market maturity seems to be getting shorter and shorter.

When the television was introduced, it took decades for it to penetrate more than 90% of U.S. households. Later, when color TVs came on the market, it was years before the majority of households made the switch from black-and-white to color screens.

The dynamics of the mobile phone market illustrate how much the pace of adoption has changed.

Only a few years ago, well under half of all mobile phones in the market were smartphones. But smartphones rapidly eclipsed those older “feature phones” – so that today only a very small percentage of cellphones in use are of the feature phone variety.

Now, in just as little time we’re seeing smartphones go from boom to … well, not quite bust.  In fewer than four years, the growth in smartphone sales has slowed from ~30% per year (in 2014) to just 4%.

That’s the definition of a “mature” market.  But it also demonstrates just how successful the smartphone has been in penetrating all corners of the market.

Consider this:  Market forecasting firm Ovum figures that by 2021, the smartphone will have claimed its position as the most popular consumer device of all time, when more than 5 billion of them are expected to be in use.

It’s part of a larger picture of connected smart devices in general, the total number of which is expected to nearly double between now and 2021 – from an estimated 8 billion devices in 2016 to around 15 billion by then.
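
As a quick sanity check on that forecast, the implied compound annual growth rate is easy to compute from the two figures above (taking 2016 to 2021 as a five-year window):

```python
# Implied compound annual growth rate (CAGR) for connected devices,
# from ~8 billion in 2016 to ~15 billion in 2021.
devices_2016 = 8e9
devices_2021 = 15e9
years = 5

cagr = (devices_2021 / devices_2016) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~13.4% per year
```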

According to an evaluation conducted by research firm GfK, today only around 10% of consumers own either an Amazon Echo or Google Home device, but digital voice assistants are on the rise big-time. These interactive audio speakers offer a more “natural” way than smartphones or tablets to control smart home devices, with thousands of “skills” already developed that allow them to interact with a large variety of apps.

There’s no question that home devices are the “next big thing,” but with their ubiquity, smartphones will continue to be the hub of the smart home for the foreseeable future.  Let’s check back in another three or four years and see how the dynamics look then.