Thanks to the IoT, search is morphing into “just-in-time knowledge.”

In today’s world of marketing, it’s been obvious for some time that the pace of technological change is dramatically shortening the life cycle of marketing techniques.

Consider online search. Twenty-five years ago it was hardly a blip on the radar screen. But paid search quickly picked up momentum and began to rival traditional forms of advertising, as companies took advantage of promotional programs from Google and others that reached consumers at the very moment they were hunting for products, services and solutions.

Google has attracted billions upon billions of dollars in search advertising revenue, becoming one of the biggest corporations in the world, even as entire industries have grown up around optimizing companies’ website presence and relevance so as to rank highly in search query results.

And now, thanks to continuing technological evolution and the emergence of the Internet of Things, the next generation of search is upon us – and it looks likely to make keyboards and touchscreens increasingly irrelevant within a few short years.

Searches without screens are possible thanks to technology like Google Assistant, Amazon Echo/Alexa, and software development kits from providers like Soundhound and Microsoft.

This past October, market forecasting firm Gartner came out with an interesting prediction: within four years, roughly 30% of all searches will be carried out without a screen.

It’s happening already, actually. Amazon Echo answers voice queries outright, while the Bing knowledge and action graph allows Microsoft to respond to a query with an answer rather than the familiar list of candidate links.

Gartner envisions voice interactions overtaking typed search queries because speaking is so much easier, faster and more intuitive for consumers. By eliminating the need to use eyes and hands for search and browsing, voice interactions keep web sessions useful even while people are multitasking (walking, driving, socializing, exercising and the like).

Related to this, Gartner also predicts that one in five brands will have abandoned offering mobile apps by 2019. Already, many companies have found disappointing levels of adoption, engagement and ROI pertaining to the mobile apps they’ve introduced, and the prognosis is no better going forward; the online consumer is already moving on.

Gartner’s predictions go even further. It envisions ever-higher levels of what it calls “just-in-time knowledge” – essentially trading the act of searching for knowledge for simply receiving answers to voice queries.

Speaking personally, this prediction concerns me a little. I think that some people may not fully grasp the implications of what Gartner is forecasting.

To me, “just-in-time knowledge” sounds uncomfortably close to being “ill-educated” (as opposed to “uneducated”).  Sometimes, knowing a little bit about something is more dangerous than knowing nothing at all. Bad decisions often come from possessing a bit of knowledge — but with precious little “context” surrounding it.

With “just-in-time knowledge,” think of how many people could now fall into that kind of trap.

Ad fraud: It’s worse than you think.

It isn’t so much the size of the problem, but rather its implications.

A recently published report by White Ops, a digital advertising security and fraud detection company, reveals that the source of most online ad fraud in the United States isn’t large data centers, but rather millions of infected browsers in devices owned by people like you and me.

This is an important finding, because when bots run in browsers, they appear as “real people” to most advertising analytics and many fraud detection systems.

As a result, they are more difficult to detect and much harder to stop.

These fraudulent bots that look like “people” visit publishers, which serve ads to them and collect revenues.
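To make the challenge concrete, here is a minimal, hypothetical Python sketch of the kind of rules a simple ad-fraud filter might apply; the field names and thresholds are illustrative assumptions, not any vendor’s actual logic. A bot riding inside a real person’s infected browser inherits the victim’s genuine user agent, cookies, JavaScript engine and residential IP address, so it sails right past checks like these:

    # Hypothetical, simplified rules a naive fraud filter might apply.
    # Field names and thresholds are illustrative assumptions only.
    def looks_fraudulent(session: dict) -> bool:
        return (
            "headless" in session["user_agent"].lower()   # data-center bots often reveal themselves here
            or not session["accepts_cookies"]             # real browsers keep cookies
            or not session["executes_javascript"]         # cheap bots skip JavaScript
            or session["requests_per_minute"] > 300       # data-center traffic volume
        )

    # A bot hiding inside an infected consumer browser presents "human" signals:
    hijacked_session = {
        "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
        "accepts_cookies": True,
        "executes_javascript": True,
        "requests_per_minute": 12,   # paced to mimic casual browsing
    }

    print(looks_fraudulent(hijacked_session))   # False -- it passes as a "real person"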


Of course, once detected, the value of these “bot-bound” ads plummets in the bidding markets.  But is it really a self-correcting problem?   Hardly.

The challenge is that even as those browsers are being detected and rejected as the source of fraudulent traffic, new browsers are being infected and attracting top-dollar ad revenue just as quickly.

It may be that only 3% of all browsers account for well over half of the entire fraud activity by dollar volume … but that 3% is changing all the time.

Even worse, White Ops reports that access to these infected browsers is happening on a “black market” of sorts, where one can buy the right to direct a browser-resident bot to visit a website and generate fraudulent revenues.

… to the tune of billions of dollars every year.  According to ad traffic platform developer eZanga, advertisers are wasting more than $6 billion every year on fraudulent advertising. For some advertisers involved in programmatic buying, fake impressions and clicks represent a majority of their advertising outlay — even as much as 70%.

The solution to this mess in online advertising is hard to see. It isn’t something as “simple and elegant” as blacklisting fake sites, because the fraudsters are dynamically building websites from stolen content, creating (and deleting) hundreds of them every minute.

They’ve taken the very attributes of the worldwide web which make it so easy and useful … and have thrown them back in our faces.

Virus protection software? To these fraudsters, it’s a joke.  Most anti-virus resources cannot even hope to keep pace.  Indeed, some of them have been hacked themselves – their code stolen and made available on the so-called “deep web.”  Is it any wonder that so many Internet-connected devices – from smartphones to home automation systems – contain weaknesses that make them subject to attack?

The problems would go away almost overnight if all infected devices were cut off from the Internet. But we all know that this is an impossibility; no one is going to throw the baby out with the bathwater.

It might help if more people in the ad industry would be willing to admit that there is a big problem, and to be more amenable to involving federal law enforcement in attacking it.  But I’m not sure even that would make all that much difference.

There’s no doubt we’ve built a Frankenstein-like monster.  But it’s one we love as well as hate.  Good luck squaring that circle!

Bing Plays the Bouncer Role in a Big Way

Microsoft Bing has just released stats chronicling its efforts to keep the Internet a safe space. Its 2015 statistics are nothing short of breathtaking.

Bing did its part by rejecting a total of 250 million ad impressions … banning ~150,000 advertisements … and blocking around 50,000 websites outright.

It didn’t stop there. Bing also reports that it blocked more than 3 million pages and 30 million ads due to spam and misleading content.

What were some of the reasons behind the blocking? Here are a few clues as to where Bing’s efforts were strongest (although I don’t doubt that there are some others that Bing is keeping closer to its vest so as not to raise any alarms):

  • Healthcare/pharma phishing attacks: ~2,000 advertisers and ~800,000 ads blocked in 2015
  • Selling of counterfeit goods: 7,000 advertisers and 700,000+ ads blocked
  • Tech support scams: ~25,000 websites and ~15 million ads blocked
  • Trademark infringement factors: ~50 million ad placements rejected

Bing doesn’t say exactly how it identifies such a ginormous amount of fraudulent or otherwise nefarious advertising, except to report that the company has improved its handling of many aspects based on clues ranging from toll-free number analysis to dead-link analysis.
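As one illustration, dead-link analysis lends itself to a very simple check. The sketch below is a hypothetical starting point, not Bing’s actual implementation; the URLs are placeholders. It flags ads whose landing pages no longer resolve or return errors:

    # Hypothetical dead-link check for ad landing pages (not Bing's actual code).
    import requests

    def landing_page_is_dead(url: str, timeout: float = 5.0) -> bool:
        """Flag an ad whose destination no longer resolves or returns an error."""
        try:
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            return response.status_code >= 400
        except requests.RequestException:
            return True   # DNS failure, timeout, connection refused, etc.

    ad_destinations = [
        "https://example.com/current-offer",
        "https://example.com/page-that-no-longer-exists",
    ]
    dead = [url for url in ad_destinations if landing_page_is_dead(url)]
    print(f"{len(dead)} of {len(ad_destinations)} ad landing pages look dead")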

According to Neha Garg, a program manager of ad quality at Bing:

“There have even been times our machine learning algorithms have flagged accounts that look innocent at first glance … but on close examination we find malicious intent. The back-end machinery runs 24/7 and uses hundreds of attributes to look for patterns which help spot suspicious ads among billions of genuine ones.”

We’re thankful to Bing and Google for all that they do to control the incidence of advertising that carries malware capable of causing many problems above and beyond the mere “irritation factor.”

Of course, there’s always room for improvement, isn’t there?

The mouse that roared: Smartphones take on bigger screens – and they’re winning.

The key takeaway message from MarketLive’s latest e-commerce statistics is that smartphones are where the go-go action is in e-commerce.

If there’s any lingering doubt that smartphones are really on the march when it comes to e-commerce activity, the latest user stats are erasing all vestiges of it.

MarketLive’s 2nd Quarter e-commerce stats for 2015 reveal that mass-market consumers purchased ~335% more items via their smartphones than they did during the comparable quarter last year.

MarketLive’s report covers the buying activity of millions of online consumers. And the uptick it’s showing is actually more like a flood of increased activity.  That’s plain to see in these year-over-year 2nd Quarter comparative figures for smartphones:

  • Catalog merchandise: +374%
  • Merchandise sold by brick-and-mortar establishments’ online stores: +207%
  • Furnishings and houseware items: +163%

The critical mass that’s finally been reached is most likely attributable to these factors:

  • The growing number of “responsive-design” websites that display and work equally well on any size device
  • One-click purchasing functionality that simplifies and eases e-commerce procedures

Interestingly, the dramatic growth in smartphone usage for online shopping appears to be skipping over tablets. Smartphones now account for more than twice the share of online traffic compared to tablets (~30% versus ~13%).

Total e-commerce dollar sales on tablets have also fallen behind smartphones for the first time ever.

Evidently, some people are now gravitating from desktops or laptops straight to smartphones, with nary a passing glance at tablets.

Another interesting data point among the MarketLive stats is that traffic emanating from search (paid as well as organic) is actually on the decline.  By contrast, growth in traffic from e-mail marketing continues on its merry way, increasing ~18% over the same quarter last year.

One aspect remains a challenge in online commerce, however: The cart abandonment rate actually ticked up between 2014 and 2015. And conversion rates aren’t improving, either.

For the bottom line on what these new findings mean, I think Ken Burke, CEO of MarketLive, has it correct when he contends:

“Shoppers are seeking out their favorite brick-and-mortar brands online and expecting their websites to work on any device. We’re calling this trend ‘Commerce Anywhere the Customer Wants It.’ The more agile retailers and category leaders are outpacing their competitors by constantly adapting to – and embracing – a retail landscape where technology, consumers and markets are evolving at breakneck speed.” 

Details on MarketLive’s statistics can be accessed here.

Banner Ads Turn 20 This Year …

… But who really wants to celebrate?

Celebrating in the geriatric ward: The online banner ad turns 20.

It might come as a surprise to some, but the online banner ad is 20 years old this year.

In general, 20 years doesn’t seem very old, but in the online world, 20 years is ancient.

Simply put, banner ads represent the geriatric ward of online advertising.

In fact, there’s very little to love anymore about an advertising tool that once seemed so fresh and new … and now seems so tired and worn-out.

Furthermore, banner ads are a symbol of what’s wrong with online advertising.  They’re unwelcome.  They intrude on people’s web experience.  And they’re ignored by nearly everyone.

A whole lotta … nothing? Online banner ads in 2015.

But despite all of this, banner ads are as ubiquitous as ever.

Consider these stats as reported by Internet analytics company ComScore:

  • The number of display ads served in the United States approaches 6 trillion annually.
  • Fewer than 500 different advertisers alone are responsible for delivering 1 billion of these ads. 
  • The typical Internet user is served about 1,700 banner ads per month. (For 25 to 34 year-olds, it’s around 2,100 per month.) 
  • Approximately 30% of banner ad impressions are non-viewable.

And when it comes to banner ad engagement, it’s more like … disengagement:

  • According to DoubleClick, Google’s ad-serving subsidiary, banner ads sized 468×60 pixels have a click rate of just 0.04%, or 4 out of every 10,000 served (a figure used in the quick arithmetic below this list). 
  • According to GoldSpot Media, as many as 50% of clicks on mobile banner ads are accidental. 
  • According to ComScore, just 8% of Internet users are responsible for ~85% of the clicks on banner ads.
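Putting two of those figures together yields a sobering back-of-the-envelope number (assuming, as a rough simplification, that the 0.04% click rate applies across all of the ads a typical user sees):

    # Rough arithmetic combining the comScore and DoubleClick figures above.
    ads_served_per_month = 1_700     # typical Internet user, per comScore
    click_rate = 0.0004              # 0.04%, per DoubleClick

    expected_clicks = ads_served_per_month * click_rate
    print(f"Expected banner clicks per user per month: {expected_clicks:.2f}")   # ~0.68

In other words, the average user clicks on fewer than one banner ad per month.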

Come to think of it, “banner blindness” seems wholly appropriate for an advertising vehicle that’s as old as this one is in the web world.

The final ignominy is that people trust banner ads even less than they do TV ads:  15% vs. 29%, according to a survey conducted by market research company eMarketer.

Despite the drumbeat of negative news and bad statistics, banner advertising continues to be a bulwark of the online advertising system.

Publishers love them because they’re easy to produce and cost practically nothing to run.

Ad clients understand them better than other online promotional tactics, so they’re easier to sell either as premium content or as part of ad networks and exchanges.

There’s plenty of talk about native advertising, sponsored content and many other advertising tactics that seem a lot fresher and newer than banner ads.  But I suspect we’ll continue to be inundated with them for years to come.

What do you think?  Do you have a different prediction?  Please share your thoughts with other readers here.

Information, advertising and eye-tracking: Continuity among the chaos.

In the Western world, humans have been viewing and processing information in the same basic ways for hundreds of years. It’s a subconscious process of foraging for information while spending time and effort as judiciously as possible.

Because we Westerners read and write left-to-right and top-to-bottom, the way we search for information has evolved to mirror that same behavior.

And today we have eye-scanning research and mouse click studies that prove the point.

In online searches, the zone where our eyes land first is known variously as the “area of greatest promise,” the “golden triangle,” or the “F-scan” pattern.

However you wish to name it, it generally looks like this on a Google search engine results page (SERP):

F-scan diagram

It’s easy to see how the “area of greatest promise” comes about. We generally look for information by scanning down the beginnings of the first listings on a page, then continue reading to the right if something seems to be a good match for our information needs, ultimately clicking through if our suspicions are correct.

Heat maps also show that quick judgments of information relevance on a SERP occur within this same “F-scan” zone; if we determine nothing is particularly relevant, we’re off to do a different keyword search.

This is why it’s so important for websites to appear within the top 5-7 organic listings on a SERP – or within the top 1-3 paid search results in the right-hand column of Google’s SERP.

In recent years, Google and other search engines have been offering enhancements to the traditional SERP, ranging from showing images across the top of the page to presenting geographic information, including maps.

To what degree is this changing the “conditioning” of people who are seeking out information today compared to before?

What new eye-tracking and heat-map studies show is that we’re evolving toward scanning “chunks” of information first for clues about the most promising areas of the results. Then, within those areas, we revert to the same “F-scan” behaviors.

Here’s one example:

Eye-tracking Update

And there’s more:  The same eye-tracking and heat map studies are showing that this two-step process is actually more time-efficient than before.

We’re covering more of the page (at least on the first scan), but are also able to zero in on the most promising information bits on the page.  Once we find them, we’re quicker to click on the result, too.

So while Google and other search engines may be “conditioning” us to change time-honored information-viewing habits, it’s just as much that we’re “conditioning” Google to organize its SERPs in ways that are easiest and most beneficial to us in the way we seek out and find relevant information.

Bottom line, it’s continuity among the chaos. And it proves yet again that the same “prime positioning” on the page favored for decades by advertisers and users alike – above the fold and to the left – still holds true today.

Security blind spots: It turns out they’re everywhere on the web.

It seems like there’s a story every other day about security breaches affecting e-commerce sites and other websites where consumers congregate.

And now we have quantification of the challenge. Ghostery, a provider of apps that enable consumers to identify and block company tracking on website pages, has examined instances of non-secure digital technologies active on the websites of 50 leading brands in key industry segments like news, financial services, airlines and retail.

More specifically, Ghostery was looking for security “blind spots,” which it defines as non-secure tags that are present without the permission of the host company.

What it found was that 48 of the 50 websites it studied had security blind spots.

And often it’s not just one or two instances on a website. The analysis found that retail web pages host a high concentration of non-secure technologies: 438 of them on the Top Ten retail sites it analyzed (companies like Costco, Kohl’s, Overstock.com, Target and Walmart).

Financial services sites are also hit hard, with 382 blind spots identified, while airline websites had 223 instances. And they’re often present on the pages described as “secure” on these websites.

Scott Meyer, who is Ghostery’s chief executive officer, had this to say about the situation:

“Companies have very little understanding of what’s happening on their websites. The problem is not with any of the company’s marketing stacks, it’s with their own tech stacks.  What these companies have now is marketing clouds, not websites, and they’ve gotten complicated and hard to manage.”

Scott Meyer, CEO of Ghostery (formerly The Better Advertising Project and Evidon).

There was one leading brand website that came off looking squeaky clean compared to the others: Amazon.  “Amazon is incredibly sophisticated; others are not,” Meyer noted.

The implications of failing to address these security blind spots could be seriously negative. Bot networks often use non-secure technologies to gain entry to websites.  Meanwhile, Google is indexing company websites higher in search engine results based on their security ratings.

All of which makes it that much more important for companies to audit their websites and set up system alerts to identify non-secure tags.
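Such an audit can start small. Here is a minimal, hypothetical sketch (not Ghostery’s methodology; the URL is a placeholder) that fetches a page served over HTTPS and lists any resources it pulls in over plain HTTP:

    # Hypothetical audit for non-secure (plain-HTTP) tags on an HTTPS page.
    import re
    import requests

    def find_non_secure_tags(page_url: str) -> list:
        """Return http:// resource URLs referenced by the given page."""
        html = requests.get(page_url, timeout=10).text
        # src/href attributes pointing at plain-HTTP scripts, pixels or iframes
        return re.findall(r'(?:src|href)=["\'](http://[^"\']+)["\']', html)

    for resource in find_non_secure_tags("https://www.example.com/checkout"):
        print("Non-secure tag:", resource)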

The leading brands in particular just need to suck it up and do it, for the benefit of their millions of customers.

Amazon’s (Somewhat) Surprising Shopping Stats

Over the years, Amazon has branched out greatly from its original focus on books and other media to offer all sorts of other merchandise.

In fact, these days people can buy pretty much anything on Amazon — assuming it’s legal.

Even so, I was somewhat surprised to read the tea leaves on some new findings released by Chicago-based Consumer Intelligence Research Partners.  This research firm surveyed ~1,100 Amazon customers, asking them about their most recent purchases on Amazon.

Categorizing the responses by type of merchandise, CIRP found that books are no longer the most popular products sold on Amazon.

Instead, pride of place now goes to top-ranked electronics products, with ~33% of the survey respondents reporting that those types of products were their most recent purchase on the site.

Books still maintain their high ranking; the category comes in second at ~20% of respondents.  (Incidentally, approximately one-third of those book purchases are e-books.)

Amazon’s Fresh service, which delivers groceries within 24 hours of ordering, has been operating in select West Coast cities for some time now — and it appears that the company has latched onto a winning formula.

In fact, the grocery category ranked third in the survey.

This surprised me:  Call me old school, but I still prefer to select my fresh meats and produce on my own, instead of relying on some anonymous “picker” to do it for me.

What were the bottom three merchandise categories found in the CIRP survey?  Sports-related purchases were low  … and music purchases were lower still (about half of them being music downloads, by the way).

Dead last is the automotive category.  No real surprise here, I don’t think.

Personally, I don’t know anyone who would feel comfortable purchasing a car online.  And since the vast majority of consumers don’t work on their cars either, it seems natural that most of them will continue to rely on their repair shops to procure the replacement parts and consumables they need for servicing their vehicles.

If there’s particular merchandise you like to buy through Amazon, or something really unusual you’ve purchased from the site, please share your experiences with other readers here.

Are comment sections on news websites on the way out?

Trying to tame “the world of horrible Internet awfulness.”  (David Karp, CEO, Tumblr)

One of the most empowering aspects of the Internet is its ability to foster online interaction and feedback, wherein “regular people” have a megaphone in addition to journalists and writers on publisher websites.

But there’s an ugly side to the public dialogue, unfortunately: There’s an awful lot of verbal “dirty laundry” that gets put on display.

It’s sort of like taking younger children to the state fair or a sporting event — and then trying to shield them from the loud profanity (and worse) that they overhear.

The fact is, you can’t get away from the coarseness in online comment sections – particularly if the news content pertains to political or socio-cultural topics.

It’s often a “race to the bottom” where social norms and common decency fall by the wayside in the name of airing grievances or settling scores.

It extends to less inflammatory zones beyond polarizing politics, too.  Researchers have found that people who read the same news story about a new technology, but who are exposed to different sets of comments — one set fair and the other nasty — come away with completely different responses to the news story itself.  In the research, commenter anonymity and the ability to attack a story harshly without backing it up with facts produced effects on readers’ perceptions that were neither accurate nor fair.

The researchers dubbed it “The Nasty Effect.”

For some time now, Internet news and information sites have tried to strike a balance between access and interactivity on the one hand … and civility and decorum on the other.

In many cases, the quest has been frustratingly difficult. Here’s what some publishers have said about the issue:

“There’s got to be a better [way] to interact without comments taking away from the article or denigrating the people who are reported on.”  — Craig Newman, Chicago Sun-Times

“One of the worst things about writing in public is fielding random ad hominem attacks … in the space in which you’ve poured out your precious thoughts.”  — Ev Williams, Medium

“Even a fractious minority wields enough power to skew a reader’s perception of a story.”  — Suzanne LeBarre, Popular Science

As a result, now we’re seeing more instances of publishers and bloggers killing their comment sections completely.  Consider these examples:

… As of this month, the Chicago Sun-Times has axed its comment capability for the foreseeable future.

… The much-anticipated March 2014 launch of Vox (Ezra Klein’s news venture) doesn’t provide for feedback comments.

… The same goes for The Dish, Andrew Sullivan’s website.

… Popular Science shut down its comment sections this past September.

… Atlantic Media’s Quartz business site hasn’t ever allowed comments on its site since its launch in 2012.

… And neither has Tumblr.

But it seems rather unrealistic to think that comments sections can be banned from the Internet outright. That would be like trying to put the genie back in the bottle.

Instead, a via media approach may be what the Huffington Post has adopted. This past December, it implemented a policy wherein commenters must use their Facebook accounts and real names in order to post comments on Huffington Post stories.

Those who wished to continue posting comments under pseudonyms have had to “appeal” for the right to do so.

The goal? To make people think twice before publishing strongly worded comments — the kind that say as much about the poster as they do about the object of their commentary.

Despite the predictable howls from some readers who feel that the right to express an opinion without fear of reprisal is a big part of the appeal of the Internet, four months later the Huffington Post sees the move in positive terms.

The publisher’s community director Tim McDonald reports that the number of “faux” accounts in its system has gone way down – and the quality of discourse is up.

With this approach, perhaps “shameless” needn’t upstage  “shame” after all — and the benefits of interactivity and debate can be preserved at the same time.

The [dis]connect between content “quality” and online advertising.

Digiday’s Jack Marshall

I really appreciate the work of Jack Marshall, a reporter at marketing e-zine Digiday, who is helping to expose and explain the “brave new world” of online display advertising and how it has evolved into something that’s rife with problems.

Consider a recent column of Marshall’s titled “Is this the worst site on the Internet?”

In it, he notes that for “legitimate” online publishers that rely on advertising as their revenue model, that model is becoming a more daunting proposition with each passing day.

And a big reason is the emergence of other websites that are “gaming” the online system – not to mention the ad tech middlemen that are their willing accomplices.

Essentially, what’s happening is that ad dollars are being siphoned away from websites that provide professionally produced content and are going to sites that are explicitly constructed to serve up as many ad impressions as possible.

These sites contain little or no original content.

Marshall’s “Exhibit A” is Georgia Daily News, a website which purports to cover “news, traffic, sports, politics, entertainment, gossip and local events in Atlanta.”

As Marshall contends, “What it actually features is content ‘curated’ from elsewhere on the web, and some it has simply stolen from other major news sites” such as the Daily Mail.

Sizable chunks of the website’s content have nothing to do with Atlanta.

Considering the type of general news site it purports to be, GADailyNews.com doesn’t attract very much traffic at all.  And why would it, when it contains precious little information of value or interest to anyone actually seeking news about Atlanta?

But it sure does generate a lot of ad impressions.  According to Marshall, each article page on the site features seven display ad units – all of which refresh every 20 seconds or so.

In the two-minute span of time it took him to read an article about Katy Perry and John Mayer (content copied from an Australian news site), Marshall was served more than 40 ad impressions.

Marshall continues:

“One page has served me nearly 500 ads in just 20 minutes – and I couldn’t stop refreshing them even if I wanted to.”

[And these ads aren’t for B-list advertisers, either.  They’re for brands like American Airlines, Hilton Hotels, Charles Schwab and others.]
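Marshall’s numbers are roughly what simple arithmetic would predict, assuming all seven ad units refresh about every 20 seconds for the duration of a visit:

    # Rough check of the impression counts cited above.
    ad_units_per_page = 7
    refresh_interval_seconds = 20

    def impressions_served(seconds_on_page: int) -> int:
        refreshes = seconds_on_page // refresh_interval_seconds
        return ad_units_per_page * (refreshes + 1)   # +1 for the initial page load

    print(impressions_served(2 * 60))    # ~49 in two minutes ("more than 40")
    print(impressions_served(20 * 60))   # ~427 in twenty minutes ("nearly 500")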

What’s happening here, of course, is that websites and ad tech middlemen have figured out that the algorithms of even the “quality” ad vendors like Google, AdRoll, and Bizo can be gamed pretty easily to serve ads on a low-quality site like Georgia Daily News, which is owned by a single-person entity called Integrated News Media Corporation.

It’s hardly the type of media vehicle that big-brand advertisers would normally wish to use for advertising.  But thanks to the vagaries and complexity of the ad exchange landscape, they are.

For every Georgia Daily News site, there are hundreds of others like it that cobble together seemingly valuable content with a passably convincing set of audience characteristics.

Put it together, and it adds up to problems on two levels.

First, advertisers are paying for impressions that are near-worthless.

Second, since there are finite ad dollars available, legitimate online publishers are losing out on those funds, which are far more important to their well-being than they are for sites that don’t engage in any true journalism at all.

As Jack Marshall concludes:

“Thanks to fraudulent traffic, dubious sites and middlemen with low quality standards, life is only getting harder for those publishers with expensive content teams to support.”