Recently, some interesting research findings were released by Nielsen as part of its latest round of Total Audience Reporting. The analysis shows that even as the number of channels received by U.S. TV households has increased to an average of ~192 in 2018 — up nearly 50% from a decade earlier — households actually watch, on average, fewer than 7% of them.
Furthermore, the number of channels watched has declined in absolute terms, not merely as a percentage share. The average number of channels tuned in by households as of 2018 (~13) was lower than it had been a decade earlier, when the average was just over 17.
These findings underscore the continuing fragmentation of the linear TV ecosystem even as the number of alternative viewing choices increases, thanks to non-linear TV options such as OTT (Internet-direct) and VOD (video-on-demand) subscription services.
And here’s another takeaway from the research: These data underscore how dispensable most linear TV channels — including ones affiliated with legacy networks — have become for most TV households.
What are your habits regarding watching linear TV these days? Do your practices mirror the Nielsen findings? How have your habits changed over the past few years? Please share your experiences with other readers.
Decades ago, people had a choice of natural fibers like cotton, wool and silk. Each of these cloths had positive attributes … as well as negative ones.
Cotton is comfortable to wear, but wrinkles when washed. Wool is great for the cold weather months, but needs to be dry-cleaned. What’s more, moths and other insects love to burrow their way through woolen clothing, sending many a wool garment to the trash far too soon.
Silk? It has all the detriments of cotton and wool without any of the positives — except that it looks rich and expensive if one wishes to put on airs or otherwise “make a statement.”
Beginning in the 1940s, polyesters and other synthetic fibers were introduced, giving rise to all sorts of new clothing items that touted a variety of positive attributes: They washed up fine, didn’t need ironing, and kept their shape over time.
Never mind the fact that the clothing didn’t breathe, and made more than a few people stink to the heavens after wearing a synthetic cloth shirt for barely an hour on a hot summer day.
Along these same lines, today we have synthetic media: content that is algorithmically generated (or modified), often through collaboration between people and machines.
In its earliest incarnations, synthetic media was a blend of “real” and “faux” components. Think of a newscast with your favorite, very real anchor person … but the background, screens and graphics are computer-generated.
But things have gone much further than that in recent times. Text, photography and videos are being created by software with such precision and seeming authenticity that it’s nearly impossible to determine what content is “real” versus what has been “synthesized.”
On the plus side, content can be automatically translated and delivered in multiple languages to different audiences spanning the world, bringing more news and information to more people simultaneously. But what if the avatar (host) could be customized to be more “familiar” to different audiences — and therefore more engaging and believable to them?
There’s a flipside to all of this innovation. So-called “deepfakes” (a recent term that took no time at all to be added to the major dictionary databases) harness digital technology to superimpose faces onto video clips in ways that are so realistic, they appear to be totally authentic.
Considering the advances in the technology, one can only imagine the plethora of “news” items that will be unleashed into cyberspace and on social media platforms in the coming months and years. Most likely, they’ll have the effect of making more than a few people suspicious of all news and information — regardless of the source.
Which brings us back to synthetic fabrics. They’re with us and always will be; there’s no turning back from them. But people have learned how to use them for what makes sense, and eschew the rest. We need to figure out how to do the same with synthetic media.
Canadian interactive media and search engine specialist extraordinaire Gord Hotchkiss is one of my favorite columnists writing regularly on marketing topics. Invariably he does a fine job “connecting the dots” between seemingly disparate points — often drawing thought-provoking conclusions from them.
In short, a Hotchkiss column is one that is always worth reading. In his latest piece he starts out with a bold pronouncement:
“When the internet ushered in an explosion of information in the mid-to late-1990s there were many — I among them — who believed humans would get smarter.
What we didn’t realize then is that the opposite would eventually prove to be true.”
His point is that information technology has begun to change the time-honored ways humans are hard-wired to think: both fast and slow. In essence, mental processing involves two loops: the “fast” loop governs our instinctive responses to situations, whereas the “slow” loop is a more deliberate process of discerning reality.
In Hotchkiss’ view, people need both loops – especially now, considering the complexity of the world.
A more complex world requires more time to absorb and come to terms with that complexity. But when the focus is only on thinking “fast,” the results aren’t pretty. As he observes:
“If we could only think fast, we’d all believe in capital punishment, extreme retribution, and eye-for-eye retaliation. We would be disgusted and pissed off almost all the time. We would live in a Hobbesian State of Nature [where] the ‘natural condition of mankind’ is what would exist if there were no government, no civilization, no laws, and no common power to restrain human nature.
The state of nature is a ‘war of all against all’ in which human beings constantly seek to destroy each other in an incessant pursuit for power. Life in the state of nature is ‘nasty, brutish and short.’”
Do any of us wish to live in a world like that? One would think not.
But here’s where Hotchkiss feels things have gone off the rails in recent times. The Internet and social media have delivered connection and reaction speeds faster than anything before in our lives and in our culture:
“The Internet lures us into thinking with half a brain … and the half we’re using is the least thoughtful, most savage half … We are now living in a pinball culture, where the speed of play determines that we have to react by instinct. There is no time left for thoughtfulness.”
In such an environment, can we be all that surprised at the sorry result? Hotchkiss, for one, isn’t, noting:
“With its dense interconnectedness, the Internet has created a culture of immediate reaction. We react without all the facts. We are disgusted and pissed off all the time. This is the era of ‘cancel’ and ‘callout’ culture. The court of public opinion is now less like an actual court and more like a school of sharks in a feeding frenzy.”
Not that every interaction is like that, of course. Think of social media posts: many — perhaps most — are wonderfully charming, even cloyingly affectionate.
Most people are quick to point out that there’s this good side to social media, too – and in that sense, social media merely reflects the best and worst of human nature.
But regardless of whether it’s negative or positive, pretty much all interactive media lives in the realm of “thinking fast.” All of it is digested too quickly. Too often it’s empty calories – the nutritional equivalent of salt-and-vinegar potato chips or cotton candy.
Hotchkiss’ point is that interactive communications and media have effectively hijacked what’s necessary for humans to properly pause and reflect in the “slow thinking” lane, and he leaves us with this warning:
“It took humans over five thousand years to become civilized. Ironically, one of our greatest achievements is dissembling that civilization faster than we think. Literally.”
I’ve blogged before about the major struggles of the so-called alt-weekly press in recent times as the Internet has upended both the business model and the editorial mission of such papers.
But what about urban commuter publications? These are the tabloid freebies that sprang up over the decades to serve the daily public transit population in large urban areas, offering quick-read news and entertainment during subway, train and bus commutes.
Unlike the alt-weeklies with their often-edgy or otherwise counterculture editorial slant, the commuter tabloids were generally more conventional in their content — focusing less on controversial POV topics and instead on “what’s happening” in headline news and on the dining, arts and entertainment front.
One such publication that I came to know quite well was Skyway News — named after the iconic skyway system in downtown Minneapolis — where professionals could grab a copy of the tabloid while dashing off to grab their public transport. For me, reading Skyway News was a way to pass the time while taking my 35-minute bus commute (yes – it took that long to travel just three miles in the city during rush hour).
Alas, Skyway News, which debuted in 1970, eventually went the way of so many alt-weekly papers. First it tried expanding its circulation (and editorial focus) to cover residential Northeast Minneapolis, changing its name to The Journal in the process … but finally shut down for good late last year.
Still, it was an amazing 48-year run for a paper that never had a circulation exceeding 30,000.
This week, we’re hearing news that one of the most successful of the urban commuter tabloid ventures has bitten the dust, too. In this case it’s Washington DC’s vaunted Express, a free commuter tabloid published by the Washington Post since 2003.
In his customary colorful way, Dan Caccavaro – the tabloid’s founding editor who remained in that position for the entire 16 years of the publication’s existence – explained to readers what was behind the paper’s demise:
“When we launched in 2003, there was no such thing as an iPhone. It would be another year before Harvard students would start using a novel social network called Facebook to keep tabs on their classmates. No one was tweeting anything – or Instagramming or Snapchatting. And most of us still mocked our ‘CrackBerry’-addicted friends who just couldn’t wait until they got to work to check their email.”
The headline of Caccavaro’s editorial says it all: “Hope you enjoy your stinkin’ phones.”
While circulation of the Express had declined from its height of nearly 200,000 copies to around 130,000, and while the paper’s finances had slipped into the red, the death knell came when the DC metro system introduced Wi-Fi service on its trains. With that move, the Express lost its ability to hold the attention of DC’s metro commuters.
Whereas at one time the Express and its quick-read news format was “an integral part of the morning commute for Washingtonians,” the ability for people to stay online during their commute effectively rendered the paper irrelevant.
As Caccavaro explained in his final editorial salvo:
“It wasn’t unusual in [the] early days to see two-thirds of riders on a rush-hour train reading Express … The appetite for Express was so great, in fact, that we more than once considered printing an afternoon edition.
This Monday morning as I rode the train to work, I was struck by a very different observation. Three people on my crowded Blue Line train were reading Express … one man had his nose in an old-fashioned book. Almost everyone else was staring at a phone.”
What’s particularly ironic is that the Express, with its lively, quick-read character and attractive, colorful layout, was a precursor of the kind of news and information that everyone now expects to be fed continuously to their devices. In acclimating a generation of readers to being quickly informed, entertained and pleasantly distracted during their commutes, Express actually sowed the seeds of the wholesale shift to mobile screens for receiving information in the same fashion.
With the closure of Express, there can’t be more than a handful of urban commuter tabloids left in existence in America. I can’t think of a single one. But if you’re aware of any, please enlighten us – and let us know what might be the secret behind their continuing relevance.
What are those sequential trends? The HBR article outlines five of them and dubs them “eras,” each of them evolving with increasing rapidity:
Mass marketing (up through the 1970s) – The era of mass production, scale and distribution.
Marketing segmentation (1980s) – More sophisticated research enabling marketers to target customers in niche segments.
Customer-level marketing (1990s and 2000s) – Advances in enterprise IT make it possible to target individuals and aim to maximize customer lifetime value.
Loyalty marketing (2010s) – The era of CRM, tailored incentives and advanced customer retention.
Relevance marketing (emerging) – Mass communication to the previously unattainable “Segment of One.”
Clearly, it’s technology that has been the catalyst for change as we migrate from one era to the next. Mass marketing was a staple for the better part of 40 years, what with radio/TV and newspaper advertising being paramount. But subsequent eras have come along much more quickly as we’ve moved from market segmentation to customer-level marketing and loyalty marketing.
As for the emerging era of “relevance marketing,” new techniques are enabling marketers to exploit explicit data by name (such as previous purchase history and other known information) along with implicit data (additional information that can be inferred by behavior).
The question is whether this kind of “relevance” will engender long-term wins with today’s customers. The same technology that enables advertisers to target “Segments of One” is what enables those very targets to weigh the worth of those messages, discounts and offers so that they can find the best “deal” for themselves in their exact moment of need.
As far as the customer is concerned, wholesale digitization means that last week’s “preferred vendor” could be next week’s “reject” — with “loyalty” left by the wayside.
The danger is that for the seller, it can rapidly become a “race to the bottom” as buyers’ spontaneity erodes profit margins while the brand goodwill dissipates as quickly as it was created.
Marketing thought leaders Jim Lecinski, Gord Hotchkiss and several others have referred to this as the “zero moment of truth” – and in this case the “zero” may also be referring to the seller’s profit margin after we’ve progressed through the five eras of marketing that bring us to the “Segment of One.”
What are your thoughts about where marketing is ending up now that technology has given companies the power to micro-target — particularly if it means profit margins declining to their own “micro” levels? Please share your thoughts with other readers.
When Google feels the need to go public about the state of the current ad revenue ecosystem, you know something’s up.
And “what’s up” is actually “what’s down.” According to a new study by Google, digital publishers are losing more than half of their potential ad revenue, on average, when readers set their web browser preferences to block cookies – those data files used to track the online activity of Internet users.
The impact of cookie-blocking is even bigger on news publishers, which are forgoing ad revenues of around 62%, according to the Google study.
Google conducted its investigation by running a four-month test among ~500 global publishers (May to August 2019). It disabled cookies for a randomly selected portion of each publisher’s traffic, which enabled it to compare ad revenues with and without cookies.
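Google hasn’t published its exact computation, but the randomized-holdout design it describes can be sketched in a few lines of Python. Everything below — the session data, revenue figures and function name — is hypothetical, purely to illustrate how such a comparison works:

```python
import random

def estimate_cookie_revenue_impact(sessions):
    """Given (has_cookies, ad_revenue) pairs from a randomized split,
    return the fractional revenue loss when cookies are disabled."""
    with_cookies = [rev for has, rev in sessions if has]
    without_cookies = [rev for has, rev in sessions if not has]
    avg_with = sum(with_cookies) / len(with_cookies)
    avg_without = sum(without_cookies) / len(without_cookies)
    return 1 - avg_without / avg_with

# Toy data: randomly assign sessions to a cookie-disabled arm, with
# cookie-less sessions earning roughly half as much ad revenue.
random.seed(42)
sessions = []
for _ in range(10_000):
    has_cookies = random.random() < 0.5
    base = random.uniform(0.8, 1.2)   # hypothetical per-session revenue
    sessions.append((has_cookies, base if has_cookies else base * 0.48))

loss = estimate_cookie_revenue_impact(sessions)
print(f"Estimated revenue loss without cookies: {loss:.0%}")
```

The key point of the design is the random assignment: because sessions land in each arm by chance, the revenue gap can be attributed to the cookies themselves rather than to differences between the audiences.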
It’s only natural that Google would be keen to understand the revenue impact of cookie-blocking. Despite its best efforts to diversify its business, Alphabet, Google’s parent company, continues to rely heavily on ad revenues – to the tune of more than 85% of its entire business volume.
That percentage is down slightly from the 90%+ figures of five or ten years ago, but despite diversification into cloud computing and hardware such as mobile phones, the dizzyingly high share of Google’s revenues coming from ad sales has barely budged in recent times.
And yet … even with all the cookie-blocking activity that’s now going on, it’s likely that this isn’t the biggest threat to Google’s business model. That distinction would go to governmental regulatory agencies and lawmakers – the people who are cracking down on the sharing of consumer data that underpins the rationale of media sales.
The regulatory pressures are biggest in Europe, but consumer privacy concerns are driving similar efforts in North America as well.
On a parallel track, Google has also initiated a project dubbed “Privacy Sandbox” to give publishers, advertisers, technology firms and web developers a vehicle to share proposals that will, in the words of Google, “protect consumer privacy while supporting the digital ad marketplace.”
Well, readers – what do you think? Do these initiatives have the potential to change the ecosystem to something more positive and actually achieve their objectives? Or is this just another “fool’s errand” where attractive-sounding platitudes sufficiently (or insufficiently) mask a dimmer reality?
There’s no question that “native advertising” – paid editorial content – has become a popular “go-to” marketing tactic. After all, it’s based on the time-tested notion that people don’t like advertising, and they’re more likely to pay attention to information that looks more like a news article than an ad.
Back in the days of print-only media, paid editorial placements were often labeled as “advertorials.” But these days we’re seeing a plethora of ways to label them – whether identified as “sponsored content,” “paid posts,” or using some kind of lead-in descriptor such as “presented by …”
Behind all of the verbal gymnastics is the notion that people may not easily distinguish native advertising from true editorial if the identification can be kept somewhat euphemistic. At the same time, the verbal “sleight of hand” raises concerns about the obfuscation that seems to be going on.
These dynamics have been tested. One such test, conducted several years ago by ad tech company TripleLift, used biometric eye-tracking to see how people would view the same piece of native advertising carrying different disclosure labels.
The results were revealing. Here are the percentages of participants who saw each ad, based on how the content was labeled:
“Presented by” labeling: ~39% saw the content
“Sponsored by” labeling: ~29%
“Promoted by” labeling: ~26%
“Brought to you by” labeling: ~24%
“Advertisement” labeling: ~23%
Notice that the content labeled “advertisement” was noticed the least often, providing yet more confirmation that people ignore ads. When advertisers used softer, fuzzier terms like “presented by” and “sponsored by,” the content was noticed substantially more often.
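The size of that lift can be computed directly from the TripleLift figures listed above. A quick sketch (percentages approximated from the list, using the “advertisement” label as the baseline):

```python
# Approximate visibility rates from the TripleLift eye-tracking test.
visibility = {
    "Presented by": 0.39,
    "Sponsored by": 0.29,
    "Promoted by": 0.26,
    "Brought to you by": 0.24,
    "Advertisement": 0.23,
}

# Relative lift of each label versus the explicit "Advertisement" label.
baseline = visibility["Advertisement"]
for label, rate in visibility.items():
    lift = (rate - baseline) / baseline
    print(f"{label!r}: {lift:+.0%} vs. 'Advertisement'")
```

By this measure, “presented by” content was seen roughly 70% more often than content flatly labeled “advertisement” — a substantial payoff for a softer word choice, which is exactly what makes the labeling debate contentious.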
It comes as little surprise that those same “presented by” and “sponsored by” labels are also the most potentially confusing to people regarding whether the item is paid content. And when people find out the truth, they tend to feel deceived.
Members of the Association of National Advertisers look at it the same way. In an ANA survey of its members conducted several years ago, two-thirds of the respondents agreed that there should be “clear disclosure” of native ads – even if there’s a lack of consensus regarding who should be responsible for the labeling or what constitutes “clear” disclosure.
Asked which labeling describes native ad disclosure “very well,” here’s what the ANA survey found:
“Advertisement”: 62% say this labeling describes native ad placements “very well”
“Paid content”: 37%
“Paid posts”: 34%
“Sponsored by”: 31%
“Native advertising”: 12%
“Presented by”: 11%
“Promoted by”: 11%
“Branded content”: 8%
“Featured partner”: 8%
Considering that the findings are all over the map, it would be nice if a universal method of disclosure could be devised. But the language that’s agreed upon shouldn’t scare away readers, since in so many cases native advertising isn’t directly pitching a product or service. Labeling such content “advertising” would be as much of a misnomer as failing to divulge the company paying for the placement.
My personal preference for adopting consistent labeling language among the options above would be “Sponsored by …” What’s yours?