When Google feels the need to go public about the state of the current ad revenue ecosystem, you know something’s up.
And “what’s up” is actually “what’s down.” According to a new study by Google, digital publishers are losing more than half of their potential ad revenue, on average, when readers set their web browser preferences to block cookies – those data files used to track the online activity of Internet users.
The impact of cookie-blocking is even bigger on news publishers, which forgo around 62% of potential ad revenue, according to the Google study.
The way Google conducted its investigation was to run a four-month test among roughly 500 global publishers (May to August 2019). Google disabled cookies for a randomly selected portion of each publisher’s traffic, which enabled it to compare results with and without cookie-blocking in effect.
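The holdout methodology described above can be sketched in a few lines of code. This is purely illustrative: the traffic split, session counts and revenue figures below are made-up numbers, not Google's data, and a real experiment would of course measure actual ad revenue rather than simulate it.

```python
# Hypothetical sketch of a cookie-blocking holdout test: split traffic
# at random, disable cookies for one slice, and compare average ad
# revenue per session between the two groups.
import random

random.seed(42)

def revenue_per_session(cookies_enabled: bool) -> float:
    # Synthetic revenue model (illustrative numbers only): sessions
    # with cookies monetize better because ads can be targeted.
    base = 1.00 if cookies_enabled else 0.45
    return base * random.uniform(0.8, 1.2)

sessions = 100_000
control = []    # cookies enabled
treatment = []  # cookies disabled
for _ in range(sessions):
    if random.random() < 0.5:
        control.append(revenue_per_session(True))
    else:
        treatment.append(revenue_per_session(False))

avg_control = sum(control) / len(control)
avg_treatment = sum(treatment) / len(treatment)
loss_pct = 100 * (1 - avg_treatment / avg_control)
print(f"Estimated revenue lost without cookies: {loss_pct:.1f}%")
```

The randomized split is what makes the comparison meaningful: because sessions land in each group by chance, any revenue gap can be attributed to the cookie-blocking itself rather than to differences in the audiences.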
It’s only natural that Google would be keen to understand the revenue impact of cookie-blocking. Despite its best efforts to diversify its business, Alphabet, Google’s parent company, continues to rely heavily on ad revenues – to the tune of more than 85% of its entire business volume.
While that share is down a little from the 90%+ figures of five or ten years ago thanks to diversification into cloud computing and hardware such as mobile phones, the dizzyingly high percentage of Google revenues coming from ad sales has barely budged in more recent times.
And yet … even with all the cookie-blocking activity that’s now going on, it’s likely that this isn’t the biggest threat to Google’s business model. That distinction would go to governmental regulatory agencies and lawmakers – the people who are cracking down on the sharing of consumer data that underpins the rationale of media sales.
The regulatory pressures are biggest in Europe, but consumer privacy concerns are driving similar efforts in North America as well.
Figuring that a multipronged effort makes sense in order to counteract these trends, this week Google aired a proposal to give online users more control over how their data is used in digital advertising, and is seeking comments and feedback from interested parties.
On a parallel track, it has also initiated a project dubbed “Privacy Sandbox” to give publishers, advertisers, technology firms and web developers a vehicle to share proposals that will, in the words of Google, “protect consumer privacy while supporting the digital ad marketplace.”
Well, readers – what do you think? Do these initiatives have the potential to change the ecosystem to something more positive and actually achieve their objectives? Or is this just another “fool’s errand” where attractive-sounding platitudes sufficiently (or insufficiently) mask a dimmer reality?
The efforts to craft a new trade agreement with the People’s Republic of China have run into some pretty major roadblocks in recent weeks and months.
Things came to another inflection point this week when President Trump announced that new tariffs would be imposed on more Chinese goods imported into the United States. As of September 1, pretty much all categories of Chinese imports will now be subject to tariffs.
If we look at the impact the protracted impasse has had on markets, the repercussions are plain to see. One result we’ve seen is that China has dipped from making up the largest portion of trade with the United States to being in third place now, behind Mexico and Canada.
But what’s the long-term prognosis for a trade deal with China? Recent world (and USA) statistics point to softening of the economy, which could have negative consequences across the board.
When it comes to perspectives on economic and business matters involving China and the Pacific Rim, I like to check in with my brother, Nelson Nones, who has lived and worked in the Far East for more than 20 years. He has first-hand experience working in the Chinese market and is keenly aware of the issues of intellectual property protection, which is a major bone of contention between the United States and China and is one of the factors in the trade negotiations. (Nelson runs a software company which has chosen to forgo the Chinese market because of regulations requiring software firms that set up joint ventures with Chinese companies to disclose their source code — something his firm will never do.)
I asked Nelson to share his thoughts about what he sees happening in the coming months. Here are his observations:
Chinese President Xi has a lot on his plate right now. It isn’t just the U.S. trade war but also the Hong Kong disturbances, U.S. arms sales to Taiwan, the U.S. sending warships through the Taiwan Strait and the South China Sea, and China’s domestic banking sector weakness, to name just some. Trump has also put President Xi in a tight spot by demanding (or getting) Xi’s assurances that China will buy more U.S. agricultural products and will enact legislation protecting foreign intellectual property.
In spite of his very substantial power, I predict that Xi will have a very tough time ramming Trump’s conditions down the throats of his countrymen.
I should mention that the biggest issue here is intellectual property protection. The draft agreement that China “almost” signed had assurances that IP protection laws would be enacted, but Xi apparently nixed that draft, whereupon the Chinese government stated that no government can promise to change its domestic laws when negotiating a treaty with a foreign country.
Technically, they’re right. For example, President Trump can’t commit to changing U.S. laws because only the Congress can do that under the constitutional separation of powers. Similarly, on paper, only China’s National People’s Congress (the national legislature) can change Chinese laws, and President Xi is not a member of the National People’s Congress. (Of course, this explanation conveniently overlooks the fact that both the Presidency and the National People’s Congress are subservient to the Communist Party of China, and that Xi is the General Secretary of the Communist Party, but still it’s technically correct.)
In view of all this, the natural Chinese instinct is to wait … and in this case, wait until the 2020 U.S. election and see what happens. If Trump is defeated for re-election, then perhaps many of Xi’s problems will disappear magically. On the other hand, if Trump stays in office maybe the pain that Trump’s China trade policy is inflicting on U.S. businesses and consumers will force Trump to lighten up a bit.
In other words, President Xi has much to gain and relatively little to lose by playing the waiting game for a while.
As for U.S. tariffs, those are causing Chinese businesses to adapt their supply chains by routing them through other East and Southeast Asian countries which are not subject to the tariffs. For instance, instead of sending products straight to the U.S., Chinese manufacturers are sending products to Vietnam or Thailand where a tiny bit of additional work is done – just enough to qualify for a “Made in Vietnam” or “Made in Thailand” label. (This adaptation partially explains Thailand’s large trade surplus which has made the Thai Baht one of the world’s best-performing currencies this year.)
These maneuvers actually provide a safety valve for both Xi and Trump. For Xi, it cushions the reduction in demand for Chinese exports. At the same time it puts some additional pressure on Trump because this type of safety valve does not really exist for U.S. exporters trying to evade reciprocal Chinese tariffs. But on the plus side for Trump, it tends to dampen the impact of higher tariffs pushing up U.S. producer and consumer prices.
If you ask me to bottom-line this, the trade problems look more like a protracted siege than an episode of brinksmanship.
How the siege is resolved depends on how strong Trump’s position will be after the 2020 election. If the Democrats continue with their leftward lurch, then Xi will eventually have to cave because Trump’s position will be strong (I’d say a 65% probability of re-election). But if the Democrats come to their senses and Trump continues shooting himself in the foot, then he’s in real danger of losing the election and Xi will come up the big winner (I’d give this a 35% probability as of today).
So there you have it: the prognosis from someone who is “on the ground” in East Asia. What are your thoughts? Are you in broad agreement or do you see things differently? Please share your observations with other readers here.
In the drive to keep the onslaught of fake e-mail communications under control, DMARC’s checks on incoming e-mail are an important weapon in the Internet police’s bag of tricks. A core weapon of cyber felons is impersonation, which is what catches most unwitting recipients unawares.
So … how is DMARC doing?
Let’s give it a solid C or C+.
DMARC, which stands for Domain-based Message Authentication, Reporting and Conformance, is a procedure that checks on the veracity of the senders of e-mail. Nearly 80% of all inboxes – that’s almost 5.5 billion – conduct DMARC checks, and nearly 750,000 domains apply DMARC as well.
Ideally, DMARC is designed to satisfy the following requirements to ensure as few suspicious e-mails as possible make it to the inbox:
Minimize false positives
Provide robust authentication reporting
Assert sender policy at receivers
Reduce successful phishing delivery
Work at Internet scale
Minimize complexity
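Under the hood, a DMARC policy is just a DNS TXT record published at `_dmarc.<domain>`, a semicolon-separated list of tag/value pairs that tells receivers how to handle mail that fails authentication. The sketch below parses one such record; the record text is a made-up example, not any real domain's published policy.

```python
# Minimal sketch of parsing a DMARC DNS TXT record into its tag/value
# pairs, per the tag-list syntax defined in RFC 7489. The example
# record and reporting address below are hypothetical.
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; pct=100; rua=mailto:reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # receiver policy: "none", "quarantine" or "reject"
```

The `p` tag is where enforcement lives: a domain publishing `p=none` merely monitors, while `p=quarantine` or `p=reject` actually instructs receivers to divert or refuse failing mail, which is the distinction behind the enforcement rates discussed below.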
But the performance picture is actually rather muddy.
According to a new study by cyber-security firm Valimail, people are being served nearly 3.5 billion suspicious e-mails each day. That’s because DMARC’s success rate of ferreting out and quarantining the faux stuff runs only around 20%. And while America has much better DMARC performance than other countries, the United States still accounts for nearly 40% of all suspicious e-mail that makes it through to inboxes due to the sheer volume of e-mails involved.
In developing its findings, Valimail analyzed data from billions of authentication requests and nearly 20 million publicly accessible DMARC and SPF (Sender Policy Framework) records. The Valimail findings also reveal that there’s a pretty big divergence in DMARC usage based on the type of entity. DMARC usage is highest within the U.S. federal government and large technology companies, where it exceeds 20% penetration. By contrast, it’s much lower in other commercial segments.
The commercial sector’s situation is mirrored in a survey of ~1,000 e-mail security and white-collar professionals conducted by GreatHorn, a cloud-native communication security platform, which found that nearly one in four respondents receive phishing or other malicious e-mails daily, and an additional ~25% receive them weekly. These include impersonations, payload attacks, business services spoofing, wire transfer requests, W2 requests and attempts at credential theft.
The GreatHorn study contains this eyebrow-raising finding as well: ~22% of the businesses surveyed have suffered a breach caused by malicious e-mail in the last quarter alone. The report concludes:
“There is an alarming sense of complacency at enterprises at the same time that cybercriminals have increased the volume and sophistication of their e-mail attacks.”
Interestingly, in its study Valimail finds that the government has the highest DMARC enforcement success rate, followed by U.S. technology and healthcare firms (but those two sectors lag significantly behind). It may be one of the few examples we have of government performance outstripping private practitioners.
Either way, much work remains to be done in order to reduce faux e-mail significantly more. We’ll have to see how things improve in the coming months and years.
Not all-smiles at the moment … Chinese leader Xi Jinping.
In China, it’s difficult to discern where private industry ends and the government begins. At some level, we’ve been aware of that conundrum for decades.
Still … opportunities for doing business in the world’s largest country have been a tempting siren call for American companies. And over the past 15+ years, conducting that business has seemed like the “right and proper” thing to do — what with China joining the G-8+5 economic powers along with incessant cheerleading by the U.S. Department of Commerce, abetted by proactive endeavors of other quasi-governmental groups promoting the interests of American commerce across the globe.
But it’s 2019 and circumstances have changed. It began with a change in political administrations in the United States several years ago, following which a great deal more credence has been given to the undercurrent of unease businesspeople have felt about the manner in which supposedly proprietary engineering and manufacturing technologies have suddenly popped up in China as if by magic, pulling the rug out from under American producers.
Nearly three years into the new presidential administration, we’re seeing evidence of this “new skepticism” begin to play out in concrete ways. One of the most eye-catching developments – and a stunning fall from grace – is Huawei Technologies Co., Ltd. (world headquarters: Shenzhen, China), one of the world’s largest makers of cellphones and high-end telecom equipment.
As recounted by NPR’s Weekend Edition reporter Emily Feng a few days ago, Huawei stands accused of some of the most blatant forms of technology-stealing. Recently, the Trump administration banned American companies from using Huawei equipment in U.S. 5G infrastructure and is planning to implement even more punitive measures that will effectively prevent U.S. companies from doing any business at all with Huawei.
Banning of Huawei equipment in U.S. 5G infrastructure isn’t directly related to the theft of intellectual property belonging to Huawei’s prospective U.S. suppliers. Rather, it’s a response to the perceived threat that the Chinese government will use Huawei equipment installed in U.S. 5G mobile networks to surreptitiously conduct espionage for military, political or economic purposes far into the future.
In other words, as one of the world’s largest telecom players, Huawei is perceived as a direct threat to non-Chinese interests not just on one front, but two: the demand side and the supply side. The demand-side threat is why the Trump administration has banned Huawei equipment in U.S. 5G infrastructure, and it has also publicly warned the U.K. government to implement a similar ban.
As for the supply side, the Weekend Edition report recounts the intellectual property theft experience of U.S.-based AKHAN Semiconductor when it started working with Huawei. AKHAN has developed and perfected an ingenious form of diamond-coated glass – a rugged engineered surface perfectly suited for smartphone screens.
Huawei expressed interest in purchasing the engineered glass for use in its own products. Nothing wrong with that … but Huawei used product samples provided by AKHAN under strict usage-and-return guidelines to reverse-engineer the technology, in direct contravention of those explicit conditions – and in violation of U.S. export control laws as well.
AKHAN discovered the deception because its product samples had been broken into pieces via laser cutting, and only a portion of them were returned to AKHAN upon demand.
When confronted about the matter, Huawei’s company officials in America admitted flat-out that the missing pieces had been sent to China. AKHAN enlisted the help of the FBI, and in the ensuing months was able to build a sufficient case that resulted in a raid on Huawei’s U.S. offices in San Diego.
The supply side and demand side threats are two fronts — but are related. One of the biggest reasons why Huawei kit has been selected, or is being considered, for deployment on 5G mobile networks worldwide is due to its low cost. The Chinese government, so the thinking goes, “seduces” telecom operators into buying the Huawei kit by undercutting all competitors, thereby gaining access to countless espionage opportunities. To maintain its financial footing Huawei must keep its costs as low as it can, and one way is to avoid R&D expenses by stealing intellectual property from would-be suppliers.
AKHAN is just the latest – if arguably the most dramatic – example of Huawei’s pattern of technology “dirty tricks” — others being a suit brought by Motorola against Huawei for stealing trade secrets (settled out of court), and T-Mobile’s suit for copying a phone-testing robot which resulted in Huawei paying millions of dollars in damages.
The particularly alarming – and noxious – part of the Huawei saga is that many of its employees in the United States (nearly all of them Chinese) weren’t so keen on participating in the capers, but found that their concerns and warnings went unheeded back home.
In other words – the directive was to get the technology and the trade secrets, come what may.
This kind of behavior is one borne from something that’s far bigger than a single company … it’s a directive that’s coming from “China, Inc.” Translation: The Chinese government.
The actions of the Trump administration regarding trade policy and protecting intellectual property can seem boorish, awkward and even clumsy at times. But in another sense, it’s a breath of fresh air after decades of the well-groomed, oh-so-proper “experts” who thought they were the smartest people in the room — but were being taken to the cleaners again and again.
What are your thoughts about “yesterday, today and the future” of trade, industrial espionage and technology transfer vis-à-vis China? Are we in a new era of tougher controls and tougher standards, or is this going to be only a momentary setback in China’s insatiable desire to become the world’s most important economy? Please share your thoughts and perspectives with other readers here.
Debris field from the Ethiopian Airlines plane crash (March 10, 2019).
It’s been exactly two months since the crash of the Ethiopian Airlines Boeing 737 MAX 8 that killed all 157 passengers and crew on board. But as far as Boeing’s PR response is concerned, it might as well have never happened.
Of course, sticking one’s corporate head in the sand doesn’t make problems go away — and in the case of Boeing, clearly the markets have been listening.
Since the crash, Boeing stock has lost more than $27 billion in market value — or nearly 15% — from its top value of $446 per share.
The problem is, the Ethiopian incident has laid bare stories of whistle blowers and ongoing maintenance issues regarding Boeing planes. But the company seems content to let these stories just hang out there, suspended in the air.
With no focused corporate response of any real coherence, it’s casting even greater doubt in the minds of the air traveling public about the quality and viability of the 737 planes — and Boeing aircraft in general.
Even if just 20% or 25% of the air traveling public ends up having bigger doubts, that would have (and is having) a big impact on the share price of Boeing stock.
And so the cycle of mistrust and reputational damage continues. What has Boeing actually done in the past few months to reverse the significant market value decline of the company? Whatever the company may or may not be undertaking isn’t having much of an impact on the “narrative” that’s taken shape about Boeing being a company that doesn’t “sweat the small stuff” with proper focus.
For an enterprise of the size and visibility of Boeing, being reactive isn’t a winning PR strategy. Waiting for the next shoe to drop before you develop and launch your response narrative doesn’t cut it, either.
Far from flying below radar, Boeing’s “non-response response” is actually saying something loud and clear. But in its case, “loud and clear” doesn’t seem to be ending up anyplace particularly good for the Boeing brand and the company’s reputation.
What are your thoughts about the way Boeing has handled the recent news about its model 737 aircraft? What do you think could have been done better? Please share your thoughts with other readers here.
Over the past year, Americans have been fed a fairly steady stream of news about the People’s Republic of China – and most of it hasn’t been particularly positive.
While Russia may get the more fevered news headlines because of the various political investigations happening in Washington, the current U.S. presidential administration hasn’t shied away from criticizing China on a range of issues – trade policy most recently, but also other matters such as alleged unfair technology transfer policies and the building of man-made islands in the South China Sea, which brings Chinese military power closer to other countries in the Pacific Rim.
The drumbeat of criticism could be expected to affect popular opinion about China – and that appears to be the case based on a just-published report from the Pew Research Center.
The Pew report is based on a survey of 1,500 American adults age 18 and over, conducted during the spring of 2018. It’s a survey that’s been conducted annually since 2012 using the same set of questions (and going back annually to 2005 for a smaller group of the questions).
The newest study shows that the opinions Americans have about China have become somewhat less positive over the past year, after having nudged higher in 2017.
The topline finding is this: today, ~38% of Americans have a favorable opinion of China, which is a drop of six percentage points from Pew’s 2017 finding of ~44%. We are now flirting with the same favorability levels that Pew was finding during the 2013-2016 period [see the chart above].
Drilling down further, the most significant concerns pertain to China’s economic competition, not its military strength. In addition to trade and tariff concerns, another area of growing concern is about the threat of cyber-attacks from China.
There are also the perennial concerns about the amount of U.S. debt held by China, as well as job losses to China; this has been a leading issue in the Pew surveys dating back to 2012. But even though debt levels remain a top concern, its raw score has fallen pretty dramatically over the past six years.
On the other hand, a substantial and growing percentage of Americans expresses worries about the impact of China’s growth on the quality of the global environment.
Interestingly, the proportion of Americans who consider China’s military prowess a bigger threat than its economic strength has dropped by a statistically significant seven percentage points over the past year – from 36% to 29%. Perhaps unsurprisingly, younger Americans age 18-29 are far less prone to have concerns over China’s purported saber-rattling – differing significantly from how senior-age respondents feel on this topic.
Taken as a group, eight issues presented by Pew Research in its survey revealed the following ranking of factors, based on whether respondents consider them to be “a serious problem for the United States”:
Large U.S. debt held by China: ~62% of respondents consider a “serious problem”
Cyber-attacks launched from China: ~57%
Loss of jobs to China: ~52%
China’s impact on the global environment: ~49%
Human rights issues: ~49%
The U.S. trade deficit with China: ~46%
Chinese territorial disputes with neighboring countries: ~32%
Tensions between China and Taiwan: ~21%
Notice that the U.S. trade deficit isn’t near the top of the list … but Pew does find that it is rising as a concern.
If the current trajectory of tit-for-tat tariff impositions continues to occur, I suspect we’ll see the trade issue being viewed by the public as a more significant problem when Pew administers its next annual survey one year from now.
Furthermore, now that the United States has just concluded negotiations with Canada and Mexico on a “new NAFTA” agreement, coupled with recent trade agreements made with South Korea and the EU countries, the administration’s targeting of China as “the last domino” becomes just that much more significant.
More detailed findings from the Pew Research survey can be viewed here.
Video is supposed to be the “great equalizer”: evidence that doesn’t lie — particularly in the case of chronicling law enforcement events.
From New York City and Chicago to Baltimore, Charleston, SC and dozens of places in between, there have been a number of “high profile” police incidents in recent years where mobile video has made it possible to go beyond the sometimes-contradictory “he said/she said” statements coming from officers and citizens.
There’s no question that it’s resulted in some disciplinary or court outcomes that may well have turned out differently in times before.
Numerous police departments have responded in a way best described as, “If you can’t beat them, join them.” They’ve begun outfitting their law enforcement personnel with police body cams.
The idea is that having a “third party” digital witness on the scene will protect both the perpetrator and the officer when assessments need to be made about conflicting accounts of what actually happened.
This tidy solution seems to be running into a problem, however. Some security experts are calling into question the ability of body cameras to provide reliable evidence – and it isn’t because of substandard quality in the video footage being captured.
Recently, specialists at the security firm Nuix examined five major brands of body cameras … and have determined that all of them are vulnerable to hacking.
The body cam suppliers in question are CEESC, Digital Ally, Fire Cam, Patrol Eyes, and VIEVU. The cameras are described by Nuix as “full-feature computers walking around on your chest,” and as such, require the same degree of security mechanisms that any other digital device operating in security-critical areas would need to possess.
But here’s the catch: None of the body cameras evaluated featured digital signatures on the uploaded footage. This means that there would be no way to confirm whether any of the video evidence might have been tampered with.
In other words, a skilled technician with nefarious intent could download, edit and re-upload content – all while avoiding giving any sort of indication that it had been revised.
These hackers could be operating on the outside … or they could be rogue officers inside a law enforcement department.
Another flaw uncovered by Nuix is that malware can infect the cameras in the form of malicious computer code being disguised as software updates – updates that the cameras are programmed to accept without any additional verification.
Even worse, once a hacker successfully breached a camera device, he or she could easily gain access to the wider police network, thereby causing a problem that goes much further than a single camera or a single police officer.
Thankfully, Nuix is a “good guy” rather than a “bad actor” in its experimentation. The company is already working with several of the body cam manufacturers to remedy the problems uncovered by its evaluation, so as to improve the ability of the cameras to repel hacking attempts.
But the more fundamental issue that’s raised is this: What other types of security vulnerabilities are out there that haven’t been detected yet?
It doesn’t exactly reinforce our faith in technology to ensure fairer, more honest and more transparent law enforcement activities. If video footage can’t be considered verified proof that an event happened or didn’t happen, have we just returned to Square One again, with people pointing fingers in both directions but with even lower levels of trust?
Hopefully not. But with the polarized camps we have at the moment, with people only too eager to blame the motives of their adversaries, the picture doesn’t look particularly promising …
You’d think that with the continuing cascade of news about the exposure of personal information, people would be more skittish than ever about sharing their data.
But this isn’t the case … and we have a 2018 study from marketing data foundation firm Acxiom to prove it. The report, titled Data Privacy: What the Consumer Really Thinks, is the result of survey information gathered in late 2017 by Acxiom in conjunction with the Data & Marketing Association (formerly the Direct Marketing Association).
The research, which presents results from an online survey of nearly 2,100 Americans age 18 and older, found that nearly 45% of the respondents feel more comfortable with data exchange today than they have in the past.
Among millennial respondents, well over half feel more comfortable about data exchange today.
Indeed, the report concludes that most Americans are “data pragmatists”: people who are open about exchanging personal data with businesses if the benefits received in return for their personal information are clearly stated.
Nearly 60% of Americans fall into this category.
On top of that, another 20% of the survey respondents report that they’re completely unconcerned about the collection and usage of their personal data. Among younger consumers, it’s nearly one-third.
When comparing Americans’ attitudes to consumers in other countries, we seem to be a particularly carefree bunch. Our counterparts in France and Spain are much more wary of sharing their personal information.
Part of the difference in views may be related to feelings that Americans have about who is responsible for data security. In the United States, the largest portion of people (~43%) believe that 100% of the responsibility for data security lies with consumers themselves, versus only ~6% who believe that the responsibility resides solely with brands or the government. (The balance of people think that the responsibility is shared between all parties.)
To me, the bottom-line finding from the Acxiom/DMA study is that people have become so conditioned to receiving the benefits that come from data exchange, they’re pretty inured to the potential downsides. Probably, many can’t even fathom going back to the days of true data privacy.
Of course, no one wishes for their personal data to be used for nefarious purposes, but who is willing to forego the benefits (be it monetary, convenience or comfort) that come from companies and brands knowing their personal information and their personal preferences?
And how forgiving would these people be if their personal data were actually compromised? From Target to Macy’s, quite a few Americans have already had a taste of this, but what is it going to take for such “data pragmatism” to seem not so practical after all?
Lawmakers’ cringeworthy questioning of Facebook’s Mark Zuckerberg calls into question the government’s ability to regulate social media.
Facebook CEO Mark Zuckerberg on Capitol Hill, April 2018. (Saul Loeb/AFP/Getty Images)
With the testimony on Capitol Hill last week by Facebook CEO Mark Zuckerberg, there’s heightened concern about the negative side effects of social media platforms. But in listening to lawmakers questioning Zuckerberg, it became painfully obvious that our Federal legislators have next to no understanding of the role of advertising in social media – or even how social media works in its most basic form.
Younger staff members may have written the questions for their legislative bosses, but it was clear that the lawmakers were ill-equipped to handle Zuckerberg’s alternately pat, platitudinous and evasive responses and to come back with meaningful follow-up questions.
Even the younger senators and congresspeople didn’t acquit themselves well.
It made me think of something else, too. The questioners – and nearly everyone else, it seems – are missing this fundamental point about social media: Facebook and other social media platforms aren’t much different from old-fashioned print media, commercial radio and TV/cable in that they all generate the vast bulk of their earnings from advertising.
It’s true that in addition to advertising revenues, print publications usually charge subscribers for paper copies of their publications. In the past, this was partly because they could – but also to help defray the cost of paper, ink, printing and physical distribution of their product to news outlets or directly to homes.
Commercial radio and TV haven’t had those costs, but neither did they have a practical way of charging their audiences for broadcasts – at least not until cable and satellite came along – and so they made their product available to their audiences at no charge.
The big difference between social media platforms and traditional media is that social platforms can do something that the marketers of old could only dream about: target their advertising based on personally identifiable demographics.
Think about it: Not so many years ago, the only demographics available to marketers came from census publications, which by law cannot reveal any personally identifiable information. Moreover, the U.S. census is taken only every ten years, so the data ages pretty quickly.
Beyond census information, advertisers using print media could rely on audit reports from ABC and BPA. If it was a business-to-business publication, some demographic data was available based on subscriber-provided information (freely provided in exchange for receiving those magazines free of charge). But in the case of consumer publications, the audit information wouldn’t give an advertiser anything beyond the number of copies printed and sold, and (sometimes) a geographic breakdown of where mail subscribers lived.
Advertisers using radio or TV media had to rely on researchers like Nielsen — but that research surveyed only a small sample of the audience.
What this meant was that the only way advertisers could “move the needle” in a market was to spend scads of cash on broadcasting their messages to the largest possible audience. As a connecting mechanism, this is hugely inefficient.
The value proposition that Zuckerberg’s Facebook and other social media platforms provide is the ability to connect advertisers with more people for less spend, due to these platforms’ abilities to use personally identifiable demographics for targeting the advertisements.
Want to find people who enjoy doing DIY projects but who live just in areas where your company has local distribution of your products? Through Facebook, you can narrow-cast your reach by targeting consumers involved with particular activities and interests in addition to geography, age, gender, or whatever other characteristics you might wish to use as filters.
That’s massively more efficient and effective than relying on something like median household income within a zip code or census tract. It also means that your message will be more targeted — and hence more relevant — to the people who see it.
All of this is immensely more efficient for advertisers, which is why social media advertising (in addition to search advertising on Google) has taken off while other forms of advertising have plateaued or declined.
But there’s a downside: Social media is being manipulated (“abused” might be the better term) by “black hats” – people who couldn’t do such things in the past using census information or Nielsen ratings or magazine audit statements.
Here’s another reality: Facebook and other social media platforms have been so fixated on their value proposition that they failed to conceive of — or plan for — the behavior inspired by the darker impulses of humanity: people who feel no guilt about exploiting security vulnerabilities for financial or political gain.
Facebook COO Sheryl Sandberg interviewed on the Today Show, April 2018.
Now that that reality has begun to sink in, it’ll be interesting to see how Mark Zuckerberg and Sheryl Sandberg of Facebook — not to mention other social media business leaders — respond to the threat.
They’ll need to do something — and it’ll have to be something more compelling than CEO Zuckerberg’s constant refrain at the Capitol Hill hearings (“I’ll have my team look into that.”) or COO Sandberg’s litany (“I’m glad you asked that question, because it’s an important one.”) on her parade of TV/cable interviews. These companies’ share prices will continue to be pummeled until investors understand what changes will be made that actually achieve concrete results.
I don’t know about you, but I’ve never forgotten the late afternoon of August 14, 2003, when problems with the North American power grid left people in eight states stretching from New England to Detroit suddenly without power.
Fortunately, my company’s Maryland offices were situated about 100 miles beyond the southernmost extent of the blackout. But it was quite alarming to watch the power outage spread across a map of the Northeastern and Great Lakes States (plus Ontario) in real-time, like some sort of creeping blob from a science fiction film.
“Essential services remained in operation in some … areas. In others, backup generation systems failed. Telephone networks generally remained operational, but the increased demand triggered by the blackout left many circuits overloaded. Water systems in several cities lost pressure, forcing boil-water advisories to be put into effect. Cellular service was interrupted as mobile networks were overloaded with the increase in volume of calls; major cellular providers continued to operate on standby generator power. Television and radio stations remained on the air with the help of backup generators — although some stations were knocked off the air for periods ranging from several hours to the length of the entire blackout.”
Another (happier) thing I remember from this 15-year-old incident is that rather than causing confusion or bedlam, the massive power outage brought out the best in people. This anecdote from the blackout was typical: Manhattanites opening their homes to workers who couldn’t get to their own residences for the evening.
For most of the 50 million+ Americans and Canadians affected by the blackout, power was restored after about six hours. But for some, it would take as long as two days for power restoration.
Upon investigation of the incident, it was discovered that high temperatures and humidity across the region had increased energy demand as people turned on air conditioning units and fans. This caused power lines to sag as higher currents heated the lines. The precipitating cause of the blackout was a software glitch in the alarm system in a control room of FirstEnergy Corporation, causing operators to be unaware of the need to redistribute the power load after overloaded transmission lines had drooped into foliage.
In other words, what should have been, at worst, a manageable localized blackout cascaded rapidly into a collapse of the entire electric grid across multiple states and regions.
But at least the incident was born of human error, not nefarious motives.
That 2003 experience should make anyone hearing last week’s testimony on Capitol Hill about the risks faced by the U.S. power grid think long and hard about what could happen in the not-so-distant future.
The bottom line of the testimony presented in the hearings is that malicious cyberattacks are becoming more sophisticated – and hence more capable of damaging American infrastructure. The Federal Energy Regulatory Commission (FERC) cautions that hackers increasingly threaten U.S. utilities ranging from power plants to water processing systems.
Similar warnings come from the Department of Homeland Security, which reports that hackers have been attacking the U.S. electric grid, power plants, transportation facilities and even targets in commercial sectors.
The Energy Department goes even further, reporting in 2017 that the United States electrical power grid is in “imminent danger” from a cyber-attack. To underscore this threat, the Department contends that more than 100,000 cyber-attacks are being mounted every day.
With so many attacks of this kind happening on so many fronts, one can’t help but think that it’s only a matter of time before we face a “catastrophic event” that’s even more consequential than the one that affected the power grid in 2003.
Even more chilling, if a future event is born of intentional sabotage – as seems quite likely based on recent testimony – it’s pretty doubtful that remedial action could be taken as quickly or as effectively as the response to an accidental incident like the one that happened in 2003.
Put yourself in the saboteurs’ shoes: If your aim is to bring U.S. infrastructure to its knees, why plan for a one-off event? You’d definitely want to build in ways to cause cascading problems – not to mention planting additional “land-mines” to frustrate attempts to bring systems back online.
Contemplating all the implications is more than sobering — it’s actually quite frightening. What are your thoughts on the matter? Please share them with other readers.