China-bashing is taking its toll.

Over the past year, Americans have been fed a fairly steady stream of news about the People’s Republic of China – and most of it hasn’t been particularly positive.

While Russia may get the more fevered news headlines because of the various political investigations happening in Washington, the current U.S. presidential administration hasn’t shied away from criticizing China on a range of issues – trade policy most recently, but also alleged unfair technology-transfer practices and the construction of man-made islands in the South China Sea, which brings Chinese military power closer to other countries in the Pacific Rim.

The drumbeat of criticism could be expected to affect popular opinion about China – and that appears to be the case based on a just-published report from the Pew Research Center.

The Pew report is based on a survey of 1,500 American adults, conducted during the spring of 2018. It’s a survey that’s been conducted annually since 2012 using the same set of questions (with a smaller subset of the questions going back to 2005).

The newest study shows that the opinions Americans have about China have become somewhat less positive over the past year, after having nudged higher in 2017.

The topline finding is this: today, ~38% of Americans have a favorable opinion of China – a drop of six percentage points from Pew’s 2017 finding of ~44%. We are now flirting with the same favorability levels that Pew was finding during the 2013-2016 period.

Drilling down further, the most significant concerns pertain to China’s economic competition, not its military strength. In addition to trade and tariff concerns, another area of growing concern is the threat of cyber-attacks from China.

There are also the perennial concerns about the amount of U.S. debt held by China, as well as job losses to China; these have been leading issues in the Pew surveys dating back to 2012. But even though the debt level remains a top concern, its raw score has fallen dramatically over the past six years.

On the other hand, a substantial and growing percentage of Americans expresses worries about the impact of China’s growth on the quality of the global environment.

Interestingly, the proportion of Americans who consider China’s military prowess a bigger threat than its economic strength has dropped by a statistically significant seven percentage points over the past year – from 36% to 29%. Perhaps unsurprisingly, younger Americans age 18-29 are far less likely to be concerned about China’s purported saber-rattling – differing significantly from senior-age respondents on this topic.

Taken as a group, the eight issues presented by Pew Research in its survey rank as follows, based on the share of respondents who consider each one “a serious problem for the United States”:

  • Large U.S. debt held by China: ~62% of respondents consider it a serious problem
  • Cyber-attacks launched from China: ~57%
  • Loss of jobs to China: ~52%
  • China’s impact on the global environment: ~49%
  • Human rights issues: ~49%
  • The U.S. trade deficit with China: ~46%
  • Chinese territorial disputes with neighboring countries: ~32%
  • Tensions between China and Taiwan: ~21%

Notice that the U.S. trade deficit isn’t near the top of the list … but Pew does find that it is rising as a concern.

If the current trajectory of tit-for-tat tariff impositions continues, I suspect we’ll see the trade issue being viewed by the public as a more significant problem when Pew administers its next annual survey one year from now.

Furthermore, now that the United States has just concluded negotiations with Canada and Mexico on a “new NAFTA” agreement – on top of recent trade agreements with South Korea and the EU countries – the administration’s targeting of China as “the last domino” becomes that much more significant.

More detailed findings from the Pew Research survey can be viewed here.

Keeping law enforcement on the level.

Let’s go to the videotape … or not.

Video is supposed to be the “great equalizer”: evidence that doesn’t lie — particularly in the case of chronicling law enforcement events.

From New York City and Chicago to Baltimore, Charleston, SC and dozens of places in between, there have been a number of “high-profile” police incidents in recent years where mobile video has made it possible to go beyond the sometimes-contradictory “he said/she said” statements coming from officers and citizens.

There’s no question that it has resulted in some disciplinary or court outcomes that may well have turned out differently in earlier times.

Numerous police departments have responded in a way best described as, “If you can’t beat them, join them”: they’ve begun outfitting their law enforcement personnel with police body cams.

The idea is that having a “third party” digital witness on the scene will protect both the citizen and the officer when assessments need to be made about conflicting accounts of what actually happened.

This tidy solution seems to be running into a problem, however. Some security experts are calling into question the ability of body cameras to provide reliable evidence – and it isn’t because of substandard quality in the video footage being captured.

Recently, specialists at the security firm Nuix examined five major brands of police body cameras … and determined that all of them are vulnerable to hacking.

The body cam suppliers in question are CEESC, Digital Ally, Fire Cam, Patrol Eyes, and VIEVU. The cameras are described by Nuix as “full-feature computers walking around on your chest,” and as such, require the same degree of security mechanisms that any other digital device operating in security-critical areas would need to possess.

But here’s the catch: None of the body cameras evaluated featured digital signatures on the uploaded footage.  This means that there would be no way to confirm whether any of the video evidence might have been tampered with.

In other words, a skilled technician with nefarious intent could download, edit and re-upload content – without leaving any indication that it had been revised.
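
To make the gap concrete, here’s a minimal sketch – in Python, using the third-party cryptography package – of what digitally signed uploads could look like. The key handling and function names are illustrative assumptions on my part, not any vendor’s actual implementation:

```python
# Minimal sketch: tamper-evident video uploads using Ed25519 signatures.
# Requires the third-party "cryptography" package (pip install cryptography).

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real deployment the key pair would be provisioned per device,
# not generated ad hoc like this.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

def sign_footage(video_bytes: bytes) -> bytes:
    """Sign the raw footage at capture/upload time."""
    return camera_key.sign(video_bytes)

def verify_footage(video_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the footage matches its original signature."""
    try:
        public_key.verify(signature, video_bytes)
        return True
    except InvalidSignature:
        return False

footage = b"...raw video bytes..."
sig = sign_footage(footage)
assert verify_footage(footage, sig)             # untouched footage passes
assert not verify_footage(footage + b"x", sig)  # any edit breaks the signature
```

The design point: only the camera holds the private key, so someone who edits and re-uploads footage cannot produce a valid signature for the altered file.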

These hackers could be operating on the outside … or they could be rogue officers inside a law enforcement department.

Another flaw uncovered by Nuix is that malware can infect the cameras by masquerading as software updates – updates that the cameras are programmed to accept without any additional verification.
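
Guarding against that flaw is conceptually straightforward, even if retrofitting it isn’t. Here’s a hedged sketch of an updater that refuses any package whose signature doesn’t verify against the vendor’s published key; the apply_update routine is a hypothetical placeholder:

```python
# Sketch: refusing unsigned or tampered software updates. Assumes the vendor
# signs each package with Ed25519 and ships the public key with the device.
# Requires the third-party "cryptography" package.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def apply_update(package: bytes) -> None:
    """Hypothetical placeholder for the device-specific flashing routine."""
    print(f"Installing {len(package)} bytes...")

def install_update(package: bytes, signature: bytes,
                   vendor_key: Ed25519PublicKey) -> None:
    try:
        # Verify BEFORE touching the device -- the step the tested cameras skip.
        vendor_key.verify(signature, package)
    except InvalidSignature:
        raise RuntimeError("Update rejected: signature verification failed")
    apply_update(package)
```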

Even worse, once a hacker breaches a camera device, he or she could easily gain access to the wider police network, causing a problem that goes much further than a single camera or a single police officer.

Thankfully, Nuix is a “good guy” rather than a “bad actor” in its experimentation. The company is already working with several of the body cam manufacturers to remedy the problems uncovered by its evaluation, making the cameras better able to repel hacking attempts.

But the more fundamental issue that’s raised is this: What other types of security vulnerabilities are out there that haven’t been detected yet?

It doesn’t exactly reinforce our faith in technology to ensure fairer, more honest and more transparent law enforcement activities. If video footage can’t be considered verified proof that an event happened or didn’t happen, have we returned to Square One, with people pointing fingers in both directions but with even lower levels of trust?

Hopefully not. But with the polarized camps we have at the moment, and people only too eager to impugn the motives of their adversaries, the picture doesn’t look particularly promising …

Are we now a nation of “data pragmatists”?

Do people even care about data privacy anymore?

You’d think that with the continuing cascade of news about the exposure of personal information, people would be more skittish than ever about sharing their data.

But this isn’t the case … and we have a 2018 study from marketing data foundation firm Acxiom to prove it. The report, titled Data Privacy: What the Consumer Really Thinks, is the result of survey information gathered in late 2017 by Acxiom in conjunction with the Data & Marketing Association (formerly the Direct Marketing Association).

The research, which presents results from an online survey of nearly 2,100 Americans age 18 and older, found that nearly 45% of the respondents feel more comfortable with data exchange today than they have in the past.

Among millennial respondents, well over half feel more comfortable about data exchange today.

Indeed, the report concludes that most Americans are “data pragmatists”: people who are open to exchanging personal data with businesses as long as the benefits received in return for their personal information are clearly stated.

Nearly 60% of Americans fall into this category.

On top of that, another 20% of the survey respondents report that they’re completely unconcerned about the collection and usage of their personal data. Among younger consumers, it’s nearly one-third.

When comparing Americans’ attitudes to consumers in other countries, we seem to be a particularly carefree bunch. Our counterparts in France and Spain are much more wary of sharing their personal information.

Part of the difference in views may be related to feelings that Americans have about who is responsible for data security. In the United States, the largest portion of people (~43%) believe that 100% of the responsibility for data security lies with consumers themselves, versus only ~6% who believe that the responsibility resides solely with brands or the government. (The balance think that the responsibility is shared among all parties.)

To me, the bottom-line finding from the Acxiom/DMA study is that people have become so conditioned to receiving the benefits that come from data exchange, they’re pretty inured to the potential downsides.  Probably, many can’t even fathom going back to the days of true data privacy.

Of course, no one wishes for their personal data to be used for nefarious purposes, but who is willing to forego the benefits – monetary, convenience or comfort – that come from companies and brands knowing their personal information and preferences?

And how forgiving would these people be if their personal data were actually compromised? From Target to Macy’s, quite a few Americans have already had a taste of this, but what is it going to take for such “data pragmatism” to seem not so practical after all?

I’m thinking, a lot.

For more findings from the Acxiom research, click or tap here.

Clueless at the Capitol

Lawmakers’ cringeworthy questioning of Facebook’s Mark Zuckerberg calls into question the government’s ability to regulate social media.

Facebook CEO Mark Zuckerberg on Capitol Hill, April 2018. (Saul Loeb/AFP/Getty Images)

With the testimony on Capitol Hill last week by Facebook CEO Mark Zuckerberg, there’s heightened concern about the negative side effects of social media platforms. But in listening to lawmakers questioning Zuckerberg, it became painfully obvious that our Federal legislators have next to no understanding of the role of advertising in social media – or even how social media works in its most basic form.

Younger staff members may have written the questions for their legislative bosses, but it was clear that the lawmakers were ill-equipped to handle Zuckerberg’s alternately pat, platitudinous and evasive responses, or to come back with meaningful follow-up questions.

Even the younger senators and congresspeople didn’t acquit themselves well.

It made me think of something else, too. The questioners – and nearly everyone else, it seems – are missing this fundamental point about social media: Facebook and other social media platforms aren’t much different from old-fashioned print media, commercial radio and TV/cable in that they all generate the vast bulk of their earnings from advertising.

It’s true that in addition to advertising revenues, print publications usually charge subscribers for paper copies of their publications. In the past, this was partly because they could … but also to help defray the cost of paper, ink, printing and physical distribution of their product to news outlets or directly to homes.

Commercial radio and TV haven’t had those costs, but neither did they have a practical way of charging their audiences for broadcasts – at least not until cable and satellite came along – and so they made their product available to their audiences at no charge.

The big difference between social media platforms and traditional media is that social platforms can do something that the marketers of old could only dream about: target their advertising based on personally identifiable demographics.

Think about it:  Not so many years ago, the only demographics available to marketers came from census publications, which by law cannot reveal any personally identifiable information.  Moreover, the U.S. census is taken only every ten years, so the data ages pretty quickly.

Beyond census information, advertisers using print media could rely on audit reports from ABC and BPA. For business-to-business publications, some demographic data was available based on subscriber-provided information (freely provided in exchange for receiving those magazines free of charge). But in the case of consumer publications, the audit information wouldn’t give an advertiser anything beyond the number of copies printed and sold, and (sometimes) a geographic breakdown of where mail subscribers lived.

Advertisers using radio or TV media had to rely on researchers like Nielsen — but that research surveyed only a small sample of the audience.

What this meant was that the only way advertisers could “move the needle” in a market was to spend scads of cash on broadcasting their messages to the largest possible audience. As a connecting mechanism, this is hugely inefficient.

The value proposition that Zuckerberg’s Facebook and other social media platforms provide is the ability to connect advertisers with more people for less spend, thanks to these platforms’ ability to use personally identifiable demographics in targeting the advertisements.

Want to find people who enjoy doing DIY projects, but who live only in areas where your company has local distribution of its products? Through Facebook, you can narrow-cast your reach by targeting consumers involved with particular activities and interests, in addition to geography, age, gender, or whatever other characteristics you might wish to use as filters.

That’s massively more efficient and effective than relying on something like median household income within a zip code or census tract. It also means that your message will be more targeted — and hence more relevant — to the people who see it.
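
As a purely hypothetical illustration – this is a toy sketch of the underlying logic, not Facebook’s actual Ads API – the narrow-casting amounts to something like this:

```python
# Toy audience filter showing why interest-plus-geography targeting beats
# broadcasting to everyone. All names and data here are made up.

from dataclasses import dataclass

@dataclass
class User:
    age: int
    city: str
    interests: set

audience = [
    User(34, "Des Moines", {"DIY", "gardening"}),
    User(58, "Chicago", {"golf"}),
    User(27, "Des Moines", {"DIY"}),
]

# Assumed local distribution footprint for the advertiser.
DISTRIBUTION_AREA = {"Des Moines", "Cedar Rapids"}

# Narrow-cast: only users matching BOTH the interest and the geography.
targeted = [u for u in audience
            if "DIY" in u.interests and u.city in DISTRIBUTION_AREA]

print(f"{len(targeted)} of {len(audience)} users see the ad")  # 2 of 3
```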

All of this is immensely more efficient for advertisers, which is why social media advertising (in addition to search advertising on Google) has taken off while other forms of advertising have plateaued or declined.

But there’s a downside: Social media is being manipulated (“abused” might be the better term) by “black hats” – people who couldn’t do such things in the past using census information or Nielsen ratings or magazine audit statements.

Here’s another reality: Facebook and other social media platforms have been so fixated on their value proposition that they failed to conceive of — or plan for — the behavior inspired by the evil side of humanity or those who feel no guilt about taking advantage of security vulnerabilities for financial or political gain.

Facebook COO Sheryl Sandberg interviewed on the Today Show, April 2018.

Now that this reality has begun to sink in, it’ll be interesting to see how Mark Zuckerberg and Sheryl Sandberg of Facebook — not to mention other social media business leaders — respond to the threat.

They’ll need to do something — and it’ll have to be something more compelling than CEO Zuckerberg’s constant refrain at the Capitol Hill hearings (“I’ll have my team look into that.”) or COO Sandberg’s litany (“I’m glad you asked that question, because it’s an important one.”) in her parade of TV/cable interviews. The share price of these companies’ stock will continue to be pummeled until investors understand what changes are going to be made that will actually achieve concrete results.

Future shock? How badly is cyber-hacking nibbling away at our infrastructure?

I don’t know about you, but I’ve never forgotten the late afternoon of August 14, 2003, when problems with the North American power grid meant that people in eight states stretching from New England to Detroit suddenly found themselves without power.

Fortunately, my company’s Maryland offices were situated about 100 miles beyond the southernmost extent of the blackout. But it was quite alarming to watch the power outage spread across a map of the Northeastern and Great Lakes States (plus Ontario) in real-time, like some sort of creeping blob from a science fiction film.

According to Wikipedia’s article on the topic, the impact of the blackout was substantial — and far-reaching:

“Essential services remained in operation in some … areas. In others, backup generation systems failed. Telephone networks generally remained operational, but the increased demand triggered by the blackout left many circuits overloaded. Water systems in several cities lost pressure, forcing boil-water advisories to be put into effect. Cellular service was interrupted as mobile networks were overloaded with the increase in volume of calls; major cellular providers continued to operate on standby generator power. Television and radio stations remained on the air with the help of backup generators — although some stations were knocked off the air for periods ranging from several hours to the length of the entire blackout.”

Another (happier) thing I remember from this 15-year-old incident is that rather than causing confusion or bedlam, the massive power outage brought out the best in people. This anecdote from the blackout was typical:  Manhattanites opening their homes to workers who couldn’t get to their own residences for the evening.

For most of the 50 million+ Americans and Canadians affected by the blackout, power was restored after about six hours. But for some, restoration took as long as two days.

Upon investigation of the incident, it was discovered that high temperatures and humidity across the region had increased energy demand as people turned on air conditioning units and fans. This caused power lines to sag as higher currents heated the lines. The precipitating cause of the blackout was a software glitch in the alarm system in a control room of FirstEnergy Corporation, which left operators unaware of the need to redistribute the power load after overloaded transmission lines had drooped into foliage.

In other words, what should have been, at worst, a manageable localized blackout cascaded rapidly into a collapse of the entire electric grid across multiple states and regions.
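
To see how a local failure can snowball, here is a toy sketch in Python – my own simplified illustration, not a model of the actual 2003 event – of load redistribution after a line trips:

```python
# Toy illustration of a cascading grid failure: when an overloaded line
# trips, its load shifts to the surviving lines, which can overload them
# in turn.

def simulate_cascade(loads, capacity):
    lines = dict(enumerate(loads))  # line id -> current load
    while lines:
        tripped = [i for i, load in lines.items() if load > capacity]
        if not tripped:
            break                                  # grid has stabilized
        shed = sum(lines.pop(i) for i in tripped)  # load from failed lines...
        for i in lines:
            lines[i] += shed / len(lines)          # ...piles onto survivors
    return lines                                   # lines still in service

# Five lines rated at 100 units each; one sags into foliage carrying 105.
print(simulate_cascade([105, 90, 90, 90, 90], capacity=100))  # {} = collapse
print(simulate_cascade([95, 90, 90, 90, 90], capacity=100))   # all stay up
```

In the first example, a single sagging line takes down all five: each failure shifts its load onto the survivors until none is within its rating.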

But at least the incident was born of human error, not nefarious motives.

That 2003 experience should make anyone hearing last week’s testimony on Capitol Hill about the risks faced by the U.S. power grid think long and hard about what could happen in the not-so-distant future.

The bottom line on the testimony presented in the hearings is that malicious cyberattacks are becoming more sophisticated – and hence more capable of causing damage to American infrastructure. The Federal Energy Regulatory Commission (FERC) is cautioning that hackers are increasingly threatening U.S. utilities ranging from power plants to water processing systems.

Similar warnings come from the Department of Homeland Security, which reports that hackers have been attacking the U.S. electric grid, power plants, transportation facilities and even targets in commercial sectors.

The Energy Department goes even further, reporting in 2017 that the United States electrical power grid is in “imminent danger” from a cyber-attack. To underscore this threat, the Department contends that more than 100,000 cyber-attacks are being mounted every day.

With so many attacks of this kind happening on so many fronts, one can’t help but think that it’s only a matter of time before we face a “catastrophic event” that’s even more consequential than the one that affected the power grid in 2003.

Even more chilling, if such an event is born of intentional sabotage – as seems quite likely based on recent testimony – it’s pretty doubtful that remedial action could be taken as quickly or as effectively as the response to an accidental incident like the one that happened in 2003.

Put yourself in the saboteurs’ shoes: If your aim is to bring U.S. infrastructure to its knees, why plan for a one-off event?  You’d definitely want to build in ways to cause cascading problems – not to mention planting additional “land-mines” to frustrate attempts to bring systems back online.

Contemplating all the implications is more than sobering — it’s actually quite frightening. What are your thoughts on the matter?  Please share them with other readers.

Legislators tilt at the digital privacy windmill (again).

In the effort to preserve individual privacy in the digital age, hope springs eternal.

The latest endeavor to protect individuals’ privacy in the digital era is legislation introduced this week in the U.S. Senate that would require law enforcement and government authorities to obtain a warrant before accessing the digital communications of U.S. citizens.

Known as the ECPA Modernization Act of 2017, it is bipartisan legislation introduced by two senators known for being polar opposites on the political spectrum: Sen. Patrick Leahy (D-VT) on the left and Sen. Mike Lee (R-UT) on the right.

At present, only a subpoena is required for the government to gain full access to Americans’ e-mails that are over 180 days old. The new ECPA legislation would mean that access couldn’t be granted without showing probable cause and obtaining a judge’s signature.

The ECPA Modernization Act would also require a warrant for accessing geo-location data, while setting new limits on metadata collection. If the government did access cloud content without a warrant, the new legislation would make that data inadmissible in a court of law.

There’s no question that the original ECPA (Electronic Communications Privacy Act) legislation, enacted in 1986, is woefully out of date. After all, it stems from a time before the modern Internet.

It’s almost quaint to realize that the old ECPA legislation defines any e-mail older than 180 days as “abandoned” — and thereby accessible to government officials. After all, we now live in an age when many people keep the same e-mail address far longer than their home address.

The fact is, many individuals have come to rely on technology companies to store their e-mails, social media posts, blog posts, text messages, photos and other documents — and to do it for an indefinite period of time. It’s perceived as “safer” than keeping the information on a personal computer that might someday malfunction for any number of reasons.

Several important privacy advocacy groups are hailing the proposed legislation and urging its passage – among them the Center for Democracy & Technology and the Electronic Frontier Foundation.

Sophia Cope, an attorney at EFF, notes that the type of information individuals have entrusted to technology companies isn’t very secure at all. “Many users do not realize that an e-mail stored on a Google or Microsoft service has less protection than a letter sitting in a desk drawer at home,” Cope maintains.

“Users often can’t control how and when their whereabouts are being tracked by technology,” she adds.

The Senate legislation is also supported by the likes of Google, Amazon, Facebook and Twitter.

All of which makes it surprising that this type of legislation – different versions of which have been introduced in the U.S. Senate every year since 2013 – has had such trouble gaining traction.

The reasons for the prior years’ failures are many and varied – and quite revealing in terms of illuminating how crafting legislation is akin to sausage-making. Which is to say, not very pretty. But this year, the odds look more favorable than ever before.

Two questions remain on the table: First, will the legislation pass?  And second, will it really make a difference in terms of protecting the privacy of Americans?

Any readers with particular opinions are encouraged to weigh in.

When it comes to city parklands, the Twin Cities of Minneapolis-St. Paul rule.

Minneapolis, Minnesota

Although I lived in five states prior to going away to college, I spent the most time in those formative years of my life residing in the Twin Cities of Minneapolis and St. Paul in Minnesota.

The city parks in both towns are major amenities. Indeed, you could say that the entire fabric of life in the two cities is interwoven with the park systems; they’re that special.  And my family was no exception in taking advantage of everything the parks had to offer.

So it wasn’t much of a surprise to find that both cities are at the very top of the list of U.S. cities with the best parks.

The evaluation is done annually by The Trust for Public Land, and covers the 100 most populous cities in the United States.

The major metric studied is the percent of city residents who live within a 10-minute walk of a park — although other characteristics are also analyzed, such as park size and investment, the number of playgrounds, dog parks and recreation centers in relation to city population, and so on.
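
As a rough illustration of how such a metric works, here’s a back-of-envelope Python sketch with made-up coordinates; note that the Trust’s actual methodology measures walking distance over the street network, not the straight-line proxy used here:

```python
# Toy version of the "10-minute walk" park-access metric.
# All coordinates below are hypothetical.

import math

WALK_10_MIN_KM = 0.8  # roughly ten minutes at a typical walking pace

def distance_km(a, b):
    """Approximate planar distance between two (lat, lon) points, in km."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def pct_with_park_access(residents, parks):
    near = sum(1 for r in residents
               if any(distance_km(r, p) <= WALK_10_MIN_KM for p in parks))
    return 100.0 * near / len(residents)

residents = [(44.963, -93.272), (44.955, -93.275), (44.90, -93.20)]
parks = [(44.96, -93.27)]
print(f"{pct_with_park_access(residents, parks):.0f}% within a 10-minute walk")
```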

In the 2017 evaluation, Minneapolis topped the list of 100 cities, helped by superior park access across all ages and income levels, along with top scores for park investment and for the number of senior centers, recreation centers and dog parks relative to population.

In total, an incredible 15% of Minneapolis’ entire square mileage is dedicated to park space.

St. Paul was right behind Minneapolis in the #2 slot out of 100 cities evaluated. As visitors to the Twin Cities know, Minneapolis is blessed with seven natural lakes within its borders, whereas next-door St. Paul has just two.  Nevertheless, its commitment to parkland is nearly as strong.

Here’s how the Top 10 list of cities shakes out:

  • #1 Minneapolis, MN
  • #2 St. Paul, MN
  • #3 San Francisco, CA
  • #4 Washington, DC
  • #5 Portland, OR
  • #6 Arlington, VA
  • #7 (tie) Irvine, CA and New York, NY
  • #9 Madison, WI
  • #10 Cincinnati, OH

Several of these cities shine in certain attributes. San Francisco, for instance, scores highest for park access, with nearly every resident living within a 10-minute walk of a park.

Three cities (Arlington, Irvine and Madison) achieved Top 10 ranking for only the second time (all three first made it into the Top 10 in 2016).

What about cities that appear at the bottom of the Trust for Public Land list? They tend to be “car-dominated” cities, where parks aren’t easily accessible by foot for many residents.  For the record, here are the cities that rank lowest in the rankings:

  • #90 (tie) Fresno, CA, Hialeah, FL and Jacksonville, FL
  • #93 (tie) Laredo, TX and Winston-Salem, NC
  • #95 Mesa, AZ
  • #96 Louisville, KY
  • #97 Charlotte, NC
  • #98 (tie) Fort Wayne, IN and Indianapolis, IN

Bottom dweller: A crumbling structure in a padlocked park in Indianapolis, Indiana.

Interestingly, one of these cities – Charlotte – leads all others in median park size (~16 acres). Of course, this likely means that residents’ access to parks suffers, because there are fewer small parks scattered around the city.

To see the full rankings as well as each city’s score by category evaluated, you can view a comparative chart here.

Based on your experience, do any of the city rankings surprise you? Is there a particular city that you think should be singled out for praise (or a pan) regarding its parklands?