Fake e-mails: A small percentage … but a big number.

Recently released statistics from e-mail security and authentication provider Valimail tell us that ~2% of e-mail communications worldwide are deemed “potentially malicious” because they’ve failed DMARC testing (Domain-based Message Authentication, Reporting and Conformance) and also don’t originate from known, legitimate senders.
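DMARC policies themselves are just DNS TXT records published at _dmarc.&lt;domain&gt;, so checking a domain’s declared policy boils down to parsing a short tag/value string. A minimal sketch (the record below is illustrative, not any real domain’s policy):

```python
# Minimal sketch: parsing a DMARC policy from a DNS TXT record string.
# The record below is illustrative; real records are published at
# _dmarc.<domain> and fetched with an ordinary DNS TXT lookup.
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # reject
```

The `p` tag is the enforcement policy the rest of this piece hinges on: `none` means “monitor only,” while `quarantine` and `reject` actually do something about failing mail.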

That’s a small percentage — seemingly trivial. But considering the volume of messages sent worldwide, it translates into nearly 6.4 billion e-mails every day that are “fake, faux and phony.”
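Those two figures also imply the overall scale: if 6.4 billion daily messages represent only ~2% of the total, worldwide volume works out to roughly 320 billion messages a day. A quick back-of-envelope check:

```python
# Back-of-envelope: 6.4 billion "fake" e-mails as ~2% of all mail
# implies roughly 320 billion messages sent worldwide each day.
fake_share = 0.02
fake_per_day = 6.4e9
total_per_day = fake_per_day / fake_share
print(f"{total_per_day:.2e}")  # 3.20e+11
```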

Interestingly, the source of those fake e-mails is most often right here in the United States.  Not Russia or Ukraine.  Or Nigeria or Tajikistan.

In fact, no other country even comes close to the USA in the number of fraudulent e-mails.

The good news is that DMARC has made some pretty decent strides in recent times, with DMARC support now covering around 5 billion inboxes worldwide, up from less than 3 billion in 2015.

The federal government is the biggest user of DMARC, but nearly all U.S. tech companies and most Fortune 500 companies also participate.

Participation is one thing, but doing something about enforcement is another. At the moment, Valimail is finding that the enforcement failure rate is well above 70% — hardly an impressive track record.

The Valimail study findings came as the result of analyzing billions of e-mail message authentication requests, along with 3 million+ publicly accessible DMARC records. So, the findings are meaningful and provide good directional indications.

But what are the research implications? The findings underscore the degree to which name brands can be “hijacked” for nefarious purposes.

Additionally, there’s consumer fallout in that many people are increasingly skittish about opening any marketing-oriented e-mails at all, figuring that the risk of importing a virus outweighs any potential benefit from the marketing pitch.

That isn’t an over-abundance of caution, either, because 9 in 10 cyber attacks begin with a phishing e-mail.

It’s certainly enough to keep many people from opening the next e-mail that hits their inbox from a Penneys(?), DirecTV(?) or BestBuy(?).

How about you?  Are you now sending those e-mails straight to the trash as a matter of course?

Keeping law enforcement on the level.

Let’s go to the videotape … or not.

Video is supposed to be the “great equalizer”: evidence that doesn’t lie — particularly in the case of chronicling law enforcement events.

From New York City and Chicago to Baltimore, Charleston, SC and dozens of places in between, there have been a number of “high profile” police incidents in recent years where mobile video has made it possible to go beyond the sometimes-contradictory “he said/she said” statements coming from officers and citizens.

There’s no question that this video evidence has resulted in some disciplinary or court outcomes that may well have turned out differently in earlier times.

Numerous police departments have responded in a way best described as, “If you can’t beat them, join them.” They’ve begun outfitting their law enforcement personnel with police body cams.

The idea is that having a “third party” digital witness on the scene will protect both the perpetrator and the officer when assessments need to be made about conflicting accounts of what actually happened.

This tidy solution seems to be running into a problem, however. Some security experts are calling into question the ability of body cameras to provide reliable evidence – and it isn’t because of substandard quality in the video footage being captured.

Recently, specialists at the security firm Nuix examined five major brands of body cameras … and determined that all of them are vulnerable to hacking.

The body cam suppliers in question are CEESC, Digital Ally, Fire Cam, Patrol Eyes, and VIEVU. The cameras are described by Nuix as “full-feature computers walking around on your chest,” and as such, require the same degree of security mechanisms that any other digital device operating in security-critical areas would need to possess.

But here’s the catch: None of the body cameras evaluated featured digital signatures on the uploaded footage.  This means that there would be no way to confirm whether any of the video evidence might have been tampered with.

In other words, a skilled technician with nefarious intent could download, edit and re-upload content – all while avoiding giving any sort of indication that it had been revised.
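The fix the researchers are pointing at is straightforward in principle: have the camera attach a cryptographic signature to each upload, so that any later edit invalidates it. A minimal sketch using an HMAC as a stand-in for the public-key signature a real camera would apply (the key and footage bytes here are illustrative):

```python
import hashlib
import hmac

# Sketch: a keyed digest over the footage bytes makes edits detectable.
# The HMAC stands in for the public-key signature a real camera would
# apply at upload time; the key and footage bytes are illustrative.
DEVICE_KEY = b"per-device-secret-key"

def sign_footage(data: bytes) -> str:
    return hmac.new(DEVICE_KEY, data, hashlib.sha256).hexdigest()

def verify_footage(data: bytes, signature: str) -> bool:
    # Constant-time comparison avoids leaking digest prefixes.
    return hmac.compare_digest(sign_footage(data), signature)

original = b"...video frames..."
tag = sign_footage(original)
assert verify_footage(original, tag)              # untouched footage passes
assert not verify_footage(b"edited frames", tag)  # any edit breaks the tag
```

With no such tag on the upload, the edited file and the original are indistinguishable to anyone reviewing the evidence later.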

These hackers could be operating on the outside … or they could be rogue officers inside a law enforcement department.

Another flaw uncovered by Nuix is that malware can infect the cameras in the form of malicious computer code being disguised as software updates – updates that the cameras are programmed to accept without any additional verification.
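The missing step is one that any secure-boot or package-manager design includes: verify the update against a trusted digest or signature before installing it. A minimal sketch of digest pinning (the firmware bytes and digest are illustrative; a production device would verify a publisher’s public-key signature instead):

```python
import hashlib

# Sketch: the verification step the cameras reportedly skip. A device
# that pins the expected digest of a release (or, better, verifies a
# publisher's public-key signature) refuses a tampered update.
# The firmware bytes and digest here are illustrative.
EXPECTED_SHA256 = hashlib.sha256(b"legitimate firmware v2.1").hexdigest()

def accept_update(blob: bytes) -> bool:
    return hashlib.sha256(blob).hexdigest() == EXPECTED_SHA256

assert accept_update(b"legitimate firmware v2.1")  # genuine update installs
assert not accept_update(b"malicious payload")     # disguised malware rejected
```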

Even worse, once a hacker successfully breaches a camera device, he or she could easily gain access to the wider police network, causing a problem that goes much further than a single camera or a single police officer.

Thankfully, Nuix is a “good guy” rather than a “bad actor” in its experimentation. The company is already working with several of the body cam manufacturers to remedy the problems uncovered by its evaluation, so as to improve the ability of the cameras to repel hacking attempts.

But the more fundamental issue that’s raised is this: What other types of security vulnerabilities are out there that haven’t been detected yet?

It doesn’t exactly reinforce our faith in technology to ensure fairer, more honest and more transparent law enforcement activities. If video footage can’t be considered verified proof that an event happened or didn’t happen, have we just returned to Square One again, with people pointing fingers in both directions but with even lower levels of trust?

Hopefully not. But with the polarized camps we have at the moment, with people only too eager to blame the motives of their adversaries, the picture doesn’t look particularly promising …

Declining DUI arrests: What’s the cause?

Looking back over the past eight years or so, something very interesting has been happening to the arrest rate statistics for people driving under the influence.

DUI arrests have been dropping – pretty steadily and inexorably.

The trend started in 2011, when DUI arrests declined 8% from the prior year. In 2012 the decline was 4.1% … in 2013, another 7.2%.
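Compounded, those three year-over-year drops amount to a cumulative decline of about 18% from the 2010 baseline:

```python
# Compounding the reported year-over-year declines (2011-2013).
declines = [0.08, 0.041, 0.072]
remaining = 1.0
for d in declines:
    remaining *= 1 - d
print(f"cumulative drop: {1 - remaining:.1%}")  # cumulative drop: 18.1%
```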

And arrest rates didn’t plateau after that, either. DUI arrests have continued to decline — even as police departments have continued to put plenty of cops on the beat for such purposes.

One of the most dramatic examples of the continued decline is in Miami-Dade County — the most populous county in the entire Southeastern U.S. The Miami-Dade police force made 65% fewer DUI arrests in 2017 than it did four years earlier.

Look around the country and you’ll see similar trends in places as diverse as San Antonio, TX, Phoenix, AZ, Portland, OR and Orange County, CA.

There are common threads in what’s being seen across the nation:

  • DUI arrest levels are down in major metro areas — but not necessarily in exurban or rural areas.
  • DUI arrest levels have declined in nearly all of the metro areas where ride-sharing services are prominent.

This last point is a significant factor to consider: the increasing popularity of ride-sharing services has coincided with the drop in DUI arrests.

A 2017 University of Pennsylvania analysis found that in places where ride-hailing services were readily available, DUI arrests had in most cases declined upwards of 50% compared to just a few years earlier.

Ride-hailing services are particularly popular with younger adults, who like the smartphone apps that make them pretty effortless to use.  They’re also popular with people who are looking for more affordable ways to get about town compared to what highly regulated taxi services choose to charge.

Plus, the “cool” factor of ride-sharing leaves old-fashioned taxi services pretty much in the dust.

The few exceptions of locations where DUI arrest declines haven’t been so pronounced are in places like Las Vegas and Reno, NV – tourist destinations that draw out-of-towners who frequently take public transportation, hail taxis, or simply walk rather than rent vehicles to get around town.

With the positive consequences of fewer DUI arrests – which also correlate to reductions in vehicular homicides and lower medical care costs due to fewer people injured in traffic accidents, as well as reductions in the cost of prosecuting and incarcerating the perpetrators – one might think that other urban areas would take notice and become more receptive to ride-sharing services than they have been up to now.

But where taxi services are well-entrenched and “wired” into the political fabric – a situation often encountered in older urban centers like Chicago, St. Louis, Philadelphia and Baltimore — the ancillary benefits of ride-sharing services don’t appear to hold much sway with city councils or city regulators – at least not yet.

One might suppose that overstretched urban police departments would welcome spending less time patrolling the streets for DUI drivers. And if that benefits police departments … well, the police represent a politically important constituency, too.

It seems that some fresh thinking may be in order.

Are we now a nation of “data pragmatists”?

Do people even care about data privacy anymore?

You’d think that with the continuing cascade of news about the exposure of personal information, people would be more skittish than ever about sharing their data.

But this isn’t the case … and we have a 2018 study from marketing data foundation firm Acxiom to prove it. The report, titled Data Privacy: What the Consumer Really Thinks, is the result of survey information gathered in late 2017 by Acxiom in conjunction with the Data & Marketing Association (formerly the Direct Marketing Association).

The research, which presents results from an online survey of nearly 2,100 Americans age 18 and older, found that nearly 45% of the respondents feel more comfortable with data exchange today than they have in the past.

Among millennial respondents, well over half feel more comfortable about data exchange today.

Indeed, the report concludes that most Americans are “data pragmatists”:  people who are open about exchanging personal data with businesses if the benefits received in return for their personal information are clearly stated.

Nearly 60% of Americans fall into this category.

On top of that, another 20% of the survey respondents report that they’re completely unconcerned about the collection and usage of their personal data. Among younger consumers, it’s nearly one-third.

When comparing Americans’ attitudes to consumers in other countries, we seem to be a particularly carefree bunch. Our counterparts in France and Spain are much more wary of sharing their personal information.

Part of the difference in views may be related to feelings that Americans have about who is responsible for data security. In the United States, the largest portion of people (~43%) believe that 100% of the responsibility for data security lies with consumers themselves, versus only ~6% who believe that the responsibility resides solely with brands or the government.  (The balance of people think that the responsibility is shared between all parties.)

To me, the bottom-line finding from the Acxiom/DMA study is that people have become so conditioned to receiving the benefits that come from data exchange, they’re pretty inured to the potential downsides.  Probably, many can’t even fathom going back to the days of true data privacy.

Of course, no one wishes for their personal data to be used for nefarious purposes, but who is willing to forego the benefits (be it monetary, convenience or comfort) that come from companies and brands knowing their personal information and their personal preferences?

And how forgiving would these people be if their personal data were actually compromised? From Target to Macy’s, quite a few Americans have already had a taste of this, but what is it going to take for such “data pragmatism” to seem not so practical after all?

I’m thinking, a lot.

For more findings from the Acxiom research, click or tap here.

GDPR: What’s the big whoop?

This past week, the European Union’s General Data Protection Regulation (GDPR) initiative kicked in. But what does it mean for businesses that operate in the EU region?

And what are the prospects for GDPR-like privacy coming to the USA anytime soon?

First off, let’s review what’s covered by the GDPR initiative. The GDPR includes the following rights for individuals:

  1. The right to be informed
  2. The right of access
  3. The right to rectification
  4. The right to be forgotten
  5. The right to restrict processing
  6. The right to data portability
  7. The right to object
  8. Rights in relation to automated decision making and profiling

The “right to be forgotten” means data subjects can request that their information be erased. The right to “data portability” is also new: data subjects now have the right to have their data transferred to a third-party service provider in machine-readable format. However, this right arises only when personal data is provided and processed on the basis of consent, or when necessary to perform a contract.
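In practice, a portability export is just the subject’s data serialized in a structured, machine-readable format such as JSON. A minimal sketch (field names and values are illustrative, not a schema prescribed by the regulation):

```python
import json

# Sketch of a data-portability export: the subject's records serialized
# in a machine-readable format. Field names and values are illustrative,
# not a schema prescribed by the regulation.
user_record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "preferences": {"newsletter": True},
}

def export_portable(record: dict) -> str:
    """Produce the payload handed to the subject or a third-party provider."""
    return json.dumps(record, indent=2, sort_keys=True)

payload = export_portable(user_record)
```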

Privacy impact assessments and “privacy by design” are now legally required in certain circumstances under GDPR, too. Businesses are obliged to carry out data protection impact assessments for new technologies.  “Privacy by design” involves accounting for privacy risk when designing a new product or service, rather than treating it as an afterthought.

Implications for Marketers

A recent study investigated how much customer data will still be usable after GDPR provisions are implemented. The research involved more than 30 companies that had already gone through the process of making their data completely GDPR-compliant.

The sobering finding:  Nearly 45% of EU audience data is being lost due to GDPR provisions.  One of the biggest changes is that cookie IDs disappear, which is the basis behind so much programmatic and other data-driven advertising both in Europe and in the United States.

Doug Stevenson, CEO of Vibrant Media, the contextual advertising agency that conducted the study, had this to say about the implications:

“Publishers will need to rapidly fill their inventory with ‘pro-privacy’ solutions that do not require consent, such as contextual advertising, native [advertising] opportunities and non-personalized ads.”

New platforms are emerging to help publishers manage customer consent for “privacy by design,” but the situation is sure to become more challenging in the ensuing months and years as compliance tracking by the regulatory authorities ramps up.

It appears that some companies are being a little less proactive than is advisable. A recent study by compliance consulting firm CompliancePoint shows that a large contingent of companies, simply put, aren’t ready for GDPR.

As for why they aren’t, nearly half report that they’re taking a “wait and see” attitude to determine what sorts of enforcement actions ensue against scofflaws. Some marketers admit that their companies aren’t ready due to their own lack of understanding of GDPR issues, while quite a few others claim simply that they’re unconcerned.

I suspect we’re going to get a much better understanding of the implications of GDPR over the coming year or so. It’ll be good to check back on the status of implementation and enforcement measures by this time next year.

Comfortable in our own skin: Consumers embrace biometrics for identification and authentication purposes.

Perhaps it’s the rash of daily reports about data breaches. Or the one-too-many compromises of protection of people’s passwords.

Whatever the cause, it appears that Americans are becoming increasingly interested in the use of biometrics to verify personal identity or to enable payments.

And the credit card industry has taken notice. Biometrics – the descriptive term for body measurements and calculations – is becoming more prevalent as a means to authenticate identity and enable proper access and control of accounts.

A recent survey of ~1,000 American adult consumers, conducted in Fall 2017 by AYTM Marketing Research for VISA, revealed that two-thirds of the respondents are now familiar with biometrics.

What’s more, for those who understand what biometrics entails, more than 85% of the survey’s respondents expressed interest in their use for identity authentication.

About half of the respondents think that adopting biometrics would be more secure than using PINs or passwords. Even more significantly, ~70% think that biometrics would make authentication faster and easier – whether it be done via voice recognition or by fingerprint recognition.

Interestingly, the view that biometrics are “easier” than traditional methods appears to be the case despite the fact that fewer than one-third of the survey respondents use unique passwords for each of their accounts.

As a person who does use unique passwords for my various accounts – and who has the usual “challenges” managing so many different ones – I would have thought that people who use only a few passwords might find traditional methods of authentication relatively easy to manage. Despite this, the “new world” of biometrics seems like a good bet for many of these people.

That stated, it’s also true that people are understandably skittish about ID theft in general. To illustrate, about half of the respondents in the AYTM survey expressed concerns about the risk of a security breach of biometric data – in other words, that the very biometric information used to authenticate a person could be nabbed by others who could use the data for nefarious purposes.

And lastly, a goodly percentage of “Doubting Thomases” question whether biometric authentication will work properly – or even if it does work, whether it might require multiple attempts to do so.

In other words, it may end up being “déjà vu all over again” with this topic …

For an executive summary of the AYTM research findings, click or tap here.

Future shock? How badly is cyber-hacking nibbling away at our infrastructure?

I don’t know about you, but I’ve never forgotten the late afternoon of August 14, 2003 when problems with the North American power grid meant that people in eight states stretching from New England to Detroit suddenly found themselves without power.

Fortunately, my company’s Maryland offices were situated about 100 miles beyond the southernmost extent of the blackout. But it was quite alarming to watch the power outage spread across a map of the Northeastern and Great Lakes States (plus Ontario) in real-time, like some sort of creeping blob from a science fiction film.

According to Wikipedia’s article on the topic, the impact of the blackout was substantial — and far-reaching:

“Essential services remained in operation in some … areas. In others, backup generation systems failed. Telephone networks generally remained operational, but the increased demand triggered by the blackout left many circuits overloaded. Water systems in several cities lost pressure, forcing boil-water advisories to be put into effect. Cellular service was interrupted as mobile networks were overloaded with the increase in volume of calls; major cellular providers continued to operate on standby generator power. Television and radio stations remained on the air with the help of backup generators — although some stations were knocked off the air for periods ranging from several hours to the length of the entire blackout.”

Another (happier) thing I remember from this 15-year-old incident is that rather than causing confusion or bedlam, the massive power outage brought out the best in people. This anecdote from the blackout was typical:  Manhattanites opening their homes to workers who couldn’t get to their own residences for the evening.

For most of the 50 million+ Americans and Canadians affected by the blackout, power was restored after about six hours.  But for some, it would take as long as two days for power restoration.

Upon investigation of the incident, it was discovered that high temperatures and humidity across the region had increased energy demand as people turned on air conditioning units and fans. This caused power lines to sag as higher currents heated the lines.  The precipitating cause of the blackout was a software glitch in the alarm system in a control room of FirstEnergy Corporation, causing operators to be unaware of the need to redistribute the power load after overloaded transmission lines had drooped into foliage.

In other words, what should have been, at worst, a manageable localized blackout cascaded rapidly into a collapse of the entire electric grid across multiple states and regions.

But at least the incident was born of human error, not nefarious motives.

That 2003 experience should make anyone hearing last week’s testimony on Capitol Hill about the risks faced by the U.S. power grid think long and hard about what could happen in the not-so-distant future.

The bottom-line on the testimony presented in the hearings is that malicious cyberattacks are becoming more sophisticated – and hence more capable of causing damage to American infrastructure. The Federal Energy Regulatory Commission (FERC) is cautioning that hackers are increasingly threatening U.S. utilities ranging from power plants to water processing systems.

Similar warnings come from the Department of Homeland Security, which reports that hackers have been attacking the U.S. electric grid, power plants, transportation facilities and even targets in commercial sectors.

The Energy Department goes even further, reporting in 2017 that the United States electrical power grid is in “imminent danger” from a cyber-attack. To underscore this threat, the Department contends that more than 100,000 cyber-attacks are being mounted every day.

With so many attacks of this kind happening on so many fronts, one can’t help but think that it’s only a matter of time before we face a “catastrophic event” that’s even more consequential than the one that affected the power grid in 2003.

Even more chilling, if it’s born of intentional sabotage – as seems quite likely based on recent testimony – it’s pretty doubtful that remedial action could be taken as quickly or as effectively as in response to an accidental incident like the one that happened in 2003.

Put yourself in the saboteurs’ shoes: If your aim is to bring U.S. infrastructure to its knees, why plan for a one-off event?  You’d definitely want to build in ways to cause cascading problems – not to mention planting additional “land-mines” to frustrate attempts to bring systems back online.

Contemplating all the implications is more than sobering — it’s actually quite frightening. What are your thoughts on the matter?  Please share them with other readers.