GDPR: What’s the big whoop?

This past week, the European Union’s General Data Protection Regulation (GDPR) took effect. But what does it mean for businesses that operate in the EU?

And what are the prospects for GDPR-like privacy coming to the USA anytime soon?

First off, let’s review what the GDPR covers. The regulation grants individuals the following rights:

  1. The right to be informed
  2. The right of access
  3. The right to rectification
  4. The right to be forgotten
  5. The right to restrict processing
  6. The right to data portability
  7. The right to object
  8. Rights in relation to automated decision making and profiling

The “right to be forgotten” means data subjects can request that their information be erased. The right to “data portability” is also new: data subjects now have the right to have their data transferred to a third-party service provider in a machine-readable format. However, this right arises only when personal data is provided and processed on the basis of consent, or when it is necessary for the performance of a contract.
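To make “machine-readable format” a bit more concrete, here is a minimal sketch of my own (the class name, field names and file layout are invented purely for illustration, and no particular vendor’s API is implied) of what a portability export might look like in practice – a Java program that writes a small JSON file a receiving provider could parse automatically:

  import java.nio.file.Files;
  import java.nio.file.Path;

  // Hypothetical sketch only: a minimal GDPR-style "data portability" export.
  // The point is simply that the data subject receives their data in a
  // structured, machine-readable format (here, JSON) rather than on paper
  // or in a proprietary dump.
  public class DataPortabilityExportSketch {

      public static void main(String[] args) throws Exception {
          String export = """
              {
                "subjectId": "12345",
                "name": "Jane Example",
                "email": "jane@example.com",
                "consentGivenOn": "2018-05-25",
                "orderHistory": ["A-1001", "A-1002"]
              }
              """;

          // In practice this would be delivered via a download link or an API
          // so the receiving service provider can ingest it automatically.
          Files.writeString(Path.of("subject-12345-export.json"), export);
          System.out.println("Wrote portability export for subject 12345");
      }
  }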

Privacy impact assessments and “privacy by design” are now legally required in certain circumstances under GDPR, too. Businesses are obliged to carry out data protection impact assessments for new technologies.  “Privacy by design” involves accounting for privacy risk when designing a new product or service, rather than treating it as an afterthought.

Implications for Marketers

A recent study investigated how much customer data will still be usable now that the GDPR’s provisions are in force. The research covered more than 30 companies that have already gone through the process of making their data fully GDPR-compliant.

The sobering finding: nearly 45% of EU audience data is being lost due to GDPR provisions. One of the biggest changes is the disappearance of cookie IDs, which underpin so much programmatic and other data-driven advertising both in Europe and in the United States.

Doug Stevenson, CEO of Vibrant Media, the contextual advertising agency that conducted the study, had this to say about the implications:

“Publishers will need to rapidly fill their inventory with ‘pro-privacy’ solutions that do not require consent, such as contextual advertising, native [advertising] opportunities and non-personalized ads.”

New platforms are emerging to help publishers manage customer consent for “privacy by design,” but the situation is sure to become more challenging in the ensuing months and years as compliance tracking by the regulatory authorities ramps up.

It appears that some companies are being a little less proactive than is advisable. A recent study by compliance consulting firm CompliancePoint shows that a large contingent of companies, simply put, aren’t ready for GDPR.

As for why they aren’t, nearly half report that they’re taking a “wait and see” attitude to determine what sorts of enforcement actions ensue against scofflaws. Some marketers admit that their companies aren’t ready due to their own lack of understanding of GDPR issues, while quite a few others claim simply that they’re unconcerned.

I suspect we’re going to get a much better understanding of the implications of GDPR over the coming year or so. It’ll be good to check back on the status of implementation and enforcement measures by this time next year.

Comfortable in our own skin: Consumers embrace biometrics for identification and authentication purposes.

Perhaps it’s the rash of daily reports about data breaches. Or one too many compromises of people’s passwords.

Whatever the cause, it appears that Americans are becoming increasingly interested in the use of biometrics to verify personal identity or to enable payments.

And the credit card industry has taken notice. Biometrics – the descriptive term for body measurements and calculations – is becoming more prevalent as a means to authenticate identity and enable proper access and control of accounts.

A recent survey of ~1,000 American adult consumers, conducted in Fall 2017 by AYTM Marketing Research for VISA, revealed that two-thirds of the respondents are now familiar with biometrics.

What’s more, among those who understand what biometrics entails, more than 85% of the survey’s respondents expressed interest in using them for identity authentication.

About half of the respondents think that adopting biometrics would be more secure than using PINs or passwords. Even more significantly, ~70% think that biometrics would make authentication faster and easier – whether via voice recognition or fingerprint recognition.

Interestingly, the view that biometrics are “easier” than traditional methods holds even though fewer than one-third of the survey respondents use unique passwords for each of their accounts.

As a person who does use unique passwords for my various accounts – and who has the usual “challenges” managing so many different ones – I would have thought that people who use only a few passwords might find traditional methods of authentication relatively easy to manage. Despite this, the “new world” of biometrics seems like a good bet for many of these people.

That stated, it’s also true that people are understandably skittish about ID theft in general. To illustrate, about half of the respondents in the AYTM survey expressed concerns about the risk of a security breach of biometric data – in other words, that the very biometric information used to authenticate a person could be nabbed by others and used for nefarious purposes.

And lastly, a goodly percentage of “Doubting Thomases” question whether biometric authentication will work properly – or even if it does work, whether it might require multiple attempts to do so.

In other words, it may end up being “déjà vu all over again” with this topic …

For an executive summary of the AYTM research findings, click or tap here.

Future shock? How badly is cyber-hacking nibbling away at our infrastructure?

I don’t know about you, but I’ve never forgotten the late afternoon of August 14, 2003 when problems with the North American power grid meant that people in eight states stretching from New England to Detroit suddenly found themselves without power.

Fortunately, my company’s Maryland offices were situated about 100 miles beyond the southernmost extent of the blackout. But it was quite alarming to watch the power outage spread across a map of the Northeastern and Great Lakes States (plus Ontario) in real-time, like some sort of creeping blob from a science fiction film.

According to Wikipedia’s article on the topic, the impact of the blackout was substantial — and far-reaching:

“Essential services remained in operation in some … areas. In others, backup generation systems failed. Telephone networks generally remained operational, but the increased demand triggered by the blackout left many circuits overloaded. Water systems in several cities lost pressure, forcing boil-water advisories to be put into effect. Cellular service was interrupted as mobile networks were overloaded with the increase in volume of calls; major cellular providers continued to operate on standby generator power. Television and radio stations remained on the air with the help of backup generators — although some stations were knocked off the air for periods ranging from several hours to the length of the entire blackout.”

Another (happier) thing I remember from this 15-year-old incident is that rather than causing confusion or bedlam, the massive power outage brought out the best in people. This anecdote from the blackout was typical:  Manhattanites opening their homes to workers who couldn’t get to their own residences for the evening.

For most of the 50 million+ Americans and Canadians affected by the blackout, power was restored after about six hours. But for some, restoration took as long as two days.

Upon investigation of the incident, it was discovered that high temperatures and humidity across the region had increased energy demand as people turned on air conditioning units and fans. This caused power lines to sag as higher currents heated the lines.  The precipitating cause of the blackout was a software glitch in the alarm system in a control room of FirstEnergy Corporation, causing operators to be unaware of the need to redistribute the power load after overloaded transmission lines had drooped into foliage.

In other words, what should have been, at worst, a manageable localized blackout cascaded rapidly into a collapse of the entire electric grid across multiple states and regions.

But at least the incident was born of human error, not nefarious motives.

That 2003 experience should make anyone hearing last week’s testimony on Capitol Hill about the risks faced by the U.S. power grid think long and hard about what could happen in the not-so-distant future.

The bottom line of the testimony presented in the hearings is that malicious cyberattacks are becoming more sophisticated – and hence more capable of causing damage to American infrastructure. The Federal Energy Regulatory Commission (FERC) is cautioning that hackers are increasingly threatening U.S. utilities ranging from power plants to water processing systems.

Similar warnings come from the Department of Homeland Security, which reports that hackers have been attacking the U.S. electric grid, power plants, transportation facilities and even targets in commercial sectors.

The Energy Department goes even further, reporting in 2017 that the United States electrical power grid is in “imminent danger” from a cyber-attack. To underscore this threat, the Department contends that more than 100,000 cyber-attacks are being mounted every day.

With so many attacks of this kind happening on so many fronts, one can’t help but think that it’s only a matter of time before we face a “catastrophic event” that’s even more consequential than the one that affected the power grid in 2003.

Even more chilling, if a future incident is born of intentional sabotage – as seems quite likely based on recent testimony – it’s pretty doubtful that remedial action could be taken as quickly or as effectively as in response to an accidental failure like the one that happened in 2003.

Put yourself in the saboteurs’ shoes: If your aim is to bring U.S. infrastructure to its knees, why plan for a one-off event?  You’d definitely want to build in ways to cause cascading problems – not to mention planting additional “land-mines” to frustrate attempts to bring systems back online.

Contemplating all the implications is more than sobering — it’s actually quite frightening. What are your thoughts on the matter?  Please share them with other readers.

Blockchain Technology: Hype … or Hope?

Recently I read a column in MediaPost by contributing writer Ted McConnell (The Block Chain [sic] Gang, 3/8/18) that seemed pretty wide of the mark in terms of some of its (negative) pronouncements about blockchain technology.

Not being an expert on the subject, I shared the column with my brother, Nelson Nones, who is a blockchain technology specialist, to get his “read” on the article’s POV.

Nelson cited four key areas where he disagreed with the positions of the column’s author – particularly the fallacy of conflating blockchain technology (a system of keeping records that are impossible to falsify) with cryptocurrencies (systems for trading assets).
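As an aside for readers wondering where that “impossible to falsify” property comes from: each record in a blockchain embeds a cryptographic hash of the record before it, so changing any old entry breaks every link that follows. Here is a toy sketch of my own (purely illustrative, in Java, and not anything from Nelson or the column) showing the idea:

  import java.nio.charset.StandardCharsets;
  import java.security.MessageDigest;
  import java.util.ArrayList;
  import java.util.List;

  // Toy illustration (not production code) of a hash-chained ledger: every
  // entry stores the hash of the previous entry, so altering an old record
  // changes its hash and invalidates every entry that follows it.
  public class ToyLedger {

      record Entry(String data, String previousHash, String hash) {}

      private final List<Entry> entries = new ArrayList<>();

      public void append(String data) throws Exception {
          String previousHash = entries.isEmpty()
                  ? "GENESIS"
                  : entries.get(entries.size() - 1).hash();
          entries.add(new Entry(data, previousHash, sha256(previousHash + data)));
      }

      // The chain is valid only if every stored hash matches a fresh recomputation.
      public boolean isValid() throws Exception {
          for (int i = 0; i < entries.size(); i++) {
              Entry e = entries.get(i);
              String expectedPrev = (i == 0) ? "GENESIS" : entries.get(i - 1).hash();
              if (!e.previousHash().equals(expectedPrev)
                      || !e.hash().equals(sha256(e.previousHash() + e.data()))) {
                  return false;
              }
          }
          return true;
      }

      private static String sha256(String input) throws Exception {
          byte[] digest = MessageDigest.getInstance("SHA-256")
                  .digest(input.getBytes(StandardCharsets.UTF_8));
          StringBuilder hex = new StringBuilder();
          for (byte b : digest) hex.append(String.format("%02x", b));
          return hex.toString();
      }

      public static void main(String[] args) throws Exception {
          ToyLedger ledger = new ToyLedger();
          ledger.append("Alice pays Bob 5");
          ledger.append("Bob pays Carol 2");
          System.out.println(ledger.isValid());   // true

          // Tampering with the first record is immediately detectable, because
          // its stored hash no longer matches the recomputed one.
          Entry original = ledger.entries.get(0);
          ledger.entries.set(0, new Entry("Alice pays Bob 500",
                  original.previousHash(), original.hash()));
          System.out.println(ledger.isValid());   // false
      }
  }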

But the bigger aspect, Nelson pointed out, is that the MediaPost column reflects a general degree of negativity that has crept into the press recently about blockchain technology and its potential for solving business security challenges.  He noted that blockchain technology is in the midst of going through the various stages of Gartner’s well-known “hype cycle” – a model that charts the maturity, adoption and social application of emerging technologies.

Interested in learning more about this larger issue, I asked Nelson to expand on this aspect. Presented below is what he reported back to me.  His perspectives are interesting enough, I think, to share with others in the world of business.

Hyperbole All Over Again?

Most folks in the technology world – and many business professionals outside of it – are familiar with the “hype cycle.” It’s a branded, graphic representation of the maturity, adoption and social application of technologies from Gartner, Inc., a leading IT research outfit.

According to Gartner, the hype cycle progresses through five phases, starting with the Technology Trigger:

“A potential technology breakthrough kicks things off. Early proof-of-concept stories and media interest trigger significant publicity. Often no usable products exist and commercial viability is unproven.”

Next comes the Peak of Inflated Expectations, which implies that everyone is jumping on the bandwagon. But Gartner is a bit less sanguine:

“Early publicity produces a number of success stories — often accompanied by scores of failures. Some companies take action; most don’t.”

There follows a precipitous plunge into the Trough of Disillusionment.  Gartner says:

“Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters.”

If the hype cycle is to be believed, the Trough of Disillusionment cannot be avoided; but from a technology provider’s perspective it seems a dreadful place to be.

With nowhere to go but up, emerging technologies begin to climb the Slope of Enlightenment. Gartner explains:

“More instances of how the technology can benefit the enterprise start to crystallize and become more widely understood. Second- and third-generation products appear from technology providers. More enterprises fund pilots; conservative companies remain cautious.”

Finally, we ascend to the Plateau of Productivity. Gartner portrays a state of “nirvana” in which:

“Mainstream adoption starts to take off. Criteria for assessing provider viability are more clearly defined. The technology’s broad market applicability and relevance are clearly paying off. If the technology has more than a niche market, then it will continue to grow.”

Gartner publishes dozens of these “hype cycles,” refreshing them every July or August. They mark the progression of specific technologies along the curve and predict the number of years until mainstream adoption.

Below is an infographic I’ve created that plots Gartner’s hype-cycle placement of cloud computing over time, overlaid with global public cloud computing revenues during that period, as reported by Forrester Research, another prominent industry observer.

Cloud Computing Hype Cycle

This infographic provides several interesting insights. For starters, cloud computing first appeared on Gartner’s radar in 2008. In hindsight that seems a little late — especially considering that Salesforce.com launched its first product all the way back in 2000. Amazon Web Services (AWS) first launched in 2002 and re-launched in 2006.

Perhaps Gartner was paying more attention to Microsoft, which announced Azure in 2008 but didn’t release it until 2010. Microsoft, Amazon, IBM and Salesforce are the top four providers today, holding ~57% of the public cloud computing market between them.

Cloud computing hit the peak of Gartner’s hype cycle just one year later, in 2009, but it lingered at or near the Peak of Inflated Expectations for three full years. All the while, Gartner was predicting mainstream adoption within 2 to 5 years. Indeed, they have made the same prediction for ten years in a row (although I would argue that cloud computing has already gained mainstream market adoption).

It also turns out that the Trough of Disillusionment isn’t quite the valley of despair that Gartner would have us believe. In fact, global public cloud revenues grew from $26 billion during 2011 to $114 billion during 2016 — roughly half the 64% compound annual growth rate (CAGR) during the Peak of Inflated Expectations, but a respectable 35% CAGR nonetheless.
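(As a quick check on that arithmetic: growing from $26 billion to $114 billion over five years is a growth factor of 114 ÷ 26 ≈ 4.4, and the fifth root of 4.4 is about 1.34 – in other words, roughly 34–35% per year, compounded.)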

Indeed, there’s no evidence here of waning market interest – nor did an industry shakeout occur. With the exception of IBM, the leading producers in 2008 remain the leading producers today.

All in all, it seems the hype curve significantly lagged the revenue curve during the plunge into the Trough of Disillusionment.

… Which brings us to blockchain technology. Just last month (February 2018), Gartner analyst John Lovelock stated, “We are in the first phase of blockchain use, what I call the ‘irrational exuberance’ phase.” The chart below shows how Gartner sees the “lay of the land” currently:

Blockchain Hype Cycle

This suggests that blockchain is at the Peak of Inflated Expectations, though it appeared ready to jump off a cliff into the Trough of Disillusionment in Gartner’s 2017 report for emerging technologies. (It wasn’t anywhere on Gartner’s radar before 2016.)

Notably, cryptocurrencies first appeared in 2014, just past the peak of the cycle, even though the first Bitcoin network was established in 2009. Blockchain is the distributed ledger which underpins cryptocurrencies and was invented at the same time.

It’s also interesting that Gartner doesn’t foresee mainstream adoption of blockchain for another 5 to 10 years. Lovelock reiterated that point in February, reporting that “Gartner does not expect large returns on blockchain until 2025.”

All the same, blockchain seems to be progressing through the curve at three times the pace of cloud computing. If cloud computing’s history is any guide, blockchain provider revenues are poised to outperform the hype curve and multiply into some truly serious money during the next five years.

It’s reasonable to think that blockchain has been passing through a phase of “irrational exuberance” lately, but it’s equally reasonable to believe that industry experts are overly cautious and a bit late to the table.

________________________

So that’s one specialist’s view.  What are your thoughts on where we are with blockchain technology? Please share your perspectives with other readers here.

IoT’s Ticking Time Bomb

The Internet of Things is making major headway in consumer product categories — but it turns out it’s bringing its share of headaches along for the ride.

It shouldn’t be particularly surprising that security could be an issue with IoT, of course. But recent evaluations suggest that the incidence of security problems is higher than first thought.

That’s the conclusion of research conducted by management consulting firm Altman Vilandrie & Company. Its findings are based on a survey of ~400 IT decision-makers working at companies that have purchased some form of IoT security solutions.

According to the Vilandrie survey, approximately half of the respondents reported that they have experienced at least one IoT-related security intrusion or breach within the past two years.  The companies included in the research range across some 19 industry segments, so the issue of security doesn’t appear to be confined to one or two sectors alone.

What’s more, smaller firms experienced greater relative “pain” from a security breach. In the Vilandrie survey, companies with less than $5 million in annual revenues reported an average loss of $255,000 associated with IoT security breaches.

While that’s a substantially lower dollar amount than the average loss reported by large companies, the loss for small businesses as a percentage of total revenues is much greater.

More findings from the Altman Vilandrie research study can be accessed here.

What does the Equifax data breach tell us about the larger issue of risk management in an increasingly unpredictable world?

It’s common knowledge by now that the data breach at credit reporting company Equifax earlier this year affected more than 140 million Americans. I don’t know about you personally, but in my immediate family, about 40% of us have been impacted.

And as it turns out, the breach occurred because one of the biggest companies in the world — an enterprise that’s charged with collecting, holding and securing the sensitive personal and financial data of hundreds of millions of people — was woefully ill-prepared to protect any of it.

How ill-prepared? The more you dig around, the worse it appears.

Since my brother, Nelson Nones, works every day with data and systems security issues in his dealings with large multinational companies the world over, I asked him for his thoughts and perspectives on the Equifax situation.

What he reported back to me is a cautionary tale for anyone in business today – whether you’re working in a big or small company.  Nelson’s comments are presented below:

Background … and What Happened

According to Wikipedia, “Equifax Inc. is a consumer credit reporting agency. Equifax collects and aggregates information on over 800 million individual consumers and more than 88 million businesses worldwide.”

Founded in 1899, Equifax is one of the largest credit risk assessment companies in the world.  Last year it reported having more than 9,500 employees, turnover of $3.1 billion, and a net income of $488.1 million.

On September 8, 2017, Equifax announced a data breach potentially impacting 143 million U.S. consumers, plus anywhere from 400,000 to 44 million British residents. The breach was a theft carried out by unknown cyber-criminals from mid-May 2017 until July 29, 2017, when Equifax first discovered it.

It took another 4 days — until August 2, 2017 — for Equifax to engage a cybersecurity firm to investigate the breach.

Equifax has since confirmed that the cyber-criminals exploited a vulnerability of Apache Struts, which is an open-source model-view-controller (MVC) framework for developing web applications in the Java programming language.

The specific vulnerability, CVE-2017-5638, was disclosed by Apache in March 2017, but Equifax had not applied the patch for this vulnerability before the attack began in mid-May 2017.

The workaround recommended by Apache back in March consists of a mere 27 lines of code to implement a Servlet filter which would validate Content-Type and throw away requests with suspicious values not matching multipart/form-data. Without this workaround or the patch, it was possible to perform Remote Code Execution through a REST API using malicious Content-Type values.
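For readers curious what such a filter looks like, here is a rough sketch in Java of the general idea – validating the Content-Type header in a Servlet filter and rejecting suspicious requests. This is my own illustrative approximation, not the actual 27-line workaround Apache published, and the specific checks shown (for expression-style markers in a multipart Content-Type) are assumptions made for the example:

  import java.io.IOException;
  import javax.servlet.Filter;
  import javax.servlet.FilterChain;
  import javax.servlet.FilterConfig;
  import javax.servlet.ServletException;
  import javax.servlet.ServletRequest;
  import javax.servlet.ServletResponse;
  import javax.servlet.annotation.WebFilter;
  import javax.servlet.http.HttpServletResponse;

  // Illustrative sketch of a Content-Type validation filter, in the spirit of
  // the mitigation described for CVE-2017-5638. Requires the Servlet API on
  // the classpath; register via this annotation or in web.xml.
  @WebFilter("/*")
  public class ContentTypeValidationFilter implements Filter {

      @Override
      public void init(FilterConfig filterConfig) { }

      @Override
      public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
              throws IOException, ServletException {
          String contentType = request.getContentType();

          // Reject multipart requests whose Content-Type header carries anything
          // beyond a plain multipart/form-data value (e.g. injected expressions),
          // instead of letting them reach the vulnerable multipart parser.
          if (contentType != null
                  && contentType.toLowerCase().contains("multipart/form-data")
                  && (contentType.contains("${") || contentType.contains("%{"))) {
              ((HttpServletResponse) response).sendError(
                      HttpServletResponse.SC_BAD_REQUEST, "Invalid Content-Type header");
              return;
          }
          chain.doFilter(request, response);
      }

      @Override
      public void destroy() { }
  }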

Subsequently, on September 12, 2017, it was reported that a company “online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected [sic] by perhaps the most easy-to-guess password combination ever: ‘admin/admin’ … anyone authenticated with the ‘admin/admin’ username and password could … add, modify or delete user accounts on the system.”

Existing user passwords were masked, but:

“… all one needed to do in order to view [a] password was to right-click on the employee’s profile page and select ‘view source’. A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.”

The reporter who broke this story contacted Equifax and was referred to their attorneys, who later confirmed that the Argentine portal “was disabled and that Equifax is investigating how this may have happened.”

The Immediate Impact on Equifax’s Business

In the wake of these revelations, Equifax shares fell sharply:  15% on September 8, 2017, reducing market capitalization (shareholder value) by $3.97 billion in a single trading day.

Over the next 5 trading days, shares fell another 24%, reducing shareholder value by another $5.4 billion.

What this means is that the cost of the breach, measured in shareholder value lost by the close of business on September 15, 2017 (6 business days), was $9.37 billion – which is equivalent to the entire economic output of the country of Norway over a similar time span.

This also works out to losses of $347 million per line of code that Equifax could have avoided had it deployed the Apache Struts workaround back in March 2017.

The company’s Chief Information Officer and Chief Security Officer also “retired” on September 15, 2017.

Multiple lawsuits have been filed against Equifax. The largest is seeking $70 billion in damages sustained by affected consumers. This is more than ten times the company’s assets in 2016, and nearly three times the company’s market capitalization just before the breach was announced.

The Long-Term Impact on Equifax’s Brand

This is yet to be determined … but it’s more than likely the company will never fully recover its reputation.  (Just ask Target Corporation about this.)

Takeaway Points for Other Companies

If something like this could happen at Equifax — where securely keeping the private information of consumers is the lifeblood of the business — one can only imagine the thousands of organizations and millions of web applications out there which are just as vulnerable (if not as vital), and which could possibly destroy the entire enterprise if compromised.

At most of the companies I’ve worked with over the past decade, web application development and support takes a back seat in terms of budgets and oversight compared to so-called “core” systems like SAP ERP. That’s because the footprint of each web application is typically small compared to “core” systems.

Of necessity, due to budget and staffing constraints at the Corporate IT level, business units have haphazardly built out and deployed a proliferation of web applications — often “on the cheap” — to address specific and sundry tactical business needs.

“Kid’s Day” at Equifax’s Argentine offices. Were the kids in command there, one is tempted to wonder …

I strongly suspect the Equifax portal for managing credit report disputes in Argentina — surely a backwater business unit within the greater Equifax organization — was one of those.

If I were a CIO or Chief Security Officer right now, I’d either have my head in the sand, or I’d be facing a choice. I could start identifying and combing through the dozens or hundreds of web applications currently running in my enterprise (each likely to be architecturally and operationally different from the others) to find and patch all the vulnerabilities. Or I could throw them all out, replacing them with a highly secure and centrally-maintainable web application platform — several of which have been developed, field-tested, and are readily available for use.

__________________________

So, there you have it from someone who’s “in the arena” of risk management every day. To all the CEOs, CIOs and CROs out there, here’s your wakeup call:  Equifax is the tip of the spear.  It’s no longer a question of “if,” but “when” your company is going to be attacked.

And when that attack happens, what’s the likelihood you’ll be able to repel it?

… Or maybe it’ll be the perfect excuse to make an unforeseen “early retirement decision” and call it a day.

__________________________

Update (9/25/17):  And just like clockwork, another major corporation ‘fesses up to a major data breach — Deloitte — equally problematic for its customers.

The Connected Home

It doesn’t take a genius to realize that the typical American home contains more than a few digital devices. But it might surprise some to learn just how many devices there actually are.

According to a recent survey of nearly 700 American adults who have children under the age of 15 living at home, the average household contains 7.3 “screens.”

The survey, which was conducted by technology research company ReportLinker in April 2017, found that TVs remain the #1 item … but the number of digital devices in the typical home is also significant.

Here’s what the ReportLinker findings show:

  • TV: ~93% of homes have at least one
  • Smartphone: ~79%
  • Laptop computer: ~78%
  • Tablet computer: ~68%
  • Desktop computer: ~63%
  • Tablet computer for children age 10 or younger: ~52%
  • Video game console: ~52%
  • e-Reader: ~16%

An interesting facet of the report focuses on how extensively children are interfacing with these devices. Perhaps surprisingly, TV remains the single most popular device used by kids under the age of 15 at home, compared to other devices that may seem to be more attuned to the younger generation’s predilections:

  • TV: ~62% used by children in their homes
  • Tablets: ~47%
  • Smartphones: ~39%
  • Video game consoles: ~38%

The ReportLinker survey also studied attitudes adults have about technology and whether it poses risks for their children. Parents who allow their children to use digital devices in their bedrooms report higher daily usage by their children compared to families who do not do so – around three hours of usage per day versus two.

On balance, parents have positive feelings about the impact technology is having on their children, with ~40% of the respondents believing that technology promotes school readiness and cognitive development, along with a higher level of technical savvy.

On the other hand, around 50% of the respondents feel that technology is hurting the “essence” of childhood, causing kids to spend less time playing, being outdoors, or reading.

A smaller but still-significant ~30% feel that their children are more isolated, because they have fewer social interactions than they would have had without digital devices in their lives.

And lastly, seven in ten parents have activated some form of parental supervision software on the digital devices in their homes – a clear indication that, despite the benefits of the technology that nearly everyone can recognize, there’s a nagging sense that downsides of that technology are always lurking just around the corner …

For more findings from the ReportLinker survey, follow this link.