The downside dangers of IoT: Overblown or underestimated?

In recent weeks, there has been an uptick in articles appearing in the press about the downside risks of the Internet of Things (IoT). The so-called “Weeping Angel” technique, which essentially allows hackers to turn a smart television into a microphone, is one eyebrow-raising example from the CIA files recently released by WikiLeaks. Another is the potential for hacking into the systems of autonomous vehicles, enabling cargo to be stolen or the vehicles themselves to be held for ransom.

Some of it seems like the stuff of science fiction – or at the very least a modern form of cloak-and-dagger activity. Regular readers of the Nones Notes blog know that when we’re in the midst of a “collective angst” about a topic of this nature, I like to solicit the views of my brother, Nelson Nones, who has worked in the fields of IT and operations management for decades.

I asked Nelson to share his perspectives on IoT, what he sees are its pitfalls, and whether the current levels of concern are justified. His comments are presented below:

Back in 1998, I was invited to speak about the so-called “millennium bug” (also known as the “Y2K bug”) at a symposium in Kuching, Malaysia. It was a hot topic at that time, because many computer systems then in use hadn’t been designed or built to deal with calendar dates beyond the end of the 20th century.  

The purpose of my presentation was to educate the audience about the nature of the problem, and how to mitigate it. During the question-and-answer session which followed, a member of the audience rose and began to speak rather hysterically of the threat which the millennium bug posed to civilization as we knew it.  

His principal concern was the millions of embedded sensors and controllers in use throughout industry which were not programmable and would therefore need to be replaced. In his view, very few people knew which of those devices were susceptible to the millennium bug, or where they were running.  

As a result, he felt that many flawed devices would go undetected, causing critical infrastructures such as power generation plants, electricity grids and aircraft to fail.  

Needless to say, his dire predictions did not come to pass and humankind sailed into the 21st century with barely a murmur. This isn’t to say that the millennium bug wasn’t a real threat – it certainly was – but rather that providers and users of information technology (IT) mostly did what was necessary to prepare for it.  As Britain’s Guardian newspaper reported in April 2000, “In truth, there have been bug incidents … none of this, however, adds up to global recession, or infrastructure collapse, or accidental nuclear war, as the most heated prophets were anticipating.”  

It is for similar reasons that I take much of today’s hype over security vulnerabilities of IoT with more than a pinch of salt. 

It’s worth noting that, technologically speaking, IoT isn’t really very new at all. As the prophet of doom at my 1998 symposium (correctly) observed, sensors, software, actuators and electronic controllers have been integral components of automated industrial systems for the past thirty years at least.   

What’s new is that these technologies have begun to be accepted and deployed by consumers. I say “begun” because I don’t know anyone who has actually rigged a “smart home” to work in the all-encompassing way breathlessly envisioned by purveyors of home automation technology; but I do know people who use the technology for specific purposes such as home security, thermostat control and recording TV programs.  

Just last week I spoke with someone who is beta testing a self-driving Tesla automobile, but he confessed that he still won’t take his hands off the wheel because he doesn’t really trust the self-driving technology yet.  

What’s also new is that businesses are extending their use of sensors and controllers well beyond the confines of plants, factories and warehouses. For example, trucking companies routinely use global positioning system (GPS) sensors to monitor fleet locations in real-time.  

Aircraft engine makers such as Rolls-Royce and GE rely on management and monitoring systems that transmit information from sensors to ground stations for real-time analysis during flight. Many problems detected in this manner can be corrected instantly, by relaying instructions back to controllers and actuators installed on the engine.  

The common denominator for what’s new is the use of existing Internet infrastructure; hence the “I” in “IoT.”  

In earlier times, sensors, software and electronic controllers could communicate only through local area networks (LANs) which were physically isolated and therefore impermeable to external attacks. But when those devices are connected to the public Internet, in theory anyone can access them — including cyber-criminals and governments engaged in sabotage or espionage, or who want to hold things for ransom, surreptitiously watch live feeds, or deploy botnets for distributed denial of service (DDoS) attacks.  

It is clear, therefore, that the root causes of privacy and security concerns arising from increasing IoT usage are mainly network security lapses, and not the things themselves.

Ensuring the highest possible degree of network security is no easy task. Above and beyond arcane technical details such as encryption, network firewalls, and the opening and closing of ports, it means deploying multiple layers of defenses according to specific policies and controls, and that requires skills and knowledge which most consumers, and even many businesses, do not possess. 

Still, one doesn’t have to be a network geek to implement basic security mechanisms that far too many people overlook. In search of easy pickings, cyber-criminals usually prefer to exploit the huge number of unlocked doors begging for their attention, rather than wasting time trying to penetrate even slightly stronger defenses.   

For example, many people install wireless networks in their homes but forget to change the default router password and default network name (SSID) – or they pick a password that’s easy to guess. In addition, many people leave their network “open” to anyone with a wireless card by failing to enable wireless security (WPA2, or the older and weaker WPA or WEP), or by choosing a weak security key.   

An attacker can discover those lapses in a matter of seconds, gaining full administrative authority and control over the compromised network with little risk of detection. This, in turn, gives the attacker immediate access to, and remote control over, any device on the network which is switched on but does not require authentication; for example, network printers, data storage devices, cameras, TVs and personal computers (PCs) which are not configured to require a user logon. 
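To make the “easy pickings” point concrete, here is a minimal sketch in Python of why a default or dictionary password falls in seconds: the attacker’s first guesses come from a short list of factory defaults and common choices, not from brute force. The password strings below are illustrative assumptions, not drawn from any real device.

```python
# Illustrative only: a tiny "wordlist" attack against a weak password.
# Real attackers use lists of millions of leaked and default passwords,
# but even this short list covers an alarming share of home routers.
COMMON_DEFAULTS = ["admin", "password", "1234", "letmein", "root"]

def guesses_needed(password, wordlist):
    """Return how many tries a dictionary attack needs, or None if the
    password isn't on the list and brute force would be required."""
    for i, candidate in enumerate(wordlist, start=1):
        if candidate == password:
            return i
    return None

print(guesses_needed("password", COMMON_DEFAULTS))    # 2
print(guesses_needed("kX7#qP!v9z", COMMON_DEFAULTS))  # None
```

A password that appears on such a list is found in a handful of tries; one that doesn’t forces the attacker to move on to a harder (and usually less attractive) target.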

Plugging those security holes doesn’t require specialist knowledge and shouldn’t take more than an hour for most home networks. Recognizing the security concerns, an increasing number of hardware and software vendors are preconfiguring their products in “full lockdown” mode, which provides basic security by default and requires users to apply specialist knowledge in order to open up their networks as necessary for greater convenience.  

This is precisely what Microsoft did over a decade ago, with great success, in response to widely publicized security vulnerabilities in its Windows® operating system and Internet Explorer browser. 

It’s all too easy to imagine the endgames of hypothetical scenarios in which the bad apples win by wresting control over the IoT from the good guys. But just like the millennium bug nearly two decades ago, it is wiser to heed the wisdom of Max Ehrmann’s Desiderata, published back in 1927:  

“Exercise caution in your business affairs, for the world is full of trickery … but do not distress yourself with dark imaginings.”  

Going forward, I’m confident that a healthy dose of risk intelligence, and not fear, will prove to be the key for successfully managing the downside aspects of IoT.

_________________________

So those are Nelson’s views on the Internet of Things. What about you?  Are you in agreement, or are there aspects about which you may think differently?  Please share your thoughts with other readers.

In the “right to be forgotten” battles, Google’s on the defensive again.

Suddenly, the conflict between Google and the European Union countries regarding the censoring of search results has taken on even wider worldwide proportions.

This past week, France’s data protection office (CNIL) upheld its order requiring Google to broaden the “right to be forgotten” by censoring search results worldwide — not just in Europe.

Google had appealed the initial CNIL ruling.

The CNIL rejected Google’s argument that a worldwide implementation of the European standard of censoring search results would mean that the Internet would be only as free as the “least free place.” (Think Belarus or Syria.)  But in its ruling, the CNIL noted that a country-by-country implementation of the “right to be forgotten” would mean that the right could be circumvented too easily.

While it’s true that more than 95% of Google searches in Europe are performed via European versions of the company’s search engine tool, such as google.fr and google.co.uk, identical searches can be performed easily using google.com, meaning that anyone trying to find “forgotten” information on an individual can do so easily, irrespective of the European standard.

As I blogged back in May, The European Court of Justice’s 2014 ruling meant that Google is required to allow residents of EU countries to delete links to certain harmful or embarrassing information that may appear about themselves in Google search results.

The directive has turned into a real thicket of challenges for Google.

The definition of “harmful or embarrassing” is somewhat amorphous, as the court’s ruling encompassed links to information ranging from excessive and harmful at one end of the scale all the way down to links that are merely outdated, inadequate or irrelevant.

Since the ruling went into effect, Google has had to field requests to remove more than one million links from European search results.

Link removal isn’t accomplished via some sort of “bot” procedure.  Instead, each request is considered on a case-by-case basis by a panel of arbiters made up of attorneys, paralegals and search engineers.

Approximately one-third of the links in question have been removed following panel review, while about half have remained in search results.

The rest – the real toughies – are still under review, and their status as yet unresolved.

Obviously, for this activity to spread from covering just European search engines to include potentially the entire world isn’t what Google has in mind at all.  (If Google could have its way, doubtless the whole notion of “the right to be forgotten” would be off the table.)

But the situation is getting pretty hot now. French authorities imposed a 15-day compliance deadline, after which Google could be fined nearly US$350,000.

Of course, the amount of that penalty pales in comparison to the cost Google would incur to comply with the directive.

But that fine is just the opening salvo; there’s no telling what the full degree of financial penalties might turn out to be for continued non-compliance.

I wrote before that it’s difficult to know where the world will eventually end up on the issue of censoring search engine results.  Today, I don’t think we’re anywhere closer to knowing.

China’s controversial product supplier pledge: An “on the ground” view from the Far East.

The business world is abuzz about the latest moves by China to regulate the behavior of U.S. and other foreign companies that choose to do business in that country.  What’s the real skinny?


While much of the reporting and commentary has been decidedly scant on details, we can actually take a look at the official document that contains the various provisos the Chinese government is intending to impose on foreign companies.

Ostensibly, the declaration is aimed at “protecting user security.” Here are the six provisions that make up the declaration:

Information Technology Product Supplier Declaration of Commitment to Protect User Security

Our company agrees to strictly adhere to the two key principles of “not harming national security and not harming consumer rights” and hereby promises to:

#1.  Respect the user’s right to know. To clearly advise users of the scope, purpose, quantity, storage location, etc. of information collected about the user; and to use clear and easy-to-understand language in the user agreement regarding policies and details of protecting user security and privacy.

#2.  Respect the user’s right to control. To permit the user to determine the scope of information that is collected and products and systems that are controlled; to collect user information only after openly obtaining user permission, and to use collected user information to [sic] the authorized purposes only.

#3.  Respect the user’s right to choice. To allow the user to agree, reject or withdraw agreement for collection of user information; to permit the user to choose to install or uninstall non-essential components; to not restrict user selection of other products and services.

#4.  Guarantee product safety and trustworthiness. To use effective measures to ensure the security and trustworthiness of products during the design, development, production, delivery and maintenance processes; to provide timely notice and fixes upon discovery of security vulnerabilities; to not install any hidden functionalities or operations the user is unaware of [sic] within the product.

#5.  Guarantee the security of user information. To employ effective measures to guarantee that any user information that is collected or processed isn’t illegally altered, leaked, or used; to not transfer, store or process any sensitive user information collected within the China market outside China’s borders without express permission of the user or approval from relevant authorities.

#6.  Accept the supervision of all parts of society. To promise to accept supervision from all parts of society, to cooperate with third-party institutions for assessment and verification that products are secure and controllable and that user information is protected etc. to prove actual compliance with these commitments.

Often with China, there are “official” pronouncements … and then there’s what’s “really” going on behind the curtain.

So to find out the real skinny, I decided to ask my brother, Nelson Nones, who has lived and worked in East Asia for years.  Since Nelson’s business activities take him to China and all of the other key Asian economies on a regular basis, I figured that his perspectives would be well-grounded and worth hearing.  Here’s Nelson’s take:

Points 1 through 3 are fundamentally no different from the provisions of personal data protection laws already on the books in the 27 member states of the European Union, plus Australia, Hong Kong, Iceland, India, Japan, South Korea, Liechtenstein, Macau, Malaysia, New Zealand, Norway, Singapore, the Philippines, Taiwan and some U.S. states.  Nor do they materially differ from privacy policy best practices — so I would not see these as particularly onerous or unreasonable.

The key difference is that these points are not enshrined in law in Mainland China, so compliance is voluntary at the moment (as it was in Singapore until 2013) – presumably binding on only those companies that sign this declaration. 

News reports also indicate that China has asked only American technology companies to sign its Declaration of Commitment, implying that domestic Chinese companies aren’t necessarily held to the same standards — although if this is truly the case, it might actually put Chinese companies at a competitive disadvantage by enhancing the appeal of American technology products to discerning Chinese users.

Point 4 doesn’t generally fall within the scope of existing personal data protection laws, but in my view its provisions fall well within the QA and warranty commitments that any legitimate technology company should be prepared to make in today’s competitive environment.

Comparing Point 5 with legislation currently in force within the European Union, Australia, Hong Kong, Iceland, India, Japan, South Korea, Liechtenstein, Macau, Malaysia, New Zealand, Norway, Singapore, the Philippines, Taiwan and some U.S. states, this point lacks some really key definitions, including:  

  • Who exactly is a “data subject” who is entitled to personal (i.e. user) data protection?
  • Who exactly is the “data controller” who owns the user information that is being collected or processed?
  • Who might be the “data processor” who stores and/or processes user information on behalf of the “data controller”?

The legislation and regulations I’ve reviewed in this realm provide very explicit (and varied) definitions of these entities. Unlike China’s Declaration of Commitment, for instance, the E.U. Data Protection Directive allows “data controllers” or “data processors” to transfer user data outside the E.U., as long as the country where the data is transferred protects the rights of “data subjects” as much as the E.U. 

It also defines which “data controllers” and “data processors” must comply with E.U. law, based on whether or not they store or process personal information within the E.U., or operate within the E.U. (regardless of where the data is actually stored or processed).

The requirement to keep sensitive user information within China’s borders, in the absence of permission from users or “relevant authorities” to transfer, store or process it elsewhere, could also be seen as an attempt by the Chinese government to enlist the help of American technology companies in circumventing the U.S. government’s ongoing Internet data-gathering programs.

If this attempt succeeds, it might further enhance the appeal of American technology products to discerning Chinese users. 

Point 6 is garnering the most headlines in the West because of the implied threat that cooperating with “third-party institutions for assessment and verification … to prove actual compliance with these commitments” could mean being forced to reveal source code or encryption algorithms.  

However, in classic Chinese style, none of that is actually spelled out. 

A little history about this: over the past decade, the Chinese government has put forward various proposals for controlling IT, only to withdraw them abruptly in the face of domestic as well as global criticism. 

As for implications, China’s Declaration of Commitment shouldn’t have significant impact on companies that aren’t in the consumer IT market.  At best, its first five points could potentially improve the competitiveness of American IT products in the  Chinese market.    

However, I would advise any tech companies wondering what to do to sit on their hands for a while. Law in China is always a “work in progress,” so the safest bet is to wait for that “progress” for as long as possible.

So there you have it – the view from someone who is smack in the middle of the business economy in East Asia. If you have your own perspectives to share on the topic, I’m sure other readers would be interested to hear them as well.

Criptext: When a recall actually looks pretty good.


I doubt there are many of us in business who have never inadvertently sent an e-mail to the wrong person … or sent a message before it was fully complete … or forgot to include an attachment.

In such cases, it would be so nice to be able to recall the e-mail — just like we used to do in the days of postal mail simply by retrieving the letter from the outgoing mail bin.

Recent news reports reveal that this capability is actually a reality now.

In the fast lane? Criptext principals just completed a successful round of investment funding.

A start-up firm called Criptext has just raised a half-million dollars in private investment funds to help it perfect and expand a product that allows any sent e-mail to be recalled — even if the recipient has already opened and read it.

According to a report from Business Insider, Criptext is currently available as a plugin and a browser extension for the popular Outlook and Gmail email services.  It operates inside the email, enabling the sender to track when, where and by whom emails have been opened, and whether attachments within them have been downloaded.

In addition, Criptext also enables the sender to recall emails, and even to set a self-destruct timer to automatically recall emails after a specified length of time.
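Criptext hasn’t published its internals, but one plausible way to make a sent email recallable is to keep the message body on a server and have the delivered email carry only a reference to it; “recalling” then simply deletes the server-side copy. The sketch below is entirely hypothetical (the class and method names are my own, purely for illustration), including the self-destruct timer.

```python
import time

class RemoteMessageStore:
    """Hypothetical sketch: the delivered email would contain only a
    message ID; the body lives here on the server. Deleting the stored
    body "recalls" the message even after the recipient has opened it."""

    def __init__(self):
        self._messages = {}  # msg_id -> (body, expiry timestamp or None)

    def send(self, msg_id, body, self_destruct_seconds=None):
        expires = (time.time() + self_destruct_seconds
                   if self_destruct_seconds is not None else None)
        self._messages[msg_id] = (body, expires)

    def recall(self, msg_id):
        self._messages.pop(msg_id, None)  # manual recall by the sender

    def read(self, msg_id):
        entry = self._messages.get(msg_id)
        if entry is None:
            return "[message recalled]"
        body, expires = entry
        if expires is not None and time.time() > expires:
            self.recall(msg_id)  # the self-destruct timer has fired
            return "[message recalled]"
        return body

store = RemoteMessageStore()
store.send("m1", "Q3 figures attached")
print(store.read("m1"))   # Q3 figures attached
store.recall("m1")
print(store.read("m1"))   # [message recalled]
```

The design trade-off is worth noting: recall only works because the sender’s server, not the recipient’s inbox, holds the content, which is also what makes the read-tracking described above possible.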

Viewing a screenshot of how Criptext works (in this case with the Gmail service), things look pretty simple (and pretty cool, too):

Criptext activity panel example

I thought it would be only a matter of time before some developer would figure out a way to “unwind” an email communiqué once the “send” button was hit.  And now we have it.

Of course, time will tell whether Criptext can live up to its billing … or if it turns out to be more of a nightmare of glitches than a dream come true.

It would be great to hear from anyone who may have first-hand experience with Criptext — or other similar email functionalities.  Please share your experiences and perspectives pro or con with other readers here.

Memo to web users with “Do Not Track” enabled: You’re being tracked anyway.

For anyone who thinks he or she is circumventing web tracking by enabling Do Not Track (DNT) functionality … think again.

A recently released study from researchers at KU Leuven-iMinds, a Belgium-based university research institute, shows that nearly 150 of the world’s leading websites have ditched tracking cookies in favor of “device fingerprinting” (or “browser fingerprinting,” as it’s sometimes called).

What’s that?  It’s the practice of evaluating selected properties of desktop computers, tablets and smartphones to build a unique user identifier.  These properties include seemingly innocuous details found on each device, such as:

  • Versions of installed software and plugins
  • Screen size
  • A listing of installed fonts

An analysis by the Electronic Frontier Foundation (EFF) has shown that for the majority of browsers, the combination of these properties creates a unique ID – thereby allowing a user to be tracked without the perpetrator needing to rely on cookies, or having to deal with pesky legal restrictions on the use of cookies.
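The mechanics are simple enough to sketch in a few lines of Python: concatenate the collected properties and hash them into one stable identifier. The property names and values below are illustrative assumptions, not the exact attributes any particular site collects.

```python
import hashlib

def device_fingerprint(properties):
    """Fold seemingly innocuous device properties into one stable
    identifier -- the essence of device/browser fingerprinting."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(properties.items()))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

device_a = {
    "user_agent": "Mozilla/5.0 ... Chrome/30.0",
    "screen": "1920x1080",
    "fonts": "Arial,Calibri,Helvetica,Times",
    "plugins": "Flash 11.9,PDF Viewer",
    "timezone": "UTC+1",
}
# One changed detail (a different font list) yields a different ID,
# which is why the combination of such details is so discriminating.
device_b = dict(device_a, fonts="Arial,Comic Sans,Helvetica,Times")

print(device_fingerprint(device_a))                                  # stable ID
print(device_fingerprint(device_a) == device_fingerprint(device_b))  # False
```

Because the same device reports the same properties on every visit, the hash is reproducible without storing anything on the device itself, which is exactly why no cookie is needed.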

Overwhelmingly, browser fingerprinting targets popular and commonly used JavaScript or Flash functions, so that nearly every person who accesses the web is a target – without their knowledge or consent.

According to the Leuven-iMinds analysis, JavaScript-based fingerprinting allows websites to track even non-Flash mobile phones and devices.  So anyone counting on a Flash-free platform such as the iPad to guard against this form of “non-cookie tracking” is in for a disappointment.

Is there anything good about device fingerprinting?  Perhaps … in that it can be used for some justifiable security-related activities such as protection against account hijacking, fraud detection, plus anti-bot and anti-scraping services.

But the accompanying bad news is this:  It can also be used for analytics and marketing purposes via the fingerprinting scripts hidden behind banner advertising.

How to fight back, if one is so-inclined?  The Leuven-iMinds researchers have developed a free tool that analyzes websites for suspicious scripts.  Known as FPDetective, it’s being made available to other researchers to conduct their own investigations.

So you’re able to identify the offenders.  But then what — short of never visiting their websites again?

Computer security measures: A whole lot of heat … and very little light?

If you’re like me, you have upwards of two dozen sets of user names and passwords associated with the various business, banking, shopping and social media sites with which you interact on a regular or occasional basis.

Trying to keep all of this information safe and secure – yet close at hand – is easier said than done. More often than not, passwords and other information end up on bits of paper floating around the office, in a wallet … or in (and out of) your head.

And to make things even more difficult, if you heeded conventional advice you’d be changing those passwords every 30 or 60 days, all while following the guidelines for creating indecipherable permutations of numbers, letters and symbols so as to throw the “bad guys” off your password’s scent.

Now, here comes a paper written by Dr. Cormac Herley, a principal researcher at Microsoft, that calls into question how much all of this focus on password protection and cyber-security is really benefiting anyone.

Dr. Herley’s paper is titled So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users. In it, he contends that the collective time and effort spent complying with all of the directives and admonitions regarding computer security far exceed the cost of the damage actually caused by cyber-security breaches.

[For the record, he estimates if the time spent by American adults on these tasks averages a minute a day, it adds up to ~$16 billion worth of time every year.]
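That figure is easy to sanity-check with back-of-envelope arithmetic. The population and hourly-value numbers below are my own rough assumptions for illustration, not Herley’s exact inputs.

```python
# Rough reproduction of the ~$16 billion/year time-cost estimate.
online_adults = 180e6      # assumed number of U.S. adults online
minutes_per_day = 1        # time spent daily on security chores
value_per_hour = 15.0      # assumed average value of an hour, in dollars

hours_per_year = minutes_per_day * 365 / 60
annual_cost = online_adults * hours_per_year * value_per_hour
print(f"${annual_cost / 1e9:.1f} billion per year")  # $16.4 billion per year
```

The point of the exercise: even one minute a day, multiplied across an entire online population, is a staggering amount of economic effort.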

Here’s a quote from Herley’s paper:

“We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice, we find that the advice is complex and growing, but the benefit is largely speculative or moot.”

It would be one thing if this screed was written by some outré blogger operating on the fringes of the discipline. But it’s coming from a senior researcher at Microsoft.

To illustrate his point, Herley summarizes the whole area of password rules, which he contends places the entire burden of password management on the user. To wit:

  • Length of password
  • Password composition (e.g., letters, numbers, special characters)
  • Non-dictionary words (in any language, not just English)
  • Don’t write the password down
  • Don’t share the password with anyone
  • Change it often
  • Don’t re-use the same passwords across sites

How much value each of these guidelines possesses is a matter of debate. For instance, the first three factors listed above are of little consequence for online attacks, because most applications and websites lock out access after three or four incorrect tries.
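The arithmetic behind the lockout point can be sketched quickly: with a lockout policy, an online attacker gets only a handful of guesses, so even a modest keyspace is effectively safe from online guessing (offline attacks against a stolen password database are another matter). The numbers below are illustrative assumptions.

```python
# Even a "weak" all-lowercase 8-character password has a large keyspace,
# and a lockout after a few tries caps the online attacker's odds.
alphabet_size = 26               # lowercase letters only
length = 8
keyspace = alphabet_size ** length

tries_before_lockout = 3
p_success = tries_before_lockout / keyspace

print(f"keyspace: {keyspace:,}")  # keyspace: 208,827,064,576
print(f"P(online success) = {p_success:.2e}")
```

With odds on the order of one in tens of billions per lockout window, elaborate composition rules add little against this particular threat, which is Herley’s point.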

Changing passwords often – whether quarterly, monthly or weekly – is never often enough, since any attack using a purloined password will likely happen within seconds, minutes or hours of its acquisition, not days later. On the other hand, requiring users to change their passwords regularly takes time and attention … and often leads to frustration and lost productivity as people hunt around for the “last, best” misplaced password they assigned to their account.

And as for those irritating certificate error warnings that pop up on the computer screen with regularity, Herley contends that most users do not understand their significance. And even if they did, what options do people have when confronted with one of these warnings, other than exiting the program?

As it turns out, there’s not much to fear, as virtually all certificate errors are “false positives.” With certificates, as with so many other issues of cyber-security, Herley maintains that the dangers are often not evidence-based. As for computer users, “The effort we ask of them is real, while the harm we warn them of is theoretical,” he writes.

Herley’s main beef is that all of the energy surrounding cyber-security, and what is asked of consumers, is a cost borne by the entire population … but the cost of security directives should be in proportion to the victimization rate, which he characterizes as minuscule.

An interesting prognosis … and a rather surprising one considering the source.