The Connected Home

It doesn’t take a genius to realize that the typical American home contains more than a few digital devices. But it might surprise some to learn just how many devices there actually are.

According to a recent survey of nearly 700 American adults who have children under the age of 15 living at home, the average household contains 7.3 “screens.”

The survey, which was conducted by technology research company ReportLinker in April 2017, found that TVs remain the #1 item … but the number of digital devices in the typical home is also significant.

Here’s what the ReportLinker findings show:

  • TV: ~93% of homes have at least one
  • Smartphone: ~79%
  • Laptop computer: ~78%
  • Tablet computer: ~68%
  • Desktop computer: ~63%
  • Tablet computer for children age 10 or younger: ~52%
  • Video game console: ~52%
  • e-Reader: ~16%

An interesting facet of the report focuses on how extensively children are interacting with these devices. Perhaps surprisingly, TV remains the single most popular device used by kids under the age of 15 at home, outpacing devices that might seem more attuned to the younger generation’s predilections:

  • TV: ~62% used by children in their homes
  • Tablets: ~47%
  • Smartphones: ~39%
  • Video game consoles: ~38%

The ReportLinker survey also studied attitudes adults have about technology and whether it poses risks for their children. Parents who allow their children to use digital devices in their bedrooms report higher daily usage by their children compared to families who do not do so – around three hours of usage per day versus two.

On balance, parents have positive feelings about the impact technology is having on their children, with ~40% of the respondents believing that technology promotes school readiness and cognitive development, along with a higher level of technical savvy.

On the other hand, around 50% of the respondents feel that technology is hurting the “essence” of childhood, causing kids to spend less time playing, being outdoors, or reading.

A smaller but still-significant ~30% feel that their children are more isolated, because they have fewer social interactions than they would have had without digital devices in their lives.

And lastly, seven in ten parents have activated some form of parental supervision software on the digital devices in their homes – a clear indication that, despite the benefits of the technology that nearly everyone can recognize, there’s a nagging sense that downsides of that technology are always lurking just around the corner …

For more findings from the ReportLinker survey, follow this link.

Legislators tilt at the digital privacy windmill (again).

In the effort to preserve individual privacy in the digital age, hope springs eternal.

The latest endeavor to protect individuals’ privacy in the digital era is legislation introduced this week in the U.S. Senate that would require law enforcement and government authorities to obtain a warrant before accessing the digital communications of U.S. citizens.

Known as the ECPA Modernization Act of 2017, it is bipartisan legislation introduced by two senators known for being polar opposites on the political spectrum: Sen. Patrick Leahy (D-VT) on the left and Sen. Mike Lee (R-UT) on the right.

At present, only a subpoena is required for the government to gain full access to Americans’ e-mails that are over 180 days old. Under the new ECPA legislation, access couldn’t be granted without a showing of probable cause and a judge’s signature.

The ECPA Modernization Act would also require a warrant for accessing geo-location data, while setting new limits on metadata collection. If the government did access cloud content without a warrant, the new legislation would make that data inadmissible in a court of law.

There’s no question that the original ECPA (Electronic Communications Privacy Act) legislation, enacted in 1986, is woefully out of date. After all, it stems from a time before the modern Internet.

It’s almost quaint to realize that the old ECPA legislation defines any e-mail older than 180 days as “abandoned” — and thereby accessible to government officials. After all, we now live in an age when many people keep the same e-mail address far longer than their home address.

The fact is, many individuals have come to rely on technology companies to store their e-mails, social media posts, blog posts, text messages, photos and other documents — and to do it for an indefinite period of time. It’s perceived as “safer” than keeping the information on a personal computer that might someday malfunction for any number of reasons.

Several important privacy advocacy groups are hailing the proposed legislation and urging its passage – among them the Center for Democracy & Technology and the Electronic Frontier Foundation.

Sophia Cope, an attorney at EFF, notes that the type of information individuals have entrusted to technology companies isn’t very secure at all. “Many users do not realize that an e-mail stored on a Google or Microsoft service has less protection than a letter sitting in a desk drawer at home,” Cope maintains.

“Users often can’t control how and when their whereabouts are being tracked by technology,” she adds.

The Senate legislation is also supported by the likes of Google, Amazon, Facebook and Twitter.

All of which makes it surprising that this type of legislation – different versions of which have been introduced in the U.S. Senate every year since 2013 – has had such trouble gaining traction.

The reasons for prior-year failure are many and varied – and quite revealing in terms of illuminating how crafting legislation is akin to sausage-making.  Which is to say, not very pretty.  But this year, the odds look more favorable than ever before.

Two questions remain on the table: First, will the legislation pass?  And second, will it really make a difference in terms of protecting the privacy of Americans?

Any readers with particular opinions are encouraged to weigh in.

The downside dangers of IoT: Overblown or underestimated?

In recent weeks, there has been an uptick in articles appearing in the press about the downside risks of the Internet of Things (IoT). The so-called “Weeping Angel” technique, which essentially allows hackers to turn a smart television into a microphone, is one eyebrow-raising example from the CIA files recently released by WikiLeaks. Another is the potential for hacking into the systems of autonomous vehicles, enabling cargo to be stolen or the vehicles themselves to be held for ransom.

Some of it seems like the stuff of science fiction – or at the very least a modern form of cloak-and-dagger activity. Regular readers of the Nones Notes blog know that when we’re in the midst of a “collective angst” about a topic of this nature, I like to solicit the views of my brother, Nelson Nones, who has been in the fields of IT and operations management for decades.

I asked Nelson to share his perspectives on IoT, what he sees are its pitfalls, and whether the current levels of concern are justified. His comments are presented below:

Back in 1998, I was invited to speak about the so-called “millennium bug” (also known as the “Y2K bug”) at a symposium in Kuching, Malaysia. It was a hot topic at that time, because many computer systems then in use hadn’t been designed or built to deal with calendar dates beyond the end of the 20th century.  

The purpose of my presentation was to educate the audience about the nature of the problem, and how to mitigate it. During the question-and-answer session which followed, a member of the audience rose and began to speak rather hysterically of the threat which the millennium bug posed to civilization as we knew it.  

His principal concern was the millions of embedded sensors and controllers in use throughout industry which were not programmable and would therefore need to be replaced. In his view, very few people knew which of those devices were susceptible to the millennium bug, or where they were running.  

As a result, he felt that many flawed devices would go undetected, causing critical infrastructures such as power generation plants, electricity grids and aircraft to fail.  

Needless to say, his dire predictions did not come to pass and humankind sailed into the 21st century with barely a murmur. This isn’t to say that the millennium bug wasn’t a real threat – it certainly was – but rather that providers and users of information technology (IT) mostly did what was necessary to prepare for it.  As Britain’s Guardian newspaper reported in April 2000, “In truth, there have been bug incidents … none of this, however, adds up to global recession, or infrastructure collapse, or accidental nuclear war, as the most heated prophets were anticipating.”  

It is for similar reasons that I take much of today’s hype over security vulnerabilities of IoT with more than a pinch of salt. 

It’s worth noting that, technologically speaking, IoT isn’t really very new at all. As the prophet of doom at my 1998 symposium (correctly) observed, sensors, software, actuators and electronic controllers have been integral components of automated industrial systems for the past thirty years at least.   

What’s new is that these technologies have begun to be accepted and deployed by consumers. I say “begun” because I don’t know anyone who has actually rigged a “smart home” to work in the all-encompassing way breathlessly envisioned by purveyors of home automation technology; but I do know people who use the technology for specific purposes such as home security, thermostat control and recording TV programs.  

Just last week I spoke with someone who is beta testing a self-driving Tesla automobile, but he confessed that he still won’t take his hands off the wheel because he doesn’t really trust the self-driving technology yet.  

What’s also new is that businesses are extending their use of sensors and controllers well beyond the confines of plants, factories and warehouses. For example, trucking companies routinely use global positioning system (GPS) sensors to monitor fleet locations in real-time.  

Aircraft engine makers such as Rolls-Royce and GE rely on management and monitoring systems to transmit information from sensors to ground stations for real-time analysis during flight.  Many problems detected in this manner can be corrected instantly in mid-flight, by relaying instructions back to controllers and actuators installed on the engine.  

The common denominator for what’s new is the use of existing Internet infrastructure; hence the “I” in “IoT.”  

In earlier times, sensors, software and electronic controllers could communicate only through local area networks (LANs) which were physically isolated and therefore impermeable to external attacks. But when those devices are connected to the public Internet, in theory anyone can access them — including cyber-criminals and governments engaged in sabotage or espionage, or who want to hold things for ransom, surreptitiously watch live feeds, or deploy botnets for distributed denial of service (DDoS) attacks.  
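To make that exposure concrete, here is a minimal Python sketch of the kind of reachability test an attacker (or a defender) might run against a networked device. The host address and port below are placeholders rather than a real target (192.0.2.10 is a reserved documentation address, and 554 is simply a port commonly used by IP cameras):

```python
# Minimal sketch: test whether a device's service port answers over the network.
# A device that answers this way from the public Internet is exactly the kind
# of unguarded endpoint described above. Host and port are placeholder values.
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 192.0.2.10 is from the TEST-NET-1 documentation range; substitute a host you own.
    print(port_is_open("192.0.2.10", 554))  # 554 = RTSP, commonly used by IP cameras
```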

It is clear, therefore, that the root causes of privacy and security concerns arising from increasing IoT usage are mainly network security lapses, and not the things themselves.

Ensuring the highest possible degree of network security is no easy task. Above and beyond arcane technical details such as encryption, installing network firewalls, and opening and closing of ports, it means deploying multiple layers of defenses according to specific policies and controls, and that requires skills and knowledge which most consumers, and even many businesses, do not possess. 

Still, one doesn’t have to be a network geek to implement basic security mechanisms that far too many people overlook. In search of easy pickings, cyber-criminals usually prefer to exploit the huge number of unlocked doors begging for their attention, rather than wasting time trying to penetrate even slightly stronger defenses.   

For example, many people install wireless networks in their homes but forget to change the default router password and default network name (SSID) – or they pick a password that’s easy to guess. In addition, many people leave their network “open” to anyone having a wireless card by failing to implement a security key such as a WPA, WPA2 or WEP key, or by choosing a weak security key.   

An attacker can discover those lapses in a matter of seconds, or less, giving them full administrative authority and control over the compromised network with little risk of detection. This, in turn, would give the attacker immediate access to, and remote control over, any device on the network which is switched on but does not require authentication; for example, network printers, data storage devices, cameras, TVs and personal computers (PCs) which are not configured to require a user logon. 
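For illustration, here is a minimal Python sketch of the sort of default-credential check an attacker automates. The gateway address, login path and form fields are hypothetical placeholders, not any particular router’s real interface, and the credential list is a tiny sample of the thousands found in real attack dictionaries:

```python
# Minimal sketch of a default-credential check against a home router's
# admin login. The URL and form field names are hypothetical placeholders.
import requests

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("admin", "1234"),
    ("user", "user"),
]

ROUTER_URL = "http://192.168.1.1/login"  # a typical private-network gateway address

def try_default_logins(url: str) -> None:
    for username, password in DEFAULT_CREDENTIALS:
        try:
            response = requests.post(
                url, data={"username": username, "password": password}, timeout=2
            )
        except requests.RequestException:
            print("Router unreachable; nothing to test.")
            return
        if response.status_code == 200:
            print(f"Default credentials still active: {username}/{password}")
            return
    print("No default credentials accepted -- the easiest door is locked.")

if __name__ == "__main__":
    try_default_logins(ROUTER_URL)
```

Against defaults like these, the whole loop finishes in well under a second.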

Plugging those security holes doesn’t require specialist knowledge and shouldn’t take more than an hour for most home networks. Recognizing the security concerns, an increasing number of hardware and software vendors are preconfiguring their products in “full lockdown” mode, which provides basic security by default and requires users to apply specialist knowledge in order to open up their networks as necessary for greater convenience.  

This is precisely what Microsoft did over a decade ago, with great success, in response to widely publicized security vulnerabilities in its Windows® operating system and Internet Explorer browser. 

It’s all too easy to imagine the endgames of hypothetical scenarios in which the bad apples win by wresting control over the IoT from the good guys. But just like the millennium bug nearly two decades ago, it is wiser to heed the wisdom of Max Ehrmann’s Desiderata, published back in 1927:  

“Exercise caution in your business affairs, for the world is full of trickery … but do not distress yourself with dark imaginings.”  

Going forward, I’m confident that a healthy dose of risk intelligence, and not fear, will prove to be the key for successfully managing the downside aspects of IoT.

_________________________

So those are Nelson’s views on the Internet of Things. What about you?  Are you in agreement, or are there aspects about which you may think differently?  Please share your thoughts with other readers.

Ad fraud: It’s worse than you think.

It isn’t so much the size of the problem, but rather its implications.

A recently published report by White Ops, a digital advertising security and fraud detection company, reveals that the source of most online ad fraud in the United States isn’t large data centers, but rather millions of infected browsers in devices owned by people like you and me.

This is an important finding, because when bots run in browsers, they appear as “real people” to most advertising analytics and many fraud detection systems.

As a result, they are more difficult to detect and much harder to stop.

These fraudulent bots that look like “people” visit publishers, which serve ads to them and collect revenues.

Of course, once detected, the value of these “bot-bound” ads plummets in the bidding markets.  But is it really a self-correcting problem?   Hardly.

The challenge is that even as those browsers are being detected and rejected as the source of fraudulent traffic, new browsers are being infected and attracting top-dollar ad revenue just as quickly.

It may be that only 3% of all browsers account for well over half of the entire fraud activity by dollar volume … but that 3% is changing all the time.

Even worse, White Ops reports that access to these infected browsers is happening on a “black market” of sorts, where one can buy the right to direct a browser-resident bot to visit a website and generate fraudulent revenues.

… to the tune of billions of dollars every year.  According to ad traffic platform developer eZanga, advertisers are wasting more than $6 billion every year in fraudulent advertising spending.  For some advertisers involved in programmatic buying, fake impressions and clicks account for a majority of their spending — even as much as 70%.

The solution to this mess in online advertising is hard to see. It isn’t something as “simple and elegant” as blacklisting fake sites, because the fraudsters are dynamically building websites from stolen content, creating (and deleting) hundreds of them every minute.

They’ve taken the very attributes of the worldwide web which make it so easy and useful … and have thrown them back in our faces.

Virus protection software? To these fraudsters, it’s a joke.  Most anti-virus resources cannot even hope to keep pace.  Indeed, some of them have been hacked themselves – their code stolen and made available on the so-called “deep web.”  Is it any wonder that so many Internet-connected devices – from smartphones to home automation systems – contain weaknesses that make them subject to attack?

The problems would go away almost overnight if all infected devices were cut off from the Internet. But we all know that this is an impossibility; no one is going to throw the baby out with the bathwater.

It might help if more people in the ad industry were willing to admit that there is a big problem, and were more amenable to involving federal law enforcement in attacking it.  But I’m not sure even that would make all that much difference.

There’s no doubt we’ve built a Frankenstein-like monster.  But it’s one we love as well as hate.  Good luck squaring that circle!

The financial goals — and worries — of affluent consumers: It turns out they’re more similar to those of the broader population than different.

But gender differences do exist …

In this year’s U.S. presidential election campaign, there’s been a good deal of attention paid to so-called “working class” voters. No doubt, this is a segment of the electorate that’s especially unhappy with the current state of affairs in the country.

But what about other population groups?

As it turns out, affluent Americans are worried about many of the same things. A recent survey conducted by the Shullman Research firm reveals that their worries are fundamentally similar to those of other Americans.

Here’s what survey respondents revealed as their top worries:

  • Your own health: ~36% of respondents cited as a top worry
  • Your family’s health: ~31% cited
  • Having enough money saved to retire comfortably: ~30%
  • The economy going into recession: ~28%
  • Terrorism: ~27%
  • Inflation: ~23%
  • The price of gasoline: ~22%
  • Being out of work and finding a good job: ~20%
  • Political issues / warfare around the world: ~15%
  • Taking care of elderly parents: ~15%

[One mild surprise for me was seeing how many respondents cited “the price of gasoline” as a source of worry, considering both the recent easing of those prices and the affluence level of the survey sample.]

Generally speaking, the research found few gender differences in these responses, but with a few exceptions.

Men were more likely to cite “inflation” as a concern (28% for men vs. 18% for women), whereas women were more likely to consider “the economy going into recession” as a concern (30% for women vs. 26% for men).

Where there’s more divergence between the genders is in how people identify their top financial goals. Here’s how the various goals tested by the Shullman research ranked overall:

  • Having enough money for daily living expenses: ~57% cited as a top financial goal
  • Having enough money for unexpected emergency expenses: ~56%
  • Having enough income for retirement: ~46%
  • Reducing my debt: ~41%
  • Improving my standard of living: ~40%
  • Remaining financially independent: ~39%
  • Becoming financially independent: ~33%
  • Keeping up with inflation: ~30%
  • Providing protection for family members if I die: ~29%
  • Purchasing a home: ~19%
  • Providing for my children’s college expenses: ~19%
  • Providing an estate for my spouse and/or children: ~16%

Obviously, some of the goals that rank further down the list are more applicable to certain people at certain stages in their lives — whether they’re just getting started in their career, raising young children and so forth.

But I was struck by how many of these supposedly “affluent” respondents cited “having enough money for daily living expenses” as a top financial goal. Wouldn’t more people have already achieved that milestone?

Another interesting finding: With many of the goals, women place more importance on them than do men:

  • 63% of women versus just 50% of men consider “having enough money for daily living expenses” to be a top financial goal.
  • 63% of women versus just 47% of men consider “having enough money for unexpected emergency expenses” a top financial goal.
  • 48% of women versus just 33% of men consider “reducing debt” a top financial goal.
  • 45% of women versus just 34% of men consider “improving their standard of living” a top financial goal.
  • 36% of women versus 30% of men consider “becoming financially independent” a top financial goal.

One explanation for the differences observed between men and women may be the “baseline” from which each group is weighing their financial goals. But since the survey was limited to affluent consumers, one might have expected that the usual demographic characteristics wouldn’t apply.  Perhaps the differences are rooted in other, more fundamental characteristics.

What are your thoughts? Please share them with other readers.

More information and insights from this study can be accessed here (fee-based).

In the Facebook-faceprint tussle, score one for the little guys.

Is that Maria Callas? Check with Facebook — they’ll know.

I blogged last year about privacy concerns surrounding Facebook’s “face geometry” database activities, which have led to lawsuits in Illinois under the premise that those activities run afoul of that state’s laws regarding the use of biometric data.

The Illinois legislation, enacted in 2008, requires companies to obtain written authorization from subjects prior to collecting any sort of face geometry or related biometric data.

The lawsuit, which was filed in early 2015, centers on Facebook’s automatic photo-tagging feature which has been active since around 2010. The “faceprints” feature – Facebook’s term for face geometry – recognizes faces based on the social network’s vast archive of users and their content, and suggests their names when they appear in photos uploaded by their friends.

The lawsuit was filed by three plaintiffs in a potential class-action effort, and it’s been mired in legal wrangling ever since.

From the outset, many had predicted that Facebook would emerge victorious.  Eric Goldman, a law professor at Santa Clara University, noted in 2015 that the Illinois law is “a niche statute, enacted to solve a particular problem.  Seven years later, it’s being applied to a very different set of circumstances.”

But this past week, a federal judge sided not with Facebook, but with the plaintiffs by refusing to grant a request for dismissal.

In his ruling issued on May 5th, U.S. District Court Judge James Donato rejected Facebook’s contention that the Illinois Biometric Information Privacy Act does not apply to faceprints derived from photos, but only to biometric data collected from some other source, such as in-person scans.

The judge found this contention inconsistent with the purpose of the Illinois law. Donato wrote:

“The statute is an informed consent privacy law addressing the collection, retention and use of personal biometric identifiers and information at a time when biometric technology is just beginning to be broadly deployed. Trying to cabin this purpose within a specific in-person data collection technique has no support in the words and structure of the statute, and is antithetical to its broad purpose of protecting privacy in the face of emerging biometric technology.”

This isn’t the first time that the Illinois law has withstood a legal challenge. Another federal court judge, Charles Norgle, sided against Shutterfly recently on the same issues.

And Google is now in the crosshairs; it’s facing a class-action lawsuit filed early this year for its face geometry activities involving Google Photos.

Clearly, this fight has a long way to go before the issues are resolved.

If you have strong opinions pro or con about social networks’ use of face geometry, please share your views with other readers in the comment section below.

Is Apple setting itself up for failure in the FBI’s Syed Farook probe?

There’s no question that Apple’s refusal to help the FBI gain access to data in one of the iPhones used during the San Bernardino massacre has been getting scads of coverage in the news and business press.

Apple’s concerns, eloquently stated by CEO Tim Cook, are understandable. From the company’s point of view, it is at risk of giving up a significant selling feature of the iPhone by enabling “back door” access to encrypted data.  Apple’s contention is that many people have purchased the latest iPhone models precisely to protect their data from prying eyes.

On the other hand, the U.S. government’s duty is to protect the American public from terrorist activities.

Passions are strong — and they’re lining up along some predictable social and political fault lines. After having read more than a dozen news articles in the various news and business media over the past week or so, I decided to check in with my brother, Nelson Nones, for an outsider’s perspective.

As someone who has lived and worked outside the United States for decades, Nelson’s perspectives are invariably interesting because they’re formed from the vantage point of “distance.”

Furthermore, Nelson has held very strongly negative views about the efforts of the NSA and other government entities to monitor computer and cellphone records. I’ve given voice to his perspectives on this topic on the Nones Notes blog several times, such as here and here.

So when I asked Nelson to share his perspectives on the Apple/FBI dispute, I was prepared for him to weigh in on the side of Apple.

Well … not so fast. Shown below is what he wrote to me:

______________________

This may come as a surprise, but I’m siding with the government on this one. Why?  Three reasons:

Point #1: The device in question is (and was) owned by San Bernardino County, a government entity.

The Fourth Amendment of the U.S. Constitution provides, “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated …”

The investigation that the FBI wants to conduct could either be thought of as a seizure of property (the iPhone), or as a search (accessing the iPhone’s contents). Either way, Fourth Amendment protections do not apply in this case.

Within the context of the Fourth Amendment, seizure of property means interfering with an individual’s possessory interests in the property. In this case, the property isn’t (and never was) owned by an individual; it is public property.  Because Farook, an individual, never had a possessory interest in the property, no “unreasonable seizure” can possibly occur.

Also, within the meaning of the Fourth Amendment, an “unreasonable search” occurs when the government violates an individual’s reasonable expectation of privacy. In this case the iPhone was issued to Farook by his employer.  It is well known and understood through legal precedent that employees have no reasonable expectation of privacy when using employer-furnished equipment.  For example, employers can and do routinely monitor the contents of the email accounts they establish for their employees.

Point #2: The person who is the subject of the investigation (Syed Farook) is deceased.

According to Paul J. Stablein, a U.S. criminal defense attorney, “Unlike the concept of privilege (like communications between doctor and patient or lawyer and client), the privacy expectations afforded persons under the Fourth Amendment do not extend past the death of the person who possessed the privacy right.”

So, even if the iPhone belonged to Farook, no reasonable expectation of privacy exists today because Farook is no longer alive.

Point #3: An abundance of probable cause exists to issue a warrant.

In addition to protecting people against unreasonable searches and seizures, the Fourth Amendment also states, “… no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

I strongly believe the U.S. National Security Agency’s mass surveillance was unconstitutional and therefore illegal, due to the impossibility of establishing probable cause for indiscriminately searching the records of any U.S. citizen who might have placed or received a telephone call, sent or received an email message or logged on to their Facebook account.

That’s because these acts do not, in and of themselves, provide any reasonable basis for believing that evidence of a crime exists.

I also strongly believe that U.S. citizens have the right to encrypt their communications. No law exists preventing them from doing so for legal purposes. Conducting indiscriminate searches through warrantless “back door” decryption would be just as unconstitutional and illegal as mass surveillance.

In this case, however, multiple witnesses watched Farook and his wife, Tashfeen Malik, open fire on a holiday party, killing 14 people, and then flee after leaving behind three pipe bombs apparently meant to detonate remotely when first responders arrived on the scene.

Additional witnesses include the 23 police officers involved in the shootout in which Farook and Malik eventually were killed.

These witnesses have surely given sworn statements attesting to the perpetrators’ crimes.

It is eminently reasonable to believe that evidence of these crimes exists in the iPhone issued to Farook. So, in this case there can be no doubt that all the requirements for issuing a warrant have been met.

For these three reasons, unlike mass surveillance or the possibility of warrantless “back door” decryption, the law of the land sits squarely and undeniably on the FBI’s side.

Apple’s objections.

Apple’s objections, seconded by Edward Snowden, rest on the notion that it’s “too dangerous” to assist the FBI in this case, because the technology Apple would be forced to develop cannot be kept secret.

“Once [this] information is known, or a way to bypass the code is revealed, [iPhone] encryption can be defeated by anyone with that knowledge,” says Tim Cook, Apple’s CEO. Presumably this could include overreaching government agencies, like the National Security Agency, or criminals and repressive foreign regimes.

It is important to note that Apple has not been ordered to invent a “back door” that decrypts the iPhone’s contents. Instead, the FBI wants to unlock the phone quickly by brute force; that is, by automating the entry of different passcode guesses until they discover the passcode that works.

To do this successfully, it’s necessary to bypass two specific iPhone security features. The first renders brute force automation impractical by progressively increasing the minimum time allowed between entries.  The second automatically destroys all of the iPhone’s contents after the maximum allowable number of consecutive incorrect guesses is reached.
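To see why those two features matter, consider a toy model in Python. The delay schedule and wipe threshold below are my own illustrative assumptions, not Apple’s actual parameters:

```python
# Toy model of the two iPhone safeguards described above: escalating delays
# between failed passcode entries, and auto-erase after too many failures.
# The numbers are illustrative assumptions, not Apple's actual parameters.

ESCALATING_DELAYS = [0, 0, 0, 0, 60, 300, 900, 3600, 3600, 3600]  # seconds after each failure
WIPE_THRESHOLD = 10       # failed guesses allowed before the device erases itself
PASSCODE_SPACE = 10_000   # 4-digit passcodes: 0000 through 9999

def brute_force_prospects() -> None:
    # Safeguard 2 caps the attacker at WIPE_THRESHOLD guesses out of the
    # whole passcode space...
    odds = WIPE_THRESHOLD / PASSCODE_SPACE
    # ...and safeguard 1 forces a wait between even those few tries.
    waiting = sum(ESCALATING_DELAYS[:WIPE_THRESHOLD])
    print(f"Guesses before auto-erase: {WIPE_THRESHOLD} of {PASSCODE_SPACE}")
    print(f"Chance of hitting the right passcode first: {odds:.2%}")      # 0.10%
    print(f"Forced waiting time for those guesses: {waiting / 3600:.1f} hours")

brute_force_prospects()
```

Bypass both features, and a computer could fire off all ten thousand guesses in short order; that is precisely the capability the FBI is asking Apple to enable.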

Because the iPhone’s operating system must be digitally signed by Apple, only Apple can install the modifications needed to defeat these features.

It’s also important to note that Magistrate Judge Sheri Pym’s order says Apple’s modifications for Farook’s iPhone should have a “unique identifier” so the technology can’t be used to unlock other iPhones.

This past week, Apple has filed a motion to overturn Magistrate Judge Pym’s order. In its motion, the company offers a number of interesting arguments, three of which stand out:

Contention #1: The “unreasonable burden” argument.

Apple argues that complying with Magistrate Judge Pym’s order is unreasonably burdensome because the company would have to allocate between six and ten of its employees, nearly full-time over a 2 to 4 week period, together with additional quality assurance, testing and documentation effort.  Apple also argues that being forced to comply in this case sets a precedent for similar orders in the future which would become an “enormously intrusive burden.”

Contention #2: Contesting the phone search requirement.

Apple isn’t contesting whether or not the FBI can lawfully seize and search the iPhone.  Instead it is contesting Magistrate Judge Pym’s order compelling Apple to assist the FBI in performing the search.  As such, Apple is an “innocent third party.”  According to Apple, the FBI is relying on a case, United States v. New York Telephone, that went all the way to the Supreme Court in 1977.  Ultimately, New York Telephone was ordered to assist the government by installing a “pen register,” which is a simple device for monitoring the phone numbers placed from a specific phone line.

The government argued that it needed the phone company’s assistance to execute a lawful warrant without tipping off the suspects.  The Supreme Court found that complying with this order was not overly burdensome because the phone company routinely used pen registers in its own internal operations, and because it is a highly regulated public utility with a duty to serve the public.  In essence, Apple is arguing that United States v. New York Telephone does not apply, because (unlike the phone company’s prior use of pen registers) it is being compelled to do something it has never undertaken before, and also because it is not a public utility with a duty to serve.

Contention #3: The requirement to write new software.

Lastly, Apple argues that it will have to write new software in order to comply with Magistrate Judge Pym’s order. However, according to Apple, “Under well-settled law, computer code is treated as speech within the meaning of the First Amendment,” so complying with the order amounts to “compelled speech” that the Constitution prohibits.

What do I think of Apple’s arguments?

Regarding the first of them, based on Apple’s own estimates of the effort involved, I’m guessing that the company wouldn’t incur more than half a million dollars of direct expense to comply with this order. How burdensome is that to a company that just reported annual revenues of nearly $234 billion, and over $53 billion of profit?

Answer:  To Apple, half a million dollars over a four-week period is equivalent to 0.01% of last year’s profitability over an equivalent time span. If the government compensates Apple for its trouble, I don’t see how Apple can win this argument.
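Here is the arithmetic behind that figure, using the profit number cited above and my own half-million-dollar cost estimate:

```python
# Back-of-the-envelope check of the 0.01% figure. The $500,000 compliance
# cost is my own rough estimate; the profit figure is Apple's reported ~$53B.
annual_profit = 53e9              # ~$53 billion annual profit
compliance_cost = 0.5e6           # ~$500,000 estimated direct expense
four_week_profit = annual_profit * (4 / 52)

print(f"Profit over a four-week span: ${four_week_profit / 1e9:.2f} billion")
print(f"Compliance cost as a share of it: {compliance_cost / four_week_profit:.2%}")
# Prints roughly 0.01%, as stated above.
```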

Regarding the other two arguments above, as Orin Kerr states in his Washington Post blog, “I don’t know which side would win … the scope of authority under the [All Writs Act] is very unclear as applied to the Apple case.  This case is like a crazy-hard law school exam hypothetical in which a professor gives students an unanswerable problem just to see how they do.”

My take:  There’s no way a magistrate judge can decide this.  If Apple loses, and appeals, this case will eventually end up at the Supreme Court.

What if the back door is forced open?

The concerns of privacy advocates are understandable. Even though I’m convinced the FBI’s legal position is solid, I also believe there is a very real risk that Apple’s modifications, once made, could leak into the wrong hands. But what happens if they do?

First, unlike warrantless “back door” decryption, this technique would work only for iPhones — and it also requires physical possession of a specifically targeted iPhone.

In other words, government agencies and criminals would have to lawfully seize or unlawfully steal an iPhone before they could use such techniques to break in. This is a far cry from past mass surveillance practices conducted in secret.

Moreover, if an iPhone is ever seized or stolen, it is possible to destroy its contents remotely, as soon as its owner realizes it’s gone, before anyone has the time to break in.

Second, Apple might actually find a market for the technology it is being compelled to create. Employers who issue iPhones to their employees certainly have the right to monitor employees’ use of the equipment.  Indeed, they might already have a “duty of care” to prevent their employees from using employer-issued iPhones for illegal or unethical purposes, which they cannot fulfill because of the iPhone’s security features.

Failure to exercise a duty of care creates operational as well as reputational risks, which employers could mitigate by issuing a new variety of “enterprise class” iPhones that they can readily unlock using these techniques.

____________________

So that’s one person’s considered opinion … but we’d be foolish to expect universal agreement on the Apple/FBI tussle. If you have particular views pro or con Apple’s position, please join the discussion and share them with other readers here.