The (Very) Real Privacy Concerns Raised by Contact Tracing

Last week, I linked to a “guest” blog post about the challenges of contact tracing as part of the way out of the worldwide coronavirus pandemic.  The piece was authored by my brother, Nelson Nones, who heads up a company that has developed software capabilities to support such functions. One reader left a thoughtful response citing the personal privacy concerns that any sort of effective contact tracing regimen inevitably raises.

It’s an important issue that deserves an equally thoughtful response, so I invited Nelson to share his own thoughts on the issue. Here’s what he wrote to me:

The introduction of new contact tracing apps for smartphones has raised quite a few privacy fears around the globe. This is a very hot topic right now which deserves attention. However, to keep my original article about the ability to conduct effective contact tracing on point, I purposely sidestepped the privacy issue — other than mentioning privacy fears briefly in the ‘Technology Limitations’ section of the article. 

Here I’ll expand a bit. Naturally, the coronavirus pandemic has raised a lot of concern about Orwellian “big brother” surveillance and government overreach, but what many people may not realize is that this isn’t about expanding “the target population of surveillance and state control,” as the commenter fears. When it comes to public health, governments – including state governments in the United States – have possessed these powers for a long time.

I first discovered this in my own personal life about 20 years ago. I was at work in Long Beach one day when I received a call from the California Department of Health, informing me that I was confirmed to have a highly contagious gastrointestinal infection and ordering me to submit regular stool samples until my tests came back negative. I was informed that if I did not do so, I could be forcibly quarantined — and fined or even jailed — if I refused to cooperate. 

My first question to myself was, “How the h*ll and why the h*ll did they target me?”  

I had recently returned from a trip to Thailand and started having GI issues, so I went to my doctor and gave a stool sample. The lab analysis confirmed a particular type of infection that was on the Department of Health’s watch list, and I was informed that my doctor was obliged by law to report my case to the Department of Health.

The Health Department, in turn, was obliged by law to contact me and issue the orders given to me – and by law I was obliged to comply with their orders. 

The reason that nations, states and provinces have such powers is to contain and control the spread of infectious diseases. This means that governments have the power to forcibly isolate people who are confirmed to be infected — and they also have the power to forcibly quarantine people who are suspected (but not yet confirmed) to be infected.  

Whether or not, and how, they choose to exercise those powers depends on the nature of the disease, how it’s transmitted, whether or not an epidemic or pandemic has been declared, and whether or not proven cures exist. Moreover, rigorous protocols are in place to protect people against the abuse of those powers.  

But the bottom line is: in most countries, including the United States, if you are unfortunate enough to catch an infectious and communicable disease, you have no constitutional right to prevent the government from identifying you and potentially depriving you of your civil liberties, because of the risk that you could unknowingly infect other people. 

Think of it as a civic duty — just as you have no constitutional right to prevent the government from ordering you to perform jury service. 

Medical science is so advanced these days that most diseases can be contained and controlled without having to inconvenience more than a relatively small number of people, which is why most people have no idea that governments possess such vast powers. But COVID-19 is a once-in-a-century outbreak that’s so novel, so poorly understood, and so communicable that nearly everyone in the world is being deprived of their civil liberties right now out of an abundance of caution.  

Realistically, one could expect these restrictions to remain in place unless and until COVID-19 vaccines and/or therapies are invented, proven and made available to the public – at which time it will (hopefully) be possible to manage COVID-19 like the seasonal flu, which doesn’t require draconian public health measures.    

As for the new smartphone apps, have a look at this recent article that appeared in Britain’s Express newspaper, which will give you a good idea of how “hot” this topic has become.

The key question here is whether or not the database backend (which is the software that my company Geoprise makes) is “centralized” or “decentralized.” A “centralized” backend follows the Singapore model and contains personally-identifiable information (PII) about everyone who registers the app with a public health authority and/or is confirmed to be infected.  

Conversely, some researchers are proposing a “decentralized” backend which serves only as a communications platform, and only ever receives anonymized and nonlinkable data from the smartphones.  

This is the privacy and security model that Apple and Google are following, but there is no way that such a “decentralized” backend could ever serve as a contact tracing database in the traditional sense. That’s because a traditional contact tracing database, by definition, always contains linkable PII. (Incidentally, our Geoprise software could be used in either a “centralized” or “decentralized” manner.) 
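
To make the “centralized” versus “decentralized” distinction concrete, here is a minimal Python sketch of the records each kind of backend might hold. The field names are illustrative assumptions on my part, not any actual app’s schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CentralizedRecord:
    """Singapore-style backend: the server holds linkable PII."""
    national_id: str              # personally identifiable
    phone_number: str             # personally identifiable
    temp_ids_issued: List[bytes]  # server can map every TempID back to a person

@dataclass
class DecentralizedRecord:
    """Apple/Google-style backend: a relay for anonymized, nonlinkable data."""
    exposure_key: bytes  # random value; the server cannot tie it to a person
    upload_day: str      # coarse metadata only, e.g. "2020-04-20"
```

Only the first record type can support contact tracing in the traditional sense, because only it contains linkable PII.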

The key thing to understand about even the most “centralized” of the smartphone apps, such as Singapore’s TraceTogether app, is that they contain numerous privacy and security safeguards. Here’s a short list: 

  • The data captured and retained on individual devices identifies a particular smartphone only by an encrypted “TempID” that changes periodically (Singapore’s recommendation is to change the TempIDs every 15 minutes; see the sketch following this list). This makes it impossible for a smartphone owner or eavesdropper to reconstruct complete histories of encounters held on the devices in a personally-identifiable way.
  • As my original article states, the contact tracing apps don’t use or store geo-location data (i.e. “where your smartphone was”) because GPS measurements are too unreliable for proximity-sensing purposes. Instead they use the device’s Bluetooth radio to sense other Bluetooth-enabled devices that come within very close range (i.e. “devices that were near your smartphone”).
  • The apps are opt-in. You can’t be compelled to download the app or register it with the public health authority (unless you happen to live in Mainland China — but that’s yet another story!).
  • Only people who are confirmed to be infected are ever asked to share their history of encounters with the public health authority.
  • Sharing your history of encounters is voluntary. You can’t be compelled to upload your contact tracing history to the public health authority’s backend server.
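
Purely as an illustration of the first two safeguards above, here is a hedged Python sketch of TempID rotation and Bluetooth-style encounter logging. In the real apps the TempIDs are issued (encrypted) by the health authority; random bytes stand in for them here, and every name in the sketch is my own invention:

```python
import os
import time

ROTATION_SECONDS = 15 * 60  # Singapore's recommended 15-minute TempID lifetime

class TempIdBroadcaster:
    """Broadcasts a short-lived, opaque token instead of a stable identifier."""

    def __init__(self):
        self._rotate()

    def _rotate(self):
        self.temp_id = os.urandom(16)  # no PII, and no link to the previous token
        self.expires_at = time.time() + ROTATION_SECONDS

    def current_temp_id(self):
        if time.time() >= self.expires_at:
            self._rotate()
        return self.temp_id

def record_encounter(log, peer_temp_id, rssi):
    """Log only the peer's TempID and Bluetooth signal strength (a proximity
    proxy), with a timestamp; no GPS coordinates are captured at all."""
    log.append({"peer": peer_temp_id, "rssi": rssi, "seen_at": time.time()})

# Example: logging one nearby device's current token.
my_log = []
record_encounter(my_log, TempIdBroadcaster().current_temp_id(), rssi=-60)
```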

Apple and Google appear to be taking this a step further by: 

  • Allowing smartphone owners to “turn off” proximity sensing whenever they wish (such as when meeting a secret lover during trysts, or for more innocuous occasions).
  • Allowing smartphone owners to delete their history of encounters on demand, and to erase all data when uninstalling the app.
  • “Graceful dismantling” – to quote one researcher: “The system will organically dismantle itself after the end of the epidemic. Infected patients will stop uploading their data to the central server, and people will stop using the app. Data on the server is removed after 14 days.” (A sketch of such a purge follows this list.)
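
A server honoring the quoted 14-day retention rule needs nothing more elaborate than a periodic purge – a sketch, assuming a simple list of upload records:

```python
import time

RETENTION_SECONDS = 14 * 24 * 3600  # the 14-day window quoted above

def purge_expired(uploads):
    """Drop uploads older than 14 days. Once infected users stop uploading,
    the dataset empties itself -- the "graceful dismantling" described above."""
    cutoff = time.time() - RETENTION_SECONDS
    return [u for u in uploads if u["uploaded_at"] >= cutoff]
```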

The bottom-line on privacy and government overreach, I think, is for everyone to step back a safe distance from one another, and take a deep breath …

Despite privacy issues, social media adoption remains as high as ever.

The question is, why?

It seems as though privacy issues in social media have been in the news nearly steadily over the past several years. Considering that, it might come as a surprise that social media adoption remains as high as it’s ever been.

Today, nearly 9 in 10 Americans age 18 or older are regular users of one or more social media sites (interacting at least one or two times per week).

If anything, that’s a higher percentage than before.  So what gives?

Here’s the answer: According to data from a recent survey of nearly 2,200 Americans age 18 or older conducted by Regina Corso Consulting, two-thirds of respondents believe that people on social media should not have any expectations of privacy.  None.

Thus, it seems pretty clear that social media users have factored in privacy concerns and decided that, on balance, the “price of admission” when using social media sites is to leave their privacy at the door. It’s a tradeoff most users recognize, understand and accept.

This isn’t to contend that all users are deliriously happy with their current social media practices. In fact, nearly 40% of the respondents in the Regina Corso survey would like to reduce or stop their usage — but are afraid of what they might miss in the way of news and updates.  The “FOMO factor” is real.

In the end, that’s what Facebook and several other social media giants have long understood:  Once a certain critical mass is achieved, any concerns about social platforms are negated by their sheer universality.

Just as millions of Americans choose to reside in places prone to hurricane and flood damage while fully recognizing the potential danger, millions more choose to be on social media despite the privacy risks everyone has heard about.

What about you — have you changed your social media behaviors in the wake of news developments over the past several years?

Comfortable in our own skin: Consumers embrace biometrics for identification and authentication purposes.

Perhaps it’s the rash of daily reports about data breaches. Or one too many compromises of people’s passwords.

Whatever the cause, it appears that Americans are becoming increasingly interested in the use of biometrics to verify personal identity or to enable payments.

And the credit card industry has taken notice. Biometrics – the descriptive term for body measurements and calculations – is becoming more prevalent as a means to authenticate identity and enable proper access and control of accounts.

A recent survey of ~1,000 American adult consumers, conducted in Fall 2017 by AYTM Marketing Research for VISA, revealed that two-thirds of the respondents are now familiar with biometrics.

What’s more, among respondents who understand what biometrics entails, more than 85% expressed interest in their use for identity authentication.

About half of the respondents think that adopting biometrics would be more secure than using PINs or passwords. Even more significantly, ~70% think that biometrics would make authentication faster and easier – whether done via voice recognition or fingerprint recognition.

Interestingly, biometrics are viewed as “easier” than traditional methods even though fewer than one-third of the survey respondents use unique passwords for each of their accounts.

As a person who does use unique passwords for my various accounts – and who has the usual “challenges” managing so many different ones – I would have thought that people who use only a few passwords might find traditional methods of authentication relatively easy to manage. Despite this, the “new world” of biometrics seems like a good bet for many of these people.

That stated, it’s also true that people are understandably skittish about ID theft in general. To illustrate, about half of the respondents in the AYTM survey expressed concerns about the risk of a security breach of biometric data – in other words, that the very biometric information used to authenticate a person could be nabbed by others and used for nefarious purposes.

And lastly, a goodly percentage of “Doubting Thomases” question whether biometric authentication will work properly – or even if it does work, whether it might require multiple attempts to do so.

In other words, it may end up being “déjà vu all over again” with this topic …

For an executive summary of the AYTM research findings, click or tap here.

Legislators tilt at the digital privacy windmill (again).

In the effort to preserve individual privacy in the digital age, hope springs eternal.

The latest endeavor to protect individuals’ privacy in the digital era is legislation introduced this week in the U.S. Senate that would require law enforcement and government authorities to obtain a warrant before accessing the digital communications of U.S. citizens.

Known as the ECPA Modernization Act of 2017, it is bipartisan legislation introduced by two senators known for being polar opposites on the political spectrum: Sen. Patrick Leahy (D-VT) on the left and Sen. Mike Lee (R-UT) on the right.

At present, only a subpoena is required for the government to gain full access to Americans’ e-mails that are over 180 days old. The new ECPA legislation would mean that access couldn’t be granted without showing probable cause, along with obtaining a judge’s signature.

The ECPA Modernization Act would also require a warrant for accessing geo-location data, while setting new limits on metadata collection. If the government did access cloud content without a warrant, the new legislation would make that data inadmissible in a court of law.

There’s no question that the original ECPA (Electronic Communications Privacy Act) legislation, enacted in 1986, is woefully out of date. After all, it stems from a time before the modern Internet.

It’s almost quaint to realize that the old ECPA legislation defines any e-mail older than 180 days as “abandoned” — and thereby accessible to government officials.  After all, we now live in an age when many residents keep the same e-mail address far longer than their home address.

The fact is, many individuals have come to rely on technology companies to store their e-mails, social media posts, blog posts, text messages, photos and other documents — and to do it for an indefinite period of time. It’s perceived as “safer” than keeping the information on a personal computer that might someday malfunction for any number of reasons.

Several important privacy advocacy groups are hailing the proposed legislation and urging its passage – among them the Center for Democracy & Technology and the Electronic Frontier Foundation.

Sophia Cope, an attorney at EFF, notes that the type of information individuals have entrusted to technology companies isn’t very secure at all. “Many users do not realize that an e-mail stored on a Google or Microsoft service has less protection than a letter sitting in a desk drawer at home,” Cope maintains.

“Users often can’t control how and when their whereabouts are being tracked by technology,” she adds.

The Senate legislation is also supported by the likes of Google, Amazon, Facebook and Twitter.

All of which makes it surprising that this type of legislation – different versions of which have been introduced in the U.S. Senate every year since 2013 – has had such trouble gaining traction.

The reasons for prior-year failure are many and varied – and quite revealing in terms of illuminating how crafting legislation is akin to sausage-making.  Which is to say, not very pretty.  But this year, the odds look more favorable than ever before.

Two questions remain on the table: First, will the legislation pass?  And second, will it really make a difference in terms of protecting the privacy of Americans?

Any readers with particular opinions are encouraged to weigh in.

Is Apple setting itself up for failure in the FBI’s Syed Farook probe?

There’s no question that Apple’s refusal to help the FBI gain access to data in one of the iPhones used during the San Bernardino massacre has been getting scads of coverage in the news and business press.

Apple’s concerns, eloquently stated by CEO Tim Cook, are understandable. From the company’s point of view, it is at risk of giving up a significant selling feature of the iPhone by enabling “back door” access to encrypted data.  Apple’s contention is that many people have purchased the latest models of iPhones for precisely the purpose of protecting their data from prying eyes.

On the other hand, the U.S. government’s duty is to protect the American public from terrorist activities.

Passions are strong — and they’re lining up along some predictable social and political fault lines. After having read more than a dozen news articles in the various news and business media over the past week or so, I decided to check in with my brother, Nelson Nones, for an outsider’s perspective.

As someone who has lived and worked outside the United States for decades, Nelson’s perspectives are invariably interesting because they’re formed from the vantage point of “distance.”

Furthermore, Nelson has held very strongly negative views about the efforts of the NSA and other government entities to monitor computer and cellphone records. I’ve given voice to his perspectives on this topic on the Nones Notes blog several times, such as here and here.

So when I asked Nelson to share his perspectives on the Apple/FBI dispute, I was prepared for him to weigh in on the side of Apple.

Well … not so fast. Shown below is what he wrote to me:

______________________

This may come as a surprise, but I’m siding with the government on this one. Why?  Three reasons:

Point #1: The device in question is (and was) owned by San Bernardino County, a government entity.

The Fourth Amendment of the U.S. Constitution provides, “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated …”

The investigation that the FBI wants to conduct could either be thought of as a seizure of property (the iPhone), or as a search (accessing the iPhone’s contents). Either way, Fourth Amendment protections do not apply in this case.

Within the context of the Fourth Amendment, seizure of property means interfering with an individual’s possessory interests in the property. In this case, the property isn’t (and never was) owned by an individual; it is public property.  Because Farook, an individual, never had a possessory interest in the property, no “unreasonable seizure” can possibly occur.

Also, within the meaning of the Fourth Amendment, an “unreasonable search” occurs when the government violates an individual’s reasonable expectation of privacy. In this case the iPhone was issued to Farook by his employer.  It is well known and understood through legal precedent that employees have no reasonable expectation of privacy when using employer-furnished equipment.  For example, employers can and do routinely monitor the contents of the email accounts they establish for their employees.

Point #2: The person who is the subject of the investigation (Syed Farook) is deceased.

According to Paul J. Stablein, a U.S. criminal defense attorney, “Unlike the concept of privilege (like communications between doctor and patient or lawyer and client), the privacy expectations afforded persons under the Fourth Amendment do not extend past the death of the person who possessed the privacy right.”

So, even if the iPhone belonged to Farook, no reasonable expectation of privacy exists today because Farook is no longer alive.

Point #3: An abundance of probable cause exists to issue a warrant.

In addition to protecting people against unreasonable searches and seizures, the Fourth Amendment also states, “… no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

I strongly believe the U.S. National Security Agency’s mass surveillance was unconstitutional and therefore illegal, due to the impossibility of establishing probable cause for indiscriminately searching the records of any U.S. citizen who might have placed or received a telephone call, sent or received an email message or logged on to their Facebook account.

That’s because these acts do not, in and of themselves, provide any reasonable basis for believing that evidence of a crime exists.

I also strongly believe that U.S. citizens have the right to encrypt their communications. No law exists preventing them from doing so for legal purposes. Conducting indiscriminate searches through warrantless “back door” decryption would be just as unconstitutional and illegal as mass surveillance.

In this case, however, multiple witnesses watched Farook and his wife, Tashfeen Malik, open fire on a holiday party, killing 14 people, and then flee after leaving behind three pipe bombs apparently meant to detonate remotely when first responders arrived on the scene.

Additional witnesses include the 23 police officers involved in the shootout where Farook and Malik eventually were killed.

These witnesses have surely given sworn statements attesting to the perpetrators’ crimes.

It is eminently reasonable to believe that evidence of these crimes exists in the iPhone issued to Farook. So, in this case there can be no doubt that all the requirements for issuing a warrant have been met.

For these three reasons, unlike mass surveillance or the possibility of warrantless “back door” decryption, the law of the land sits squarely and undeniably on the FBI’s side.

Apple’s objections.

Apple’s objections, seconded by Edward Snowden, rest on the notion that it’s “too dangerous” to assist the FBI in this case, because the technology Apple would be forced to develop cannot be kept secret.

“Once [this] information is known, or a way to bypass the code is revealed, [iPhone] encryption can be defeated by anyone with that knowledge,” says Tim Cook, Apple’s CEO. Presumably this could include overreaching government agencies, like the National Security Agency, or criminals and repressive foreign regimes.

It is important to note that Apple has not been ordered to invent a “back door” that decrypts the iPhone’s contents. Instead, the FBI wants to unlock the phone quickly by brute force; that is, by automating the entry of different passcode guesses until they discover the passcode that works.

To do this successfully, it’s necessary to bypass two specific iPhone security features. The first renders brute force automation impractical by progressively increasing the minimum time allowed between entries.  The second automatically destroys all of the iPhone’s contents after the maximum allowable number of consecutive incorrect guesses is reached.
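
To see why those two features matter, here is a toy Python simulation – the delay values are made up, since Apple’s actual schedule isn’t given here – showing that a brute-force search of even the 10,000-passcode space of a 4-digit PIN cannot get past the auto-erase limit:

```python
# Hypothetical parameters, for illustration only.
ESCALATING_DELAYS = [0, 0, 0, 0, 60, 300, 900, 3600]  # seconds after each failure
WIPE_AFTER = 10                                       # erase contents after 10 misses

def brute_force_outcome(passcode_space=10_000):
    """Model exhaustively guessing every 4-digit passcode: return the total
    forced wait if the search could finish, or None if the device wipes first."""
    total_delay = 0.0
    for attempt in range(1, passcode_space + 1):
        if attempt > WIPE_AFTER:
            return None  # contents destroyed; brute force has failed
        total_delay += ESCALATING_DELAYS[min(attempt - 1, len(ESCALATING_DELAYS) - 1)]
    return total_delay

print(brute_force_outcome())  # None: without bypassing both features, guessing loses
```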

Because the iPhone’s operating system must be digitally signed by Apple, only Apple can install the modifications needed to defeat these features.

It’s also important to note that Magistrate Judge Sheri Pym’s order says Apple’s modifications for Farook’s iPhone should have a “unique identifier” so the technology can’t be used to unlock other iPhones.

This past week, Apple has filed a motion to overturn Magistrate Judge Pym’s order. In its motion, the company offers a number of interesting arguments, three of which stand out:

Contention #1: The “unreasonable burden” argument.

Apple argues that complying with Magistrate Judge Pym’s order is unreasonably burdensome because the company would have to allocate between six and ten of its employees, nearly full-time over a 2 to 4 week period, together with additional quality assurance, testing and documentation effort.  Apple also argues that being forced to comply in this case sets a precedent for similar orders in the future which would become an “enormously intrusive burden.”

Contention #2: Contesting the phone search requirement.

Apple isn’t contesting whether or not the FBI can lawfully seize and search the iPhone.  Instead it is contesting Magistrate Judge Pym’s order compelling Apple to assist the FBI in performing the search.  As such, Apple is an “innocent third party.”  According to Apple, the FBI is relying on a case, United States v. New York Telephone, that went all the way to the Supreme Court in 1977.  Ultimately, New York Telephone was ordered to assist the government by installing a “pen register,” which is a simple device for monitoring the phone numbers placed from a specific phone line.

The government argued that it needed the phone company’s assistance to execute a lawful warrant without tipping off the suspects.  The Supreme Court found that complying with this order was not overly burdensome because the phone company routinely used pen registers in its own internal operations, and because it is a highly regulated public utility with a duty to serve the public.  In essence, Apple is arguing that United States v. New York Telephone does not apply, because (unlike the phone company’s prior use of pen registers) it is being compelled to do something it has never undertaken before, and also because it is not a public utility with a duty to serve.

Contention #3: The requirement to write new software.

Lastly, Apple argues that it will have to write new software in order to comply with Magistrate Judge Pym’s order. However, according to Apple, “Under well-settled law, computer code is treated as speech within the meaning of the First Amendment,” so complying with the order amounts to “compelled speech” that the Constitution prohibits.

What do I think of Apple’s arguments?

Regarding the first of them, based on its own estimates of the effort involved, I’m guessing that Apple wouldn’t incur more than half a million dollars of direct expense to comply with this order. How burdensome is that to a company that just reported annual revenues of nearly $234 billion, and over $53 billion of profit?

Answer:  To Apple, half a million dollars over a four-week period is equivalent to 0.01% of last year’s profitability over an equivalent time span. If the government compensates Apple for its trouble, I don’t see how Apple can win this argument.
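
The arithmetic is easy to check (using the essay’s rounded figures):

```python
annual_profit = 53e9                       # ~$53 billion of profit last year
four_week_profit = annual_profit * 4 / 52  # profit over an equivalent 4-week span
compliance_cost = 0.5e6                    # the ~$500,000 direct-expense estimate

print(f"{compliance_cost / four_week_profit:.4%}")  # ~0.0123%, i.e. roughly 0.01%
```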

Regarding the other two arguments above, as Orin Kerr states in his Washington Post blog, “I don’t know which side would win … the scope of authority under the [All Writs Act] is very unclear as applied to the Apple case.  This case is like a crazy-hard law school exam hypothetical in which a professor gives students an unanswerable problem just to see how they do.”

My take:  There’s no way a magistrate judge can decide this.  If Apple loses, and appeals, this case will eventually end up at the Supreme Court.

What if the back door is forced open?

The concerns of privacy advocates are understandable. Even though I’m convinced the FBI’s legal position is solid, I also believe there is a very real risk that Apple’s modifications, once made, could leak into the wrong hands. But what happens if they do?

First, unlike warrantless “back door” decryption, this technique would work only for iPhones — and it also requires physical possession of a specifically targeted iPhone.

In other words, government agencies and criminals would have to lawfully seize or unlawfully steal an iPhone before they could use such techniques to break in. This is a far cry from past mass surveillance practices conducted in secret.

Moreover, if an iPhone is ever seized or stolen, it is possible to destroy its contents remotely, as soon as its owner realizes it’s gone, before anyone has the time to break in.

Second, Apple might actually find a market for the technology it is being compelled to create. Employers who issue iPhones to their employees certainly have the right to monitor employees’ use of the equipment.  Indeed, they might already have a “duty of care” to prevent their employees from using employer-issued iPhones for illegal or unethical purposes, which they cannot fulfill because of the iPhone’s security features.

Failure to exercise a duty of care creates operational as well as reputational risks, which employers could mitigate by issuing a new variety of “enterprise class” iPhones that they can readily unlock using these techniques.

____________________

So that’s one person’s considered opinion … but we’d be foolish to expect universal agreement on the Apple/FBI tussle. If you have particular views pro or con Apple’s position, please join the discussion and share them with other readers here.

In the “right to be forgotten” battles, Google’s on the defensive again.

Suddenly, the conflict between Google and the European Union countries regarding the censoring of search results has taken on even wider worldwide proportions.

This past week, the French government’s data protection office (CNIL) upheld its order requiring Google to broaden the “right to be forgotten” by censoring search results worldwide — not just in Europe.

Google had appealed the initial CNIL ruling.

The CNIL rejected Google’s argument that a worldwide implementation of the European standard of censoring search results would mean that the Internet would be only as free as the “least free place.” (Think Belarus or Syria.)  But in its ruling, the CNIL noted that a country-by-country implementation of the “right to be forgotten” would mean that the right could be circumvented too easily.

While it’s true that more than 95% of Google searches in Europe are performed via European versions of the company’s search engine tool, such as google.fr and google.co.uk, identical searches can be performed easily using google.com, meaning that anyone trying to find “forgotten” information on an individual can do so easily, irrespective of the European standard.

As I blogged back in May, the European Court of Justice’s 2014 ruling means that Google is required to allow residents of EU countries to delete links to certain harmful or embarrassing information that may appear about themselves in Google search results.

The directive has turned into a real thicket of challenges for Google.

The definition of “harmful or embarrassing” is somewhat amorphous, as the court’s ruling encompassed links to information ranging from excessive and harmful at one end of the scale all the way down to links that are merely outdated, inadequate or irrelevant.

Since the ruling went into effect, Google has had to field requests to remove more than one million links from European search results.

Link removal isn’t accomplished via some sort of “bot” procedure.  Instead, each request is considered on a case-by-case basis by a panel of arbiters made up of attorneys, paralegals and search engineers.

Approximately one-third of the links in question have been removed following panel review, while about half have remained in search results.

The rest – the real toughies – are still under review, and their status as yet unresolved.

Obviously, for this activity to spread from covering just European search engines to include potentially the entire world isn’t what Google has in mind at all.  (If Google could have its way, doubtless the whole notion of “the right to be forgotten” would be off the table.)

But the situation is getting pretty hot now. French authorities imposed a 15-day compliance deadline, after which Google could be fined nearly US$350,000.

Of course, the amount of that penalty pales in comparison to the cost Google would incur to comply with the directive.

But that fine is just the opening salvo; there’s no telling what the full degree of financial penalties might turn out to be for continued non-compliance.

I wrote before that it’s difficult to know where the world will eventually end up on the issue of censoring search engine results.  Today, I don’t think we’re anywhere closer to knowing.

The Ideal Privacy Policy?

Recently, I came upon a column written by software entrepreneur and business author Cyndie Shaffstall in which she proposes the following policy for adoption by any company that truly cares about its customers’ privacy:

The Ideal Privacy Policy:

1.  We have on file only your first name, last name, and e-mail address.

2.  We ask for nothing else.

3.  We send you only e-mails you request.

4.  We have nothing to share with others – and wouldn’t if they asked.

5.  We won’t change this policy without prior notice – ever. 

Thank you for being our customer, 

~ Your Grateful Vendor 

Cyndie Shaffstall

As Shaffstall herself acknowledges, she’s never actually seen a policy like this.

But if a company actually adopted such a policy, it would certainly make people more comfortable about purchasing its products — particularly things like phones, wearables and other products that capture and process user-specific data as part of their functionality.

Unfortunately, Shaffstall is correct in asserting that few if any companies would actually adopt such a privacy policy.  Because if they did, they’d be voluntarily walking away from so much of what makes the online world such a lucrative business proposition.

But think for a moment:  Wouldn’t it be absolutely wonderful if we didn’t have to consider such privacy policies “too good to be true”?

Do you know any real-live examples of companies whose privacy policies come close to this ideal?  If so, please share them with readers here.

Going up against Goliath: The latest privacy tussle with Facebook.

Is that Maria Callas? Check with Facebook – they’ll know.

It had to happen eventually:  Facebook’s “faceprints” database activities are now the target of a lawsuit.

The suit, which has been filed in the state of Illinois, alleges that Facebook’s use of its automatic photo-tagging capability to identify people in images is a violation of Illinois’ state law regarding biometric data.

Facebook has been compiling faceprint data since 2010, and while people may choose to opt out of having their images identified in such a way, not surprisingly, that option is buried deep within the Facebook “settings” area where most people won’t notice it.

Moreover, the “default” setting is for Facebook to apply the automatic photo-tagging feature to all users.

Carlo Licata, the lead individual in the class-action complaint filed in Illinois, contends that Facebook’s practices are in direct conflict with the Illinois Biometric Information Privacy Act.  That legislation, enacted in 2008, requires companies to obtain written authorization from persons before collecting any sort of “face geometry” or related biometric data.

The Illinois law goes further by requiring the companies gathering biometric data to notify people about the practice, as well as to publish a schedule for destroying the information.

Here’s how the lawsuit states its contention:

“Facebook doesn’t disclose its wholesale biometrics data collection practices in its privacy policies, nor does it even ask users to acknowledge them.  With millions of users in the dark about the true nature of this technology, Facebook [has] secretly amassed the world’s largest privately held database of consumer biometrics data.”

The response from Facebook has been swift – and predictable.  It contends the lawsuit is without merit.

As much as I’m all for individual privacy, I suspect that Facebook may be correct in this particular case.

Brave New World: Biometrics

For one thing, the Illinois law doesn’t reference social networks at all.  Instead, it focuses on the use of biometrics in business and security screening activities — citing examples like finger-scan technologies.

As Eric Goldman, a professor of law at Santa Clara University notes, the Illinois law is “a niche statute, enacted to solve a particular problem.  Seven years later, it’s being applied to a very different set of circumstances.”

And there’s this, too:  The Illinois law deals with people who don’t know they’re giving data to a company.  In the case of Facebook, it’s commonly understood that user data is submitted with consent.

That may not be a particularly appealing notion … but it’s the price of gaining access to the fabulous networking functionality that Facebook offers its users – all at no expense to them.

And of course, millions of people have made that bargain.

That being said, there’s one nagging doubt that I’m sure more than a few people have about the situation:  The folks at Facebook now aren’t the same people who will be there in the future.  The use of faceprint information collected on people may seem quite benign today, but what about tomorrow?

The fact is, ultimately we don’t have control over what becomes the “tower of power” or who resides there.  And that’s a sobering thought, indeed.

What’s your own perspective?  Please share your thoughts with other readers here.

It had to happen: Google Glass begets facial recognition apps.

Well, that didn’t take long:  Now that Google Glass devices have started to be worn by more than just the first few early adopters, a new facial recognition app has promptly been developed.

It’s an app that enables users to snap a photo of someone, and then search the Internet for more information about the image – essentially, to identify the person by name.

Of course, Google has always maintained that such activities are an inappropriate use of Google Glass devices.  But that hasn’t stopped an outside app developer from doing just that.

The app is called NameTag, and it was introduced in late 2013 by a developer group known as FacialNetwork.  In December, the developer uploaded a video showing how NameTag works.  You can view it on YouTube here — and note that it’s quite controversial with more “dislikes” than “likes” from voters; how often does that happen?

Basically, the Google Glass wearer snaps a picture and the app runs the photo through a database containing ~2.5 million facial images.  If a match is found, it returns that finding along with the name and profile associated with the facial match (e.g., occupation, personal interests and relationship status).
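
Conceptually – and this is only a generic sketch, not FacialNetwork’s actual method – such a lookup reduces each face to a numeric “embedding” and returns the database entry closest to the query photo’s:

```python
import numpy as np

def best_match(query_embedding, database, threshold=0.8):
    """database: list of (name, profile, embedding) tuples. The tuple layout
    and the 0.8 similarity threshold are illustrative assumptions."""
    best, best_score = None, -1.0
    for name, profile, embedding in database:
        # cosine similarity between the query face and a stored faceprint
        score = float(np.dot(query_embedding, embedding) /
                      (np.linalg.norm(query_embedding) * np.linalg.norm(embedding)))
        if score > best_score:
            best, best_score = (name, profile), score
    return best if best_score >= threshold else None  # no confident match
```

Anything above the similarity threshold comes back with a name and profile attached – which is precisely what raises the “creepiness” factor discussed below.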

According to the developers, NameTag can detect an image match even from behind obstructions like glasses and a hat.

What does FacialNetwork see as the benefit of its new NameTag app?  The developer touts the potential for dating and relationship-building.  On its website, the following scenario is presented:

“Jane has lots of different social media profiles and loves to meet new people.  By using NameTag, she can link all her social networks to her face and share her information, and meet new people in an instant.”

Right. 

… And I’m sure “Jane” doesn’t worry one bit about the “creepiness” factor of someone learning her name and her personal information before she’s even aware of it …

As if anticipating all of the hackles, NameTag quickly goes on to explain that people can choose whether to have their name and profile information displayed to others.

As entrepreneur and NameTag’s co-creator Kevin Tussy notes, “It’s not about invading anyone’s privacy; it’s about connecting people that [sic] want to be connected … We will even allow users to have one profile that is seen during business hours, and another that is only seen in social situations.”

If all of this sounds like it’ll be a tad more difficult to carry out in real life than in theory – if you’re not fully comfortable “assuming” an app like this will actually offer the proper degree of safeguarding – I’m sure you’re not alone in your concerns.

In fact, some people aren’t waiting around to see what “might” happen, but are moving now to take preemptive action.  Certain congresspeople are in on the action.  And consumer advocacy group Public Citizen notes that Google Glass users in certain states could potentially face criminal prosecution in addition to civil penalties for recording people without their knowledge or consent.

Up to now, no one has faced such legal action – but that could be because the technology remains so new that few people are actually using Google Glass devices at this point.

The question is this:  Are people giving their “implicit consent” to be recorded just by talking with someone who’s wearing the device?  (The devices are fairly distinctive looking, after all.)

The answer may lie in whether the person even knows what Google Glass devices are.  To assume that anyone speaking with a wearer automatically knows they’re being recorded is assuming too much – at least at this relatively early stage in the product adoption cycle.

Things remain murky at this point because we’re still in an emerging phase in the application of the technology.  But one thing that seems clear is that we’ve seen only the beginning of what promises to be an airing of “dueling rights” in this area of the law.

For those who may already be using Google Glass (I’m not one, by the way), here’s your chance to share your perspectives with other readers here.

“Don’t Tread On Me”: Employees have strong feelings about employers gaining access to their social media profiles.

Recent news reports that some companies are asking their current employees or prospective new hires to grant them access to their private social media profiles haven’t sat well with many people.

It seems that while people don’t mind publishing their personal information for friends and families to see, they’re not keen at all on employers having access as well.

This is borne out in the latest American Pulse survey from BIGinsight, a consumer information portal. In that survey, which queried nearly 3,600 American adults over the age of 18, respondents were asked how they would react to a request by an employer to hand over personal social media passwords, thereby gaining access to their profiles.

Approximately one in five of the survey respondents reported that they are not engaged in social media.  But among the remainder, most would resist the employer’s request … even to the extent of quitting their job:

  • Would quit a job or withdraw an employment application: ~52%
  • Would delete social media pages to prevent them from being seen: ~21%
  • Would go ahead and provide social media passwords to the employer: ~14%
  • Would edit social media profiles first … then provide passwords: ~13%

Based on the opinions of the respondents, it’s not at all surprising that the survey also found that ~85% think that when employers ask for access to social media profiles, it’s an invasion of privacy.  And only about 11% of respondents would be “comfortable” sharing their social media profiles with a potential employer.

There does seem to be a bit of irony at work, because the preponderance of survey respondents (~72%) claim that they have “nothing to hide” on their social sites.

No doubt, Americans’ views about online privacy are borne out of the “live free or die … don’t tread on me” tradition of individualism in this country.  We love our ability to express ourselves … but spare us the KGB/Stasi routine!