The unintended “open book” company … opens a can of worms.

Transparency is usually considered a good thing. But when it means your company is an open book, it’s gone too far.

Unfortunately, some companies are making far too much of their information visible to the world without realizing it. Clean laundry, dirty laundry – the works.

One of these instances came to light recently when vpnMentor, a firm that bills itself as an “ethical hacking group,” discovered an alarming lack of e-mail protection and encryption during a web-mapping project involving an international piping, valve and fitting manufacturer.

I’m going to shield the name of the company in the interest of “discretion being the better part of valor,” but the data found to be visible is astonishingly broad and deep. Reportedly it included:

  • Project bids
  • Product prices and price quotations
  • Discussions concerning suppliers, clients, projects and internal matters
  • Names of employees and clients
  • Internal e-mail addresses from various branch offices
  • Employee IDs
  • External/client e-mail addresses, full names and phone numbers
  • Information on company operations
  • Travel arrangements
  • Private conversations
  • Personal e-mails received via company e-mail addresses

Basically, this company’s entire business activities are laid out for the world to see.

The vpnMentor research team was able to view the firm’s “confidential” e-mail communications. Ironically, the team could see the very e-mails it had sent to the firm warning about the security breach (e-mails the company never answered).

“The most absurd part is that we not only know that they received an e-mail from one of the journalists we work with, alerting them to the leak in this report, but we [also] know they trashed it,” as one of the team members noted.

The company in question isn’t some small, inconsequential entity. It operates in 18 countries, including heavyweights like Germany, France, the United States, Canada and Brazil. So the implications are wide-ranging, not just for the company in question but also for everyone with whom it does business.

The inevitable advice from vpnMentor to other companies out there:

“Review your security protocols internally and those of any third-party apps and contractors you use. Make sure that any online platform you integrate into your operations follows the strictest data security guidelines.”

Are you aware of any security breaches at other companies that are as potentially far-reaching as this one? It may be hard to top this particular example, but if you have examples worth sharing, I’m sure we’d all find them interesting to hear.

Hacking is a two-way street.

Usually we hear of attacks being launched against American websites from outside the country. But the opposite is true as well.

In recent days there have been reports that attacks were launched against Iranian computer networks that support that country’s air bases, likely in response to the June 20th shoot-down of a U.S. military drone in the Persian Gulf by Iran’s Islamic Revolutionary Guard Corps.

And now there are reports that hackers working for an alliance of intelligence agencies broke into Yandex, the large Russia-based search engine, in an attempt to find technical information revealing how Yandex authenticates user accounts. The hackers used Regin (also known as QWERTY), a malware toolkit often attributed to the “Five Eyes” intelligence alliance (made up of the USA, Canada, UK, Australia and New Zealand).

Interestingly, Yandex acknowledges the hack, which happened back in 2018. But whereas it claims the attack was detected by the company’s security team before any damage could be done or data lost, outside observers believe that the hackers were able to maintain their access to Yandex for several weeks or longer before being detected.

Reportedly, the information being sought could help spy agencies impersonate Yandex users, thereby gaining access to their private messages. The purpose?  To focus on espionage rather than the theft of intellectual property.

These actions, which are coming to light only now even though the events in question happened last year, underscore how much future “warfare” between nations will be conducted in cyberspace rather than via boots on the ground.

Welcome to Cold War II — 21st century style.

Bait for the phish: The subject lines that reel them in.

To those of us who work in the MarComm field – or in business generally – it may seem odd that so many people get suckered into opening e-mails that contain malware or otherwise wreak havoc on their devices.

But as it turns out, the phishing masters have become quite adept at crafting e-mail subject lines and content that successfully ensnare even the most alert recipients.

In fact, the phishers actually exploit our concerns about security by sending e-communications that play off of those very fears.

To study this effect, cybersecurity firm KnowBe4 conducted an analysis of the most clicked-on phishing subject lines of 2018. Its evaluation was two-pronged – charting actual phishing e-mails received by KnowBe4 clients and reported by their IT departments as suspicious, as well as conducting simulated phishing tests to monitor recipient behavior.

What KnowBe4 found was that the most effective phishing e-mail subject lines generally fall into five topic categories:

  • Passwords
  • Deliveries
  • IT department
  • Company policies
  • Vacation

More specifically, the ten most clicked-on subject lines during 2018, in order of rank, were these:

  • #1. Password Check Required Immediately / Change of Password Required Immediately
  • #2. Your Order with Amazon.com / Your Amazon Order Receipt
  • #3. Announcement: Change in Holiday Schedule
  • #4. Happy Holidays! Have a drink on us
  • #5. Problem with Bank Account
  • #6. De-activation of [recipient’s e-mail address] in Process
  • #7. Wire Department
  • #8. Revised Vacation & Sick Time Policy
  • #9. Last reminder: please respond immediately
  • #10. UPS Label Delivery 1ZBE312TNY00015011

Notice that nearly all of them pertain to topics that seem important, timely and needing the attention of the recipient.
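The pattern in these subject lines can be made concrete with a small sketch. Below is a naive keyword filter over the five topic categories KnowBe4 identified – purely an illustration of the idea, not KnowBe4’s actual methodology (the keyword lists are my own, drawn from the ranked examples above):

```python
# Naive keyword matcher for the five phishing topic categories.
# Keyword lists are illustrative, taken from the ranked subject lines above.
SUSPICIOUS_TOPICS = {
    "passwords": ("password", "de-activation", "account suspended"),
    "deliveries": ("your order", "label delivery", "bank account"),
    "it department": ("it system", "support ticket", "fax message"),
    "company policies": ("policy", "holiday schedule", "satisfaction survey"),
    "vacation": ("vacation", "sick time"),
}

def flag_subject(subject: str) -> list:
    # Return every category whose keywords appear in the subject line.
    s = subject.lower()
    return [topic for topic, keywords in SUSPICIOUS_TOPICS.items()
            if any(k in s for k in keywords)]

assert flag_subject("Password Check Required Immediately") == ["passwords"]
assert "vacation" in flag_subject("Revised Vacation & Sick Time Policy")
assert flag_subject("Lunch on Friday?") == []
```

Real e-mail security gateways use far richer signals (sender reputation, link analysis, machine learning), but even this toy filter shows why the categories work: the trigger words map directly onto topics recipients feel obliged to act on.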

Another way that KnowBe4 analyzed the situation was by pinpointing the e-mail subject lines that were deployed most often in phishing e-mails during 2018.

Here are the Top Ten, ranked in order of their usage:

  • #1. Apple: You recently requested a password reset for your Apple ID
  • #2. Employee Satisfaction Survey
  • #3. Sharepoint: You Have Received 2 New Fax Messages
  • #4. Your Support Ticket is Closing
  • #5. Docusign: You’ve received a Document for Signature
  • #6. ZipRecruiter: ZipRecruiter Account Suspended
  • #7. IT System Support
  • #8. Amazon: Your Order Summary
  • #9. Office 365: Suspicious Activity Report
  • #10. Squarespace: Account billing failure

Commenting on the results uncovered by the evaluation, Perry Carpenter, a strategy officer at KnowBe4, had this to say:

“Clicking [on] an e-mail is as much about human psychology as it is about accomplishing a task. The fact that we saw ‘password’ subject lines clicked … shows us that users are concerned about security.  Likewise, users clicked on messages about company policies and deliveries … showing a general curiosity about issues that matter to them.”

Carpenter went on to note that KnowBe4’s findings should help corporate IT departments understand “how recipients think” before they click on phishing e-mails and the links within them.

How about you? Are there other e-mail subject lines beyond the ones listed above that you’ve encountered in your daily activities and that raise your suspicions? Please share your examples in the comment section below.

Keeping law enforcement on the level.

Let’s go to the videotape … or not.

Video is supposed to be the “great equalizer”: evidence that doesn’t lie — particularly in the case of chronicling law enforcement events.

From New York City and Chicago to Baltimore, Charleston, SC and dozens of places in between, there have been a number of “high profile” police incidents in recent years where mobile video has made it possible to go beyond the sometimes-contradictory “he said/she said” statements coming from officers and citizens.

There’s no question that it’s resulted in some disciplinary or court outcomes that may well have turned out differently in times before.

Numerous police departments have responded in a way best described as “If you can’t beat them, join them”: they’ve begun outfitting their law enforcement personnel with police body cams.

The idea is that having a “third party” digital witness on the scene will protect both the citizen and the officer when assessments need to be made about conflicting accounts of what actually happened.

This tidy solution seems to be running into a problem, however. Some security experts are calling into question the ability of body cameras to provide reliable evidence – and it isn’t because of substandard quality in the video footage being captured.

Recently, specialists at the security firm Nuix examined five major brands of police body cameras … and determined that all of them are vulnerable to hacking.

The body cam suppliers in question are CEESC, Digital Ally, Fire Cam, Patrol Eyes, and VIEVU. The cameras are described by Nuix as “full-feature computers walking around on your chest,” and as such, require the same degree of security mechanisms that any other digital device operating in security-critical areas would need to possess.

But here’s the catch: None of the body cameras evaluated featured digital signatures on the uploaded footage.  This means that there would be no way to confirm whether any of the video evidence might have been tampered with.

In other words, a skilled technician with nefarious intent could download, edit and re-upload content – all while avoiding giving any sort of indication that it had been revised.
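To illustrate the missing safeguard, here is a minimal sketch of footage signing using a keyed hash – a hypothetical scheme of my own for illustration, not any vendor’s actual design (a production system would use public-key signatures so that reviewers never hold the camera’s secret):

```python
import hashlib
import hmac

# Hypothetical per-device secret provisioned at manufacture (illustration only).
DEVICE_KEY = b"secret provisioned into the camera at manufacture"

def sign_footage(footage: bytes) -> str:
    # The camera would compute this tag at capture time, before upload.
    return hmac.new(DEVICE_KEY, footage, hashlib.sha256).hexdigest()

def verify_footage(footage: bytes, tag: str) -> bool:
    # An evidence reviewer recomputes the tag; any edit to the bytes changes it.
    return hmac.compare_digest(sign_footage(footage), tag)

original = b"raw body-cam footage bytes ..."
tag = sign_footage(original)

assert verify_footage(original, tag)                # untouched footage verifies
assert not verify_footage(original + b"edit", tag)  # tampering is detectable
```

With no such tag attached to the uploads, the edited file and the original are indistinguishable – which is exactly the gap Nuix flagged.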

These hackers could be operating on the outside … or they could be rogue officers inside a law enforcement department.

Another flaw uncovered by Nuix is that malware can infect the cameras in the form of malicious computer code being disguised as software updates – updates that the cameras are programmed to accept without any additional verification.
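The update-channel flaw can be sketched the same way. Assuming (hypothetically) that each legitimate firmware image ships with a digest the camera already trusts, the missing verification step would look roughly like this – names and the digest-pinning approach are my invention for illustration; real update systems typically verify an asymmetric signature instead:

```python
import hashlib

# Hypothetical: a trusted SHA-256 digest of the genuine firmware,
# pinned in the camera via some out-of-band channel (illustration only).
TRUSTED_DIGEST = hashlib.sha256(b"official firmware v2.1").hexdigest()

def install_update(image: bytes, trusted_digest: str) -> bool:
    # Refuse any image whose digest doesn't match the trusted value --
    # precisely the check the evaluated cameras were missing.
    if hashlib.sha256(image).hexdigest() != trusted_digest:
        return False  # reject: possibly malware disguised as an update
    # ... flash the verified firmware here ...
    return True

assert install_update(b"official firmware v2.1", TRUSTED_DIGEST)
assert not install_update(b"malicious payload", TRUSTED_DIGEST)
```

A camera that accepts any bytes labeled “update,” with no check of this kind, will happily install whatever an attacker feeds it.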

Even worse, once a hacker successfully breached a camera device, he or she could easily gain access to the wider police network, thereby causing a problem that goes much further than a single camera or a single police officer.

Thankfully, Nuix is a “good guy” rather than a “bad actor” in its experimentation. The company is already working with several of the body cam manufacturers to remedy the problems uncovered by its evaluation, so as to improve the ability of the cameras to repel hacking attempts.

But the more fundamental issue that’s raised is this: What other types of security vulnerabilities are out there that haven’t been detected yet?

It doesn’t exactly reinforce our faith in technology to ensure fairer, more honest and more transparent law enforcement activities. If video footage can’t be considered verified proof that an event happened or didn’t happen, have we just returned to Square One, with people pointing fingers in both directions but with even lower levels of trust?

Hopefully not. But with the polarized camps we have at the moment, with people only too eager to blame the motives of their adversaries, the picture doesn’t look particularly promising …

The downside dangers of IoT: Overblown or underestimated?

In recent weeks, there has been an uptick in articles appearing in the press about the downside risks of the Internet of Things (IoT). The so-called “Weeping Angel” technique, which essentially allows hackers to turn a smart television into a microphone, is one eyebrow-raising example from the CIA files recently released by WikiLeaks. Another is the potential for hacking into the systems of autonomous vehicles, enabling cargo to be stolen or the vehicles themselves to be held for ransom.

Some of it seems like the stuff of science fiction – or at the very least a modern form of cloak-and-dagger activity. Regular readers of the Nones Notes blog know that when we’re in the midst of a “collective angst” about a topic of this nature, I like to solicit the views of my brother, Nelson Nones, who has been in the fields of IT and operations management for decades.

I asked Nelson to share his perspectives on IoT, what he sees are its pitfalls, and whether the current levels of concern are justified. His comments are presented below:

Back in 1998, I was invited to speak about the so-called “millennium bug” (also known as the “Y2K bug”) at a symposium in Kuching, Malaysia. It was a hot topic at that time, because many computer systems then in use hadn’t been designed or built to deal with calendar dates beyond the end of the 20th century.  

The purpose of my presentation was to educate the audience about the nature of the problem, and how to mitigate it. During the question-and-answer session which followed, a member of the audience rose and began to speak rather hysterically of the threat which the millennium bug posed to civilization as we knew it.  

His principal concern was the millions of embedded sensors and controllers in use throughout industry which were not programmable and would therefore need to be replaced. In his view, very few people knew which of those devices were susceptible to the millennium bug, or where they were running.  

As a result, he felt that many flawed devices would go undetected, causing critical infrastructures such as power generation plants, electricity grids and aircraft to fail.  

Needless to say, his dire predictions did not come to pass and humankind sailed into the 21st century with barely a murmur. This isn’t to say that the millennium bug wasn’t a real threat – it certainly was – but rather that providers and users of information technology (IT) mostly did what was necessary to prepare for it.  As Britain’s Guardian newspaper reported in April 2000, “In truth, there have been bug incidents … none of this, however, adds up to global recession, or infrastructure collapse, or accidental nuclear war, as the most heated prophets were anticipating.”  

It is for similar reasons that I take much of today’s hype over security vulnerabilities of IoT with more than a pinch of salt. 

It’s worth noting that, technologically speaking, IoT isn’t really very new at all. As the prophet of doom at my 1998 symposium (correctly) observed, sensors, software, actuators and electronic controllers have been integral components of automated industrial systems for the past thirty years at least.   

What’s new is that these technologies have begun to be accepted and deployed by consumers. I say “begun” because I don’t know anyone who has actually rigged a “smart home” to work in the all-encompassing way breathlessly envisioned by purveyors of home automation technology; but I do know people who use the technology for specific purposes such as home security, thermostat control and recording TV programs.  

Just last week I spoke with someone who is beta testing a self-driving Tesla automobile, but he confessed that he still won’t take his hands off the wheel because he doesn’t really trust the self-driving technology yet.  

What’s also new is that businesses are extending their use of sensors and controllers well beyond the confines of plants, factories and warehouses. For example, trucking companies routinely use global positioning system (GPS) sensors to monitor fleet locations in real-time.  

Aircraft engine makers such as Rolls-Royce and GE rely on management and monitoring systems to transmit information from sensors to ground stations for real time analysis, during flight.  Many problems which are detected in this manner can be instantly corrected during flight, by relaying instructions back to controllers and actuators installed on the engine.  

The common denominator for what’s new is the use of existing Internet infrastructure; hence the “I” in “IoT.”  

In earlier times, sensors, software and electronic controllers could communicate only through local area networks (LANs) which were physically isolated and therefore impermeable to external attacks. But when those devices are connected to the public Internet, in theory anyone can access them — including cyber-criminals and governments engaged in sabotage or espionage, or who want to hold things for ransom, surreptitiously watch live feeds, or deploy botnets for distributed denial of service (DDoS) attacks.  

It is clear, therefore, that the root causes of privacy and security concerns arising from increasing IoT usage are mainly network security lapses, and not the things themselves.

Ensuring the highest possible degree of network security is no easy task. Above and beyond arcane technical details such as encryption, installing network firewalls, and opening and closing of ports, it means deploying multiple layers of defenses according to specific policies and controls, and that requires skills and knowledge which most consumers, and even many businesses, do not possess. 

Still, one doesn’t have to be a network geek to implement basic security mechanisms that far too many people overlook. In search of easy pickings, cyber-criminals usually prefer to exploit the huge number of unlocked doors begging for their attention, rather than wasting time trying to penetrate even slightly stronger defenses.   

For example, many people install wireless networks in their homes but forget to change the default router password and default network name (SSID) – or they pick a password that’s easy to guess. In addition, many people leave their network “open” to anyone having a wireless card by failing to implement a security key such as a WPA, WPA2 or WEP key, or by choosing a weak security key.   

An attacker can discover those lapses in a matter of seconds, or less, giving them full administrative authority and control over the compromised network with little risk of detection. This, in turn, would give the attacker immediate access to, and remote control over, any device on the network which is switched on but does not require authentication; for example, network printers, data storage devices, cameras, TVs and personal computers (PCs) which are not configured to require a user logon. 
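The lapses described above amount to a short checklist, which can be expressed as a toy audit routine. This is purely a sketch of the checklist; the configuration field names are invented for illustration and don’t correspond to any real router’s interface:

```python
# A naive audit of a home router's settings against the common lapses above.
# Field names ("admin_password", "ssid_is_default", "encryption") are invented.
COMMON_DEFAULTS = {"admin", "password", "1234", "default"}

def audit_router(config: dict) -> list:
    findings = []
    if config.get("admin_password", "").lower() in COMMON_DEFAULTS:
        findings.append("default or guessable admin password")
    if config.get("ssid_is_default", False):
        findings.append("default network name (SSID)")
    if config.get("encryption", "none").lower() in ("none", "open"):
        findings.append("open network: no WPA/WPA2/WEP key set")
    elif config.get("encryption", "").upper() == "WEP":
        findings.append("WEP is easily cracked; prefer WPA2")
    return findings

risky = {"admin_password": "admin", "ssid_is_default": True, "encryption": "none"}
assert len(audit_router(risky)) == 3

hardened = {"admin_password": "x7#Lq...", "ssid_is_default": False,
            "encryption": "WPA2"}
assert audit_router(hardened) == []
```

Everything the “risky” configuration gets flagged for is fixable in minutes from the router’s admin page, which is Nelson’s point: the easy pickings are easy to take off the table.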

Plugging those security holes doesn’t require specialist knowledge and shouldn’t take more than an hour for most home networks. Recognizing the security concerns, an increasing number of hardware and software vendors are preconfiguring their products in “full lockdown” mode, which provides basic security by default and requires users to apply specialist knowledge in order to open up their networks as necessary for greater convenience.  

This is precisely what Microsoft did over a decade ago, with great success, in response to widely publicized security vulnerabilities in its Windows® operating system and Internet Explorer browser. 

It’s all too easy to imagine the endgames of hypothetical scenarios in which the bad apples win by wresting control over the IoT from the good guys. But just like the millennium bug nearly two decades ago, it is wiser to heed the wisdom of Max Ehrmann’s Desiderata, published back in 1927:  

“Exercise caution in your business affairs, for the world is full of trickery … but do not distress yourself with dark imaginings.”  

Going forward, I’m confident that a healthy dose of risk intelligence, and not fear, will prove to be the key for successfully managing the downside aspects of IoT.

_________________________

So those are Nelson’s views on the Internet of Things. What about you?  Are you in agreement, or are there aspects about which you may think differently?  Please share your thoughts with other readers.

In the “right to be forgotten” battles, Google’s on the defensive again.

Suddenly, the conflict between Google and the European Union countries regarding the censoring of search results has taken on even wider worldwide proportions.

This past week, the courts have upheld the French government’s data protection office (CNIL) order for Google to broaden the “right to be forgotten” by censoring search results worldwide — not just in Europe.

Google had appealed the initial CNIL ruling.

The CNIL rejected Google’s argument that a worldwide implementation of the European standard of censoring search results would mean that the Internet would be only as free as the “least free place.” (Think Belarus or Syria.)  But in its ruling, the CNIL noted that a country-by-country implementation of the “right to be forgotten” would mean that the right could be circumvented too easily.

While it’s true that more than 95% of Google searches in Europe are performed via European versions of the company’s search engine tool, such as google.fr and google.co.uk, identical searches can be performed easily using google.com, meaning that anyone trying to find “forgotten” information on an individual can do so easily, irrespective of the European standard.

As I blogged back in May, the European Court of Justice’s 2014 ruling meant that Google is required to allow residents of EU countries to delete links to certain harmful or embarrassing information that may appear about themselves in Google search results.

The directive has turned into a real thicket of challenges for Google.

The definition of “harmful or embarrassing” is somewhat amorphous, as the court’s ruling encompassed links to information ranging from excessive and harmful at one end of the scale all the way down to links that are merely outdated, inadequate or irrelevant.

Since the ruling went into effect, Google has had to field requests to remove more than one million links from European search results.

Link removal isn’t accomplished via some sort of “bot” procedure.  Instead, each request is considered on a case-by-case basis by a panel of arbiters made up of attorneys, paralegals and search engineers.

Approximately one-third of the links in question have been removed following panel review, while about half have remained in search results.

The rest – the real toughies – are still under review, their status as yet unresolved.

Obviously, for this activity to spread from covering just European search engines to include potentially the entire world isn’t what Google has in mind at all.  (If Google could have its way, doubtless the whole notion of “the right to be forgotten” would be off the table.)

But the situation is getting pretty hot now. French authorities imposed a 15-day compliance deadline, after which Google could be fined nearly US$350,000.

Of course, the amount of that penalty pales in comparison to the cost Google would incur to comply with the directive.

But that fine is just the opening salvo; there’s no telling what the full degree of financial penalties might turn out to be for continued non-compliance.

I wrote before that it’s difficult to know where the world will eventually end up on the issue of censoring search engine results.  Today, I don’t think we’re anywhere closer to knowing.

China’s controversial product supplier pledge: An “on the ground” view from the Far East.

The business world is abuzz about the latest moves by China to regulate the behavior of U.S. and other foreign companies that choose to do business in that country.  What’s the real skinny?


While much of the reporting and commentary has been decidedly scant on details, we can actually take a look at the official document that contains the various provisos the Chinese government is intending to impose on foreign companies.

Ostensibly, the declaration is aimed at “protecting user security.” Here are the six provisions that make up the declaration:

Information Technology Product Supplier Declaration of Commitment to Protect User Security

Our company agrees to strictly adhere to the two key principles of “not harming national security and not harming consumer rights” and hereby promises to:

#1.  Respect the user’s right to know. To clearly advise users of the scope, purpose, quantity, storage location, etc. of information collected about the user; and to use clear and easy-to-understand language in the user agreement regarding policies and details of protecting user security and privacy.

#2.  Respect the user’s right to control. To permit the user to determine the scope of information that is collected and products and systems that are controlled; to collect user information only after openly obtaining user permission, and to use collected user information to [sic] the authorized purposes only.

#3.  Respect the user’s right to choice. To allow the user to agree, reject or withdraw agreement for collection of user information; to permit the user to choose to install or uninstall non-essential components; to not restrict user selection of other products and services.

#4.  Guarantee product safety and trustworthiness. To use effective measures to ensure the security and trustworthiness of products during the design, development, production, delivery and maintenance processes; to provide timely notice and fixes upon discovery of security vulnerabilities; to not install any hidden functionalities or operations the user is unaware of [sic] within the product.

#5.  Guarantee the security of user information. To employ effective measures to guarantee that any user information that is collected or processed isn’t illegally altered, leaked, or used; to not transfer, store or process any sensitive user information collected within the China market outside China’s borders without express permission of the user or approval from relevant authorities.

#6.  Accept the supervision of all parts of society. To promise to accept supervision from all parts of society, to cooperate with third-party institutions for assessment and verification that products are secure and controllable and that user information is protected etc. to prove actual compliance with these commitments.

Often with China, there are “official” pronouncements … and then there’s what’s “really” going on behind the curtain.

So to find out the real skinny, I decided to ask my brother, Nelson Nones, who has lived and worked in East Asia for years.  Since Nelson’s business activities take him to China and all of the other key Asian economies on a regular basis, I figured that his perspectives would be well-grounded and worth hearing.  Here’s Nelson’s take:

Points 1 through 3 are fundamentally no different from the provisions of personal data protection laws already on the books in the 27 member states of the European Union, plus Australia, Hong Kong, Iceland, India, Japan, South Korea, Liechtenstein, Macau, Malaysia, New Zealand, Norway, Singapore, the Philippines, Taiwan and some U.S. states.  Nor do they materially differ from privacy policy best practices — so I would not see these as particularly onerous or unreasonable.

The key difference is that these points are not enshrined in law in Mainland China, so compliance is voluntary at the moment (as it was in Singapore until 2013) – presumably binding on only those companies that sign this declaration. 

News reports also indicate that China has asked only American technology companies to sign its Declaration of Commitment, implying that domestic Chinese companies aren’t necessarily held to the same standards — although if this is truly the case, it might actually put Chinese companies at a competitive disadvantage by enhancing the appeal of American technology products to discerning Chinese users.

Point 4 doesn’t generally fall within the scope of existing personal data protection laws, but in my view its provisions fall well within the QA and warranty commitments that any legitimate technology company should be prepared to make in today’s competitive environment.

Comparing Point 5 with legislation currently in force within the European Union, Australia, Hong Kong, Iceland, India, Japan, South Korea, Liechtenstein, Macau, Malaysia, New Zealand, Norway, Singapore, the Philippines, Taiwan and some U.S. states, this point lacks some really key definitions, including:  

  • Who exactly is a “data subject” who is entitled to personal (i.e. user) data protection?
  • Who exactly is the “data controller” who owns the user information that is being collected or processed?
  • Who might be the “data processor” who stores and/or processes user information on behalf of the “data controller”?

The legislation and regulations I’ve reviewed in this realm provide very explicit (and varied) definitions of these entities. Unlike China’s Declaration of Commitment, for instance, the E.U. Data Protection Directive allows “data controllers” or “data processors” to transfer user data outside the E.U., as long as the country where the data is transferred protects the rights of “data subjects” as adequately as the E.U. does.

It also defines which “data controllers” and “data processors” must comply with E.U. law, based on whether or not they store or process personal information within the E.U., or operate within the E.U. (regardless of where the data is actually stored or processed).

The requirement to keep sensitive user information within China’s borders, in the absence of permission from users or “relevant authorities” to transfer, store or process it elsewhere, could also be seen as an attempt by the Chinese government to enlist the help of American technology companies in circumventing the U.S. government’s ongoing Internet data-gathering programs.

If this attempt succeeds, it might further enhance the appeal of American technology products to discerning Chinese users. 

Point 6 is garnering the most headlines in the West because of the implied threat that cooperating with “third-party institutions for assessment and verification … to prove actual compliance with these commitments” could mean being forced to reveal source code or encryption algorithms.  

However, in classic Chinese style, none of that is actually spelled out. 

A little history about this: Over the past decade, the Chinese government has put forward various proposals for controlling IT – the “Green Dam Youth Escort” web-filtering software it sought to mandate on all PCs being a notable example – only to abruptly withdraw them in the face of domestic as well as global criticism.

As for implications, China’s Declaration of Commitment shouldn’t have a significant impact on companies that aren’t in the consumer IT market. At best, its first five points could potentially improve the competitiveness of American IT products in the Chinese market.

However, I would advise any tech companies that may be wondering what to do, to sit on their hands for a while. Law in China is always a “work in progress,” so the safest bet is to wait for that “progress” for as long as possible.

So there you have it – the view from someone who is smack in the middle of the business economy in East Asia. If you have your own perspectives to share on the topic, I’m sure other readers would be interested to hear them as well.

Criptext: When a recall actually looks pretty good.


I doubt there are many of us in business who have never inadvertently sent an e-mail to the wrong person … or sent a message before it was fully complete … or forgot to include an attachment.

In such cases, it would be so nice to be able to recall the e-mail — just like we used to do in the days of postal mail simply by retrieving the letter from the outgoing mail bin.

Recent news reports reveal that this capability is actually a reality now.

In the fast lane? Criptext principals just completed a successful round of investment funding.

A start-up firm called Criptext has just raised a half-million dollars in private investment funds to help it perfect and expand a product that allows any sent e-mail to be recalled — even if the recipient has already opened and read it.

According to a report from Business Insider, Criptext is currently available as a plugin and browser extension for the popular Outlook and Gmail e-mail services. It operates inside the e-mail itself, enabling the sender to track when, where and by whom e-mails have been opened, and whether attachments within them have been downloaded.

Criptext also enables the sender to recall e-mails, and even to set a self-destruct timer that recalls them automatically after a specified length of time.
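How could a sent e-mail possibly be “un-sent”? One plausible mechanism – and this is my assumption for illustration, not Criptext’s documented design – is that the message body lives on a server and the delivered e-mail merely references it, so the sender can pull the body at any time:

```python
import time

# Sketch of a server-side message store enabling recall and self-destruct.
# This is a hypothetical design for illustration, not Criptext's actual one.
class MessageStore:
    def __init__(self):
        self._messages = {}

    def send(self, msg_id: str, body: str, ttl: float = None):
        # ttl (seconds) implements the "self-destruct timer".
        expires = time.time() + ttl if ttl is not None else None
        self._messages[msg_id] = (body, expires)

    def recall(self, msg_id: str):
        # Sender deletes the body; every delivered copy now renders empty.
        self._messages.pop(msg_id, None)

    def fetch(self, msg_id: str):
        # Recipient's mail client resolves the reference on each view.
        entry = self._messages.get(msg_id)
        if entry is None:
            return None
        body, expires = entry
        if expires is not None and time.time() > expires:
            del self._messages[msg_id]  # timer elapsed: auto-recall
            return None
        return body

store = MessageStore()
store.send("m1", "Quarterly figures attached")
assert store.fetch("m1") == "Quarterly figures attached"
store.recall("m1")
assert store.fetch("m1") is None  # recipient now sees nothing
```

The trade-off of any remote-content design like this is that reading the message requires contacting the server, which is also exactly what makes the open/location tracking described above possible.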

Viewing a screenshot of how Criptext works (in this case with the Gmail service), things look pretty simple (and pretty cool, too):

Criptext activity panel example

I thought it would be only a matter of time before some developer would figure out a way to “unwind” an email communiqué once the “send” button was hit.  And now we have it.

Of course, time will tell whether Criptext can live up to its billing … or if it turns out to be more of a nightmare of glitches than a dream come true.

It would be great to hear from anyone who may have first-hand experience with Criptext — or other similar email functionalities.  Please share your experiences and perspectives pro or con with other readers here.

Memo to web users with “Do Not Track” enabled: You’re being tracked anyway.

For anyone who thinks he or she is circumventing web tracking via enabling Do Not Track (DNT) functionality … think again.

A recently released study from researchers at KU Leuven-iMinds, a Belgium-based university research group, shows that nearly 150 of the world’s leading websites have ditched tracking cookies in favor of “device fingerprinting” (or “browser fingerprinting,” as it’s sometimes called).

What’s that? It’s the practice of evaluating selected properties of desktop computers, tablets and smartphones to build a unique user identifier. These properties include seemingly innocuous details found on each device, such as:

  • Versions of installed software and plugins
  • Screen size
  • A listing of installed fonts

An analysis by the Electronic Frontier Foundation (EFF) has shown that for the majority of browsers, the combination of these properties creates a unique ID – thereby allowing a user to be tracked without the perpetrator needing to rely on cookies, or having to deal with pesky legal restrictions on the use of cookies.

Overwhelmingly, browser fingerprinting targets popular and commonly used JavaScript or Flash functions, so that nearly every person who accesses the web is a target – without their knowledge or consent.

According to the Leuven-iMinds analysis, the use of JavaScript-based fingerprinting allows websites to track non-Flash mobile phones and devices. So it’s cold comfort to think that the iPad platform will offer protection against this form of “non-cookie” tracking.

Is there anything good about device fingerprinting?  Perhaps … in that it can be used for some justifiable security-related activities such as protection against account hijacking, fraud detection, plus anti-bot and anti-scraping services.

But the accompanying bad news is this:  It can also be used for analytics and marketing purposes via the fingerprinting scripts hidden behind banner advertising.

How to fight back, if one is so inclined? The Leuven-iMinds researchers have developed a free tool that analyzes websites for suspicious scripts. Known as FPDetective, it’s being made available to other researchers to conduct their own investigations.

So you’re able to identify the offenders.  But then what — short of never visiting their websites again?