3D printing: The newest market disrupter?

3D printing was in the “popular press” a few weeks back when there was a dust-up about plans to publish specifications online for the manufacturing of firearms using 3D technology.

Of course, that news hit the streets because of the hot-button issues involved: access to guns, the inability to trace a firearm manufactured via 3D printing, and concerns that parts made from polymer materials could avoid detection in security screenings.

But 3D printing should be receiving more press generally. It may well be the latest market disrupter because of its promise to fundamentally change the way parts and components are designed, sourced and made.

3D printing technologies – both polymer and metal – have been emerging for some time now, and unlike some other technologies, they have already found a highly receptive commercial audience. That’s because 3D printing technology can be used with great efficiency to manufacture production components for applications that are experiencing some of the hottest market demand – like medical instrumentation as well as aerospace, automation and defense products.

One of the baseline benefits of 3D printing is that it can reduce lead times dramatically on custom-designed components. Importantly, 3D printing requires no upfront tooling investment, saving both time and dollars for purchasers. On top of that, there are no minimum order requirements, which is a great boon for companies that may be testing a new design and need only a few prototypes to start.

Considering all of these benefits, 3D printing offers customers the flexibility to innovate rapidly with virtually no limitations on design geometries or other parameters. Small minimum orders (even for regular production runs) let companies keep inventories lean and rely on just-in-time manufacturing.

The question is, which industry segments will be impacted most by the rise of 3D printing? I can see ripple effects that potentially go well beyond the mortal danger faced by tool and die shops. How many suppliers are going to need to revisit their capabilities in order to support smaller production runs and über-short lead times?

And on the plus side, what sort of growth will we see in companies that invest in 3D printing capabilities?  Most likely we’ll be seeing startup operations that simply weren’t around before.

One thing’s for sure – it will be very interesting to look back on this segment five years hence to take stock of the evolution and how quickly it came about.  Some market forecasts have the sector growing at more than 25% per year to exceed $30 billion in value by that time.
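For a rough sense of what those forecasts imply, here is a back-of-the-envelope sketch. The 25% annual rate and the $30 billion five-year target come from the forecasts cited above; the implied current sector size is derived arithmetic, not a reported figure:

```python
# Reverse a compound-growth forecast to its implied starting value.
# Illustrative arithmetic only: if a sector compounds at 25% per year
# for five years and ends above $30 billion, what base does that imply?

def implied_starting_value(final_value, annual_growth, years):
    """Divide out the compound-growth factor to recover the base value."""
    return final_value / ((1 + annual_growth) ** years)

base = implied_starting_value(30e9, 0.25, 5)
print(f"Implied current sector size: ${base / 1e9:.1f} billion")
```

In other words, the forecasts only hold together if the sector is already a roughly $10 billion market today and sustains that growth rate every year without interruption.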

Or, like rosy predictions in other emerging markets that ultimately came up short, will those forecasts turn out to be too bullish?

Future shock? How badly is cyber-hacking nibbling away at our infrastructure?

I don’t know about you, but I’ve never forgotten the late afternoon of August 14, 2003, when problems with the North American power grid meant that people in eight states stretching from New England to Detroit suddenly found themselves without power.

Fortunately, my company’s Maryland offices were situated about 100 miles beyond the southernmost extent of the blackout. But it was quite alarming to watch the power outage spread across a map of the Northeastern and Great Lakes States (plus Ontario) in real-time, like some sort of creeping blob from a science fiction film.

According to Wikipedia’s article on the topic, the impact of the blackout was substantial — and far-reaching:

“Essential services remained in operation in some … areas. In others, backup generation systems failed. Telephone networks generally remained operational, but the increased demand triggered by the blackout left many circuits overloaded. Water systems in several cities lost pressure, forcing boil-water advisories to be put into effect. Cellular service was interrupted as mobile networks were overloaded with the increase in volume of calls; major cellular providers continued to operate on standby generator power. Television and radio stations remained on the air with the help of backup generators — although some stations were knocked off the air for periods ranging from several hours to the length of the entire blackout.”

Another (happier) thing I remember from this 15-year-old incident is that rather than causing confusion or bedlam, the massive power outage brought out the best in people. This anecdote from the blackout was typical:  Manhattanites opening their homes to workers who couldn’t get to their own residences for the evening.

For most of the 50 million+ Americans and Canadians affected by the blackout, power was restored after about six hours.  But for some, it would take as long as two days for power restoration.

Upon investigation of the incident, it was discovered that high temperatures and humidity across the region had increased energy demand as people turned on air conditioning units and fans. This caused power lines to sag as higher currents heated the lines.  The precipitating cause of the blackout was a software glitch in the alarm system of a FirstEnergy Corporation control room, which left operators unaware of the need to redistribute the power load after overloaded transmission lines sagged into foliage.

In other words, what should have been, at worst, a manageable localized blackout cascaded rapidly into a collapse of the entire electric grid across multiple states and regions.

But at least the incident was born of human error, not nefarious motives.

That 2003 experience should make anyone hearing last week’s testimony on Capitol Hill about the risks faced by the U.S. power grid think long and hard about what could happen in the not-so-distant future.

The bottom-line on the testimony presented in the hearings is that malicious cyberattacks are becoming more sophisticated – and hence more capable of causing damage to American infrastructure. The Federal Energy Regulatory Commission (FERC) is cautioning that hackers are increasingly threatening U.S. utilities ranging from power plants to water processing systems.

Similar warnings come from the Department of Homeland Security, which reports that hackers have been attacking the U.S. electric grid, power plants, transportation facilities and even targets in commercial sectors.

The Energy Department goes even further, reporting in 2017 that the United States electrical power grid is in “imminent danger” from a cyber-attack. To underscore this threat, the Department contends that more than 100,000 cyber-attacks are being mounted every day.

With so many attacks of this kind happening on so many fronts, one can’t help but think that it’s only a matter of time before we face a “catastrophic event” that’s even more consequential than the one that affected the power grid in 2003.

Even more chilling, if the next incident is born of intentional sabotage – as seems quite likely based on recent testimony – it’s pretty doubtful that remedial action could be taken as quickly or as effectively as the response to an accidental event like the one that happened in 2003.

Put yourself in the saboteurs’ shoes: If your aim is to bring U.S. infrastructure to its knees, why plan for a one-off event?  You’d definitely want to build in ways to cause cascading problems – not to mention planting additional “land-mines” to frustrate attempts to bring systems back online.

Contemplating all the implications is more than sobering — it’s actually quite frightening. What are your thoughts on the matter?  Please share them with other readers.

Peeking behind the curtain at Google.

A recently departed Google engineer gives us the lowdown on what’s actually been happening at his former company.

Steve Yegge, a former engineer at Google who has recently joined Grab, a fast-growing ride-hailing and logistics services firm serving customers in Southeast Asia, has just gone public with an explanation of why he decided to part ways with Google after having been with the company for more than a dozen years.

His reasons are a near-indictment of the company for losing the innovative spark that Yegge thinks was the key to Google’s success — and which now appears to be slipping away.

In a recently published blog post, Yegge lays out what he considers to be Google’s fundamental flaws today:

  • Google has gone deep into protection-and-preservation mode. “Gatekeeping and risk aversion at Google are the norm rather than the exception,” Yegge writes.
  • Google has gotten way more political than it should be as an organization. “Politics is a cumbersome process, and it slows you down and leads to execution problems,” Yegge contends.
  • Google is arrogant. “It has taken me years to understand that a company full of humble individuals can still be an arrogant company. Google has the arrogance of the ‘we’, not the ‘I’.”
  • Google has become competitor-focused rather than customer-focused. “Their new internal slogan — ‘Focus on the user and all else will follow’ – unfortunately, it’s just lip service,” Yegge maintains. “A slogan isn’t good enough. It takes real effort to set aside time regularly for every employee to interact with your customers. Instead, [Google] play[s] the dangerous but easier game of using competitor activity as a proxy for what customers really need.”

Yegge goes on to note that nearly all of Google’s portfolio of product launches over the past 10 years can be traced to “me-too copying” of competitor moves. He cites Google Home (Amazon Echo), Google+ (Facebook) and Google Cloud (AWS) as just three examples — none of them particularly impressive introductions on Google’s part.

Yegge sums it all up with this rather damning conclusion:

“In short, Google just isn’t a very inspiring place to work anymore. I love being fired up by my work, but Google had gradually beaten it out of me.”

Steve Yegge

It isn’t that the company’s considerable positive attributes go unacknowledged – Yegge still views Google as “one of the very best places to work on Earth.”

It’s just that for creative engineers like him, the spark is no longer there.

Where have we seen these dynamics at play before? Microsoft and Yahoo come to mind.

These days, Facebook might be trending in that direction too, a bit.

It seems as though issues of “invincibility” have a tendency to creep in and color how companies view their place in the world, which can eventually lead to complacency and a loss of touch with customers. Ineffective company strategies follow.

That’s a progression every company should try mightily to avoid.

What are your thoughts on Steve Yegge’s characterization of Google? Is he on point?  Or way wide of the mark?  Please share your perspectives with other readers here.

New Car Technologies and their Persistently Bullish Prospects

Let’s dip back a few years for a quick history lesson. It’s 2010, and various business prognosticators are confidently predicting that the number of electric cars sold in the United States in 2013 will be ~200,000 vehicles.

And in 2015, electric auto sales will reach ~280,000 units.

What really happened?

In 2013 total electric car sales in the United States were fewer than 97,000.  In 2015, the figure was higher – all of 119,000 units.

It’s worse than even these statistics show. The auto industry’s own expert predictions were off by miles.  In 2011, Nissan CEO Carlos Ghosn predicted that his company would have more than 1.5 million Renault-Nissan electric vehicles on the road.

That forecast turned out to be about 80% too high.

More recent sales forecasts for electric cars are much more realistic. As has become quite clear, many consumers aren’t particularly interested in shifting to a newer technology of automobile if they have to pay substantially more for the technology up-front – despite the promise of lower vehicle operating expenses over time.

Even more telling, a recent McKinsey survey found that of today’s electric car owners, only about half of respondents indicated that they would purchase one again. Ouch.

So, what we now have are projections that electric vehicles won’t reach 4% of the U.S. automotive market until 2023 at the earliest. That’s about a decade later than those first forecasts envisioned reaching that penetration level.

Is it all that surprising, actually? If we’re being honest, we have to acknowledge that the most lucrative markets for electric vehicles are in highly prosperous, population-dense urban areas with strict gasoline emissions standards – the very definition of a “limited market” (think San Francisco or Boston).

Thinking about the next technological advancement in this sector, the industry’s newest “bright shiny thing” is self-driving cars – also referred to as the classier-sounding “autonomous vehicle.” But it appears that this sector may be facing dynamics similar to those that made electric vehicles the “fizzled sizzle” they turned out to be.

Consider the challenges that autonomous vehicles face that threaten to dampen marketplace acceptance of these products – at least in the short- and medium-term:

  • The regulatory and legal ramifications of autonomous vehicles are even more daunting than those affecting electric cars. For starters, try assigning liability for car crashes.
  • Autonomous vehicles require sophisticated mapping and data analytics to operate properly. The United States is a big country. Put those two factors together and it’s easy to see what kind of a challenge it will be to get these vehicles on the road in any major way.
  • How about resistance from powerful groups that have a vested interest in the status quo? Of the ~3.5 million commercial truck drivers in the United States, I wonder how many are in favor of self-driving vehicles?

Not every new technology operates in a similar environment, and for this reason some new-fangled products don’t have such a long gestation and ramp-up period.  Take the smartphone, which took all of ten years to go from “what’s that?” to “who doesn’t own one?”

But there’s quite a difference, actually.  Smartphones were a sea change from what people typically considered a mobile phone, with oodles of added utility and capabilities that were never even part of the equation before.

By contrast, consumers know what it’s like to have a car, and even self-driving cars won’t be doing anything particularly “new.” Just doing it differently.

At this juncture, McKinsey is predicting that autonomous cars will reach ~15% of U.S. automobile sales by the year 2030.

Maybe that’s correct … maybe not. But my guess is, if McKinsey’s prediction turns out to be off, it’ll be because it was too robust.

IoT’s Ticking Time Bomb

The Internet of Things is making major headway in consumer product categories — but it turns out it’s bringing its share of headaches along for the ride.

It shouldn’t be particularly surprising that security could be an issue with IoT, of course.  But recent evaluations suggest the problem is more significant than first thought.

That’s the conclusion of research conducted by management consulting firm Altman Vilandrie & Company. Its findings are based on a survey of ~400 IT decision-makers working at companies that have purchased some form of IoT security solutions.

According to the Vilandrie survey, approximately half of the respondents reported that they have experienced at least one IoT-related security intrusion or breach within the past two years.  The companies included in the research range across some 19 industry segments, so the issue of security doesn’t appear to be confined to one or two sectors alone.

What’s more, smaller firms experienced higher relative “pain” caused by a security breach. In the Vilandrie survey, companies with fewer than $5 million in annual revenues reported an average loss of $255,000 associated with IoT security breaches.

While that’s a substantially lower dollar amount than the average loss reported by large companies, the loss for a small business as a percentage of total revenues is much greater.
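A quick back-of-the-envelope calculation shows why. The $5 million revenue ceiling and the $255,000 average loss come from the survey cited above; the large-company figures below are hypothetical, included only for comparison:

```python
# Express a breach loss as a share of annual revenue, to illustrate why
# the same dollar loss hurts a small firm disproportionately.

def loss_share(loss, revenue):
    """Return a breach loss as a percentage of annual revenue."""
    return 100 * loss / revenue

# Small firm at the survey's revenue ceiling ($255K loss on $5M revenue).
small = loss_share(255_000, 5_000_000)

# Hypothetical large firm: even a loss four times bigger barely registers.
large = loss_share(1_000_000, 500_000_000)

print(f"Small firm: at least {small:.1f}% of revenue")
print(f"Large firm (hypothetical): {large:.1f}% of revenue")
```

Since $5 million is the ceiling for that revenue bracket, the 5%-of-revenue figure is a floor: firms with smaller revenues felt proportionally even more pain.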

More findings from the Altman Vilandrie research study can be accessed here.

Where Robots Are Getting Ready to Run the Show

The Brookings Institution has just published a fascinating map that tells us a good deal about what is happening with American manufacturing today.

Headlined “Where the Robots Are,” the map graphically illustrates that as of 2015, nearly one-third of America’s 233,000+ industrial robots are being put to use in just three states:

  • Michigan: ~12% of all industrial robots working in the United States
  • Ohio: ~9%
  • Indiana: ~8%

It isn’t surprising that these three states correlate with the historic heart of the automotive industry in America.

Not coincidentally, those same states also registered a massive lurch toward the political party of the candidate in the 2016 U.S. presidential election who spoke most vociferously about the loss of American manufacturing jobs.

The Brookings map, which plots industrial robot density per 1,000 workers, shows that robots are being used throughout the country, but that the Great Lakes Region is home to the highest density of them.

Toledo, OH has the honor of being the “Top 100” metro area with the highest density of industrial robots: nine per 1,000 workers.  Toledo reached the top of the list as its robot count jumped from around 700 units in 2010 to nearly 2,400 in 2015, an average increase of nearly 30% each year.
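That “nearly 30% each year” figure checks out with a quick compound-annual-growth-rate calculation, using the round numbers reported above (roughly 700 robots in 2010 and 2,400 in 2015):

```python
# Verify Toledo's average annual robot growth from the Brookings figures
# cited above: ~700 robots in 2010 growing to ~2,400 by 2015.

def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

rate = cagr(700, 2400, 5)
print(f"Average annual growth: {rate:.1%}")
```

The result lands just under 28% per year, consistent with the “nearly 30%” characterization.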

For the record, here are the Top 10 metropolitan markets among the 100 largest, ranked in terms of their industrial robot exposure.  They’re mid-continent markets all:

  • Toledo, OH: 9.0 industrial robots per 1,000 workers
  • Detroit, MI: 8.5
  • Grand Rapids, MI: 6.3
  • Louisville, KY: 5.1
  • Nashville, TN: 4.8
  • Youngstown-Warren, OH: 4.5
  • Jackson, MS: 4.3
  • Greenville, SC: 4.2
  • Ogden, UT: 4.2
  • Knoxville, TN: 3.7

In terms of where industrial robots are very low to practically non-existent within the largest American metropolitan markets, look to the coasts:

  • Ft. Myers, FL: 0.2 industrial robots per 1,000 workers
  • Honolulu, HI: 0.2
  • Las Vegas, NV: 0.2
  • Washington, DC: 0.3
  • Jacksonville, FL: 0.4
  • Miami, FL: 0.4
  • Richmond, VA: 0.4
  • New Orleans, LA: 0.5
  • New York, NY: 0.5
  • Orlando, FL: 0.5

When one considers that the automotive industry is the biggest user of industrial robots – the International Federation of Robotics estimates that the industry accounts for nearly 40% of all industrial robots in use worldwide – it’s obvious how the Midwest could end up being the epicenter of robotic manufacturing activity in the United States.

It should come as no surprise, either, that investments in robots are continuing to grow. The Boston Consulting Group has concluded that a robot typically costs only about one-third as much to “employ” as a human worker who is doing the same job tasks.

In another decade or so, the cost disparity will likely be much greater.

On the other hand, two MIT economists maintain that the impact of industrial robots on the volume of available jobs isn’t nearly as dire as many people might think. According to Daron Acemoglu and Pascual Restrepo:

“Indicators of automation (non-robot IT investment) are positively correlated or neutral with regard to employment. So even if robots displace some jobs in a given commuting zone, other automation (which presumably dwarfs robot automation in the scale of investment) creates many more jobs.”

What do you think? Are Messrs. Acemoglu and Restrepo on point here – or are they off by miles?  Please share your thoughts with other readers.

Suddenly, smartphones are looking like a mature market.

The smartphone diffusion curve. (Source: Business Insider)

In the consumer technology world, the pace of product innovation and maturation seems to be getting shorter and shorter.

When the television was introduced, it took decades for it to penetrate more than 90% of U.S. households. Later, when color TVs came on the market, it was years before the majority of households made the switch from black-and-white to color screens.

The dynamics of the mobile phone market illustrate how much the pace of adoption has changed.

Only a few years ago, well under half of all mobile phones in the market were smartphones. But smartphones rapidly eclipsed those older “feature phones” – so that now only a very small percentage of cellphones in use today are of the feature phone variety.

Now, in just as little time we’re seeing smartphones go from boom to … well, not quite bust.  In fewer than four years, the growth in smartphone sales has slowed from ~30% per year (in 2014) to just 4%.

That’s the definition of a “mature” market.  But it also demonstrates just how successful the smartphone has been in penetrating all corners of the market.

Consider this:  Market forecasting firm Ovum figures that by 2021, the smartphone will have claimed its position as the most popular consumer device of all time, when more than 5 billion of them are expected to be in use.

It’s part of a larger picture of connected smart devices in general, for which the total number in use is expected to double between now and 2021 – from an estimated 8 billion devices in 2016 to around 15 billion by then.

According to an evaluation conducted by research firm GfK, today only around 10% of consumers own either an Amazon Echo or Google Home device, but digital voice assistants are on the rise big-time. These interactive audio speakers offer a more “natural” way than smartphones or tablets to control smart home devices, with thousands of “skills” already developed that allow them to interact with a large variety of apps.

There’s no question that home devices are the “next big thing,” but with their ubiquity, smartphones will continue to be the hub of the smart home for the foreseeable future.  Let’s check back in another three or four years and see how the dynamics look then.