United Airlines’ four miscalculations — and the $200 million impact.

Just how many mistakes did United Airlines make in “re-accommodating” four of its booked passengers recently? Oh, let us count the ways …

Miscalculation #1

Despite some reports to the contrary, technically United Airlines wasn’t in an overbooking or oversold situation. The flight boarded full; then some crew assigned to a future flight from the destination city turned up suddenly.

The airline’s first mistake was its managers’ and staff’s failure to correctly anticipate the crew members who needed to travel on this flight.

Miscalculation #2

The airline’s second mistake was to implement an operating procedure that gives crew higher priority than paying customers.

Because all customers had already taken their seats on the airplane, and no more seats were available, this meant that the airline’s staff had to ask — or coerce — some seated customers to leave.

Miscalculation #3

United’s third mistake was management’s failure to empower the airline’s gate agents to offer higher compensation in order to entice customers to leave voluntarily.

This miscalculation guaranteed that the victims would be “paying customers” who had done nothing wrong, rather than the airline’s managers and staff who had made all the mistakes.

Miscalculation #4

When choosing its victims, United Airlines and regional partners like United Express apparently go after the lowest-paying customers first, everything else being equal. That, too, is a miscalculation.

Let’s explore this a bit further. According to the latest published data I could find, around 40% of passengers on the typical flight are traveling on heavily discounted tickets.  Most of those tickets are non-refundable and prepaid.  They can be changed ahead of time, but only if the customer pays change fees, which can be very costly.

This means:

  • If the passenger doesn’t change his or her booking early enough, and doesn’t show up for the flight, the airline keeps all the revenue – and has the possibility of re-selling the seat to a different passenger.
  • Otherwise, the airline keeps the original revenue, plus the change fee. For United, this amounted to $800 million of additional revenue in the year 2015 alone.

Phony Risk?

Airlines justify their overbooking and overselling tactics as a way of reducing the risk of revenue lost from no-shows. Published data indicates that approximately 15% of confirmed reservations are no-shows. Assuming that the airline bears the full risk of revenue lost from no-shows, overbooking mitigates that risk by allowing other passengers to claim and pay for seats that would otherwise fly empty.

Airlines typically overbook about 12% of their seats, counting on no-shows and later cancellations to bring the passenger load back in line with the number of seats available. (Failure to correctly anticipate the number of no-shows would also qualify as a mistake by the airline’s management or staff.)
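To see why airlines tolerate that level of overbooking, here is a rough back-of-the-envelope sketch in Python. The 180-seat aircraft is hypothetical, and it simply assumes the ~12% overbooking and ~15% no-show figures cited above, treating each passenger’s decision to show up as independent:

```python
from math import comb

seats = 180                       # hypothetical aircraft size
bookings = int(seats * 1.12)      # ~12% overbooking -> 201 bookings
p_show = 0.85                     # ~15% no-show rate

def prob_oversale(bookings, seats, p_show):
    """Probability that more ticketed passengers show up than there are seats,
    modeling each passenger as an independent show/no-show coin flip."""
    return sum(
        comb(bookings, k) * p_show**k * (1 - p_show)**(bookings - k)
        for k in range(seats + 1, bookings + 1)
    )

print(f"{bookings} bookings for {seats} seats; "
      f"chance of an oversale: {prob_oversale(bookings, seats, p_show):.1%}")
```

Under these assumptions the model puts the chance of an oversale at only a couple of percent per departure, which helps explain why modest compensation offers are usually enough to clear it.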

All that being said, however, in most discounting situations there is no “risk” to reduce, because most customers who buy discounted tickets already bear all the financial risks from a failure to show up for flights. If passengers are unable to fly when originally planned, they must either pay steep change fees … or they forfeit the entire fare paid.

The Real Risk

In fact, the airlines’ biggest risk of revenue loss from no-shows arises from passengers paying first class, business class or full-fare economy.

These types of tickets account for approximately 25% of passengers and 50% of ticket revenues.  Yet those passengers typically incur few if any cancellation fees or penalties if or when they don’t show up.

When enterprises like United try to have it both ways – by putting themselves ahead of their customers and gaming the system to maximize revenues without incurring any apparent financial risks – is it any wonder the end result is ghastly spectacles like passengers being forcibly dragged off airplanes?

Scenes like that are the predictable consequences of greed overtaking sound business management and ethics. You don’t have to think too hard to come up with other examples of precisely the same thing — Wells Fargo’s “faux” bank account setups being another recent corporate black-eye.

I’m sure if United Airlines had it to do all over again, it would have cheerfully offered up to $10,000 per ticketed passenger to get its four flight crew members off to Louisville, rather than suffer a loss of more than $200 million in the market value of its stock over the past week.

But instead, United Airlines decided on a penny-wise, pound-foolish approach.

How wonderful that turned out to be for everyone.

Limp Fries: Restaurants Join Brick-and-Mortar Retailing in Facing Economic Challenges

Press reports about the state of the retail industry have focused quite naturally on the travails of brick-and-mortar stores, chronicling high-profile bankruptcies (most recently hhgregg) along with store closings by such big names as Macy’s, Sears, and even Target.

Less covered, but just as challenging, is the restaurant environment, where a number of high-profile chains have suffered over the past year amid a general malaise across the industry.

This has now been quantified in a benchmarking report issued last month by accounting and business consulting firm BDO USA covering the operating results of publicly traded restaurants in 2016.

The BDO report found that same-store sales were flat overall, with many restaurants facing lower traffic counts.

The “fast casual” segment, which had seen robust growth in 2015, posted the largest same-store sales decline of any restaurant segment in 2016 (nearly 1.5%), along with the highest cost of sales (nearly 31%).

Chipotle’s poor showing, thanks to persistent food contamination problems, didn’t help the category at all, but those results were counterbalanced by several other establishments which beat the category averages significantly (Shake Shack, Wingstop and Panera Bread).

The “casual dining” segment didn’t perform much better, with same-store sales declining nearly 1% over the year. By contrast, “upscale casual” restaurants reported an ever-so-slight same-store sales gain of 0.2%.

Better sales increases were charted at quick-serve (fast food) restaurants, with same-store sales up nearly 1%. Even better results came from pizza restaurants, where same-store sales were up nearly 5%.  The category was led by Domino’s, with over 10% same-store sales growth thanks in part to its Tweet-To-Order rollout and other digital innovations.  Well over half of Domino’s orders now come through digital channels.

What are some of the broader currents contributing to the mediocre performance of restaurant chains? Unlike retailing, where it’s easy to see how purchasing is migrating from physical stores to online channels, people can hardly eat digitally.  And with “time” at an all-time premium in an economy that’s no longer in recession, preparing meals at home hasn’t suddenly become easier.

The BDO analysis contends that the convenience economy and the continued attractive savings offered by dining at home combine to slow restaurant foot traffic: “To remain afloat, restaurants will need to drive sales by leveraging the very trends that are shaping these evolving consumer behaviors.”

Tactics cited by BDO as lucrative steps for restaurants include expanding delivery options and embracing digital channels in a major way.  BDO reports that in 2016, digital food ordering accounted for nearly 2 billion restaurant transactions, a figure expected to keep rising significantly.

Speaking personally, I think there is a glut of dining options across the various segments of the restaurant trade. One local example:  on the outskirts of a county seat just 20 miles from where I live (population ~20,000), no fewer than six national chain restaurants have opened in the past 24 months, joining several others strung along the main highway like exhibit booths at a trade show.

There’s no way market demand in that town can absorb all of the new restaurant capacity.  Something’s gotta give.

What are your thoughts about which chains are doing things right in the highly competitive restaurant environment today – and which ones are stumbling?

Downtown turnaround? In these places, yes.

Downtown Minneapolis (Photo: Dan Anderson)

For decades, “going downtown” meant something special – probably from its very first use as a term to describe the lower tip of Manhattan, which was then New York City’s heart of business, commercial and residential life.

Downtown was literally “where it was at” – jobs, shopping, cultural attractions and all.

But then, beginning in post-World War II America, many downtowns lost their luster as people were drawn to the suburbs by cheap land and easy travel to and fro.

In some places, downtowns and the areas immediately adjoining them became places of high crime, industrial decay, shopworn appearances and various socio-economic pathologies.

Things hit rock bottom in the late 1970s, as personified by the Times Square area of New York City. But since then, many downtowns have slowly come back from those near-death experiences, spurred by new types of residents with new and different priorities.

Dan Cort, author of the book Downtown Turnaround, describes it this way:  “People – young ones especially – love historical buildings that reintroduce them to the past.  They want to live where they can walk out of the house, work out, go to a café, and still walk to work.”

There are a number of cities where the downtown areas have come back in a big way over the past several decades. Everyone knows which ones they are:  New York, Seattle, San Francisco, Minneapolis …

But what about the latest success stories? Which downtowns are those?

Recently, Realtor.com analyzed the 200 largest cities in the United States to determine which downtowns have made the biggest turnaround since 2012. To identify the biggest successes, it studied the following factors (a simplified sketch of how factors like these might be rolled into a single ranking appears after the list):

  • Downtown residential population growth
  • Growth in the number of restaurants, bars, grocery stores and food trucks per capita
  • Growth in the number of independent realtors per capita
  • Growth in the number of jobs per capita
  • Home price appreciation since 2012 (limited to cities where the 2012 median home price was $400,000 or lower)
  • Price premium of purchasing a home in the downtown district compared with the median home price of the whole city
  • Residential and commercial vacancy rates

Based on these criteria, here is Realtor.com’s list of the Top 10 cities where downtown is making a comeback:

  • #1 Pittsburgh, PA
  • #2 Indianapolis, IN
  • #3 Oakland, CA
  • #4 Detroit, MI
  • #5 Columbus, OH
  • #6 Austin, TX
  • #7 Los Angeles, CA
  • #8 Dallas, TX
  • #9 Chicago, IL
  • #10 Providence, RI

Some of these may surprise you. But it’s interesting to see some of the stats that are behind the rankings.  For instance, look at what’s happened to median home prices in some of these downtown districts since 2012:

  • Detroit: +150%
  • Oakland: +111%
  • Los Angeles: +63%
  • Pittsburgh: +31%

And residential population growth has been particularly strong here:

  • Pittsburgh: +32%
  • Austin: +25%
  • Dallas: +25%
  • Chicago: +21%

In the coming years, it will be interesting to see if the downtown revitalization trend continues – and spreads to more large cities.

And what about America’s medium-sized cities, where downtown zones continue to struggle? If you’ve been to Midwestern cities like Kokomo, IN, Flint, MI or Lima, OH, those downtowns look particularly bleak.  Can the sort of revitalization we see in the major urban centers be replicated there?

I have my doubts … but what is your opinion? Feel free to share your thoughts below.

PR Practices: WOM Still Wins in the End

These days, there are more ways than ever to publicize a product or service so as to increase its popularity and its sales.

And yet … the type of thing most likely to convince someone to try a new product – or to change a brand – is a reference or endorsement from someone they know and trust.

Omnichannel marketing promotions firm YA conducted research in 2016 with ~1,000 American adults (age 18+) that quantifies what many have long suspected: ~85% of respondents reported that they are more likely to purchase a product or service if it is recommended by someone they know.

A similarly high percentage — 76% — reported that an endorsement from such a person would cause them to choose one brand over another.

Most important of all, ~38% of respondents reported that when researching products or services, a referral from a friend is the source of information they trust the most.  No other source comes close.

This means that online reviews, news reports and advertising – all of which have some impact – aren’t nearly as important as the opinions of friends, colleagues or family members.

… Even if those friends aren’t experts in the topic!

It boils down to this:  The level of trust between people has a greater bearing on purchase decisions because consumers value the opinion of people they know.

Likewise, the survey respondents exhibited a willingness to make referrals of products and services, with more than 90% reporting that they give referrals when they like a product. But a far lower percentage — ~22% — have actually participated in formal refer-a-friend programs.

This seems like it could be an opportunity for brands to create dedicated referral programs, wherein those who participate are rewarded for their involvement.

The key here is harnessing the referrers as “troops” in the campaign, so as to attract a larger share of referral business where the opportunities are strongest — and tracking the results carefully, of course.

In copywriting, it’s the KISS approach on steroids today.

… and it means “Keep It Short, Stupid” as much as it does “Keep It Simple, Stupid.”

Regardless of the era, most successful copywriters and ad specialists have always known that short copy is generally better-read than long.

And now, as smaller screens essentially take over the digital world, the days of copious copy flowing across a generous preview pane area are gone.

More fundamentally, people don’t have the screen size – let alone the patience – to wade through long copy. These days, the “sweet spot” in copy runs between 50 and 150 words.

Speaking of which … when it comes to e-mail subject lines, the ideal length keeps getting shorter and shorter. Research performed by SendGrid suggests that it’s now down to an average length of about seven words for the subject line.

And the subject lines that get the best engagement levels are a mere three or four words.

So it’s KISS on steroids: keeping it short as well as simple.

Note: The article copy above comes in at under 150 words …!

Where’s the Best Place to Live without a Car?

For Americans who live in the suburbs, exurbs or rural areas, being able to live without a car seems like a pipedream. But elsewhere, there are situations where it may actually make some sense.

They may be vastly different in nearly every other way, but small towns and large cities share one trait: they are the places where living without a car is most feasible.

Of course, within both groups there can be big differences in relative car-free attractiveness, depending on a variety of factors.

For instance, the small county seat where I live can be walked from one side of town to the other in under 15 minutes. This means that, even if there are places where a sidewalk would be nice to have, it’s theoretically possible to take care of grocery shopping and trips to the pharmacy or the cleaners or the hardware store on foot.

Visiting restaurants, schools, the post office and other government offices is quite easy as well.

But even slightly bigger towns pose challenges because of distances that are much greater – and there’s usually little in the way of public transport to serve inhabitants who don’t possess cars.

At the other end of the scale, large cities are typically places where it’s possible to move around without the benefit of a car – but some urban areas are more “hospitable” than others based on factors ranging from the strength of the public transit system and neighborhood safety to the climate.

Recently, real estate brokerage firm Redfin took a look at large U.S. cities (those with more than 300,000 residents) to come up with its list of the 10 cities it judged most amenable to living without a car. Redfin compiled rankings to determine which cities have the best composite “walk scores,” “transit scores” and “bike scores.”

Here’s how the Redfin Top 10 list shakes out. Topping the list is San Francisco:

  • #1: San Francisco
  • #2: New York
  • #3: Boston
  • #4: Washington, DC
  • #5: Philadelphia
  • #6: Chicago
  • #7: Minneapolis
  • #8: Miami
  • #9: Seattle
  • #10: Oakland, CA

Even within the Top 10 there are differences, of course, with each city doing relatively better or worse across the three categories scored.

Redfin has also analyzed trends in urban residential construction, finding that developers are beginning to include fewer parking spaces in new residential properties – which makes opting out of automobile ownership a more important consideration than in the past.

What about your own experience? Do you know of a particular city or town that’s particularly good in accommodating residents who don’t own cars?  Or just the opposite?  Please share your observations with other readers.

More Trouble in the Twittersphere

With each passing day, we see more evidence that Twitter has become the social media platform that’s in the biggest trouble today.

The news is replete with articles about how some people are signing off from Twitter, having “had it” with the politicization of the platform. (To be fair, that’s a knock on Facebook as well these days.)

Then there are reports of how Twitter has stumbled in its efforts to monetize the platform, with advertising strategies that have failed to generate the kind of growth to match the company’s optimistic forecasts. That bit of bad news has hurt Twitter’s share price pretty significantly.

And now, courtesy of a new analysis published by researchers at Indiana University and the University of Southern California, comes word that Twitter is delivering misleading analytics on audience “true engagement” with tweets.  The information is contained in a peer-reviewed article titled Online Human-Bot Interactions: Detection, Estimation and Characterization.

According to findings as determined by Indiana University’s Center for Complex Networks & Systems Research (CNetS) and the Information Sciences Institute at the University of Southern California, approximately 15% of Twitter accounts are “bots” rather than people.

That sort of news can’t be good for a platform that is struggling to elevate its user base in the face of growing competition.

But it’s even more troubling for marketers who rely on Twitter’s engagement data to determine the effectiveness of their campaigns. How can they evaluate social media marketing performance if the engagement data is artificially inflated?

Fifteen percent of all accounts may seem like a rather small proportion, but in the case of Twitter that represents nearly 50 million accounts.

To add insult to injury, the report notes that even the 15% figure is likely too low, because more sophisticated and complex bots could have appeared as “humans” in the researchers’ analytical model even though they aren’t.

There’s actually an upside to social media bots – examples being automatic alerts of natural disasters or customer service responses. But there’s also growing evidence of nefarious applications.

Here’s one that’s unsurprising even if irritating: bots that emulate human behavior to manufacture “faux” grassroots political support.  But what about the delivery of dangerous or inciting propaganda thanks to bot “armies”?  That’s more alarming.

The latest Twitter-bot news is more confirmation of the deep challenges faced by this particular social media platform.  What’s next, I wonder?

The downside dangers of IoT: Overblown or underestimated?

In recent weeks, there has been an uptick in press articles about the downside risks of the Internet of Things (IoT). The so-called “Weeping Angel” technique, which essentially allows hackers to turn a smart television into a microphone, is one eyebrow-raising example from the CIA files recently released by WikiLeaks. Another is the potential for hacking into the systems of autonomous vehicles, enabling cargo to be stolen or the vehicles themselves to be held for ransom.

Some of it seems like the stuff of science fiction – or at the very least a modern form of cloak-and-dagger activity. Regular readers of the Nones Notes blog know that when we’re in the midst of a “collective angst” about a topic of this nature, I like to solicit the views of my brother, Nelson Nones, who has worked in IT and operations management for decades.

I asked Nelson to share his perspectives on IoT, what he sees as its pitfalls, and whether the current levels of concern are justified. His comments are presented below:

Back in 1998, I was invited to speak about the so-called “millennium bug” (also known as the “Y2K bug”) at a symposium in Kuching, Malaysia. It was a hot topic at that time, because many computer systems then in use hadn’t been designed or built to deal with calendar dates beyond the end of the 20th century.  
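As a simple illustration of the kind of flaw involved (a purely hypothetical snippet, not drawn from any particular system), many programs stored years as just two digits, so ordinary date comparisons went wrong the moment “00” had to mean 2000:

```python
# Hypothetical two-digit-year logic of the sort many pre-2000 systems used.
def is_expired(today_yy, expiry_yy):
    """Return True if the expiry year is earlier than the current year."""
    return expiry_yy < today_yy

# A card valid through 2002 ("02"), checked in 1999 ("99"):
print(is_expired(today_yy=99, expiry_yy=2))   # True -- wrongly flagged as expired,
                                              # because 2 < 99 under two-digit logic
```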

The purpose of my presentation was to educate the audience about the nature of the problem, and how to mitigate it. During the question-and-answer session which followed, a member of the audience rose and began to speak rather hysterically of the threat which the millennium bug posed to civilization as we knew it.  

His principal concern was the millions of embedded sensors and controllers in use throughout industry which were not programmable and would therefore need to be replaced. In his view, very few people knew which of those devices were susceptible to the millennium bug, or where they were running.  

As a result, he felt that many flawed devices would go undetected, causing critical infrastructures such as power generation plants, electricity grids and aircraft to fail.  

Needless to say, his dire predictions did not come to pass and humankind sailed into the 21st century with barely a murmur. This isn’t to say that the millennium bug wasn’t a real threat – it certainly was – but rather that providers and users of information technology (IT) mostly did what was necessary to prepare for it.  As Britain’s Guardian newspaper reported in April 2000, “In truth, there have been bug incidents … none of this, however, adds up to global recession, or infrastructure collapse, or accidental nuclear war, as the most heated prophets were anticipating.”  

It is for similar reasons that I take much of today’s hype over security vulnerabilities of IoT with more than a pinch of salt. 

It’s worth noting that, technologically speaking, IoT isn’t really very new at all. As the prophet of doom at my 1998 symposium (correctly) observed, sensors, software, actuators and electronic controllers have been integral components of automated industrial systems for the past thirty years at least.   

What’s new is that these technologies have begun to be accepted and deployed by consumers. I say “begun” because I don’t know anyone who has actually rigged a “smart home” to work in the all-encompassing way breathlessly envisioned by purveyors of home automation technology; but I do know people who use the technology for specific purposes such as home security, thermostat control and recording TV programs.  

Just last week I spoke with someone who is beta testing a self-driving Tesla automobile, but he confessed that he still won’t take his hands off the wheel because he doesn’t really trust the self-driving technology yet.  

What’s also new is that businesses are extending their use of sensors and controllers well beyond the confines of plants, factories and warehouses. For example, trucking companies routinely use global positioning system (GPS) sensors to monitor fleet locations in real-time.  

Aircraft engine makers such as Rolls-Royce and GE rely on management and monitoring systems that transmit sensor information to ground stations for real-time analysis during flight.  Many problems detected in this manner can be corrected instantly by relaying instructions back to controllers and actuators installed on the engine.

The common denominator for what’s new is the use of existing Internet infrastructure; hence the “I” in “IoT.”  

In earlier times, sensors, software and electronic controllers could communicate only through local area networks (LANs) which were physically isolated and therefore impermeable to external attacks. But when those devices are connected to the public Internet, in theory anyone can access them — including cyber-criminals and governments engaged in sabotage or espionage, or who want to hold things for ransom, surreptitiously watch live feeds, or deploy botnets for distributed denial of service (DDoS) attacks.  

It is clear, therefore, that the root causes of privacy and security concerns arising from increasing IoT usage are mainly network security lapses, and not the things themselves.

Ensuring the highest possible degree of network security is no easy task. Above and beyond arcane technical details such as encryption, installing network firewalls, and opening and closing of ports, it means deploying multiple layers of defenses according to specific policies and controls, and that requires skills and knowledge which most consumers, and even many businesses, do not possess. 

Still, one doesn’t have to be a network geek to implement basic security mechanisms that far too many people overlook. In search of easy pickings, cyber-criminals usually prefer to exploit the huge number of unlocked doors begging for their attention, rather than wasting time trying to penetrate even slightly stronger defenses.   

For example, many people install wireless networks in their homes but forget to change the default router password and default network name (SSID) – or they pick a password that’s easy to guess. In addition, many people leave their network “open” to anyone having a wireless card by failing to implement a security key such as a WPA, WPA2 or WEP key, or by choosing a weak security key.   

An attacker can discover those lapses in a matter of seconds, gaining full administrative authority and control over the compromised network with little risk of detection. This, in turn, gives the attacker immediate access to, and remote control over, any device on the network which is switched on but does not require authentication; for example, network printers, data storage devices, cameras, TVs and personal computers (PCs) which are not configured to require a user logon.
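To get a rough sense of what that exposure looks like, here is a minimal self-audit sketch; the subnet and port list are assumptions you would adjust for your own network, and it merely reports which devices accept connections on a few common management ports, a starting point for finding services that may not require authentication:

```python
import socket

SUBNET = "192.168.1."     # assumed typical home subnet; adjust for your network
PORTS = {23: "telnet", 80: "http admin", 554: "rtsp (cameras)", 8080: "alt http"}

def port_open(host, port, timeout=0.3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

for i in range(1, 255):
    host = f"{SUBNET}{i}"
    found = [name for port, name in PORTS.items() if port_open(host, port)]
    if found:
        # An open port is not proof of a vulnerability -- just a device and
        # service worth checking for default or missing credentials.
        print(f"{host}: responds on {', '.join(found)}")
```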

Plugging those security holes doesn’t require specialist knowledge and shouldn’t take more than an hour for most home networks. Recognizing the security concerns, an increasing number of hardware and software vendors are preconfiguring their products in “full lockdown” mode, which provides basic security by default and requires users to apply specialist knowledge in order to open up their networks as necessary for greater convenience.  

This is precisely what Microsoft did over a decade ago, with great success, in response to widely publicized security vulnerabilities in its Windows® operating system and Internet Explorer browser. 

It’s all too easy to imagine the endgames of hypothetical scenarios in which the bad apples win by wresting control over the IoT from the good guys. But just like the millennium bug nearly two decades ago, it is wiser to heed the wisdom of Max Ehrmann’s Desiderata, published back in 1927:  

“Exercise caution in your business affairs, for the world is full of trickery … but do not distress yourself with dark imaginings.”  

Going forward, I’m confident that a healthy dose of risk intelligence, and not fear, will prove to be the key for successfully managing the downside aspects of IoT.

_________________________

So those are Nelson’s views on the Internet of Things. What about you?  Are you in agreement, or are there aspects about which you may think differently?  Please share your thoughts with other readers.

B-to-B content marketers: Not exactly a confident bunch.

In the world of business-to-business marketing, all that really matters is producing a constant flow of quality sales leads.  According to Clickback CEO Kyle Tkachuk, three-fourths of B-to-B marketers cite lead generation as their most significant objective.  Pretty much everything else pales in comparison.

This is why content marketing is such an important aspect of commercial marketing campaigns.  Customers in the commercial world are always on the lookout for information and insights to help them solve the variety of challenges they face on the manufacturing line, in their product development, quality assurance, customer service and any number of other critical functions.

Suppliers and brands that offer a steady diet of valuable and actionable information are often the ones that end up on a customer’s “short-list” of suppliers when the need to make a purchase finally rolls around.

Thus, the role of content marketers continues to grow – along with the pressures on them to deliver high-quality, targeted leads to their sales forces.

The problem is … a large number of content marketers aren’t all that confident about the effectiveness of their campaigns.

It’s a key takeaway finding from a survey conducted for content marketing software provider SnapApp by research firm Demand Gen.  The survey was conducted during the summer and fall of 2016 and published recently in SnapApp’s Campaign Confidence Gap report.

The survey revealed that more than 80% of the content marketers queried reported being just “somewhat” or “not very” confident regarding the effectiveness of their campaigns.

Among the concerns voiced by these content marketers is that the B-to-B audience is becoming less enamored of white papers and other static, lead-gated PDF documents used to generate leads.

And yet, those are precisely the vehicles that continue to be used most often to deliver informational content.

According to the survey respondents, B-to-B customers not only expect to be given content that is relevant, they’re also less tolerant of resources that fail to speak to their specific areas of interest.

For this reason, one-third of the content managers surveyed reported that they are struggling to come up with effective calls-to-action that capture attention, interest and action instead of being just “noise.”

The inevitable conclusion is that traditional B-to-B marketing strategies and similar “seller-centric” tactics have become stale for buyers.

Some content marketers are attempting to move beyond these conventional approaches and embrace more “content-enabled” campaigns that can address interest points based on a customer’s specific need and facilitate engagement accordingly.

Where such tactics have been attempted, content marketers report somewhat improved results, including higher open rates and an increase in clickthrough rates.

However, the degree of improvement doesn’t appear to be all that impressive. Only about half of the survey respondents reported experiencing improved open rates.  Also, two-thirds reported experiencing an increase in clickthrough rates – but only by 5% or less.

Those aren’t exactly eye-popping improvements.

But here’s the thing: Engagement levels with traditional “static” content marketing vehicles are likely to actually decline … so if content-enabled campaigns can arrest the drop-off and even notch improvements in audience engagement, that’s at least something.

Among the tactics content marketers are considering for creating more robust content-enabled campaigns are:

  • Video
  • Surveys
  • Interactive infographics
  • ROI calculators
  • Assessments/audits

The hope is that these and other tools will increase customer engagement, allow customers to “self-qualify,” and generate better-quality leads that are a few steps closer to an actual sale.

If all goes well, these content-enabled campaigns will also collect data that helps sales personnel accelerate the entire process.

Where the Millionaires Are

Look to the states won by Hillary Clinton in the 2016 presidential election.

Proportion of “millionaire households” by state (darker shading indicates a higher proportion of millionaires).

Over the past few years, we’ve heard a good deal about income inequality in the United States. One persistent narrative is that the wealthiest and highest-income households continue to do well – and indeed are improving their relative standing – while many other families struggle financially.

The most recent statistical reporting seems to bear this out.

According to the annual Wealth & Affluent Monitor released by research and insights firm Phoenix Marketing International, the total tally of U.S. millionaire households is up by more than 800,000 over the past few years.

And if we go back to 2006, before the financial crisis and subsequent Great Recession, the number of millionaire households has increased by ~1.3 million since that time.

[For purposes of the Phoenix report, “millionaire households” are defined as those that have $1 million or more in investable assets. Collectively, these households possess approximately $20 trillion in liquid wealth, which is nearly 60% of the entire liquid wealth in America.]

Even with a growing tally, so-called “millionaire households” still represent around 5% of all U.S. households, or approximately 6.8 million in total. That percentage is nearly flat (up only slightly to 5.1% from 4.8% in 2006).

Tellingly, there is a direct correlation between the states with the largest proportion of millionaire households and how those states voted in the most recent presidential election. Every one of the top millionaire states is located on the east or west coasts – and all but one of them was won by Hillary Clinton:

  • #1  Maryland
  • #2  Connecticut
  • #3  New Jersey
  • #4  Hawaii
  • #5  Alaska
  • #6  Massachusetts
  • #7  New Hampshire
  • #8  Virginia
  • #9  DC
  • #10  Delaware

Looking at the geographic makeup of the states with the highest share of millionaires helps explain how “elitist” political arguments had a degree of resonance in the 2016 campaign that may have surprised some observers.

Nearly half of the jurisdictions Hillary Clinton won are part of the “Top 10” millionaire grouping, whereas just one of Donald Trump’s states can be found there.

But it’s when we look at the tiers below the “millionaire households” category that things come into even greater focus. The Phoenix report shows that “near-affluent” households in the United States – the approximately 14 million households having investable assets ranging from $100,000 to $250,000 – actually saw their total investable assets decline in the past year.

“Affluent” households, which occupy the space between the “near-affluents” and the “millionaires,” have been essentially treading water. So it’s quite clear that things are not only stratified; they aren’t improving, either.

The reality is that the concentration of wealth continues to deepen, as the Top 1% wealthiest U.S. households possess nearly one quarter of the total liquid wealth.

In stark contrast, the ~70% of non-affluent households own less than 10% of the country’s liquid wealth.

Simply put, the past decade hasn’t been kind to the majority of Americans’ family finances. In my view, that dynamic alone explains more of 2016’s political repercussions than any other single factor.  It’s hardly monolithic, but often “elitism” and “status quo” go hand-in-hand. In 2016 they were lashed together; one candidate was perceived as both “elitist” and “status quo,” and the result was almost preordained.

The most recent Wealth & Affluent Monitor from Phoenix Marketing International can be downloaded here.