When it comes to city parklands, the Twin Cities of Minneapolis-St. Paul rule.

Minneapolis, Minnesota

Although I lived in five states prior to going away to college, I spent the most time in those formative years of my life residing in the Twin Cities of Minneapolis and St. Paul in Minnesota.

The city parks in both towns are major amenities. Indeed, you could say that the entire fabric of life in the two cities is interwoven with the park systems; they’re that special.  And my family was no exception in taking advantage of everything the parks had to offer.

So it wasn’t much of a surprise to find that both cities are at the very top of the list of U.S. cities with the best parks.

The evaluation is done annually by The Trust for Public Land, and covers the 100 most populous cities in the United States.

The major metric studied is the percent of city residents who live within a 10-minute walk of a park — although other characteristics are also analyzed, such as park size and investment, the number of playgrounds, dog parks and recreation centers in relation to city population, and so on.

In the 2017 evaluation, Minneapolis topped the list of 100 cities, helped by superior park access across all ages and income levels, along with top scores for park investment and for the number of senior centers, recreation centers and dog parks relative to city population.

All told, an incredible 15% of Minneapolis’ land area is dedicated to park space.

St. Paul was right behind Minneapolis in the #2 slot out of 100 cities evaluated. As visitors to the Twin Cities know, Minneapolis is blessed with seven natural lakes within its borders, whereas next-door St. Paul has just two.  Nevertheless, St. Paul’s commitment to parkland is nearly as strong.

Here’s how the Top 10 list of cities shakes out:

  • #1 Minneapolis, MN
  • #2 St. Paul, MN
  • #3 San Francisco, CA
  • #4 Washington, DC
  • #5 Portland, OR
  • #6 Arlington, VA
  • #7 (tie) Irvine, CA and New York, NY
  • #9 Madison, WI
  • #10 Cincinnati, OH

Several of these cities shine in certain attributes. San Francisco, for instance, scores highest for park access, with nearly every resident living within a 10-minute walk of a park.

Three cities (Arlington, Irvine and Madison) achieved a Top 10 ranking for only the second time; all three first broke into the Top 10 in 2016.

What about cities that appear at the bottom of the Trust for Public Land list? They tend to be “car-dominated” cities, where parks aren’t easily accessible on foot for many residents.  For the record, here are the lowest-ranked cities:

  • #90 (tie) Fresno, CA, Hialeah, FL and Jacksonville, FL
  • #93 (tie) Laredo, TX and Winston-Salem, NC
  • #95 Mesa, AZ
  • #96 Louisville, KY
  • #97 Charlotte, NC
  • #98 (tie) Fort Wayne, IN and Indianapolis, IN
Bottom dweller: A crumbling structure in a padlocked park in Indianapolis, Indiana.

Interestingly, one of these cities – Charlotte – leads all others in median park size (~16 acres). Of course, that large median size likely comes at the expense of residents’ access, because fewer small parks are scattered around the city.

To see the full rankings as well as each city’s score by category evaluated, you can view a comparative chart here.

Based on your experience, do any of the city rankings surprise you? Is there a particular city that you think should be singled out for praise (or a pan) for its parklands?

For job seekers in America, the compass points south and west.

Downtown Miami

Many factors go into determining what may be the best cities for job seekers to find employment. There are any number of measures – not least qualitative ones such as where friends and family members reside, and what kind of family safety net exists.

But there are other measures, too – factors that are a little easier to apply across all workers:

  • How favorable is the local labor market to job seekers?
  • What are salary levels after adjusting for cost-of-living factors?
  • What is the “work-life” balance that the community offers?
  • What are the prospects for job security and advancement opportunities?

Seeking to find clues as to which metro areas represent the best environments for job seekers, job posting website Indeed set about analyzing data gathered from respondents who live in the 50 largest metro areas on the Indeed review database.

Indeed’s research methodology is explained here. Its analysis began by posing the four questions above and applying a percentile score for each one based on the feedback it received, followed by additional analytical calculations to come up with a consolidated score for each of the 50 metro areas.
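Indeed hasn’t published its full weighting scheme in this post, but as a rough, purely illustrative sketch of how per-question percentile scores could roll up into a consolidated score, here is a minimal Python example assuming a simple equal-weight average and made-up numbers (Indeed’s “additional analytical calculations” almost certainly differ):

```python
# Illustrative sketch only: Indeed's exact aggregation isn't described in the post,
# so this assumes a simple equal-weight average of the four percentile scores.
from statistics import mean

# Hypothetical percentile scores (0-100) for one metro area, one per criterion.
scores = {
    "labor_market_favorability": 88,
    "cost_of_living_adjusted_salary": 72,
    "work_life_balance": 91,
    "job_security_and_advancement": 85,
}

def consolidated_score(criterion_scores):
    """Combine per-criterion percentile scores into a single composite score."""
    return mean(criterion_scores.values())

print(round(consolidated_score(scores), 1))  # 84.0 for these hypothetical inputs
```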

The resulting list shows a definite skew towards the south and west. In order of rank, here are the ten metro areas that scored as the most attractive places for job seekers:

  • #1 Miami, FL
  • #2 Orlando, FL
  • #3 Raleigh, NC
  • #4 Austin, TX
  • #5 Sacramento, CA
  • #6 San Jose, CA
  • #7 Jacksonville, FL
  • #8 San Diego, CA
  • #9 Houston, TX
  • #10 Memphis, TN

Not all metro areas ranked equally strongly across the four measurement categories. Overall leader Miami scored very highly for work-life balance as well as job security and advancement, but its scores on cost-of-living factors were decidedly less impressive.

“Where are cities in the Northeast and the Midwest?”, you might ask. Not only are they nowhere to be found in the Top 10, they aren’t in the second group of ten in Indeed’s ranking, either:

  • #11 Las Vegas, NV
  • #12 San Francisco, CA
  • #13 Riverside, CA
  • #14 Atlanta, GA
  • #15 Los Angeles, CA
  • #16 San Antonio, TX
  • #17 Seattle, WA
  • #18 Hartford, CT
  • #19 Charlotte, NC
  • #20 Tampa, FL

… except for one: Hartford (#18 on Indeed’s list).

Likely, the scarcity of Northeastern and Midwestern cities correlates with the loss of manufacturing jobs, which have typically been so important to those metro areas.  Many of these markets have struggled to become more diversified.

If the top-scoring cities share characteristics besides geography, it’s that they’re high-tech bastions, highly diversified economies or – very significantly – seats of state government.

In fact, if you look at the Top 10 metro areas, three of them are state capital cities; in the next group, there are two more.  Not surprisingly, those cities were ranked higher than others for job security.  And salaries relative to the local cost of living in those areas were also quite favorable.

So much for the adage that a government paycheck is low even if the job security is high; it turns out that both the pay and the security measure up well.

For more details on the Indeed listing, how the ranking was derived, and individual scores by metro area for the four criteria shown above, click here.

The U.S. Postal Service unveils its Informed Delivery notification service – about two decades too late.

Earlier this year, the U.S. Postal Service decided to get into the business of e-mail. But the effort is seemingly a day late and a dollar short.

Here’s how the scheme works: the USPS sends customers an e-mail containing scanned images of the postal mail that will be delivered that day.

It’s called Informed Delivery, and it’s being offered as a free service.

Exactly what is this intended to accomplish?

It isn’t as if receiving an e-mail notification of postal mail that’s going to be delivered within hours is particularly valuable.  If the information were that time-sensitive, why not receive the actual original item via e-mail to begin with?  That would have saved the sender 49 cents on the front end as well.

So the notion that this service would somehow stem the tide of mass migration to e-mail communications seems pretty far-fetched.

And here’s another thing: the USPS is offering the service free of charge – so it isn’t even generating any revenue to recoup the cost of running the program.

That doesn’t seem to make very good business sense for an organization that’s already flooded with red ink.

Actually, I can think of one constituency that might benefit from Informed Delivery – rural residents who aren’t on regular delivery routes and who must travel a distance to pick up their mail at a post office. For those customers, I can see how they might choose to forgo a trip to town if the day’s mail isn’t anything to write home about — if you’ll pardon the expression.

But what portion of the population is made up of people like that? I’m not sure, but it’s likely far less than 5%.

And because the USPS is a quasi-governmental entity, it’s compelled to offer the same services to everyone.  So even offering Informed Delivery as a “niche product” to just certain customers isn’t really an option.

I suppose the USPS deserves credit just for trying to come up with new ways to stay relevant in a changing communications world. But it’s very difficult to come up with anything worthwhile when the entire foundation of the USPS’s mission has been so eroded over the past generation.

Where’s the Best Place to Live without a Car?

For Americans who live in the suburbs, exurbs or rural areas, being able to live without a car seems like a pipedream. But elsewhere, there are situations where it may actually make some sense.

They may be vastly different in nearly every other way, but small towns and large cities share one trait – they’re the places where living without a car is most feasible.

Of course, within those two broad groups there can be big differences in how attractive car-free living really is, depending on local factors.

For instance, the small county seat where I live can be walked from one side of town to the other in under 15 minutes. This means that, even if there are places where a sidewalk would be nice to have, it’s theoretically possible to take care of grocery shopping and trips to the pharmacy or the cleaners or the hardware store on foot.

Visiting restaurants, schools, the post office and other government offices is quite easy as well.

But even slightly bigger towns pose challenges because of distances that are much greater – and there’s usually little in the way of public transport to serve inhabitants who don’t possess cars.

At the other end of the scale, large cities are typically places where it’s possible to move around without the benefit of a car – but some urban areas are more “hospitable” than others based on factors ranging from the strength of the public transit system and neighborhood safety to the climate.

Recently, real estate brokerage firm Redfin took a look at large U.S. cities (those with over 300,000 population) to come up with its list of the 10 cities it judged most amenable to living without a car. Redfin compiled rankings to determine which cities have the best composite “walk scores,” “transit scores” and “bike scores.”

Here’s how the Redfin Top 10 list shakes out. Topping the list is San Francisco:

  • #1: San Francisco
  • #2: New York
  • #3: Boston
  • #4: Washington, DC
  • #5: Philadelphia
  • #6: Chicago
  • #7: Minneapolis
  • #8: Miami
  • #9: Seattle
  • #10: Oakland, CA

Even within the Top 10 there are differences, of course. This chart shows how these cities do relatively better (or worse) in the three categories scored:

Redfin has also analyzed trends in residential construction in urban areas, finding that developers are beginning to include fewer parking spaces in residential properties – which makes the option of forgoing car ownership a more important consideration than it was in the past.

What about your own experience? Do you know of a particular city or town that’s particularly good in accommodating residents who don’t own cars?  Or just the opposite?  Please share your observations with other readers.

The downside dangers of IoT: Overblown or underestimated?

In recent weeks, there has been an uptick in articles appearing in the press about the downside risks of the Internet of Things (IoT). The so-called “Weeping Angel” technique, which essentially allows hackers to turn a smart television into a microphone, is one eyebrow-raising example from the CIA files recently released by WikiLeaks. Another is the potential for hacking into the systems of autonomous vehicles, enabling cargo to be stolen or the vehicles themselves to be held for ransom.

Some of it seems like the stuff of science fiction – or at the very least a modern form of cloak-and-dagger activity. Regular readers of the Nones Notes blog know that when we’re in the midst of “collective angst” about a topic of this nature, I like to solicit the views of my brother, Nelson Nones, who has worked in the fields of IT and operations management for decades.

I asked Nelson to share his perspectives on IoT, what he sees are its pitfalls, and whether the current levels of concern are justified. His comments are presented below:

Back in 1998, I was invited to speak about the so-called “millennium bug” (also known as the “Y2K bug”) at a symposium in Kuching, Malaysia. It was a hot topic at that time, because many computer systems then in use hadn’t been designed or built to deal with calendar dates beyond the end of the 20th century.  

The purpose of my presentation was to educate the audience about the nature of the problem, and how to mitigate it. During the question-and-answer session which followed, a member of the audience rose and began to speak rather hysterically of the threat which the millennium bug posed to civilization as we knew it.  

His principal concern was the millions of embedded sensors and controllers in use throughout industry which were not programmable and would therefore need to be replaced. In his view, very few people knew which of those devices were susceptible to the millennium bug, or where they were running.  

As a result, he felt that many flawed devices would go undetected, causing critical infrastructures such as power generation plants, electricity grids and aircraft to fail.  

Needless to say, his dire predictions did not come to pass and humankind sailed into the 21st century with barely a murmur. This isn’t to say that the millennium bug wasn’t a real threat – it certainly was – but rather that providers and users of information technology (IT) mostly did what was necessary to prepare for it.  As Britain’s Guardian newspaper reported in April 2000, “In truth, there have been bug incidents … none of this, however, adds up to global recession, or infrastructure collapse, or accidental nuclear war, as the most heated prophets were anticipating.”  

It is for similar reasons that I take much of today’s hype over security vulnerabilities of IoT with more than a pinch of salt. 

It’s worth noting that, technologically speaking, IoT isn’t really very new at all. As the prophet of doom at my 1998 symposium (correctly) observed, sensors, software, actuators and electronic controllers have been integral components of automated industrial systems for the past thirty years at least.   

What’s new is that these technologies have begun to be accepted and deployed by consumers. I say “begun” because I don’t know anyone who has actually rigged a “smart home” to work in the all-encompassing way breathlessly envisioned by purveyors of home automation technology; but I do know people who use the technology for specific purposes such as home security, thermostat control and recording TV programs.  

Just last week I spoke with someone who is beta testing a self-driving Tesla automobile, but he confessed that he still won’t take his hands off the wheel because he doesn’t really trust the self-driving technology yet.  

What’s also new is that businesses are extending their use of sensors and controllers well beyond the confines of plants, factories and warehouses. For example, trucking companies routinely use global positioning system (GPS) sensors to monitor fleet locations in real-time.  

Aircraft engine makers such as Rolls-Royce and GE rely on management and monitoring systems to transmit information from sensors to ground stations for real-time analysis during flight.  Many problems detected in this manner can be corrected instantly, by relaying instructions back to controllers and actuators installed on the engine.  

The common denominator for what’s new is the use of existing Internet infrastructure; hence the “I” in “IoT.”  

In earlier times, sensors, software and electronic controllers could communicate only through local area networks (LANs) which were physically isolated and therefore impermeable to external attacks. But when those devices are connected to the public Internet, in theory anyone can access them — including cyber-criminals and governments engaged in sabotage or espionage, or who want to hold things for ransom, surreptitiously watch live feeds, or deploy botnets for distributed denial of service (DDoS) attacks.  

It is clear, therefore, that the root causes of privacy and security concerns arising from increasing IoT usage are mainly network security lapses, and not the things themselves.

Ensuring the highest possible degree of network security is no easy task. Above and beyond arcane technical details such as encryption, installing network firewalls, and opening and closing of ports, it means deploying multiple layers of defenses according to specific policies and controls, and that requires skills and knowledge which most consumers, and even many businesses, do not possess. 

Still, one doesn’t have to be a network geek to implement basic security mechanisms that far too many people overlook. In search of easy pickings, cyber-criminals usually prefer to exploit the huge number of unlocked doors begging for their attention, rather than wasting time trying to penetrate even slightly stronger defenses.   

For example, many people install wireless networks in their homes but forget to change the default router password and default network name (SSID) – or they pick a password that’s easy to guess. In addition, many people leave their network “open” to anyone having a wireless card by failing to implement a security key such as a WPA, WPA2 or WEP key, or by choosing a weak security key.   

An attacker can discover those lapses in a matter of seconds, gaining full administrative authority and control over the compromised network with little risk of detection. This, in turn, gives the attacker immediate access to, and remote control over, any device on the network which is switched on but does not require authentication; for example, network printers, data storage devices, cameras, TVs and personal computers (PCs) that are not configured to require a user logon. 

Plugging those security holes doesn’t require specialist knowledge and shouldn’t take more than an hour for most home networks. Recognizing the security concerns, an increasing number of hardware and software vendors are preconfiguring their products in “full lockdown” mode, which provides basic security by default and requires users to apply specialist knowledge in order to open up their networks as necessary for greater convenience.  

This is precisely what Microsoft did over a decade ago, with great success, in response to widely publicized security vulnerabilities in its Windows® operating system and Internet Explorer browser. 

It’s all too easy to imagine the endgames of hypothetical scenarios in which the bad apples win by wresting control over the IoT from the good guys. But just like the millennium bug nearly two decades ago, it is wiser to heed the wisdom of Max Ehrmann’s Desiderata, published back in 1927:  

“Exercise caution in your business affairs, for the world is full of trickery … but do not distress yourself with dark imaginings.”  

Going forward, I’m confident that a healthy dose of risk intelligence, and not fear, will prove to be the key for successfully managing the downside aspects of IoT.

_________________________

So those are Nelson’s views on the Internet of Things. What about you?  Are you in agreement, or are there aspects about which you may think differently?  Please share your thoughts with other readers.

Some good news for the U.S. Postal Service for a change …

The U.S. Postal Service has just implemented a price adjustment on first class letter mail – the first rate increase in quite a few years. Some other pricing adjustments have been implemented as well, but on the whole they are modest.

Hopefully the rate increases won’t throw cold water on the good news that the USPS experienced over the holiday season. According to a Rasmussen Reports consumer survey of ~1,000 American adults age 18 and over conducted at the end of December, Americans used the USPS more in the most recent holiday season than in 2015.

The public also continues to give the USPS higher marks than its major competitors – FedEx and UPS – on the way it handles their packages.

For the record, ~21% of the respondents surveyed by Rasmussen reported that they used the U.S. Postal Service more this holiday season than they have in previous years, while ~18% reported they used it less. The remaining ~61% kept their USPS usage at around the same level of activity.

On the commercial side, for many businesses that don’t have the kind of high-volume shipping needs to qualify for special pricing from FedEx or UPS, the USPS also appears to be the far better choice on a price-to-value basis.

In mid-2015, FitSmallBusiness.com undertook apples-to-apples comparisons of the three big package delivery firms, and found some startling differences.  For example, to ship a 3-lb. package overnight from New York City to Los Angeles, using FedEx would set the sender back $83.  UPS was even worse, at $84.

The USPS price?  Just $24.99.

Comparing short-haul rates as well as heavier 10-lb. packages found similar major discrepancies — all in favor of using the U.S. Postal Service. On top of that, the USPS provides free packaging materials, complimentary pick-up service, free insurance and tracking — not to mention flat-rate boxes for packages that weigh up to 70 lbs.

Sealing the deal further, while FedEx’s 50,000+ and UPS’s 63,000+ locations worldwide are certainly nice to rely on, the number of USPS locations dwarfs those figures by a country mile. Those myriad USPS locations also mean that packages can be shipped to P.O. boxes in addition to physical addresses – something that’s out of the reach of either FedEx or UPS.

People love to beat up on the United States Postal Service.  But say what you will about the USPS, its problems and its financial challenges: it’s still a major-league bargain for many consumers and businesses.

People at the Polls: A Tale of Four Predictions

Regular readers of the Nones Notes blog will know that my brother, Nelson Nones, sometimes contributes his thoughts and perspectives for the benefit of other readers.  As someone who has lived and worked outside the United States for the past two decades, Nelson’s perspectives on domestic and international events and megatrends are always insightful — and often different from prevailing local opinions.

That has certainly been the case during 2016.  In a year of four major election surprises, my brother correctly predicted the results at the ballot box in every single case.  Below is a guest post written by Nelson in which he explains how he arrived at predictions that were so much at odds with the prevailing views and conventional wisdom.

Oh, and by the way … I was one of the “confidantes” Nelson refers to in his post, so I can personally vouch for the fact that his “fearless predictions” were made before the events happened — even if they were delivered to a skeptical (and at times incredulous!) audience.

_______________________________________

A TALE OF FOUR PREDICTIONS

By Nelson M. Nones, CPIM

Bangkok, Thailand — 13th November 2016

Rodrigo Duterte and Donald Trump

A handful of my confidantes across the world can vouch that I correctly predicted four momentous events during the past fifteen months: that Donald Trump would be the US Republican nominee, United Kingdom (UK) citizens would vote to leave the European Union (EU), Rodrigo Duterte would be elected President of the Philippines and Trump would win the US Presidency.

To my knowledge, no one else correctly predicted all four events.

Skeptics might dismiss my predictions as reckless, and credit my successes to dumb luck, but my confidantes can also attest that these predictions were the products of diligent analyses which I freely shared alongside the predictions themselves. What did I do to gain insights into the future that most others did not see?

Independent, Globally-Connected Thinking

Although I’m American, I have lived and worked in Thailand since 2004, and elsewhere in Asia and Europe for 16 years of the past two decades. During that time I’ve had the good fortune to visit or reside in over 40 countries on five continents, and form long-lasting professional relationships with knowledgeable people all over the world.

Not only has this experience given me a truly global perspective, it also filters out most of the distractions that partisans everywhere deliberately craft to alter what people think. Happily, for instance, I almost never encounter the barrage of negative attack advertising which fills every US election cycle. Avoiding so much propaganda, instead of continually having to confront and fend it off, has given me freedom and time to nurture and hold a much more independent and finely balanced point of view than I could ever have acquired by sitting in a single locale.

By now, the repudiation of globalization and free trade, and the rise of nationalist, populist fervor are universally recognized as key reasons why these four events unfolded as they did. Most people have come to realize or accept this only in hindsight, but my global perspective made the eventual gathering of these forces obvious years ago.

In 1999, while living in London, I published the article “Deflation’s Impact on Business Information System Requirements” in which I anticipated that deflation would force manufacturers to reduce their variable labor costs and warned, “These actions will have adverse and increasing social effects. Worker demands for improved economic security may lead to renewal of isolationist policies in some locales, which would effectively reverse the economic liberalization and globalization trends of recent years. Business models designed to optimize performance in the previous climate of unrestricted free trade will be adjusted to account for the artificial incentives and penalties created by new government regulations.” My inspiration was a special section of The Economist, “Could it happen again?” published on February 18th, 1999 which declared, “It is conceivable that the world may be in for a new period of global deflation (meaning falling consumer prices) for the first time since the 1930s” and concluded, “The world economy is, in short, precariously balanced on the edge of a deflationary precipice … history has shown that once deflation takes hold, it can be far more damaging than inflation.”

At that time, global inflation had fallen to 3.6% per year, from a peak of 14.8% in 1980. By 2014, global inflation had fallen further to 1.7%, and the best-fitting linear trend over the previous 44 years suggested to me that prices could stop rising altogether by 2018 globally, after which a period of negative inflation (“deflation”) might ensue, with prices falling perhaps by 0.4% in 2020. From this I concluded that the rise of isolationism, and the concomitant demise of globalization, economic liberalization and free trade, were near at hand.

First Hand Observation

It is one thing to sit in a chair, reading and writing about such things, and quite another to see the evidence up close and in person.

Nigel Farage

I spent four months of 2013 working in the UK, which gave me plenty of opportunity to watch local television news. One person who seemed to make headlines every day was the controversial politician Nigel Farage, then a Member of the European Parliament and Leader of the UK Independence Party (UKIP) who would spearhead the campaign to leave the EU nearly three years later. Earlier in 2013, he had led UKIP, a Eurosceptic and right-wing populist party, to its best performance ever in a UK election. Key planks in its platform at that time sound eerily familiar now and included deporting migrants, legalizing handguns, destroying wind farms (which many climate change deniers like to condemn as wasteful and unattractive), privatizing the National Health Service (NHS) and improving relations with Vladimir Putin. Just like Trump would do in 2016, UKIP ran strongest among less-educated voters in predominantly white, blue-collar areas, while faring much worse in areas dominated by college graduates, immigrants and minorities.

The following year I had the opportunity to travel by train from Paris to London. It was the first time I’d passed through the Channel Tunnel, that most emblematic symbol of free trade and European economic integration. But as the train approached the tunnel’s French portal, I was struck by the tight security and double fencing meant to prevent migrants from climbing or jumping onto trains bound for the UK. These were impressively similar to the border fencing and controls separating El Paso, TX from Ciudad Juárez, Mexico which my family and I visited in early January, 2015.

Later, I spent July and August 2015 in the Pacific Northwest. Trump was all over the news, both before and after his infamous Republican primary debate on August 6th. I instantly noticed the many similarities between Trump’s rhetoric and what I’d heard from Nigel Farage two years before. Just like Farage, Trump appeared to be running a campaign requiring no television advertising whatsoever. By provoking so much controversy, he seemed to receive thirty minutes’ free television interview time for every thirty seconds any other candidate got.

Based on these observations, on August 19th, 2015, I wrote a confidante, “Last week, at lunch, I made a prediction to our project team in Washington state: Trump will win the Republican nomination, and he will probably win the Presidency too (or at least come very close to winning it).” I added, “Right now, he is looking like a steamroller, and all the other candidates (Hillary [Clinton] included), are looking like ants who can’t run fast enough to get out of the way.”

Fast-forward to October, 2016, when I next visited the U.S. and drove all the way from New York to Minnesota through Pennsylvania, Ohio, Indiana, Illinois and Wisconsin. These states, with the exception of New York and Illinois, form the majority of the “blue wall” which Trump successfully breached on Election Day, losing only Minnesota by a tiny margin. I noticed, particularly outside the big cities of New York, Cleveland, Chicago, Saint Paul and Minneapolis, that Trump’s lawn signs and bumper stickers seemed to outnumber Clinton’s by something like four to one. I even spotted a few homemade Trump signs or marquees, but not a single one for Clinton.

My arrival in the U.S. coincided with the Washington Post release of the video containing Trump’s lewd comments about women. Yet while I was driving across America, even as new accusers came forward with fresh allegations of sexual assault and Trump’s prospects, according to media pronouncements, sagged precipitously toward new lows, the lopsided proportion of Trump lawn signs and bumper stickers appeared to stay firmly planted in place.

Hypothesis Framing

As early as August, 2015, long before the Republican primaries got underway, I began to formulate a hypothesis to explain why Trump could possibly win nomination as well as the US Presidency. At the time, Trump led the Republican polls and there was talk that he would hit a 25 or 30 percent “ceiling” of support which, if true, would put the Republican nomination out of his reach.

I decided to test this notion using Lee Drutman’s analysis of an American National Election Studies (ANES) 2012 Time Series data set (Lee Drutman, “What Donald Trump gets about the electorate,” Vox, August 18th, 2015). Drutman’s thesis was that Trump, as well as Bernie Sanders, was playing to a sizeable number of voters representing up to 40 percent of the total electorate. In very broad terms, according to Drutman, the electorate breaks down like this:

  • 40 percent are “Populists” – of which 55 percent are strongly Republican, lean Republican or are independent; attracted to Trump and Sanders.
  • 33 percent are “Liberals” – of which 80 percent are strongly Democratic, lean Democratic or are independent, thus forming the core of Clinton’s support (and presumably would vote for Sanders as well).
  • 21 percent are “Moderates” – evenly divided among Democratic, independent and Republican voters, thus more likely than not to vote for Clinton on the Democratic side or a mainstream Republican like Jeb Bush on the Republican side.
  • 4 percent are “Business Republicans” – attracted to mainstream Republicans.
  • 2 percent are “Political Conservatives” – attracted to other Republicans in the field.

Considering this, I reckoned that Trump, being the only populist candidate in the Republican field, could potentially attract all the Republican as well as independent “Populists” and thereby conceivably garner up to 58 percent of the Republican primary vote, far exceeding any 25 or 30 percent “ceiling” and more than enough to win nomination.

Thereafter, assuming Clinton won the Democratic nomination, I reasoned that in the general election Trump, as the only “Populist” standing for the Presidency and also the Republican nominee, could potentially attract all the “Populists,” Republican “Moderates,” “Business Republicans” and “Political Conservatives,” thereby taking about an eighth of Clinton’s natural support away from her and giving him up to 52 percent of the popular vote.

My reasoning was simplistic, but good enough to frame my working hypothesis:  Trump wins the nomination, and the US Presidency, by running as a populist under the Republican brand.

Later on, right before the Brexit referendum, another of my confidantes, who is British, mentioned rumors that published polls were understating the true extent of support for leaving the EU because likely voters, who actually intended to vote for leaving, were reluctant to admit this fact to the pollsters. Indeed, the final compilation of polls released by The Economist on the eve of the referendum revealed 45 percent each for leaving and remaining, and 10 percent undecided, yet 52 percent actually voted to leave. This meant that seven out of every ten supposedly undecided voters ultimately chose to leave. It’s not improbable that many of them truly intended to do so all along. But if this were true, polling data alone could not be relied upon to predict the outcome.

This insight led me to frame a supplementary hypothesis just before the US general election. In consequence of Trump’s many campaign missteps and personal indiscretions, a “closet” vote exists comprising likely voters who fully intend to vote for Trump in the privacy of a ballot booth, but are too ashamed to admit it to a pollster (or anyone else, for that matter).

Data Gathering

As much as I prefer gathering and analyzing my own data, instead of relying on secondary sources, this option was out of the question. I would have to rely on whatever public polls were available.

Fortunately even in Thailand, using high-speed Internet it is possible to instantly retrieve and compile data from thousands of public polls. But not all public polls are alike. Some have better track records than others. Some rely on questionable methodologies or don’t disclose their methodologies at all. While some are issued by reputable and supposedly impartial polling organizations, others come from partisan outfits. Last, and most importantly, some are less timely than others. A poll’s usefulness for predicting the winner decays rapidly with the passage of time.

Nonetheless, when doing empirical analyses, data of any kind or quality is always better than no data at all, so I employ a simple set of rules to separate the wheat from the chaff.

First, in countries such as the UK and the Philippines, I analyze only national polls because national elections are decided by popular vote across the whole country. But for US data, considering how the Republican nomination process and the US Electoral College are designed to work, I completely disregard national US polls, and analyze only polls taken at the state and territorial levels. Unless no other polling data is available, I also disregard any poll taken over a period exceeding one week, or which is not documented to have been designed and taken according to standard opinion polling practices, or for which the polling dates, sample size, margin of error and confidence level are not disclosed.

Second, because polls are merely snapshots of an electorate at specific moments in time, for any given election I only use the most recent poll, after combining all the polls taken on the same day by weighting their frequencies according to sample size. To determine when a poll was taken, I use the last date it was conducted in the field and not the date it was released.
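As a minimal sketch of that same-day combination rule, weighting each poll’s reported frequencies by its sample size might be implemented like this (the poll figures below are hypothetical, not drawn from any actual survey):

```python
# Minimal sketch of the same-day poll combination rule described above:
# polls whose final field date is the same day are pooled by weighting each
# poll's reported shares by its sample size.
def combine_same_day_polls(polls):
    """polls: list of dicts like {"n": sample_size, "shares": {candidate: pct}}."""
    total_n = sum(p["n"] for p in polls)
    candidates = {c for p in polls for c in p["shares"]}
    return {
        c: sum(p["n"] * p["shares"].get(c, 0.0) for p in polls) / total_n
        for c in candidates
    }

same_day = [
    {"n": 800,  "shares": {"Trump": 46.0, "Clinton": 44.0, "Undecided": 10.0}},
    {"n": 1200, "shares": {"Trump": 43.0, "Clinton": 47.0, "Undecided": 10.0}},
]
print(combine_same_day_polls(same_day))
# e.g. Trump 44.2, Clinton 45.8, Undecided 10.0
```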

The US general election, and other national elections, are held on the same day so I prefer to wait until a few days before each election before gathering any data. However the US Republican nominating process is a sequence of 56 separate statewide or territorial elections, caucuses or conventions held over a period lasting just over four months. It is therefore necessary to refresh the analysis nearly every week, as late as possible before the next scheduled events, using previously-bound delegate counts together with the latest polling data.

Model Building

To turn poll results into election predictions, you need models. A model is simply an algorithm for converting the frequency distribution of the latest poll into a projection which takes all working hypotheses and applicable decision rules into consideration.

Modeling a national election decided by popular vote is quite straightforward. The poll’s reported frequency distribution is adjusted, if required, according to the working hypotheses. The candidate or proposition preferred by the greatest adjusted number (or percentage) of respondents is deemed the winner.

For example, the final compilation of polls available on the eve of the Brexit referendum reported that the percentage of respondents who preferred to leave the EU was exactly the same as the percentage of respondents who preferred to remain. A smaller percentage of respondents was undecided. According to the working hypothesis, the number of respondents who claim to be undecided but actually prefer to leave exceeds the number of respondents who claim to be undecided but actually prefer to remain. Because any possible adjustment conforming to the working hypothesis would break the tie in favor of respondents who preferred to leave, I predicted that the UK would choose to leave the EU.
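Here is a small sketch of that popular-vote model applied to the Brexit figures above (Leave 45 percent, Remain 45 percent, Undecided 10 percent). The 70 percent allocation of undecided respondents to Leave is the proportion the actual result later implied; at prediction time, any allocation consistent with the working hypothesis breaks the tie the same way:

```python
# Sketch of the popular-vote model: allocate the undecided share between the
# two options according to the working hypothesis, then pick the leader.
def project_popular_vote(shares, undecided_to_a=0.7, a="Leave", b="Remain"):
    """Return adjusted shares and the projected winner for a two-option vote."""
    undecided = shares.get("Undecided", 0.0)
    adjusted = {
        a: shares[a] + undecided_to_a * undecided,
        b: shares[b] + (1 - undecided_to_a) * undecided,
    }
    return adjusted, max(adjusted, key=adjusted.get)

adjusted, winner = project_popular_vote(
    {"Leave": 45.0, "Remain": 45.0, "Undecided": 10.0}
)
print(adjusted, winner)   # {'Leave': 52.0, 'Remain': 48.0} Leave
```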

Modeling Republican primary contests is much more complicated. Each state or territory sends its prescribed number of delegates to the national nominating convention. Most of those delegates are bound to vote for a specific candidate, at least during the initial balloting round, but the rules for binding those delegates vary widely based on proportional, winner-takes-all and other voting systems.

32 of the 56 states and territories bind their delegates proportionally according to each candidate’s share of the vote in a primary election, convention or local caucuses. Candidates may also need to exceed a minimum threshold before earning any delegates. The outcome is decided exclusively at the state or territorial level in some of these states and territories, while in the others outcomes are decided separately for each congressional district and, for some, at the state level also to bind their at-large delegates.

To illustrate, New York has 95 delegates comprising three delegates from each of its 27 congressional districts, plus 14 at-large delegates. It holds an April primary election to bind the delegates from each of its congressional districts proportionally, and uses the statewide popular vote to proportionally bind the at-large delegates. At both the congressional district and state levels, candidates must receive at least 20 percent of the popular vote and, at both levels also, a candidate who receives more than 50 percent of the popular vote wins all delegates. Finally, if at least two candidates receive 20 percent or more of the vote, the candidate with the most votes receives two delegates and the candidate with the second most votes receives one.
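A minimal sketch of that district-level rule, using hypothetical vote shares, might look like the following (the case where only one candidate clears the 20 percent threshold isn’t spelled out in the post, so it is treated here as winner-takes-all):

```python
# Sketch of the New York congressional-district rule as described above
# (3 delegates per district): a candidate above 50% takes all three; otherwise,
# among candidates clearing the 20% threshold, the leader gets 2 and the
# runner-up gets 1. Vote shares below are hypothetical.
def ny_district_delegates(shares, n_delegates=3):
    """shares: {candidate: vote share in percent} for one congressional district."""
    qualified = {c: s for c, s in shares.items() if s >= 20.0}
    ranked = sorted(qualified, key=qualified.get, reverse=True)
    counts = {c: 0 for c in shares}
    if ranked and qualified[ranked[0]] > 50.0:
        counts[ranked[0]] = n_delegates              # outright majority: winner takes all
    elif len(ranked) >= 2:
        counts[ranked[0]], counts[ranked[1]] = 2, 1  # 2/1 split between the top two
    elif ranked:
        counts[ranked[0]] = n_delegates              # only one qualifier (assumed winner-takes-all)
    return counts

print(ny_district_delegates({"Trump": 54.0, "Kasich": 25.0, "Cruz": 15.0}))
# {'Trump': 3, 'Kasich': 0, 'Cruz': 0}
print(ny_district_delegates({"Trump": 48.0, "Kasich": 31.0, "Cruz": 17.0}))
# {'Trump': 2, 'Kasich': 1, 'Cruz': 0}
```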

18 other states and territories use the winner-takes-all system which binds all delegates to the single candidate who received the largest number of popular votes in a primary election or local caucuses. Just as for states and territories using the proportional voting system, outcomes are decided either at the state or territorial level, by congressional district or at both levels.

California, for instance, has 172 delegates comprising three delegates from each of its 53 congressional districts, plus 13 at-large delegates. It holds a June primary election to bind the delegates from each of its congressional districts on a winner-takes-all basis, and uses the statewide popular vote to bind the at-large delegates, again on a winner-takes-all basis. Florida, on the other hand, holds a March primary election to bind all of its 99 delegates on a statewide winner-takes-all basis.

West Virginia Republicans, uniquely, hold a primary election in which its 34 delegates are elected directly. Only the delegates’ names appear on the ballot.

Lastly, 5 states and territories don’t bind their delegates at all, nor do they hold primary elections. Instead they select their delegates at local caucuses or state nominating conventions.

Given the voting system differences and wide disparities between the rules, my basic approach is to model each state or territory separately. This allows me to take as many local variations into account as possible. Inevitably, though, compromises must be struck to deal with missing data and other uncertainties.

Although polling data is available for most states and territories, very little is available for individual congressional districts. In 2016, data were published only for congressional districts in New York and California. Accordingly, I modeled New York and California at both the state and congressional district levels, but elsewhere only at the state or territorial level which effectively employs statewide or territorial poll results as proxies for congressional districts.

Finally, for West Virginia as well as other states and territories whose delegates are unbound, I could only use statewide or territorial poll results as proxies to predict the outcomes of their primary elections, conventions or local caucuses.

To arrive at predictions for the Republican nomination, I saw no need to apply any supplemental working hypotheses. Instead, I used the unadjusted frequency distributions reported for the polls most recently taken in each state or territory, and then applied the relevant state or territorial model to project the number of delegates bound to each candidate in each state or territory. I summed these projections by candidate, and then added the actual number of delegates already bound to each candidate, to arrive at the national projection. The candidate projected to enter the national convention with at least 1,237 bound delegates (half the total of national convention delegates plus one) was deemed the eventual winner.
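In code, that national roll-up amounts to little more than a sum and a threshold test. The delegate figures below are placeholders, not the actual projections:

```python
# Sketch of the nomination roll-up described above: projected newly bound delegates
# per state are summed by candidate, added to delegates already bound from earlier
# contests, and compared against the 1,237-delegate majority threshold.
MAJORITY = 1237

def national_projection(state_projections, already_bound):
    """state_projections: {state: {candidate: projected delegates}}."""
    totals = dict(already_bound)
    for delegates in state_projections.values():
        for candidate, n in delegates.items():
            totals[candidate] = totals.get(candidate, 0) + n
    winner = next((c for c, n in totals.items() if n >= MAJORITY), None)
    return totals, winner   # winner is None -> contested convention, no prediction made

totals, winner = national_projection(
    {"CA": {"Trump": 172, "Cruz": 0}, "NJ": {"Trump": 51, "Cruz": 0}},
    already_bound={"Trump": 1050, "Cruz": 560},
)
print(totals, winner)   # {'Trump': 1273, 'Cruz': 560} Trump
```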

Incidentally, if no candidate was projected to enter the convention with at least 1,237 bound delegates, I made no attempt to model or project the eventual winner. In other words, I didn’t try to predict who would win the nomination on the second or subsequent rounds of balloting.

As mentioned earlier, I refreshed these projections nearly every week during the Republican primary season to reflect the actual outcomes of previous primary elections, conventions and local caucuses, and to incorporate the most recent polling data available.

Turning finally to modeling the US general election, under the Electoral College system each state, along with the District of Columbia, has a prescribed number of electors. All but two states use a winner-takes-all system which binds those electors to the single candidate who received the greatest number of popular votes. The exceptions are Maine and Nebraska, where each congressional district has one elector who is bound to the candidate receiving the greatest number of popular votes in that district. The remaining two electors from each of those states are bound to the candidate receiving the largest number of votes statewide, on a winner-takes-all basis. However, due again to the lack of polling data for individual congressional districts in these states, I modeled Maine and Nebraska only at the state level, thus effectively employing statewide poll results as proxies for congressional districts.

To predict the outcome of the US general election, I applied alternate versions of the supplemental working hypothesis described earlier to the unadjusted frequency distributions reported for the polls most recently taken in each state.

The first alternate relied strictly on the actual outcome of the Brexit referendum. There, seven out of every ten presumably undecided voters ultimately chose to leave. So, I adjusted each state’s frequency distribution by allocating 70 percent of the undecided percentage to Trump, and the remainder to Clinton.

The second alternate moderates the actual Brexit referendum outcome as suggested by one of my confidantes. It assumes that half the undecided voters do not vote, and 70 percent of those who do vote for Trump. I adjusted each state’s frequency distribution by allocating 35 percent of the undecided percentage to Trump (that is, half the number of presumably undecided Brexit referendum voters who ultimately chose to leave), and 15 percent (remainder of those who presumably voted) to Clinton.

I then applied the statewide winner-takes-all model to project the number of electors bound to each candidate from each state. I summed these projections by candidate to arrive at the national projection. The candidate projected to win at least 270 bound electors (half the total of Electoral College votes plus one) was deemed the winner.
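Putting the pieces together, a bare-bones sketch of the Electoral College model might look like this. It covers just two battleground states with hypothetical poll numbers, chosen so that the two alternate undecided-allocation hypotheses produce different state-level outcomes:

```python
# Sketch of the Electoral College roll-up described above: each state's latest poll
# is adjusted by allocating a fixed share of the undecided percentage to each
# candidate (70%/30% under the first alternate hypothesis, 35%/15% under the
# second), the state's electors go to whoever leads the adjusted shares, and
# electors are summed (the full analysis compares the total against 270).
def project_electoral_votes(state_polls, electors, to_trump, to_clinton):
    votes = {"Trump": 0, "Clinton": 0}
    for state, poll in state_polls.items():
        undecided = poll.get("Undecided", 0.0)
        trump = poll["Trump"] + to_trump * undecided
        clinton = poll["Clinton"] + to_clinton * undecided
        winner = "Trump" if trump > clinton else "Clinton"
        votes[winner] += electors[state]          # statewide winner-takes-all
    return votes

polls = {
    "FL": {"Trump": 45.5, "Clinton": 47.5, "Undecided": 7.0},
    "PA": {"Trump": 44.0, "Clinton": 45.0, "Undecided": 11.0},
}
electors = {"FL": 29, "PA": 20}

print(project_electoral_votes(polls, electors, to_trump=0.70, to_clinton=0.30))
# first alternate:  Trump leads both states -> {'Trump': 49, 'Clinton': 0}
print(project_electoral_votes(polls, electors, to_trump=0.35, to_clinton=0.15))
# second alternate: Clinton holds FL, Trump takes PA -> {'Trump': 20, 'Clinton': 29}
```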

Rigorous Application of Data to the Models

All the races I predicted were hotly contested. Above and beyond civilized advertising, policy briefings, campaign rallies, speeches and debates, partisans of every stripe blasted falsehoods, negative attacks, dirty tricks, leaks, and endless spin through every available mass media and social media channel in their attempts to reinforce or change what voters were thinking.

Underlying this noise were the inevitable disruptions arising from transformational change. Although Switzerland’s Parliament voted, in March 2016, to retract its application to join the EU, and both Greenland and Algeria chose to leave the EU’s predecessors long ago, until the Brexit referendum no EU member state had ever exercised its right to withdraw under the current Treaty on European Union (TEU) Article 50. The withdrawal of the UK, which is the EU’s second-largest economy, is a transformational change for Europe which, according to many commentators, is likely to create serious repercussions and might be an existential threat to the EU itself.

The rise of populism, and “renewal of isolationist policies in some locales, which would effectively reverse the economic liberalization and globalization trends of recent years” as I warned back in 1999, are symptoms of even broader transformational change that in the worst case harkens back to the dark days preceding World War II. Donald Trump, Nigel Farage and Rodrigo Duterte are by no means the only populists ascending to the worldwide political stage today. Others include Norbert Hofer (Austria), Marine Le Pen (France), André Poggenburg (Germany), Pawel Kukiz (Poland), Marian Kotleba (Slovakia), Recep Tayyip Erdoğan (Turkey) and Nicolás Maduro as well as his predecessor Hugo Chávez (Venezuela). Narendra Modi became India’s Prime Minister on a populist platform, although he recently stated that he is avoiding a “populist course.” In Thailand, the populist politician Thaksin Shinawatra rose to power democratically, and so did his sister, Yingluck, after Thaksin was deposed in a 2006 military coup. And Italy’s conservative populist Silvio Berlusconi first took power in 1994, though he is no longer Prime Minister.

Increasingly, these developments are rendering conventional political wisdom obsolete. The old-school ways, which rely upon centrist voters to nominate and elect mainstream candidates who lean either left or right, are becoming increasingly irrelevant. The advice of pols who cut their political teeth before the mid-nineties can no longer be counted upon to reliably predict the future.

Which is to say, to predict elections correctly in this day and age, it’s vital to filter out as much partisan noise and conventional wisdom as possible, and trust both the data and the models to speak for themselves. For me, time and time again, this meant resisting the temptation to fiddle with my projections based on what television, the Internet and even my confidantes were saying.

As mentioned earlier, during the Republican primary season I refreshed my projections nearly every week to reflect the actual outcomes as they occurred, remove candidates as they dropped out of the race, and incorporate the most recent polling data available. I began my first analysis on March 3rd, 2016, just after the “Super Tuesday” primaries, and it projected Trump would come up 87 delegates short. After that, every one of my subsequent analyses indicated Trump would win, except the one I published on March 25th, 2016, just after the Utah caucuses and the Arizona primary, when I projected he would fall 9 delegates short. My final analysis, on April 29th, 2016, projected Trump would enter the convention with 86 more delegates than he needed. The following Tuesday, Trump performed significantly better than my final projection, sweeping all of Indiana’s 57 delegates, and his main opponent Ted Cruz dropped out. It was game over.

The Philippine Presidential election was scheduled to take place the next week, on Monday, May 9th, 2016. I didn’t pay much attention until I spotted news reports labeling Duterte the “Trump of the East.”

Duterte had led every one of the 19 polls taken after March 25th, 2016, except three. Grace Poe, running as an independent, consistently placed second but never within the statistical margin of error. Curiously, another candidate, Mar Roxas, was tied with Poe in the first of the three polls that put Duterte second or third, and led in the other two. The last of those polls, which showed Roxas surging to a five-point lead, was also the final poll to be conducted before the election. Its immediate predecessor, conducted two days earlier by a highly-regarded polling organization, had shown Duterte leading Poe by 11 points and Roxas by 13.

Shifts of that magnitude over such a short period of time always look suspicious to me. I started digging. It didn’t take long to discover that the organization which released the three polls showing Duterte second or third had never published any other polls, and the first of those three polls had been conducted merely three weeks before the election. I also learned that Roxas and his Liberal party were favored by outgoing Philippines President, Benigno Aquino III, himself a Liberal.

This was all the evidence I needed to disregard the final Philippines poll, and substitute its immediate predecessor. Seeing no need to apply any supplemental working hypotheses, I projected Duterte to win by 11 points. He actually won by 15.6 points over Roxas, and 17.6 points over Poe.

Next up was the Brexit referendum scheduled for June 23rd, 2016. I hadn’t initially planned to predict the outcome, but a lengthy conversation with my British confidante motivated me to give it a try. Using the working hypotheses, data and models I’ve already described, it took almost no time to predict that UK voters would choose to leave the EU. I shared my prediction with two confidantes in Singapore and sat back to watch the result, which ended up looking like this:

The Brexit referendum result: 52 percent Leave, 48 percent Remain

I began to analyze the US general election only on Sunday, November 6th, 2016. To save time, I quickly narrowed the scope of my analysis to 18 battleground states. These were Arizona, Colorado, Florida, Georgia, Iowa, Maine, Michigan, Minnesota, Missouri, Nevada, New Hampshire, New Mexico, North Carolina, Ohio, Pennsylvania, Utah, Virginia and Wisconsin. Victory appeared all but assured for Trump or Clinton in the remaining 32 states, as well as the District of Columbia, so I allocated their electoral votes as all key media outlets were unanimously projecting.

The latest available polls covering 13 of the 18 battleground states were conducted on November 6th, 2016, so most of the data was very recent. Altogether, I relied upon 31 polls from 16 different polling organizations. Notably, the oldest poll was conducted on October 25th, 2016, covering Minnesota. It showed Clinton with a 10 point lead. As it turned out, Clinton won Minnesota by only 1.4 points.

The latest polls, before any adjustments, pointed to a Clinton victory with 277 electoral votes versus 261 for Trump.

After adjusting the poll frequencies according to both the first and second alternate supplemental working hypotheses, the Electoral College model projected Trump the winner. So, I published my final prediction that Trump would win the US Presidency with 276 electoral votes versus 262 for Clinton.

U.S. Presidential Election Prediction by Nelson Nones

As of this writing, Michigan’s unofficial results show that Trump finished ahead of Clinton there by just 13,107 votes (0.3 points), but major news organizations, including the Associated Press (AP), have not yet called the race. If Trump’s current lead holds in Michigan, he will receive 306 electoral votes and Clinton will receive 232. Otherwise Trump will receive 290 electoral votes and Clinton will receive 248.

It’s perhaps premature to judge my forecast accuracy because Michigan still hasn’t been called. But if Michigan holds for Trump, my performance looks like this:

  • On a geographic basis, I have correctly predicted 50 of 55 (91 percent). The total includes 50 states, plus the District of Columbia, plus the four congressional districts in Maine and Nebraska.
  • On an electoral vote basis using either of the supplemental working hypotheses, I have correctly predicted 478 of 538 (89 percent).
  • In terms of the actual outcome, my prediction was 100 percent accurate.

I’ve also compared the latest polls as well as my projected winning margins against the actuals reported so far. The average statistical margin of error among the 31 polls within the scope of my final US general election analysis was 3.3 percent. So far, their average actual error is 4.2 percent with a 2 percent standard deviation, and the actual polling error exceeded the statistical margin of error in half the states I analyzed. Meanwhile, for either of the supplemental working hypotheses, the average actual error for my projected winning margins is 3.0 percent with a 0.5 percent standard deviation.

These performance measures tell me that none of my working hypotheses are likely to be disproven by the actual outcomes. Indeed, had I not framed them, and relied solely upon the latest available polls instead, I doubt that I would have correctly predicted any of these events except the Philippines Presidential election.

Conclusion

My method boiled down to framing hypotheses, adjusting the latest available polls according to my working hypotheses, if any, and running the adjusted data through deterministic models of voting systems and rules to project the winners. Along the way, especially during the Republican primary season, I discovered and corrected numerous errors of my own making. I am indebted to my confidantes for their information and critiques, and also for pointing out important details such as specific delegate allocation rules I’d overlooked. As such, my endeavors were error-prone at times and remained a constant work in process throughout.

In truth, along the way I judged my approach to be quite primitive. I didn’t (and couldn’t, from halfway round the world) consider the myriad soft factors or nuances that other prominent forecasters, with seemingly vast resources at their disposal, appeared to be incorporating into probabilistic models which seem far more elegant than mine.

But in the end, it’s the degree of objectivity combined with diligent work that really counts. And I can truthfully say that I did my best to leave my personal preferences outside the door, trust my cumulative experience and first-hand observations, incorporate my previous research, maintain a balanced point of view, and ultimately let the data points themselves guide me to my conclusions.

 

©2016 Nelson M. Nones. All rights reserved.