More raps for Google on the “fake reviews” front.

Google is trying to keep its local search initiative from devolving into charges and counter-charges of “fake news” à la the most recent U.S. presidential election campaign – but is it trying hard enough?

It’s becoming harder for the reviews that show up on Google’s local search function to be considered anything other than “suspect.”

The latest salvo comes from search expert and author Mike Blumenthal, whose recent blog posts on the subject question Google’s willingness to level with its customers.

Mr. Blumenthal could be considered one of the premier experts on local search, and he’s been studying the phenomenon of fake information online for nearly a decade.

The gist of Blumenthal’s argument is that Google isn’t taking sufficient action to clean up fake reviews (and related service industry and affiliate spam) that appear on Google Maps search results, which is one of the most important utilities for local businesses and their customers.

Not only that, but Blumenthal also contends that Google is publishing reports which represent “weak research” that “misleads the public” about the extent of the fake reviews problem.


Google contends that the problem isn’t a large one. Blumenthal feels differently – in fact, he claims the problem is growing worse, not better.

In a blog article published this week, Blumenthal outlines how he’s built out spreadsheets of reviewers and the businesses on which they have commented.

From this exercise, he sees a pattern of fake reviews being written for overlapping businesses, and that somehow these telltale signs have been missed by Google’s algorithms.

A case in point: three “reviewers” — “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” — have all “reviewed” three very different businesses in completely different areas of the United States:  Bedoy Brothers Lawn & Maintenance (Nevada), Texas Car Mechanics (Texas), and The Joint Chiropractic (Arizona, California, Colorado, Florida, Minnesota, North Carolina).

They’re all 5-star reviews, of course.

It doesn’t take a genius to figure out that “Charlz Alexon,” “Ginger Karime” and “Jen Mathieu” won’t be found in the local telephone directories where these businesses are located. That’s because they’re figments of some spammer-for-hire’s imagination.
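The kind of cross-referencing Blumenthal describes can be approximated in just a few lines of code. Here’s a rough sketch of the idea (the data is the toy example above; the overlap threshold and function name are my own illustration, not Blumenthal’s actual method):

```python
from itertools import combinations

# Reviewer -> set of businesses reviewed (toy data from the example above)
reviews = {
    "Charlz Alexon": {"Bedoy Brothers Lawn & Maintenance",
                      "Texas Car Mechanics", "The Joint Chiropractic"},
    "Ginger Karime": {"Bedoy Brothers Lawn & Maintenance",
                      "Texas Car Mechanics", "The Joint Chiropractic"},
    "Jen Mathieu":   {"Bedoy Brothers Lawn & Maintenance",
                      "Texas Car Mechanics", "The Joint Chiropractic"},
    "Real Customer": {"Texas Car Mechanics"},
}

def suspicious_pairs(reviews, min_overlap=3):
    """Flag pairs of reviewers who reviewed the same min_overlap+ businesses.

    Unrelated businesses in different regions rarely share legitimate
    reviewers, so heavy overlap is a telltale sign of a spam ring.
    """
    flagged = []
    for (a, biz_a), (b, biz_b) in combinations(reviews.items(), 2):
        shared = biz_a & biz_b
        if len(shared) >= min_overlap:
            flagged.append((a, b, shared))
    return flagged

for a, b, shared in suspicious_pairs(reviews):
    print(f"{a} and {b} overlap on {len(shared)} businesses")
```

If a hobbyist can flag these patterns with set intersections, it’s fair to ask why Google’s algorithms miss them.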

The question is, why doesn’t Google develop procedures to figure out the same obvious answers Blumenthal can see plain as day?

And the follow-up question: How soon will Google get serious about banning reviewers who post fake reviews on local search results?  (And not just targeting the “usual suspect” types of businesses, but also professional services such as physician and attorney practices.)

“If their advanced verification [technology] is what it takes to solve the problem, then stop testing it and start using it,” Blumenthal concludes.

To my mind, it would be in Google’s own interest to get to the bottom of these nefarious practices. If the general public comes to view reviews as “fake, faux and phony,” that’s just one step before ceasing to use local search results at all – which would hurt Google in the pocketbook.

Might it get Google’s attention then?

When people think “search,” they still think “Google.”

… And they might say it, too — thanks to the rise of voice search.

Over the years, many things have changed in the world of cyberspace. But one thing seems to be pretty much a constant:  When people are in “search” mode online, most of them are playing in Google’s ballpark.

This behavior has been underscored yet again in a new survey of ~800 consumers conducted by Fivesight Research.

Take a look at these two statistics that show how strong Google’s search popularity remains today:

  • Desktop users: ~79% of searches are on Google
  • Smartphone users: ~86% use Google

The smartphone figure above is even more telling in that the percentage is that high whether users are on an iPhone or an Android system.

But here’s another very interesting finding from the Fivesight survey: Google’s biggest competition isn’t other search engines like Bing or Yahoo.  Instead, it’s Siri, which now accounts for ~6% of mobile search market share.

So what we’re seeing in search isn’t a shift to other providers, but rather a shift into new technologies. To illustrate, nearly three in four consumers are using voice technologies such as Siri, Google Now and Microsoft Cortana to supplement their traditional search activities.

Some marketing specialists contend that “voice search is the new search” – and it’s hard not to agree with them. Certainly, voice search has become easier in the past year or so as more mobile devices as well as personal home assistants like Amazon Alexa have been adopted by the marketplace.

It also helps that voice recognition technology continues to improve in quality, dramatically reducing the incidences of “machine mistakes” in understanding the meaning of voice search queries.

But whether it’s traditional or voice-activated, I suspect Google will continue to dominate the search segment for years to come.

That may or may not be a good thing for consumers. But it’s certainly a good thing for Google – seeing as how woefully ineffective the company has been in coming up with any other business endeavor even remotely as financially lucrative as its search business.

PR Practices: WOM Still Wins in the End

These days, there are more ways than ever to publicize a product or service so as to increase its popularity and its sales.

And yet … the type of thing most likely to convince someone to try a new product – or to change a brand – is a reference or endorsement from someone they know and trust.

Omnichannel marketing promotions firm YA conducted research in 2016 with ~1,000 American adults (age 18+) that quantifies what many have long suspected: ~85% of respondents reported that they are more likely to purchase a product or service if it is recommended by someone they know.

A similarly high percentage — 76% — reported that an endorsement from such a person would cause them to choose one brand over another.

Most important of all, ~38% of respondents reported that when researching product or services, a referral from a friend is the source of information they trust the most.  No other source comes close.

This means that online reviews, news reports and advertising – all of which have some impact – aren’t nearly as important as the opinions of friends, colleagues or family members.

… Even if those friends aren’t experts in the topic!

It boils down to this:  The level of trust between people has a greater bearing on purchase decisions because consumers value the opinion of people they know.

Likewise, the survey respondents exhibited a willingness to make referrals of products and services, with more than 90% reporting that they give referrals when they like a product. But a far lower percentage — ~22% — have actually participated in formal refer-a-friend programs.

This seems like it could be an opportunity for brands to create dedicated referral programs, wherein those who participate are rewarded for their involvement.

The key here is harnessing the referrers as “troops” in the campaign, so as to attract a larger share of referral business where the opportunities are strongest — and tracking the results carefully, of course.

In copywriting, it’s the KISS approach on steroids today.

… and it means “Keep It Short, Stupid” as much as it does “Keep It Simple, Stupid.”

Regardless of the era, most successful copywriters and ad specialists have always known that short copy is generally better-read than long.

And now, as smaller screens essentially take over the digital world, the days of copious copy flowing across a generous preview pane area are gone.

More fundamentally, people don’t have the screen size – let alone the patience – to wade through long copy. These days, the “sweet spot” in copy runs between 50 and 150 words.

Speaking of which … when it comes to e-mail subject lines, the ideal length keeps getting shorter and shorter. Research performed by SendGrid suggests that it’s now down to an average length of about seven words for the subject line.

And the subject lines that get the best engagement levels are a mere three or four words.

So it’s KISS on steroids: keeping it short as well as simple.

Note: The article copy above comes in at under 150 words …!

More Trouble in the Twittersphere

With each passing day, we see more evidence that Twitter has become the social media platform that’s in the biggest trouble today.

The news is replete with articles about how some people are signing off from Twitter, having “had it” with the politicization of the platform. (To be fair, that’s a knock on Facebook as well these days.)

Then there are reports of how Twitter has stumbled in its efforts to monetize the platform, with advertising strategies that have failed to generate the kind of growth to match the company’s optimistic forecasts. That bit of bad news has hurt Twitter’s share price pretty significantly.

And now, courtesy of a new analysis published by researchers at Indiana University and the University of Southern California, comes word that Twitter is delivering misleading analytics on audience “true engagement” with tweets.  The information is contained in a peer-reviewed article titled “Online Human-Bot Interactions: Detection, Estimation and Characterization.”

According to findings as determined by Indiana University’s Center for Complex Networks & Systems Research (CNetS) and the Information Sciences Institute at the University of Southern California, approximately 15% of Twitter accounts are “bots” rather than people.

That sort of news can’t be good for a platform that is struggling to elevate its user base in the face of growing competition.

But it’s even more troubling for marketers who rely on Twitter’s engagement data to determine the effectiveness of their campaigns. How can they evaluate social media marketing performance if the engagement data is artificially inflated?

Fifteen percent of all accounts may seem like a rather small proportion, but in the case of Twitter that represents nearly 50 million accounts.

To add insult to injury, the report notes that even the 15% figure is likely too low, because more sophisticated and complex bots could have appeared as “humans” in the researchers’ analytical model, even though they aren’t.

There’s actually an upside to social media bots – examples being automatic alerts of natural disasters or customer service responses. But there’s also growing evidence of nefarious applications abounding.

Here’s one that’s unsurprising even if irritating: bots that emulate human behavior to manufacture “faux” grassroots political support.  But what about the delivery of dangerous or inciting propaganda thanks to bot “armies”?  That’s more alarming.

The latest Twitter-bot news is more confirmation of the deep challenges faced by this particular social media platform.  What’s next, I wonder?

B-to-B content marketers: Not exactly a confident bunch.

In the world of business-to-business marketing, all that really matters is producing a constant flow of quality sales leads.  According to Clickback CEO Kyle Tkachuk, three-fourths of B-to-B marketers cite their most significant objective as lead generation.  Pretty much everything else pales in significance.

This is why content marketing is such an important aspect of commercial marketing campaigns.  Customers in the commercial world are always on the lookout for information and insights to help them solve the variety of challenges they face on the manufacturing line, in their product development, quality assurance, customer service and any number of other critical functions.

Suppliers and brands that offer a steady diet of valuable and actionable information are often the ones that end up on a customer’s “short-list” of suppliers when the need to make a purchase finally rolls around.

Thus, the role of content marketers continues to grow – along with the pressures on them to deliver high-quality, targeted leads to their sales forces.

The problem is … a large number of content marketers aren’t all that confident about the effectiveness of their campaigns.

It’s a key takeaway finding from a survey conducted for content marketing software provider SnapApp by research firm Demand Gen.  The survey was conducted during the summer and fall of 2016 and published recently in SnapApp’s Campaign Confidence Gap report.

The survey revealed that more than 80% of the content marketers queried reported being just “somewhat” or “not very” confident regarding the effectiveness of their campaigns.

Among the concerns voiced by these content marketers is that the B-to-B audience is becoming less enamored of white papers and other static, lead-gated PDF documents to generate leads.

And yet, those are precisely the vehicles that continue to be used most often to deliver informational content.

According to the survey respondents, B-to-B customers not only expect to be given content that is relevant, they’re also less tolerant of resources that fail to speak to their specific areas of interest.

For this reason, one-third of the content managers surveyed reported that they are struggling to come up with effective calls-to-action that capture attention, interest and action instead of being just “noise.”

The inevitable conclusion is that traditional B-to-B marketing strategies and similar “seller-centric” tactics have become stale for buyers.

Some content marketers are attempting to move beyond these conventional approaches and embrace more “content-enabled” campaigns that can address interest points based on a customer’s specific need and facilitate engagement accordingly.

Where such tactics have been attempted, content marketers report somewhat improved results, including more open-rate activity and an increase in clickthrough rates.

However, the degree of improvement doesn’t appear to be all that impressive. Only about half of the survey respondents reported experiencing improved open rates.  Also, two-thirds reported experiencing an increase in clickthrough rates – but only by 5% or less.

Those aren’t exactly eye-popping improvements.

But here’s the thing: Engagement levels with traditional “static” content marketing vehicles are likely to actually decline … so if content-enabled campaigns can arrest the drop-off and even notch improvements in audience engagement, that’s at least something.

Among the tactics content marketers consider for creating more robust content-enabled campaigns are:

  • Video
  • Surveys
  • Interactive infographics
  • ROI calculators
  • Assessments/audits

The hope is that these and other tools will increase customer engagement, allow customers to “self-qualify,” and generate better-quality leads that are a few steps closer to an actual sale.

If all goes well, these content-enabled campaigns will also collect data that helps sales personnel accelerate the entire process.

Getting a handle on survey response rates.

It turns out, there are some predictive factors.

One of the nice things about the proliferation of online surveys in recent years is that, over time, we’ve come to understand survey response dynamics much better.

Of course, predicting response rates with flawless precision is impossible due to the unique attributes of each survey, the sample composition and so forth.  But thanks to a 2015 compilation of “bottom-line” information by content marketing specialist Andrea Fryrear, the following points are good ones for marketing personnel undertaking market survey work.

Surveys aimed at “internal audiences” outperform external ones.

Targeting an internal audience such as a company’s own employee base is likely going to generate higher response rates (in the neighborhood of 35% to 40%, give or take). For surveys of an external audience, it’s more like 10% or perhaps even lower.

The reason is simple: Surveys aimed at internal audiences are likely very well targeted, whereas with an external audience, often it’s difficult to reach only the right type of respondents.  At least some of them will turn out to be poor targets.

Additional motivating factors.

Other factors that can influence survey response rates include:

  • Customer loyalty – People who feel a connection with the brand conducting a survey tend to be more likely to participate.
  • Brand recognition – Surveys that focus on well-known brands will typically outperform ones from an unknown source or dealing with unfamiliar brands.
  • Perceived benefit – The “WIIFM” factor.  For example, response rates can soar even higher if the respondent population is motivated by serious incentives.  I recall getting more than a 60% response rate on a mail survey of an external sample because the monetary incentive was a $2 bill.
  • Demographics – The reality is that certain segments of the population are more likely to respond to surveys than others.  Think everything from age and gender to ethnicity and geographic location.
  • Survey distribution – Certain audiences are used to interacting on social media … others online … still others offline.  Chances are, you already know which type of research targets those are within your target markets, and it should influence your choice of survey delivery.

Survey length can make or break your response and completion rates.

To achieve the highest response rates, ideally surveys should take five minutes or less to complete. Ten minutes or less is probably OK, too.  But anything longer than that will likely have a deleterious effect on your response rate.

How many questions does this mean? On average, respondents can complete five closed-ended questions in a minute’s time … but only two open-ended ones.
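Those rules of thumb translate into a quick back-of-the-envelope estimate. As a minimal sketch (the function name and the worked example are my own framing of the figures above):

```python
def estimated_minutes(closed_questions, open_questions):
    """Estimate survey completion time using the rules of thumb above:
    roughly 5 closed-ended questions per minute, 2 open-ended per minute."""
    return closed_questions / 5 + open_questions / 2

# A survey with 15 closed-ended and 2 open-ended questions:
# 15/5 + 2/2 = 4.0 minutes -- inside the five-minute sweet spot.
print(f"Estimated completion time: {estimated_minutes(15, 2):.1f} minutes")
```

Running the numbers this way before fielding a survey makes it easy to see when a questionnaire has crept past the five-minute mark.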

Survey reminders? Yes.

Particularly with online surveys, it’s a good idea to send reminder notices to those who haven’t completed surveys as you get closer to the cut-off date. Sending two or three reminders is a good rule of thumb … and try sending them at different times of the day or different days of the week so that you can reach as many different prospective respondents as possible.

Learning from the experience of the thousands of surveys administered every month should make it easier for marketers to ensure their next survey will generate successful results instead of flame out.  There’s really no reason for failure considering the wealth of “experiential information” that’s out there.