Just when you thought there were no new breakthroughs to be had in search marketing … along comes Google Goggles. It’s a new “visual search” application focusing on computer vision for mobile phones, currently in development and testing at Google Labs. An early version has already been unveiled by the Goggles product development team and been released to Android mobile users.
What does Google Goggles do? It allows anyone to search on a cell phone simply by snapping a picture of an object. Once the picture has been taken, it is “read” by Google’s cloud, algorithms search for the information, the matches are ranked and detailed search results appear on your phone – just as if you had typed in a search command.
Because this is far easier to show than to explain, Google has issued a short video clip that features several members of the development team demonstrating how Goggles works. Currently, the app works well with inanimate objects such as DVDs, books, and physical landmarks. You can even point your phone at a store building while using the geo-targeting feature, and search results pertaining to the store and its merchandise will appear on your phone.
What doesn’t work so well are items like food, plants, animals and people … yet. Give it a few more years, and no doubt the brains at Google will have figured out those challenges as well.
While at present Goggles is available only to Android phone users, it is Google’s intention to develop and offer the program to other popular mobile platforms. So iPhone and BlackBerry users needn’t worry.
Incidentally, Goggles isn’t the only new development in search that’s happening right now. Google is also working on creating real-time translation in multiple languages by speaking a query into a search engine app. (The audio is translated into a digital request before being processed and returning results.) And developers at Ball State University are working on devices that can “read” search commands simply by the flick of a finger or by waving in front of the screen.
What’s next? Search results appearing after someone merely thinks about making a query?
A few months ago, I blogged about “Wikipedians” – the hordes of people around the world who write and edit for the online encyclopedia sensation. In the “free information” world of the Internet, in just a few short years Wikipedia has effectively knocked more traditional encyclopedias like World Book and Britannica off their perch as the purveyors of broad knowledge.
With its self-described goal to be “the sum of all human knowledge,” Wikipedia has become the world’s fifth most popular web site, attracting more than 325 million visits per month – a 20% increase in traffic from a year ago. All this success, even as there have been well-publicized incidents of “rogue” information incorporated into Wikipedia entries, either through honest error or deliberate insertion of false material; the premature report of Senator Ted Kennedy’s death in his Wikipedia entry, months before his actual passing, is but one recent example.
Indeed, among the strengths of Wikipedia’s information model has been the idea of crowd-sourcing, with its legions of editors who have kept an eagle eye on Wikipedia entries to police them and aggressively remove incorrect, non-cited or otherwise suspect information.
But just last week, The Wall Street Journal published a front-page story reporting that Wikipedia is losing its volunteer force at a much steeper rate than ever before. Writers and editors have been departing Wikipedia faster than new ones are joining – and the net decline became particularly pronounced in 2009, to the tune of a net loss of ~25,000 volunteer editors every month.
What’s causing this? A few reasons that have been suggested are:
As Wikipedia has grown, the number of topics not yet covered has diminished, making for fewer opportunities for new writers and editors to come on board with entries or to improve on existing articles.
Well-publicized problems with the veracity of some articles have caused Wikipedia to tighten its editorial and submission guidelines. The not-for-profit Wikimedia Foundation has adopted a plethora of rules (spelled out over dozens of web pages) that make it much harder for editors who are less familiar with the software to successfully post new articles – causing some to be removed within mere hours of being posted. It’s no secret that the burnout rate for writers is going to be higher when they have to continually debate and defend the content and format of their articles.
A reduction in the “passion” factor – Wikipedians are a bit of a different breed, driven as much by altruism as by a generous dose of ego (“I’m better than the rest of you”) when it comes to knowledge. They’ve even created their own special social world, with the Wikimedia Foundation hosting annual get-togethers of volunteers in such exotic locales as Buenos Aires, where they can collectively bask in the aura of their shared “specialness.” But, people being people, over time the thrill abates for all but the most passionate contributors.
The downturn in the economy probably doesn’t help, either. More free time might be available to devote to Wikipedia for those newly out of work … but in such times, more attention and effort is understandably going to be placed elsewhere.
Recognizing the need to reverse recent trends, the Wikimedia Foundation is working on streamlined instructions to help novice editors contribute articles that successfully adhere to the submission guidelines. It is also working to actively recruit writers and editors from the scientific community, historically a soft spot in Wikipedia’s information coverage.
Whether these steps will be enough to make a difference is an open question. Encouraging more volunteers from the world of science for what is essentially a volunteer mission with little or no peer recognition has been (and likely will continue to be) a tall order. And the jury is still out on the potential effectiveness of the new streamlined submission guidelines, because they haven’t been put into practice yet.
Still, there’s no denying that Wikipedia has had a major impact on the way people research information … and it has accomplished this over the span of only a few short years.
What’s happening in the world of direct marketing these days, and where is the best ROI to be found?
Certainly, a lot of inboxes are positively groaning under the sheer quantity of e-mail volume, and many people have responded by beefing up their spam filtering. But the most recent economic impact study conducted by research firm Global Insight for the Direct Marketing Association reports that commercial e-mail marketing delivers the best bang for the promotional buck – more than $43 for every dollar spent on it.
By comparison, ROI from Google AdWords and other paid search advertising activities generates about $22 for every dollar spent.
According to the analysis, postal direct mail initiatives deliver lower returns on investment — with catalogs returning just a little over $7 per dollar spent and other forms of postal direct mail around $15.
Despite its stellar ROI numbers, e-mail marketing is actually showing a slight drop in ROI, and that decline is forecast to continue in the upcoming years, at least partially for the reasons noted above. Even so, e-mail is forecast to deliver an ROI of ~$38 for every dollar invested in 2013.
Of course, it’s important to recognize that search marketing is where much of the heavy action is these days. Internet search drives ~$244 billion in sales as compared against a related cost of ~$11 billion.
Commercial e-mail? It drives just ~$26 billion in sales … although the cost to drive those sales is a relative pittance at ~$600 million.
And over on the postal side of the ledger, no one should be surprised to learn that direct mail expenditures, while still large at ~$44 billion, are down ~16% in just the past year alone.
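The ROI figures quoted above are simply sales generated divided by promotional dollars spent. A quick sketch, using the approximate figures reported in the study, shows how the two big online channels compare:

```python
# Approximate figures from the DMA/Global Insight study, in billions of dollars.
channels = {
    "search": {"sales": 244.0, "cost": 11.0},
    "e-mail": {"sales": 26.0, "cost": 0.6},  # cost is ~$600 million
}

for name, c in channels.items():
    roi = c["sales"] / c["cost"]
    print(f"{name}: ~${roi:.0f} returned per dollar spent")
    # search: ~$22 per dollar; e-mail: ~$43 per dollar
```

Running the numbers confirms the study’s headline ratios: search’s huge sales volume comes at a proportionally higher cost, which is exactly why e-mail’s ROI figure looks so strong despite its far smaller sales total.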
[But you can look on the bright side: Your promo piece is going to be noticed a lot easier among the smaller stack of daily snail mail that’s being delivered!]
The first toll-free phone lines, called WATS lines (for Wide Area Telephone Service), were introduced in the United States nearly 50 years ago. For years thereafter, all toll-free numbers used the prefix “800,” so that many consumers came to refer to toll-free lines as “800 numbers.” And they were very popular with consumers because of the then-relatively high cost of long-distance calling.
But just as the rise of cell phone popularity caused a proliferation of new area codes, the growing popularity of toll-free phone numbers meant a dwindling supply of lines within the “800” prefix. Hence, the “888,” “877” and “866” toll-free prefixes have been introduced over the past 13 years to expand the supply of available lines.
But old habits die hard. Even today, many consumers reflexively refer to all toll-free lines as “800 numbers.” And indeed, a study conducted earlier this year by Engine Ready, a California-based search marketing software and service firm, finds that “800” lines actually outperform the other prefixes when it comes to phone conversions.
For the study, Engine Ready sampled ~18,000 visits to a single lead-generation web site. The visits were driven by a Google AdWords search engine marketing campaign, producing ~2,600 call-in and online conversions. Visits were split evenly among four web landing pages that were identical save for the call-in response action that contained distinct phone numbers featuring the four different toll-free prefixes.
While little difference was observed between the four prefixes in online conversion behavior (form fills), the “800” prefix clearly performed best of the four toll-free lines for call-in responses. Its conversion performance ranged from 20% to 60% better than the three other phone lines — that despite the fact that there was no practical difference at all between the phone numbers except for the different prefixes.
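To make that 20%-to-60% range concrete, here is a minimal sketch of the lift calculation behind a test like Engine Ready’s. The per-prefix visit and call counts below are hypothetical placeholders invented for illustration (the report doesn’t break out the raw figures), not the study’s actual data:

```python
# Hypothetical call-in results from an even four-way split of visits.
results = {
    "800": {"visits": 4500, "calls": 180},
    "888": {"visits": 4500, "calls": 140},
    "877": {"visits": 4500, "calls": 130},
    "866": {"visits": 4500, "calls": 115},
}

# Conversion rate per prefix, then relative lift of "800" over each rival.
rates = {prefix: r["calls"] / r["visits"] for prefix, r in results.items()}
baseline = rates["800"]

for prefix, rate in rates.items():
    if prefix == "800":
        continue
    lift = (baseline - rate) / rate * 100  # how much better "800" performed
    print(f'"800" outperforms "{prefix}" by {lift:.0f}%')
```

With these placeholder counts, the lifts land between roughly 29% and 57%, comfortably inside the 20%-to-60% spread the study reported.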
Moral of the story: Even in today’s “the only thing that’s constant is change” environment, sticking with the “tried-and-true” when it’s possible to do so can be a pretty smart move. And if it’s inbound telephone sales you’re doing, make sure you insist on getting one of those old-fashioned “800 numbers.”
The latest financial results for the U.S. Postal Service are in, and they’re a continuation of the same old story line: Tons of red ink and fingers pointing in every direction.
The USPS posted a net loss of $3.9 billion for FY 2009, “only” $1.1 billion worse than the previous year. And that’s even after receiving a $4 billion deferment on paying an annual $5.4 billion obligation to pre-fund healthcare premiums for its retirees.
Not surprisingly, total postal revenues were down about 10% to ~$68 billion, not only because of the economic downturn but also because of the continuing shift to digital communications. Total physical mail volume declined ~13% to around 177 billion pieces.
Given the sorry financial stats, one would assume that the USPS would be moving forward in all haste with its plans to shutter as many as 10% of its post offices and branches around the country.
But if you thought that … you would be wrong. What started out as a potential closure listing of ~3,200 stations (the impressively named Station & Branch Optimization Initiative) quickly became ~700 stations and branches that were actually slated to close. Then that figure was trimmed to just over 400. And now we have word that the closure figure is down to ~370.
Given more time, the number of closures may well slip even further … and at some point the whole exercise becomes completely meaningless as a cost-cutting endeavor.
And then there are the persistent rumors that mail delivery will be cut back to five days from six. But so far, that has been nothing more than an idle threat.
Welcome to the wonderful world of government agencies: Stultifying bureaucratic procedures that are near-Byzantine in their complexity, coupled with reacting to every conceivable interest group while being too timid to make any hard choices at all when it comes to managing their operations like any business in private industry must do.
Mozilla’s Firefox web browser marked a milestone this past week, celebrating its fifth birthday.
No question about it, the open-source browser has been a big success, with growth that has been impressive by any measure. As of the end of July, Firefox had been downloaded more than 1 billion times.
Indeed, a mainstream platform like this one (WordPress) reports that Firefox now represents a larger share of activity than Internet Explorer — 46% versus 39% of traffic.
But now that Firefox has come of age, it’s facing some of the same “grown up” challenges that other browsers face.
In fact, application security vendor Cenzic has just released its security trends report covering the first half of 2009. Guess what? Firefox led the field of web browsers in terms of reported total vulnerabilities. Here are the stats from Cenzic:
Firefox: 44% of reported browser vulnerabilities
Apple Safari: 35%
Internet Explorer: 15%
Opera: 6%
Compared to Cenzic’s report covering the second half of 2008, Firefox’s figure is up from 39%, while IE’s number is down sharply from 43%.
Welcome to reality. As Firefox has grown in importance, it’s gained more exposure to vulnerabilities. A significant portion of those vulnerabilities have come via plug-ins.
Mozilla is trying to take steps to counteract this, including launching a plug-in checker service to ensure that users are running up-to-date versions. It also offers a “bug bounty” to anyone who discovers security holes in Firefox.
And the good news: even though Firefox had the highest number of vulnerabilities, Cenzic itself notes that this doesn’t necessarily mean Firefox users are more vulnerable to security threats. Plus, those vulnerabilities tend to be patched more quickly than those found in other browsers.
So on this fifth anniversary milestone, Firefox can be justly praised as a major success story in the web browser world.
How can the views and perspectives of newspaper publishers and readers be so out of kilter? It might have something to do with “wishful thinking” on the part of the publishers.
Case in point: American Press Institute has just released the results of a field research study that compares the opinions of readers and publishers on paying for news content.
Naturally, this issue is of paramount concern to newspapers that are trying to create a new business model that is profitable. In fact, nearly 60% of the publisher respondents in the survey reported that they’re considering requiring paid access for online news — news that is currently provided to readers free of charge. At the same time, these respondents seem to believe that consumers will willingly “pay to play” in a new paid-content environment.
But I wonder about that.
Here’s an example of the disconnect between newspaper publishers and news consumers found in the survey: More than two-thirds of the publishers believe it will be “not very easy” or “not easy at all” for consumers to find similar news content online from alternative free sources once the shift to paid content happens. Do consumers agree? Well … only ~43% think the same way.
And where do newspaper publishers think people will go for news if their paper’s free online information is no longer available to them? Again, we see a big disparity in the results. The top three sources publishers think consumers will turn to are:
The publisher’s own print newspaper: 75%
Other local media: 55%
Television: 53%
For consumers, those alternate sources all rated lower – in two cases, dramatically so:
The publisher’s own print newspaper: 30%
Other local media: 17%
Television: 45%
[For the record, the alternative free news source identified by the most consumers was “other local web sites,” cited by 68% of respondents.]
With such dramatically different views held by newspaper publishers and their consumers, it’s clear that both sides can’t be correct. I’m willing to bet that the consumers’ responses are closer to the reality.
For this reason, it would be advisable for publishers to tread very carefully as they attempt a shift to a paid content business model. Does the term “evaporating audiences” mean anything to them?
With the constantly changing online world, it’s always helpful when a new survey comes along that gives us the latest reading of just what’s going on with online behavior.
Harris Interactive has just conducted its second survey of online users for Symantec’s Norton brand. The survey spanned 12 countries: the U.S., five European nations, plus Australia, Canada, China, India, Japan and Brazil. The total number of people taking the 15-minute online survey was very large: nearly 6,500 adults plus ~1,300 kids age 8 to 17. That’s close to double the number of respondents who participated in Harris Interactive’s first such research project, conducted in 2007 — and it gives us some very good data to chew on.
The Harris survey, published as the Norton Online Living Report, found that the average number of hours spent online per month for adults was ~89 hours. That’s well more than three full days in a month. But a look at the numbers found in some of the countries surveyed shows dramatically different results:
China: 141 hours spent online per month on average
Brazil: 119 hours
India: 87 hours
Japan: 85 hours
United States: 56 hours
Canada: 46 hours
No doubt, these figures will challenge the perceptions of some who have thought that citizens in the developed world are more “wired” than those in the developing nations. Could it be that there are fewer exciting alternative activities competing for consumers’ time and attention in China or Brazil?
As for kids’ online activities, one very interesting finding from Harris is that American and British youth report being online twice as long as their parents think they’re online:
U.S. kids report being online 42 hours per month, versus 18 hours their parents estimate they’re spending online.
U.K. youths report being online 44 hours per month, versus 19 hours their parents estimate.
Clearly, the age-old disconnect between parents’ perceptions and kids’ actual behavior is alive and well – and hasn’t changed at all in the Internet age!
What’s more, even with parents’ wildly inaccurate estimates of the time their kids are online, nearly three-fourths of U.S. and Canadian parents believe their children are spending too much time at the computer. So the perception-versus-reality gulf is even more stark.
[The only country where a clear minority of parents feel this way is Japan, at ~30%.]
Basically, when it comes to their children, the Harris survey quantifies what any parent already knows: for the kids, it’s online all the way. Consider these stats that Harris Interactive gleaned from surveying U.S. children:
American kids have an average of nearly 85 online friends – the highest among all 12 countries surveyed.
They “out-text” the rest of the world, too – with U.S. kids spending upwards of 10 hours per week texting compared to ~4 hours for kids elsewhere.
And yet … it appears that the kids themselves have an inkling there are healthy limits that should be placed on the “online, all the time” experience. Half of the American youths surveyed agree that “instant messaging and texting make learning to write more difficult.”
Anyone who has frustratingly received e-mail messages from their children in college … with nary a capitalized letter or punctuation mark to be found in them … will surely identify with this last bit!
Border guards dismantling the fence dividing East and West: Austro-Hungarian border, Summer 1989.
This month, the world commemorates the momentous events of 20 years ago when the Berlin Wall fell and a divided Germany came together amidst the wreckage of the Soviet Empire.
Already, there have been poignant tributes, such as the recent celebration in Berlin honoring three elder statesmen who were at the center of the events at that time: Mikhail Gorbachev, President Bush (the elder) and Germany’s Chancellor Helmut Kohl.
But what seems lost among the commemorations is the fact that the Berlin events were set in motion earlier in 1989, some 350 miles to the south. And they involved neither East nor West Germany.
A reformer who was also a Communist Party member, Hungarian Prime Minister Miklós Németh had come to power in 1988 and was determined to bring Hungary into a closer economic and political relationship with the rest of Europe. Faced with horrific economic conditions at home, he knew he had limited time to effect positive change or he would be replaced.
Students of history know that the “ties that bind” Austria and Hungary date back ~700 years, through centuries of the Habsburg Empire to the early 1900s when Vienna and Budapest were two of the most glittering cities of Europe.
In a sense, the forced separation of the two countries between East and West Bloc factions was as unnatural as the division of Germany itself; a quick look at the bevy of German and Hungarian surnames in the Vienna telephone directory proves the point.
Secret communications between Hungary and Austria culminated in a public ceremony held on the Austro-Hungarian frontier on May 2, 1989, where, documented by television cameras, the electric fence running the length of the border was declared an “anachronism” and a hole was ceremoniously cut in it.
“What are those Hungarians up to?” bellowed East German premier Erich Honecker at an East German Politburo meeting the next day. The answer was obvious. Soon throngs of East German citizens, traveling to a fellow Eastern Bloc country on tourist visas, simply moved across the Hungarian border into Austria from where they could continue on to West Germany to be reunited at long last with relatives and friends.
The die was cast. Faced with the prospect of its citizens draining out of the country, the East German government had little choice but to announce a relaxation in travel restrictions to West Germany.
This attempt at accommodation was a classic case of “too little, too late”: The avalanche that was soon to come was simply overwhelming. Down came the Berlin Wall – and down went the East German government.
In hindsight, it’s easy to recognize the important role Mikhail Gorbachev played in the events of 1989. By signaling that Soviet troops would not necessarily come to the aid of beleaguered Eastern European satellite regimes, Gorbachev gave the restive citizens of East Germany the courage to seize the moment and take decisive action while they had the chance.
But the greatest credit must go to the governmental leaders of Hungary and Austria. It was these unsung heroes who took the biggest risks from the very beginning, bravely plotting their moves in the face of potentially severe political and military repercussions. (After all, memories of the ill-fated Hungarian Revolution of 1956 and the subsequent refugee flight across the Austrian border weren’t all that distant.)
In a sense, history came full circle in 1989. At the beginning of the century, Germany had been dragged into World War I because of problems faced by its Habsburg neighbor, Austria-Hungary. So many of the major political challenges in the 20th Century – communism, fascism, the Cold War, even the Middle Eastern conflict – stemmed from that struggle. And none of these were more searing for Germany than World War II and the subsequent division of the country between East and West.
Once, Austria and Hungary had created problems for Germany. Seventy-five years later, they helped solve them. Not a bad result in the end!
How much is one clickthrough to your web site worth? If you’re a legal firm specializing in bringing mesothelioma cases to court, it turns out it’s worth a lot.
In fact, the search term “mesothelioma” was the highest-priced keyword in the U.S. during the third quarter of 2009. That’s according to a recently released AdGooroo Search Engine Advertising report.
Just how expensive? For Google’s AdWords program, the highest price paid for a #1 ranking on that search term was a whopping $99.44 per click. (Over at Yahoo, the high figure for this paid search term was a little less rich: $60.68 per click.)
One wonders how many times the advertisers have actually had to pay out this king’s ransom. Whether it’s for a few or many clicks, it’s clear that some legal firms recognize a highly lucrative revenue opportunity in filing mesothelioma lawsuits related to asbestos and lung cancer.
Interestingly, the highest paid search term in Bing’s paid search program isn’t “mesothelioma,” but rather “auto insurance comparison.” At $55.20 per click, the dollars aren’t as high, but it would seem like the potential payoff isn’t nearly as great, either. After all, there’s a pretty big difference between a multi-million dollar legal verdict and the value of an automotive insurance policy.
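The economics behind such eye-popping bids are straightforward: a click is worth paying for as long as its expected value (the conversion rate times the value of a conversion) exceeds its cost. Here is a rough sketch, with entirely hypothetical conversion rates and payoff values chosen only to illustrate the gap between the two keywords:

```python
def max_profitable_cpc(conversion_rate, value_per_conversion):
    """Highest cost-per-click that still breaks even in expectation."""
    return conversion_rate * value_per_conversion

# Hypothetical: even a 1-in-1,000 chance of landing a case worth $250,000
# in fees would justify a triple-digit bid on "mesothelioma".
print(f"${max_profitable_cpc(0.001, 250_000):.2f}")  # prints "$250.00"

# Hypothetical: a 2% close rate on a policy worth $500 in commission
# caps an "auto insurance comparison" bid far lower.
print(f"${max_profitable_cpc(0.02, 500):.2f}")  # prints "$10.00"
```

Under assumptions like these, a $99.44 bid on “mesothelioma” can still be rational while a $55.20 bid on an insurance keyword demands a much higher close rate to pencil out.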
But beyond the eyebrow-raising stats of these extreme examples, the larger issue is how much more costly search advertising has become in recent times. A few short years ago, it was common to talk about search terms costing an advertiser 50 cents or $1.00 per click. Now, those same terms are far more likely to cost $2.50 or more.
Google, being the 500-pound gorilla in search engine marketing (SEM), has certainly contributed to the price inflation. That’s one reason why many are rooting for alternative search options like Bing to succeed (dream on).
More fundamental to the increase in keyword click pricing is the realization that advertising to people at the precise time they’re searching for particular goods and services is a far more effective way to engage customers and prospects and drive actual sales.