“Few aspects of office life are more dispiriting than hot-desking — the penny-pinching ploy that strips people of their own desk and casts them out to the noisy, chaotic wasteland of shared work spots.”
— Pilita Clark, Correspondent, Financial Times
I’ve blogged before about so-called “open office” layouts and how they’re disdained by many employees. But even with all that unpopularity, there’s another office concept that appears to be even more despised: “hot-desking.”
Hot-desking is a concept that came into being more than a decade ago, and it takes the idea of “open offices” a step beyond. It’s a design in which workers are not assigned to a regular desk or cubicle, but instead find whatever desk is available to them on any given day.
If you read the literature from five or ten years ago, you’ll see all manner of compelling reasons being proffered as to why hot-desking is a worthwhile concept. One rationale is that hot-desking fosters “agile working” and more collaboration among employees — at the same time making communications between workers more effective and thereby enhancing the exchange of information.
Reading those advocacy pieces, there are numerous references to employees finding hot-desking to be “liberating,” “motivating,” “energizing,” and so forth.
Advocates of hot-desking appear to equate it with the practice of working “wherever” — just so long as the work gets done. “Wherever” can mean at home, in coffee shops, or anywhere around the office.
If all this seems a little too neat and tidy, then your suspicions are warranted. Some observers are onto the real reason companies adopt hot-desking work environments — which is to save on office space and its associated costs.
Is it any wonder that hot-desking is promoted more by top executives and by finance and facilities management personnel than by anyone else?
Alison Hirst, PhD, a research specialist at Anglia Ruskin University in Cambridge, UK, who has experienced hot-desking personally, clues us in on what’s actually happening. She spent three years carefully studying one organization that moved to a hot-desk environment:
“Like many companies, it had switched to hot-desking to reduce property costs and enable precious office space to be used flexibly. In the language of facilities management, an office building can be ‘crunched’ by increasing the staff-to-desks ratio, and it can be ‘restacked’ as teams and departments are moved around like boxes.
But in this bid for cost-cutting, a number of employees are made to feel underappreciated at best and unwanted at worst. There is often a subtle division between those who can ‘settle’ and reliably occupy the same desk every day, and those who cannot.”
In Hirst’s observations, “settlers” are able to arrive at the office early and choose their preferred desk. By repeating that choice over time, they effectively establish the desk as “their” space — whether because of its location near the windows or its proximity to their closest colleagues.
Those who cannot arrive sufficiently early — such as part-time employees or those with childcare responsibilities — are left hunting around for a suitable workspace, often far removed from the colleagues with whom they need to work most closely.
Last year, business author and journalist Simon Constable penned an opinion piece for Forbes with the provocative title “How Hot-Desking Will Kill Your Company.” In it, Constable contends that for most companies, the drawbacks of hot-desking vastly outweigh any benefits. Constable, who also has personal experience working in a hot-desking environment, makes these salient points:
Hot-desking signals that employees don’t matter — companies like to say that their employees are their single best asset. But when an employee isn’t even offered a permanent desk, it sends a completely opposite message.
Super-quick meetings won’t happen — Brief impromptu meetings are a vital part of office efficiency. In concentrated work environments with relevant teams of employees, such micro-events matter but don’t interrupt much of the workflow, because the workers involved sit close together. If a short 3- or 5-minute meeting requires summoning people from all over the building, it soon becomes a frustrating 15 or 20 minutes, eating away at productivity. Which means such meetings will rarely happen at all.
Inefficiencies add up quickly — The combined total of small-but-incremental negative effects adds up. The larger the office, the worse the impact is likely to be.
Rank hypocrisy — Employees notice that many of the biggest advocates for hot-desking are the people who have dedicated desks for themselves — and often their own individual offices with doors. “Hot-desking for thee but not for me.”
Fortunately, I work in a small office where everyone sits in close proximity — and we even have walls and a door for when privacy is needed for meetings or for concentrated creative/copywriting time without distractions. Closed-door activities may happen only once or twice per week, but they increase work efficiency. I consider us fortunate not to be forced into open-office or hot-desking scenarios.
What are your thoughts on the hot-desking concept? If you have personal experiences, please share them with other readers here.
Whereas the typical B-to-B video used to be around six minutes long, it’s now fallen to just over four minutes in duration.
On the plus side, due to those shorter times a larger percentage of viewers are watching all the way to the end of B-to-B videos. Even so, the proportion doing so is only around half (52%, up from approximately 46%).
And if a B-to-B video is shorter than 60 seconds in duration, an even larger proportion of viewers watches the entire video — typically nearly 70%.
It would appear that short attention spans have leached into the business realm as well: to have the greatest impact in B-to-B video communications these days, “less is more.”
A recent survey of ~1,800 adult American drivers conducted by Wakefield Research has found that the top safety concern they have is distracted drivers on the road – a factor cited by ~70% of the respondents.
This far outstrips concerns about people driving under the influence of alcohol or other substances – a concern that was cited by just ~45% of the respondents.
But in a classic example of “do as I say, not as I do,” a clear majority of the survey respondents (~58%) reported that they check their own mobile devices when driving. Perhaps we believe that our own skills are far above those of the average driver …
This squares with the findings of another survey conducted recently by analytics firm Zendrive. That research found that 85% of drivers feel that distracted driving is a problem. Despite those concerns, nearly half of the Zendrive survey respondents (~47%) admit that they themselves use their phones 10% or more of the time when driving.
Phone usage seems pretty high overall, with nearly 6 in 10 reporting that they talk on the phone while driving, half use maps or other navigation tools, and nearly 4 in 10 text. Let’s take these results at face value … but I wonder if the actual behaviors are even more slanted towards mobile phone usage than the stats suggest.
We can at least give credit to the respondents for acknowledging that what they’re doing isn’t particularly kosher, since ~83% of them admitted that they put down their phones when they see law enforcement on the road.
And here’s one other finding that I found particularly interesting: nearly 40% of the survey respondents reported that their own children have asked them to stop using their phone while driving.
Talk about parent-shaming – and the parents admit it!
More findings from the Wakefield research can be viewed here.
Do you find these findings surprising, or about as you expected? Please share your thoughts and observations with other readers here.
No matter how busy people may have been in their daily activities this past Monday, it’s likely that many took a few minutes to read or watch – and then talk about — the devastating fire at the Cathedral of Notre-Dame in Paris.
Along with the Arc de Triomphe, Notre-Dame may well be the most iconic structure in the city. Personally, I know many friends and relatives who have made it a point to visit the cathedral during their trips to Paris. So it isn’t surprising that so many people all over the world would feel the tragedy in a personal way.
One of them is my brother, Nelson Nones, who has had a lifelong interest in Gothic architecture. Because he is someone who studied architecture and who has also visited Notre-Dame, I reached out to him for his assessment of the fire damage and what may be the future of the cathedral.
Here is what Nelson wrote in reply:
Watching the fiery collapse of Notre-Dame Cathedral’s central spire on TV yesterday was a truly sickening sight. As a student of Gothic architecture, and having visited most of France’s noteworthy cathedrals, I have many fond memories of Notre-Dame.
The best of those memories was a Sunday evening pipe organ recital held more than 30 years ago, in 1988, which drew such a large audience that my three oldest children and I had to sit on the floor. Nowhere in the United States, I thought, would a classical pipe organ performance – free or not – attract such a large crowd. I didn’t expect my kids, then between the ages of 10 and 14, to be very impressed, but all of us were deeply moved.
It’s way too early to know the full extent of fire damage, but the first pictures of the cathedral’s interior to be published as the fire subsided provide some vital clues. The stone vaults above the choir and crossing seem largely intact. (The circular opening at the apex of the vault at the crossing, still glowing with fire in the photos, is original construction.)
However, in the nave, the vault webbing spanning the two pairs of diagonal ribs nearest the crossing has completely collapsed, as has the cross rib between those diagonals. Because the cathedral’s central spire appeared to topple toward the nave before it crashed, it seems this section bore the brunt of the impact.
It doesn’t appear that any other vaulting collapsed in the nave. The condition of the vaults above the transepts isn’t visible from available photographs, nor is the state of the priceless stained glass rose windows in the transepts and nave.
The latest reports indicate that the great organ, begun in 1733 and rebuilt by Aristide Cavaillé-Coll in 1864-67, remains intact. Both the great organ and its console, replaced in 2012, are situated in the grandstand beneath the West rose window, between the cathedral’s iconic towers. It doesn’t appear that the organ was damaged by falling debris, nor did it sustain significant water damage as firefighters struggled to prevent the fire from spreading upward to the belfries.
The fire completely consumed the cathedral’s wooden roof and central spire, which was undergoing renovation at the time. Thankfully, from an architectural perspective the most important parts of the structure are built of limestone. Stones can crack from high heat, but only the stone vaults and perhaps the inner facade of the North tower appear to have received direct exposure to the fire. The integrity of the cathedral’s pointed arches, flying buttresses and piers, which are its primary structural components and are all made of stone, would have been imperiled to a far greater degree had the fire broken out at the base of the building and spread upward towards the roof.
In the Gothic style, the purpose of those arches, buttresses and piers is to transmit the considerable weight of stone vaulting vertically toward the ground. This technique replaced thick outer walls with glass windows in order to fill interiors with light and allow vaults to rise toward imposing heights. Notre-Dame de Paris, begun in 1160, was one of the Early Gothic cathedrals. Its vault rises up to 108 feet but was surpassed during the High Gothic period (c. 1200-1250) by Chartres (117 feet), Reims (125 feet), Amiens (139 feet) and Beauvais (159 feet). The builders of Beauvais, in fact, aimed so high (and reduced the thickness of the flying buttresses so much) that part of the vault collapsed in 1284, and the nave was never built.
We’ll soon know whether Notre-Dame’s rose windows and other artifacts survived, or not. Reckoning the extent of structural damage to the cathedral, and the time it will take to rebuild, will take longer. It’s clear, though, that much of the stone vaulting that the bulk of this magnificent structure was built to support survived, averting an even greater catastrophe by catching burning lumber which would otherwise have fallen and ignited the wooden screens, pews and paintings below.
Nelson’s note, acknowledging that this was a terrible event, suggests that the damage to Notre-Dame could have been even worse, and it is gratifying to know that the structure wasn’t a total loss. Many of us will be interested to hear updates in the coming days about the structural integrity of the building, and the plans to rebuild what has been lost.
If you have any particular thoughts on the aftermath of the fire – or just memories to share of when you may have visited Notre-Dame – please share your comments with other readers here.
Update (4/16/19): Subsequent to reports on the condition of Notre-Dame Cathedral the day after the fire, my brother submitted this second set of comments:
Photos taken and published on Tuesday, April 16th, after I wrote to you, provide further insight into the extent of fire damage at Notre-Dame de Paris.
The first (second still photo at https://nationalpost.com/opinion/john-robson-what-the-notre-dame-coverage-kept-missing) shows the nave, crossing and choir. This photo reveals that the entire vault over the crossing collapsed, including the diagonal ribs. Those ribs were still standing in the photo [pictured above] taken during the fire, as was at least half the vault which no longer remains. A closer inspection of the photo leads me to think that much of the eastern half of the vault webbing over the crossing may have already collapsed when the photo was taken, but it’s hard to tell for sure. In any event, whatever remained standing appears to have collapsed later that night. This is not at all surprising considering that the fire originally started above the crossing.
The newer photo also shows a circular hole in the high vault over the South side of the nave, in the third bay from the crossing, which was also visible in the earlier photo but which I didn’t describe in my earlier comments. I’m quite sure this damage was also inflicted when the spire collapsed.
I have found only one photo of the south transept taken after the fire but it doesn’t show the high vault. However, from news reports, I don’t think any of the high vaulting over the south transept collapsed.
At first I thought that only about 25% of the high vaulting was gone. A more precise figure would be 32% of the high vaulting. Specifically, the total floor area (not surface area) of the high vaults was about 19,830 square feet of which approximately 6,260 square feet (31.6%) no longer exists, based on the plan shown below. The red areas depict the high vaults which fell.
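Nelson’s percentage is easy to verify from the square-footage figures he gives; here is a quick arithmetic check (all numbers come from his note above):

```python
# Floor area of the high vaults, per Nelson's estimates (square feet)
total_floor_area = 19_830   # all high vaults combined
collapsed_area = 6_260      # the red areas in the plan (vaults that fell)

fraction_lost = collapsed_area / total_floor_area
print(f"{fraction_lost:.1%}")  # → 31.6%
```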
I should add that none of the lower vaults, flying buttresses, pointed arches or piers appear to have sustained any damage from the fire. Structural inspections are still being carried out on the two western towers.
There’s little doubt that additional sections of the high vault sustained so much exposure to high heat that they are no longer structurally sound. The greatest risk is the collapse of more diagonal and cross ribs, which would take down the stone webbing they support, too. It would not surprise me at all if it’s decided to replace nearly all the high vault when the cathedral is rebuilt, if only to allay future public safety concerns. Such a restoration could remain very true to the original masonry and would hardly be noticeable when completed – far less noticeable, I think, than if only part of the high vault is replaced.
The roof above the high vault, and the attic it encloses, will of course need to be completely rebuilt. Here I think the restorers will design and build a replacement that’s similar in appearance, but structurally very different from the one that burned. For example I should think they would want to use structural steel instead of oak framing, for fireproofing (not to mention the difficulty of finding enough virgin oak trees to duplicate the original timber). The original tile cladding was lead which is quite toxic, so I suspect the new cladding will be made of a completely different material such as copper. These changes won’t really affect the cathedral’s architectural integrity, because the old attic was visited very rarely, and cladding material such as copper eventually weathers to about the same color as lead.
A new central spire will rise above the roof, and here is where I think politics will rear its ugly head. The spire which collapsed wasn’t part of the original cathedral; it was built between 1844-64 to replace the original which was taken down in 1786. A Rolling Stone article published on April 16th (https://www.rollingstone.com/culture/culture-features/notre-dame-cathedral-paris-fire-whats-next-822743/) states, “Any rebuilding should be a reflection not of an old France, or the France that never was — a non-secular, white European France — but a reflection of the France of today, a France that is currently in the making.” It attributes this idea to John Harwood, an architectural historian and associate professor at the University of Toronto, but also quotes Jeffrey Hamburger, an art history professor at Harvard, who dismisses Harwood’s idea as “preposterous.”
My prediction? In the end, the restorers will bow to the secularists when it comes to rebuilding the spire, and put something there as grotesquely ugly (and quintessentially French) as the Centre Pompidou, Louvre Pyramid or Charles de Gaulle Terminal 1. But just like the spire which collapsed on Monday, this decision will be fraught with controversy and will stretch completion of the restoration well beyond the 5 years President Macron promised on April 16th.
Update (4/18/19): Here are additional observations from Nelson based on new developments at the Cathedral:
I found an aerial view of Notre-Dame Cathedral taken by a drone and published on Wednesday, 17th April. The aerial view confirms the red areas shown in the plan [see above]. Specifically, it is now clear that none of the high vaults collapsed in the south transept, or in the choir.
It’s also very clear why the grand organ survived. It is located in the grandstand at the far left of the photo (and the plan), between the two towers. The roof above that area did not burn at all. Apparently the fire began spreading upward from the roof on the east side of the north tower, but the Paris fire brigade managed to contain the blaze there with water cannons. The fire (and the water poured onto it) never got any closer to the organ.
Using enlargements of this and other published drone photos, I have also concluded that the black areas which I haven’t identified as gaps in the vaulting are charred debris from the wooden roof, which came to rest at the top of the vaults.
“The fire that hit Saint-Sulpice reportedly started in a pile of clothes left outside the cathedral, before climbing up the door and to the stained glass. The clothes are believed to have been left there by a homeless person. Police said the fire was ‘not accidental,’ but the pastor of Saint-Sulpice argued it was not an anti-religious attack.”
[When I visited Saint-Sulpice in July 2012 during the main Sunday Mass, I saw several “homeless” people hanging around outside the main entrance, begging for money.]
Saint-Sulpice will temporarily serve as the cathedral church for the Diocese of Paris until Notre-Dame is re-opened. Makes sense, because it is the second-largest church in Paris.
After several years of relative calm, suddenly we’ve faced some pretty significant natural disasters in North America – from the hurricanes that have devastated Houston and other cities in Texas, Louisiana and Florida to earthquakes in the vicinity of Mexico City.
Certainly, when it comes to hurricanes, tornados, earthquakes, floods and fires, some cities are more prone to these natural disasters than others.
Acting on that hunch, Trulia, the online real estate service company, has analyzed federal disaster area data to prepare maps showing the U.S. regions and the metropolitan areas within them that are most susceptible to suffering a catastrophic event of this kind.
As it turns out, most metropolitan areas are at a high risk for at least one of the potential natural disasters – although thankfully none are at a high risk for absolutely everything.
The Trulia maps show these broad contours:
California and other western regions are at a higher risk for earthquakes and wildfires.
Hurricane risks are highest in Florida and along the Gulf Coast.
Flooding risks factor into the Florida/Gulf Coast regions as well, but they also stretch up and down the Eastern Seaboard.
Tornado risks are highest in the Plains states, portions of the Great Lakes states, plus the Central-South region of the country.
What does the Trulia analysis tell us about the large urban areas that are “safest” from all of these natural disaster risks? Trulia finds them in places like Ohio (Cleveland, Akron and Dayton), in Upstate New York (Buffalo, Syracuse) and other parts of the Midwest and inland Northeast.
Looking at the various housing markets across the United States, here’s Trulia’s list of the ones that are, on balance, the “safest” from natural disasters:
#1. Syracuse, NY
#2. Cleveland, OH
#3. Akron, OH
#4. Buffalo, NY
#5. Bethesda-Rockville-Frederick, MD
#6. Dayton, OH
#7. Allentown, PA
#8. Chicago, IL
#9. Denver, CO
#10. Troy-Warren, MI
Of course, being safest from natural disasters doesn’t account for the dangers from “man-made disasters” — as former Director of Homeland Security Janet Napolitano euphemistically labeled the other kinds of catastrophic events.
For the riskier places viewed from that standpoint, one might look to the most “iconic” metro areas such as Washington, DC, New York City and Boston as the likelier targets.
Plus, with North Korean nuclear weapons development and saber-rattling being prominent in the news of late, Honolulu, San Francisco, Seattle and Portland, OR might also make it on that list.
Speaking for myself, as a resident of the region just 50 miles east of Washington, DC and in light of our prevailing west-to-east wind and weather patterns, the possibility of encountering radioactive fallout from a nuclear strike aimed at our nation’s capital has always been a really fun scenario to consider …
What hazards represent the biggest threats to employees at worksites across America? We all may have our own suspicions … but the federal government has been keeping records about them for years.
In fact, this week the Occupational Safety & Health Administration (OSHA) has published its annual list of the Top Ten most frequently cited violations it has found following inspections of worksites its officials undertake on a regular (and unannounced) basis.
The OSHA listing shines a light on the types of safety issues that are most pronounced in the workplace. Here’s OSHA’s latest list, based on the 12-month period from October 2014 through September 2015. It’s headlined by fall protection, which is the most frequent OSHA standards violation:
Fall protection violations (construction standard)
Hazard communication (general industry standard)
Respiratory protection (industry)
Control of hazardous energy (lockout/tagout) (industry)
Powered industrial trucks (industry)
Electrical (wiring methods, components and equipment)
Electrical (general requirements)
OSHA publishes the list once per year to alert U.S. employers about the most common violations being cited so that they’ll take precautions to fix similar hazards in their own companies before OSHA officials show up to carry out an inspection.
Reviewing the list, some of the violations fall into the “everyone knows” category. Who doesn’t think that fall protection, scaffolding and ladders are major contributors to injuries in the workplace?
But then there are other OSHA violations like electrical systems and industrial trucks; it’s a little surprising to me to find them among the most frequently cited violations.
Which workplace threats do you think represent the biggest safety hazards to workers? Share your thoughts with other readers here.
This, despite the fact that most motion pictures and TV shows are shot these days using digital video.
Because of the steep decline in film sales – Kodak’s movie-film sales are reportedly off by a whopping 96% compared to just 8 years ago, and are projected to amount to less than 450 million linear feet of output this year – Kodak had been mulling the possibility of closing down its film manufacturing capabilities.
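A 96% decline means current volume is only about 4% of what it was eight years earlier, so the figures above imply a rough back-of-the-envelope estimate of the earlier sales volume. A sketch of that calculation (taking the 450-million-foot projection as this year’s level, which is an approximation on my part):

```python
current_feet = 450e6   # projected movie-film output this year (linear feet)
decline = 0.96         # reported drop versus eight years ago

# If today's volume is (1 - 0.96) = 4% of the earlier level,
# the earlier level was current volume divided by 4%.
earlier_feet = current_feet / (1 - decline)
print(f"~{earlier_feet / 1e9:.2f} billion linear feet")  # → ~11.25 billion
```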
If that were to happen, the last of the major movie film manufacturers would have exited the market. (Fuji, the other major supplier, stopped producing movie film in 2013.)
As it turns out, however, there are a number of “name” film directors who remain quite keen on using film – among them J.J. Abrams, Judd Apatow, Christopher Nolan, Lasse Hallström and Quentin Tarantino.
These and other movie directors lobbied the heads of the major film studios to commit to purchasing film in sufficient quantities to allow Kodak’s Rochester film manufacturing facility to remain open.
And now the major studios have reportedly decided to do just that – even though they don’t actually know how many movies will be shot using film versus the digital medium.
About the pending deal, Bob Weinstein, co-chairman of The Weinstein Company, said this: “It’s a financial commitment, no doubt about it. But I don’t think we could look some of our filmmakers in the eyes if we didn’t do it.”
The big challenge for movies shot on film is that very few younger film directors have any experience working in the medium. That sort of filmmaking is hardly even taught in cinematic arts classes anymore.
Besides, post-production work is much easier and faster with digital.
Still, just like audiophiles are convinced of the superiority of analog recordings over those recorded digitally, some movie directors swear by film. “I’m a huge fan of film, but it’s so much more convenient digitally,” film director Ian Bryce told reporter Ben Fritz.
Judd Apatow is another director who loves the film medium. While he also recognizes the benefits of digital, “it would be a tragedy if suddenly directors didn’t have the opportunity to shoot on film,” he says. “There’s a magic to the grain and the color quality that you get with film.”
By the way, Mr. Apatow is shooting his latest movie – Trainwreck – using film. And the Lasse Hallström film The Hundred-Foot Journey, which just opened in theatres, was shot on film as well.
“Digital cameras are not able to capture all the subtleties of the forest,” Mr. Hallström reported. His goal was to capture the lush landscape and greenery in the scenes of mushroom and wild berry picking that help make The Hundred-Foot Journey such a feast for the eyes.
“We compared film and video, and the video simplified all the greens. On film, you could see the nuances of all the shades,” Hallstrom emphasized.
With all the conflicting factors, what is the prognosis for the film medium?
Well, we now know that Kodak will continue to manufacture it for the next few years at least. With set purchase commitments comes the ability to plan for operational efficiencies.
We also know that film remains the “medium of choice” for long-term preservation of all types of movies – including those shot digitally.
But practically all movie theatres have switched over to digital projection by now, whereas projection film used to represent a far bigger portion of product sales than preservation film.
So I think we can safely say that short-term, the prognosis is good.
Medium-term is iffy … and long-haul, it’s likely that the term “film” to describe “movies” will be accurate only from a historical perspective.
Do you feel differently? If so, share your thoughts with other readers here.
You know the old adage: A picture is worth a thousand words.
Well, with the plethora of images being uploaded these days … we’re talking billions and billions of images and words.
Recently, Yahoo estimated that the number of images uploaded to the web is nearing 900 billion, which translates to nearly 125 photos for every person on the planet.
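The per-person figure follows from dividing the upload estimate by world population (roughly 7.2 billion at the time – the population figure is my assumption, not Yahoo’s):

```python
images_uploaded = 900e9     # Yahoo's estimate: ~900 billion images on the web
world_population = 7.2e9    # approximate world population (assumed)

per_person = images_uploaded / world_population
print(round(per_person))  # → 125
```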
Facebook reports that it’s seeing more than 6 billion photos uploaded each month, on average.
And Instagram? It’s reporting that nearly 28,000 photos are uploaded every minute.
Clearly, we love our photos. And since digital technology makes it so easy to take good-quality photos and post them instantly, it seems people can’t get enough of doing so.
It’s an interesting twist — in a sense, taking us back to the cavemen days and illustrations on the walls.
Over the centuries, words and language have made it faster and easier to communicate, even as drawing, painting or developing photos using analog (film) technology was difficult and/or time-consuming.
In more recent times, Polaroid® photos gave us a more “instant” experience with images … but sharing them was no easier than before. (Plus, let’s be honest: Most Polaroid shots were pretty lame in the quality department.)
Now that digital photography is as effortless as it is … it seems everyone is rushing back to pictures.
We’re even seeing it in the world of books. Take Amity Shlaes’ book The Forgotten Man, about the Great Depression. It came out in conventional form in 2008.
In recent weeks, I’ve begun reading more news items about legislation being passed to limit the damage so-called “patent trolls” can do to unsuspecting businesses.
These are the bottom-feeding firms which exist only to collect royalty payments and fines from companies due to supposed infringement on patents the firms have purchased.
Many of the victims of these schemes are smaller businesses with less than $10 million in annual revenues.
The reasons they’re targeted are pretty obvious: smaller companies are less able to defend themselves against such charges, and it’s often easier and less expensive to settle out of court — and avoid all the hassles that accompany litigation as well.
But the cumulative impact is enormous: patent risk specialist RPX Corporation estimates it at nearly $13 billion in legal fees, settlements and judgments.
The University of California’s Hastings College of the Law has also been studying the numbers. It finds that patent infringement claims against the portfolio companies of venture-capital firms cost an average of $100,000 each to settle.
Predictably, only a smidgen of the monies collected by these patent-holding companies actually makes it back to the inventors. The rest goes right into the deep pockets of the people trolling the business world for easy money.
And then …
Then some patent trolls made the mistake of sending demand notifications to banking firms, related to things like the software used in ATMs.
Oops. Bad move.
Once the banking institutions got sensitized to the issue, a lot of legislators did, too. Funny how that works.
The results are now beginning to show. In recent months, more states have enacted legislation curbing the ability of patent trolls to make “bad faith” assertions of patent claims.
What is a questionable patent claim? It’s a claim that isn’t based on any clear evidence of infringement — but instead on vague accusations.
(In other words, these questionable claims represent the vast majority of the notifications delivered to the unsuspecting victims.)
States jumping on the "put the trolls on trial" bandwagon range from New York and Vermont to Oklahoma and Minnesota. Twelve have acted so far, and the tally will surely increase in the coming months.
One of the interesting twists is that most of the new legislation also allows targeted companies to strike back in state courts with their own litigation … against the patent-holding companies themselves.
I guess turnabout is fair play.
Another twist …
Here’s an interesting case in which financial institutions – an industry not particularly loved in many quarters – are helping to rout a particularly pernicious and avaricious bunch of businesspeople.
This sort of activity, based not on any sense of commercial fair play but instead on playing mercantile “gotcha” games, is reprehensible and gives “the business of business” a bad name.
What’s more, it has to have had a chilling effect on the activities of smaller businesses in particular – especially those that rely on established technologies to create and commercialize new products.
Constantly looking over one’s shoulder to make sure no one is coming after you for something as innocuous as using an e-mail function on a fax machine is hardly the kind of environment that fosters innovation.
So let the cheering begin … and no stopping until these trolls are banished back under the bridge.
If you’re wondering what happened to all of the community volunteer activities people used to do – not to mention the once-strong popularity of group social and recreational pursuits like softball or bowling leagues – you might look at the time Americans are spending online as one possible explanation.
The evidence comes in the form of research the Interactive Advertising Bureau did when they contracted with GfK Research to conduct an extensive online survey as part of a larger behavioral analysis of American adults.
Fielded in late 2013 with participation from ~5,000 adults between the ages of 18 and 65, the IAB/GfK survey revealed that Americans spend an average of 2.5 hours online every day.
Add that to the average ~5 hours per day spent watching TV – a figure that’s hardly budged in years – and it’s little wonder that the Jaycees, Shriners and other service organizations are finding it more difficult to recruit new members … or that “old faithful” group social and recreational activities are in danger of becoming less relevant.
The IAB/GfK survey also revealed which types of online activities are engaged in the most. The chart below, created by Statista from the IAB/GfK report’s data and published in The Wall Street Journal, gives us the lowdown:
I wasn’t surprised to discover that social networks chew up the most online minutes per day. Online video viewing and search time seem about as expected, too. And who doesn’t enjoy a nice game of Spider Solitaire or Internet Spades to wind down after a long day?
But at ~30 minutes per day, the e-mail average seems on the high side. People must really be struggling with managing personal inboxes stuffed with marketing e-mails. (But if work-related e-mails are part of the equation, the half-hour figure seems more expected.)
Comparing these results to similar research done in prior years, the most recent survey charts an increase in online video watching; it’s doubled over the past four years.
Other activities on the rise include online gaming and listening to online radio.
Adding it all up, total time spent online is continuing its inexorable rise thanks to mobile connectivity and the “always-on” digital environment in which Americans now live.
Perhaps the way to stem reduced interest in group social activities and volunteerism lies in giving people free rein to “multitask” even as they participate in the local bowling league or Ruritan Club meetings …
What are your thoughts on the time people are spending online – and whether it’s crowding out other forms of daily activity? Please share your thoughts with other readers here.