BizNews

Not all ‘review bombing’ is bad for business

Having a one-size-fits-all review-bombing or political-speech policy can lead to the suppression of legitimate expressions of support for the role a small business plays in the community.

For a business on the receiving end of “review bombs” – the sudden influx of online customer reviews following a political or cultural controversy – an interventionist approach to content moderation might seem like a prudent strategy.

But a new open-access study by a Rutgers researcher finds that when review platforms such as Yelp enact tough moderation policies in a bid to sanitize political speech, it can unnecessarily constrain reasonable opinions and cultural context that consumers depend on to decide where to spend their money.

“Simply put, everything you think you know about review bombing is wrong,” said Will B. Payne, assistant professor of geographic information science at Rutgers’ Edward J. Bloustein School of Planning and Public Policy and author of the study, published in the journal Big Data & Society.

Online reviews can have a significant impact on an independent business’s revenue, particularly those on Yelp, the leading local review platform in the United States. One study found that a one-star increase in the average Yelp rating causes a 5% to 9% increase in revenue for nonchain restaurants.
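As a back-of-envelope illustration of that 5% to 9% range (the baseline revenue figure below is an assumption for illustration, not from the study):

```python
# Hypothetical example: the annual revenue lift a nonchain restaurant
# might see from a one-star increase in its average Yelp rating,
# using the study's 5% to 9% range. The baseline is assumed.
annual_revenue = 500_000  # assumed baseline annual revenue, USD
low = annual_revenue * 0.05   # lower bound of the estimated lift
high = annual_revenue * 0.09  # upper bound of the estimated lift
print(f"${low:,.0f} to ${high:,.0f}")  # $25,000 to $45,000
```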

To understand the geographic reach of review bombing incidents and how platforms define acceptable speech, Payne assessed Yelp’s moderation of comments on U.S. businesses embroiled in political controversies between 2004 and 2021. 

First, Payne created a database of businesses affected by national and local politics. Using news sources to identify specific cases and date ranges, he built a dataset of tens of thousands of political-themed reviews. Topics included the 2016 and 2020 U.S. elections, the Black Lives Matter and #MeToo movements and the COVID-19 pandemic.

Next, he analyzed Yelp’s publicly available metadata for reviews of affected businesses, including review date, username, star rating and user location.

Payne then selected two businesses with large numbers of Yelp reviews for in-depth analysis: Washington, D.C.-based pizzeria Comet Ping Pong (subject of the Pizzagate conspiracy theory in 2016) and St. Louis-based Pi Pizzeria, whose owner, Chris Sommers, became the target of online and offline harassment by pro-police supporters after he publicly backed the Black Lives Matter movement in 2017.

In Comet Ping Pong’s case, Payne found that review bombing resulted in primarily negative comments by reviewers mostly on the West Coast – thousands of miles away from the restaurant – while Pi Pizzeria experienced a much more local pattern (largely from the St. Louis area), with an even split of supporters and detractors.

Payne found that Yelp’s automated and human review filtering systems largely responded the same way to each incident, but with considerably different effects. For Comet Ping Pong, of the 283 reviews flagged and removed by Yelp, 229 were negative one-star reviews. By contrast, of the 588 Pi Pizzeria reviews that Yelp removed, most were in support of Sommers’ actions, positive reviews that averaged close to the restaurant’s four-star rating of Yelp-approved reviews.

“Local customers were censored for simply thanking Chris Sommers for standing with them as they marched against police violence,” Payne said. “They weren’t fake reviews about a conspiracy theory; they were legitimate statements by people supporting a business, in this case for the support its owner gave to the neighborhood.”

Payne also looked at Google’s approach to content moderation and found that unlike Yelp, Google rarely removes politically themed reviews. This, too, can be a double-edged sword; Comet Ping Pong still has dozens of public Google reviews referencing the false Pizzagate conspiracy. 

The data does have several limitations, Payne said. First is the possibility that the self-reported location of Yelp users was inaccurate, or that some users could have moved between the time they set up their Yelp profile and when they wrote a review.

Additionally, reviews on Google Maps – a popular Yelp competitor – don’t contain user location information and can be removed by Google without leaving the public metadata traces that Yelp provides for transparency.

As review bombing continues to test review platforms’ approaches to political discourse – the most recent example surfaced this month, when Yelp halted reviews of a McDonald’s franchise in Feasterville, Penn., where former President Donald J. Trump had held a campaign event – Payne said it’s worth considering whether content moderation has gone too far.

The question is particularly relevant for Yelp, which has used corporate communications and review search filters to support Black-owned, women-owned, and LGBTQ-inclusive businesses – speech that reviewers themselves aren’t permitted to post unless it accompanies an account of a customer experience.

“Having a one-size-fits-all review-bombing or political-speech policy can lead to the suppression of legitimate expressions of support for the role a small business plays in the community, as in the case of Pi Pizzeria,” Payne said. “Some might disagree that the political positions of a business owner should guide consumer behavior, but on Yelp, it’s a choice that users can’t even make for themselves.”

In-aisle store displays might crowd shoppers and reduce overall sales

Retailers might seek strategies to boost product exposure without also increasing crowding – especially for cart shoppers, who may experience greater crowding effects – as excessive use of in-aisle fixtures is likely to dampen sales at the aggregate level rather than increase them.

In a study involving a real-world grocery store, in-aisle displays meant to boost product visibility were in fact associated with reduced sales and purchase-related behaviors, with results amplified for shopping cart users.

Mathias Streicher, of an Austrian university’s Department of Management and Marketing, presents these findings in the open-access journal PLOS ONE.

Retailers often place extra product displays directly in aisles in an effort to boost visibility and enhance sales. However, in-aisle displays could increase spatial crowding, which occurs when people feel restricted in their freedom of movement and has been linked with purchase-avoidance tendencies. To help clarify whether in-aisle displays result in more purchases, Streicher conducted several experiments with a partnering grocery store.

First, Streicher tracked weekly sales for an aisle containing household, baby and pet staples over a six-week period during which five product-display stands were placed mid-aisle. The stands were then removed for six weeks. Comparison of the sales data showed that, in fact, sales increased after removal of the in-aisle displays, with the average weekly share of total store revenue from that aisle rising from 4.33 to 4.83 percent.

A second in-store experiment in the same aisle showed that shoppers using carts stopped and physically handled products – behavior previously linked with sales – about 7.05 times more often when in-aisle displays were absent than when they were present. Non-cart shoppers also touched products more often when displays were removed, but the effect was smaller (3.81 times).

Finally, in an online experiment, 200 participants imagined using a shopping cart or basket while viewing photographs of the same aisle from the in-store experiments, with or without in-aisle displays. They tended to rate the aisle with displays as more crowded and reported lower levels of perceived control for aisles with displays than those without, with effects amplified for imagined cart versus basket use.

Together, these findings suggest that retailers might seek strategies to boost product exposure without also increasing crowding – especially for cart shoppers, who may experience greater crowding effects – and that excessive use of in-aisle fixtures is likely to dampen sales at the aggregate level rather than increase them.

Further research could address some of this study’s limitations, such as by considering the effects of human crowding, promotional offers on products, and seasonal influences on shopping behaviors.

Streicher adds: “The research shows that adding merchandise into store aisles can actually reduce overall sales by making the environment feel crowded and harder to navigate. Importantly, this negative effect is even stronger for shoppers using carts, as they experience greater spatial constraints and reduced control while shopping.”

Structure of online reviews shapes their helpfulness

Reviews that grow increasingly positive are most helpful to readers, while those that turn negative are least helpful. For average-rated products, progressively negative trajectories enhance helpfulness, whereas reviews that start negative and grow positive are least effective.

A study of nearly 200,000 Amazon reviews shows that the usefulness of online product reviews depends not only on what is said, but on how the information is structured.

The researchers, from the Universities of Cambridge and Queensland, studied Amazon reviews for products ranging from clothing to food to electronics. They found that how the information is organised matters as much as what is said, and that different review structures are more or less helpful, depending on how highly the reviewer has rated the product.

Their results, published in the journal Scientific Reports, could help companies and third-party review platforms design their review pages to prompt the sort of reviews that will be most helpful to potential customers.

For example, a reviewer assessing a laptop might praise its performance and design while criticising its battery life, so how should such information be structured to be most useful to the reader? Should the review begin with criticism and end on a positive note, or start positively before turning to drawbacks?

“Any target of evaluation typically has both positive and negative aspects, which makes crafting evaluative messages challenging,” said co-author Dr Yeun Joon Kim from Cambridge Judge Business School. “The key question is how to structure these elements within a single message. For example, one might present criticism upfront and then move to praise, or instead integrate negative points within an otherwise positive evaluation. Yet research has paid little attention to this structural dimension.

“We wanted to understand whether certain structures are consistently more effective, or whether their effectiveness depends on the performance of the target being evaluated.”

The study was based on 195,675 reviews of 5,487 distinct products, assessing product performance and related factors along with each review’s helpfulness score as measured by reader votes.

The researchers identified nine possible structures of online reviews, ranging from Type A reviews, which start positive and grow more positive as they go along, to Type I reviews, which start negative and grow even more negative – with considerable variation in between.

For highly rated products, reviews that grow increasingly positive are most helpful to readers, while those that turn negative are least helpful. For average-rated products, progressively negative trajectories enhance helpfulness, whereas reviews that start negative and grow positive are least effective. For low-rated products, reviews are judged most helpful when they open constructively before introducing criticism.
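As a rough sketch of the idea behind these trajectory types (the study’s nine categories are richer than this; the threshold value and per-sentence sentiment scores below are assumptions, purely for illustration):

```python
# Hypothetical sketch: classifying a review's sentiment trajectory.
# Inputs are per-sentence sentiment scores in [-1, 1]; the function
# compares the mean sentiment of the first and second halves.
def trajectory(scores, threshold=0.2):
    """Label a review as rising, falling, or flat in sentiment."""
    half = len(scores) // 2
    first = sum(scores[:half]) / half
    second = sum(scores[half:]) / (len(scores) - half)
    delta = second - first
    if delta > threshold:
        return "rising"   # e.g. opens critical, ends positive
    if delta < -threshold:
        return "falling"  # e.g. opens with praise, turns negative
    return "flat"

# A review that opens with praise and closes with complaints:
print(trajectory([0.8, 0.6, -0.3, -0.7]))  # falling
```

Under this simplification, a “rising” trajectory corresponds to the structures readers found most helpful for highly rated products, and “falling” to those they found least helpful.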

“The results are nuanced but very clear,” said co-author Dr Luna Luan from the University of Queensland, who carried out the research while earning her PhD at Cambridge Judge Business School. “Looking at the overall sentiment of reviews does not fully translate into message effectiveness. It is the broader structure of sentiment – how positivity and negativity evolve throughout the review – that shapes how readers interpret online reviews.”

“Our findings have practical implications for how platforms and companies can design review pages in order to elicit the sort of reviews that will be most helpful to readers based on how highly products are rated,” said Kim. “For example, instead of simply asking ‘Write your review here’, the online review form could instead include micro-prompts that guide how reviewers structure feedback in a way recipients find most helpful.”

The researchers found the most commonly used review styles are not necessarily the most helpful to readers. In particular, for average- and low-rated products, the structures that reviewers tend to adopt often differ from those that readers find most useful.

This mismatch likely reflects different underlying motivations. Reviewers are not always writing to maximise usefulness for others, but may instead be expressing their own experiences, frustrations or emotions – especially when evaluating products of moderate or poor quality. As a result, review writing often serves both as information sharing and as a form of self-expression. This helps explain why widely used review styles do not always align with what readers perceive as most informative or helpful.

Reversible words can lower consumer disbelief in ads

A simple word choice in marketing messages can significantly impact how confident consumers feel about believing – or not believing – a claim.

It’s estimated that consumers encounter hundreds, if not thousands, of marketing messages daily. While the exact number varies, how much someone believes a message can matter more for marketing success than how many messages they see.

A new study reveals that a simple word choice in marketing messages can significantly affect how confident consumers feel about believing – or not believing – a claim. Researchers found that when words differ in their “reversibility,” or how easily people can think of their opposites, they can trigger different mental processes when consumers evaluate marketing language.

Imagine the messaging options for a new sunscreen designed specifically for those who like a strong scented product. The first product description reads, “The scent is prominent,” while the second notes, “The scent is intense.” The word “prominent” is uni-polar, meaning people tend to negate it by adding “not” to the original statement.

“Intense,” though, is a bi-polar word, meaning readers can easily come up with its opposite meaning and negate the statement by replacing it with its antonym. In this example, “The scent is mild,” instead of, “The scent is intense.” 
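The two negation strategies can be sketched in a toy example (the antonym table and helper function below are hypothetical, purely to illustrate the distinction):

```python
# Hypothetical illustration of the two negation strategies described
# in the study. Bi-polar words have an easy antonym; uni-polar words
# are negated by inserting "not".
antonyms = {"intense": "mild"}  # bi-polar words and their opposites

def negate(sentence, word):
    """Negate a claim either by antonym swap or by adding 'not'."""
    if word in antonyms:
        # bi-polar: replace the word with its antonym
        return sentence.replace(word, antonyms[word])
    # uni-polar: negate the copula instead
    return sentence.replace("is", "is not")

print(negate("The scent is intense", "intense"))      # The scent is mild
print(negate("The scent is prominent", "prominent"))  # The scent is not prominent
```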

“When people encounter easily reversible words, like ‘intense’, in messages processed as negations (mild), they experience lower confidence in their judgements compared to words that are hard to reverse, like ‘prominent,’” explained Giulia Maimone, a postdoctoral scholar in marketing at the University of Florida Warrington College of Business. 

Across two experiments with more than 1,000 participants, the research demonstrated that this effect occurs because negations of bi-polar, or reversible, words engage a more elaborate cognitive process requiring additional mental effort, resulting in lower confidence in the statement’s truthfulness.

Based on these findings, the researchers suggest that marketers crafting language for new products use affirmative statements with easily reversible words, like “The scent is intense” in the sunscreen example, which most consumers will judge as true with high confidence. Importantly, this wording also minimizes the confidence of consumers who are skeptical of the message, as they will process it via a more elaborate cognitive route that weakens their confidence in their own disbelief.

“This simple lexical choice could help companies maximize confidence in their desired messaging and minimize confidence among the doubters,” Maimone explained. 
