BizNews

Security response planning on the rise, but containing attacks remains an issue – IBM

While organizations surveyed have slowly improved in their ability to plan for, detect and respond to cyberattacks over the past five years, their ability to contain an attack has declined by 13% during this same period.

IBM announced the results of a global report examining businesses’ effectiveness in preparing for and responding to cyberattacks. While organizations surveyed have slowly improved in their ability to plan for, detect and respond to cyberattacks over the past five years, their ability to contain an attack has declined by 13% during this same period.

The global survey conducted by Ponemon Institute and sponsored by IBM Security found that respondents’ security response efforts were hindered by the use of too many security tools, as well as a lack of specific playbooks for common attack types.

While security response planning is slowly improving, the vast majority of organizations surveyed (74%) still report that their plans are either ad-hoc, applied inconsistently, or nonexistent. This lack of planning can be costly: companies that have incident response teams and extensively test their incident response plans spend an average of $1.2 million less on data breaches than those without either of these cost-saving measures in place.

The key findings of those surveyed from the fifth annual Cyber Resilient Organization Report include:

  • Slowly Improving: More surveyed organizations have adopted formal, enterprise-wide security response plans over the past 5 years of the study, growing from 18% of respondents in 2015 to 26% in this year’s report (a 44% improvement).
  • Playbooks Needed: Even amongst those with a formal security response plan, only one third (representing 17% of total respondents) had also developed specific playbooks for common attack types — and plans for emerging attack methods like ransomware lagged even further behind.
  • Complexity Hinders Response: The number of security tools an organization was using had a negative impact across multiple categories of the threat lifecycle amongst those surveyed. Organizations using 50+ security tools ranked themselves 8% lower in their ability to detect an attack, and 7% lower in their ability to respond to one, than respondents with fewer tools.
  • Better Planning, Less Disruption: Companies with formal security response plans applied across the business were less likely to experience significant disruption as the result of a cyberattack. Over the past two years, only 39% of these companies experienced a disruptive security incident, compared to 62% of those with less formal or consistent plans.

“While more organizations are taking incident response planning seriously, preparing for cyberattacks isn’t a one and done activity,” said Wendi Whitmore, Vice President of IBM X-Force Threat Intelligence. “Organizations must also focus on testing, practicing and reassessing their response plans regularly. Leveraging interoperable technologies and automation can also help overcome complexity challenges and speed the time it takes to contain an incident.”

Updating Playbooks for Emerging Threats
The survey found that even amongst organizations with a formal cybersecurity incident response plan (CSIRP), only 33% had playbooks in place for specific types of attacks. Since different breeds of attack require unique response techniques, having pre-defined playbooks provides organizations with consistent and repeatable action plans for the most common attacks they are likely to face.   

Amongst the minority of responding organizations who do have attack-specific playbooks, the most common playbooks are for DDoS attacks (64%) and malware (57%). While these methods have historically been top issues for the enterprise, additional attack methods such as ransomware are on the rise. While ransomware attacks have spiked nearly 70% in recent years, only 45% of those in the survey using playbooks had designated plans for ransomware attacks.

Additionally, more than half (52%) of those with security response plans said they have never reviewed or have no set time period for reviewing or testing those plans. With business operations changing rapidly due to an increasingly remote workforce, and new attack techniques constantly being introduced, this data suggests that surveyed businesses may be relying on outdated response plans which don’t reflect the current threat and business landscape.

More Tools Led to Worse Response Capabilities
The report also found that complexity is negatively impacting incident response capabilities. Those surveyed estimated their organization was using more than 45 different security tools on average, and that each incident they responded to required coordination across around 19 tools on average. However, the study also found that an over-abundance of tools may actually hinder organizations’ ability to handle attacks. In the survey, those using more than 50 tools ranked themselves 8% lower in their ability to detect an attack (5.83/10 vs. 6.66/10), and around 7% lower when it comes to responding to an attack (5.95/10 vs. 6.72/10).

These findings suggest that adopting more tools didn’t necessarily improve security response efforts — in fact, it may have done the opposite. The use of open, interoperable platforms as well as automation technologies can help reduce the complexity of responding across disconnected tools. Amongst high-performing organizations in the report, 63% said the use of interoperable tools helped them improve their response to cyberattacks.

Better Planning Pays Off
This year’s report suggests that surveyed organizations who invested in formal planning were more successful in responding to incidents. Amongst respondents with a CSIRP applied consistently across the business, only 39% experienced an incident that resulted in a significant disruption to the organization within the past two years, compared to 62% of those who didn’t have a formal plan in place.

Looking at specific reasons that these organizations cited for their ability to respond to attacks, security workforce skills were found to be a top factor. 61% of those surveyed cited hiring skilled employees as a top reason for becoming more resilient; amongst those who said their resiliency did not improve, 41% cited the lack of skilled employees as the top reason.

Technology was another differentiator that helped organizations in the report become more cyber resilient, especially when it comes to tools that helped them resolve complexity. Looking at organizations with higher levels of cyber resilience, the top two factors cited for improving their level of cyber resilience were visibility into applications and data (57% selecting) and automation tools (55% selecting). Overall, the data suggests that surveyed organizations that were more mature in their response preparedness relied more heavily on technology innovations to become more resilient.


Remote work doesn’t negatively affect productivity, study suggests

Employee and company resiliency may be enhanced through the opportunity for employees to work remotely during natural disasters and other events that cause workplace displacement.



A research team from the Texas A&M University School of Public Health found that employee and company resiliency may be enhanced through the opportunity for employees to work remotely during natural disasters and other events that cause workplace displacement.

The team, which was comprised of Kamrie Sarnosky, Mark Benden, Garett Sansom, Leslie Cizmas and Annette Regan, worked with a large oil and gas company in Houston, Texas, to analyze ergonomic software data from 264 employees. During the study period, the company was forced to close its offices because of flooding from Hurricane Harvey, which required employees to work remotely for an extended period.

The researchers looked at employee technology data before, during and after Hurricane Harvey. They found that although total computer use declined during the hurricane, employees’ work behaviors during the seven-month period of working remotely returned to pre-hurricane levels. This finding suggests that remote work does not negatively impact workplace productivity.

This study, which was published by IOS Press in February, offers important insights into information workers, who have become increasingly accustomed to and interested in working remotely as a result of the COVID-19 pandemic.

“In the future, there will be a greater percentage of the workforce who is involved in some sort of office-style technology work activities,” said Benden, who is director of the school’s Ergonomics Center. “Almost all of the study’s employees were right back up to the same level of output as they were doing before Hurricane Harvey. This is a huge message right now for employers because we’re having national debates about whether or not employees should be able to work remotely or in a hybrid schedule.”

This study is part of a larger effort by the Ergonomics Center to examine the health of information workers. Although their work seems less taxing than blue-collar labor, information workers are prone to injuries such as carpal tunnel syndrome. “The research says that if you work a certain way at a certain pace over a certain duration, you’re more likely to become injured from that work,” Benden said. “But if you work a little less or a little less often or break up the duration or have certain other character traits — like posture — then you’re less likely to develop a problem from doing your office work.”

The Texas A&M researchers believe this information can be used to promote healthy behaviors for employees, including those working remotely, and to inform corporate policies. They also plan to track the ergonomic environment in employees’ home offices. The team believes that tracking this type of data can help companies address remote employee health issues, including stress, depression and substance abuse.

“The question was whether we could track people and rather than letting them stay in a bad place, a bad habit or bad behavior, could we give them a healthful nudge over the computer to remind them that it was time to take a walk or a break,” Benden said. “We as humans are not very good at keeping track of time, especially when we’re in the zone. In order to keep us from physically hurting our bodies, we need to have nudges and reminders, which people respond to, and which work really well.”

Benden noted that taking breaks does not hinder employees’ quality of work.

“The people who took the recommended breaks were more productive overall. They got more done,” he said. “We need to learn this about people, we need to teach people about it, and then we need to help people actually do it.”


Loud and clear: High-energy ads keep viewers tuned in, study shows

“By matching the energy level of ad content with consumers’ state of mind, we believe advertisers can expect higher levels of acceptance and effectiveness of their messages.”



TV advertising has become not only high-volume, but increasingly high-energy — a trend noticed by academics and practitioners.

A new study from the University of Notre Dame confirms the shift and shows that advertisers should pay attention to components of ad content other than loudness, which has been regulated by law.

More energetic commercials are likely to be tuned in more or avoided less by viewers, according to “High-Energy Ad Content: A Large-Scale Investigation of TV Commercials,” forthcoming in the Journal of Marketing Research from Joonhyuk Yang, assistant professor of marketing at Notre Dame’s Mendoza College of Business.

After examining more than 27,000 TV commercials on major U.S. networks from 2015 to 2018 and almost all Super Bowl ads from 1969 to 2020, the researchers noted that, overall, more energetic commercials hold viewers’ attention.

The study measures the energy levels in commercials based on Spotify’s measure of energy in song tracks. Spotify defines energy as “a perceptual measure of intensity and powerful activity released throughout the track. Typical energetic tracks feel fast, loud, and noisy.” The paper’s measure of energy levels in commercials is highly correlated with people’s psychological state of “arousal,” or “the subjective experience of energy mobilization, which can be conceptualized as an affective dimension ranging from sleepy to frantic excitement.”

The top five keywords mentioned by the paper’s survey participants regarding high-energy commercials were “fast,” “music,” “movement,” “upbeat” and “exciting.”

The team first presented a framework to algorithmically measure the energy level in ad content from the video of ads. They then compared this measure with human-perceived energy levels, showing they’re related to the level of arousal stimulated by ad content.

“The positive association between energy levels in ad content and ad-tuning is statistically significant after controlling for placement and other aspects of commercials,” Yang said.

The study also finds the association varies across product categories and program genres.

“High-energy food and beverage commercials are likely to be viewed longer when placed within entertainment and news programs, but not in sports programs,” Yang said, “while energetic health and beauty commercials are viewed for shorter periods of time when placed in sports programs.”

Targeted advertising has typically focused on who audiences are, as well as their locations and behaviors. This study suggests adding another dimension — the emotional or psychological state of the audience.

“By matching the energy level of ad content with consumers’ state of mind, we believe advertisers can expect higher levels of acceptance and effectiveness of their messages,” Yang said. “For instance, advertisers might want to vary the energy level of their ad content between day and night.”

Relatedly, advertisers and television networks boost the audio of ads, making the volume much louder than the programs in which they are aired, assuming this draws attention to the ads and makes people less likely to ignore or avoid them.

This practice became so prevalent that it raised concerns about the health effects of loudness on viewing audiences, leading to regulatory limitations on how much louder ads can be than the programs in which they are placed. The resulting CALM (Commercial Advertisement Loudness Mitigation) Act passed in 2010 limits the average loudness of an ad to no more than the average loudness of the program in which it is aired.

Advertisers and networks, therefore, cannot continue to rely on loudness as a means of attracting attention to reduce ad avoidance. This forces advertisers to figure out ways to be creative in using audio to attract and retain audience attention.

Yang recommends advertisers conduct A/B tests with multiple designs of ad creatives. A/B testing splits an audience to test a number of variations to determine which performs better — for example, showing version A to one half of an audience and version B to the other; or alternating A and B across time.
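The split-and-compare logic of an A/B test can be sketched in a few lines. The example below is a hypothetical illustration, not part of the study: it simulates ad-completion data for two versions of a commercial and applies a standard two-proportion z-test to judge whether the difference in completion rates is statistically meaningful.

```python
import math
import random

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: is variant B's rate different from A's?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # |z| > 1.96 is significant at the 5% level

# Hypothetical data: viewers who watched each ad variant to completion.
random.seed(0)
n = 10_000                                              # audience per variant
kept_a = sum(random.random() < 0.30 for _ in range(n))  # lower-energy cut
kept_b = sum(random.random() < 0.33 for _ in range(n))  # higher-energy cut

z = two_proportion_z(kept_a, n, kept_b, n)
print(f"A: {kept_a/n:.3f}  B: {kept_b/n:.3f}  z = {z:.2f}")
```

In practice the two audiences would be real viewer groups rather than simulated draws, and the same comparison could be run per product category or program genre, since the study finds the effect varies across both.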

“Recall that the effect varies across product markets and likely across media outlets, including digital advertising,” Yang said. “I hope this study motivates the initiation of such testing as well as provides initial guidelines on designing such studies. Also, we want to showcase the importance of careful feature engineering of ad content when relating it to consumer behavior. I would be more than happy to help practitioners who are interested in moving forward.”

Co-authors of the study include Yingkang Xie and Lakshman Krishnamurthi from Northwestern University and Purushottam Papatla from the University of Wisconsin-Milwaukee.


Can bad reviews be good for business? New research says yes

When negative comments come from a socially distant source, a negative review actually increases purchase intentions – and that is a game changer.


Negative online reviews and low-star ratings are generally known to be bad for brands, so much so that there are entire businesses devoted to reversing the damage. But a new study from the UBC Sauder School of Business found that this isn’t always the case.

UBC Sauder Associate Professor Dr. Lisa Cavanaugh (she/her) and her research team have found that negative online comments have little effect in cases where brand relationships are strong and consumers personally identify with a brand’s products. In some instances, they found negative reviews can actually have positive effects for brands.

“When consumers personally identify with a brand, they see facets of themselves in that brand,” Dr. Cavanaugh said. “When a reviewer leaves a disparaging comment about an identity-relevant brand, consumers feel compelled to protect the brand, and by extension themselves, by scrutinizing the source of the negative review.”


Social proximity matters when it comes to reviews

The researchers found that the social proximity (e.g., demographic or geographic location) of a negative reviewer plays a critical role in how consumers respond to a negative review.

For the study, researchers conducted sixteen different experiments using identity-relevant brands, which are companies that consumers tend to feel strongly about and personally identify with, such as Apple, Tim Hortons, Roots, and NFL football.

In one experiment, researchers surveyed NFL fans, asking them about their reactions to negative or positive online reviews of an NFL-branded sweatshirt. They found that if the reviewer’s profile reflected a different social demographic, or a distant location, a negative review could actually boost the participant’s interest in buying the sweatshirt even more than a positive review did.

A similar case was also observed where a negative review from a person in distant social proximity increased the likelihood participants would buy an Apple watch. And in another experiment involving President’s Choice packaged coffee, participants who read a negative review by a socially distant reviewer demonstrated a 6 to 12 per cent boost in their interest in purchasing the product.

“Marketers have generally assumed that when people say positive things, purchase interest increases, and when people say negative things, purchase interest decreases,” says Dr. Cavanaugh, who co-authored the study with Boston College’s Dr. Nailya Ordabayeva, and UBC Sauder Professor Dr. Darren Dahl. “But when negative comments come from a socially distant source, a negative review actually increases purchase intentions – and that is a game changer.” 

Dr. Cavanaugh explains that when we read negative reviews coming from someone we perceive as socially closer (e.g., maybe they share a national identity, they are a fellow student, or they like the same music) we tend to listen and accept what they have to say, causing us to downgrade our assessment of the product, as well as our willingness to purchase it.

“But when that negative review comes from a socially distant person, it is an entirely different story. You will scrutinize the source of the review and look for reasons not to believe it, specifically when it’s an identity-relevant brand,” she said.

Bad reviews are still bad for lesser-known brands

Businesses shouldn’t get too comfortable with poor reviews, however.

Researchers observed that if consumers don’t personally identify with the product or brand — if it’s a toilet brush as opposed to an Apple computer or Timbits, for example — they won’t be driven to defend it, so poor reviews will still pack a punch.

Dr. Cavanaugh says the research reinforces the importance of forging strong brand relationships with customers, and cultivating that coveted connection to people’s identities.

In terms of managing online reviews, she also points to the importance of displaying reviewers’ profiles and review histories so readers can more easily assess their social proximity and any pattern of negative reviews. She further advises that, sometimes, doing nothing (i.e., allowing those negative reviews to be seen) is the better approach when it comes to negative reviews of identity-relevant brands.

This study was published in the Journal of Marketing.
