BizNews

Security response planning on the rise, but containing attacks remains an issue – IBM

While organizations surveyed have slowly improved in their ability to plan for, detect and respond to cyberattacks over the past five years, their ability to contain an attack has declined by 13% during this same period.

IBM announced the results of a global report examining businesses’ effectiveness in preparing for and responding to cyberattacks. While organizations surveyed have slowly improved in their ability to plan for, detect and respond to cyberattacks over the past five years, their ability to contain an attack has declined by 13% during this same period.

The global survey conducted by Ponemon Institute and sponsored by IBM Security found that respondents’ security response efforts were hindered by the use of too many security tools, as well as a lack of specific playbooks for common attack types.

While security response planning is slowly improving, the vast majority of organizations surveyed (74%) are still reporting that their plans are either ad-hoc, applied inconsistently, or that they have no plans at all. This lack of planning can impact the cost of security incidents, as companies that have incident response teams and extensively test their incident response plans spend an average of $1.2 million less on data breaches than those with neither of these cost-saving factors in place.

The key findings of those surveyed from the fifth annual Cyber Resilient Organization Report include:

  • Slowly Improving: More surveyed organizations have adopted formal, enterprise-wide security response plans over the past 5 years of the study; growing from 18% of respondents in 2015, to 26% in this year’s report (a 44% improvement).
  • Playbooks Needed: Even amongst those with a formal security response plan, only one third (representing 17% of total respondents) had also developed specific playbooks for common attack types — and plans for emerging attack methods like ransomware lagged even further behind.
  • Complexity Hinders Response: The number of security tools that an organization was using had a negative impact across multiple categories of the threat lifecycle amongst those surveyed. Organizations using 50+ security tools ranked themselves 8% lower in their ability to detect, and 7% lower in their ability to respond to an attack, than respondents using fewer tools.
  • Better Planning, Less Disruption: Companies with formal security response plans applied across the business were less likely to experience significant disruption as the result of a cyberattack. Over the past two years, only 39% of these companies experienced a disruptive security incident, compared to 62% of those with less formal or consistent plans.

“While more organizations are taking incident response planning seriously, preparing for cyberattacks isn’t a one and done activity,” said Wendi Whitmore, Vice President of IBM X-Force Threat Intelligence. “Organizations must also focus on testing, practicing and reassessing their response plans regularly. Leveraging interoperable technologies and automation can also help overcome complexity challenges and speed the time it takes to contain an incident.”

Updating Playbooks for Emerging Threats
The survey found that even amongst organizations with a formal cybersecurity incident response plan (CSIRP), only 33% had playbooks in place for specific types of attacks. Since different breeds of attack require unique response techniques, having pre-defined playbooks provides organizations with consistent and repeatable action plans for the most common attacks they are likely to face.   

Amongst the minority of responding organizations who do have attack-specific playbooks, the most common playbooks are for DDoS attacks (64%) and malware (57%). While these methods have historically been top issues for the enterprise, additional attack methods such as ransomware are on the rise. While ransomware attacks have spiked nearly 70% in recent years, only 45% of those in the survey using playbooks had designated plans for ransomware attacks.

Additionally, more than half (52%) of those with security response plans said they have never reviewed or have no set time period for reviewing or testing those plans. With business operations changing rapidly due to an increasingly remote workforce, and new attack techniques constantly being introduced, this data suggests that surveyed businesses may be relying on outdated response plans which don’t reflect the current threat and business landscape.

More Tools Led to Worse Response Capabilities
The report also found that complexity is negatively impacting incident response capabilities. Those surveyed estimated their organization was using more than 45 different security tools on average, and that each incident they responded to required coordination across around 19 tools on average. However, the study also found that an over-abundance of tools may actually hinder organizations' ability to handle attacks. In the survey, those using more than 50 tools ranked themselves 8% lower in their ability to detect an attack (5.83/10 vs. 6.66/10), and around 7% lower when it comes to responding to an attack (5.95/10 vs. 6.72/10).

These findings suggest that adopting more tools didn’t necessarily improve security response efforts — in fact, it may have done the opposite. The use of open, interoperable platforms as well as automation technologies can help reduce the complexity of responding across disconnected tools. Amongst high-performing organizations in the report, 63% said the use of interoperable tools helped them improve their response to cyberattacks.

Better Planning Pays Off
This year’s report suggests that surveyed organizations who invested in formal planning were more successful in responding to incidents. Amongst respondents with a CSIRP applied consistently across the business, only 39% experienced an incident that resulted in a significant disruption to the organization within the past two years, compared to 62% of those who didn’t have a formal plan in place.

Looking at specific reasons that these organizations cited for their ability to respond to attacks, security workforce skills were found to be a top factor. 61% of those surveyed cited hiring skilled employees as a top reason for becoming more resilient; amongst those who said their resiliency did not improve, 41% cited the lack of skilled employees as the top reason.

Technology was another differentiator that helped organizations in the report become more cyber resilient, especially when it comes to tools that helped them resolve complexity. Looking at organizations with higher levels of cyber resilience, the top two factors cited for improving their level of cyber resilience were visibility into applications and data (57% selecting) and automation tools (55% selecting). Overall, the data suggests that surveyed organizations that were more mature in their response preparedness relied more heavily on technology innovations to become more resilient.

BizNews

E-commerce retailers can save money by considering pick failures at stores

While warehouses are built for efficiency in picking, packing, and shipping items, pick failures are much higher in physical stores that are not designed for these purposes for several reasons (e.g., customers moving inventory without tracking, delivery receiving and recording errors, issues with labeling, theft).

The share of e-commerce retail sales has grown steadily over the last decade. This trend has been driven by retailers with traditional brick-and-mortar stores adopting online channels to connect to customers. In a new study, researchers explored the world of omnichannel retailing — the merging of in-store and online channels in which customers can select from a combination of online and physical channels to place and receive orders.

The study examined top U.S. retailers’ use of omnichannel ship-from-store programs in which retailers use store inventory to deliver orders to homes instead of using a dedicated warehouse or fulfillment center. For the first time, the study incorporated the possibility that fulfillment attempts at stores may fail, and identified how such retailers can adopt a policy that leads to significant savings when these effects are considered.

Conducted by researchers at Carnegie Mellon University (CMU) and Onera, Inc., the study is published in Manufacturing & Service Operations Management.

“The rising trend in e-commerce has been accelerated by the COVID-19 pandemic, with online sales jumping from 11.8 percent in the first quarter of 2020 to 16.1 percent in the second quarter,” says Sagnik Das, a former Ph.D. Candidate in Operations Research at CMU’s Tepper School of Business, who led the study. “In omnichannel fulfillment, retailers attempt to minimize costs while fulfilling orders within acceptable time periods.”

Das and his colleagues focused on single-item orders. Typically, an online order is routed through a favorable sequence of locations to be filled in order. If a pick attempt fails (i.e., the order is not filled), the order is passed to stores later in the sequence for further attempts, until the process reaches a time limit.

“The problem of multistage order fulfillment is an interplay of pick failure — that is, the likelihood that orders will not be filled due to unavailability — at the stores where they may be shipped from, walk-in demand at the stores, and associated shipping costs,” explains R. Ravi, Andris A. Zoltners Professor of Business, and of Operations Research and Computer Science, at CMU’s Tepper School of Business, who co-authored the study.

As stores become an integral part of retailers’ fulfillment strategy in omnichannel ship-from-store programs, the high rate of pick failures at stores becomes a considerable factor in fulfillment costs. While warehouses are built for efficiency in picking, packing, and shipping items, pick failures are much higher in physical stores that are not designed for these purposes for several reasons (e.g., customers moving inventory without tracking, delivery receiving and recording errors, issues with labeling, theft).

Researchers modeled the problem as one of sequencing the stores from which to attempt picking and shipping an order, based on anticipated pick failures, in the most cost-effective way over several stages. To identify the best solution to the fulfillment problem, they modeled pick-failure probabilities as a function of current inventory positions and the results of other online order fulfillment trials.
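The core sequencing intuition can be illustrated with a small sketch. This is not the authors’ model — their pick-failure probabilities depend on inventory positions and other orders — but a simplified version with hypothetical, independent per-store failure rates and shipping costs: when an order is tried at stores one by one until a pick succeeds, a classic exchange argument shows that trying stores in ascending order of cost divided by success probability minimizes expected cost.

```python
# Illustrative sketch only: hypothetical costs and pick-failure rates,
# with failures assumed independent across stores (the study's actual
# model conditions failure probabilities on inventory and other orders).

def expected_cost(sequence):
    """Expected cost of trying stores in the given order,
    stopping at the first successful pick."""
    total, prob_unfilled = 0.0, 1.0
    for cost, fail_prob in sequence:
        total += prob_unfilled * cost   # pay the cost only if we attempt here
        prob_unfilled *= fail_prob      # probability the order is still unfilled
    return total

# Each store: (cost to pick and ship from this store, pick-failure probability)
stores = [(8.0, 0.30), (5.0, 0.10), (12.0, 0.05)]

# Exchange argument: ascending cost / success-probability is optimal.
best = sorted(stores, key=lambda s: s[0] / (1.0 - s[1]))

print(best)                            # cheapest-per-success store first
print(round(expected_cost(best), 2))   # 5 + 0.1*8 + 0.1*0.3*12 = 6.16
```

Under these toy numbers, the mid-priced store with the lowest failure rate is tried first; a naive cheapest-first or most-reliable-first rule would both yield a higher expected cost.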

The study used data on actual orders from several top U.S. retailers that worked with an e-commerce solutions provider to optimize their fulfillment strategies. Researchers proposed three order fulfillment models: one in which physical and online demand were both sparse, another in which physical demand was dense, and another in which both demands were dense. They extended the third model to also incorporate order acceptance decisions along with sequencing the stores from where they are filled once accepted.

By enabling retailers to incorporate the probability of pick failure in their order management systems for ship-from-store programs, the study’s proposed online order-acceptance policies saved omnichannel retailers as much as 22 percent. Specifically, the researchers identified the optimal sequence of stores in which to attempt accepted orders so as to minimize costs; one of the policies also uses these downstream costs to determine when to shut off the online channel for selling certain items based on current inventory availability levels.

“Our study demonstrates that modeling pick failures along with their interaction with selecting and shipping costs is an important component in optimizing ship-from-store fulfillment costs for large retailers,” says Srinath Sridhar, Chief Technology Officer at Onera, Inc., who co-authored the study.

BizNews

Choosing a lucky CEO means bad luck for the hiring company

Sometimes CEOs happen to attain outstanding performance thanks to events beyond their control. Firms that subsequently hire them pay them more and experience declining results, according to a study.

Seneca, the Roman stoic philosopher, wrote that “luck does not exist.” Modern managerial studies take the liberty of disagreeing. Luck exists in the form of events that are beyond the control of CEOs and firms alike. Movements in oil prices and the business cycle (e.g., variations in GDP growth and employment rate) that boost the market value of firms are a couple of examples.

A recent study by Mario Daniele Amore (Bocconi University, Milan) and Sebastian Schwenen (Technical University of Munich) found that choosing a lucky CEO means bad luck for the hiring company.

Good luck allows CEOs to “shine” in the labor market, making them more likely to leave their firm. “The hiring companies, though, are not perfectly able to separate out luck from task performance in their candidate pool,” Prof. Amore explained. “Therefore, lucky CEOs are likely to possess greater bargaining power vis-a-vis new firms’ shareholders, and thus gain benefits in the form of higher compensation and more attractive job assignments.”

Using a sample of S&P 1,500 US firms from 1992 to 2018, the authors found a positive association between a CEO’s luck at the departing firm and the level of pay at the new firm. Specifically, this larger pay is mostly made up of non-cash compensation items like stock awards and options, rather than salary and bonus. More interestingly, lucky CEOs were observed to move more swiftly to new firms and to have a shorter gap between CEO jobs.

Authors also observed that the increase in lucky CEOs’ bargaining power especially occurs in less competitive industries.

Unfortunately, incoming CEOs’ luck is also associated with a subsequent decline in the performance of the hiring firms. In particular, the performance of firms that hired low-luck CEOs gradually improves, whereas the performance of firms that hired high-luck CEOs experiences a moderate decline.

What is worse, luck may induce an attribution bias: high-luck CEOs, or the boards that hire them, misattribute luck-driven performance to observed individual actions, with the consequence that lucky CEOs will likely implement at the hiring firm the same corporate investment policies they implemented in their former companies, irrespective of their real effectiveness.

“Luck increases the attractiveness of CEOs in the managerial labor market of less competitive industries, bringing about higher bargaining power of lucky CEOs to transit swiftly and earn more. Nevertheless, appointing a lucky CEO is associated with poorer company performance and slower growth,” Professor Amore concluded.

Mario Daniele Amore and Sebastian Schwenen wrote “Hiring Lucky CEOs”, which was published in The Journal of Law, Economics, and Organization.

BizNews

Purpose beyond profit: How brands can benefit consumer well-being

If a brand adequately addresses moderating factors, the potential benefits to consumers and marketers are considerable. These factors include consumer trust, brand authenticity, brand credibility, commitment to purpose, consumer-value congruence, and brand-purpose proximity.

Researchers from The Wharton School of the University of Pennsylvania and the Owen Graduate School of Management at Vanderbilt University published a new paper in the Journal of Consumer Psychology that offers fresh insights into “brand purpose” and its potential benefits to consumers.

The article, “Conceptualizing brand purpose and considering its implications for consumer eudaimonic well-being,” is authored by Patti Williams, Jennifer Edson Escalas, and Andrew Morningstar.

In response to industry reports, apparent consumer demand, and high-profile calls from top executives including BlackRock Chairman and CEO Larry Fink, brands have publicly begun pursuing purpose beyond profit. Brands in a wide variety of categories have sought to define, articulate, communicate, and act according to their “brand purpose.” 

The authors define brand purpose as a brand’s long-term aim central to “identity, meaning structure and strategy” that leads to productive engagement with some aspect of the world beyond profit.  

This research team explores the different types of well-being consumers may experience by engaging with brands they believe reflect their own values. Specifically, they focus on eudaimonia, a feeling of fulfillment resulting from living a meaningful life, contributing meaningfully to society, and acting in alignment with moral virtues.

Their framework cites five mediating factors that affect the relationship between brand purpose and consumer well-being: consumer purpose, meaning and significance, self-acceptance/achievement of true self, positive relationships, and other-praising emotions.  

The article suggests that, if a brand adequately addresses moderating factors, the potential benefits to consumers and marketers are considerable. These factors include consumer trust, brand authenticity, brand credibility, commitment to purpose, consumer-value congruence, and brand-purpose proximity.

While consumers may gain a vital sense of well-being, marketers may secure positive brand judgments, brand loyalty, and brand evangelism.

“The ultimate goal of our review,” the authors write, “is to guide future consumer psychology research into brand purpose, a concept that we believe may have a transformative impact on business, consumers, and society.”
