AI cybersecurity solutions detect ransomware in under 60 seconds – Security Intelligence
Worried about ransomware? If so, it's not surprising. According to the World Economic Forum, for large cyber losses (€1 million or more), the share of cases in which data is exfiltrated doubled from 40% in 2019 to almost 80% in 2022. More recent activity is tracking even higher.
Meanwhile, other dangers are appearing on the horizon. For example, the 2024 IBM X-Force Threat Intelligence Index states that threat group investment is increasingly focused on generative AI attack tools.
Criminals have been using AI for some time now — for example, to assist with phishing email content creation. Also, groups have been using LLMs to help with basic scripting tasks, including file manipulation, data selection, regular expressions and multiprocessing, to potentially automate or optimize technical operations.
Like a chess match, organizations must think several moves ahead of their adversaries. One of these anticipatory moves can include cloud-based AI cybersecurity to help identify anomalies that might indicate the start of a cyberattack.
Recently, AI cybersecurity solutions have emerged that can detect anomalies like ransomware in less than 60 seconds. To help clients counter threats with earlier and more accurate detection, IBM has announced new AI-enhanced versions of the IBM FlashCore Module technology available inside the IBM Storage FlashSystem products and a new version of IBM Storage Defender software. These solutions will help security teams better detect and respond to attacks in the age of artificial intelligence.
Immutable copies of data are used to protect data from corruption, such as ransomware attacks, accidental deletion, natural disasters and outages. These backups are also useful for helping organizations comply with data regulations.
Storage protection based on immutable copies of data is typically separated from production environments. These safeguarded copies cannot be modified or deleted by anyone and are only accessible by authorized administrators. This type of solution offers the cyber resiliency necessary to ensure immediate access to data recovery in response to ransomware attacks.
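The retention-lock behavior described above can be sketched in miniature. The class below is purely illustrative: in practice, immutability is enforced by the storage array or backup platform itself, not by application code, and the names and retention mechanics here are hypothetical.

```python
import time

class SafeguardedCopy:
    """Minimal sketch of a retention-locked (WORM-style) backup copy.

    Illustrative only: real safeguarded copies are enforced below the
    application layer, so no software bug or compromised admin account
    can bypass the lock.
    """

    def __init__(self, data: bytes, retention_seconds: float):
        self._data = bytes(data)
        self._expires_at = time.monotonic() + retention_seconds

    @property
    def data(self) -> bytes:
        # Reads are always allowed; the copy exists to enable recovery.
        return self._data

    def overwrite(self, new_data: bytes) -> None:
        # Safeguarded copies can never be modified, by anyone.
        raise PermissionError("safeguarded copies cannot be modified")

    def delete(self) -> None:
        # Deletion is refused until the retention period has expired.
        if time.monotonic() < self._expires_at:
            raise PermissionError("retention lock has not expired")
        self._data = b""
```

The key design property is that modification and early deletion are refused unconditionally, so a recovery point survives even if production credentials are stolen.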
However, given the growing need for AI-ready ransomware security, new solutions are in demand. Unlike traditional storage arrays, systems like IBM FlashSystem leverage machine learning to monitor data patterns, looking for anomalous behaviors indicative of a cyber threat.
This new technology is designed to continuously monitor statistics gathered from every single I/O using machine learning models to detect anomalies like ransomware in less than a minute.
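One signal such I/O-level monitoring can use is payload entropy: ransomware encrypting files in place tends to produce sustained writes of near-random data. The sketch below is a simplified, hypothetical illustration of that idea (the class, window size, and threshold are assumptions, not IBM's actual models, which operate inside the storage hardware).

```python
import math
from collections import Counter, deque

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted data approaches 8.0."""
    if not block:
        return 0.0
    n = len(block)
    counts = Counter(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

class IOAnomalyDetector:
    """Hypothetical sketch: flag a sustained burst of high-entropy writes.

    Real systems combine many per-I/O statistics with trained ML models;
    this toy uses a single feature and a fixed threshold for clarity.
    """

    def __init__(self, window: int = 100, threshold_bits: float = 7.5):
        self.recent = deque(maxlen=window)  # rolling entropy window
        self.threshold_bits = threshold_bits

    def observe_write(self, payload: bytes) -> bool:
        self.recent.append(shannon_entropy(payload))
        # Alarm only when the window is full and almost all recent
        # writes look like random (i.e., encrypted) data.
        high = sum(1 for e in self.recent if e > self.threshold_bits)
        return (len(self.recent) == self.recent.maxlen
                and high > 0.9 * len(self.recent))
```

Because the check runs per write, an alarm can fire within seconds of encryption starting, rather than after a scheduled scan.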
Advanced systems can use machine learning models to distinguish ransomware and malware from normal behavior. This dramatically accelerates threat detection and response, enabling organizations to take action and keep operating during an attack. For example, autonomous responses can trigger alerts or IT playbook activation that will minimize the impact of an attack against data.
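A tiered response of this kind can be outlined as follows. This is a generic sketch under assumed score thresholds and playbook names, not any vendor's actual API: the point is that responses escalate automatically with the anomaly score.

```python
from typing import Callable

def respond(anomaly_score: float,
            alert: Callable[[str], None],
            run_playbook: Callable[[str], None],
            alert_at: float = 0.7,
            isolate_at: float = 0.9) -> None:
    """Hypothetical tiered autonomous response.

    Thresholds and the playbook name are illustrative. A low score does
    nothing, a medium score alerts operators, and a high score also
    triggers a containment playbook without waiting for a human.
    """
    if anomaly_score >= isolate_at:
        alert(f"CRITICAL: anomaly score {anomaly_score:.2f}")
        run_playbook("isolate-volume-and-snapshot")
    elif anomaly_score >= alert_at:
        alert(f"WARNING: anomaly score {anomaly_score:.2f}")
```

Keeping the containment step as a named playbook lets security teams review and version the response separately from the detection logic.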
Cyber criminals are continuously developing AI-enhanced attack capabilities. AI-driven cyberattacks that can pinpoint vulnerabilities, detect patterns and exploit weaknesses are evolving quickly. Plus, AI's efficiency and rapid data analysis can give hackers a tactical advantage over poorly equipped cyber defenses. Traditional cybersecurity methods are no longer enough to combat AI security threats as new tools evolve in real time. The result is rapid intrusion and undetected ransomware deployment.
Moreover, there are predictions that LLMs and other generative AI tools will be offered as a paid service, much like Ransomware-as-a-Service, helping attackers deploy their campaigns more efficiently and with less effort. This means the threat will grow even more dangerous and more widespread.
The only response is to fight fire with fire. AI cybersecurity solutions, such as AI-enhanced versions of the IBM FlashCore Module technology, are designed to thwart the most dangerous attacks now — as well as the ones that security teams will face in the future.
This article was autogenerated from a news feed of high-quality news and research sources selected by CDO TIMES; no editorial review was conducted beyond that by CDO TIMES staff.

