Will Generative AI Impact IoT Cybersecurity?
Artificial intelligence (AI) technology has taken the business world by storm since the release of ChatGPT in late 2022. AI is not exactly new to cybersecurity – endpoint detection and response (EDR) tools have used machine learning algorithms to detect advanced attacks for the past decade – but the scale of the large language models (LLMs) behind ChatGPT and other generative AI solutions has created an arms race of sorts in the vendor landscape.
With good reason. The new class of generative AI models has the capacity to analyze and quantify large datasets substantially faster than previous machine learning algorithms. Security operations center (SOC) teams can become more effective with the time savings made available by AI’s processing power. Detecting anomalous behavior at scale, uncovering malicious traffic, and determining the validity of alerts can all be performed far faster than before.
In the world of Internet of Things (IoT) security, the ability to analyze data at scale to uncover anomalous behavior is a force multiplier. There are more than 15 billion IoT devices connected worldwide today, a number expected to double by 2030. Understanding appropriate behavior and analyzing traffic going to and coming from these connected devices is a titanic undertaking. A tool that can analyze extremely large datasets at speed and empower better decision-making can benefit organizations that are already resource-constrained when it comes to cybersecurity.
Despite the clear benefits, AI technology is not a panacea. AI-enabled security tools can speed up decision-making and make human team members more effective, but deploying AI-infused systems does not, by itself, amount to a comprehensive IoT security strategy.
Generative AI Cybersecurity Trends and Watchouts
The spread of generative AI into the cybersecurity realm has brought more than a few major changes. Already, 69% of organizations have officially adopted generative AI tools into their operations, according to Deep Instinct. In the same study, senior security professionals tended to view generative AI as a disruptive threat, with 46% of respondents believing it will increase their organization’s vulnerability to attacks.
In the same survey, 75% of security professionals said attacks have increased over the past 12 months, with 85% attributing the rise to bad actors using generative AI. That volume is likely to grow over time, adding to the pressure cybersecurity teams already face from trends such as ransomware-as-a-service and a lower barrier to entry for cybercriminals.
Part of the issue with those attacks is that generative AI makes it easy to create fake content for social engineering, such as phishing emails. Historically, threat actors showed little interest in using AI algorithms in their attacks, despite their availability. The rise of generative AI, especially for content creation, has made these technologies far more attractive to threat actors, according to Mandiant.
Tools like FraudGPT and WormGPT are already being marketed on cybercrime forums as malicious LLMs designed to make cyberattacks easier. Reports have emerged of both being offered for sale in various venues; the primary difference between these LLMs and mainstream ones like ChatGPT is the absence of restrictions on answering questions about illegal activity.
Cybercriminals can use these LLMs to create and deploy bots at massive scale. Defenders and white hat hackers have access to the same tooling and can do the same. The result is a “war” of sorts between cybercriminals and cyber defenders, fought with internet-deployed bots created at unheard-of scale.
On a more positive note, generative AI tools are being used to generate attack scenarios that defenders can run against their systems and processes. In IoT security, AI is being deployed for threat detection and behavior analysis in addition to vulnerability scanning and detection. Ultimately, AI tools can provide substantial benefits by analyzing extremely large datasets far faster than humans can. This saves staff time and ideally gives team members more direction about where to focus their limited efforts.
Asimily’s Deep Packet Inspection Augmented with AI
AI techniques work well with the large datasets available for network traffic – sources, destinations, and the traffic itself. That is all structured data, available in large quantities – ideal for AI to analyze and draw insight from.
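To make that concrete, here is a minimal sketch of unsupervised anomaly detection over structured flow records. The field names, sample values, and the IsolationForest model are illustrative assumptions, not a description of any particular vendor’s pipeline:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical flow records: one row per observed connection.
flows = pd.DataFrame({
    "src_port":   [49152, 49153, 49154, 49155, 8080],
    "dst_port":   [443, 443, 443, 443, 6667],
    "bytes_sent": [1200, 1350, 1100, 1280, 980000],
    "bytes_recv": [5400, 5900, 5100, 5600, 120],
    "duration_s": [2.1, 2.4, 1.9, 2.2, 3600.0],
})

# Unsupervised model: learns what "normal" flows look like and flags outliers.
# contamination is the expected fraction of anomalies; tune it per environment.
model = IsolationForest(contamination=0.2, random_state=42)
flows["anomaly"] = model.fit_predict(flows)  # -1 = anomalous, 1 = normal

print(flows[flows["anomaly"] == -1])  # surfaces the long-lived, high-upload flow
```

An unsupervised approach fits this problem because labeled examples of IoT attacks are scarce; the model learns what normal traffic looks like from the traffic itself.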
The problem, especially with connected devices, is that collecting and analyzing traditional network data is only part of the solution. Ingesting basic packet-level data can surface issues, of course, but the scale of the installed base of IoT devices means packet-level data collection can only go so far in identifying problems.
The more comprehensive data collection of deep packet inspection (DPI) is required to truly ensure network security. DPI locates, identifies, classifies, and reroutes or blocks packets carrying specific data or code payloads – a more granular approach to security for critical connected devices.
Augmenting AI tools with DPI on the network allows more nuance in identifying anomalous behavior. AI can flag suspicious packets as they traverse the network, but only from the perspective of the header information collected with traditional packet filtering. When DPI is part of standard operating procedure, security teams can confirm the algorithm’s suspicions about specific traffic with the more granular payload data DPI gathers.
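As an illustration, the sketch below pairs that header-level view with a payload check using scapy. The byte signatures and matching logic are hypothetical stand-ins for real DPI rules, not Asimily’s detection logic:

```python
from scapy.all import IP, Raw, sniff

# Hypothetical byte signatures a DPI rule might look for inside payloads.
SIGNATURES = [b"cmd.exe", b"/bin/sh", b"eval(base64_decode"]

def inspect(pkt):
    # Addresses and ports are header-level data that traditional filtering sees.
    if pkt.haslayer(IP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # The DPI step: look inside the payload itself, not just the headers.
        for sig in SIGNATURES:
            if sig in payload:
                print(f"DPI hit: {pkt[IP].src} -> {pkt[IP].dst} matched {sig!r}")

# Inspect 100 live packets (requires capture privileges, e.g. root).
sniff(count=100, prn=inspect)
```

In practice, a DPI hit like this would be correlated with the AI model’s anomaly score rather than acted on in isolation.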
Asimily’s Threat Detection Upleveled with AI
Asimily integrates AI throughout its security platform to augment its already powerful IoT security capabilities. That speed of data analysis shows up most clearly in our anomalous behavior and threat detection capabilities.
This DPI-driven data collection, paired with AI’s ability to analyze data at scale, enables behavior monitoring that can surface anomalous activity far faster than human analysts reviewing the same data. Collecting data this way and providing insight at machine speed gives organizations actionable intelligence sooner.
Similarly, on the threat detection side, Asimily’s packet capture capabilities help network analysts and SOC teams expedite incident response by detecting, investigating, and responding to costly cyberattacks automatically or on demand, potentially saving millions in losses from stolen data.
Packet capture lets network teams record data continuously or on an arbitrary or pre-programmed interval. That information helps security officers identify a breach quickly and accurately, and AI-driven analysis of the captured data makes incident responders faster still.
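A minimal sketch of interval-based capture might look like the following, assuming scapy and a fixed 60-second window chosen purely for illustration; this is not Asimily’s capture mechanism, and a production system would rotate, index, and retain these files according to policy:

```python
import time
from scapy.all import sniff, wrpcap

CAPTURE_WINDOW_S = 60  # pre-programmed interval; value chosen for illustration

while True:
    # Capture one window of traffic, then persist it for later analysis.
    packets = sniff(timeout=CAPTURE_WINDOW_S)
    filename = f"capture_{int(time.time())}.pcap"
    wrpcap(filename, packets)
    print(f"wrote {len(packets)} packets to {filename}")
```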
The AI revolution is already here. Threat actors have recognized the power of AI-generated content for social engineering attacks and are starting to use AI tools to create malware code at speed. Defenders need to keep up, adopting AI for data analysis and for practice scenarios that enhance response. Asimily is helping, with AI integrated into its technology to empower better threat detection at scale and sharper insight into anomalous behavior. AI may not be enough on its own, but it can make defenders better through more information and faster insight.
To learn more about the Asimily risk remediation platform, download our Total Cost of Ownership Analysis on Connected Device Cybersecurity Risk whitepaper or contact us today.
Reduce Vulnerabilities 10x Faster with Half the Resources
Find out how our innovative risk remediation platform can help keep your organization’s resources safe, users protected, and IoT and IoMT assets secure.