Microsoft recently announced how they leverage ChatGPT to augment security operations and, specifically, incident response (IR) teams with AI. This announcement is part of a broader pattern of organizations discussing their experience augmenting IR teams with AI. Omar Alanezi does an effective job breaking down the traditional steps of incident response and exploring which are most affected by AI. Cado Security has reached a similar conclusion: mean time to resolution can be improved with AI.
I generally agree with their conclusions. For small-scale experimentation, they are accurate and reflective of the complexity of many enterprises. The difficulty for many organizations will be adopting these tools at scale in a way that provides a positive impact, does not inject bad data into already fast-moving processes, and remains reproducible and auditable for later investigation and analysis.
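Reproducibility and auditability do not come for free with AI-assisted workflows; every prompt and response that influenced an investigation needs to be retained in a form that can be reviewed later. As a minimal sketch of what that could look like, the hypothetical helper below (the function name, record fields, and hash-chaining scheme are my own illustration, not any vendor's API) appends each AI-assisted step to an in-memory audit log, chaining record hashes so later tampering is detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_step(audit_log: list, prompt: str, response: str, model: str) -> dict:
    """Append a tamper-evident record of one AI-assisted IR step.

    Hypothetical sketch: a real deployment would write to append-only
    storage (e.g. a WORM bucket or SIEM), not a Python list.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        # Hashes let auditors verify the stored text was not altered.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prompt": prompt,
        "response": response,
    }
    # Chain each record to the previous one so deleting or editing an
    # earlier entry invalidates every hash that follows it.
    prev = audit_log[-1]["record_sha256"] if audit_log else ""
    record["record_sha256"] = hashlib.sha256(
        (prev + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    audit_log.append(record)
    return record
```

The hash chain is the key design choice: an investigator reviewing the incident months later can recompute the chain and confirm the AI's inputs and outputs are exactly what the responders saw at the time.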
When thinking about the adoption of Generative AI in our cyber teams, we must first define our operating principles:
Finally, no Generative AI post would be complete without exploring hallucinations. Every incident is unique. ChatGPT and associated tools risk surfacing past experiences in unhelpful or even harmful ways, as highlighted by Adam Cohen Hillel at Cado Security: “see that the username ‘hacker’ was not actually involved in the incident—that was the model’s own invention.” Hallucinations are a growing concern in the AI community. They drive many to require humans to stay in the loop for decision-making, to ensure these outcomes are not acted on. Models are regularly improved by understanding how hallucinations make it into user results.
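One practical mitigation for exactly the failure Cado Security describes is grounding: checking every entity the model mentions against the incident's actual evidence before a human acts on it. The sketch below is a hypothetical illustration (the function name, the naive regex, and the example usernames are my own assumptions, not Cado's implementation) that flags usernames appearing in a model summary but absent from the real log data:

```python
import re

def flag_hallucinated_users(model_summary: str, known_users: set) -> set:
    """Return usernames the model mentions that never appear in the
    incident's actual log data -- hallucination candidates a human
    reviewer should check before acting.

    Hypothetical sketch: real pipelines would use proper entity
    extraction, not this naive regex over free text.
    """
    # Match phrases like: user alice, username 'hacker', username "bob"
    mentioned = set(
        re.findall(r"user(?:name)? ['\"]?(\w+)['\"]?",
                   model_summary, re.IGNORECASE)
    )
    # Anything mentioned but not observed in evidence is suspect.
    return mentioned - known_users
```

Given a summary claiming “username 'hacker' pivoted through host-a” and a `known_users` set built from the incident's authentication logs, the function would surface `hacker` for review rather than letting an invented actor drive containment decisions.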
While Generative AI is not ready to take over the role of our IR teams, it is another powerful tool to augment them, providing more complete coverage of events, timelines, system interactions, and remediation guidance. As part of an IR toolbox, Generative AI can improve accuracy when reconstructing a series of events and offer deeper insight into their relationships and influence when investigating cyber incidents.
These tools do not replace our IR teams, nor do they enable us to decrease the size of our teams. As Generative AI tools continue to evolve, the winners in this market will be those that combine industry experience with deep insight into your enterprise's specific architecture, traffic patterns, user behaviors, and threats. The big win will be LLMs trained on a combination of industry data and localized, company-specific data; the intersection of those two is powerful for incident response and other cyber security tasks.