Artificial intelligence is showing up in new ways across almost every security tool enterprises rely on, but the industry’s lopsided focus on the technology, instead of the underlying data, is misguided, according to IDC analysts.
AI can and will deliver positive results for defense, but those outcomes depend on the structure and integrity of the underlying data that feeds it, the research firm concluded in a guide for CISOs weighing the benefits of AI.
Data is the enabling infrastructure for security AI, and the structure, management and curation of that data will determine its success, according to Frank Dickson, group VP for IDC’s security and trust practice.
“Right now it’s more ingredients, it’s more hype than outcome,” Dickson told Cybersecurity Dive. “We’re going to figure out the outcome, but I don’t think we quite know the outcome yet.”
The industry is trying to apply generative AI to defense by lumping it together with the benefits predictive AI has delivered to cybersecurity for at least a decade, Dickson said.
Many cybersecurity vendors already analyze data at scale with predictive AI to spot statistical patterns and prevent potential problems.
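The pattern spotting Dickson describes can be reduced, in spirit, to a statistical baseline. The toy Python below is a hedged illustration only, not any vendor's model: it flags a synthetic failed-login count that drifts far from its historical average, the kind of signal predictive systems compute at vastly larger scale. All data, field names and thresholds are assumptions made for the example.

```python
# Illustrative sketch only: a toy statistical baseline of the kind predictive
# models build at much larger scale. Data and thresholds are assumptions,
# not any vendor's implementation.
from statistics import mean, stdev

# Hypothetical hourly counts of failed logins for one account (synthetic data)
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4, 6, 3]
observed = 42  # new observation to score

mu, sigma = mean(baseline), stdev(baseline)
z_score = (observed - mu) / sigma  # distance from the historical norm

# Flag anything more than three standard deviations above baseline
if z_score > 3:
    print(f"anomaly: {observed} failed logins (z={z_score:.1f})")
else:
    print(f"normal: {observed} failed logins (z={z_score:.1f})")
```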
“Predictive AI is like meat — it’s full of protein, it’s full of nutrition, it’s great for you and it fundamentally builds muscle and bones,” Dickson said.
“Generative AI gives you energy quick. It’s like an energy drink, it’s got lots of sugar, it’s got lots of caffeine, it gets me through the day real fast. It’s great, but at the end of the day I’m exhausted, I’m tired and I’m cranky,” he said. “It helped me address that point problem but it’s not integral. It didn’t permanently solve my problem. It just got me to the end of the day.”
This is where the data that cybersecurity vendors and enterprises feed into generative AI models comes into play.
CISOs and cybersecurity practitioners need to address the structure, management and curation of data as they pursue benefits from AI for defense, Dickson said.
The challenge enterprises face in normalizing and qualifying security telemetry from multiple sources is dramatically more difficult than building generative AI tools, he said. “The progress that we've had in generative AI in the past year is probably equal to what we've done in data in the past 10.”
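To make that normalization challenge concrete, the sketch below shows, under assumed field names and formats, what mapping two hypothetical telemetry sources onto one shared event schema might look like. Real deployments contend with dozens of sources, inconsistent semantics and far messier data than this example implies.

```python
# Illustrative sketch only: collapsing two hypothetical log formats into one
# common event schema. Field names and sources are assumptions for the example.
from datetime import datetime, timezone

def normalize_firewall(event: dict) -> dict:
    """Map a hypothetical firewall record onto a shared schema."""
    return {
        "timestamp": datetime.fromtimestamp(event["epoch"], tz=timezone.utc).isoformat(),
        "source_ip": event["src"],
        "action": event["disposition"].lower(),   # e.g. "BLOCK" -> "block"
        "origin": "firewall",
    }

def normalize_endpoint(event: dict) -> dict:
    """Map a hypothetical endpoint-agent record onto the same schema."""
    return {
        "timestamp": event["detected_at"],        # already ISO 8601 in this example
        "source_ip": event.get("ip", "unknown"),
        "action": event["verdict"],
        "origin": "endpoint",
    }

raw_events = [
    ("firewall", {"epoch": 1700000000, "src": "10.0.0.5", "disposition": "BLOCK"}),
    ("endpoint", {"detected_at": "2023-11-14T22:13:20+00:00", "verdict": "quarantine"}),
]

normalizers = {"firewall": normalize_firewall, "endpoint": normalize_endpoint}
unified = [normalizers[kind](payload) for kind, payload in raw_events]

for event in unified:
    print(event)
```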
The cybersecurity industry is making progress on these fronts, partly because security professionals are distrusting by default, he said.
This predisposition will empower security professionals to be deliberate and avoid the pitfalls that might await other industries rushing to adopt AI, according to Dickson.
As cybersecurity vendors scramble to integrate generative AI interfaces into their tools, IDC advises enterprises to force their vendors to demonstrate true value.
Benefits that can’t be objectively measured may not be real.