
A Bit of Security for March 17, 2025
Many organizations are struggling to achieve huge savings by using AI to burn through hard problems. There are some great successes, and many disappointments. In this talk I'll discuss three shortcomings of using AI for threat detection.
AI is great at aggregating huge amounts of data and detecting patterns. A model needs to scan vast amounts of information to develop hypotheses about what it might mean, and then test those hypotheses against the data. As new data arrives, the model may either evaluate it against its existing set of rules, or it may modify its rules to accommodate the new patterns the data reveals.
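To make that loop concrete, here is a minimal sketch in Python of a frequency-based baseline. The event names and the rarity threshold are my own invented assumptions for illustration, not any particular product's behavior:

```python
from collections import Counter

class FrequencyBaseline:
    """Toy anomaly detector: learn how often each event occurs,
    then flag events that fall below a rarity threshold."""

    def __init__(self, min_count=5):
        self.counts = Counter()
        self.min_count = min_count  # hypothetical rarity cutoff

    def learn(self, event):
        # Training phase: accumulate observations without judging them.
        self.counts[event] += 1

    def is_anomalous(self, event):
        # Evaluation phase: score new data against the learned rules.
        return self.counts[event] < self.min_count

    def update(self, event):
        # Online mode: fold new data back into the model, so the
        # rules themselves drift as new patterns appear.
        self.counts[event] += 1

# Hypothetical usage: train on observed process launches, then score.
baseline = FrequencyBaseline()
for e in ["payroll.exe", "payroll.exe", "backup.exe"] * 5:
    baseline.learn(e)
print(baseline.is_anomalous("payroll.exe"))      # False: seen often
print(baseline.is_anomalous("cryptominer.exe"))  # True: never seen
```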
This doesn’t work very well for real-time threat detection for three reasons.
First, the learning process takes time. During that time, the AI model needs to let the system perform uninterrupted. That means for four or five weeks, the AI must leave the system unprotected while it collects enough data to build patterns of normal use. If the AI model is trained on someone else's system, it will permit the processes that the other system uses and block the processes it hasn't seen before. And since no two businesses run the same applications, the AI system will throw a lot of false positives: processes that are unfamiliar but perfectly okay.
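To see why cross-system training yields those false positives, here is a hedged sketch; the process names are invented for illustration:

```python
# Baseline learned on someone else's system (names are hypothetical).
vendor_baseline = {"iis.exe", "sqlservr.exe", "backup_agent.exe"}

# Legitimate workload on *your* system (also hypothetical).
our_processes = {"sqlservr.exe", "claims_batch.exe", "actuarial_model.exe"}

# Anything the model never saw in training gets flagged -- even
# though every one of these processes is perfectly okay here.
false_positives = our_processes - vendor_baseline
print(false_positives)  # {'claims_batch.exe', 'actuarial_model.exe'}
```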
One particular area that remains problematic is year-end processing. Financial reports need to accumulate data annually and produce special reports summarizing the year's events. In my first professional programming job, I wrote some of those reports for the payroll and benefits system at a major insurer in Boston. The data feeds and the programs were not used except during that process. An AI-style threat detection system would flag all these processes as anomalous – which they are, because they run only once a year. And, having worked through a number of year-end closes, I can assure you that you do not want your anti-malware processes to interrupt year-end processing.
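The window arithmetic makes the year-end problem plain. Here is a small sketch, with hypothetical dates, showing that a once-a-year job can never appear in a four-or-five-week baseline:

```python
from datetime import date, timedelta

# Hypothetical five-week training window ending today.
today = date(2025, 3, 17)
window_start = today - timedelta(weeks=5)

# Last time the year-end close jobs ran (hypothetical).
last_yearend_run = date(2024, 12, 31)

# The annual job falls outside the window, so the baseline has
# zero observations of it -- it will be flagged as anomalous.
seen_in_training = window_start <= last_yearend_run <= today
print(seen_in_training)  # False
```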
The second major problem with AI-based threat detection is that it assumes the system is behaving well during its training phase. This leads to two dangers. First, if there is already malware in the system, the AI will decide that the malware is normal. This was the case at SolarWinds, which ran rogue processes for a year under the protection of an AI-based threat detection and prevention product. The other danger is that a malicious individual could, during the training process, teach the AI system to accept certain symptoms as okay. That requires the bad actors to know when the system is going through its training process, and it preps the system for an undetected attack. And regardless, existing malware will still be allowed.
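Here is a hedged sketch of that poisoning risk, with all process names invented: malware that runs during the training window gets learned as normal and sails through detection later.

```python
from collections import Counter

counts = Counter()

# Training window: the system is assumed clean, but it isn't.
training_events = ["payroll.exe", "backup.exe", "implant.exe"] * 10
for event in training_events:
    counts[event] += 1  # the implant is learned as "normal"

def is_anomalous(event, min_count=5):
    return counts[event] < min_count

# Detection phase: the pre-existing malware is never flagged.
print(is_anomalous("implant.exe"))  # False -- baked into the baseline
```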
The third major problem with AI-based threat detection is its resource cost. Continuously monitoring processes at run time takes an enormous amount of overhead. For each conditional branch in a program, it takes at least eight instructions to detect the condition and log the action – that is how the data for the AI's analysis comes into being. Since complex code has roughly one conditional branch for every seven instructions, that essentially doubles the run time for processes in learning mode. And that doesn't count the asynchronous process of gathering that log data and writing it to a large, inexpensive storage device for the AI's later analysis. Even for the fastest bus-attached solid-state storage devices, physical I/O takes thousands of times longer than a CPU clock cycle; physical hard drives are orders of magnitude slower than that.
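Working through that arithmetic, with the figures above taken as rough assumptions:

```python
# Rough overhead arithmetic, using the per-branch figures as assumptions.
instructions_per_branch = 7   # one conditional branch per ~7 instructions
logging_cost_per_branch = 8   # instructions to detect and log each branch

slowdown = (instructions_per_branch + logging_cost_per_branch) / instructions_per_branch
print(f"{slowdown:.2f}x")  # ~2.14x: run time roughly doubles

# And that ignores the physical I/O. A ~0.3 ns CPU cycle versus
# ~10 us for a fast NVMe write is already tens of thousands of
# cycles per write; spinning disks (~10 ms seeks) are orders of
# magnitude slower still.
cycle_ns, nvme_us = 0.3, 10
print(f"{nvme_us * 1000 / cycle_ns:,.0f} cycles per NVMe write")
```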
In sum: we are leaving the system unprotected for a month or more, including during year-end processing, cutting its throughput in half during training and monitoring, and hoping that there isn't any malware already in the system and that nobody attacks it during the training process.
For cybersecurity professionals, hope and faith are poor strategies. Defense in depth means rigorously running scans for known malware – and training the AI that such scans are a normal part of the business, with tightly defined parameters, known read and write activity, and predictable behavior.
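One way to encode that, sketched loosely here with hypothetical fields and values, is to declare the scanner to the detection system up front as an allowlisted profile rather than hoping the model learns it:

```python
# Hypothetical allowlist entry: the malware scanner is declared to the
# detection system up front, with tightly bounded expected behavior.
SCANNER_PROFILE = {
    "process": "av_scanner",
    "schedule": "daily 02:00",        # tightly defined parameters
    "reads": ["/home", "/var"],       # known read activity
    "writes": ["/var/log/av/"],       # known write activity
    "max_runtime_minutes": 90,        # predictable behavior
}

def matches_profile(event, profile=SCANNER_PROFILE):
    """Accept scanner activity only inside its declared bounds."""
    if event["process"] != profile["process"]:
        return False
    if event["runtime_minutes"] > profile["max_runtime_minutes"]:
        return False
    allowed = profile["reads"] if event["op"] == "read" else profile["writes"]
    return any(event["path"].startswith(p) for p in allowed)

# Hypothetical usage: a scanner write inside its declared bounds passes.
print(matches_profile({"process": "av_scanner", "op": "write",
                       "path": "/var/log/av/scan.log",
                       "runtime_minutes": 45}))  # True
```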
AI can help with assessing unfamiliar processes, but don't expect it to replace tried-and-true conventional scanning techniques.
The Risks of AI for Detecting Threats - A Bit of Security for March 17, 2025
What is the downside of relying on AI to detect threats? Listen to this -
Let me know what you think in the comments below or at wjmalik@noc.social
#cybersecuritytips #attacksurface #antimalware #AIsecurity #threatdetection #BitofSec