
Is machine learning for security a beautiful lie?

Translator | Bugatti

Reviewer | Sun Shujuan

Machine learning (ML) is not a magical technology. Generally speaking, ML is well suited to narrow problems with large data sets, where the patterns of interest are highly repeatable or predictable. Most security problems neither require nor benefit from ML. Many experts, including those at Google, recommend trying ML on a complex problem only after you have exhausted every other approach.

ML encompasses a broad family of statistical techniques: it lets us train computers to estimate answers to a problem even when we have not programmed the correct answers in advance. A well-designed ML system applied to the right type of problem can uncover insights that would otherwise be unattainable.

Every organization’s IT environment has different purposes, architecture, priorities and risk tolerances. It is impossible to create algorithms, ML or other products that broadly support security use cases in all scenarios. This is why most successful applications of ML in security combine multiple approaches to solve a very specific problem. Typical examples include spam filters, DDoS or bot mitigation, and malware detection.

1. Garbage in, garbage out

The biggest challenge with ML is having relevant and usable data to solve real problems. For supervised ML, you need a large, properly labeled dataset. For example, to build a model that recognizes cat photos, you need to train the model with many cat photos labeled "cat" and many non-cat photos labeled "non-cat." If you don't have enough photos or they're not labeled accurately, the model won't turn out well.
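To make "properly labeled" concrete, here is a minimal sketch of supervised learning in Python with scikit-learn. The feature vectors and labels are invented for illustration and stand in for real image data; a real cat classifier would of course work on far richer inputs.

```python
# Minimal supervised-learning sketch: the labels drive everything.
# The feature vectors below are hypothetical stand-ins for image features.
from sklearn.linear_model import LogisticRegression

X_train = [
    [0.9, 0.1, 0.8],   # a photo a human labeled "cat"
    [0.8, 0.2, 0.7],   # labeled "cat"
    [0.1, 0.9, 0.2],   # labeled "non-cat"
    [0.2, 0.8, 0.1],   # labeled "non-cat"
]
y_train = [1, 1, 0, 0]  # 1 = cat, 0 = non-cat

model = LogisticRegression()
model.fit(X_train, y_train)

# The model can only generalize from what the labels told it;
# too few examples or mislabeled examples produce a poor model.
print(model.predict_proba([[0.85, 0.15, 0.75]]))  # e.g. roughly [[0.2, 0.8]]
```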

In security, a well-known supervised ML use case is signatureless malware detection. Many endpoint protection platform (EPP) vendors use large numbers of labeled malicious and benign samples to train models on "what malware looks like." These models can correctly identify evasive, mutated malware and other trickery (files tampered with just enough to slip past signature detection while remaining malicious). Instead of matching signatures, ML predicts maliciousness from a different set of features, often catching malware that signature-based methods miss.
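As a hedged illustration of that idea (not any particular vendor's implementation), the sketch below trains a classifier on hypothetical file-derived features such as entropy and import count. A mutated sample changes its bytes, which breaks a hash or signature match, but its structural features may still look malicious to the model.

```python
# Hedged sketch: classify files by derived features rather than byte signatures.
# The features (entropy, import count, section count) and samples are hypothetical;
# real EPP models use far richer feature sets and far more data.
from sklearn.ensemble import RandomForestClassifier

X = [
    [7.8, 3, 12],    # high entropy, few imports -> labeled malicious
    [7.5, 1, 10],    # labeled malicious
    [5.1, 120, 5],   # typical benign binary
    [4.8, 90, 4],    # labeled benign
]
y = [1, 1, 0, 0]     # 1 = malicious, 0 = benign

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Mutating bytes defeats a signature, but these structural features barely move.
mutated_sample = [7.7, 2, 11]
print(clf.predict_proba([mutated_sample]))  # probability of [benign, malicious]
```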

Since ML models are probabilistic, trade-offs are required. ML can catch malware that signature-based methods miss, but it can also miss malware that signature-based methods would catch. That is why modern EPP tools use a hybrid approach, combining ML and signature-based techniques for maximum protection coverage.
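A rough sketch of what such a hybrid decision could look like; the hash set, threshold, and helper function are hypothetical and stand in for a real signature database and EPP policy engine.

```python
# Hypothetical hybrid verdict: exact signature match first, ML score as a backstop.
KNOWN_BAD_HASHES = {"hash_of_known_malware_1", "hash_of_known_malware_2"}  # placeholder

def hybrid_verdict(file_hash: str, ml_score: float, threshold: float = 0.9) -> str:
    """Combine a deterministic signature check with a probabilistic ML score."""
    if file_hash in KNOWN_BAD_HASHES:
        return "block (signature match)"   # exact and easy to explain
    if ml_score >= threshold:
        return "block (ML prediction)"     # probabilistic backstop
    return "allow"

print(hybrid_verdict("unknown_hash", ml_score=0.97))  # block (ML prediction)
```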

2. False positive problem

Even if the model is carefully designed, ML will bring some additional challenges when interpreting the output, including:

  • The result is a probability. ML models output probabilities. If your model was designed to identify cats, you would get a result like "there is an 80% chance this thing is a cat." This uncertainty is inherent to ML systems and can make the results hard to interpret. Is an 80% probability accurate enough to call it a cat?
  • The model cannot be adjusted, at least not by the end user. To handle probabilistic results, a tool may reduce them to binary answers using thresholds set by the vendor. For example, a cat-recognition tool might label anything with a greater-than-90% probability of being a cat as "cat." Your organization's tolerance here may be higher or lower than what the vendor chose.
  • False negatives (FN), the failure to detect truly malicious content, are a major drawback of ML models, especially poorly tuned ones. We dislike false positives (FP) because they waste time, but there is an inherent trade-off between the FP rate and the FN rate. ML models are tuned to optimize that trade-off, prioritizing the "best" balance of FP rate and FN rate. The "right" balance, however, varies from organization to organization depending on their individual threat and risk assessments, so when you use an ML-based product, you must trust the vendor to choose appropriate thresholds for you (see the sketch after this list).
  • Insufficient context for alert triage. Part of the magic of ML is extracting salient, predictive, yet arbitrary "features" from a data set. Imagine that identifying a cat happened to be highly correlated with the weather. No human would reason that way, but that is the whole point of ML: finding patterns we would not otherwise find, and doing so at scale. Yet even when the reason for a prediction can be exposed to the user, it is often unhelpful in alert triage or incident response, because the "features" that ultimately drive the ML system's decisions are optimized for predictive power, not for human interpretability.
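The sketch referenced above shows how a fixed threshold turns probabilistic scores into binary alerts, and how moving that threshold trades false positives against false negatives. The scores and ground-truth labels are fabricated purely for illustration.

```python
# Fabricated model scores ("probability malicious") and what the samples really were.
scores = [0.95, 0.80, 0.65, 0.40, 0.92, 0.55]
is_bad = [True, True, False, False, True, True]

def confusion(threshold: float):
    fp = sum(s >= threshold and not bad for s, bad in zip(scores, is_bad))
    fn = sum(s < threshold and bad for s, bad in zip(scores, is_bad))
    return fp, fn

for t in (0.5, 0.7, 0.9):
    fp, fn = confusion(t)
    print(f"threshold={t:.1f}  false positives={fp}  false negatives={fn}")

# Raising the threshold cuts false positives but lets more malware through;
# where the "right" point sits depends on your organization's risk tolerance.
```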

3. Does a "statistical" method by any other name sound as sweet?

Beyond the pros and cons of ML, there is one more caveat: not all "ML" is really ML. Statistical methods give you conclusions about the data you already have; ML uses the data you have to make predictions about data you do not have. Marketers are keen to ride the popularity of "ML" and "artificial intelligence," claiming it makes a product somehow modern, innovative, and advanced. Yet little thought is often given to whether the product actually uses ML, let alone whether ML is the right approach.
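As a small, hedged contrast with made-up numbers: a descriptive statistic summarizes the data you already have, while an ML model generalizes to inputs it has not seen.

```python
# Made-up daily alert counts, used only to contrast the two approaches.
import statistics
from sklearn.linear_model import LinearRegression

observed_daily_alerts = [12, 15, 11, 14, 13]

# "Statistical" product: a conclusion about existing data.
print("mean alerts/day:", statistics.mean(observed_daily_alerts))

# "ML" product: a prediction about data that has not been seen yet.
days = [[1], [2], [3], [4], [5]]
model = LinearRegression().fit(days, observed_daily_alerts)
print("predicted alerts on day 6:", round(model.predict([[6]])[0], 1))
```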

4. Can ML detect malicious content?

ML can detect malicious content when "malicious" is well defined and narrow in scope. It can also detect deviations from expected behavior in highly predictable systems. The more stable the environment, the more likely ML is to correctly flag anomalies. But not every anomaly is malicious, and the operator does not always have enough context to respond.
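As a minimal sketch of "deviation from expected behavior" in a stable system, the example below uses a simple z-score over a hypothetical baseline metric. Note that this is plain statistics rather than a trained model, and an anomaly flag is not, by itself, a verdict of maliciousness.

```python
# Hypothetical baseline for a stable metric (logins per hour on a quiet system).
import statistics

baseline_logins_per_hour = [48, 52, 50, 47, 51, 49, 53, 50]
mean = statistics.mean(baseline_logins_per_hour)
stdev = statistics.stdev(baseline_logins_per_hour)

def is_anomaly(observed: float, z_threshold: float = 3.0) -> bool:
    """Flag values far outside the historical baseline."""
    return abs(observed - mean) / stdev > z_threshold

print(is_anomaly(51))   # False: within normal variation
print(is_anomaly(400))  # True: anomalous, but not necessarily malicious
```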

The power of ML lies in augmenting, rather than replacing, existing methods, systems and teams to achieve optimal coverage and efficiency.

Original link: https://www.darkreading.com/vulnerabilities-threats/the-beautiful-lies-of-machine-learning-in-security
