Is machine learning for security a beautiful lie?
Translator | Bugatti
Reviewer | Sun Shujuan
Machine learning (ML) is not a magical technology. Generally speaking, ML is suited to narrow problems with large data sets, where the patterns of interest are highly repeatable or predictable. Most security problems neither require nor benefit from ML. Many experts, including those at Google, recommend that when solving a complex problem, you should try ML only after exhausting all other approaches.
ML encompasses a broad family of statistical techniques: it lets us train computers to estimate answers to problems even when we have not programmed the correct answers in advance. A well-designed ML system applied to the right type of problem can uncover insights that would otherwise be unattainable.
Every organization's IT environment has different purposes, architecture, priorities, and risk tolerances. It is impossible to create an algorithm, ML-based or otherwise, that broadly supports security use cases in every scenario. This is why most successful applications of ML in security combine multiple approaches to solve a very specific problem. Typical examples include spam filters, DDoS or bot mitigation, and malware detection.
1. Garbage in, garbage out
The biggest challenge with ML is having relevant and usable data to solve real problems. For supervised ML, you need a large, properly labeled dataset. For example, to build a model that recognizes cat photos, you need to train the model with many cat photos labeled "cat" and many non-cat photos labeled "non-cat." If you don't have enough photos or they're not labeled accurately, the model won't turn out well.
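To make the cat example concrete, here is a minimal sketch of supervised learning: a nearest-centroid classifier trained on a tiny, hypothetical labeled dataset. The feature names ("ear pointiness", "whisker count") and all values are made up for illustration; a real model would need far more data and far richer features.

```python
# Minimal supervised-learning sketch: learn each label's average feature
# vector (centroid) from labeled examples, then classify new samples by
# which centroid they are closest to.

def train(samples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        c = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            c[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in c] for label, c in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labeled training data: ([ear_pointiness, whisker_count], label)
training_data = [
    ([0.9, 12], "cat"), ([0.8, 10], "cat"), ([0.95, 11], "cat"),
    ([0.1, 0], "non-cat"), ([0.2, 2], "non-cat"), ([0.05, 1], "non-cat"),
]

model = train(training_data)
print(predict(model, [0.85, 11]))  # cat-like sample -> "cat"
```

If the labels were wrong or the examples unrepresentative, the centroids would be wrong too, and so would every prediction: the "garbage in, garbage out" problem in miniature.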
In security, a well-known supervised ML use case is signatureless malware detection. Many endpoint protection platform (EPP) vendors use large numbers of labeled malicious and benign samples to train models on "what malware looks like." These models can correctly identify evasive, mutated malware and other subterfuges (files tampered with so that they evade signature-based detection but remain malicious). Instead of matching signatures, the model uses a different feature set to predict malicious content, often catching malware that signature-based methods miss.
Since ML models are probabilistic, trade-offs are required. ML can catch malware that signature methods miss, but it can also miss malware that signature methods would catch. That is why modern EPP tools take a hybrid approach, combining ML and signature-based techniques for maximum protection coverage.
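The hybrid approach can be sketched as a simple OR of the two verdicts: flag a file if its hash matches a known signature, or if the model's score exceeds a threshold. The hashes, scores, and threshold below are all hypothetical, and `ml_score` stands in for the output of a real trained model.

```python
# Sketch of a hybrid EPP verdict: signature match OR confident ML score.

def is_malicious(file_hash, ml_score, known_bad_hashes, threshold=0.9):
    """Flag a file if a signature matches or the model score is high enough."""
    if file_hash in known_bad_hashes:
        return True               # exact signature match: deterministic
    return ml_score >= threshold  # probabilistic ML verdict

signatures = {"d41d8cd98f00b204e9800998ecf8427e"}  # hypothetical hash database

# Known sample: caught by signature even when the model is unsure.
print(is_malicious("d41d8cd98f00b204e9800998ecf8427e", 0.10, signatures))  # True
# Mutated sample: no signature hit, but the model is confident.
print(is_malicious("ffffffffffffffffffffffffffffffff", 0.97, signatures))  # True
# Benign sample: neither method fires.
print(is_malicious("00000000000000000000000000000000", 0.20, signatures))  # False
```

Each method covers the other's blind spot: signatures catch known samples the model is unsure about, while the model catches mutated samples that no signature matches.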
2. The false positive problem
Even with a carefully designed model, ML introduces additional challenges when interpreting the output, including:
- The result is a probability. ML models output probabilities, not certainties. If your model is designed to identify cats, you will get a result like "there is an 80% chance this is a cat." This uncertainty is inherent to ML systems and can make the results difficult to act on. Is an 80% chance of being a cat accurate enough?
- The model cannot be tuned, at least not by the end user. To handle probabilistic results, a tool may convert them into binary verdicts using thresholds set by the vendor. For example, a cat-recognition model might report as "cat" anything with a greater than 90% probability of being a cat. Your organization's tolerance may be stricter or looser than the one the vendor chose.
- False negatives (FNs), the failure to detect truly malicious content, are a major drawback of ML models, especially poorly tuned ones. We dislike false positives (FPs) because they waste time, but there is an inherent trade-off between the FP rate and the FN rate. ML models are tuned to optimize this trade-off, prioritizing the "best" FP-FN balance. However, the "right" balance varies from organization to organization, depending on their individual threat and risk assessments. When using an ML-based product, you must trust the vendor to choose appropriate thresholds for you.
- Insufficient context for alert triage. Part of the magic of ML is extracting salient, predictive, yet arbitrary "features" from a data set. Imagine that identifying a cat happens to correlate strongly with the weather. No human would reason this way, but that is the whole point of ML: finding patterns we would not otherwise find, and doing so at scale. Yet even when the predictive features can be exposed to the user, they are often unhelpful during alert triage or incident response, because the features that ultimately drive an ML system's decisions are optimized for predictive power, not explanatory power.
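The threshold trade-off in the bullets above can be shown with a few lines of code: the same probabilistic scores produce different false-positive and false-negative counts depending on where the threshold is set. The scores and labels below are made up for illustration.

```python
# Sketch of the FP/FN trade-off: moving the threshold trades one kind of
# error for the other on the same set of model scores.

samples = [  # (model score, actually malicious?)
    (0.95, True), (0.70, True), (0.55, True),
    (0.60, False), (0.30, False), (0.10, False),
]

def fp_fn(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for score, bad in samples if score >= threshold and not bad)
    fn = sum(1 for score, bad in samples if score < threshold and bad)
    return fp, fn

print(fp_fn(0.5))  # permissive threshold: (1, 0) -> one FP, no FNs
print(fp_fn(0.9))  # strict threshold:     (0, 2) -> no FPs, two FNs
```

Neither threshold is "correct" in the abstract; which error is cheaper depends on the organization's own risk assessment, which is exactly the judgment the vendor makes on your behalf.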
3. Would a "statistical" method by any other name sound as sweet?
Beyond the pros and cons of ML, there is one more caveat: not all "ML" is really ML. Statistical methods draw conclusions about the data you already have; ML uses the data you have to make predictions about data you don't have. Marketers are eager to ride the popularity of "ML" and "artificial intelligence," pitching their products as modern, innovative, advanced technology. Yet little thought is often given to whether the product actually uses ML, let alone whether ML is the right approach.
4. Can ML detect malicious content?
ML can detect malicious content when "malicious" is well defined and narrow in scope. It can also detect deviations from expected behavior in highly predictable systems. The more stable the environment, the more likely ML is to correctly identify anomalies. But not every anomaly is malicious, and operators do not always have enough context to respond.
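Anomaly detection in a predictable system can be as simple as flagging values far from the historical baseline. The sketch below uses a z-score rule on a hypothetical metric (requests per minute); note that the flagged spike could just as easily be a marketing event as an attack, which is the "not every anomaly is malicious" problem.

```python
# Minimal anomaly-detection sketch: flag a measurement more than
# z_limit standard deviations away from the historical mean.
import statistics

def is_anomaly(history, value, z_limit=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > z_limit * stdev

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # stable baseline traffic
print(is_anomaly(baseline, 101))  # within normal variation -> False
print(is_anomaly(baseline, 500))  # sudden spike -> True
```

In a stable environment this works well; in a noisy one, the baseline's standard deviation widens and real incidents hide inside "normal" variation, which is why the stability of the environment matters so much.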
The power of ML lies in augmenting, rather than replacing, existing methods, systems and teams to achieve optimal coverage and efficiency.
Original link: https://www.darkreading.com/vulnerabilities-threats/the-beautiful-lies-of-machine-learning-in-security