Large Language Models (LLMs) have made remarkable progress and can perform a wide variety of tasks, from generating human-like text to answering questions. However, understanding how these models work internally remains challenging, in part because of a phenomenon called superposition, in which many features are mixed together within a single neuron, making it very difficult to extract human-understandable representations from the model's raw structure. This is where methods like the sparse autoencoder come in: they can disentangle features and improve interpretability.
In this blog post, we will use a sparse autoencoder to look for feature circuits in a particularly interesting case, subject-verb agreement, and to understand how model components contribute to the task.
In the context of neural networks, a feature circuit describes how the network combines input features to form complex patterns at higher levels. We use the metaphor of a "circuit" to describe how features are processed across the layers of a neural network, because this processing resembles the way signals are processed and combined in electronic circuits. These feature circuits form gradually through connections between neurons and layers: each neuron or layer is responsible for transforming its input features, and their interactions produce useful feature combinations that work together to make the final prediction.
Here is an example of a feature circuit: in many vision neural networks, we can find "a circuit as a family of units detecting curves in different angular orientations. Curve detectors are mainly implemented from earlier, less sophisticated curve detectors and line detectors. These curve detectors are used in the next layer to create 3D geometry and complex shape detectors" [1].
In the following sections, we will examine a feature circuit for the subject-verb agreement task in an LLM.
Superposition and Sparse Autoencoders
Disentangling these mixed representations is exactly what the Sparse Autoencoder (SAE) does.

An SAE helps us decompose the network's activations into a sparse set of features. These sparse features are usually understandable by humans, allowing us to better interpret the model. By applying an SAE to the hidden-layer activations of an LLM, we can isolate the features that contribute to the model's output.
You can find details on how SAE works in my previous blog post.
Case Study: Subject-Verb Agreement
Subject-verb agreement is a simple grammatical rule: a singular subject takes the singular verb form ("The cat runs"), while a plural subject takes the plural form ("The babies run"). For humans, internalizing this rule is essential for tasks such as text generation, translation, and question answering. But how do we know whether an LLM has really learned it?
We will now explore how an LLM forms a feature circuit for this task.
Now let's walk through how the feature circuit is built. We will proceed in four steps: build a toy model, attach a sparse autoencoder to its hidden layer, train both, and visualize the resulting feature circuit.

We first build a toy language model. The following code may make no sense as a real language model; it is simply a neural network with two simple layers.
For subject-verb agreement, the model should take a two-feature encoding of the subject as input and predict the correct verb form ("runs" vs. "run") as output:
<code># ====== Define the base model (simulating subject-verb agreement) ======
import torch
import torch.nn as nn

class SubjectVerbAgreementNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)   # 2 inputs → 4 hidden activations
        self.output = nn.Linear(4, 2)   # 4 hidden → 2 outputs (runs/run)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.hidden(x))   # compute hidden activations
        return self.output(x)           # predict the verb</code>
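As a quick sanity check, we can run a forward pass through this toy model. This is a sketch; the one-hot input encoding for a singular subject is an assumption for illustration, not something specified by the model itself:

```python
import torch
import torch.nn as nn

class SubjectVerbAgreementNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)   # 2 inputs → 4 hidden activations
        self.output = nn.Linear(4, 2)   # 4 hidden → 2 outputs (runs/run)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.hidden(x))
        return self.output(x)

model = SubjectVerbAgreementNN()
x = torch.tensor([[1.0, 0.0]])   # hypothetical one-hot encoding of a singular subject
logits = model(x)                # raw (untrained) scores for ("runs", "run")
print(logits.shape)              # torch.Size([1, 2])
```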
It is not clear what is happening inside the hidden layer, so we introduce the following sparse autoencoder:
<code># ====== Define the sparse autoencoder (SAE) ======
class SimpleSAE(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)  # expand into sparse features
        self.decoder = nn.Linear(hidden_dim, input_dim)  # reconstruct
        self.relu = nn.ReLU()

    def forward(self, x):
        encoded = self.relu(self.encoder(x))  # sparse activations
        decoded = self.decoder(encoded)       # reconstruct the original activations
        return encoded, decoded</code>
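To see how the two pieces connect, we can pass hidden-layer activations through the SAE. This sketch re-defines the SAE (under the assumed name `SimpleSAE`) so it runs on its own, and uses random tensors as stand-ins for real hidden activations; the 4 → 8 over-complete expansion is also an assumption:

```python
import torch
import torch.nn as nn

class SimpleSAE(nn.Module):   # assumed name for the SAE class
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)  # expand into sparse features
        self.decoder = nn.Linear(hidden_dim, input_dim)  # reconstruct
        self.relu = nn.ReLU()

    def forward(self, x):
        encoded = self.relu(self.encoder(x))
        decoded = self.decoder(encoded)
        return encoded, decoded

# Expand the base model's 4 hidden units into 8 (over-complete) sparse features.
sae = SimpleSAE(input_dim=4, hidden_dim=8)
hidden_acts = torch.rand(5, 4)               # stand-in for real hidden activations
features, reconstruction = sae(hidden_acts)
print(features.shape, reconstruction.shape)  # torch.Size([5, 8]) torch.Size([5, 4])
```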
We train the base model SubjectVerbAgreementNN and the sparse autoencoder, using sentences designed to cover the different singular and plural verb forms, such as "The cat runs" and "The babies run". As before, though, these are toy models, so the training data need not be fully meaningful.
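A training procedure for this setup might look like the following sketch. The toy dataset (one-hot subject encodings standing in for sentences), the step counts, the learning rates, and the L1 sparsity coefficient are all assumptions for illustration; the original post's actual training code is not shown here:

```python
import torch
import torch.nn as nn

# Base model and SAE, repeated here so the sketch is self-contained.
class SubjectVerbAgreementNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)
        self.output = nn.Linear(4, 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.hidden(x))
        return self.output(x)

class SimpleSAE(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, input_dim)
        self.relu = nn.ReLU()

    def forward(self, x):
        encoded = self.relu(self.encoder(x))
        return encoded, self.decoder(encoded)

torch.manual_seed(0)

# Hypothetical toy data: [1, 0] stands for a singular subject ("The cat") -> class 0 ("runs"),
# [0, 1] for a plural subject ("The babies") -> class 1 ("run").
X = torch.tensor([[1.0, 0.0], [0.0, 1.0]] * 50)
y = torch.tensor([0, 1] * 50)

# Step 1: train the base model on the agreement task.
model = SubjectVerbAgreementNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Step 2: collect hidden activations and train the SAE to reconstruct them sparsely.
with torch.no_grad():
    hidden = model.relu(model.hidden(X))

sae = SimpleSAE(input_dim=4, hidden_dim=8)
sae_opt = torch.optim.Adam(sae.parameters(), lr=1e-2)
for _ in range(200):
    sae_opt.zero_grad()
    encoded, decoded = sae(hidden)
    sae_loss = ((decoded - hidden) ** 2).mean() + 1e-3 * encoded.abs().mean()  # MSE + L1 sparsity
    sae_loss.backward()
    sae_opt.step()
```

The L1 penalty on `encoded` is what encourages sparsity: most features stay near zero for any given input, so the few that do fire are easier to interpret.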
Now we visualize the feature circuit. As mentioned earlier, a feature circuit is a group of neurons that together process a specific feature. In our model, the features of interest are those that distinguish singular from plural subjects.
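One simple way to find such features is to compare SAE feature activations between singular and plural inputs and keep the features whose activations differ most. The sketch below shows the mechanics only: it uses an untrained SAE and random stand-in activations, whereas a real analysis would use the trained SAE and activations collected from actual sentences:

```python
import torch
import torch.nn as nn

class SimpleSAE(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, input_dim)
        self.relu = nn.ReLU()

    def forward(self, x):
        encoded = self.relu(self.encoder(x))
        return encoded, self.decoder(encoded)

sae = SimpleSAE(4, 8)                 # would normally be the trained SAE
hidden_singular = torch.rand(10, 4)   # stand-in: hidden activations on singular subjects
hidden_plural = torch.rand(10, 4)     # stand-in: hidden activations on plural subjects

with torch.no_grad():
    feat_s, _ = sae(hidden_singular)
    feat_p, _ = sae(hidden_plural)

# Candidate "number" features: those whose mean activation differs most between conditions.
diff = (feat_s.mean(0) - feat_p.mean(0)).abs()
top_features = torch.topk(diff, k=3).indices
```

The indices in `top_features` point at the SAE features most associated with grammatical number; tracing how they connect to the model's inputs and outputs is what yields the circuit diagram.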
For a real case, we run similar code on GPT2-small. Below is a feature circuit diagram representing the decision to select the singular form of the verb.
Feature circuits help us understand how different parts of a complex LLM contribute to the final output. We have shown that it is possible to form feature circuits using an SAE for the subject-verb agreement task.
However, we must admit that this approach still requires some human intervention, because without careful design we do not always know whether a circuit can really be found.
[1] Zoom In: An Introduction to Circuits (Olah et al., Distill, 2020)
The above is the detailed content of Formulation of Feature Circuits with Sparse Autoencoders in LLM.