


Why does fine-tuning an MLP model on a small dataset still give the same test accuracy as the pre-trained weights?
I designed a simple MLP model to train on 6k data samples.
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, input_dim=92, hidden_dim=150, num_classes=2):
        super().__init__()
        self.input_dim = input_dim
        self.num_classes = num_classes
        self.hidden_dim = hidden_dim
        #self.softmax = nn.Softmax(dim=1)
        self.layers = nn.Sequential(
            nn.Linear(self.input_dim, self.hidden_dim),
            nn.ReLU(),
            nn.Linear(self.hidden_dim, self.hidden_dim),
            nn.ReLU(),
            nn.Linear(self.hidden_dim, self.hidden_dim),
            nn.ReLU(),
            nn.Linear(self.hidden_dim, self.num_classes),
        )

    def forward(self, x):
        x = self.layers(x)
        return x
The model is instantiated as follows:
model = MLP(input_dim=input_dim, hidden_dim=hidden_dim, num_classes=num_classes).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()
with the following hyperparameters:
num_epoch = 300  # 200e3 // len(train_loader)
learning_rate = 1e-3
batch_size = 64
device = torch.device("cuda")
seed = 42
torch.manual_seed(42)
My implementation mainly follows this question. I saved the model as pretrained weights in model_weights.pth. The test accuracy of the model on the test dataset is 96.80%.
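For reference, the checkpoint was presumably produced with the standard state_dict pattern; a minimal sketch (the file name model_weights.pth is from the question, the save call itself is an assumption, since it is not shown):

# Assumed save step (standard PyTorch pattern; not shown in the question)
torch.save(model.state_dict(), 'model_weights.pth')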
Then, I have another 50 samples (in finetune_loader) on which I am trying to fine-tune the model:
model_finetune = MLP()
model_finetune.load_state_dict(torch.load('model_weights.pth'))
model_finetune.to(device)
model_finetune.train()

# train the network
for t in tqdm(range(num_epoch)):
    for i, data in enumerate(finetune_loader, 0):
        #def closure():
        # Get and prepare inputs
        inputs, targets = data
        inputs, targets = inputs.float(), targets.long()
        inputs, targets = inputs.to(device), targets.to(device)
        # Zero the gradients
        optimizer.zero_grad()
        # Perform forward pass
        outputs = model_finetune(inputs)
        # Compute loss
        loss = criterion(outputs, targets)
        # Perform backward pass
        loss.backward()
        #return loss
        optimizer.step()

model_finetune.eval()
with torch.no_grad():
    outputs2 = model_finetune(test_data)
    #predicted_labels = outputs.squeeze().tolist()
    _, preds = torch.max(outputs2, 1)
    prediction_test = np.array(preds.cpu())
    accuracy_test_finetune = accuracy_score(y_test, prediction_test)

accuracy_test_finetune

Output: 0.9680851063829787
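One quick way to confirm whether the weights actually moved during fine-tuning is to compare the fine-tuned model against the saved checkpoint. This is a hypothetical sanity check, not part of the original code:

pretrained_state = torch.load('model_weights.pth')
for name, param in model_finetune.state_dict().items():
    # torch.equal is True only if shape and every element match exactly
    if not torch.equal(param.cpu(), pretrained_state[name].cpu()):
        print(f"{name} changed during fine-tuning")
        break
else:
    print("No parameter changed - fine-tuning had no effect")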
I checked: the accuracy remains exactly the same as before fine-tuning on the 50 samples, and the output probabilities are also identical.
What could be the reason? Did I make a mistake in the fine-tuning code?
Correct answer
You must reinitialize the optimizer with the new model (the model_finetune object). As far as I can see in your code, it still uses the optimizer that was initialized with the old model's parameters - model.parameters(). As a result, optimizer.step() updates the old model object while the weights of model_finetune are never touched, which is why the test accuracy and output probabilities stay exactly the same.
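Here is a minimal sketch of the fix, assuming the same optimizer settings as in the question (Adam with lr=learning_rate and weight_decay=1e-4). The essential change is constructing the optimizer from model_finetune.parameters() before running the fine-tuning loop:

model_finetune = MLP()
model_finetune.load_state_dict(torch.load('model_weights.pth'))
model_finetune.to(device)
model_finetune.train()

# Build a fresh optimizer over the fine-tuned model's parameters; otherwise
# optimizer.step() keeps updating the old 'model' object and model_finetune
# never changes.
optimizer = torch.optim.Adam(model_finetune.parameters(), lr=learning_rate, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

# ... then run the fine-tuning loop from the question unchanged ...

A lower learning rate than the one used for pre-training is also common when fine-tuning on only 50 samples, but it is the missing optimizer re-initialization that explains the bit-for-bit identical accuracy and probabilities.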