Hawkeye, an easy-to-use PyTorch-based deep learning toolbox for fine-grained image recognition, is open source

Fine-grained image recognition [1] is an important research topic in visual perception learning. It has great application value in the intelligent new economy and the industrial Internet, and is widely used in many real-world scenarios. In view of the lack of an open-source deep learning library in this area, the team of Professor Wei Xiu-Shen at Nanjing University of Science and Technology spent nearly a year developing and polishing Hawkeye, an open-source deep learning toolbox for fine-grained image recognition, for researchers and engineers in related fields to use. This article is a detailed introduction to Hawkeye.

1. What is Hawkeye library

Hawkeye is a deep learning toolbox for fine-grained image recognition built on PyTorch, designed specifically for researchers and engineers in related fields. Hawkeye currently includes fine-grained recognition methods from a variety of representative paradigms, including methods based on deep filters, attention mechanisms, high-order feature interactions, special loss functions, web data, and others.

The Hawkeye codebase is cleanly written, clearly structured, easy to read, and highly extensible. For newcomers to fine-grained image recognition, Hawkeye is easy to get started with: it helps them understand the main pipeline and representative methods of fine-grained image recognition, and makes it convenient to quickly implement their own algorithms on top of the toolbox. In addition, we provide example training code for each model in the library, and self-developed methods can be quickly adapted and added to Hawkeye by following these examples.

Hawkeye open source library link: https://github.com/Hawkeye-FineGrained/Hawkeye

2. Models and methods supported by Hawkeye

Hawkeye currently supports a total of 16 models and methods covering the main learning paradigms in fine-grained image recognition, as follows:

Based on deep filter

  • S3N (ICCV 2019)
  • Interp-Parts (CVPR 2020)
  • ProtoTree (CVPR 2021)

Based on attention mechanism

  • OSME MAMC (ECCV 2018)
  • MGE-CNN (ICCV 2019)
  • APCNN (IEEE TIP 2021)

Based on high-order feature interactions

  • BCNN (ICCV 2015)
  • CBCNN (CVPR 2016)
  • Fast MPN-COV (CVPR 2018)

Based on special loss function

  • Pairwise Confusion (ECCV 2018)
  • API-Net (AAAI 2020)
  • CIN (AAAI 2020)

Based on web data

  • Peer-Learning (ICCV 2021)

Other methods

  • NTS-Net (ECCV 2018)
  • CrossX (ICCV 2019)
  • DCL (CVPR 2019)

3. Install Hawkeye

Install dependencies

Use conda or pip to install related dependencies:

  • Python 3.8
  • PyTorch 1.11.0 or higher
  • torchvision 0.12.0 or higher
  • numpy
  • yacs
  • tqdm

Clone the repository:

git clone https://github.com/Hawkeye-FineGrained/Hawkeye.git
cd Hawkeye

Preparing data sets

We provide 8 commonly used fine-grained recognition datasets along with up-to-date download links:

  • CUB200: https://data.caltech.edu/records/65de6-vp158/files/CUB_200_2011.tgz
  • Stanford Dog: http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar
  • Stanford Car: http://ai.stanford.edu/~jkrause/car196/car_ims.tgz
  • FGVC Aircraft: https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/archives/fgvc-aircraft-2013b.tar.gz
  • iNat2018: https://ml-inat-competition-datasets.s3.amazonaws.com/2018/train_val2018.tar.gz
  • WebFG-bird: https://web-fgvc-496-5089-sh.oss-cn-shanghai.aliyuncs.com/web-bird.tar.gz
  • WebFG-car: https://web-fgvc-496-5089-sh.oss-cn-shanghai.aliyuncs.com/web-car.tar.gz
  • WebFG-aircraft: https://web-fgvc-496-5089-sh.oss-cn-shanghai.aliyuncs.com/web-aircraft.tar.gz

First, download a data set (taking CUB200 as an example):

cd Hawkeye/data
wget https://data.caltech.edu/records/65de6-vp158/files/CUB_200_2011.tgz
mkdir bird && tar -xvf CUB_200_2011.tgz -C bird/

We provide meta-data files for the above 8 datasets, which work with the FGDataset class in the library to easily load the training and test sets. The train/test splits follow the official splits provided by each dataset. To switch datasets, you only need to modify the dataset configuration in the experiment's config file.
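
For intuition, the sketch below shows what such a metadata-driven dataset loader generally looks like. The class name ToyMetaDataset and the assumed one-image-per-line "relative_path label" metadata format are illustrative assumptions only and do not reflect Hawkeye's actual FGDataset implementation.

# Illustrative sketch only -- not Hawkeye's FGDataset; the metadata format
# ("relative_path label" per line) and all names here are assumptions.
import os
from PIL import Image
from torch.utils.data import Dataset

class ToyMetaDataset(Dataset):
    def __init__(self, root_dir, meta_file, transform=None):
        self.root_dir = root_dir
        self.transform = transform
        # Each metadata line is assumed to hold an image path and a class label.
        with open(meta_file) as f:
            self.samples = [line.strip().rsplit(' ', 1) for line in f if line.strip()]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(os.path.join(self.root_dir, path)).convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        return image, int(label)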

Modify the dataset section in the experiment's config file; an example is as follows:

dataset:
  name: cub
  root_dir: data/bird/CUB_200_2011/images
  meta_dir: metadata/cub

4. Use Hawkeye to train the model

For each method supported by Hawkeye, we provide separate training templates and configuration files. For example, training APINet requires only one command:

python Examples/APINet.py --config configs/APINet.yaml

All of an experiment's parameters are in the corresponding yaml file, which is highly readable and easy to modify, for example:

experiment:
  name: API_res101 2  # experiment name
  log_dir: results/APINet  # output directory for experiment logs, results, etc.
  seed: 42  # optionally fix the random seed
  resume: results/APINet/API_res101 2/checkpoint_epoch_19.pth  # optionally resume training from an interrupted checkpoint
dataset:
  name: cub  # use the CUB200 dataset
  root_dir: data/bird/CUB_200_2011/images  # path where the dataset images are placed
  meta_dir: metadata/cub  # path to the CUB200 metadata
  n_classes: 10  # number of classes per batch, as required by APINet's dataset sampling
  n_samples: 4  # number of samples per class
  batch_size: 24  # batch size at test time
  num_workers: 4  # number of Dataloader worker threads for loading the dataset
  transformer:  # data augmentation parameters
    image_size: 224  # input image size for the model (224x224)
    resize_size: 256  # image size before augmentation (256x256)
model:
  name: APINet  # use the APINet model, see model/methods/APINet.py
  num_classes: 200  # number of classes
  load: results/APINet/API_res101 1/best_model.pth  # optionally load trained model parameters
train:
  cuda: [4]  # list of GPU device IDs to use; [] means CPU
  epoch: 100  # number of training epochs
  save_frequence: 10  # frequency of automatically saving the model
  val_first: False  # optionally evaluate model accuracy once before training
  optimizer:
    name: Adam  # use the Adam optimizer
    lr: 0.0001  # learning rate of 0.0001
    weight_decay: 0.00000002
  scheduler:
    # this example uses a custom composite scheduler combining warmup with cosine annealing, see Examples/APINet.py
    name: ''
    T_max: 100  # total number of scheduler iterations
    warmup_epochs: 8  # number of warmup epochs
    lr_warmup_decay: 0.01  # warmup decay ratio
  criterion:
    name: APINetLoss  # loss function used by APINet, see model/loss/APINet_loss.py
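
Because yacs is one of the listed dependencies, a config like this can also be read programmatically. The snippet below is only a minimal, generic yacs example for inspecting such a file; it is not necessarily how Hawkeye parses its configs internally.

# Generic yacs sketch for reading an experiment config such as configs/APINet.yaml.
# This is an illustration, not Hawkeye's internal config-loading code.
from yacs.config import CfgNode as CN

def load_config(path):
    cfg = CN(new_allowed=True)   # allow keys that were not declared in advance
    cfg.merge_from_file(path)    # merge values from the yaml file
    return cfg

cfg = load_config('configs/APINet.yaml')
print(cfg.experiment.log_dir)    # e.g. results/APINet
print(cfg.dataset.root_dir)      # e.g. data/bird/CUB_200_2011/images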

The trainer APINetTrainer in the experiment's main program, Examples/APINet.py, inherits from Trainer, so there is no need to write complex code for the training loop, logger, model saving, configuration loading, and so on; only the relevant modules need to be modified as required. We also provide multiple hooks in the training phase, which can accommodate the special implementation needs of some methods.
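
As a rough illustration of this inheritance-plus-hooks pattern, the sketch below uses schematic placeholder names (BaseTrainer, on_epoch_start, train_step, MyMethodTrainer) that do not correspond to Hawkeye's actual Trainer interface.

# Schematic sketch of a trainer with overridable hooks; all names are
# illustrative placeholders, not Hawkeye's actual Trainer API.
class BaseTrainer:
    def __init__(self, config):
        self.config = config

    def on_epoch_start(self, epoch):
        # hook: override when a method needs per-epoch setup
        pass

    def train_step(self, batch):
        # hook: the method-specific per-batch computation
        raise NotImplementedError

    def train(self, loader, epochs):
        for epoch in range(epochs):
            self.on_epoch_start(epoch)
            for batch in loader:
                self.train_step(batch)
            # logging, checkpointing, etc. would be handled here by the base class

class MyMethodTrainer(BaseTrainer):
    def train_step(self, batch):
        # only the method-specific forward/backward logic needs to be written
        images, labels = batch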

Log files, model weight files, the training code used, and the configuration file in effect at the time are all saved in the experiment output directory log_dir. Backing up the configuration and training code makes it easy to compare different experiments later.

For more detailed examples, please refer to the project repository: https://github.com/Hawkeye-FineGrained/Hawkeye
