
pytorch + visdom handles simple classification problems

不言
Published: 2018-06-04 16:07:16

This article introduces how to handle a simple classification problem with pytorch + visdom. It has some reference value, and I share it here for anyone who needs it.

Environment

System: Windows 10

Graphics card: GTX 965M
CPU: i7-6700HQ
Python 3.6.1
PyTorch 0.3

Package reference

import torch
from torch.autograd import Variable
import torch.nn.functional as F
import numpy as np
import visdom
import time
from torch import nn,optim

Data preparation

use_gpu = True
ones = np.ones((500, 2))
# four Gaussian clusters (std 2) of 500 points each,
# centered at (6,6), (-6,6), (-6,-6) and (6,-6), labeled 0-3
x1 = torch.normal(6 * torch.from_numpy(ones), 2)
y1 = torch.zeros(500)
x2 = torch.normal(6 * torch.from_numpy(ones * [-1, 1]), 2)
y2 = y1 + 1
x3 = torch.normal(-6 * torch.from_numpy(ones), 2)
y3 = y1 + 2
x4 = torch.normal(6 * torch.from_numpy(ones * [1, -1]), 2)
y4 = y1 + 3

x = torch.cat((x1, x2, x3, x4), 0).float()
y = torch.cat((y1, y2, y3, y4)).long()
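As a quick sanity check (my own addition, not part of the original code), the concatenated tensors come out as 2000 samples of 2 features with labels 0 through 3:

print(x.size())                 # torch.Size([2000, 2]) -- 4 clusters x 500 points each
print(y.size())                 # torch.Size([2000])
print(sorted(set(y.tolist())))  # [0, 1, 2, 3]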

The visualization is as follows:


visdom visualization preparation

First, create the windows that need to be observed:

viz = visdom.Visdom()
colors = np.random.randint(0, 255, (4, 3))  # a random color per class
# line plot for observing loss and accuracy
line = viz.line(X=np.arange(1, 10, 1), Y=np.arange(1, 10, 1))
# scatter plot for observing how the classification evolves
scatter = viz.scatter(
  X=x,
  Y=y + 1,
  opts=dict(
    markercolor=colors,
    markersize=5,
    legend=["0", "1", "2", "3"]),)
# text window for displaying loss, accuracy and elapsed time
text = viz.text("FOR TEST")
# a second scatter plot of the ground truth, for comparison
viz.scatter(
  X=x,
  Y=y + 1,
  opts=dict(
    markercolor=colors,
    markersize=5,
    legend=["0", "1", "2", "3"]
  ),
)
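Note that visdom needs its server running before any of these windows will display; start it in a separate terminal and open http://localhost:8097 (the default port):

python -m visdom.server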

The effect is as follows:

Logistic regression processing

The model takes 2 inputs (the point coordinates) and produces 4 outputs (one score per class):

# a single linear layer; together with CrossEntropyLoss this is softmax regression
logstic = nn.Sequential(
  nn.Linear(2, 4)
)

GPU or CPU selection:

if use_gpu:
  gpu_status = torch.cuda.is_available()
  if gpu_status:
    logstic = logstic.cuda()
    # net = net.cuda()
    print("############### using gpu ##############")
  else:
    print("############### using cpu ##############")
else:
  gpu_status = False
  print("############### using cpu ##############")

Optimizer and loss function:

loss_f = nn.CrossEntropyLoss()  # combines log-softmax and NLL loss, so the net outputs raw scores
optimizer_l = optim.SGD(logstic.parameters(), lr=0.001)

Train for 2000 iterations:

start_time = time.time()
time_point, loss_point, accuracy_point = [], [], []
for t in range(2000):
  # full-batch training: wrap all 2000 samples as Variables each step
  if gpu_status:
    train_x = Variable(x).cuda()
    train_y = Variable(y).cuda()
  else:
    train_x = Variable(x)
    train_y = Variable(y)
  # out = net(train_x)
  out_l = logstic(train_x)
  loss = loss_f(out_l, train_y)
  optimizer_l.zero_grad()
  loss.backward()
  optimizer_l.step()

Every 10 steps, observe and visualize the progress. This block continues inside the training loop above, indented to the same level as the optimizer step:

if t % 10 == 0:
  # predicted class = argmax of the softmax output
  prediction = torch.max(F.softmax(out_l, 1), 1)[1]
  pred_y = prediction.data
  accuracy = sum(pred_y == train_y.data) / float(2000.0)
  loss_point.append(loss.data[0])
  accuracy_point.append(accuracy)
  time_point.append(time.time() - start_time)
  print("[{}/{}] | accuracy : {:.3f} | loss : {:.3f} | time : {:.2f} ".format(t + 1, 2000, accuracy, loss.data[0],
                                  time.time() - start_time))
  # redraw the full loss/accuracy curves against elapsed time
  viz.line(X=np.column_stack((np.array(time_point), np.array(time_point))),
       Y=np.column_stack((np.array(loss_point), np.array(accuracy_point))),
       win=line,
       opts=dict(legend=["loss", "accuracy"]))
  # if the data lives on the gpu this call errors; move it back with .cpu() first
  viz.scatter(X=train_x.cpu().data, Y=pred_y.cpu() + 1, win=scatter, name="add",
        opts=dict(markercolor=colors, legend=["0", "1", "2", "3"]))
  viz.text("<h3 align='center' style='color:blue'>accuracy : {}</h3><br><h3 align='center' style='color:pink'>"
       "loss : {:.4f}</h3><br><h3 align ='center' style='color:green'>time : {:.1f}</h3>"
       .format(accuracy, loss.data[0], time.time() - start_time), win=text)

We first run it once on the CPU; the results are as follows:

Then we run it on the GPU; the results are as follows:

The CPU turns out to be much faster than the GPU here, even though machine learning is supposed to run faster on GPUs. I searched on Baidu and found the answer on Zhihu:


My understanding is that the GPU's advantage lies in massively parallel matrix operations, such as those in image recognition; for a task with very few inputs and outputs, the transfer and launch overhead dominates and the CPU has the advantage.
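To illustrate this (a sketch of my own, not from the original article; bench is a hypothetical helper and a CUDA-capable build is assumed), a tiny matrix multiply pays more in overhead than it gains, while a large one shows the GPU's strength:

import time
import torch

def bench(n, on_gpu):
  # time one n x n matrix multiply, including the host-to-device transfer
  a, b = torch.randn(n, n), torch.randn(n, n)
  start = time.time()
  if on_gpu:
    a, b = a.cuda(), b.cuda()
  c = torch.mm(a, b)
  if on_gpu:
    torch.cuda.synchronize()  # wait for the kernel to actually finish
  return time.time() - start

if torch.cuda.is_available():
  for n in (4, 4096):
    print("n={:4d} | cpu: {:.4f}s | gpu: {:.4f}s".format(n, bench(n, False), bench(n, True)))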

Add a neural layer:

net = nn.Sequential(
  nn.Linear(2, 10),
  nn.ReLU(),  # activation function
  nn.Linear(10, 4)
)

With this 10-unit hidden layer added, let's see whether results improve. The training recipe is the same as before, with net swapped in for logstic; a sketch of the swap follows.
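Following the commented-out net lines in the earlier code, the swap might look like this (a sketch of my own; optimizer_n is a hypothetical name, and train_x / train_y are prepared exactly as before):

if gpu_status:
  net = net.cuda()
optimizer_n = optim.SGD(net.parameters(), lr=0.001)  # hypothetical optimizer for the new net
for t in range(2000):
  out = net(train_x)           # replaces out_l = logstic(train_x)
  loss = loss_f(out, train_y)
  optimizer_n.zero_grad()
  loss.backward()
  optimizer_n.step()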


Using the CPU:


Using the GPU:

Comparing the two runs, there is no visible difference. It seems that for simple classification problems with few inputs and outputs, neither extra neural layers nor the GPU gives machine learning much of a boost.

