The perceptron is the simplest neural network: a single layer that simulates the behavior of a biological neuron. This tutorial shows how to implement a perceptron in Python. The model is as follows: given an n-dimensional input x, the perceptron computes y = f(w·x + b), where w and b are parameters to be learned from the data. w is the weight vector (each input dimension has its own weight) and b is the bias term.
Activation function. There are many choices for the perceptron's activation function. For example, we can choose the following step function f: f(z) = 1 if z >= 0, and f(z) = 0 if z < 0.
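The step activation and the full perceptron output can be sketched in a few lines. The weights and bias below are illustrative placeholder values, not trained ones:

```python
import numpy as np

# Step function: 1 when the weighted sum is non-negative, 0 otherwise
unit_step = lambda z: 0 if z < 0 else 1

# Perceptron output y = f(w.x + b) for illustrative parameters
w = np.array([0.5, 0.5])  # example weights (not trained)
b = -0.7                  # example bias (not trained)
x = [1, 1]
output = unit_step(float(np.dot(w, x)) + b)
print(output)
```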
In fact, the perceptron can fit any linear function, so any linear classification or linear regression problem can be solved with a perceptron. But the perceptron cannot implement the XOR operation; indeed, no linear classifier can, because the XOR classes are not linearly separable.
The XOR operation:

x1  x2  |  XOR
0   0   |  0
0   1   |  1
1   0   |  1
1   1   |  0

For these four points, no straight line separates class 0 from class 1: a separating perceptron would need b < 0, w1 + b >= 0, w2 + b >= 0, and w1 + w2 + b < 0, which is contradictory (adding the middle two inequalities gives w1 + w2 + b >= -b > 0). The AND operation, by contrast, can be implemented by a perceptron: a straight line does divide the points into the two classes.
The AND operation:

x1  x2  |  AND
0   0   |  0
0   1   |  0
1   0   |  0
1   1   |  1
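One concrete separating line for AND (the weights here are hand-picked for illustration, not trained) is x1 + x2 = 1.5, i.e. w = (1, 1), b = -1.5:

```python
unit_step = lambda z: 0 if z < 0 else 1

w, b = (1.0, 1.0), -1.5  # hand-picked: the line x1 + x2 = 1.5 separates AND
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = unit_step(w[0] * x1 + w[1] * x2 + b)
    print((x1, x2), '->', y)  # only (1, 1) lands on the positive side
```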
Training of the perceptron
First, randomly initialize the weights w and the bias b to very small values, then continuously update w and b during training.
1. Initialize the weights to 0 or to small random numbers.
2. For each training sample x(i), perform the following steps:
Calculate the output value y^.
Update the weights: w := w + rate * (y - y^) * x(i) and b := b + rate * (y - y^), where rate is the learning rate.
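The update rule above can be sketched as a single function; the sample values at the bottom are just an illustration of one misclassified example:

```python
import numpy as np

unit_step = lambda z: 0 if z < 0 else 1

def update(w, b, x, label, rate):
    """One perceptron step: w <- w + rate*(y - y^)*x, b <- b + rate*(y - y^)."""
    y_pred = unit_step(float(np.dot(w, x)) + b)  # output value y^
    w = w + rate * (label - y_pred) * np.array(x)
    b = b + rate * (label - y_pred)
    return w, b

# Example: with zero weights the sample ([1, 0], label 0) is predicted as 1,
# so the update pushes w and b downward.
w, b = np.zeros(2), 0.0
w, b = update(w, b, [1, 0], 0, 0.1)
print(w, b)
```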
Below, the perceptron is used to implement the AND operation. The code is as follows:
```python
# -*- coding: utf-8 -*-
# Python 3
import numpy as np

'''
1. Initialize the weights to 0 or small random numbers.
2. For each training sample x(i), perform the following steps:
   - Calculate the output value y^.
   - Update the weights.
'''

def load_data():
    input_data = [[1, 1], [0, 0], [1, 0], [0, 1]]
    labels = [1, 0, 0, 0]
    return input_data, labels

def train_pre(input_data, y, iteration, rate):
    '''
    Parameters:
        input_data: input data
        y:          list of labels
        iteration:  number of training epochs
        rate:       learning rate
    '''
    unit_step = lambda x: 0 if x < 0 else 1      # step activation function
    w = np.random.rand(len(input_data[0]))       # initialize w randomly in [0, 1)
    bias = 0.0                                   # bias term
    for i in range(iteration):
        samples = zip(input_data, y)
        for (input_i, label) in samples:         # for each training sample
            # compute f(w * xi + b); here x has two components
            result = float(sum(input_i * w)) + bias
            y_pred = float(unit_step(result))    # compute the output value y^
            w = w + rate * (label - y_pred) * np.array(input_i)  # update weights
            bias += rate * (label - y_pred)      # update bias
    return w, bias

def predict(input_i, w, b):
    unit_step = lambda x: 0 if x < 0 else 1      # step activation function
    result = float(sum(input_i * w)) + b
    y_pred = float(unit_step(result))
    print(y_pred)

if __name__ == '__main__':
    input_data, y = load_data()
    w, b = train_pre(input_data, y, 20, 0.01)
    predict([1, 1], w, b)
```

Note two bugs fixed from the flattened listing: the bias was previously overwritten (`bias = ...`) instead of accumulated (`bias += ...`), and the bias was added before summing, so it was counted once per input dimension. The unused `sklearn` and `random.choice` imports were removed.
I believe you have mastered the method after working through this example.
The above is the detailed content of how to implement a perceptron in Python.