I saw a big data article today that analyzed the advantages of Python as a machine learning language. It mentioned that in 2010, Python's Theano library ran 1.8 times faster than NumPy on the CPU, and 11 times faster than NumPy when running on the GPU.
So I started to look up the related concepts of the GPU and Theano.
The following is a text introduction to the GPU from NVIDIA's official website; the video there is particularly intuitive.
GPU-accelerated computing uses a graphics processing unit (GPU) and a CPU to accelerate scientific, engineering, and enterprise applications. NVIDIA® pioneered this in 2007, and GPUs now power energy-efficient data centers in government labs, universities, enterprises, and small and medium-sized businesses around the world.
How Applications Leverage GPUs for Acceleration
A simple way to understand the difference between CPUs and GPUs is to compare how they handle tasks. The CPU consists of several cores optimized for sequential serial processing. GPUs, on the other hand, are made up of thousands of smaller, more efficient cores designed to handle multiple tasks simultaneously.
A GPU has thousands of cores and can handle parallel tasks efficiently.
Theano is one of the mainstream Python deep learning libraries and also supports running on the GPU. However, Theano is difficult to get started with.
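As a quick illustration of what running Theano on the GPU looks like, here is a minimal sketch modeled on the element-wise example in the Theano documentation. It assumes Theano is installed and that a GPU device is selected through THEANO_FLAGS (for example device=gpu,floatX=float32); without that setting, the same code simply runs on the CPU.

```python
import numpy as np
import theano
import theano.tensor as T

# A large random vector stored as a shared variable so Theano can keep it
# on the GPU when a GPU device is configured.
rng = np.random.RandomState(42)
x = theano.shared(np.asarray(rng.rand(10000000), dtype=theano.config.floatX))

# Compile an element-wise exponential over the whole vector; with a GPU
# backend this is executed as a GPU kernel.
f = theano.function([], T.exp(x))

result = f()         # run the compiled expression
print(result.shape)  # (10000000,)
```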
Deep learning based on Python
Among the Python libraries that implement neural network algorithms, Theano is undoubtedly the most popular. Strictly speaking, however, Theano is not a neural network library but a Python library for defining and evaluating general mathematical expressions. Because of this, Theano has a steep learning curve, so I will introduce two neural network libraries built on top of Theano that have a flatter learning curve.
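To see why Theano is better described as a library for mathematical expressions than as a neural network library, here is a minimal sketch of its symbolic workflow: declare symbolic variables, build an expression out of them, and compile it into a callable function.

```python
import theano
import theano.tensor as T

# Declare two symbolic scalars and build an expression over them.
x = T.dscalar('x')
y = T.dscalar('y')
z = x ** 2 + y

# Compile the symbolic graph into an ordinary Python-callable function.
f = theano.function([x, y], z)

print(f(3, 4))  # 13.0
```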
The first library is Lasagne. It provides a nice abstraction that lets you build each layer of a neural network and then stack the layers on top of each other to form a complete model. Although this is easier than working in raw Theano, building every layer and stacking them by hand is still somewhat tedious, so we will use the Nolearn library, which provides a Scikit-Learn-style API on top of Lasagne for easily building a multi-layer neural network.
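As a rough sketch of the Scikit-Learn-style API that Nolearn exposes on top of Lasagne, the snippet below defines a small multi-layer network. The layer sizes and the MNIST-style 784-dimensional input shape are assumptions chosen purely for illustration, not values taken from the article.

```python
from lasagne.layers import InputLayer, DenseLayer
from lasagne.nonlinearities import softmax
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    # Stack of layers, each given a name and a Lasagne layer class.
    layers=[
        ('input', InputLayer),
        ('hidden', DenseLayer),
        ('output', DenseLayer),
    ],
    # Layer parameters (prefixed by the layer name):
    input_shape=(None, 784),      # e.g. flattened 28x28 images (assumed)
    hidden_num_units=100,
    output_num_units=10,
    output_nonlinearity=softmax,
    # Optimization settings:
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    max_epochs=10,
    verbose=1,
)

# net.fit(X_train, y_train)    # X: float32 array of shape (n, 784), y: int32 labels
# preds = net.predict(X_test)  # Scikit-Learn-style fit/predict interface
```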
Machine learning
Supervised learning
Supervised learning means that the data includes additional attributes that we want to predict. Supervised learning problems fall into the following two categories:
Classification
Classification: the samples belong to two or more categories, and we want to learn from already-labeled data how to predict the category of unlabeled data. Recognizing handwritten digits, for example, is a classification problem in which the goal is to map each input vector to one of a finite number of categories. Put another way, classification is the discrete (as opposed to continuous) form of supervised learning: there is a limited set of categories, and for each of the n samples we try to assign the correct category label.
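To make the handwritten-digit example concrete, here is a minimal Scikit-Learn sketch that trains a classifier on the library's built-in digits dataset; the choice of an SVM and its parameters are illustrative assumptions rather than anything prescribed above.

```python
from sklearn import datasets, svm

# Built-in dataset of 8x8 handwritten digit images with labels 0-9.
digits = datasets.load_digits()

# Train on all but the last ten samples.
clf = svm.SVC(gamma=0.001, C=100.0)
clf.fit(digits.data[:-10], digits.target[:-10])

# Predict the held-out samples and compare with the true labels.
print(clf.predict(digits.data[-10:]))
print(digits.target[-10:])
```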
Regression
Regression: if the desired output consists of one or more continuous variables, the task is called regression, for example predicting the length of a salmon as a function of its age and weight.
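Here is a minimal sketch of the salmon example using Scikit-Learn's LinearRegression; the age, weight, and length numbers below are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [age in years, weight in kg] -> length in cm.
X = np.array([[1, 1.0], [2, 2.5], [3, 4.0], [4, 5.5], [5, 7.0]])
y = np.array([30.0, 45.0, 58.0, 70.0, 80.0])

model = LinearRegression()
model.fit(X, y)

# Predict the length of a 3.5-year-old salmon weighing 5 kg.
print(model.predict([[3.5, 5.0]]))
```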