Table of Contents
How TensorFlow works
Using TensorFlow with Python
Using TensorFlow with JavaScript
TensorFlow Lite
Why use TensorFlow
Using TensorFlow for deterministic model training
TensorFlow competes with PyTorch, CNTK and MXNet

Why can TensorFlow do machine learning development?

Apr 08, 2023 09:21 PM
machine learning tensorflow

Machine learning is a complex discipline, but implementing machine learning models is far less daunting than it used to be, thanks to machine learning frameworks such as Google's TensorFlow that simplify the process of acquiring data, training models, serving predictions, and refining future results.


Created by the Google Brain team and initially released to the public in 2015, TensorFlow is an open source library for numerical computation and large-scale machine learning. TensorFlow bundles together a wide range of machine learning and deep learning models and algorithms (also known as neural networks) and makes them useful by way of common programmatic metaphors. It provides a convenient front-end API for building applications in Python or JavaScript, while executing those applications in high-performance C++.

TensorFlow competes with frameworks such as PyTorch and Apache MXNet in training and running deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, sequence-to-sequence models for machine translation, natural language processing, and PDE (partial differential equation)-based simulations. Best of all, TensorFlow supports production predictions at scale, using the same models used for training.

TensorFlow also has an extensive library of pre-trained models that can be used in your own projects. You can also use code from the TensorFlow Model Garden as examples of best practices for training your own models.

How TensorFlow works

TensorFlow allows developers to create data flow graphs—structures that describe how data moves through a graph or series of processing nodes. Each node in the graph represents a mathematical operation, and each connection or edge between nodes is a multidimensional data array, or tensor.
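As a minimal sketch of this idea, TensorFlow 2.x can trace an ordinary Python function into such a dataflow graph with `tf.function` (the function name and the values here are purely illustrative):

```python
import tensorflow as tf

@tf.function  # traces the Python function into a TensorFlow dataflow graph
def scale_and_add(x, y):
    # each operation (multiply, add, reduce_sum) becomes a node in the graph;
    # the tensors flowing between them are the graph's edges
    return tf.reduce_sum(x * 2.0 + y)

result = scale_and_add(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))
print(float(result))  # 13.0
```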

TensorFlow applications can run on most convenient targets: a local machine, a cluster in the cloud, iOS and Android devices, CPUs, or GPUs. If you use Google's own cloud, you can run TensorFlow on Google's custom Tensor Processing Unit (TPU) silicon for further acceleration. The resulting models created by TensorFlow, though, can be deployed on most any device where they will be used to serve predictions.

TensorFlow 2.0, released in October 2019, revamped the framework in many ways based on user feedback, making it easier to work with (for example, by using the relatively simple Keras API for model training) and more performant. Distributed training is easier to run thanks to a new API, and support for TensorFlow Lite makes it possible to deploy models on a greater variety of platforms. However, code written for earlier versions of TensorFlow must be rewritten (sometimes only slightly, sometimes significantly) to take maximum advantage of new TensorFlow 2.0 features.

A trained model can be used to deliver predictions as a service via a Docker container using REST or gRPC APIs. For more advanced serving scenarios, you can use Kubernetes.
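As an illustrative sketch only (it assumes a TensorFlow Serving container is already running locally, serving a model named `my_model` on the default REST port 8501; both the model name and the input vector are made up), a REST prediction request can be issued with nothing but the Python standard library:

```python
import json
import urllib.request

# Assumption: TensorFlow Serving is running locally with a model named
# "my_model" exposed on the default REST port 8501.
payload = json.dumps({"instances": [[1.0, 2.0, 5.0]]}).encode("utf-8")
request = urllib.request.Request(
    "http://localhost:8501/v1/models/my_model:predict",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    predictions = json.loads(response.read())["predictions"]
```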

Using TensorFlow with Python

TensorFlow provides all of these capabilities to programmers by way of the Python language. Python is easy to learn and work with, and it provides convenient ways to express how high-level abstractions can be coupled together. TensorFlow is supported on Python versions 3.7 through 3.10, and while it may work on earlier versions of Python, it is not guaranteed to.

Nodes and tensors in TensorFlow are Python objects, and TensorFlow applications themselves are Python applications. However, the actual mathematical operations are not performed in Python. The transformation libraries provided through TensorFlow are written as high-performance C binaries. Python simply directs the flow between the various parts and provides high-level programming abstractions to connect them together.
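A small sketch of that division of labor: the Python code below only describes the computation, while the matrix multiplication itself runs in TensorFlow's compiled C++ kernels (the values are arbitrary):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # tensors are Python objects...
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)  # ...but the multiply executes in a compiled C++ kernel
print(c.numpy().tolist())  # [[1.0, 3.0], [3.0, 7.0]]
```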

High-level work in TensorFlow (creating nodes and layers and linking them together) uses the Keras library. The Keras API is outwardly simple; a basic three-layer model can be defined in fewer than 10 lines of code, and the training code for the same model takes just a few more lines. But if you want to "lift the hood" and do more fine-grained work, such as writing your own training loop, you can do that.
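A minimal sketch of such a three-layer model (the layer sizes, activations, and input dimension are illustrative choices, not prescribed by anything in particular):

```python
import tensorflow as tf

# A basic three-layer model, defined in well under 10 lines of code
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training is then a single call, e.g. model.fit(x_train, y_train, epochs=5)
```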

Using TensorFlow with JavaScript

Python is the most popular language for working with TensorFlow and machine learning generally. But JavaScript is now also a first-class language for TensorFlow, and one of JavaScript's huge advantages is that it runs anywhere there is a web browser.

TensorFlow.js, the JavaScript TensorFlow library, uses the WebGL API to accelerate computations by way of whatever GPU is available in the system. It can also run on a WebAssembly backend, which is faster than the regular JavaScript backend for CPU-only execution, though it's best to use a GPU whenever possible. Pre-built models let you get simple projects up and running so you can get an idea of how things work.

TensorFlow Lite

A trained TensorFlow model can also be deployed on edge computing or mobile devices, such as iOS or Android systems. The TensorFlow Lite toolset optimizes TensorFlow models to run well on such devices, by letting you make trade-offs between model size and accuracy. A smaller model (that is, 12MB versus 25MB, or even 100+MB) is less accurate, but the loss in accuracy is generally small, and more than offset by the model's speed and energy efficiency.
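A hedged sketch of the conversion step (the one-layer Keras model here is just a stand-in; in practice you would convert a trained model, and the optimization setting is one illustrative choice among several):

```python
import tensorflow as tf

# Stand-in model; in practice you would convert a trained model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to the TensorFlow Lite flat-buffer format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables size/latency optimizations
tflite_model = converter.convert()  # returns the serialized model as bytes

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```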

Why use TensorFlow

The biggest benefit TensorFlow provides for machine learning development is abstraction. Developers can focus on overall application logic rather than dealing with the details of implementing algorithms or figuring out the correct way to connect the output of one function to the input of another. TensorFlow takes care of the details behind the scenes.

TensorFlow also offers conveniences for developers who need to debug and introspect TensorFlow applications. Each graph operation can be evaluated and modified separately and transparently, instead of constructing the entire graph as a single opaque object and evaluating it all at once. This so-called "eager execution mode" was available as an option in older versions of TensorFlow and is now standard.
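A tiny sketch of eager execution, the default in TensorFlow 2.x, where each operation's result is available the moment it runs:

```python
import tensorflow as tf

assert tf.executing_eagerly()  # eager mode is on by default in TF 2.x
x = tf.constant(3.0)
y = x * x  # evaluated immediately; no deferred graph to build and run first
print(float(y))  # 9.0
```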

The TensorBoard visualization suite lets you inspect and profile how your graphs run by way of an interactive, web-based dashboard. The Tensorboard.dev service (hosted by Google) lets you host and share machine learning experiments written in TensorFlow. It is free to use, with storage for up to 100M scalars, 1GB of tensor data, and 1GB of binary object data. (Note that any data hosted on Tensorboard.dev is publicly visible, so do not use it for sensitive projects.)

TensorFlow also benefits in many ways from the backing of Google, a first-class commercial outfit. Google has fueled the project's rapid pace of development and created many significant offerings that make TensorFlow easier to deploy and use. The TPU silicon described above for accelerated performance in Google's cloud is just one example.

Using TensorFlow for deterministic model training

Some details of TensorFlow's implementation make it hard to obtain totally deterministic model-training results for some training jobs. Sometimes, a model trained on one system will vary slightly from a model trained on another, even when both are fed the exact same data. The reasons for this variance are slippery: one is how and where random numbers are seeded; another relates to certain non-deterministic behaviors when using GPUs. The 2.0 branch of TensorFlow has an option to enable determinism across an entire workflow with a couple of lines of code. This feature comes at a cost to performance, however, and should only be used when debugging a workflow.
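A hedged sketch of enabling that option (the seed value is arbitrary, and `enable_op_determinism` requires TensorFlow 2.9 or later):

```python
import tensorflow as tf

tf.keras.utils.set_random_seed(42)  # seeds the Python, NumPy, and TensorFlow RNGs at once
tf.config.experimental.enable_op_determinism()  # forces ops onto deterministic kernels (TF 2.9+)

a = tf.random.normal([3])
tf.keras.utils.set_random_seed(42)  # re-seed and repeat the same draw
b = tf.random.normal([3])
print(bool(tf.reduce_all(a == b)))  # True: identical seeds give identical results
```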

TensorFlow competes with PyTorch, CNTK and MXNet

TensorFlow competes with many other machine learning frameworks. PyTorch, CNTK, and MXNet are the three main frameworks that serve many of the same needs. Let’s take a quick look at where they stand out and fall short compared to TensorFlow:

  • PyTorch is built with Python and has many other similarities to TensorFlow: hardware-accelerated components under the hood, a highly interactive development model that allows for design-as-you-go work, and many useful components already included. PyTorch is generally a better choice for fast development of projects that need to be up and running in a short time, but TensorFlow wins out for larger projects and more complex workflows.
  • CNTK, the Microsoft Cognitive Toolkit, is like TensorFlow in using a graph structure to describe dataflow, but it focuses mostly on creating deep learning neural networks. CNTK handles many neural network jobs faster and has a broader set of APIs (Python, C++, C#, Java). But it is not currently as easy to learn or deploy as TensorFlow. It is also only available under the GNU GPL 3.0 license, while TensorFlow is available under the more liberal Apache license. And CNTK is no longer in active development; its last major release was in 2019.
  • Apache MXNet, adopted by Amazon as the premier deep learning framework on AWS, can scale almost linearly across multiple GPUs and multiple machines. MXNet also supports a broad range of language APIs (Python, C++, Scala, R, JavaScript, Julia, Perl, Go), although its native APIs are not as pleasant to work with as TensorFlow's. It also has a far smaller community of users and developers.


Original title: What is TensorFlow? The machine learning library explained
