Between TensorFlow and PyTorch, which do you choose?
Alchemists (as ML practitioners jokingly call themselves) were tortured by TF for years: static graphs, dependency issues, inexplicable interface changes. Even after Google released TF 2.0, the problems were never really solved. For many who finally gave up and switched to PyTorch, the world turned sunny again.
"Life is short, I use PyTorch"
Even Google's announcement of its new-generation computing framework, JAX, made it look as though the officials had given up on TF, leaving TensorFlow only half a step from the grave.
Then, just before TF's seventh birthday, the TensorFlow development team published a blog post announcing that TensorFlow will continue to be developed: a new version will be released in 2023, the messy and inconsistent interfaces will be cleaned up, and 100% backward compatibility is promised!
About seven years ago, on November 9, 2015, TensorFlow was officially open-sourced.
Since then, thousands of open-source contributors in the community, Google development experts, community organizers, researchers and educators around the world have invested in TensorFlow's development.
Today, seven years later, TensorFlow is the most commonly used machine learning platform, used by millions of developers.
TF is the third-ranked software repository on GitHub (after Vue and React), and also the most downloaded machine learning package on PyPI.
TF also brings machine learning to the mobile ecosystem: TFLite runs on 4 billion devices.
TensorFlow also brings machine learning to the browser: TensorFlow.js is downloaded 170,000 times per week.
TensorFlow powers nearly all production machine learning across Google’s product portfolio, including Search, Gmail, YouTube, Maps, Play, Ads, Photos, and more.
Beyond Google, within the rest of Alphabet, TensorFlow and Keras also provide the machine intelligence behind Waymo's self-driving cars.
In the broader industry, TensorFlow powers machine learning systems at thousands of companies, including many of the world's largest users of machine learning: Apple, ByteDance, Netflix, Tencent, Twitter, and more.
In research, Google Scholar indexes more than 3,000 new scientific publications mentioning TensorFlow or Keras every month.
Today, TF's user base and developer ecosystem are larger than ever, and still growing!
The development of TensorFlow is not only an achievement worth celebrating, but also an opportunity to further provide more value to the machine learning community.
The development team’s goal is to provide the best machine learning platform on the planet and work to transform machine learning from a niche industry into an industry as mature as web development.
To achieve this goal, the development team is willing to listen to user needs, anticipate new industry trends, iterate the software's interfaces, and strive to make large-scale innovation increasingly easier.
Machine learning is evolving rapidly, and so is TensorFlow.
The development team has begun working on the next iteration of TensorFlow, one that will support the next decade of machine learning development, built together with the community!
Fast and scalable: XLA compilation, distributed computing, performance optimization
TF will focus on XLA compilation, building on the performance advantages already proven on TPU to make training and inference workflows for most models faster on GPU and CPU as well. The development team hopes XLA will become the industry-standard compiler for deep learning, and it is now open source as part of the OpenXLA initiative.
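As a rough illustration of what XLA compilation looks like from user code today, the existing jit_compile flag on tf.function asks TensorFlow to compile a function with XLA. This is only a minimal sketch; the toy computation and shapes below are arbitrary:

    import tensorflow as tf

    # Opt a function into XLA compilation via the existing jit_compile flag.
    @tf.function(jit_compile=True)
    def dense_relu(x, w, b):
        return tf.nn.relu(tf.matmul(x, w) + b)

    x = tf.random.normal([8, 128])
    w = tf.random.normal([128, 64])
    b = tf.zeros([64])

    # The first call traces and XLA-compiles the function;
    # subsequent calls with the same shapes reuse the compiled program.
    y = dense_relu(x, w, b)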
At the same time, the team has also begun work on DTensor, a new interface for large-scale model parallelism that may open up the future of very-large-model training and deployment: even when a large model is trained across multiple clients, it should feel to the user like training on a single machine.
DTensor will be unified with the tf.distribute interface to support flexible model and data parallelism.
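DTensor is already available in experimental form under tf.experimental.dtensor. The sketch below shards a tensor across a small device mesh; it is only an illustration of the idea, the interface is experimental and the exact calls may change, and the virtual-CPU setup exists purely so the example runs on a single machine:

    import tensorflow as tf
    from tensorflow.experimental import dtensor

    # Split one physical CPU into two logical devices so the example runs anywhere.
    cpu = tf.config.list_physical_devices("CPU")[0]
    tf.config.set_logical_device_configuration(
        cpu, [tf.config.LogicalDeviceConfiguration()] * 2)

    # A 1-D mesh with a "batch" dimension spanning the two logical CPUs.
    mesh = dtensor.create_mesh([("batch", 2)], device_type="CPU")

    # Shard the first axis across the mesh, replicate the second axis.
    layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)
    x = dtensor.call_with_layout(tf.ones, layout, shape=[8, 4])

    print(dtensor.fetch_layout(x))  # shows how the tensor is laid out across devices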
The development team will also keep researching performance-optimization techniques such as mixed-precision and reduced-precision computation, which can deliver considerable speedups on GPUs and TPUs.
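Mixed precision is already usable through the Keras API. A minimal sketch of how a model opts in today (the layer sizes are arbitrary):

    import tensorflow as tf

    # Compute in float16 while keeping variables in float32.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        # Keep the output layer in float32 for numerical stability.
        tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")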
Applied Machine Learning
The team will provide new tools for computer vision and natural language processing.
The applied machine learning ecosystem the team is building, specifically the KerasCV and KerasNLP packages, offers modular, composable components for applied CV and NLP use cases, including a large number of state-of-the-art pre-trained models.
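As a hedged sketch of what these packages look like in practice, KerasNLP exposes pretrained models behind a from_preset interface; the preset name below is illustrative and may differ between releases:

    import keras_nlp  # pip install keras-nlp

    # Load a pretrained BERT backbone plus a classification head in one call.
    classifier = keras_nlp.models.BertClassifier.from_preset(
        "bert_tiny_en_uncased",  # assumed preset name, for illustration only
        num_classes=2,
    )

    # The built-in preprocessor lets the model consume raw strings directly.
    predictions = classifier.predict(["The movie was great!", "Terrible pacing."])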
For developers, the team will also add more code samples, guides and documentation for popular and emerging applied machine learning use cases. The ultimate goal is to gradually lower the industry's barriers to machine learning and turn it into a tool in the hands of every developer.
Easier to deploy
Exporting models to mobile devices (Android or iOS), edge devices (microcontrollers), server backends or JavaScript will become simpler for developers.
In the future, exporting models to TFLite and TF.js and optimizing their inference performance will be as simple as calling model.export().
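For comparison, the sketch below shows what a TFLite export looks like with today's API, using tf.lite.TFLiteConverter; the announced model.export() call is intended to collapse this kind of workflow into a single step:

    import tensorflow as tf

    # A tiny placeholder model standing in for a real trained model.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

    # Today's path: build a converter from the Keras model, then convert.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_bytes = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_bytes)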
At the same time, the team is also developing a public TF2 C++ API for native server-side inference, which can be used directly as part of a C++ application.
Whether you develop models with JAX and serve them with TensorFlow Serving, or build mobile and web models with TensorFlow Lite and TensorFlow.js, deployment will become easier.
Simpler
As the field of machine learning has expanded over the past few years, TensorFlow's set of interfaces has also grown, and they are not always presented in a consistent or easy-to-understand way.
The development team is actively consolidating and simplifying these APIs, for example by adopting the NumPy API standard for numerics.
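One concrete piece of that alignment already exists as tf.experimental.numpy, which mirrors a large part of the NumPy API on top of TensorFlow tensors. A minimal sketch:

    import tensorflow.experimental.numpy as tnp

    # Enable NumPy-style type promotion and ndarray-like methods on tf.Tensor.
    tnp.experimental_enable_numpy_behavior()

    a = tnp.reshape(tnp.arange(12, dtype=tnp.float32), (3, 4))
    b = tnp.ones((4, 2), dtype=tnp.float32)

    c = a @ b        # NumPy-style operators backed by TensorFlow ops
    print(c.shape)   # (3, 2)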
Model debugging also needs attention: what makes a framework excellent is not only its API design but also its debugging experience.
The team’s goal is to minimize the solution time for developing any applied machine learning system through better debugging capabilities.
The development team hopes that TensorFlow will become the cornerstone of the machine learning industry, so the stability of the API is also the most important feature.
Engineers who rely on TensorFlow as part of a product, and builders of TensorFlow ecosystem packages, should be able to upgrade to the latest TensorFlow version and immediately take advantage of new features and performance improvements, without worrying that their existing code base might break.
Therefore, the development team promises full backward compatibility from TensorFlow 2 to the next version.
TensorFlow 2 code will run as-is, without code conversion or manual changes.
The team plans to release a preview of the new TensorFlow capabilities in the second quarter of 2023, with a production version following later in the year.