[Global Network Technology Reporter Lin Mengxue] Generative AI and large models are currently drawing intense attention worldwide. At the recently concluded 2023 World Artificial Intelligence Conference (WAIC 2023), vendors set off a veritable "Hundred Model War": according to incomplete statistics from the organizing committee, more than 30 large-model platforms were released or unveiled, 60% of the offline booths featured introductions to and applications of generative AI technology, and 80% of attendees' discussions revolved around large models.
During WAIC 2023, Zhang Yu, Senior Chief AI Engineer at Intel Corporation and Chief Technology Officer of the Network and Edge Division in China, argued that the core driver of this round of AI development is the continuous advancement of computing, communication, and storage technologies. Within the AI ecosystem, whether for large models or broader AI integration, the edge plays a vital role.
Zhang Yu said, "With the digital transformation of industry, demand for agile connectivity, real-time business, and intelligent applications has driven the development of edge AI. However, most edge AI applications are still at the inference stage. That is, a model is trained in the data center using massive data and enormous computing power, and the trained model is then pushed to front-end devices to run inference. This is how most AI is used at the edge today."
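The "train centrally, infer at the edge" pattern Zhang Yu describes can be sketched in a few lines. This is a minimal illustration only; the tiny linear model and all function names here are hypothetical, not Intel tooling or any real framework's API.

```python
# Minimal sketch of the "train in the data center, infer at the edge" pattern.
# All names and the toy model are illustrative, not any real toolkit's API.

def train_in_datacenter(samples):
    """Stand-in for heavy centralized training: least-squares fit of y = w*x + b."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return {"w": w, "b": b}  # this artifact is what gets pushed to edge devices

def edge_infer(model, x):
    """The edge device only runs the cheap forward pass on the frozen model."""
    return model["w"] * x + model["b"]

# Data center: train once on bulk data, then ship the weights downstream.
model = train_in_datacenter([(0, 1), (1, 3), (2, 5), (3, 7)])  # data follows y = 2x + 1
print(edge_infer(model, 10))  # edge device answers queries without retraining
```

The key property, and the limitation Zhang Yu raises next, is that the edge side never modifies the weights: every improvement requires a new round trip through the data center.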
"This model will inevitably limit the frequency of model updates, but we have also seen that many smart industries actually have demands for model updates. Autonomous driving needs to be able to adapt to various road conditions and be suitable for the driving of different drivers. Habit. However, when we train a role model in a car factory, there are often certain differences between the training data used and the data generated during dynamic driving. This difference affects the generalization ability of the model, that is, to The ability to adapt to new road conditions and new driving behaviors. We need to continuously train and optimize models at the edge to advance this process," he said.
Zhang Yu therefore proposed that the second stage of AI development should be edge training. "To implement edge training, we need more automated tools covering the complete development workflow, from data annotation through model training to model deployment." The direction after that, he said, should be autonomous learning at the edge.
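The edge-training idea above can be illustrated with a toy online-learning loop: a device receives pretrained but slightly stale weights, then refines them with small gradient steps on locally observed samples. This is a hypothetical stdlib-only sketch; a real pipeline would also include the data annotation, validation, and redeployment stages Zhang Yu mentions.

```python
# Illustrative sketch of edge-side model updating: the device refines a
# pretrained linear model on locally observed data with small SGD steps.
# The model, names, and data are hypothetical, for illustration only.

def sgd_update(model, x, y, lr=0.05):
    """One gradient step on squared error for y_hat = w*x + b."""
    y_hat = model["w"] * x + model["b"]
    err = y_hat - y
    model["w"] -= lr * err * x
    model["b"] -= lr * err
    return model

# Pretrained weights pushed from the data center (slightly out of date).
model = {"w": 1.5, "b": 0.0}

# Locally observed samples at the edge actually follow y = 2x + 1.
stream = [(x % 5, 2 * (x % 5) + 1) for x in range(400)]
for x, y in stream:
    sgd_update(model, x, y)

# After the local stream, the weights have drifted toward the true w=2, b=1.
print(round(model["w"], 2), round(model["b"], 2))
```

The design point is that each update is cheap enough to run on a constrained device; the heavy lifting (initial training) still happened centrally.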
Edge AI also faces many challenges in practice. In Zhang Yu's view, beyond edge training itself, edge devices pose their own difficulties. "Because the power budget available at the edge is often limited, running inference and training with constrained resources places higher demands on a chip's performance-per-watt." He also pointed out that edge devices are highly fragmented, so enabling software to migrate across different platforms imposes further requirements.
In addition, AI development is closely tied to computing power, and behind computing power lies a huge foundation of data. Faced with massive data assets, how to protect them has become a hot topic in edge AI. "Once AI is deployed at the edge, these models are beyond the service provider's direct control. How do we protect a model in that situation? It must be well protected both in storage and at runtime. These are the challenges edge AI faces," Zhang Yu said.
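One concrete aspect of the at-rest protection problem raised above is integrity: an edge device should refuse to load a model artifact that has been tampered with. The stdlib-only sketch below shows this with an HMAC tag; it is a hypothetical illustration, and real deployments would add encryption and hardware-backed key storage rather than a key embedded in code.

```python
# Hypothetical sketch of one edge model-protection measure: integrity-checking
# a model artifact at rest so a tampered file is rejected before loading.
import hashlib
import hmac
import json

SECRET_KEY = b"device-provisioned-key"  # placeholder; never hardcode keys in practice

def seal_model(weights: dict) -> bytes:
    """Serialize weights and append an HMAC-SHA256 tag under the device key."""
    blob = json.dumps(weights, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest().encode()
    return blob + b"." + tag  # tag is hex, so the last "." always separates it

def load_model(sealed: bytes) -> dict:
    """Verify the tag before deserializing; refuse tampered artifacts."""
    blob, tag = sealed.rsplit(b".", 1)
    expected = hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("model artifact failed integrity check")
    return json.loads(blob)

sealed = seal_model({"w": 2.0, "b": 1.0})
print(load_model(sealed))  # verifies and loads

tampered = sealed.replace(b"2.0", b"9.0")  # an attacker edits a weight on disk
try:
    load_model(tampered)
except ValueError as e:
    print("rejected:", e)
```

Runtime protection (keeping weights confidential while the model executes) is a harder problem, typically addressed with trusted execution environments rather than application code.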
"Intel is a data company, and our products cover all aspects of computing, communications and storage. In terms of computing, Intel provides many products including CPUs, GPUs, FPGAs and various artificial intelligence acceleration chips. A variety of products to meet users' different requirements for computing power. For example, in terms of large artificial intelligence models, the Gaudi2 product launched by Intel's Habana is the only product in the entire industry that has shown excellent performance in large model training. At the edge In terms of inference, the OpenVINO deep learning deployment tool suite provided by Intel can quickly deploy the models designed and trained by developers on the open artificial intelligence framework to different hardware platforms to perform inference operations."