From chatbots to self-driving cars, AI is transforming the technology landscape. One area where AI adoption is accelerating rapidly is on consumer devices like smartphones. Tech giants are racing to integrate on-device AI capabilities into their latest hardware and software.
But what exactly is on-device AI, and how does it work?
Today, most consumer AI runs on powerful servers in the cloud: your requests and data are uploaded, processed remotely, and the results are sent back. Uploading personal data to remote AI servers, however, isn't great for privacy. That's where on-device AI comes in.
On-device AI refers to artificial intelligence capabilities that run locally on a device rather than in the cloud. This means the AI processing happens right on your phone, tablet, or other device, without needing an internet connection for the processing itself.
Here's how it works under the hood: AI models like Stable Diffusion, DALL-E, and Midjourney are trained in the cloud using large amounts of data and computing power. These models are then optimized (for example, compressed and quantized) to run efficiently on the target devices. The optimized models are embedded into apps and downloaded onto each user's device, and the app's data stays isolated in a secure enclave.
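To make that "optimize, then embed" step concrete, here is a minimal sketch using PyTorch's dynamic quantization and TorchScript export. The TinyClassifier model, its layer sizes, and the output filename are illustrative placeholders, not anything a vendor actually ships:

```python
import torch
import torch.nn as nn

# A small placeholder model standing in for a network trained in the cloud.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()

# Shrink the model with dynamic quantization (weights stored as int8),
# a common step when preparing a model for phones and other edge devices.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Freeze the quantized model into a self-contained TorchScript file that an
# app can bundle and load on-device, with no Python training code present.
scripted = torch.jit.script(quantized)
scripted.save("tiny_classifier_ondevice.pt")
```

The key idea is that all the heavy training happens beforehand; what ships with the app is a small, frozen artifact that only needs to run inference.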
When the app needs to perform a task like recognizing a sound or identifying an object in a photo, it runs the AI model locally using the device's onboard CPU, GPU, or dedicated neural processing unit (NPU). The model processes the input data and generates an output prediction or result without sending any data externally.
All of this happens entirely on your device, keeping your interactions private.
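Here is a rough sketch of what that local inference step can look like, using ONNX Runtime on the CPU. The model file, its input name, and the fake audio clip are all assumptions for illustration:

```python
import numpy as np
import onnxruntime as ort

# Load a model file bundled with the app from local storage; nothing is
# fetched from a server. The filename and input name are placeholders.
session = ort.InferenceSession(
    "sound_classifier.onnx", providers=["CPUExecutionProvider"]
)

# Pretend this is a one-second audio clip captured by the microphone.
audio_clip = np.random.randn(1, 16000).astype(np.float32)

# Run the model on the device's own CPU; the raw audio never leaves the phone.
(scores,) = session.run(None, {"input": audio_clip})
predicted_class = int(scores.argmax())
print(f"Predicted class id: {predicted_class}")
```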
On-device AI enables impressive capabilities right on your device. This includes fast image generation using models like Stable Diffusion, where images can be created locally in seconds. That's great if you want to quickly create images and backgrounds for your social media posts, and since everything is processed locally, your image prompts remain private.
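As one hedged example, local image generation could look like the following sketch built on the open-source diffusers library; the model ID, the prompt, and the assumption of a local GPU are illustrative, not a description of any particular phone's software:

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the model weights once; after that, generation runs entirely on
# local hardware (here assumed to be a local GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon

# The prompt is processed locally, so it is never sent to a remote service.
image = pipe(
    "a cozy desk setup for a social media banner", num_inference_steps=20
).images[0]
image.save("banner.png")
```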
An AI assistant can also be powered locally, providing a personalized experience based on your data and preferences, like favorite activities and fitness levels. It can understand natural language for tasks like photo and video editing. For example, you may be able to say "remove the person on the left" to edit an image instantly.
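As a sketch of how a local assistant might interpret such a request, here is a hypothetical example using the llama-cpp-python bindings. The model file, the prompt format, and the mapping from "remove the person on the left" to an editing command are all assumptions for illustration, not any product's actual API:

```python
from llama_cpp import Llama

# Load a quantized Llama model file already stored on the device;
# the path and model file are placeholders.
assistant = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

# Interpret a natural-language editing request without any network call.
prompt = (
    "You convert photo-editing requests into a single editing command.\n"
    "Request: remove the person on the left\n"
    "Command:"
)
reply = assistant(prompt, max_tokens=32, stop=["\n"])
print(reply["choices"][0]["text"].strip())
```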
For photography, on-device AI can enable features like extending image borders (similar to Photoshop's Generative Fill), using the front and back cameras simultaneously for video effects, and editing different layers of a photo independently.
Gaming graphics can also benefit from on-device AI through upscaling resolution up to 8K, accelerating ray tracing for realistic lighting, and efficiently doubling framerates. On the audio side, on-device AI can keep the audio from videos and games in sync in real time, and deliver crystal-clear calls and music even if you walk into another room.
Overall, on-device AI aligns with what many users want: quick results and privacy. It keeps more of the processing local instead of in the cloud.
On-device AI is pretty neat. It allows our phones and other gadgets to run advanced AI algorithms locally without needing to connect to the cloud and without lag.
Companies like Qualcomm are already packing their latest chips with specialized hardware to run neural networks efficiently on our devices. For example, Qualcomm's Snapdragon 8 Gen 3 supports up to 24GB of RAM (on a smartphone!), has built-in support for running models like Stable Diffusion and Llama 2, and uses Qualcomm's AI Stack to deliver some of the best on-device AI performance currently available.
So, with each new generation of smartphones, expect to see even more powerful on-device AI capabilities for things like intelligent assistants, real-time translation, automated photo editing, and lightning-fast generative AI image generation.