What can edge artificial intelligence (Edge AI) do for us?
- Edge artificial intelligence (edge AI) is a form of decentralized computing that allows devices to make data-driven decisions at the closest point of interaction with the user.
- The benefits of this technology include improved privacy and cost savings, though data processed at the edge is often discarded rather than stored.
- Coming advances, including 5G technology and lower-cost processing chips, will make edge AI increasingly useful for certain applications — from smart home devices to medical technology.
Imagine that you want your new smart thermostat to quickly turn up the temperature so your house is warm when you get home from work on an unusually cold day. You connect from your smartphone and ask it to act. You might not notice it, but the operation can take a few seconds as your request travels to the cloud and the instructions travel back.
Now imagine that the self-driving car you are riding in senses a dog running onto the road ahead. The car needs to react within milliseconds to avoid disaster. That kind of response requires edge artificial intelligence (edge AI): technology that makes decisions at the closest point of interaction with the user, which in this case is the car and its sensors. It is the definition of a split-second decision.
Dynamic Data
With today’s Internet of Things (IoT), data is always in motion. It flows from legacy systems to the cloud, all the way to edge devices, and beyond an organization’s systems to partners and customers. Answers need to be delivered in real time, and relying on centralized computing power is not always efficient when the data can be processed on the edge devices themselves. When a self-driving car has only milliseconds to react, it doesn’t have time to wait for the cloud to make a decision.
No matter where the device is located, vast amounts of data can be fed into AI algorithms at the edge, and the benefits are numerous. Dynamic data can deliver important patient information to doctors, shorten queues at amusement parks, alert power companies to potential power outages, and enable self-driving cars to respond in time to prevent tragedy.
Edge AI allows devices to make these decisions on their own, at the device level; the device does not even need to be connected to the Internet to process its data. Consider a watch that monitors your sleep patterns: instead of pushing the readings to the cloud for storage and processing, it records and analyzes them on the watch itself.
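To make the idea concrete, here is a minimal Python sketch of on-device processing, assuming a hypothetical watch that reduces its raw overnight heart-rate readings to a small summary that never has to leave the device. The function name and sample values are illustrative, not any vendor's actual firmware.

```python
from statistics import mean

def summarize_night(heart_rate_samples: list[int]) -> dict:
    """Reduce thousands of raw samples to a compact summary, computed on the watch."""
    # Take the lowest ~10% of readings as a rough proxy for resting heart rate.
    lowest = sorted(heart_rate_samples)[: max(1, len(heart_rate_samples) // 10)]
    return {
        "samples": len(heart_rate_samples),
        "avg_bpm": round(mean(heart_rate_samples), 1),
        "resting_bpm": round(mean(lowest), 1),
    }

if __name__ == "__main__":
    # Fake overnight readings stand in for the watch's sensor stream.
    overnight = [58, 61, 55, 52, 50, 49, 53, 60, 64, 70]
    print(summarize_night(overnight))  # summary stays on the device, nothing is uploaded
```

Only the compact summary ever needs to exist, which is the essence of processing at the edge rather than in the cloud.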
Other edge-enabled AI devices include video game consoles, smart speakers, drones and robots. Security cameras can run at the edge as well: cameras on the factory floor look for product defects during manufacturing and can immediately flag which units need to be pulled from the line. When speed saves lives, edge AI can also analyze images for emergency medical care. The closer the processing sits to the data, the faster the response.
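The factory-camera case follows the same pattern. The sketch below is a hedged illustration of an edge inspection loop: defect_score is placeholder arithmetic standing in for a small on-device vision model, not a real classifier. The point is simply that the keep-or-pull decision is made on the camera rather than after a cloud round trip.

```python
def defect_score(frame: bytes) -> float:
    """Stand-in for a small on-device vision model; returns a score in [0, 1]."""
    return (sum(frame) % 101) / 100.0  # placeholder logic only, not a real model

def inspect(frames: list[bytes], threshold: float = 0.8) -> list[int]:
    """Return indices of frames whose defect score crosses the threshold."""
    rejects = []
    for i, frame in enumerate(frames):
        if defect_score(frame) >= threshold:  # decision made locally, on the camera
            rejects.append(i)                 # pull this unit from the line right away
    return rejects

if __name__ == "__main__":
    frames = [b"\x10\x20\x30", b"\xff\xfe\xfd", b"\x01\x02\x03"]
    print(inspect(frames))  # with this placeholder scoring, only the first frame is flagged
```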
While edge technology won’t replace the cloud, user data that belongs only to you (such as your sleep patterns or gaming data) can be processed on edge-enabled devices. This decentralization of data eases privacy concerns, an important issue in the IoT market. Edge AI can provide convenience without compromising privacy. And in some cases it could be cheaper: one company is currently developing voice-controlled home appliances, such as washing machines and dishwashers, using tiny microprocessors that cost a few dollars each.
“When it comes to the gadgets in my house, I actually wish they were less smart.” — Clive Thompson, Wired
For example, the speech-recognition AI in a coffee machine only needs to recognize about 200 words, all of them related to the task of making coffee. Think about it, says Wired reporter Clive Thompson: “I don’t need my coffee maker or a light switch to achieve self-awareness. They just need to recognize ‘on’ and ‘off,’ and maybe ‘dark.’ When it comes to the gadgets that share my house, I’d actually prefer them to be less intelligent.”
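A rough sketch of what such a tiny vocabulary looks like in practice: the command set below is invented for illustration, but it shows why a closed list of a couple hundred words can be matched with logic simple enough to run on very modest hardware, with no cloud speech service involved.

```python
# Illustrative vocabulary for a hypothetical voice-controlled coffee machine.
VOCABULARY = {"on", "off", "dark", "strong", "espresso", "latte", "stop"}

def parse_command(transcript: str) -> list[str]:
    """Keep only the words the machine actually understands."""
    return [w for w in transcript.lower().split() if w in VOCABULARY]

if __name__ == "__main__":
    print(parse_command("please turn ON and make it dark"))  # -> ['on', 'dark']
```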
In addition to faster and cheaper processing, edge AI does not depend on ever-expanding network bandwidth. With the rapid growth of the Internet of Things, vast amounts of data are now being sensed and generated at the edge; Statista estimates that this figure will reach nearly 80 zettabytes by 2025.
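To get a feel for that scale, here is a rough back-of-the-envelope calculation. The numbers are deliberately round, and the 1 Gbit/s uplink is an illustrative unit of comparison, not an official figure.

```python
ZETTABYTE = 10 ** 21                     # bytes
edge_data = 80 * ZETTABYTE               # ~80 ZB per year, per the Statista estimate
seconds_per_year = 365 * 24 * 3600

# Sustained throughput needed to move every byte to the cloud as it is produced.
required_bps = edge_data * 8 / seconds_per_year
print(f"Sustained throughput needed: {required_bps / 1e15:.0f} Pbit/s")

gigabit_uplink = 1e9                     # one 1 Gbit/s link, saturated all year
print(f"Equivalent 1 Gbit/s uplinks: {required_bps / gigabit_uplink:,.0f}")
```

The answer works out to roughly twenty petabits per second of sustained traffic, which is why shipping everything to the cloud is not a realistic option.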
This volume is so large that using the bandwidth of today’s Internet to move all of it from edge devices to cloud servers for storage and processing is technically infeasible. Even where the bandwidth exists, there would need to be enough data center capacity to handle it all. Lower bandwidth requirements also translate into cost savings: about 10% of enterprise-generated data is currently created and processed outside traditional centralized data centers or the cloud, and Gartner predicts that this figure will reach 75% by 2025.
Balancing Risks and Rewards
One of the most vexing issues in the IoT world is that the many people who cannot afford devices, or who live in rural areas without local networks, may be unable to take part in a transformation that touches our daily lives. A history of limited network capacity can become a vicious cycle: edge networks are not simple to build and can be expensive, so developing countries may fall further behind in their ability to process data on edge devices that require newer technologies. The growth of edge computing is therefore another way in which structural inequality may widen, particularly in access to life-changing AI and IoT devices.
Another risk with edge AI is that data may be discarded after processing. By its very nature, work done “at the edge” may never reach the cloud for storage, and devices can be instructed to throw information away to save costs. Central processing and storage certainly have drawbacks, but their advantage is that the data is there when you need it. If it’s just you and your self-driving car on an empty road, that data may not seem important, but think again: a lot can be learned from it, including information about road conditions and how this vehicle and others like it behave under those conditions.
Finally, when it comes to edge computing, a clear business case must be scrutinized to ensure that the cost of the network is balanced against the value it creates. Still, despite the risks of inequality and data loss, and with advances in 5G technology and cheaper processing chips, it is easy to see that “at the edge” is here to stay, whether that means your self-driving car or your coffee maker getting you ready for your commute.