


How edge computing helps enterprises reduce costs and increase efficiency
Growing hopes for edge computing have filled the industry with bold predictions: that "the edge will eat the cloud" and that real-time automation will become ubiquitous in healthcare, retail, and manufacturing.
Today, more and more experts believe that edge computing will play a key role in the digital transformation of almost every enterprise. But progress has been slow. Traditional thinking prevents companies from taking full advantage of real-time decision-making and resource allocation. To understand how we got here, let's look back at the first wave of edge computing and what has happened since.
First Wave of Edge Computing: Internet of Things (IoT)
For most industries, the concept of edge is closely related to the first wave of Internet of Things (IoT). At the time, much of the focus was on collecting data from small sensors affixed to everything and then transmitting that data to a central location — such as the cloud or a main data center.
These data streams then had to be correlated through what is commonly known as sensor fusion. At the time, sensor economics, battery life, and limited coverage often produced data streams that were sparse and of low fidelity. In addition, retrofitting existing equipment with sensors was often costly: while the sensors themselves were inexpensive, installation was time-consuming and required trained personnel. Finally, the expertise required to analyze data through sensor fusion lived mostly in the tacit knowledge of employees scattered across the organization, making it hard to scale. Together, these factors slowed IoT adoption.
In addition, security concerns held back large-scale IoT deployments. The calculation is simple: thousands of connected devices across multiple locations equate to a massive, often unknown, attack surface. With potential risks outweighing unproven benefits, many believed it prudent to take a wait-and-see approach.
Beyond IoT 1.0
It is becoming increasingly clear that the edge is not really about the Internet of Things; it is about making real-time decisions across operations spread over distributed sites and geographies. In IT, and increasingly in industrial environments, we refer to these distributed data sources as the edge. Decision-making at all of these locations outside of the data center or cloud is what we call edge computing.
Today, the Edge Is Everywhere
The edge is where we live, where we work, and wherever human activity takes place. Sparse sensor coverage has been addressed with newer, more flexible sensors. New assets and technologies ship with a wide range of integrated sensors, and those sensors are now often augmented with high-resolution, high-fidelity imaging such as X-ray equipment and lidar.
Data at the edge matters most in the short term, and there is simply not enough bandwidth and time to ship all of it between the edge location and the cloud. Data can now be analyzed and consumed in real time at the edge, rather than processed and analyzed later in the cloud. To achieve new levels of efficiency and superior operational feedback, computing must occur at the edge.
This is not to say the cloud is irrelevant. The cloud still plays an important role in edge computing because of its ability to deploy and manage systems across all locations. For example, the cloud provides access to applications and data from other locations, and lets remote experts manage systems, data, and applications around the world. The cloud can also be used to analyze large data sets spanning multiple locations, show trends over time, and generate predictive analytics models.
Edge technology, then, is about handling big data flows across large numbers of geographically dispersed locations. One must adopt this new understanding of the edge to appreciate what is now possible with edge computing.
Today: Real-Time Edge Analytics
It is remarkable what can be done at the edge today compared with just a few years ago. Data can now be generated from a large number of sensors and cameras rather than a limited few, and analyzed on computers thousands of times more powerful than those of 20 years ago, all at a reasonable cost. High-core-count CPUs and GPUs, along with high-throughput networks and high-resolution cameras, are readily available, making real-time edge analytics a reality.
Deploying real-time analytics at the edge, where business activity occurs, helps enterprises understand their operations and react immediately. With this knowledge, many operations can be further automated, increasing productivity and reducing losses. Here are some of today's real-time edge analytics use cases.
Supermarket Fraud Prevention
Many supermarkets now offer some form of self-service checkout, and unfortunately they are also seeing an increase in fraud. Unscrupulous shoppers can substitute the barcode of a cheaper item for that of a more expensive one and pay less. To detect this type of fraud, stores now use high-resolution cameras that compare what was scanned and weighed against the actual product. These cameras are relatively cheap but generate huge amounts of data. By moving computing to the edge, that data can be analyzed instantly, so stores can detect fraud in real time rather than after the "customer" has left the parking lot.
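To make the mechanism concrete, here is a minimal sketch in Python of the scan-versus-weight check; the catalog entries, tolerance, and function names are hypothetical illustrations, not any particular retailer's system.

    # Hypothetical catalog: barcode -> (product name, expected weight in grams).
    CATALOG = {
        "0001": ("ribeye steak", 450.0),
        "0002": ("bananas, bunch", 1150.0),
    }

    WEIGHT_TOLERANCE = 0.10  # allow 10% deviation from the expected weight

    def check_scan(barcode: str, measured_weight_g: float) -> str:
        """Compare the scanned barcode with the weight reported by the scale."""
        entry = CATALOG.get(barcode)
        if entry is None:
            return "review"  # unknown barcode: flag for a staff member
        _name, expected = entry
        deviation = abs(measured_weight_g - expected) / expected
        return "ok" if deviation <= WEIGHT_TOLERANCE else "review"

    # A steak scanned under a banana barcode is caught at the lane itself,
    # with no round trip to the cloud.
    print(check_scan("0002", 455.0))  # -> "review"

In practice the comparison also draws on camera-based product recognition, but the decision still happens locally, for exactly the latency reasons described above.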
Food Production Monitoring
Today, a manufacturing plant can be outfitted with dozens of cameras and sensors at every step of the production process. Real-time analytics and AI-driven inference can reveal an error within milliseconds or even microseconds. For example, the cameras might show that too much sugar has been added, or that too much of another ingredient has been mixed in. With cameras and real-time analytics, production lines can adjust to correct problems, and can even trigger shutdowns when repairs are needed, before catastrophic damage occurs.
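A toy version of that control logic might look like the following Python sketch; the target, tolerance, and the three responses are illustrative assumptions, not a real plant's recipe system.

    from dataclasses import dataclass

    SUGAR_TARGET_G = 120.0     # hypothetical recipe target per batch
    SUGAR_TOLERANCE_G = 5.0    # acceptable deviation before acting

    @dataclass
    class LineReading:
        batch_id: str
        sugar_g: float  # dispensed amount estimated from camera/sensor data

    def evaluate(reading: LineReading) -> str:
        """Decide in-line, in real time, how the production line should respond."""
        error = abs(reading.sugar_g - SUGAR_TARGET_G)
        if error <= SUGAR_TOLERANCE_G:
            return "continue"
        # Small drift: nudge the dispenser. Large deviation: stop the line
        # before it produces a whole batch of unsellable product.
        return "adjust" if error <= 3 * SUGAR_TOLERANCE_G else "shutdown"

    print(evaluate(LineReading("batch-42", sugar_g=141.0)))  # -> "shutdown"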
AI-Powered Edge Computing for Healthcare
In the healthcare sector, infrared and X-ray imaging have been changing the game, delivering high-resolution images to technicians and doctors almost instantly. With such high resolution, AI can now filter, evaluate, and flag abnormalities before a doctor confirms them. By deploying AI-powered edge computing, doctors save time because they do not need to send data to the cloud to obtain a diagnosis. For example, when an oncologist is checking whether a patient has lung cancer, real-time AI filtering applied to the patient's lung images yields a fast, accurate assessment and greatly reduces the patient's anxiety while waiting for an answer.
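One common shape for such filtering is triage: an on-premises model scores each scan so that likely abnormalities reach the doctor first. The Python sketch below assumes a generic predict function and made-up scores; it is not any specific medical product's API.

    def triage(scans, predict):
        """Sort scans so the highest abnormality scores are read first."""
        scored = [(predict(scan), scan) for scan in scans]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored

    # Illustrative scores standing in for a locally deployed model.
    fake_scores = {"scan-a": 0.08, "scan-b": 0.91, "scan-c": 0.40}

    for score, scan in triage(fake_scores, fake_scores.get):
        flag = "review first" if score >= 0.5 else ""
        print(f"{scan}: {score:.2f} {flag}")

Because both the images and the model stay on site, the worklist reorders itself as soon as a scan completes, instead of waiting on a cloud round trip.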
Autonomous Cars Driven by Analytics
Today, self-driving cars are possible because of relatively cheap, readily available cameras that provide 360-degree stereoscopic vision. Analytics also enable precise image recognition, so a computer can tell the difference between a tumbleweed and a neighbor's cat and decide whether to brake or maneuver around the obstacle to stay safe.
The affordability, availability, and miniaturization of high-performance GPUs and CPUs enable the real-time pattern recognition and vector planning behind autonomous driving intelligence. For self-driving cars to succeed, they must have enough data and processing power to make intelligent decisions and take corrective action fast enough. This is only possible with today's edge technologies.
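As a deliberately simplified illustration of that decision loop, consider the Python sketch below; the obstacle classes, confidence threshold, and braking model are assumptions for the example, far cruder than a real planner.

    def plan(obstacle: str, confidence: float, distance_m: float,
             speed_mps: float) -> str:
        """Choose a maneuver from a recognized obstacle and simple kinematics."""
        HARMLESS = {"tumbleweed", "plastic_bag"}
        if obstacle in HARMLESS and confidence >= 0.9:
            return "continue"
        # Rough stopping distance at about 0.8 g of braking; real planners
        # use far richer vehicle, road, and traffic models.
        stopping_m = speed_mps ** 2 / (2 * 0.8 * 9.81)
        return "brake" if distance_m > stopping_m else "swerve"

    # The neighbor's cat 40 m ahead at roughly 54 km/h: enough room to brake.
    print(plan("cat", confidence=0.97, distance_m=40.0, speed_mps=15.0))

The point is the deadline: at 15 m/s the car covers a meter every 67 milliseconds, so the perception-to-decision loop has to complete on the vehicle itself; a cloud round trip would arrive too late.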
Distributed Architecture in Practice
When extremely powerful computing is deployed at the edge, enterprises can optimize operations without worrying about latency or lost connectivity to the cloud. Because everything needed for a decision is distributed at the edge, problems are solved in real time even with only sporadic connectivity.
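The pattern behind that claim is "decide locally, synchronize opportunistically." Here is a minimal Python sketch under that assumption; the event format, threshold, and class names are invented for the example.

    import queue

    class EdgeNode:
        """Process events locally; sync results to the cloud when reachable."""

        def __init__(self) -> None:
            self.outbox: queue.Queue = queue.Queue()  # results awaiting upload

        def handle(self, event: dict) -> str:
            # The real-time decision happens here, connected or not.
            decision = "alert" if event.get("anomaly_score", 0.0) > 0.8 else "ok"
            self.outbox.put({"event": event, "decision": decision})
            return decision

        def sync(self, cloud_reachable: bool) -> int:
            """Drain the buffer opportunistically; return how many records went out."""
            sent = 0
            while cloud_reachable and not self.outbox.empty():
                self.outbox.get()  # in practice: POST the record to a cloud API
                sent += 1
            return sent

    node = EdgeNode()
    print(node.handle({"anomaly_score": 0.93}))  # decided locally: "alert"
    print(node.sync(cloud_reachable=False))      # offline: nothing lost, 0 sent
    print(node.sync(cloud_reachable=True))       # reconnected: buffer drains, 1 sent

The cloud still receives everything eventually, for the fleet-wide trend analysis and model training described earlier, but no local decision ever waits on it.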
We have come a long way since the first wave of edge technologies. Thanks to those advances, businesses are now gaining a more complete view of their operations. Today's edge technologies not only help businesses increase profits; they also help reduce risk and improve products, services, and customer experiences.

