


Why edge computing and artificial intelligence strategies must complement each other
Many enterprises have begun exploring edge computing because it pushes computing power closer to data sources and end users. At the same time, they may be exploring or implementing artificial intelligence (AI) and machine learning (ML), having recognized the ability to automate discovery and gain data-driven insights. But if you don't proactively combine your edge and AI strategies, you'll miss out on their transformative potential.
Where edge meets artificial intelligence
There are clear signs that edge computing and data analytics are converging. Industry forecasts suggest that by 2025, edge data creation will grow by 33%, accounting for more than one-fifth of all data, and that by 2023, data and analytics professionals will focus more than 50% of their effort on creating and analyzing edge data. Surveyed organizations report that edge solutions are very or extremely important to achieving their mission, and 78% of leaders believe the edge will have its greatest impact on AI and ML.
Traditionally, enterprises have had to transport remote data to data centers or commercial clouds for analysis and value extraction. This can be challenging in edge environments due to growing data volumes, limited or nonexistent network access, and the increasing need for real-time decision-making.
But today, the wider availability of small-form-factor chipsets, high-density compute and storage, and mesh networking technologies has laid the foundation for enterprises to deploy AI workloads closer to where data is produced.
Getting Started with Edge Artificial Intelligence
To enable edge AI use cases, identify where near-real-time decisions on data can significantly enhance the user experience and advance mission goals. We are seeing a growing number of edge use cases focused on next-generation flyaway kits that support law enforcement, cybersecurity, and health investigations. Where investigators once collected data for later processing, the new deployable kits include advanced tools for processing and exploring data in the field.
Next, determine how much edge data actually needs to be transmitted. If the data can be processed at the remote location, only the results need to be transferred. By moving only a small portion of your data, you free up bandwidth, reduce costs, and make decisions faster.
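The process-locally, transmit-only-results pattern can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the sensor readings, the anomaly limit, and the summary fields are all hypothetical.

```python
import json


def summarize_readings(readings, limit=20.0):
    """Process raw sensor data at the edge; return only a compact summary.

    Instead of shipping every reading back to the core, we transmit
    summary statistics plus any readings above a (hypothetical)
    operating limit -- a tiny fraction of the raw data volume.
    """
    mean = sum(readings) / len(readings)
    anomalies = [r for r in readings if r > limit]
    return json.dumps({
        "count": len(readings),
        "mean": round(mean, 3),
        "anomalies": anomalies,
    })


# Thousands of raw readings stay at the edge; only this summary is sent.
payload = summarize_readings([10.1, 9.9, 10.0, 10.2, 9.8, 55.0])
```

Here a six-element batch collapses to a short JSON payload; at realistic sensor rates the bandwidth saving is orders of magnitude.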
Leverage loosely coupled edge components to achieve the necessary computing power. A single sensor may not be able to perform processing on its own, but high-speed mesh networks allow nodes to be connected so that some handle data collection while others handle processing. ML models can even be retrained at the edge to maintain prediction accuracy over time.
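Retraining at the edge can be as lightweight as online (incremental) learning, where a model updates its parameters as new observations stream in rather than being retrained from scratch in a central cluster. A minimal sketch in pure Python; the model, learning rate, and data are illustrative assumptions, not taken from the article:

```python
class OnlineLinearModel:
    """Tiny linear model (y = w*x + b) updated one sample at a time via SGD.

    Incremental updates like this let an edge node keep predictions
    accurate as local conditions drift, without shipping raw data back
    to a central training cluster.
    """

    def __init__(self, lr=0.05):
        self.w = 0.0
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, y):
        # One step of stochastic gradient descent on squared error.
        error = self.predict(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error


model = OnlineLinearModel()
# New observations stream in at the edge; the model adapts in place.
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 200:
    model.update(x, y)
```

Real deployments would use a proper framework (for example, a library with incremental-fit support), but the principle is the same: small, frequent updates performed where the data lives.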
Infrastructure as Code for Remote AI
A best practice for edge AI is infrastructure as code (IaC), which allows network and security configurations to be managed through configuration files rather than through physical hardware. With IaC, configuration files capture the infrastructure specification, making configurations easier to change and distribute and ensuring environments are provisioned consistently.
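As a minimal illustration of the idea (not any specific IaC tool's syntax), an infrastructure definition can be an ordinary data file that a provisioning step reads and applies idempotently. The node spec, field names, and action strings below are all hypothetical:

```python
import json

# A declarative spec for an edge node, versioned in source control
# alongside application code. Changing the environment means changing
# this file, not logging into hardware.
EDGE_NODE_SPEC = json.loads("""
{
  "hostname": "edge-node-01",
  "open_ports": [443, 8883],
  "services": ["telemetry-collector", "inference-runtime"]
}
""")


def provision(spec, current_state):
    """Return the actions needed to converge `current_state` to `spec`.

    Idempotent: a node that already matches the spec produces no
    actions, so the same config can be applied repeatedly and to any
    number of identically specified nodes.
    """
    actions = []
    for port in spec["open_ports"]:
        if port not in current_state.get("open_ports", []):
            actions.append(f"open port {port}")
    for svc in spec["services"]:
        if svc not in current_state.get("services", []):
            actions.append(f"start service {svc}")
    return actions


# A freshly imaged node needs everything; a compliant node needs nothing.
plan = provision(EDGE_NODE_SPEC, {"open_ports": [443], "services": []})
```

Tools such as Terraform or Ansible implement this converge-to-spec loop for real infrastructure; the point is that the desired state lives in a reviewable, distributable file.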
Also consider packaging workloads as microservices running in containers, and leveraging DevOps capabilities such as CI/CD pipelines and GitOps to automate the iterative deployment of ML models into production environments at the edge. This gives you the flexibility to write code once and run it anywhere.
Seek to use consistent technologies and tools at both the edge and the core. This avoids the need for specialized expertise, prevents one-off issues, and makes it easier to scale.
Edge Artificial Intelligence in the Real World and Beyond
Everyone from the military to law enforcement to agencies managing critical infrastructure is executing AI at the edge. One notable example is the International Space Station (ISS).
The International Space Station includes an onboard laboratory for conducting research and running experiments. In one example, scientists focused on sequencing the genomes of microorganisms discovered on the station. Genome sequencing generates vast amounts of data, but scientists only need to analyze a portion of it.
In the past, the ISS transmitted all data to ground stations for centralized processing, often many terabytes per sequencing run. At traditional transmission rates, the data could take weeks to reach scientists on Earth. But by harnessing the combined power of edge computing and artificial intelligence, the research is now performed directly on the station, with only the results transmitted to the ground. Analysis can now be completed the same day.
The system is easy to manage in an environment where space and power are limited. Software updates are pushed to the edge as needed, and ML model training is performed on-site. The system is also flexible enough to handle other types of ML-based analysis in the future.
Combining artificial intelligence and edge computing lets enterprises perform analytics anywhere. With a common framework spanning core to edge, AI can be deployed and scaled at remote locations. By placing analytics close to where data is generated and where users interact, decisions can be made faster, services can be delivered sooner, and missions can be carried out wherever they are needed.
The above is the detailed content of Why edge computing and artificial intelligence strategies must complement each other. For more information, please follow other related articles on the PHP Chinese website!

