


How Artificial Intelligence is Bringing New Everyday Work to Data Center Teams
Artificial intelligence has gone from distant imagination to near-term imperative, driven by breakthrough use cases in generating text, art, and video. It is affecting the way people think about every field, and data center networking is certainly not immune. But what might artificial intelligence mean in the data center? How will people get started?
While researchers may unlock some algorithmic approaches to network control, this does not appear to be the primary use case for artificial intelligence in data centers. The simple fact is that data center connectivity is largely a solved problem.
In hyperscale environments, bespoke features and micro-optimizations may deliver real benefits, but for the broader market they are rarely necessary. If they were critical, the move to the cloud would have been gated by the emergence of tailor-made networking solutions; that simply has not happened.
If AI is to make a lasting impression, it will have to be operational. The battleground will be network operations: the workflows and activities required to run the network. Set against the industry's 15-year ambition around automation, this makes a lot of sense. Can AI provide the technology push needed to finally move the industry from dreaming about operational advantages to actively leveraging automated, even semi-autonomous, operations?
Deterministic or random?
It seems possible, but the answer is nuanced. At a macro level, data center operations fall into two broad classes of behavior: deterministic workflows that lead to known results, and probabilistic ones whose outcomes are uncertain.
For deterministic workflows, AI is more than overkill; it is unnecessary. With a known architecture, the configuration required to drive a device does not need an AI engine. It needs translation from an architectural blueprint into device-specific syntax.
Configuration can be fully predetermined even in the most complex cases (multi-vendor architectures with varying sizing requirements). There might be nested logic to handle changes in device type or vendor configuration, but nested logic would hardly qualify as artificial intelligence.
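As a concrete illustration of how deterministic this translation is, here is a minimal Python sketch that renders abstract blueprint entries into two different vendor syntaxes. The blueprint fields and vendor templates are hypothetical stand-ins, not any particular product's schema; the point is that a lookup plus a template, not an AI engine, does the work.

```python
# Minimal sketch of deterministic blueprint-to-config translation.
# The blueprint schema and vendor templates below are hypothetical,
# illustrative examples, not any particular vendor's actual syntax.

VENDOR_TEMPLATES = {
    "vendor_a": "interface {iface}\n description {desc}\n mtu {mtu}",
    "vendor_b": "set interfaces {iface} description \"{desc}\" mtu {mtu}",
}

def render_interface(vendor: str, iface: str, desc: str, mtu: int = 9100) -> str:
    """Translate one blueprint entry into device-specific syntax."""
    template = VENDOR_TEMPLATES[vendor]  # a nested lookup, not AI
    return template.format(iface=iface, desc=desc, mtu=mtu)

blueprint = [
    {"vendor": "vendor_a", "iface": "Ethernet1", "desc": "spine1-uplink"},
    {"vendor": "vendor_b", "iface": "et-0/0/1", "desc": "spine2-uplink"},
]

for entry in blueprint:
    print(render_interface(**entry))
    print("---")
```

Even when the nested logic grows to cover more device types and vendors, it remains a lookup-and-template exercise rather than anything resembling intelligence.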
But even outside of configuration, many day-two operational tasks don’t require artificial intelligence. Take one of the more common use cases that marketers have been attaching the AI label to for years: resource thresholding. The pitch is that AI can determine when thresholds for metrics such as CPU or memory usage are exceeded and then take some remedial action.
Thresholding is not that complicated. Mathematicians and AI purists might point out that linear regression is not really intelligence. It is fairly rough logic based on trend lines, and, importantly, it was showing up in production settings long before artificial intelligence became a fashionable term.
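To make that point concrete, here is a minimal sketch of that kind of trend-line thresholding: an ordinary least-squares fit over recent CPU samples, projected forward to estimate how many collection intervals remain before an 80% threshold is crossed. The sample values and the threshold are illustrative assumptions only.

```python
# Minimal sketch of trend-line thresholding: a least-squares fit over recent
# CPU samples, projected forward to estimate when a threshold will be crossed.
# Sample data and the 80% threshold are illustrative assumptions.

def fit_trend(samples: list[float]) -> tuple[float, float]:
    """Ordinary least squares over equally spaced samples; returns (slope, intercept)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def intervals_until_breach(samples: list[float], threshold: float = 80.0) -> float | None:
    """Project the trend line forward; None means no breach is projected."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None  # flat or falling trend
    return (threshold - intercept) / slope - (len(samples) - 1)

cpu = [52.0, 55.5, 57.0, 60.5, 63.0, 66.5]   # percent utilization per interval
print(intervals_until_breach(cpu))            # roughly how many intervals until 80%
```

Nothing in that projection requires a model more sophisticated than a trend line, which is exactly the point.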
So, does this mean artificial intelligence has no role? Absolutely not. It does mean that AI is neither a requirement nor applicable to everything, but there are network workflows that can and will benefit from it. Workflows that are probabilistic rather than deterministic are the best candidates.
Troubleshooting as a Potential Candidate
There may be no better example of a probabilistic workflow than troubleshooting and root cause analysis. When a problem occurs, network operators and engineers engage in a series of activities designed to isolate the problem and, hopefully, identify its root cause.
For simple problems, the workflow may be scripted. But for anything beyond the basics, the operator applies judgment, choosing the most likely rather than a predetermined path forward: refining the search based on what is known or has just been learned, gathering more information, or making an educated guess.
Artificial intelligence can play a role here. We know this because we understand the value of experience during troubleshooting. A new hire, no matter how skilled, will usually perform worse than someone who has been around for a long time. AI can supplement, and in places stand in for, that ingrained experience, while recent advances in natural language processing (NLP) help smooth the human-machine interface.
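One way to picture AI assisting here, without claiming this is how any particular product works, is a simple probabilistic ranking of candidate root causes given observed symptoms. In the sketch below, the priors and symptom likelihoods are invented stand-ins; in a real system they would be learned from historical incident data.

```python
# Minimal sketch of probabilistic root-cause ranking: given observed symptoms,
# score candidate causes with naive-Bayes-style likelihoods. All priors and
# likelihood values here are illustrative stand-ins, not real measurements.

PRIORS = {"optic_degradation": 0.05, "bgp_misconfig": 0.03, "link_congestion": 0.10}

# P(symptom | cause); in practice estimated from historical ticket data.
LIKELIHOODS = {
    "optic_degradation": {"crc_errors": 0.9, "bgp_flap": 0.3, "high_latency": 0.4},
    "bgp_misconfig":     {"crc_errors": 0.05, "bgp_flap": 0.9, "high_latency": 0.2},
    "link_congestion":   {"crc_errors": 0.1, "bgp_flap": 0.2, "high_latency": 0.9},
}

def rank_causes(observed: set[str]) -> list[tuple[str, float]]:
    """Return candidate causes sorted by normalized posterior-style score."""
    scores = {}
    for cause, prior in PRIORS.items():
        score = prior
        for symptom in observed:
            score *= LIKELIHOODS[cause].get(symptom, 0.01)  # small default for unseen symptoms
        scores[cause] = score
    total = sum(scores.values())
    return sorted(((c, s / total) for c, s in scores.items()), key=lambda x: -x[1])

print(rank_causes({"crc_errors", "high_latency"}))
```

The output is a ranked set of hypotheses to investigate next, which mirrors how an experienced engineer narrows the search rather than following a fixed script.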
AI starts with data
The best wine starts with the best grapes. Likewise, the best AI will start with the best data. This means that well-instrumented environments will prove the most fertile ground for AI-driven operations. Hyperscalers are certainly further along the AI path than others, thanks in large part to their software expertise, but it should not be overlooked that they also place great importance on real-time data collection, through streaming telemetry and large-scale collection frameworks, when building out their data centers.
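As a rough illustration of the plumbing this implies, the sketch below normalizes incoming telemetry samples into a common record and keeps a rolling window of values per device, interface, and metric. The JSON message shape is a hypothetical stand-in for what a real collector (for example, a gNMI/OpenConfig subscription) would deliver.

```python
# Minimal sketch of a streaming-telemetry ingestion step: normalize incoming
# samples into a common record and keep rolling per-interface statistics.
# The JSON sample shape below is a hypothetical stand-in, not a real schema.

import json
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class Sample:
    device: str
    interface: str
    metric: str
    value: float
    timestamp: int

WINDOW = 60  # keep the last 60 samples per (device, interface, metric)
history: dict[tuple[str, str, str], deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(raw: str) -> Sample:
    """Parse one raw telemetry message and append it to its rolling window."""
    msg = json.loads(raw)
    sample = Sample(msg["device"], msg["interface"], msg["metric"],
                    float(msg["value"]), int(msg["timestamp"]))
    history[(sample.device, sample.interface, sample.metric)].append(sample.value)
    return sample

ingest('{"device": "leaf1", "interface": "Ethernet1", '
       '"metric": "in_octets_per_sec", "value": 1.2e9, "timestamp": 1713180000}')
print({k: list(v) for k, v in history.items()})
```

Whether the downstream consumer is a simple trend line or a trained model, this kind of normalized, continuously collected data is the prerequisite.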
Businesses that want to leverage artificial intelligence to any extent should examine their current telemetry capabilities. Fundamentally: does the existing architecture help or hinder any serious pursuit? Architects then need to build these operational requirements into the architecture evaluation process. In many enterprises, operations is an afterthought, tackled only after the equipment has cleared the purchasing department. That cannot remain the norm for any data center hoping to one day move beyond simple scripted operations.
Returning to the question of deterministic versus probabilistic, it really should not be framed as an either/or proposition. Both have a role to play. Every data center will have a set of deterministic workflows alongside opportunities to do groundbreaking things in the probabilistic space, and both benefit from data. So regardless of goals and starting points, everyone should focus on data.
Lower expectations
For most businesses, the key to success is to lower expectations. The future is sometimes defined by grand declarations, but often the grander the vision, the more out of reach it seems.
What if the next wave of progress were driven more by boring innovations than by exaggerated promises? What if reducing trouble tickets and human error were enough to get people to act? Aiming at attainable goals makes it easier to make progress, especially in an environment without enough talent to satisfy everyone's ambitious agenda. So even if the AI trend hits a trough of disillusionment in the coming years, data center operators still have an opportunity to make a meaningful difference to their businesses.