Key infrastructure challenges limiting AI's potential
Patrick Lastennet, director of marketing and business at digital real estate company Interxion, looks at the barriers to accelerating AI innovation. He believes it is important to develop a strong infrastructure strategy for AI deployment from the beginning.
Demand for artificial intelligence is growing. Businesses in every industry are exploring how AI can accelerate innovation and deliver a powerful competitive advantage. However, designing AI infrastructure is complex and overwhelming; as a result, 76% of businesses believe infrastructure is a barrier to AI success.
However, this is no excuse to slow down. As more companies actively pursue AI, those who wait will only fall further behind.
A recent survey of IT decision-makers in eight European countries found that nearly two-thirds of enterprises (62%) are currently deploying or testing AI, with a further 17% planning to use AI in 2020.
Respondents noted that many infrastructural barriers limit the large-scale deployment of AI, ranging from a lack of resources, such as funding, personnel and physical infrastructure, to unclear corporate strategies that do not take AI into account.
Because AI deployment is a slow build process for many enterprises, a huge technology gap will form between those that have entered the deployment phase and those that have not yet begun planning. Businesses unwilling to invest in AI will miss out on opportunities to gain a competitive advantage.
That’s why it’s important to have a strong infrastructure strategy for AI deployment from the beginning. Here are the key considerations.
Barriers to Success
Often, the teams leading major AI R&D efforts do so without significant initial investment from their IT departments. As a result, they end up creating shadow AI: AI infrastructure built under IT's radar, which is difficult to operate successfully and ultimately ineffective. Enterprises can avoid shadow AI by developing an infrastructure strategy specifically optimized for AI.
The survey highlighted unpredictable costs as the primary issue (cited by 21% of respondents). From the need for new investment in people and equipment, to unforeseen costs on the winding road between AI design and deployment, to rapid innovation and shifting technology requirements, AI investments can be substantial and difficult to predict. In addition, internal disconnects between IT and AI development teams can lead to low ROI if businesses fail to deploy the technology.
A lack of in-house expert staff is also a major challenge. Enterprises often need to hire specialist developers, which is costly, and it takes time for new hires to learn the business well enough to meet AI design and organizational goals.
Inadequate IT equipment also prevents companies from envisioning how artificial intelligence can be integrated into their operations. According to the survey, many enterprises are concerned that their current infrastructure is not optimized to support AI and that data centers are reaching capacity.
Barriers to the strategy phase are generally similar across industries, but specific infrastructure decisions may vary by industry. Legal or compliance requirements, such as GDPR, as well as the types of data and workflows involved, will impact AI infrastructure decisions.
The study found that 39% of enterprises across industries use the major public clouds, the majority of them manufacturers looking for flexibility and speed. Meanwhile, 29% of respondents prefer in-house solutions backed by consultants, typically financial, energy and healthcare companies that want to keep personally identifiable information (PII) under tight security and closer control.
Elements of a Successful Artificial Intelligence Infrastructure
Since many businesses are starting from scratch, it is essential to have a clear strategy from the beginning, as re-architecting later is costly in time, money and resources. To enable AI successfully at scale, businesses need to examine several aspects.
First, enterprises need to ensure they have the right infrastructure in place to support the data acquisition and aggregation required to prepare datasets for AI workloads. In particular, attention must be paid to the effectiveness and cost of collecting data from the edge or cloud devices where AI inference runs. Ideally, this should be implemented in multiple regions around the world, leveraging high-speed connections and ensuring high availability. This means enterprises need infrastructure supported by a network fabric that provides the following (a rough sizing sketch follows the list):
Proximity to AI data: AI data from field devices, offices and manufacturing facilities is carried over 5G and fixed-line core nodes into regional interconnected data centers for processing along a multi-node architecture.
Direct cloud access: provides high-performance access to hyperscale cloud environments to support hybrid deployments of AI training or inference workloads.
Geographic scale: by placing their infrastructure in multiple data centers in strategic geographic locations, enterprises can achieve low-cost data acquisition and deliver high-performance AI workloads globally.
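To make the connectivity requirement concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (site count, daily data volume, link speed, usable fraction) is an illustrative assumption rather than a number from the survey; it simply shows how to check whether a regional hub's inbound links can sustain edge data acquisition:

```python
# Rough sizing of edge-to-regional data acquisition.
# All figures are illustrative assumptions, not survey numbers.

DAILY_GB_PER_SITE = 500       # assumed raw data produced per edge site per day
SITES_PER_REGION = 40         # assumed edge sites feeding one regional data center
LINK_GBPS = 10                # assumed aggregate inbound bandwidth at the hub
USABLE_FRACTION = 0.6         # assumed share of the link available for ingest

daily_gb = DAILY_GB_PER_SITE * SITES_PER_REGION

# Convert GB/day into the sustained Gbit/s the network fabric must carry.
required_gbps = daily_gb * 8 / (24 * 3600)
usable_gbps = LINK_GBPS * USABLE_FRACTION

print(f"Daily ingest per region: {daily_gb:,} GB")
print(f"Sustained rate required: {required_gbps:.2f} Gbit/s")
print(f"Usable link capacity:    {usable_gbps:.1f} Gbit/s")
print("OK" if required_gbps <= usable_gbps else "Link undersized for ingest")
```

Under these assumptions, 40 sites producing 500 GB a day each need only about 1.9 Gbit/s sustained, comfortably within a 10 Gbit/s link; the same arithmetic quickly flags regions where the volumes or site counts make a bigger fabric necessary.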
When enterprises consider training AI/deep learning models, they must look for a data center partner that can accommodate the power and cooling technology needed to support GPU-accelerated computing over the long term. This requires:
High Rack Density: To support AI workloads, enterprises need more computing power from each rack in their data center. This means higher power density. In fact, most enterprises will need to at least triple their maximum density to support AI workloads and prepare for higher levels in the future.
Volume and Scale: The key to leveraging the benefits of AI is implementation at scale. The ability to run large fleets of GPU hardware is what delivers compute at that scale.
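The density point is easy to sanity-check with simple arithmetic. A minimal sketch, assuming a roughly 6.5 kW eight-GPU training server, four servers per rack, and a typical 8 kW legacy rack budget (all figures illustrative):

```python
# Back-of-the-envelope rack power density for GPU training.
# Server wattage, packing and legacy budget are illustrative assumptions.

SERVER_KW = 6.5          # assumed draw of one 8-GPU training server
SERVERS_PER_RACK = 4     # assumed servers packed into one rack
LEGACY_RACK_KW = 8.0     # assumed power budget of a typical legacy rack

ai_rack_kw = SERVER_KW * SERVERS_PER_RACK
print(f"AI rack density:     {ai_rack_kw:.1f} kW/rack")
print(f"Legacy rack density: {LEGACY_RACK_KW:.1f} kW/rack")
print(f"Required multiplier: {ai_rack_kw / LEGACY_RACK_KW:.1f}x")  # ~3.3x
```

Under these assumptions the required density is roughly 3.3 times the legacy budget, consistent with the "at least triple" figure above.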
A Realistic Path to Artificial Intelligence
Most on-premises enterprise data centers cannot handle this scale of data. Public clouds, meanwhile, offer the path of least resistance, but are not always the best environment for training AI models at scale or deploying them into production due to high costs or latency issues.
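One way to reason about this trade-off is a simple break-even model comparing on-demand cloud GPUs with purchased hardware in a colocation facility. The sketch below is purely illustrative: every price and utilization figure is an assumption, and it deliberately ignores staff, depreciation and network egress costs:

```python
# Illustrative cloud-vs-colocation break-even for GPU training.
# All prices and utilization figures are assumptions for this sketch.

CLOUD_USD_PER_GPU_HOUR = 3.0      # assumed on-demand price per GPU
GPUS = 64                         # assumed training cluster size
HOURS_PER_MONTH = 24 * 30 * 0.7   # assumed 70% cluster utilization

CAPEX_USD = GPUS * 25_000         # assumed purchase cost per GPU slot
COLO_USD_PER_MONTH = 20_000       # assumed power, cooling and space fees

cloud_monthly = CLOUD_USD_PER_GPU_HOUR * GPUS * HOURS_PER_MONTH
savings_monthly = cloud_monthly - COLO_USD_PER_MONTH

print(f"Cloud spend/month: ${cloud_monthly:,.0f}")
print(f"Colo fees/month:   ${COLO_USD_PER_MONTH:,.0f}")
print(f"Break-even after:  {CAPEX_USD / savings_monthly:.1f} months")
```

With these assumed numbers, owned hardware pays for itself in under two years of sustained training; at low utilization the cloud wins, which is why the right answer depends on the workload profile rather than a blanket rule.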
So what is the best approach for enterprises that want to design infrastructure to support AI workloads? Important lessons can be learned by examining how enterprises already gaining value from AI choose to deploy their infrastructure.
Hyperscale enterprises such as Google, Amazon, Facebook, and Microsoft have successfully deployed AI at scale using their own core and edge infrastructure, often deployed in highly connected, high-quality data centers. They use colocation heavily around the world because they know it can support the scale, density and connectivity they need.
By leveraging the knowledge and experience of these AI leaders, businesses will be able to chart their own destiny in AI.