Patrick Lastennet, director of marketing and business at digital real estate company Interxion, looks at the barriers to accelerating AI innovation. He believes it is important to develop a strong infrastructure strategy for AI deployment from the beginning.
Demand for artificial intelligence is growing. Businesses in every industry are exploring how to use AI to accelerate innovation and gain a powerful competitive advantage. However, designing AI infrastructure is complex and overwhelming; as a result, 76% of businesses see infrastructure as a barrier to AI success.
However, this is no excuse to slow down. With so many companies actively pursuing AI, those who wait will only fall further behind.
A recent survey of IT decision-makers in eight European countries found that nearly two-thirds of enterprises (62%) are currently deploying or testing AI, with a further 17% planning to use AI in 2020.
Respondents noted that many infrastructural barriers limit the large-scale deployment of AI, ranging from a lack of resources, such as funding, personnel and physical infrastructure, to unclear corporate strategies that do not take AI into account.
Because AI deployment is a slow build process for many enterprises, a huge technology gap will form between those that have entered the deployment phase and those that have not yet begun planning. Businesses unwilling to invest in AI will miss out on opportunities to gain a competitive advantage.
That’s why it’s important to have a strong infrastructure strategy for AI deployment from the beginning. Here are some questions to consider.
Often, companies leading major AI R&D efforts do so without significant initial involvement from their IT departments. As a result, teams end up creating shadow AI: AI infrastructure built under IT's radar, which is difficult to operate successfully and ultimately ineffective. Enterprises can avoid shadow AI by developing an infrastructure strategy specifically optimized for AI.
The survey highlighted unpredictable costs as the primary issue (cited by 21% of respondents). From the need for new investments in people and equipment, to unforeseen costs on the winding road between AI design and deployment, to rapid innovation and shifts in technology requirements, AI investments can be substantial and difficult to predict. Additionally, internal disconnects between IT and AI development teams can lead to low ROI if the business fails to deploy the technology.
The lack of in-house expert staff is also a major challenge. Enterprises often need to hire specialist developers, which can be costly, and it takes time for new hires to learn the business well enough to meet AI design and organizational goals.
Inadequate IT equipment also prevents companies from envisioning how artificial intelligence can be integrated into their operations. According to the survey, many enterprises are concerned that their current infrastructure is not optimized to support AI and that data centers are reaching capacity.
Barriers to the strategy phase are generally similar across industries, but specific infrastructure decisions may vary by industry. Legal or compliance requirements, such as GDPR, as well as the types of data and workflows involved, will impact AI infrastructure decisions.
The study found that 39% of enterprises across industries use major public clouds, the majority of them manufacturers looking for flexibility and speed. Meanwhile, 29% of respondents prefer in-house solutions backed by consultants, typically financial, energy and healthcare companies that want to keep their personally identifiable information (PII) under tight security and better control.
Since many businesses are starting from scratch, it is essential to have a clear strategy from the beginning, as re-architecting later can be costly in time, money and resources. To successfully enable AI at scale, businesses need to examine several aspects.
First, enterprises need to ensure they have the right infrastructure in place to support the data acquisition and collection required to prepare datasets for AI workloads. In particular, attention must be paid to the effectiveness and cost of collecting data from the edge or cloud devices where AI inference runs. Ideally, this should be implemented in multiple regions around the world, while leveraging high-speed connections and ensuring high availability. This means enterprises need infrastructure supported by a network fabric that provides:
Proximity to AI data: AI data from field devices, offices and manufacturing facilities is carried over 5G and fixed-line core nodes into regional interconnected data centers for processing along a multi-node architecture.
Direct Cloud Access: Provides high-performance access to cloud hyperscale environments to support hybrid deployments of AI training or inference workloads.
Geographic scale: By placing their infrastructure in multiple data centers located in strategic geographic areas, enterprises can achieve low-cost data acquisition and deliver high-performance AI workloads globally.
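To make the proximity and connectivity points above concrete, here is a minimal back-of-the-envelope sketch, using entirely hypothetical figures (dataset size, link speeds and utilization are illustrative assumptions, not from the survey), of how link bandwidth between edge sites and a regional data center dominates the time to land training data:

```python
# Illustrative sketch: time to move edge-collected training data into a
# regional data center. All figures are hypothetical assumptions.

def transfer_hours(dataset_gb: float, link_gbps: float,
                   utilization: float = 0.7) -> float:
    """Hours to move dataset_gb over a link of link_gbps at the given
    average utilization (links are rarely saturated end to end)."""
    effective_gbps = link_gbps * utilization
    seconds = (dataset_gb * 8) / effective_gbps  # gigabytes -> gigabits
    return seconds / 3600

# A hypothetical 10 TB daily batch from factory sensors, over a 1 Gbps WAN
# versus a 10 Gbps direct interconnect inside a colocation campus:
wan = transfer_hours(10_000, 1)
interconnect = transfer_hours(10_000, 10)
print(f"1 Gbps WAN:           {wan:.1f} h")
print(f"10 Gbps interconnect: {interconnect:.1f} h")
```

At these assumed numbers the WAN path takes over a day for a daily batch, which is exactly the kind of bottleneck that proximity and direct interconnection are meant to remove.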
When enterprises consider training AI/deep learning models, they must consider a data center partner that can accommodate the necessary power and cooling technology to support GPU accelerated computing in the long term, which requires:
High Rack Density: To support AI workloads, enterprises need more computing power from each rack in their data center. This means higher power density. In fact, most enterprises will need to at least triple their maximum density to support AI workloads and prepare for higher levels in the future.
Volume and Scale: The key to leveraging the benefits of AI is implementation at scale. Running on large fleets of GPU hardware unlocks large-scale compute.
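The rack-density point can be sanity-checked with simple arithmetic. The sketch below uses hypothetical hardware figures (GPU wattage, servers per rack and the legacy 6 kW design point are illustrative assumptions, not from the article) to show why a GPU training rack roughly triples a typical enterprise power budget:

```python
# Illustrative sketch with hypothetical figures: per-rack power draw of a
# GPU training rack versus a typical legacy enterprise rack budget.

def rack_kw(servers_per_rack: int, gpus_per_server: int,
            gpu_watts: int, server_overhead_watts: int) -> float:
    """Approximate per-rack power draw in kW (GPUs plus CPU/fan/PSU overhead)."""
    per_server = gpus_per_server * gpu_watts + server_overhead_watts
    return servers_per_rack * per_server / 1000

typical_enterprise_rack_kw = 6  # assumed legacy design point
gpu_rack = rack_kw(servers_per_rack=4, gpus_per_server=8,
                   gpu_watts=400, server_overhead_watts=1500)

print(f"GPU training rack: {gpu_rack:.1f} kW")
print(f"Multiple of a {typical_enterprise_rack_kw} kW enterprise rack: "
      f"{gpu_rack / typical_enterprise_rack_kw:.1f}x")
```

Under these assumptions a modest four-server GPU rack draws 18.8 kW, roughly 3x the assumed legacy rack, which is consistent with the tripling the survey describes.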
Most on-premises enterprise data centers cannot handle this scale of data. Public clouds, meanwhile, offer the path of least resistance, but are not always the best environment for training AI models at scale or deploying them into production due to high costs or latency issues.
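The cost side of that cloud-versus-owned trade-off comes down to a break-even calculation. The sketch below is hypothetical (GPU-hour price, cluster capex, colocation fee and utilization are all illustrative assumptions): at high sustained utilization, buying hardware and colocating it eventually undercuts pay-as-you-go cloud rental:

```python
# Illustrative break-even sketch; all prices and utilization figures are
# hypothetical assumptions, not quoted rates.

def breakeven_months(cloud_rate_per_gpu_hour: float, num_gpus: int,
                     utilization: float, capex: float,
                     colo_monthly: float) -> float:
    """Months until cumulative cloud spend exceeds hardware capex plus
    colocation opex. utilization = fraction of each month GPUs are busy."""
    hours_per_month = 730
    cloud_monthly = cloud_rate_per_gpu_hour * num_gpus * hours_per_month * utilization
    if cloud_monthly <= colo_monthly:
        return float("inf")  # cloud stays cheaper at this utilization
    return capex / (cloud_monthly - colo_monthly)

# 16 GPUs at an assumed $2.50/GPU-hour, 60% busy, versus an assumed
# $400k owned cluster plus $4k/month colocation:
months = breakeven_months(2.50, 16, 0.60, 400_000, 4_000)
print(f"Break-even after {months:.1f} months")
```

The direction of the result depends heavily on utilization: at low utilization the cloud's elasticity wins, while sustained training workloads favor owned, colocated hardware, which is the trade-off the paragraph above describes.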
So what’s the best approach for enterprises that want to design infrastructure to support AI workloads? By examining how enterprises that are already gaining value from AI are choosing to deploy their infrastructure, important lessons can be learned.
Hyperscale enterprises such as Google, Amazon, Facebook, and Microsoft have successfully deployed AI at scale using their own core and edge infrastructure, often deployed in highly connected, high-quality data centers. They use colocation heavily around the world because they know it can support the scale, density and connectivity they need.
By leveraging the knowledge and experience of these AI leaders, businesses will be able to chart their own destiny in AI.