


Shenzhen: By 2025, general computing power will reach 14 EFLOPS, intelligent computing power 25 EFLOPS, and supercomputing power 2 EFLOPS
The Shenzhen Municipal Bureau of Industry and Information Technology released the "Shenzhen Municipal Computing Infrastructure High-Quality Development Action Plan (2024-2025)" on December 5.
Overall goal
By 2025, the city will have basically formed a scientifically sound spatial layout, with scale and capacity matching the needs of pioneer-city construction, and with computing power, network carrying capacity, storage capacity, and application enablement suited to the high-quality development of the digital economy. Shenzhen will deploy advanced computing infrastructure with markedly improved levels of green, low-carbon operation and independent controllability; build a diversified computing supply system in which general, intelligent, super, and edge computing develop in coordination; and become a national benchmark computing-and-network city featuring "diversified supply, strong computing enablement, ubiquitous connectivity, and secure integration."
Overall layout. Build advanced computing infrastructure and continue to optimize network connection facilities. By 2025, the city's data centers will reach 500,000 standard racks, with significantly improved computing power and computing efficiency.
Technology system. The city will have basically formed a computing infrastructure technology system featuring diverse, ubiquitous computing power; secure, reliable storage; high-quality network interconnection; and coordinated build-out of computing, storage, and transmission. By 2025, general computing power will reach 14 EFLOPS (FP32), intelligent computing power 25 EFLOPS (FP16), and supercomputing power 2 EFLOPS (FP64). Total storage capacity will reach 90 EB, with advanced storage accounting for more than 30%, and disaster-recovery coverage of core data and important data in key industries reaching 100%. Latency between data centers within the city will not exceed 1 ms, latency to the national hub node in Shaoguan will not exceed 3 ms, and latency to the national hub node in Gui'an will not exceed 10 ms.
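For readers unfamiliar with the units, the plan's 2025 capacity targets can be restated in raw floating-point operations per second (1 EFLOPS = 10^18 FLOP/s). The figures below are taken directly from the plan; the script itself is only an illustrative unit conversion:

```python
# Convert the plan's 2025 computing-power targets into FLOP/s.
# 1 EFLOPS (exaFLOPS) = 1e18 floating-point operations per second.
EFLOPS = 1e18

targets_2025 = {
    "general (FP32)": 14 * EFLOPS,
    "intelligent (FP16)": 25 * EFLOPS,
    "supercomputing (FP64)": 2 * EFLOPS,
}

for name, flops in targets_2025.items():
    print(f"{name}: {flops:.1e} FLOP/s")
```

Note that the three figures are quoted at different precisions (FP32, FP16, FP64), so they measure different workloads and cannot simply be summed into one headline number.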
Green and secure. Strengthen green and secure development. By 2025, the power usage effectiveness (PUE) of new data centers in the city will be reduced to below 1.25, and green, low-carbon ratings will reach 4A or above. Upgrading and retrofitting of older, smaller data centers will begin. Security management and capability building for network, data, and computing facilities will be strengthened to form a complete security system.
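PUE is the standard efficiency metric behind the 1.25 target: the ratio of a facility's total energy draw to the energy consumed by its IT equipment alone, with 1.0 as the theoretical ideal. A minimal sketch (the energy readings below are hypothetical, not from the plan):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by
    IT-equipment energy over the same measurement period."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: a data center drawing 1,250 MWh overall while
# its IT load consumes 1,000 MWh sits exactly at the plan's 1.25 cap.
print(pue(1250.0, 1000.0))  # 1.25
```

In other words, at a PUE of 1.25 at most 20% of a facility's total energy goes to cooling, power conversion, and other overhead rather than to computation.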

The Ministry of Industry and Information Technology has previously stated that China's computing industry reached a scale of 2.6 trillion yuan in 2022, with cumulative shipments of more than 20.91 million general-purpose servers and 820,000 AI servers. The number of valid domestic invention patents in computing technology ranks first among all industries.
According to the "China Computing Power Development Index White Paper (2023)" released by the China Academy of Information and Communications Technology, the diversification of China's computing power continues to advance. Global computing power is expected to exceed 3 ZFLOPS by 2025 (note: 1 ZFLOPS is 10^21 floating-point operations per second) and 20 ZFLOPS by 2030.
Zhao Zhiguo, chief engineer of the Ministry of Industry and Information Technology, said that advanced computing, represented by heterogeneous computing, intelligent computing, and quantum computing, has evolved to a critical stage of qualitative change, and that the computing industry shows strong vitality and immeasurable potential.
Related reading:
"Huawei's rotating chairman says not to have illusions: We should unswervingly build a computing industry ecosystem"

