What is the origin of big data?
Big data originated with the Internet. The big data industry is the information services industry that collects large volumes of data from channels such as the Internet, the Internet of Things, and cloud computing, then stores that data, extracts value from it, and processes and distributes it intelligently.
With the advance of intelligent manufacturing, artificial intelligence has also developed rapidly, and big data's biggest application is in AI: big data emphasizes correlation rather than causality. By analyzing a series of data, we can determine whether variables move together, and those correlations in turn drive intelligent manufacturing forward. This article walks through the ins and outs of "big data".
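To make the "correlation, not causation" point concrete, here is a minimal sketch that computes the Pearson correlation coefficient between two series, the same basic measure a big data pipeline applies at far larger scale. The production-line readings are invented for the example:

```python
import math

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance numerator and the two spread (standard-deviation) terms.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical readings: machine temperature vs. defect rate on one line.
temperature = [61.0, 63.5, 66.0, 70.2, 74.8, 79.1]
defect_rate = [0.8, 0.9, 1.1, 1.6, 2.2, 2.9]

print(f"correlation: {pearson(temperature, defect_rate):.3f}")
```

A coefficient near 1.0 flags a relationship worth acting on; explaining why it holds is a separate question, which is exactly the distinction big data draws.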
The concept of big data originated in the United States and was developed through initiatives by companies such as Cisco, VMware, Oracle, and IBM. Starting around 2009, "big data" became a buzzword in the Internet information technology industry. Strictly speaking, the big data industry is the information services industry described above: it collects large volumes of data from channels such as the Internet, the Internet of Things, and cloud computing, then stores that data, extracts value from it, and processes and distributes it intelligently. Most big data companies aim to let every user draw actionable insights from virtually any data, including insights previously hidden in unstructured data.
The first institution to declare that "the era of big data has arrived" was the consulting firm McKinsey. In a 2011 research report, "Big Data: The Next Frontier for Innovation, Competition, and Productivity," McKinsey observed that data had penetrated every industry and business function and was gradually becoming an important factor of production, and that the use of massive data would herald a new wave of productivity growth and consumer surplus.
Big data is an evolving concept. Its current rise stems from major changes in both IT technology and the accumulation of data. In just a few years, big data evolved from jargon used by executives of large Internet companies into a major technical proposition shaping our future digital lives. In 2012, the United Nations published a big data white paper, "Big Data for Development: Challenges and Opportunities"; multinational IT giants such as EMC, IBM, and Oracle released big data strategies and products; almost every world-class Internet company extended its business into the big data industry; big data's presence was felt everywhere, from social-platform competition to e-commerce price wars to portal rivalry; and the US government invested US$200 million to launch its "Big Data Research and Development Initiative," elevating big data to the level of national strategy. By 2013, big data was transforming from a technical buzzword into a social wave touching every aspect of life.
About the origin of the concept of "big data"
1. The name "big data" comes from "The Third Wave" by the futurist Alvin Toffler
Although the term "big data" did not receive much attention until recently, as early as 1980 the famous futurist Alvin Toffler enthusiastically praised "big data" in his book "The Third Wave," calling it "the cadenza of the third wave." In September 2008, the journal "Nature" ran a cover feature titled "Big Data." Since 2009, "big data" has been a buzzword in the Internet technology industry.
2. The earliest application of "big data" came from McKinsey
The idea of collecting and analyzing "big data" came from the world-famous management consulting firm McKinsey & Company. McKinsey saw the potential commercial value of the massive personal information recorded on various online platforms, invested substantial manpower and resources in research, and in June 2011 released a report on "big data" that analyzed in detail its impact, key technologies, and application areas. The report drew great attention from the financial community and then, gradually, from all walks of life.
3. The characteristics of "big data" were set out by Viktor Mayer-Schönberger and Kenneth Cukier in "The Era of Big Data"
In "The Era of Big Data," Viktor Mayer-Schönberger and Kenneth Cukier propose the 4V characteristics of "big data": Volume (large amounts of data), Velocity (fast input and processing), Variety (diverse data types and sources), and Value (low value density). These characteristics are widely accepted, and articles discussing the characteristics of "big data" almost always cite these four.
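As a loose, hedged illustration of the 4Vs (the record format and filter threshold below are invented for the example), this sketch streams many mixed-type records through a generator, so nothing is held in memory, and keeps only the rare ones that matter:

```python
import random

def record_stream(n: int):
    """Yield records one at a time (Velocity) without materializing
    the whole data set in memory, however large n gets (Volume)."""
    kinds = ("click", "purchase", "sensor")  # mixed record types (Variety)
    for i in range(n):
        yield {"id": i, "kind": random.choice(kinds), "amount": random.random()}

# Value: only a tiny fraction of the raw stream is actually interesting,
# which is what "low value density" means in practice.
valuable = [r for r in record_stream(1_000_000)
            if r["kind"] == "purchase" and r["amount"] > 0.999]

print(f"kept {len(valuable)} of 1,000,000 records")
```

Running it keeps only a few hundred records out of a million, a concrete picture of low value density.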
4. Only after the emergence of cloud computing did "big data" reveal its true value
Since the emergence of cloud computing servers, "big data" has had a track to run on and a way to realize its true value. Some people vividly compare the various "big data" applications to "cars," with cloud computing as the "highway" those cars run on. The most famous example is the Google search engine. Facing massive amounts of Web data, Google first proposed the concept of cloud computing in 2006, and what supports Google's internal "big data" applications are the cloud computing servers Google developed itself.
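For a flavor of the processing model behind that infrastructure, here is a minimal single-machine sketch of MapReduce-style word counting. MapReduce is the programming model Google described publicly in 2004; this toy mimics only its map/shuffle/reduce phases and is an illustrative assumption, not Google's actual implementation:

```python
from collections import defaultdict

# Toy corpus standing in for "massive Web data".
documents = [
    "big data emphasizes correlation",
    "cloud computing supports big data",
    "big data runs on the cloud",
]

def map_phase(doc: str) -> list[tuple[str, int]]:
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs: list[tuple[str, int]]) -> dict[str, list[int]]:
    """Shuffle: group the emitted counts by word."""
    groups: dict[str, list[int]] = defaultdict(list)
    for word, count in pairs:
        groups[word].append(count)
    return groups

def reduce_phase(groups: dict[str, list[int]]) -> dict[str, int]:
    """Reduce: sum the grouped counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# On a real cluster the map and reduce tasks run in parallel across many
# servers; here everything runs sequentially on one machine.
all_pairs = [pair for doc in documents for pair in map_phase(doc)]
word_counts = reduce_phase(shuffle(all_pairs))
print(word_counts)
```

The cluster version distributes exactly these three phases across thousands of machines, which is why cloud computing is the "highway" that lets big data applications run.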