


The most detailed 3D map of the human brain is published in Science! GPT-4's parameters amount to only 0.2% of the human brain's synapses
A piece of human brain tissue the size of a sesame seed holds roughly as many synapses as GPT-4 has parameters!
Google and Harvard teamed up to conduct nanoscale modeling of a partial human brain, and the paper has been published in Science.
This is the largest and most detailed replica of the human brain to date, showing for the first time the network of synaptic connections in the brain.
With ultra-high resolution, the reconstruction, called H01, has revealed some previously unseen details about the human brain.
The project's corresponding author, Professor Jeff Lichtman of Harvard University, said that no one had ever really seen a synaptic network of this complexity before.
This modeling result will help to gain an in-depth understanding of the workings of the brain and inspire further research on brain functions and diseases.
It is also worth mentioning that this study involved 1 cubic millimeter of human brain tissue, but the amount of data generated was as high as 1.4PB.
According to estimates based on human brain volume, modeling the entire brain at this resolution would generate about 1.76 ZB of data, while the most advanced supercomputer today has a storage capacity of only about 0.0007 ZB, less than 0.04% of a single snapshot of a single human brain.
Even the combined storage of every server on the internet could hold only about nine human brains.
At the same time, 1 cubic millimeter of brain tissue contains 57,000 cells and 150 million synapses, and the number of synapses in the entire brain is on the order of a quadrillion (10^15).
By contrast, GPT-4 is rumored to have about 2 trillion parameters, only 0.2% of the human brain's synapse count; by that measure, a GPT-4 placed inside the brain would be roughly the size of a sesame seed.
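These comparisons are simple arithmetic and can be checked in a few lines. This is a back-of-envelope sketch: the whole-brain volume of ~1.26 million mm³ and the 2-trillion-parameter figure for GPT-4 are the assumptions circulating in the text, not measured values.

```python
PB = 1e15   # bytes in a petabyte
ZB = 1e21   # bytes in a zettabyte

# Scale the reported 1.4 PB per cubic millimeter to a whole brain
# (assumed volume ~1.26e6 mm^3 reproduces the ~1.76 ZB figure).
data_per_mm3 = 1.4 * PB
brain_volume_mm3 = 1.26e6
whole_brain_zb = data_per_mm3 * brain_volume_mm3 / ZB
print(f"Whole-brain data: ~{whole_brain_zb:.2f} ZB")   # ~1.76 ZB

# Rumored GPT-4 parameter count vs. ~1 quadrillion synapses.
gpt4_params = 2e12
brain_synapses = 1e15
print(f"GPT-4 / synapses: {gpt4_params / brain_synapses:.1%}")  # 0.2%
```

The 0.0007 ZB supercomputer figure then gives 0.0007 / 1.76 ≈ 0.04% of one brain snapshot.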
Some people lamented that AGI may be far away again...
Nanoscale modeling brings new discoveries
Specifically, the researchers obtained a sample of temporal lobe cortex tissue from a 45-year-old female epilepsy patient, which was approximately 1 cubic millimeter in size.
After the sample was rapidly fixed, stained and embedded in resin, the researchers used an ultramicrotome with an automated collection device to cut 5,019 serial sections, each approximately 33.9 nanometers thick.
The researchers then imaged each section with a multi-beam scanning electron microscope at a resolution of 4×4 nanometers per pixel, producing raw two-dimensional image data totaling approximately 1.4 PB.
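Those acquisition numbers can be sanity-checked with a rough pixel count. This sketch assumes a 1 mm × 1 mm imaged face and one byte per pixel, and ignores tile overlap and acquisition overhead, which presumably account for the gap to the reported 1.4 PB raw total:

```python
# Rough pixel count for 5,019 slices of a 1 mm x 1 mm face at 4 nm/pixel.
nm_per_mm = 1e6
px_per_edge = nm_per_mm / 4            # 250,000 pixels per 1 mm edge
px_per_slice = px_per_edge ** 2        # ~6.25e10 pixels per slice
n_slices = 5019
total_px = px_per_slice * n_slices     # ~3.1e14 pixels
print(f"~{total_px:.2e} pixels -> ~{total_px / 1e15:.2f} PB at 1 byte/pixel")
```

That lower bound of ~0.3 PB is the same order of magnitude as the reported raw total.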
Next, the researchers used computational tools to stitch and align these two-dimensional images into a three-dimensional voxel volume.
After that, they used a machine-learning algorithm called flood-filling networks (FFN) to segment neuron morphology across the entire volume, manually corrected segmentation errors, and finally reconstructed the three-dimensional shapes of all the cells, synapses, blood vessels and other structures in this 1 cubic millimeter of brain tissue.
FFN was proposed by Google in 2018. The basic idea is to start from a seed point and recursively expand outward, marking every voxel connected to it, until the fill reaches background or the boundary of another object.
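Stripped of the learned component, the flood-fill idea can be sketched on a toy 2D image. In the real FFN, a convolutional network predicts for each candidate voxel whether it belongs to the seeded object; in this illustrative sketch (not the paper's code), a plain intensity threshold stands in for that prediction:

```python
from collections import deque

def flood_fill(volume, seed, threshold=0.5):
    """Toy stand-in for FFN: grow a segment outward from `seed`,
    keeping connected pixels whose value exceeds `threshold`.
    In a real flood-filling network, a CNN replaces this test."""
    h, w = len(volume), len(volume[0])
    segment = set()
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in segment:
            continue
        if not (0 <= y < h and 0 <= x < w):
            continue
        if volume[y][x] <= threshold:   # background or another object
            continue
        segment.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            queue.append((y + dy, x + dx))
    return segment

# A tiny "image": two bright blobs separated by dark background.
img = [
    [0.9, 0.9, 0.0, 0.8],
    [0.9, 0.0, 0.0, 0.8],
    [0.0, 0.0, 0.8, 0.8],
]
seg = flood_fill(img, seed=(0, 0))
print(sorted(seg))   # only the top-left blob: [(0, 0), (0, 1), (1, 0)]
```

Seeding a different blob yields a different segment, which is how one seed-per-object segmentation covers the whole volume.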
At the same time, they also used machine learning models to automatically identify synaptic locations and distinguish excitatory and inhibitory synapses.
In the end, the team successfully modeled the 1 cubic millimeter of brain tissue at nanometer resolution, capturing more than 50,000 cell nuclei and 150 million synapses, as well as 230 millimeters of blood vessels.
On this basis, by analyzing the reconstructed cell morphology, the researchers identified the main cell type composition of the brain area.
Of the 57,180 cells in total, 49,080 are neurons or glial cells and 8,100 are blood-vessel-related cells; among the former group, glial cells outnumber neurons roughly two to one.
Among neurons, 65.5% are spiny pyramidal neurons and 29.1% are interneurons with smooth dendrites; among glial cells, oligodendrocytes are the most common.
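The counts implied by these ratios can be derived directly. Note these derived numbers are my own arithmetic from the stated percentages, not figures reported in the text:

```python
# Implied cell counts from the composition figures above.
total_cells = 57_180
vascular = 8_100
neurons_and_glia = 49_080
assert vascular + neurons_and_glia == total_cells

# "Glial cells outnumber neurons roughly two to one":
neurons = neurons_and_glia / 3          # ~16,360 neurons
glia = 2 * neurons                      # ~32,720 glial cells
pyramidal = 0.655 * neurons             # spiny pyramidal neurons
interneurons = 0.291 * neurons          # smooth-dendrite interneurons
print(round(neurons), round(glia), round(pyramidal), round(interneurons))
```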
Researchers developed a machine learning model to automatically identify synapse locations and their types (excitatory/inhibitory).
This brain area contains about 150 million synapses in total, of which 111 million are excitatory and 39 million are inhibitory; the density of excitatory and inhibitory synapses also varies somewhat across cortical layers.
By analyzing the synaptic input received by each neuron, the researchers found that the vast majority (96.49%) of axons form only one synapse with a given target cell, but a few axons form multiple synapses (in some cases more than 50), establishing particularly strong connections with their targets.
Further analysis found that such multi-synaptic "strong connections" occur on both excitatory and inhibitory axons, and their number is significantly higher than would be expected if synapses formed at random.
The researchers speculate that, against a background of many random weak connections, a specific few axons may regulate a neuron's activity through these deliberately formed strong connections.
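In connectomics data, this kind of analysis amounts to grouping a synapse table by (axon, target) pair and counting rows per pair. A minimal sketch on made-up data (the table and IDs are hypothetical, not H01 values):

```python
from collections import Counter

# Hypothetical miniature synapse table: one (axon_id, target_cell_id)
# row per synapse. Pairs with count > 1 are multi-synaptic
# "strong connections".
synapses = [
    ("a1", "c1"), ("a2", "c1"), ("a3", "c2"),
    ("a4", "c2"), ("a4", "c2"), ("a4", "c2"),   # a4 -> c2: 3 synapses
]
pair_counts = Counter(synapses)
strong = {pair: n for pair, n in pair_counts.items() if n > 1}
single_fraction = (
    sum(1 for n in pair_counts.values() if n == 1) / len(pair_counts)
)
print(strong)            # {('a4', 'c2'): 3}
print(single_fraction)   # 0.75
```

On the real dataset, the single-synapse fraction comes out to the 96.49% figure above.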
In addition, the researchers also analyzed a special type of pyramidal neurons in detail.
The basal dendrites of these "triangular" or "compass" cells occur in two mirror-symmetric orientations, suggesting that the two populations may serve different functions.
However, the authors also note that the sample came from an epilepsy patient. Although no obvious pathological changes were found under a light microscope, it cannot be ruled out that long-term epilepsy or drug treatment had subtler effects on the connectivity or structure of the cortical tissue.
In other words, the generality of this model may need further verification, but at the very least it has unveiled another layer of the brain's synaptic network.
In order for people to use the modeling results to discover more mysteries, the research team has made all original data, modeling results and related tools open source.
All data and tools are open source
The team provides Neuroglancer, an online interactive data-visualization platform that other researchers can use to explore the H01 dataset at different scales.
It includes all of the original electron-microscope slice images, the segmentation of neuron morphology, synapse locations with their excitatory/inhibitory classification, and labels for the different cell types, letting users examine the dataset's structure at both micro and macro scales.
In addition to the data, the authors also open-sourced CREST, a tool for exploring synaptic connections between neurons, and CAVE, an online collaborative proofreading platform deeply integrated with Neuroglancer, to help other researchers explore and analyze this unprecedented large-scale human brain dataset from every angle.
The author said that making this result open source will provide the academic community with a physical basis for studying the structure and function of the human brain, and provide a reference for disease research.
Although H01 provides unprecedented detail, compared with the entire human brain these data are just the tip of the iceberg. Nanoscale imaging and three-dimensional reconstruction of more regions and levels of the human brain will be needed in the future, and the authors call on the academic community to work on it together.
One More Thing
The release of the H01 series of data coincides with the 10th anniversary of the establishment of Google Research’s Connectomics team.
Previously, the team also released a Drosophila brain map containing 25,000 neurons and millions of connections between them.
Last year, the team also announced a US$33 million collaboration with several universities to map the hippocampal formation of the mouse brain; this project is the focus of the team's next phase.
The H01 map was first released as a dataset and preprint in June 2021; after further optimization and a deeper analysis of synaptic characteristics, the official version of the paper was published today.
Paper address: https://www.science.org/doi/10.1126/science.adk4858
Reference links:
[1] https://research.google/blog/a-browsable-petascale-reconstruction-of-the-human-cortex/
[2] https://www.sciencealert.com/amazingly-detailed-images-reveal-a-single-cubic-millimeter-of-human-brain-in-3d
[3] https://news.ycombinator.com/item?id=40313193
