


Science announces its annual top ten scientific breakthroughs: the Webb Telescope takes the crown, with AIGC also on the list!
At last, Science magazine's biggest award of 2022 has been announced!
On December 16, Science's official website released its "Top Ten Scientific Breakthroughs of 2022", with the Webb Telescope taking the crown and featuring on the cover of the latest issue.
Science magazine gave the reason for the award:
For the technical feats of its construction and launch, as well as its enormous promise for exploring the universe, the James Webb Space Telescope was named Science magazine's 2022 Breakthrough of the Year.
In addition, other major achievements of the past year, including AIGC, NASA's successful asteroid-deflection impact, and Yunnan University's creation of a perennial rice, were also selected.
Let's review these blockbuster studies from the past year.
Annual Breakthrough—Webb Telescope
On July 12, NASA released the first batch of full-color deep-space images from the Webb Space Telescope, allowing humans, through JWST, this "Eye of the Universe", to glimpse the mysteries of deep space for the first time.
But the significance of the Webb telescope goes far beyond that. As Science magazine put it, "The Webb telescope will completely change humanity's view of the universe."
NASA stated that the Webb Telescope will continue to push the limits of astronomical observation, and the release of this deep space image is just the beginning.
The Webb Telescope is larger and more sophisticated than Hubble. It lets us see earlier epochs of the observable universe and opens up more possibilities for discovering exoplanets.
NASA engineer Scott Friedman said that as long as the Webb telescope keeps working, it will offer humanity a never-before-seen glimpse of the early universe.
It will capture light from the first generations of stars and galaxies, map their evolution from birth in clouds of gas and dust to their deaths as supernovae, and show us how the earliest galaxies interacted and grew.
From art to mathematics, AIGC makes a strong appearance
The award-winning work Théâtre D'opéra Spatial by artist Jason Allen, created with AIGC
AIGC had a brilliant 2022: from DALL·E to AlphaFold, from Stable Diffusion to ChatGPT, many creative tasks that once belonged exclusively to humans can now be accomplished by AI.
Starting with OpenAI's DALL·E 2, text-to-image generation boomed.
Soon afterwards, giants such as Meta and Google launched their own products and competed to recruit talent.
In addition, machine learning has also shown its creativity in the fields of science, mathematics and programming.
Science magazine's 2021 Breakthrough of the Year had already recognized AlphaFold2 for predicting the 3D structure of proteins from their amino-acid sequences.
AlphaTensor and AlphaCode, released by DeepMind this year, have shown strong capabilities in matrix multiplication and competitive programming, respectively.
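The kind of algorithm AlphaTensor searches for can be illustrated by Strassen's classic 1969 result, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8 (a hand-written sketch of the classic decomposition, not DeepMind's code; AlphaTensor discovers decompositions of this kind automatically):

```python
def strassen_2x2(A, B):
    # Strassen's algorithm for 2x2 matrices: 7 multiplications instead of 8.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Example: [[1,2],[3,4]] @ [[5,6],[7,8]] == [[19,22],[43,50]]
print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

Applied recursively to block matrices, saving one multiplication per 2×2 step is what reduces the asymptotic cost below cubic; AlphaTensor's contribution was finding new, smaller decompositions for other matrix sizes.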
As a review in Science magazine put it:
There is no doubt that in the future humans will use these tools just as we once embraced inventions like the loom and the camera.
Yunnan University Perennial Rice—PR23
The world’s major food crops such as rice, wheat, and corn must be replanted every time they are harvested.
This is a lot of work for farmers and can lead to environmental problems such as soil erosion.
While perennial grains can ease the burden, cultivating long-lived, high-yielding plants has been a challenge.
This year, Hu Fengyi's team at Yunnan University announced that it had successfully bred PR23, a perennial rice that keeps growing for several years.
PR23 is harvested twice a year. Once planted, it can be harvested continuously for 3-4 years without re-tilling, with an average yield per season as high as 6.8 tons per hectare.
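A back-of-the-envelope estimate of what those figures imply for a single planting (illustrative only, assuming two seasons per year, the upper end of the 3-4 year range, and the stated per-season average):

```python
# Cumulative yield from one PR23 planting, using the figures reported above.
yield_per_season = 6.8   # tons/hectare, average per season
seasons_per_year = 2
years = 4                # upper end of the reported 3-4 year range

total = yield_per_season * seasons_per_year * years
print(total)  # -> 54.4 tons/hectare over one planting cycle
```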
The team also stated that PR23 has been promoted in China and several African countries, with the planted area exceeding 15,553 hectares in 2021.

