Table of Contents
Use your mind to output "HELLO"
What is a smart metasurface?
Wireless 'soul communication'! Academician Cui Tiejun leads the development of a new brain-computer metasurface, flexible and non-invasive

Apr 09, 2023, 09:31 AM


In recent years, coding metasurfaces have achieved real-time, programmable control of electromagnetic functions, whereas earlier electromagnetic functions were either static or very limited in traditional passive devices. However, such metasurfaces still require manual operation.

To detect and interpret a person's intentions directly, scientists proposed the concept of the brain-computer interface (BCI), which establishes communication between the brain and a device and offers a new perspective on the control of programmable metasurfaces. By collecting brain signals through a "special hat" (an electrode cap), a BCI can decode the operator's intentions and send commands to the controlled object without requiring the operator to perform complex muscle activity.

Now, the team of Academician Cui Tiejun of the State Key Laboratory of Millimeter Waves at Southeast University, together with research institutions including South China University of Technology and the National University of Singapore, has gone a step further and developed an electromagnetic brain-computer metasurface (EBCM).

According to reports, this metasurface can flexibly and non-invasively control information synthesis and wireless transmission: it converts the operator's brain activity into electroencephalogram (EEG) signals and then into various electromagnetic (EM) instructions, thereby achieving wireless "soul communication" between two operators.

A monitor displaying the relevant commands is placed in front of the operator. Simply by receiving these simple instructions, the EBCM can understand the operator's intentions and implement electromagnetic functions such as visual beam scanning, wave modulation, and pattern encoding.


The related paper, titled "Directly wireless communication of human minds via non-invasive brain-computer-metasurface platform," was published in the scientific journal eLight.

The researchers said that this study combines electromagnetic wave space with brain-computer interfaces, opening a new direction for exploring the deep integration of metasurfaces, human-brain intelligence, and artificial intelligence, and helping to build a new generation of bio-intelligent metasurface systems.

Use your mind to output "HELLO"

In this study, the research team designed and experimentally demonstrated wireless text communication based on the EBCM.

The team provides a textual graphical user interface (GUI) for the BCI operator, so that visual buttons can be directly encoded into specific coding sequences composed of "0" and "1".

In the experiment, a high-gain single-beam mode and a low-gain random-scattering mode were used to distinguish the reflection amplitude of the metasurface, corresponding to the codes "1" (high amplitude) and "0" (low amplitude) for wireless information transmission.
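The binary coding idea described above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: each character is serialized to its 8-bit ASCII code, and each bit selects one of the two metasurface reflection modes.

```python
# Illustrative sketch of the ASCII amplitude coding described above.
# "1" selects the high-gain single-beam mode (high reflection amplitude)
# and "0" the low-gain random-scattering mode (low amplitude).

def char_to_bits(ch: str) -> list[int]:
    """Return the 8-bit ASCII code of a character, MSB first."""
    code = ord(ch)
    return [(code >> i) & 1 for i in range(7, -1, -1)]

def bits_to_modes(bits: list[int]) -> list[str]:
    """Map each bit to the metasurface reflection mode that encodes it."""
    return ["single-beam (high)" if b else "random-scatter (low)" for b in bits]

bits = char_to_bits("H")
print(bits)                    # [0, 1, 0, 0, 1, 0, 0, 0]  (ASCII 72)
print(bits_to_modes(bits)[:2])
```

The mode names here are hypothetical labels; what matters is the two-level amplitude mapping.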

As a prototype demonstration, the researchers showed the wireless transmission of text from one operator to another within the EBCM communication system.

Operator A, acting as the text sender, sends letters by visually gazing at the character buttons on the EBCM GUI. Once the target letter is decoded from the EEG signal, an ASCII-based coding sequence is applied on the FPGA to switch time-varying modes, driving the metasurface to send the information into space, where it is received, demodulated, and presented by operator B's EBCM.


The research team demonstrated the wireless transmission of the five letters "HELLO"; the word "HELLO" was successfully displayed on operator B's screen.
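An end-to-end sketch of this text link, under the same ASCII/amplitude assumptions as above (the amplitude values are invented for illustration): operator A's letters become bits, the bits become high or low reflection amplitudes, and operator B's side threshold-demodulates and decodes them.

```python
# Toy end-to-end model of the EBCM text link: ASCII bits -> high/low
# reflection amplitudes -> threshold demodulation -> decoded text.

def encode(text: str) -> list[float]:
    amps = []
    for ch in text:
        for i in range(7, -1, -1):
            bit = (ord(ch) >> i) & 1
            amps.append(0.9 if bit else 0.1)   # high/low reflection amplitude
    return amps

def decode(amps: list[float], threshold: float = 0.5) -> str:
    bits = [1 if a > threshold else 0 for a in amps]
    chars = []
    for k in range(0, len(bits), 8):
        byte = bits[k:k + 8]
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)

print(decode(encode("HELLO")))   # HELLO
```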

In the visual beam-scanning experiment, the operator selected the desired beam-scanning direction simply by looking in that direction. After detecting the operator's EEG, the EBCM displays and executes the coding pattern for the corresponding beam-scanning direction.


In addition, the research team demonstrated the EBCM's pattern-encoding process. The operator enters the required codes by selecting specific buttons; codes detected by the EBCM are displayed on the screen as yellow squares. The last code, "C4", is a stop instruction that terminates the encoding process and instructs the FPGA to compute the final coding pattern. The EBCM then executes the computed coding pattern and displays it on the metasurface.
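The collect-until-stop workflow described above can be sketched as follows. How the FPGA computes the final pattern is not described in the article, so the 4x4 grid mapping below (codes "A1" through "D4" marking cells) is purely a hypothetical stand-in.

```python
# Sketch of the pattern-encoding workflow: collect codes until the stop
# instruction "C4", then compute a final coding pattern. The grid mapping
# is invented for illustration; the real FPGA computation is not given.

STOP = "C4"

def collect_pattern(inputs: list[str]) -> list[list[int]]:
    codes = []
    for code in inputs:
        if code == STOP:
            break                 # stop instruction terminates entry
        codes.append(code)
    # Toy "final pattern": mark one cell of a 4x4 grid per code like "A1".
    grid = [[0] * 4 for _ in range(4)]
    for code in codes:
        row = ord(code[0]) - ord("A")
        col = int(code[1]) - 1
        grid[row][col] = 1
    return grid

pattern = collect_pattern(["A1", "B2", "C3", "C4", "D4"])
for row in pattern:
    print(row)
```

Note that "D4", entered after the stop code, is ignored, mirroring how the stop instruction ends the encoding process.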

These experiments show that the operator no longer needs to perform any muscle movements, but only needs to stare at specific visual buttons to produce the relevant continuous stimulation; the EBCM recognizes these stimuli and converts them into the corresponding EM signals for communication.

What is a smart metasurface?

A metasurface is an artificial layered material with a thickness smaller than the wavelength. By in-plane structure, metasurfaces fall into two types: those with lateral subwavelength fine structures and those with a uniform film layer. Metasurfaces enable flexible and effective control over the phase, polarization, propagation mode, and other characteristics of electromagnetic waves.

The smart metasurface is an important application of information metamaterials in mobile communications. Its basic principle is to control the electromagnetic properties of the metamaterial through digital programming, changing the diffuse reflection that electromagnetic waves undergo at an ordinary wall and achieving intelligent control and beamforming of space electromagnetic waves. With low power consumption and low cost, smart metasurfaces are expected to become important infrastructure for future mobile communication networks.
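The digital-programming principle above can be made concrete with a 1-bit coding example: each element's ideal reflection phase for steering the beam toward a target angle is quantized to 0 or pi, producing a "0"/"1" coding sequence. The element spacing and frequency below are illustrative values, not taken from any specific device.

```python
# Sketch of 1-bit coding-metasurface beam steering: quantize each
# element's ideal phase gradient to 0 or pi (codes "0" and "1").
import math

C = 3e8  # speed of light, m/s

def coding_sequence(n_elements: int, spacing_m: float,
                    freq_hz: float, theta_deg: float) -> list[int]:
    """Return the 1-bit coding sequence steering the beam to theta_deg."""
    lam = C / freq_hz
    k = 2 * math.pi / lam
    seq = []
    for n in range(n_elements):
        # Ideal progressive phase for a beam toward theta, wrapped to [0, 2pi)
        ideal = (k * n * spacing_m * math.sin(math.radians(theta_deg))) % (2 * math.pi)
        seq.append(1 if ideal >= math.pi else 0)  # 1-bit quantization
    return seq

# 16 elements, half-wavelength spacing at 10 GHz, steer toward 30 degrees
print(coding_sequence(16, 0.015, 10e9, 30.0))
```

The resulting periodic "0"/"1" stripes are exactly the kind of coding pattern a programmable metasurface's FPGA would write to the element states.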

As early as 2014, Academician Cui Tiejun’s team took the lead in realizing an intelligent metasurface hardware system, setting a precedent for promoting the application of information metamaterials.

In February this year, Academician Cui Tiejun's team and collaborators used multi-layer transmissive digitally coded metasurfaces to construct a fully diffractive neural network, the Programmable Artificial Intelligence Machine (PAIM), that can be adjusted in real time. The network achieves real-time programmability of its parameters and light-speed computation, and the team demonstrated a variety of applications, including image recognition, reinforcement learning, and multi-channel coding and decoding for communications. This is the world's first fully diffractive, adjustable neural network implemented and demonstrated in microwave space.
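The general diffractive-neural-network idea behind such systems can be sketched as a toy model (this is not PAIM itself): each metasurface layer applies programmable complex transmission coefficients, and propagation between layers acts as a fixed linear "diffraction" operator; detectors measure output intensity. All sizes and coefficients below are illustrative.

```python
# Toy model of a cascaded diffractive network: programmable per-element
# modulation followed by a fixed linear propagation operator per layer.
import numpy as np

rng = np.random.default_rng(42)
N, LAYERS = 8, 3

# Fixed propagation between layers, here a random complex matrix standing
# in for the true free-space diffraction kernel.
D = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Programmable complex transmission coefficients, one vector per layer --
# the "network weights" that can be reprogrammed in real time.
T = rng.standard_normal((LAYERS, N)) + 1j * rng.standard_normal((LAYERS, N))

def forward(x: np.ndarray) -> np.ndarray:
    """Propagate a complex field through the cascaded coded layers."""
    for t in T:
        x = D @ (t * x)        # modulate, then diffract to the next layer
    return np.abs(x) ** 2      # detectors measure output intensity

out = forward(np.ones(N, dtype=complex))
print(out.shape)
```

Training such a network amounts to adjusting the transmission coefficients `T`, which in hardware happens at the speed of wave propagation through the layers.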

Of course, the application scenarios of metasurfaces are far from limited to this.

The rich and unique physical properties of metasurfaces, together with their flexible control of electromagnetic waves, give them important application prospects in many fields, including stealth technology, antennas, microwave and terahertz devices, and optoelectronic devices.


