
Can artificial intelligence really help us talk to animals?

Apr 12, 2023, 04:58 PM
AI machine learning animal language

A dolphin trainer signals "together" with his hands, followed by "create." Two trained dolphins disappear underwater, exchange sounds, then surface, flip onto their backs and raise their tails. They had devised a new trick of their own and performed it in tandem, as requested. "This doesn't prove that language exists," says Aza Raskin. "But if they could use a rich, symbolic form of communication, it would certainly make the task easier."

Raskin is co-founder and president of the Earth Species Project (ESP), a California nonprofit whose ambition is to use a form of artificial intelligence (AI) called machine learning to decode non-human communication, and to make all of the resulting technology openly available, deepening our relationship with other species and helping to protect them. A 1970 album of whale song inspired the movement that led to the ban on commercial whaling; what might a Google Translate for the animal kingdom produce?

The organization, founded in 2017 with the help of major donors including LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. Its stated goal is to achieve communication with animals within our lifetimes. "What we're working towards is whether we can decode animal communication and discover non-human language," Raskin said. "Along the way, and just as importantly, we are developing technology that supports biologists and conservation."

Understanding animal vocalizations has long fascinated humans. Various primates give alarm calls that differ according to the predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to convey different messages. But most experts stop short of calling it language, because no animal communication is known to meet all the criteria.

Until recently, decoding relied mostly on painstaking observation. But interest has grown in applying machine learning to handle the enormous amounts of data that modern animal-borne sensors can now collect. "People are starting to use it," says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. "But we don't yet know how much we can do."

Briefer co-developed an algorithm that analyzes pigs' grunts to determine whether the animal is experiencing a positive or negative emotion. Another tool, called DeepSqueak, judges from rodents' ultrasonic calls whether they are in a stressed state. A further initiative, the CETI project (short for Cetacean Translation Initiative), plans to use machine learning to translate the communication of sperm whales.


Earlier this year, Briefer and colleagues published a study of vocal emotion in pigs, based on 7,414 sounds collected from 411 pigs in a variety of scenarios.
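The kind of classifier behind such studies maps acoustic features of a call to an emotional-valence label. The sketch below is a minimal stand-in, not Briefer's published method: the two features (call duration and mean pitch), all the numbers, and the nearest-centroid rule are invented here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features per grunt: [duration in s, mean pitch in Hz].
# The distributions are invented for illustration only.
n = 200
pos = np.column_stack([rng.normal(0.2, 0.05, n), rng.normal(120, 15, n)])
neg = np.column_stack([rng.normal(0.5, 0.10, n), rng.normal(180, 20, n)])

# Nearest-centroid classifier: label each call by the closer class mean,
# with features standardized so duration and pitch weigh equally.
X = np.vstack([pos, neg])
y = np.array([0] * n + [1] * n)  # 0 = positive context, 1 = negative
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
c0, c1 = Xz[y == 0].mean(axis=0), Xz[y == 1].mean(axis=0)
pred = (np.linalg.norm(Xz - c1, axis=1) < np.linalg.norm(Xz - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"accuracy on the toy data: {acc:.2f}")
```

Real systems learn far subtler acoustic representations, but the pipeline shape, features in, valence label out, is the same.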

Yet ESP says its approach is different, because it focuses not on decoding the communication of one species but of all of them. While Raskin acknowledges that the likelihood of rich, symbolic communication is higher among social animals such as primates, whales and dolphins, the goal is to develop tools that could be applied to the entire animal kingdom. "We are species agnostic," Raskin said. "We develop tools... that can work across all of biology, from worms to whales." The "intuition-pumping" work behind ESP, Raskin said, is research showing that machine learning can translate between multiple, sometimes distant, human languages, without any prior knowledge.

The process begins with an algorithm that represents words as points in a multidimensional geometric space. In this representation, the distance and direction between points (words) describe how they meaningfully relate to one another (their semantic relationship). For example, "king" bears the same distance and direction to "man" as "queen" does to "woman". (The mapping is built not by knowing what the words mean, but by observing how often they occur near one another.)

It was later noticed that these "shapes" are similar across different languages. Then, in 2017, two groups of researchers working independently found a technique that achieves translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point for the Urdu word closest to the point for the English word. "You can translate most words reasonably well that way," Raskin said.
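The geometric core of that alignment step can be sketched with a toy orthogonal-Procrustes example. Here a fake "English" embedding space is rotated by a random orthogonal matrix to play the role of the second language, and the rotation is recovered from known word pairs; real cross-lingual alignment works on learned, noisy embeddings, so this only illustrates the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "English" embedding space: six words as points in four dimensions.
en_words = ["king", "queen", "man", "woman", "cat", "dog"]
X = rng.normal(size=(6, 4))

# Stand-in for the second language: the same shape, rotated by an
# unknown orthogonal matrix (a big simplification of real embeddings).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Y = X @ Q  # "Urdu" embeddings of the same six concepts

# Orthogonal Procrustes: recover the rotation W that best maps X onto Y,
# via the SVD of the cross-covariance of the known word pairs.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# Translate "king": map its vector across and take the nearest point in Y.
mapped = X[0] @ W
nearest = int(np.argmin(np.linalg.norm(Y - mapped, axis=1)))
print(en_words[nearest])
```

Because the toy spaces differ by an exact rotation, the recovered map is exact; with real embeddings the aligned points only land near their translations, which is why the method finds the *closest* point rather than an exact match.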

ESP's aspiration is to create such representations of animal communication, working both on single species and on many species at once, and then to explore questions such as whether there is overlap with the universal human "shape". We don't know how animals experience the world, Raskin said, but there are emotions, such as grief and joy, that some of them seem to share with us and may well communicate to others of their species. "I don't know which will be more incredible: the parts of the shapes that overlap, where we can directly communicate or translate, or the parts where we can't."


Dolphins use clicks, whistles, and other sounds to communicate. But what are they talking about?

Animals communicate through more than just sound, he added. For example, bees use a "waggle dance" to let others know the location of a flower. Translation across different modes of communication is also required.

The goal is "like going to the moon," Raskin admits, and it won't be reached all at once. Instead, ESP's roadmap involves solving a series of smaller problems on the way to the larger vision, developing general tools along the way that can help researchers apply AI to unlock the secrets of the species they study.

For example, ESP recently published a paper (and shared its code) on the so-called "cocktail party problem" in animal communication: the difficulty of discerning which individual in a group of similar-sounding animals is vocalizing in a noisy social setting.

"To our knowledge, no one has done this kind of end-to-end [animal sound] disentanglement before," Raskin said. The AI-based model developed by ESP, tested on dolphin signature whistles, macaque coos and bat vocalizations, worked best when the calls came from individuals the model had been trained on; but with larger datasets it was able to disentangle mixed calls from animals outside the training set.
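ESP's separator is a learned neural network, but the underlying goal of pulling one caller out of a mixture can be shown with a deliberately naive sketch: two synthetic "calls" occupying different frequency bands are mixed, then disentangled by splitting the spectrum. The signals and the 1 kHz split point are invented for illustration; real overlapping calls share frequency bands, which is exactly why a learned model is needed.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs  # one second of "audio"

# Two hypothetical overlapping calls in separate frequency bands:
# a low whistle at 400 Hz and a high one at 2000 Hz.
low = np.sin(2 * np.pi * 400 * t)
high = np.sin(2 * np.pi * 2000 * t)
mixture = low + high

# Naive disentanglement: split the spectrum at 1 kHz and invert each half.
spec = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(len(mixture), d=1 / fs)
low_est = np.fft.irfft(np.where(freqs < 1000, spec, 0), n=len(mixture))
high_est = np.fft.irfft(np.where(freqs >= 1000, spec, 0), n=len(mixture))

err_low = np.max(np.abs(low_est - low))
print(f"max reconstruction error for the low call: {err_low:.2e}")
```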

Another project involves using AI to generate novel animal sounds, with humpback whales as a test species. The novel calls, made by breaking vocalizations into micro-phonemes (distinct sound units lasting about a hundredth of a second) and using a language model to "speak" something whale-like, can then be played back to the animals to see how they respond. If the AI can identify what registers as a random change versus a semantically meaningful one, it would bring us closer to meaningful communication, Raskin explained. "It would have the AI speaking the language, even though we don't yet know what it means."
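Generating novel sequences from discrete sound units can be sketched with a first-order Markov model: count which unit follows which in a corpus, then sample plausible continuations. The "songs" and unit names below are invented; real systems use far richer language models over learned units, but the principle of predicting the next unit from observed transitions is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus of "songs" as sequences of discrete sound units, a stand-in
# for the sub-second units described in the text. Unit names are made up.
songs = [
    "A B C A B D".split(),
    "A B C C A B D".split(),
    "A B D A B C".split(),
]

# First-order model: count which unit follows which.
follows = defaultdict(Counter)
for song in songs:
    for a, b in zip(song, song[1:]):
        follows[a][b] += 1

def generate(start, length, rng):
    """Sample a novel sequence whose transitions are all attested."""
    out = [start]
    for _ in range(length - 1):
        options = follows[out[-1]]
        if not options:
            break
        units, counts = zip(*options.items())
        out.append(rng.choices(units, weights=counts)[0])
    return out

rng = random.Random(0)
novel = generate("A", 8, rng)
print(" ".join(novel))
```

Playback experiments would then test how animals respond to such generated sequences versus real ones.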


Hawaiian crows are known for their use of tools, and are also thought to have particularly complex vocalizations.

Another project aims to develop an algorithm that determines how many call types a species has at its disposal, by applying self-supervised machine learning, which requires no labeling of the data by human experts in order to learn patterns. In an early test case, it will mine recordings made by a team led by Christian Rutz, professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow, a species that Rutz discovered can make and use tools for foraging and that is believed to have a more complex vocal repertoire than other crow species.
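Estimating how many call types a species has amounts to clustering calls without labels. A hypothetical sketch, with invented 2-D features standing in for learned embeddings, and simple leader clustering standing in for the actual self-supervised algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-D acoustic features for calls drawn from three hypothetical
# call types; real pipelines would cluster learned embeddings instead.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
calls = np.vstack([c + rng.normal(0, 0.2, size=(40, 2)) for c in centers])
rng.shuffle(calls)

# Leader clustering: a call founds a new "type" unless it lies within
# a fixed radius of an existing type's exemplar.
radius = 2.5
exemplars = []
for call in calls:
    if not any(np.linalg.norm(call - e) < radius for e in exemplars):
        exemplars.append(call)

print(f"estimated number of call types: {len(exemplars)}")
```

On this well-separated toy data the count comes out right; the hard part in practice is that real call types blur into one another, which is where the learned representations matter.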

Rutz is particularly excited about the project's conservation value. The Hawaiian crow is critically endangered and survives only in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by comparing recordings made at different points in time, it will be possible to track whether the species' call repertoire is being eroded in captivity; specific alarm calls, for example, may have been lost, which could have implications for reintroduction, and the loss might be mitigated through intervention. "It could produce a step change in our ability to help these birds come back from the brink," Rutz said, noting that detecting and classifying calls by hand would be labor-intensive and error-prone.

Meanwhile, another project seeks to automatically decipher the functional meaning of vocalizations. It is being pursued with the laboratory of Ari Friedlaender, a professor of marine sciences at UC Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater, and runs one of the largest tagging programs in the world. Small electronic biologging devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims first to apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (whether it is feeding, resting, traveling or socializing), and then to add in the audio data to see whether functional meaning can be attached to the calls. (Any findings could then be validated with playback experiments, along with previously decoded calls.) The technique will initially be applied to humpback whale data; the lab has tagged several animals in the same group, so it is possible to see how signals are given and received. Friedlaender said he had been "hitting the ceiling" of what currently available tools can tease out of the data. "Our hope is that the work ESP can do will provide new insights," he said.
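A first pass at attaching functional meaning to calls is simply counting co-occurrences between tag-derived behaviour labels and the call types recorded at the same time. Everything below, behaviours, call names, counts, is hypothetical.

```python
from collections import Counter

# Hypothetical paired observations: (behaviour inferred from tag data,
# call type heard at the same moment). All labels are invented.
observations = [
    ("feeding", "call_A"), ("feeding", "call_A"), ("feeding", "call_B"),
    ("traveling", "call_B"), ("traveling", "call_B"),
    ("socializing", "call_C"), ("socializing", "call_C"),
    ("feeding", "call_A"), ("traveling", "call_B"),
]

# Tally which call type dominates in each behavioural context.
by_behaviour = {}
for behaviour, call in observations:
    by_behaviour.setdefault(behaviour, Counter())[call] += 1

for behaviour, counts in sorted(by_behaviour.items()):
    top_call, _ = counts.most_common(1)[0]
    print(f"{behaviour}: most frequent call is {top_call}")
```

Such correlations are only suggestive; as the text notes, playback experiments are what would turn a statistical association into evidence of function.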

But not everyone is convinced that AI can deliver such grand ambitions. Robert Seyfarth, professor emeritus of psychology at the University of Pennsylvania, studied social behavior and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can help with some problems, such as identifying an animal's vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalizations, where he is skeptical it will add much.

The problem, he explains, is that while many animals can have sophisticated and complex societies, their repertoire of sounds is much smaller than ours. The result is that the exact same sound can be used to mean different things in different contexts, and it is only by studying the context, who the calling individual is, how they relate to others, where they fall in the hierarchy, who they have interacted with, that meaning can be established. "I just don't think these AI methods are sufficient," Seyfarth said. "You have to go out there and watch the animals."

There are also doubts about the concept itself: that the forms of animal communication would overlap in a meaningful way with human communication "shapes". Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing, Seyfarth said. But doing so with other species could be "completely different". "It's an exciting idea, but it's a big stretch," said Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he points to research showing that many species communicate in ways "more complex than humans ever imagined." The stumbling blocks have been our inability to gather sufficient data and analyze it at scale, and our own limited perception. "These are the tools that let us take off the human glasses and understand entire communication systems," he said.
