Neurosymbolic Regression: Extracting Science from Data
Translator: Li Rui
Reviser: Sun Shujuan
The universe is noisy and chaotic, complex enough to make prediction difficult. Human intelligence and intuition are enough for a basic understanding of some of the activity in the surrounding world: from the limited perspective of individuals and small groups, they suffice for a rough grasp of individual events on macroscopic scales of space and time.
Natural philosophers in prehistory and antiquity were mostly limited to common-sense rationalization and guess-and-check. These methods have significant limitations, especially for phenomena too large or too complex to observe directly, which is why superstitious and magical thinking flourished.
This is not to disparage guessing and checking (which underlie the modern scientific method), but to observe that the transformation in humanity's ability to investigate and understand was driven by the desire, and the tools, to distill physical phenomena into mathematical expressions.
This became especially evident after the Enlightenment-era science led by Newton and his contemporaries, although traces of analytical reductionism appear in antiquity as well. The ability to move from observations to mathematical equations (and to the predictions those equations make) is integral to scientific exploration and progress.
Deep learning, too, is fundamentally about learning transformations between observed inputs and outputs, just as human scientists seek functional relationships between inputs and outputs in the form of mathematical expressions.
The difference, of course, is that the input-output relationship a deep neural network learns (a consequence of the universal approximation theorem) consists of an uninterpretable "black box" of numerical parameters: mainly weights, biases, and the nodes that connect them.
The universal approximation theorem states that a neural network meeting very relaxed criteria can approximate any well-behaved function arbitrarily closely. In practice, though, a neural network is a fragile and leaky abstraction of an input-output relationship that may arise from simple yet precise underlying equations.
Unless special care is taken to train a model (or an ensemble of models) to predict its own uncertainty, neural networks tend to perform very poorly when making predictions outside the distribution they were trained on.
Deep learning models are also poor at producing falsifiable predictions, that is, the testable hypotheses that form the basis of the scientific method. So while deep learning is a well-proven tool for fitting data, its usefulness is limited in one of humanity's most important pursuits: exploring the universe around us through the scientific method.
For all these shortcomings, deep learning's enormous fitting capacity and its many successes across scientific disciplines cannot be ignored.
Modern science produces data in quantities that no individual (or even team) can inspect directly, let alone convert intuitively from noisy measurements into clean mathematical equations.
For this, we can turn to symbolic regression: automated or semi-automated methods for distilling data into equations.
The Current Gold Standard: Evolutionary Methods
Before getting into some exciting recent research on applying modern deep learning to symbolic regression, it helps to first understand the current state of the art: evolutionary methods for transforming data sets into equations. The most commonly mentioned symbolic regression package is Eureqa, which is based on genetic algorithms.
Eureqa was originally developed as a research project by Hod Lipson's team at Cornell University and was offered as proprietary software by Nutonian, which was later acquired by DataRobot. Eureqa has since been integrated into the DataRobot platform, led by Michael Schmidt, Eureqa's co-creator and DataRobot's CTO.
Eureqa and similar symbolic regression tools use genetic algorithms to optimize a population of equations for accuracy and simplicity simultaneously.
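A toy sketch can make this joint objective concrete. The snippet below is not Eureqa's actual algorithm; it only illustrates the kind of fitness function genetic methods optimize, scoring each candidate expression on fit error plus a parsimony penalty. The candidate pool and complexity scores are made up for illustration.

```python
import numpy as np

# Illustrative only: score candidate expressions on both accuracy
# (mean squared error) and simplicity (a complexity penalty), as
# genetic symbolic regression methods do when ranking a population.

x = np.linspace(-2.0, 2.0, 50)
y_true = 2.0 * x + np.sin(x)             # hidden "ground truth"

# Candidate expressions as (callable, complexity) pairs (made up).
POOL = [
    (lambda x: x,                   1),
    (lambda x: np.sin(x),           2),
    (lambda x: 2.0 * x + np.sin(x), 4),
    (lambda x: x ** 2,              2),
    (lambda x: 2.0 * x,             2),
]

def fitness(expr, complexity, lam=0.01):
    """Mean squared error plus a small penalty per unit of complexity."""
    mse = float(np.mean((expr(x) - y_true) ** 2))
    return mse + lam * complexity

scores = [fitness(f, c) for f, c in POOL]
best = int(np.argmin(scores))            # index of the winning expression
```

A real genetic algorithm would mutate and recombine expression trees between scoring rounds; the point here is only that the third candidate wins despite its higher complexity, because the parsimony penalty is small relative to its perfect fit.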
TuringBot is an alternative symbolic regression package based on simulated annealing, an optimization algorithm named after the annealing process used in metallurgy to alter the physical properties of metals.
In simulated annealing, a "temperature" is gradually lowered as candidate solutions are selected: higher temperatures correspond to accepting worse solutions, which promotes exploration early in the search and supplies the energy needed to escape local optima in pursuit of the global optimum.
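The acceptance rule is the heart of the method. The minimal sketch below (illustrative, not TuringBot's code) minimizes a multimodal toy function; note how worse moves are accepted with probability exp(-delta / t), which shrinks as the temperature cools.

```python
import math
import random

# Minimal simulated-annealing sketch on a multimodal toy objective.
# The function below has a local minimum near x = 3.84 and a global
# minimum near x = -1.31; high early temperature lets the search
# escape the wrong basin.

def objective(x):
    return x ** 2 + 10 * math.sin(x)

def anneal(t0=10.0, cooling=0.99, steps=5000, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    best = x
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 1)       # random neighbor
        delta = objective(cand) - objective(x)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / t), which vanishes as t -> 0.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if objective(x) < objective(best):
            best = x
        t *= cooling                     # cool the temperature
    return best

best_x = anneal()
```

The returned solution lands in the global basin, where the objective is negative, rather than in the local minimum (where it is about +8).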
TuringBot offers a free version, but it imposes significant limits on data set size and complexity, and the code cannot be modified.
While commercial symbolic regression software (especially Eureqa) provides an important baseline for comparison when developing new symbolic regression tools, the usefulness of closed-source programs is limited.
An open source alternative called PySR, released under the Apache 2.0 license and led by Princeton doctoral student Miles Cranmer, shares the twin optimization goals of accuracy and parsimony (simplicity) pursued by Eureqa and TuringBot, and combines elements of the methods both use.
In addition to providing a free, freely modifiable library for performing symbolic regression, PySR is also interesting from a software perspective: it is written in Python but uses the Julia programming language as a fast backend.
While genetic algorithms are generally considered the current state-of-the-art for symbolic regression, the past few years have seen an exciting explosion of new symbolic regression strategies.
Many of these new developments leverage modern deep learning models: some as function-approximation components in multi-step pipelines, others in an end-to-end fashion built on large Transformer models originally developed for natural language processing, and everything in between.
Alongside the new deep-learning-based symbolic regression tools, there is also a resurgence of probabilistic and statistical methods, especially Bayesian ones.
Combined with modern computing power, this new generation of symbolic regression software is not only interesting research in its own right but also offers real utility to scientific disciplines built on large data sets and extensive experiments.
Symbolic Regression with Deep Neural Networks as Function Approximators
Thanks to the universal approximation theorem, described and studied by Cybenko and Hornik in the late 1980s and early 1990s, we can expect a neural network with at least one hidden layer of nonlinear activations to approximate any well-behaved mathematical function.
In practice, deeper neural networks tend to perform better on more complex problems. In principle, however, a single hidden layer suffices to approximate a wide variety of functions.
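A self-contained toy demonstration of the theorem, in pure NumPy: a single hidden layer of tanh units, trained by plain full-batch gradient descent, learns to approximate sin(x). All sizes and hyperparameters here are arbitrary choices for illustration.

```python
import numpy as np

# Single-hidden-layer network fit to sin(x) on [-pi, pi]: a toy
# illustration of the universal approximation theorem in action.

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

H = 32                                   # hidden units
W1 = rng.normal(0.0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2
    err = (pred - y) / len(x)            # gradient of 0.5 * MSE w.r.t. pred
    # Backpropagate through both layers.
    gW2, gb2 = h.T @ err, err.sum(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)**2
    gW1, gb1 = x.T @ dh, dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

After training, the mean squared error drops from roughly 0.5 (the variance of sin on this interval) to well under 0.1, even with only one hidden layer, which is the point of the theorem.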
The physics-inspired AI Feynman algorithm uses the universal approximation theorem as part of a more complex puzzle.
AI Feynman (and its successor, AI Feynman 2.0) was developed by physicists Silviu-Marian Udrescu and Max Tegmark, along with colleagues. It takes advantage of functional properties found in many physics equations, such as smoothness, symmetry, and compositionality, among others.
A neural network serves as the function approximator: it learns the input-output transformation represented by a data set and then facilitates the study of these properties by generating synthetic data under the same functional transformation.
The functional properties AI Feynman exploits are common in physics equations but do not hold arbitrarily across the space of all possible mathematical functions. Still, they are reasonable properties to look for in the kinds of functions that describe the real world.
Like the genetic algorithm and simulated annealing methods described above, AI Feynman fits each new data set from scratch. There is no generalization or pre-training involved, and the deep neural network is just one orchestrated component of a larger, physics-informed system.
AI Feynman did an excellent job of deciphering 100 equations (or puzzles) drawn from the Feynman Lectures on Physics, but the lack of generalization means that each new data set (corresponding to a new equation) requires a large computational budget.
A newer family of deep learning strategies for symbolic regression leverages the highly successful Transformer models, originally introduced for natural language by Vaswani et al. These methods are not perfect, but pre-training lets them save substantial computation at inference time.
First-Generation Symbolic Regression Based on Natural Language Models
Given that very large attention-based Transformer models have succeeded at a wide variety of tasks in computer vision, audio, reinforcement learning, recommendation systems, and many other fields (beyond their original role in text-based natural language processing), it is no surprise that Transformers would eventually be applied to symbolic regression as well.
While mapping numerical input-output pairs to symbolic sequences requires some careful engineering, the sequential nature of mathematical expressions lends itself naturally to Transformer methods.
Crucially, using Transformers to generate mathematical expressions lets these methods leverage pre-training on the structure and numerical meaning of millions of automatically generated equations.
This also lays the foundation for improving the models through scale. Scaling is one of the main advantages of deep learning: larger models and more data continue to improve performance well beyond the classic statistical-learning limits of overfitting.
Scaling is the main advantage cited by Biggio et al. in their paper "Neural Symbolic Regression that Scales", whose method is known as NSRTS. The NSRTS Transformer uses a dedicated encoder to map each input-output pair in a data set into a latent space whose size is fixed, independent of the number of encoder inputs.
The NSRTS decoder then constructs a sequence of tokens representing an equation, conditioned on the encoded latent space and on the symbols generated so far. Crucially, the decoder emits only placeholders for numerical constants, but otherwise uses the same vocabulary as the pre-training equation data set.
NSRTS uses PyTorch and PyTorch Lightning and has a permissive open source MIT license.
After generating a constant-free equation (called an equation skeleton), NSRTS optimizes the constants by gradient descent. This layering of a generic optimization algorithm on top of sequence generation is shared by "SymbolicGPT", developed concurrently by Valipour et al.
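The second stage of this pipeline can be sketched in a few lines. The data, skeleton, and initial guess below are all hypothetical, made up for illustration: a decoder is assumed to have proposed the skeleton c0 * sin(c1 * x) + c2, and a generic optimizer (here BFGS, as the papers describe) fits the placeholder constants to the data.

```python
import numpy as np
from scipy.optimize import minimize

# Stage two of the skeleton-then-constants pipeline: fit the numeric
# placeholders of a proposed equation skeleton with BFGS.

x = np.linspace(0.0, 4.0, 100)
y = 2.5 * np.sin(1.3 * x) + 0.7          # data from hidden constants

# Suppose the decoder emitted the skeleton  c0 * sin(c1 * x) + c2.
def skeleton(c, x):
    return c[0] * np.sin(c[1] * x) + c[2]

def loss(c):
    return float(np.mean((skeleton(c, x) - y) ** 2))

# Initial guess chosen near the truth for illustration; frequency
# fitting is multimodal, so real pipelines may restart from several
# guesses or rely on the model's own constant predictions.
res = minimize(loss, x0=[1.0, 1.2, 0.5], method="BFGS")
c_fit = res.x
```

With a reasonable starting point, BFGS drives the residual essentially to zero and recovers the hidden constants; with a poor starting point it can land in a local minimum, which is exactly the weakness the second-generation end-to-end models discussed below try to address.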
Rather than an attention-based encoder as in NSRTS, Valipour et al. use a model based on Stanford's point-cloud model, PointNet, to produce a fixed-dimensional feature set, which a Transformer decoder then uses to generate equations. Like NSRTS, SymbolicGPT uses BFGS to find the numerical constants for the equation skeletons the Transformer decoder produces.
Second-Generation Symbolic Regression Based on Natural Language Models
While the recent work described above uses natural language processing (NLP) Transformers to bring generalization and scalability to symbolic regression, those models are not truly end to end, since they do not estimate numerical constants.
This can be a serious flaw: imagine a model that generates equations with 1,000 sinusoidal bases of different frequencies. Optimizing each term's coefficients with BFGS would probably fit most input data sets well, but in reality it would just be a slow, roundabout way of performing Fourier analysis.
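The point is easy to verify on a small scale: if the "skeleton" is just a fixed bank of sinusoids and only the coefficients are fitted, the procedure collapses into ordinary Fourier-style least squares, and the nonlinear optimizer adds nothing.

```python
import numpy as np

# Fitting coefficients over a fixed sinusoid basis is just linear
# least squares, i.e. Fourier analysis in disguise.

n = 256
t = np.arange(n) / n
signal = np.cos(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# Design matrix of cos/sin bases at integer frequencies 1..20.
freqs = np.arange(1, 21)
A = np.hstack([np.cos(2 * np.pi * freqs * t[:, None]),
               np.sin(2 * np.pi * freqs * t[:, None])])

coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
# The bases are orthogonal on this grid, so the two true components
# are recovered exactly: cos at frequency 5 and sin at frequency 12.
```

No BFGS needed: a single linear solve recovers the spectrum, which is why generating huge sinusoid skeletons and then optimizing their constants is wasteful rather than clever.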
In the spring of 2022, second-generation Transformer-based symbolic regression models appeared on arXiv: SymFormer, from Vastl et al., and another end-to-end Transformer from Kamienny and colleagues.
The important difference between these and previous Transformer-based symbolic regression models is that they predict numerical constants along with the symbolic mathematical sequence.
SymFormer uses a dual-headed Transformer decoder for end-to-end symbolic regression: one head produces mathematical symbols, while the second learns a numerical regression task, estimating the constants that appear in the equations.
The end-to-end models of Kamienny and Vastl differ in details, such as the precision of their numerical estimates, but both groups' solutions still rely on a subsequent optimization step for refinement.
Even so, according to their authors, they offer faster inference and greater accuracy than previous methods, producing better equation skeletons and constant estimates that serve as a good starting point for the optimization step.
The Era of Symbolic Regression is Coming
Symbolic regression has mostly been an elegant but computationally intensive machine learning method, and over the past decade it has received far less attention than deep learning in general.
This is partly due to the "use it and lose it" nature of genetic and probabilistic methods, which must start from scratch for each new data set, a characteristic shared even by intermediate applications of deep learning to symbolic regression such as AI Feynman.
Using Transformers as an integral component of symbolic regression lets recent models take advantage of large-scale pre-training, reducing the energy, time, and computing hardware required at inference time.
This trend has been extended further by new models that estimate numerical constants as well as predicting mathematical symbols, enabling faster inference and greater accuracy.
Generating symbolic expressions, which in turn can be used to form testable hypotheses, is a deeply human task that sits at the heart of science. Automated symbolic regression has made steady technical advances over the past two decades, but the real test is whether it is useful to researchers doing real science.
Symbolic regression is starting to produce publishable scientific results that go beyond technical demonstrations. A Bayesian symbolic regression approach has yielded a new mathematical model for predicting cell division.
Another research team used a sparse regression model to generate plausible equations for ocean turbulence, paving the way for improved multiscale climate models.
A project combining graph neural networks and symbolic regression (via Eureqa's genetic algorithm) generalized expressions describing many-body gravity and derived a new equation for the distribution of dark matter from conventional simulators.
Future Development of Symbolic Regression Algorithms
Symbolic regression is becoming a powerful tool in the scientist's toolbox. The generalization and scalability of Transformer-based methods remain hot research topics that have not yet penetrated general scientific practice; as more researchers adapt and improve these models, they promise to further advance scientific discovery.
Many of these projects are developed under open source licenses, so we can expect them to make an impact within a few years, and potentially to be applied more widely than proprietary software such as Eureqa and TuringBot.
Symbolic regression is a natural complement to deep learning models, whose outputs are often opaque and hard to interpret; output expressed in the more understandable language of mathematics can help generate new testable hypotheses and drive intuitive leaps.
These characteristics, together with the straightforward capabilities of the latest generation of symbolic regression algorithms, promise greater opportunities for moments of significant discovery.
The above is the detailed content of Neurosymbolic Regression: Extracting Science from Data. For more information, please follow other related articles on the PHP Chinese website!
