Table of Contents
A new era of machine learning has begun
KAN is back at the table
Theoretical basis of KAN
KAN Architecture
Implementation details
Parameters
Stronger performance" >Stronger performance
Interactive interpretation of KAN
Interpretability Verification
Pareto optimal
Solving Partial Differential Equations
Continual learning without catastrophic forgetting
Rediscovering knot theory, surpassing DeepMind
Solving Anderson localization in physics

MLP killed overnight! Revolutionary KAN from MIT, Caltech, and others breaks records and discovers mathematical theorems, crushing DeepMind

May 06, 2024, 03:10 PM
Tags: AI, math

Overnight, the machine learning paradigm is about to change!

Today, the infrastructure that dominates the field of deep learning is the multilayer perceptron (MLP)—which places activation functions on neurons.

So, besides this, are there any new routes we can take?


Just today, teams from MIT, Caltech, Northeastern University, and other institutions released a major work: a new neural network architecture, Kolmogorov-Arnold Networks (KAN).


The researchers made a simple change to the MLP: the learnable activation functions are moved from the nodes (neurons) to the edges (weights)!


Paper address: https://arxiv.org/pdf/2404.19756

This change may seem baseless at first glance, but it has a profound connection to approximation theory in mathematics.

It turns out that the Kolmogorov-Arnold representation corresponds to a two-layer network with learnable activation functions on the edges rather than on the nodes.

Inspired by the representation theorem, researchers used neural networks to explicitly parameterize the Kolmogorov-Arnold representation.

It is worth mentioning that the name KAN commemorates two great late mathematicians, Andrey Kolmogorov and Vladimir Arnold.


Experimental results show that KAN outperforms traditional MLPs, improving both the accuracy and the interpretability of neural networks.


Most unexpectedly, the visualization and interactivity of KAN give it potential value in scientific research, helping scientists discover new mathematical and physical laws.

In the research, the author used KAN to rediscover the mathematical laws in knot theory!

Moreover, KAN reproduced DeepMind's 2021 results with a much smaller network and more automation.


In physics, KAN can help physicists study Anderson localization (a phase transition in condensed matter physics).

By the way, all examples of KAN in the study (except parameter scanning) can be reproduced in less than 10 minutes on a single CPU.


The emergence of KAN directly challenges the MLP architecture that has long dominated machine learning, and it caused an uproar across the internet.

A new era of machine learning has begun

Some people say that a new era of machine learning has begun!


A Google DeepMind research scientist said: "Kolmogorov-Arnold strikes again! A little-known fact: this theorem appeared in a seminal paper on permutation-invariant neural networks (Deep Sets), illustrating the intricate connection between this representation and the way set/GNN aggregators are constructed (as a special case)."


A new neural network architecture was born! KAN will dramatically change the way artificial intelligence is trained and fine-tuned.


Is AI entering the 2.0 era?


Some netizens drew a vivid, plain-language metaphor for the difference between KAN and MLP:

A Kolmogorov-Arnold Network (KAN) is like a three-layer cake recipe that can bake any cake, while a Multi-Layer Perceptron (MLP) is a custom cake with a varying number of layers. The MLP is more complex but more general, while the KAN is static but simpler and faster for a single task.


Paper author and MIT professor Max Tegmark said the new paper shows that an architecture completely different from the standard neural network can achieve higher accuracy with fewer parameters when solving interesting physics and mathematics problems.


Next, let's take a look at how KAN, which represents the future of deep learning, is implemented.

KAN is back at the table

Theoretical basis of KAN

The Kolmogorov-Arnold representation theorem states that if f is a continuous multivariate function on a bounded domain, then f can be written as a finite composition of continuous univariate functions and addition.

f(x) = f(x_1, \ldots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)

For machine learning, the problem can be restated as: learning a high-dimensional function can be reduced to learning a polynomial number of one-dimensional functions.

But these one-dimensional functions may be non-smooth, even fractal, and therefore unlearnable in practice. It is precisely because of this "pathological behavior" that the Kolmogorov-Arnold representation theorem has largely been treated as a "death sentence" in machine learning: the theory is correct but useless in practice.

In this article, the researchers are still optimistic about the application of this theorem in the field of machine learning and propose two improvements:

1. In the original formulation there are only two layers of nonlinearities and one hidden layer of width 2n + 1; the network can be generalized to arbitrary width and depth;

2. Most functions in science and daily life are largely smooth and have sparse compositional structures, which may lead to smooth Kolmogorov-Arnold representations. This is similar to the difference between physicists and mathematicians: physicists care more about typical cases, while mathematicians care more about worst cases.

KAN Architecture

The core design idea of the Kolmogorov-Arnold Network (KAN) is to turn the problem of approximating a multivariate function into the problem of learning a set of univariate functions. Within this framework, every univariate function is parameterized as a B-spline: a local, piecewise-polynomial curve with learnable coefficients.
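To make this concrete, here is a minimal sketch of a single KAN edge, a univariate function parameterized by learnable B-spline coefficients, using SciPy. It illustrates the idea only; the grid range, order, and initialization are assumptions, not the authors' implementation.

```python
# A minimal sketch (not the paper's code) of one KAN "edge": a univariate
# function parameterized by learnable B-spline coefficients.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # spline order (cubic), as in the paper
grid = np.linspace(-1.0, 1.0, 6)         # G = 5 intervals on a bounded domain
# B-splines need k extra knots padded on each side of the grid
knots = np.concatenate([[grid[0]] * k, grid, [grid[-1]] * k])
n_coef = len(knots) - k - 1              # number of basis functions (= G + k)
coef = np.random.randn(n_coef) * 0.1     # learnable coefficients

phi = BSpline(knots, coef, k, extrapolate=True)   # phi(x) = sum_i c_i * B_i(x)

x = np.linspace(-1, 1, 5)
print(phi(x))                            # the edge's activation values for inputs x
```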


In order to extend the two-layer network of the original theorem to deeper and wider networks, the researchers propose a more "generalized" version of the theorem to support the design of KAN:

Inspired by the cascading structure of MLPs for increasing network depth, the paper introduces an analogous concept, the KAN layer, which is a matrix of one-dimensional functions, each with trainable parameters.


According to the Kolmogorov-Arnold theorem, the original KAN layer consists of inner functions and outer functions, corresponding to different input and output dimensions. This design of stacking KAN layers not only extends the depth of KANs but also preserves the network's interpretability and expressiveness: each layer is composed of univariate functions that can be learned and understood independently.

f in the following formula is equivalent to a KAN:

\mathrm{KAN}(x) = (\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_1 \circ \Phi_0)(x)
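The layer-stacking idea can be sketched in a few lines. The snippet below uses random stand-in univariate functions instead of learnable splines, purely to show how a KAN layer maps n_in inputs to n_out outputs by summing one function per edge, and how layers compose in depth; it is an illustration, not the paper's code.

```python
# Sketch of the KAN-layer idea: an n_out x n_in grid of univariate functions,
# where each output is the sum of per-input edge functions.
import numpy as np

def make_edge():
    """A random univariate function standing in for a learnable spline."""
    a, b = np.random.randn(2)
    return lambda x: a * np.tanh(x) + b * x

class KANLayer:
    def __init__(self, n_in, n_out):
        # one univariate function per (input, output) pair, i.e. per edge
        self.edges = [[make_edge() for _ in range(n_in)] for _ in range(n_out)]

    def __call__(self, x):               # x: array of shape (n_in,)
        return np.array([sum(phi(x[p]) for p, phi in enumerate(row))
                         for row in self.edges])

# Stacking layers generalizes the two-layer Kolmogorov-Arnold form in depth.
layers = [KANLayer(2, 5), KANLayer(5, 1)]
x = np.array([0.3, -0.7])
for layer in layers:
    x = layer(x)
print(x)                                  # the network output
```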

Implementation details

Although the design of KAN looks simple, relying purely on stacking, it is not easy to optimize. The researchers also explored some tricks during training.

1. Residual activation function: the activation function ϕ(x) is constructed as a combination of a basis function b(x) and a spline function, borrowing the idea of residual connections, which helps stabilize training (see the sketch after this list).

\phi(x) = w\left(b(x) + \mathrm{spline}(x)\right), \quad b(x) = \mathrm{silu}(x) = \frac{x}{1 + e^{-x}}, \quad \mathrm{spline}(x) = \sum_i c_i B_i(x)

2. Initialization scales: each activation function is initialized as a spline close to zero, and the weight w uses Xavier initialization, which helps keep gradients stable in the early stages of training.

3. Updating the spline grid: since spline functions are defined on a bounded interval while activation values may drift outside that interval during training, the spline grid is updated dynamically to ensure the spline always operates over an appropriate range.
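Below is a minimal numerical sketch of tricks 1 and 3: the residual activation and the dynamic grid update. The SiLU basis follows the formula above, but the grid size, scales, and refit step are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the residual activation (trick 1), near-zero spline
# initialization (trick 2), and grid update (trick 3).
import numpy as np
from scipy.interpolate import BSpline

def silu(x):                       # the basis function b(x) = x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def make_spline(grid, coef, k=3):
    knots = np.concatenate([[grid[0]] * k, grid, [grid[-1]] * k])
    return BSpline(knots, coef, k, extrapolate=True)

grid = np.linspace(-1, 1, 6)       # G = 5 intervals
coef = np.zeros(len(grid) + 2)     # G + k coefficients, initialized near zero
w = 1.0                            # learnable overall scale
spline = make_spline(grid, coef)
phi = lambda x: w * (silu(x) + spline(x))   # residual activation

# Grid update: if pre-activations drift outside [-1, 1] during training,
# rebuild the grid so it covers the observed range of inputs.
activations = np.random.randn(1000) * 2.0           # pretend observed samples
new_grid = np.linspace(activations.min(), activations.max(), len(grid))
spline = make_spline(new_grid, coef)                 # (coefficients would be refit)
```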

Parameters

1. Network depth: L

2. Width of each layer: N

3. Each spline function is defined on G intervals (G + 1 grid points), with order k (usually k = 3)

So the number of parameters of a KAN is about O(N²L(G + k)), i.e. on the order of O(N²LG).

For comparison, the number of parameters of an MLP is O(N²L), which seems more efficient than a KAN. However, KANs can use much smaller layer widths N, which improves both generalization and interpretability.
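As a rough worked example (ignoring bias terms and per-edge scale factors, with illustrative sizes only), a small KAN can indeed end up with far fewer parameters than a wide MLP:

```python
# Back-of-the-envelope parameter counts (illustrative numbers only).
def kan_params(L, N, G, k=3):
    return L * N**2 * (G + k)      # ~O(N^2 * L * (G + k)) spline coefficients

def mlp_params(L, N):
    return L * N**2                # ~O(N^2 * L) weights

print(kan_params(L=2, N=5, G=3))   # a small KAN:  2 * 25 * 6 = 300
print(mlp_params(L=4, N=300))      # a wide MLP:   4 * 90000  = 360,000
```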

In what ways is KAN better than MLP?

Stronger performance

As a sanity check, the researchers constructed five examples known to have smooth Kolmogorov-Arnold (KA) representations as a validation dataset. KANs are trained by adding grid points every 200 steps, with G ranging over {3, 5, 10, 20, 50, 100, 200, 500, 1000}.

MLPs of different depths and widths were used as baselines. Both KANs and MLPs were trained with the LBFGS optimizer for a total of 1800 steps, and RMSE was used as the comparison metric.
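A generic PyTorch sketch of this training setup, LBFGS plus RMSE, might look as follows; the toy target function and the placeholder model are illustrative assumptions, not the paper's exact benchmark code.

```python
# Generic training loop matching the setup above: LBFGS optimizer, RMSE metric.
# The model here is a placeholder MLP; the same loop applies to a KAN.
import torch

def rmse(pred, target):
    return torch.sqrt(torch.mean((pred - target) ** 2))

x = torch.rand(1000, 2) * 2 - 1                                  # toy inputs in [-1, 1]^2
y = torch.exp(torch.sin(torch.pi * x[:, :1]) + x[:, 1:] ** 2)    # a smooth toy target

model = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.SiLU(),
                            torch.nn.Linear(16, 1))
opt = torch.optim.LBFGS(model.parameters(), lr=1.0)

for step in range(50):                       # the paper uses 1800 steps in total
    def closure():
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)
        loss.backward()
        return loss
    opt.step(closure)

print("train RMSE:", rmse(model(x), y).item())
```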


The results show that KAN's loss curves are bumpier but converge quickly and reach a plateau, and KAN's scaling curves beat those of MLP, especially in the high-dimensional cases.

It can also be seen that three-layer KANs perform much better than two-layer KANs, indicating that deeper KANs have stronger expressive power, as expected.

Interactive interpretation of KAN

The researchers designed a simple regression experiment to show how a user can interactively obtain the most interpretable results.


Assuming the user is interested in recovering the symbolic formula, a total of 5 interactive steps are required (sketched in code after the steps below).


Step 1: Training with sparsification.

Starting from a fully connected KAN, training with a sparsity regularizer makes the network sparser, after which 4 of the 5 hidden-layer neurons appear to be useless.

Step 2: Pruning

After automatic pruning, all useless hidden neurons are discarded, leaving a much smaller KAN whose activation functions can then be matched to known symbolic functions.

Step 3: Set up the symbolic function

Assuming the user can correctly guess these symbolic forms by inspecting the KAN diagram, they can set them directly.


If the user has no domain knowledge or does not know which symbolic functions these activations might be, the researchers provide a function, suggest_symbolic, to suggest symbolic candidates.

Step 4: Further training

After all activation functions in the network are symbolized, the only remaining parameters are the affine parameters. Continuing to train these affine parameters until the loss drops to machine precision indicates that the model has found the correct symbolic expression.

Step 5: Output symbolic formula

Use SymPy to compute the symbolic formula of the output node and verify the answer.
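Put together, the five steps might look roughly like the following sketch using the authors' pykan package (https://github.com/KindXiaoming/pykan); the function names follow the paper's description and early pykan examples and should be treated as assumptions that may differ across versions.

```python
# Hedged sketch of the 5-step interactive workflow with pykan.
# API names are assumptions based on the paper's description; check the
# current pykan documentation before running.
import torch
from kan import KAN, create_dataset

f = lambda x: torch.exp(torch.sin(torch.pi * x[:, [0]]) + x[:, [1]] ** 2)
dataset = create_dataset(f, n_var=2)

model = KAN(width=[2, 5, 1], grid=5, k=3)                  # 2 inputs, 5 hidden, 1 output
model.train(dataset, opt="LBFGS", steps=50, lamb=0.01)     # step 1: sparsify (lamb = sparsity strength)
model = model.prune()                                      # step 2: drop useless neurons
model.suggest_symbolic(0, 0, 0)                            # step 3: candidate symbols for one edge...
model.fix_symbolic(0, 0, 0, 'sin')                         #         ...or set them by hand
model.fix_symbolic(0, 1, 0, 'x^2')
model.fix_symbolic(1, 0, 0, 'exp')
model.train(dataset, opt="LBFGS", steps=50)                # step 4: refit the affine parameters
print(model.symbolic_formula())                            # step 5: output the recovered formula
```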

Interpretability Verification

The researchers first designed six supervised toy datasets to demonstrate KAN's ability to recover the compositional structure of symbolic formulas.


The results show that KAN successfully learned the correct univariate functions, and the visualization explains KAN's "reasoning" process.

In an unsupervised setting, the dataset contains only input features x. By designing relationships among certain variables (x1, x2, x3), the researchers test the KAN model's ability to discover dependencies among variables.


Judging from the results, the KAN model successfully found the functional dependencies among the variables, but the authors also point out that these are still experiments on synthetic data; a more systematic and controllable approach is needed to discover complete relationships.


Pareto optimal

By fitting special functions, the authors show the Pareto frontiers of KAN and MLP in the plane spanned by the number of model parameters and the RMSE loss.

Among all special functions, KAN always has a better Pareto front than MLP.


Solving Partial Differential Equations

In the task of solving partial differential equations, the researchers plotted the L2-squared and H1-squared losses between the predicted and true solutions.

In the figure below, the first two plots show the training dynamics of the losses, and the third and fourth show the scaling law of the loss against the number of parameters.

As the results show, KAN converges faster, reaches lower loss, and has a steeper scaling law than MLP.
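For reference, the two metrics can be sketched on a 1D grid as follows; the "exact" and "predicted" solutions here are placeholder functions, not the paper's PDE.

```python
# Sketch of the L2 and H1 error metrics: L2 compares function values,
# H1 additionally compares first derivatives.
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
u_true = np.sin(np.pi * xs)                            # placeholder exact solution
u_pred = u_true + 0.01 * np.random.randn(len(xs))      # placeholder network output

dx = xs[1] - xs[0]
l2_sq = np.trapz((u_pred - u_true) ** 2, xs)
h1_sq = l2_sq + np.trapz((np.gradient(u_pred, dx) - np.gradient(u_true, dx)) ** 2, xs)
print("L2^2:", l2_sq, "H1^2:", h1_sq)
```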


Continual learning without catastrophic forgetting

As we all know, catastrophic forgetting is a serious problem in machine learning.

The difference between artificial neural networks and the brain is that the brain has different modules that function locally in space. When learning a new task, structural reorganization occurs only in local areas responsible for the relevant skill, while other areas remain unchanged.

However, most artificial neural networks, including MLP, do not have this concept of locality, which may be the reason for catastrophic forgetting.

The research shows that KAN has local plasticity and can exploit the locality of splines to avoid catastrophic forgetting.

The idea is simple: since splines are local, a sample only affects a few nearby spline coefficients, while distant coefficients remain unchanged.

In contrast, since MLP usually uses global activation (such as ReLU/Tanh/SiLU), any local changes may propagate to distant regions uncontrollably, thereby destroying the information stored there.

The researchers used a one-dimensional regression task composed of 5 Gaussian peaks. The data around each peak was presented to KAN and MLP sequentially rather than all at once.
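A sketch of this data setup (the peak locations and widths are assumed values, not the paper's) might look like:

```python
# Continual-learning toy data: a 1D target made of five Gaussian peaks,
# presented one peak (one "phase") at a time.
import numpy as np

centers = np.linspace(-4, 4, 5)              # 5 peak locations (assumed spacing)

def target(x):
    return sum(np.exp(-(x - c) ** 2 / 0.1) for c in centers)

phases = []
for c in centers:                            # each phase only covers one peak
    x = np.random.uniform(c - 1, c + 1, size=200)
    phases.append((x, target(x)))

# A model would be trained on phases[0], then phases[1], and so on; thanks to
# the locality of its splines, a KAN should leave earlier peaks intact.
```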

The results are shown in the figure below. KAN only reconstructs the area where data exists in the current stage, leaving the previous area unchanged.

And MLP will reshape the entire area after seeing new data samples, leading to catastrophic forgetting.


Rediscovering knot theory, surpassing DeepMind

What does the birth of KAN mean for the future application of machine learning?

Knot theory is a branch of low-dimensional topology. It reveals topological properties of three-manifolds and four-manifolds and has wide applications in fields such as biology and quantum computing.


In 2021, the DeepMind team published in Nature the first AI-assisted results in knot theory.


Paper address: https://www.nature.com/articles/s41586-021-04086-x

In that study, a new theorem relating algebraic and geometric knot invariants was derived through supervised learning together with human domain experts.

That is, gradient saliency identified the key invariants for the supervised problem, which led domain experts to propose a conjecture that was subsequently refined and proven.

Here, the authors study whether KAN can achieve equally interpretable results on the same problem of predicting the signature of knots.

In the DeepMind experiment, the main results on the knot theory dataset were:

1. Using network attribution methods, they found that the signature σ mainly depends on the meridional distance μ and the longitudinal distance λ.

2. Human domain experts later discovered that the signature σ is highly correlated with the slope, and from this derived an approximate formula relating the two.

To study question (1), the authors treat 17 knot invariants as inputs and the signature as the output.

Similar to the DeepMind setup, signatures (which are even integers) are encoded as one-hot vectors, and the network is trained with a cross-entropy loss.
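A small sketch of this label encoding (with made-up signature values purely for illustration) could look like:

```python
# Knot signatures are even integers, so each signature is mapped to a class
# index and trained with cross-entropy; the values below are invented.
import torch

signatures = torch.tensor([-4, 0, 2, -2, 4, 0])         # even integers
classes = torch.unique(signatures)                      # sorted: [-4, -2, 0, 2, 4]
labels = torch.searchsorted(classes, signatures)        # class indices 0..C-1
one_hot = torch.nn.functional.one_hot(labels, num_classes=len(classes)).float()

logits = torch.randn(len(signatures), len(classes))     # placeholder network output
loss = torch.nn.functional.cross_entropy(logits, labels)
print(one_hot, loss.item())
```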

The results show that a very small KAN can achieve a test accuracy of 81.6%, while DeepMind's 4-layer, width-300 MLP achieved only 78% test accuracy.

As shown in the table below, the KAN (G = 3, k = 3) has about 200 parameters, while the MLP has about 300,000 parameters.


It is worth noting that KAN is not only more accurate but also more parameter-efficient than the MLP.

In terms of interpretability, the researchers scale the transparency of each activation in proportion to its magnitude, so it is immediately clear which input variables are important without any feature attribution.

Then, KAN is trained on the three important variables and obtains a test accuracy of 78.2%.


In this way, using KAN, the authors rediscovered three mathematical relationships in the knot dataset.


Solving Anderson localization in physics

KAN has also proven valuable in physics applications.

Anderson localization is a fundamental phenomenon in which disorder in a quantum system localizes the electron wave function, bringing all transport to a halt.

In one and two dimensions, the scaling argument shows that for any tiny random disorder, all electronic eigenstates are exponentially localized.

In contrast, in three dimensions, a critical energy forms a phase boundary that separates extended and localized states, which is called a mobility edge.

Understanding these mobility edges is critical to explaining a variety of fundamental phenomena such as metal-insulator transitions in solids, as well as the localization effects of light in photonic devices.

The authors found that KANs make it very easy to extract mobility edges, both numerically and symbolically.


Obviously, KAN has become a powerful assistant and important collaborator for scientists.

In summary, thanks to its accuracy, parameter efficiency, and interpretability, KAN will be a useful model and tool for AI + Science.

In the future, further applications of KAN in the scientific field have yet to be explored.
