
New article from Solana's co-founder: Solana's concurrent leader mechanism, solving MEV and building a global price discovery engine


Author: Anatoly Yakovenko

Translation: 深潮TechFlow


A high-level overview of the design follows. Like most permissionless blockchains, Solana's goal is to minimize the MEV that chain operators extract from users.

Solana's approach is to reduce MEV by maximizing competition between leaders (i.e. block producers). That means shorter slot times, fewer consecutive slots scheduled for any single leader, and more concurrent leaders per slot.

In general, more leaders per second means that a user willing to wait T seconds has more upcoming leaders to choose from and can pick the one offering the best quote. More leaders also means that good leaders can offer block space more cheaply, making it easier for users to transact only with good leaders and to exclude the bad ones.

The market should decide what counts as good and what counts as bad.

Solana's larger vision is to be a global, permissionless price discovery engine that can compete with the best performance of any centralized exchange (CEX).

If a market-moving event happens in Singapore, the news still has to travel over fiber at the speed of light to reach a CEX in New York. Before it gets there, a leader in the Solana network should already have broadcast the information in a block. Unless a physical partition of the internet occurs at the same time, Solana's state will already reflect the news by the time it reaches New York, so there should be no arbitrage opportunity between the New York CEX and Solana.
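
As a rough back-of-the-envelope check (the distance and fiber propagation speed below are illustrative approximations, not figures from the article), the one-way fiber latency from Singapore to New York is on the order of 70–80 ms, which is the window a nearby Solana leader has to include the news in a block:

```python
# Rough one-way latency for light travelling through fiber from Singapore to
# New York. Both constants are approximations used only for illustration.
GREAT_CIRCLE_KM = 15_300        # approximate Singapore -> New York distance
FIBER_SPEED_KM_S = 200_000      # light in fiber travels at roughly 2/3 of c

one_way_ms = GREAT_CIRCLE_KM / FIBER_SPEED_KM_S * 1000
print(f"one-way fiber latency ≈ {one_way_ms:.0f} ms")  # ≈ 77 ms
```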

To fully realize this goal, Solana needs many concurrent leaders and strong optimistic confirmation guarantees.

Configuring multiple leaders

Just like the current leader schedule, the system assigns each slot 2 leaders instead of 1. To distinguish them, one channel is labeled A and the other B, and A and B can rotate independently (a minimal sketch follows the list below). Implementing this plan requires answering the following questions:

  • What happens if blocks A and B arrive at different times, or one of them fails?

  • How are the transaction orderings of blocks A and B merged?

  • How is block capacity split between A and B?
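
As a first illustration of the two-channel design, here is a minimal sketch of what such a schedule could look like; the names and structure are assumptions for illustration, not Solana's actual leader-schedule format:

```python
from dataclasses import dataclass

@dataclass
class DualLeaderSchedule:
    """Two independent leader schedules, one per channel (illustrative only)."""
    channel_a: list[str]  # validator identity for channel A, indexed by slot
    channel_b: list[str]  # validator identity for channel B, indexed by slot

    def leaders_for_slot(self, slot: int) -> tuple[str, str]:
        # The two channels rotate independently, so the pairing of A and B
        # leaders changes from slot to slot.
        return (
            self.channel_a[slot % len(self.channel_a)],
            self.channel_b[slot % len(self.channel_b)],
        )

schedule = DualLeaderSchedule(
    channel_a=["validator-1", "validator-2", "validator-3"],
    channel_b=["validator-4", "validator-5"],
)
print(schedule.leaders_for_slot(7))  # ('validator-2', 'validator-5')
```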

Transmitting concurrent blocks

To understand the specific process, we first need a quick overview of Turbine.

The leader splits the block into shreds as it builds it. Each batch of 32 data shreds is erasure-coded into 32 additional coding shreds. The resulting batch of 64 shreds is merkleized, its root is signed, and each batch is chained to the previous one.

Each shred is transmitted over its own independent, deterministically random path. The retransmitters of the shreds in the last batch sign the root.

From the receiver's perspective, each receiver needs to collect 32 shreds of the batch from authenticated retransmitters. Any missing shreds are repaired from random peers.

This number can be increased or decreased with minimal impact on latency.
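
A small sketch of the batch structure described above, with Reed-Solomon coding and merkle hashing stubbed out; the field names are illustrative and do not reflect the actual Turbine wire format:

```python
from dataclasses import dataclass
import hashlib

@dataclass
class ShredBatch:
    """One erasure-coded batch of shreds (illustrative, not the real wire format)."""
    data_shreds: list[bytes]     # 32 data shreds produced from the block
    coding_shreds: list[bytes]   # 32 erasure-code shreds (stubbed below)
    prev_root: bytes             # merkle root of the previous batch (chaining)
    root: bytes                  # merkle root over all 64 shreds
    signature: bytes             # leader's signature over the root (stubbed)

def make_batch(data_shreds: list[bytes], prev_root: bytes) -> ShredBatch:
    assert len(data_shreds) == 32
    # Stand-in for Reed-Solomon: real coding shreds let any 32 of 64 rebuild the data.
    coding_shreds = [hashlib.sha256(s).digest() for s in data_shreds]
    # Stand-in merkle root: hash over all shreds, chained to the previous root.
    root = hashlib.sha256(prev_root + b"".join(data_shreds + coding_shreds)).digest()
    signature = b"<leader-signature-over-root>"  # placeholder
    return ShredBatch(data_shreds, coding_shreds, prev_root, root, signature)
```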

Assuming the retransmitters' shred-path sampling is sufficiently random and stake-weighted, the stake required to cooperatively partition the network, both in terms of arrival times and of data, will be far more than ε stake. If a receiver sees that 32 shreds out of each batch of 32/64 (configurable) arrive within time T, then most likely every node does as well, because 32 random nodes is a large enough sample that it is unlikely they all happen to fall in the same partition.
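
A quick sanity check of that claim (the partition sizes below are arbitrary illustrations): if the 32 retransmit paths a receiver waits for are sampled independently and stake-weighted, the chance that all of them land inside a single partition holding a fraction f of the stake is roughly f^32:

```python
# Probability that all 32 independently sampled, stake-weighted retransmit
# paths fall inside one partition controlling fraction f of the stake.
for f in (0.10, 0.33, 0.50):
    print(f"f = {f:.2f}: f**32 ≈ {f**32:.2e}")
# f = 0.10: ~1e-32, f = 0.33: ~3.9e-16, f = 0.50: ~2.3e-10
```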

If a partition occurs, consensus needs to resolve it. This does not affect security, but is relatively slow.

Producing multiple blocks

When a single block is transmitted per slot, each receiver (including the next leader) sees the shred batches for each block arrive. If a block is still incomplete after T milliseconds, the current leader skips it and builds its fork without it. If that leader is wrong, the rest of the nodes vote on the block and the leader's own block gets skipped instead. A non-faulty leader immediately switches to the heaviest fork indicated by the votes.

With multi-block transmission, each node needs to wait up to T milliseconds before voting on the set of blocks it has observed. With two concurrent leaders, the possible outcomes are A, B, or both A and B. The additional latency is only incurred when a block is delayed; under normal operation all blocks should arrive at about the same time, and each validator can vote as soon as both have arrived. In practice, T can therefore be close to zero.
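
A minimal sketch of that voting rule for two concurrent leaders; the function shape and timeout handling are illustrative assumptions, not the actual validator logic:

```python
def vote_target(arrived: set[str], waited_ms: float, timeout_ms: float) -> set[str] | None:
    """Decide which of the two concurrent blocks {'A', 'B'} to vote on.

    Vote immediately once both blocks have arrived; otherwise wait up to
    timeout_ms and then vote on whatever subset was observed.
    """
    if arrived == {"A", "B"}:
        return arrived            # normal case: both arrived, no extra latency
    if waited_ms >= timeout_ms:
        return arrived            # delayed case: vote on the partial view
    return None                   # keep waiting

print(vote_target({"A", "B"}, waited_ms=0, timeout_ms=50))   # {'A', 'B'}
print(vote_target({"A"}, waited_ms=60, timeout_ms=50))       # {'A'}
print(vote_target({"A"}, waited_ms=10, timeout_ms=50))       # None
```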

The attack to watch for is whether a leader with a very small amount of stake can transmit its block slightly late, right at the slot boundary, and thereby reliably split the network and force it to spend a long time resolving the split through consensus. Part of the network would vote for A, part for B, and part for both A and B, and all three of these splits would have to be resolved by the consensus mechanism.

Specifically, the goal for neighborhood 0 should be to ensure that nodes recover blocks at roughly the same time. If an attacker has a cooperating node in neighborhood 0, it can transmit 31 of the 64 shreds normally and let the attacker selectively transmit the last shred in an attempt to create a partition. Honest nodes can detect which retransmitters were late and, as soon as they recover the block, push the missing shreds onward; a retransmitter can continue as soon as it receives the shred from anywhere or recovers it. Blocks should therefore be recovered by all nodes shortly after any single honest node recovers them. Testing is needed to determine how long to wait, whether the timeout is absolute or weighted by the arrival time of each shred, and whether staked-node reputation should be used.

The probability of a colluding leader and retransmitter on any given block is roughly (P leader stake) × (64 × P retransmitter stake). With 1% of stake, an attacker can attempt the attack on roughly half of the shred batches in slots where the attacker is scheduled as leader. Detection and mitigation therefore need to be robust enough.
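
Spelling out the arithmetic (the 1% stake fraction is from the text; the exact-versus-approximate comparison is an added illustration): 64 × 1% slightly overstates the chance, while the exact value 1 − (1 − 0.01)^64 ≈ 0.47 is where the "roughly half" figure comes from:

```python
# Chance that an attacker with stake fraction p also controls at least one of
# the 64 retransmitters of a batch in a slot where it is already the leader.
p = 0.01                      # 1% of stake (from the text)
union_bound = 64 * p          # the 64*P approximation: 0.64
exact = 1 - (1 - p) ** 64     # ~0.47, i.e. roughly half of the attacker's batches
print(union_bound, round(exact, 3))
```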

This attack has minimal impact on the next leader, because asynchronous execution allows unused capacity to be carried forward. If the current leader forces the next leader to skip a slot, and the next leader has 4 consecutive slots, the unused capacity of the skipped slot is carried over, letting that leader re-include the skipped slot's transactions.

Merging concurrent blocks

If users send the same transaction to both leaders A and B in order to increase the chance of inclusion or to be first in the block, resources are wasted. When that happens, adding more concurrent leaders yields very limited performance improvement, because they are simply processing twice as many junk transactions.

To avoid duplicate transactions, the top N bits of the fee payer determine which leader channel a transaction is valid in. In this example, the top bit selects A or B. Fee payers must be assigned to an exclusive channel so the leader can be certain that the fee payer is valid and has not spent all of its lamports (the smallest currency unit on the Solana blockchain) with another leader.
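
A minimal sketch of this routing rule with two channels; the one-bit mapping over the first byte of the fee payer's public key is an assumed way of reading "top N bits", not a specification:

```python
def channel_for_fee_payer(fee_payer_pubkey: bytes, n_bits: int = 1) -> str:
    """Map a fee payer to a leader channel using the top n_bits of its pubkey."""
    top_bits = fee_payer_pubkey[0] >> (8 - n_bits)
    return "AB"[top_bits] if n_bits == 1 else f"channel-{top_bits}"

# With one bit, keys whose first byte is < 0x80 route to channel A, others to B.
print(channel_for_fee_payer(bytes([0x3F] + [0] * 31)))  # 'A'
print(channel_for_fee_payer(bytes([0xC2] + [0] * 31)))  # 'B'
```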

This forces a spammer to pay at least twice for a logically identical transaction, but a spammer may still send logically identical transactions to increase the probability of being first.

To discourage this behavior, users can include an additional order fee that is 100% burned, on top of the leader's priority fee. Orders with the highest order fees execute first; otherwise first-in-first-out (FIFO) ordering is used, and ties are resolved with a deterministic random permutation. It is therefore more cost-effective for a spammer to raise its order fee and execute first than to pay inclusion fees twice.
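
A sketch of that ordering rule; the field names, the tie-break seed, and the hash-based permutation are illustrative assumptions:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Tx:
    tx_id: bytes
    order_fee: int      # 100%-burned fee bid for earlier ordering
    arrival_index: int  # FIFO position as seen by the leader

def deterministic_tiebreak(tx: Tx, seed: bytes) -> bytes:
    # Deterministic pseudo-random permutation: hash the tx id with a shared
    # seed (how the seed is derived is an assumption here).
    return hashlib.sha256(seed + tx.tx_id).digest()

def order_transactions(txs: list[Tx], seed: bytes) -> list[Tx]:
    # Highest order fee first, then FIFO, then the deterministic tie-break.
    return sorted(txs, key=lambda t: (-t.order_fee, t.arrival_index,
                                      deterministic_tiebreak(t, seed)))

txs = [Tx(b"tx1", order_fee=0, arrival_index=0),
       Tx(b"tx2", order_fee=5, arrival_index=1)]
print([t.tx_id for t in order_transactions(txs, seed=b"block-seed")])  # [b'tx2', b'tx1']
```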

To handle bundled and reordered transaction sequences, the system needs to support bundled transactions that can attach an order fee covering the ordering cost of the whole sequence. A fee payer is only valid in its assigned channel, so a bundle can only manipulate ordering within its own channel.

Alternatively, order fees may not be necessary at all. If FIFO ordering is used and spammers are always charged the priority fee in every channel, it may be possible to simply let the market decide between paying N leaders to increase the chance of inclusion and paying only the leader most likely to include the transaction first.

Managing block resources

With two concurrent leaders, every system-wide block capacity limit needs to be split equally between them. This applies not just to total capacity but to each specific limit; for example, for the write-lock limit, each leader may schedule at most 6 million compute units (CUs) of transactions writing to any one account, and at most 24 million CUs of transactions in total. That way, even in the worst case, the merged block cannot exceed the system's total capacity limits.

This mechanism may lead to fee volatility and resource underutilization, because the fee required for scheduling priority is determined by each leader's own capacity, and each leader knows little about the scheduling state of the other concurrent leaders.

To mitigate resource underutilization and the resulting fee spikes, any unused block capacity should roll over to future blocks. That is, if the current merged block leaves X of its write-lock, total-byte, or total compute unit (CU) budget unused, K*X should be added to the next block's limits, where 0 < K < 1, up to some maximum. Asynchronous execution can lag the tip of the chain by up to an epoch, so capacity rollover can be fairly aggressive.
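
A sketch of the rollover rule, using the halved per-leader limits above as the base; only the K*X carry-over idea comes from the text, while the constants, the cap, and the function shape are assumptions:

```python
# Per-leader base limits, i.e. half of system-wide limits of 48M CU per block
# and 12M CU per write-locked account implied by the halved figures above.
BASE_BLOCK_CU = 24_000_000
BASE_WRITE_LOCK_CU = 6_000_000

K = 0.5                      # fraction of unused capacity carried forward (0 < K < 1)
MAX_BONUS_CU = 12_000_000    # cap on carried-over capacity (assumed)

def next_block_limit(base: int, used: int, carried: int = 0) -> int:
    """Return the next block's limit after rolling over K * unused capacity."""
    unused = max(0, base + carried - used)
    bonus = min(MAX_BONUS_CU, int(K * unused))
    return base + bonus

# A block that only used 16M of its 24M CU budget lends 4M CU to the next one.
print(next_block_limit(BASE_BLOCK_CU, used=16_000_000))  # 28000000
```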

Based on recent block data, most blocks are typically about 80% full, while write-lock usage is well below 50% of its limit. Generally speaking, there should always be some spare capacity for future blocks. Since blocks may temporarily exceed capacity limits, execution must occur asynchronously from consensus. For more on the asynchronous execution proposal, see the APE article.


Source: chaincatcher.com