Add warp contract implementation #718
Conversation
    if readOnly {
        return nil, remainingGas, vmerrs.ErrWriteProtection
    }
    // unpack the arguments
Should the readOnly check be moved earlier in the function so we don't have to deduct gas from remainingGas first?
We could save gas in the case that someone calls this incorrectly in a readOnly state. I don't think this change is necessary, though, because the caller should not call this function when it's in a readOnly state.
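For illustration, here is a minimal, self-contained sketch of the ordering being discussed; names like deductGas and sendWarpMessageGasCost are illustrative stand-ins, not the actual implementation:

```go
package warpsketch

import "errors"

var (
	errOutOfGas        = errors.New("out of gas")
	errWriteProtection = errors.New("write protection")
)

const sendWarpMessageGasCost uint64 = 1_000 // hypothetical base cost

// deductGas subtracts the required gas from the supplied gas, erroring on
// underflow, mirroring the helper pattern used by precompiles.
func deductGas(suppliedGas, requiredGas uint64) (uint64, error) {
	if suppliedGas < requiredGas {
		return 0, errOutOfGas
	}
	return suppliedGas - requiredGas, nil
}

// sendWarpMessage deducts gas first and checks readOnly second, so a caller
// that incorrectly invokes it in a readOnly context still pays the base cost
// before hitting write protection. Swapping the two checks would refund that
// gas, at the cost of special-casing a path callers are not expected to hit.
func sendWarpMessage(input []byte, suppliedGas uint64, readOnly bool) ([]byte, uint64, error) {
	remainingGas, err := deductGas(suppliedGas, sendWarpMessageGasCost)
	if err != nil {
		return nil, 0, err
	}
	if readOnly {
		return nil, remainingGas, errWriteProtection
	}
	// ... unpack the arguments and emit the warp log ...
	return nil, remainingGas, nil
}
```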
1. Read the SourceChainID of the signed message (C-Chain)
2. Look up the SubnetID that validates C-Chain: Primary Network
3. Look up the validator set of Subnet B (instead of the Primary Network) and the registered BLS Public Keys of Subnet B at the P-Chain height specified by the ProposerVM header
4. Filter the validators of Subnet B to include only the validators that the signed message claims as signers
Do we assume that they signed the message already? I.e., if there are too many validators, do we guarantee that a message related to a particular subnet will be relayed to the validators of that subnet and that they will sign it?
Yes, at this point we are verifying a signature, so we assume they have all already signed it. I'm not sure I follow the second part of the question; could you rephrase? We do not guarantee message delivery at this level in any case.
I see. I was asking whether it is practical to assume "everyone" has signed it, or whether we guarantee that Subnet B's validators will receive those messages (as a priority). But I don't think this is in scope at the moment?
We do not require that everyone has signed it, only a threshold of stake. That threshold is configurable by the receiving VM, so if it is set to 67%, the VM makes no distinction between 70% and 100% of stake signing a message: both are considered valid.
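To make the threshold semantics concrete, here is a hedged sketch of the stake check described above; meetsQuorum and the numerator/denominator parameters are illustrative, not the real verification code:

```go
package warpsketch

import "math/big"

// meetsQuorum reports whether the signers' combined stake clears the
// configured quorum, i.e. signedWeight/totalWeight >= numerator/denominator.
// Cross-multiplying with big.Int avoids uint64 overflow on large weights.
// The check is a pure threshold: with a 67% quorum, 70% and 100% of stake
// signing are treated identically.
func meetsQuorum(signedWeight, totalWeight, quorumNumerator, quorumDenominator uint64) bool {
	lhs := new(big.Int).Mul(new(big.Int).SetUint64(signedWeight), new(big.Int).SetUint64(quorumDenominator))
	rhs := new(big.Int).Mul(new(big.Int).SetUint64(totalWeight), new(big.Int).SetUint64(quorumNumerator))
	return lhs.Cmp(rhs) >= 0
}
```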
It returns the message if present and a boolean indicating if a message is present.

To use this function, the transaction must include the signed Avalanche Warp Message encoded in the [predicate](#predicate-encoding) of the transaction. Prior to executing a block, the VM iterates through transactions and pre-verifies all predicates. If a transaction's predicate is invalid, then it is considered invalid to include in the block and dropped.
Does this mean we allow only 1 warp message per block?
No, the message is encoded on a per-transaction basis, so currently there can only be one warp message per transaction; a block may still contain multiple such transactions.
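As a rough illustration of the one-message-per-transaction encoding, here is a sketch of how signed message bytes could be chunked into 32-byte access-list storage keys under the precompile's address. The padding/delimiter details are simplified and warpPrecompileAddr is a placeholder; the real scheme is specified in the predicate-encoding section of the README:

```go
package warpsketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// predicateStorageKeys zero-pads the signed message to a multiple of 32
// bytes and splits it into access-list storage keys. The real encoding also
// includes a delimiter so the original length can be recovered.
func predicateStorageKeys(signedMsg []byte) []common.Hash {
	padded := make([]byte, (len(signedMsg)+31)/32*32)
	copy(padded, signedMsg)
	keys := make([]common.Hash, 0, len(padded)/32)
	for i := 0; i < len(padded); i += 32 {
		keys = append(keys, common.BytesToHash(padded[i:i+32]))
	}
	return keys
}

// attachPredicate places the chunks under the warp precompile's address in
// the transaction's access list, which is what ties one signed message to
// one transaction.
func attachPredicate(warpPrecompileAddr common.Address, signedMsg []byte) types.AccessList {
	return types.AccessList{{
		Address:     warpPrecompileAddr,
		StorageKeys: predicateStorageKeys(signedMsg),
	}}
}
```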
    if err != nil {
        return fmt.Errorf("failed to parse warp log data into unsigned message (TxHash: %s, LogIndex: %d): %w", txHash, logIndex, err)
    }
    log.Info("Accepted warp unsigned message", "txHash", txHash, "logIndex", logIndex, "logData", common.Bytes2Hex(logData))
Isn't this signed and verified by VerifyPredicate at this point?
No. When you send a message, it produces a log. When you verify a message, the VM verifies its predicate and makes the message readable throughout the transaction's execution.
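To make the distinction concrete, here is a hedged sketch of the accept-time path the diff above belongs to. onAcceptWarpLog, unsignedMessage, and parseUnsignedMessage are hypothetical stand-ins, and, as noted, no signature verification happens on this path:

```go
package warpsketch

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// unsignedMessage stands in for the warp unsigned-message type.
type unsignedMessage struct{ bytes []byte }

// parseUnsignedMessage stands in for the real parser; it would decode the
// log data emitted by sendWarpMessage back into an unsigned message.
func parseUnsignedMessage(b []byte) (*unsignedMessage, error) {
	if len(b) == 0 {
		return nil, fmt.Errorf("empty warp log data")
	}
	return &unsignedMessage{bytes: b}, nil
}

// onAcceptWarpLog runs when a block is accepted: each warp log is parsed
// and indexed so validators can later sign the message on request. Predicate
// and signature verification only occur when a *signed* message arrives as a
// predicate on some future transaction.
func onAcceptWarpLog(txHash common.Hash, logIndex int, logData []byte, index func(*unsignedMessage) error) error {
	msg, err := parseUnsignedMessage(logData)
	if err != nil {
		return fmt.Errorf("failed to parse warp log data into unsigned message (TxHash: %s, LogIndex: %d): %w", txHash, logIndex, err)
	}
	return index(msg)
}
```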
Creating this GitHub issue to ensure that the Warp README provides sufficient documentation for the target audiences: #754.
I think this looks good and we can continue the README changes in a follow-up PR
    // within [predicateContext].
    func (c *Config) verifyWarpMessage(predicateContext *precompileconfig.ProposerPredicateContext, warpMsg *warp.Message) error {
        // Use default quorum numerator unless config specifies a non-default option
        quorumNumerator := params.WarpDefaultQuorumNumerator
Instead of this, can we configure the default in module.go's Configure and set it to c.QuorumNumerator?
related to #718 (comment)
I think it's simpler to keep this entirely in-memory on the config for warp itself and provide that from the config during verification, so that we don't need to write it into the state. I think writing it in the state makes sense when we want it to be mutable via on-chain events.
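A minimal sketch of the in-memory approach described above, with illustrative names (defaultQuorumNumerator's value and the field layout are assumptions, not the actual config):

```go
package warpsketch

const defaultQuorumNumerator uint64 = 67 // illustrative; denominator fixed at 100

// Config keeps the quorum numerator purely in memory: a zero value means
// "use the default", and a non-zero value overrides it. Nothing is written
// into state, which would only become necessary if the value had to be
// mutable via on-chain events.
type Config struct {
	QuorumNumerator uint64 `json:"quorumNumerator,omitempty"`
}

// quorumNumerator resolves the effective value at verification time.
func (c *Config) quorumNumerator() uint64 {
	if c.QuorumNumerator == 0 {
		return defaultQuorumNumerator
	}
	return c.QuorumNumerator
}
```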
    DestinationAddress ids.ID         `serialize:"true"`
    Payload            []byte         `serialize:"true"`
    SourceAddress      common.Address `serialize:"true"`
    DestinationChainID common.Hash    `serialize:"true"`
Why do we change this from ids.ID to common.Hash?
It's a no-op change from one alias to the other, but I changed it to common.Hash to stick with the common package throughout instead of mixing one type from each.
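For context, both types have the same 32-byte representation, so the swap is a no-op at the serialization level; a small sketch (the underlying [32]byte declarations are paraphrased from the two upstream packages):

```go
package warpsketch

import (
	"github.com/ava-labs/avalanchego/ids"
	"github.com/ethereum/go-ethereum/common"
)

// Both ids.ID and common.Hash are declared as [32]byte in their packages,
// so converting between them is a plain fixed-size array conversion and the
// serialized bytes are unchanged.
func idToHash(id ids.ID) common.Hash { return common.Hash(id) }
func hashToID(h common.Hash) ids.ID  { return ids.ID(h) }
```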
Hey, Daniel from LimeChain here. Thanks for providing a thorough explanation of the design of the Warp protocol. Here are some of my considerations that popped up while I was reading the README. Sorry if some of those questions seem off / not relevant.
Disclaimer:

- I do not have a deep understanding of the finality guarantees and configuration options for Subnets
- I've gone through the specification and not the actual implementation

Thoughts:

- Minimum Enforced Latency - How would the receiving subnet guard against reorgs on the source subnet? Is there a minimum_delay for messages that would cover the cases where the source subnet changes its state (due to not being finalised)? The protocol must guard receiving subnets from applying messages from a sender subnet before they are "finalised", otherwise they risk their own finalisation. Applying a message from a network which gets reorged after the message is delivered must trigger the receiving network to reorg as well and re-apply all state transitions from the point where the message was "applied". However, if the receiving network indeed re-applies the state, that can be an attack vector for malicious Subnets to disturb the finalisation of receiving networks. A solution to this problem might be a configurable minimal enforced latency/delay on messages for each subnet that the given subnet accepts messages from.
- DDoS Protection - How is the C-Chain protected against malicious subnets DDoS-ing it by sending lots of Warp messages? Is there economic security that would guard the receiving networks against DDoS attacks from malicious subnets that create lots of messages which need to be verified by receiving subnets? This becomes even harder if the message is multicast and reaches multiple subnets. Ideally, there should be something that the receiving network gets in exchange for "verifying" the message. Gas costs in the straightforward implementation do not count: they are security guarantees for the validators in the sending subnet against malicious users that want to DDoS the sending subnet, but they do not protect receiving subnet validators from malicious sending validators. I am also not sure whether a maximum number of messages that can get applied (queued) in the receiving network per block has been considered. Ideally this number would be configurable, similar to the block's gas limit.
- Syncing the network - It seems that the P-Chain cannot be used to provide the data (validator sets) needed for new nodes to sync the network trustlessly, meaning verify the authenticity of messages on their own. I am not quite sure how the issue has been resolved using transaction predicates. My assumption on how transaction predicates work might not be correct, but as far as I understand, the "message bytes" are added as data in the access list of the EOA transaction. If that is the case, there would be DevEx trade-offs such as:
  - How would the EOA know what message is required for the TX?
  - What if the EOA interacts with a dApp that interacts with a second dApp that uses the Warp protocol? How would the UI of the first dApp know that the warp message should be placed within the access list?

  In general, I think it would be great to have additional information on how the Warp protocol will ensure that new nodes are able to follow / sync the network, ideally (although it might not be technically possible) without trusting the consensus at the time of message inclusion, but rather being able to compute / deduce the same facts themselves. If syncing nodes rely on the "fact" that a message was considered valid at block X, then the full chain history after block X becomes trusted rather than computed by the syncing node.
- Misc
  - Latency - Developers would be interested in message latency, meaning how long it would take to deliver a message on a destination subnet.
  - Costs - Developers would be interested in gas costs for sendWarpMessage and getVerifiedMessage
Thanks for taking a look!
Avalanche Subnets offer fast finality after a few seconds, such that there are no reorgs after a block has been accepted. Subnets only sign warp messages after a block has been accepted, so receiving subnets do not need to worry about handling reorgs.
The cost for sending a message is paid fully on the source subnet and the cost of verifying/receiving a message is paid fully on the destination subnet. In other words, you can send as many messages as you want from a Subnet to the C-Chain or between any two Subnets, which may create a surplus of outstanding messages to be delivered to the C-Chain, but delivering each one requires paying the full cost of verifying/receiving that message on the C-Chain.
Yes, we assume that the message was correctly verified at the time the Subnet accepted it. We make the same assumption to speed up bootstrapping in a few other places. To perform full verification of all Avalanche Warp Messages, we could either rely on fast archival lookups on the P-Chain to verify historical messages or switch to including proofs of the validator sets used to verify messages at a specific point in time. For now, we rely on the assumption that the network correctly verified warp messages at the time they were accepted. It would be a reasonable feature request to add an option to fully verify every operation during a full sync.
Warp is planned to be used as a primitive to build cross-chain communication protocols on top of. A relayer would listen for Avalanche Warp Messages and deliver them to the recipient chain.
Great question: a communication layer built on top of Warp could handle this by delivering messages across multiple transactions and storing them in state to be read all at once later. Alternatively, you could build a communication layer on top that verifies signed block hashes and proves the contents of a block, all of its ancestors, and any transactions/messages sent based off of that information.
The latency depends on how fast the relayer is able to aggregate signatures from the validator set of the sending Subnet and how quickly it can deliver the transaction on the receiving Subnet. In general, the latency should be dominated by the time for a block to be accepted on the source subnet (signatures cannot be aggregated until this point) plus the latency of confirming the transaction on the destination, which comes to approximately two block confirmations: one on each of the source and receiver.
The gas costs are present in the Warp contract code. We can add documentation to break down the gas costs for sendWarpMessage and getVerifiedMessage.
Why this should be merged
This PR implements the warp precompile with experimental support. Addresses #440
This PR replaces #586
Future TODOs before production readiness are:
How this works
This PR adds a warp precompile to integrate warp messaging into Subnet-EVM. This includes a Solidity interface for contracts that want to send a message via warp here.
How this was tested
This code has been tested through precompile unit tests, an e2e VM level test for the VM's handling of verification and acceptance of warp interactions, and an e2e ginkgo test that sends a warp message from subnet A to subnet B where the validator sets of subnet A and B are disjoint.
How is this documented
https://github.com/ava-labs/subnet-evm/blob/warp-e2e/precompile/contracts/warp/README.md