maintain account sequence #239
Conversation
I've reviewed the changes and found a few issues related to thread safety and sequence handling that should be addressed.
```go
txmod  Module
config *TxConfig

accountNumber uint64
```
The TxHelper now maintains internal state (accountNumber, nextSequence), which makes it not thread-safe. While action_msg currently handles synchronization externally, this makes TxHelper dangerous to use in other contexts.
Consider adding a sync.Mutex to TxHelper and locking in ExecuteTransaction (and other state-modifying methods) to ensure thread safety.
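A minimal sketch of what that could look like, assuming the field names shown in the diff; the placeholder types and the simplified ExecuteTransaction signature are illustrative, not the actual helper.go API:

```go
package txmod

import (
	"context"
	"sync"
)

// Placeholder types; only the locking pattern matters in this sketch.
type (
	TxConfig   struct{}
	Msg        interface{}
	TxResponse struct{ Code uint32 }
)

// TxHelper guards its sequence state with a mutex so a single instance
// can be shared across goroutines without data races.
type TxHelper struct {
	mu            sync.Mutex
	config        *TxConfig
	accountNumber uint64
	nextSequence  uint64
	seqInit       bool
}

// ExecuteTransaction holds the lock across the whole build/sign/broadcast
// flow so two goroutines can never sign with the same sequence number.
func (h *TxHelper) ExecuteTransaction(ctx context.Context, msgs []Msg) (*TxResponse, error) {
	h.mu.Lock()
	defer h.mu.Unlock()
	// ... build and sign with h.nextSequence, broadcast, and bump
	// h.nextSequence only when the tx is accepted (Code == 0) ...
	return &TxResponse{}, nil
}
```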
```go
// Run full tx flow
resp, err := h.ExecuteTransactionWithMsgs(ctx, []types.Msg{msg}, localAcc)
if err == nil {
```
Checking only err == nil is insufficient: a transaction can fail CheckTx (e.g. insufficient fees) and still return a response object with a non-zero Code. In that case the sequence is not incremented on-chain.
Incrementing nextSequence here would cause drift, forcing the next transaction to fail with a sequence mismatch and trigger a retry. You should also check resp.TxResponse.Code == 0:
```go
if err == nil {
	if resp != nil && resp.TxResponse != nil && resp.TxResponse.Code == 0 {
		// SUCCESS → bump local sequence
		h.nextSequence++
	}
	return resp, nil
}
```
jawad-rafique left a comment:
Reviewed pkg/lumera/modules/tx/helper.go changes.
Feedback:
- TxHelper now stores accountNumber/nextSequence/seqInit without any synchronization. If a single TxHelper instance can be used concurrently (multiple goroutines), this introduces data races and can still produce duplicate/out-of-order sequences. Consider guarding sequence state with a sync.Mutex (or making the helper explicitly single-threaded in docs and enforcing it at call sites).
- The retry path resyncs from committed chain state via AccountInfoByAddress. In the common mismatch scenario where another tx is already in the mempool but not yet committed, that query will still return the old sequence and the retry may fail again. Consider parsing the “expected N, got M” value from the error and setting nextSequence = expected before retrying (or using a dedicated RPC for mempool-aware account sequence, if available); see the sketch after this list.
- isSequenceMismatch is very broad (it matches on “expected” + “got”) and may classify unrelated errors as sequence mismatches. If possible, narrow the matching (e.g. codespace/auth + ErrWrongSequence, or a more specific regex).
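A minimal sketch of that parsing step. The regex assumes the Cosmos SDK wording “account sequence mismatch, expected N, got M”, and the function name parseExpectedSequence is hypothetical; confirm the exact message this client actually sees:

```go
package txmod

import (
	"regexp"
	"strconv"
)

// expectedSeqRe matches the sequence-mismatch wording, e.g.
// "account sequence mismatch, expected 42, got 40: incorrect account sequence".
// Anchoring on the full phrase also gives a narrower classifier than
// matching "expected" + "got" alone.
var expectedSeqRe = regexp.MustCompile(`account sequence mismatch, expected (\d+), got (\d+)`)

// parseExpectedSequence pulls the "expected" value out of a mismatch error
// so the caller can set nextSequence to it before retrying. The second
// return value is false when the error is not a recognizable mismatch.
func parseExpectedSequence(err error) (uint64, bool) {
	if err == nil {
		return 0, false
	}
	m := expectedSeqRe.FindStringSubmatch(err.Error())
	if m == nil {
		return 0, false
	}
	seq, convErr := strconv.ParseUint(m[1], 10, 64)
	if convErr != nil {
		return 0, false
	}
	return seq, true
}
```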
CI: cascade-e2e-tests is currently failing on this PR (Finalize simulation failed due to invalid metadata: “Kademlia ID doesn’t match expected format”). Might be unrelated to sequence tracking, but it will need to be green before merge.
PR Description

This PR fixes the long-standing incorrect account sequence errors when multiple cascade finalizations occur close together.

Key changes:
- TxHelper now owns and tracks the local account sequence (accountNumber, nextSequence), instead of re-querying committed state before every tx; see the sketch below.
- Deterministic sequence handling: added strict serialization in action_msg (already in place) to ensure only one tx flow per signer.

This resolves the race where mempool txs hadn’t been committed yet, causing “expected X, got Y” failures.
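A minimal sketch of the “query once, then track locally” pattern described above, reusing the TxHelper fields from the mutex sketch earlier; AccountInfo, the query parameter, and currentSequence are illustrative stand-ins, not the actual helper.go API:

```go
// AccountInfo stands in for the committed account state returned by the
// chain (the review refers to AccountInfoByAddress for this lookup).
type AccountInfo struct {
	AccountNumber uint64
	Sequence      uint64
}

// currentSequence seeds the helper's sequence state from the chain exactly
// once, then returns the locally tracked value on every later call. The
// query parameter is a placeholder for the real chain client.
func (h *TxHelper) currentSequence(
	ctx context.Context,
	addr string,
	query func(context.Context, string) (*AccountInfo, error),
) (uint64, error) {
	if !h.seqInit {
		acc, err := query(ctx, addr)
		if err != nil {
			return 0, err
		}
		h.accountNumber = acc.AccountNumber
		h.nextSequence = acc.Sequence
		h.seqInit = true
	}
	return h.nextSequence, nil
}
```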
Outcome: