
feat(l1): pipeline body download with execution #6491

Merged
ilitteri merged 3 commits into main from feat/pipeline-body-download on Apr 28, 2026

Conversation

@azteca1998
Contributor

Summary

  • Restructures sync_cycle_full to overlap body downloads with block execution
  • Spawns a background task that downloads bodies and sends batches through mpsc::channel(2) (2-batch buffer)
  • Main loop receives pre-fetched batches and executes them — next batch downloads while current batch executes
  • Handles both single-batch (headers in memory) and multi-batch (headers from store) paths
  • Errors from the download task propagate cleanly to the execution loop
  • Est. +12-15% throughput by hiding body download latency behind execution

Closes #6479
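The producer/consumer shape described above can be sketched with std primitives (illustrative only; the PR itself uses tokio::spawn and tokio::sync::mpsc, but the bounded-channel backpressure is the same idea):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Stand-in for the download/execute pipeline: a bounded channel of
// capacity 2 lets the producer run at most two batches ahead.
fn pipeline(batches: Vec<Vec<u64>>) -> u64 {
    let (tx, rx) = sync_channel::<Vec<u64>>(2); // 2-batch buffer
    let producer = thread::spawn(move || {
        for batch in batches {
            // send() blocks once two batches are queued (backpressure);
            // it errors if the receiver was dropped, so we stop early.
            if tx.send(batch).is_err() {
                return;
            }
        }
    });
    let mut executed = 0;
    for batch in rx {
        executed += batch.len() as u64; // "execute" the batch
    }
    producer.join().expect("producer panicked");
    executed
}

fn main() {
    assert_eq!(pipeline(vec![vec![1, 2, 3], vec![4, 5], vec![6]]), 6);
}
```

With capacity 2, the producer can run at most two batches ahead of the consumer, which is what bounds memory during sync.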

Test plan

  • Run full sync on hoodi with FULL_SYNC_BLOCK_LIMIT=50000 and verify completion
  • Compare total sync time with and without pipelining
  • Verify error handling: download task stops when execution errors

@github-actions

github-actions Bot commented Apr 16, 2026

Lines of code report

Total lines added: 64
Total lines removed: 0
Total lines changed: 64

Detailed view
+-------------------------------------------+-------+------+
| File                                      | Lines | Diff |
+-------------------------------------------+-------+------+
| ethrex/crates/networking/p2p/sync/full.rs | 356   | +64  |
+-------------------------------------------+-------+------+

@github-actions

🤖 Claude Code Review



PR Review: feat(sync): pipeline body download with execution (#6491)

Overview

This PR refactors sync_cycle_full to overlap network I/O (body downloads) with CPU work (block execution) using a Tokio task + mpsc::channel(2). The design is sound: the 2-slot buffer limits memory consumption while providing enough pipelining to hide download latency behind execution. The estimated 12-15% throughput gain is plausible.


Correctness

Silent skip of final_batch in empty batches

At full.rs:212-217 (multi-batch path):

if !blocks.is_empty() {
    if body_tx.send(Ok((blocks, final_batch))).await.is_err() {
        return;
    }
}

If the last batch (where final_batch = true) somehow produces zero blocks, the receiver never learns the sync completed. The execution loop drains normally, but add_blocks_in_batch never gets called with final_batch = true, which means the final batch is never executed in sequential (add_block_pipeline) mode. In practice this can't happen (a batch with headers will always have bodies), but it is a latent correctness hazard. The same guard existed in the original code, so this is not a regression — but it's worth noting.

Single-batch path always uses final_batch = true

full.rs:131:

let final_batch = true;

In the original code, even the single-batch path iterated for start in (start_block_number..end_block_number).step_by(*EXECUTE_BATCH_SIZE). If headers.len() > EXECUTE_BATCH_SIZE, the first chunks were executed with final_batch = false (batch mode, faster) and only the final chunk with final_batch = true (sequential, stores per-block state). The new code sends all headers as one message with final_batch = true, meaning all blocks are executed sequentially regardless of count.

In practice single_batch = true implies one P2P header response, which is bounded by the fetch limit (≤1024), and EXECUTE_BATCH_SIZE defaults to 1024 — so they're unlikely to diverge. But if they can diverge (e.g. EXECUTE_BATCH_SIZE is configured lower), this is a silent behavior change: blocks that should have been batch-executed will now be executed one-by-one, which is correct but slower.

Download task not awaited on execution error

full.rs:241:

download_task.await?;

This is only reached if the while let loop exits cleanly. If the loop exits early via result? (execution error), sync_cycle_full returns immediately, dropping the JoinHandle without awaiting. The download task continues running until it detects the closed channel. For a sync function this is tolerable, but worth noting: the task may hold peer connections or locks briefly after the function has declared failure.

A cleaner pattern (not mandatory, but good hygiene):

// on early exit, abort the download task
download_task.abort();

Or use a CancellationToken that's shared with the download task.
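For illustration, the cancellation idea can be mimicked with std primitives: a shared flag the worker checks before each fetch, plus dropping the receiver to unblock a pending send (all names here are invented for the sketch, not from the PR):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical sketch: a shared flag plays the role of the token; the
// worker checks it before each "download" and also stops when the
// channel closes, so either signal ends it promptly.
fn run_with_cancel() -> usize {
    let cancelled = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&cancelled);
    let (tx, rx) = sync_channel::<usize>(2);
    let worker = thread::spawn(move || {
        for i in 0.. {
            if flag.load(Ordering::Relaxed) {
                return; // observed cancellation before the next fetch
            }
            if tx.send(i).is_err() {
                return; // receiver dropped: channel-closure fallback
            }
        }
    });
    let mut received = 0;
    for _ in 0..3 {
        if rx.recv().is_ok() {
            received += 1;
        }
    }
    cancelled.store(true, Ordering::Relaxed);
    drop(rx); // also unblocks a send() waiting on the full buffer
    worker.join().expect("worker panicked");
    received
}

fn main() {
    assert_eq!(run_with_cancel(), 3);
}
```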


Code Quality

Significant code duplication

The inner body-download loop appears verbatim in both the single-batch path (full.rs:134-154) and the multi-batch path (full.rs:190-211). Extract a helper:

async fn download_bodies(
    peers: &mut PeerHandler,
    mut headers: Vec<BlockHeader>,
) -> Result<Vec<Block>, SyncError> {
    let mut blocks = Vec::new();
    while !headers.is_empty() {
        let end = min(MAX_BLOCK_BODIES_TO_REQUEST, headers.len());
        match peers.request_block_bodies(&headers[..end]).await {
            Ok(Some(bodies)) => {
                debug!("Obtained: {} block bodies", bodies.len());
                let batch = headers.drain(..bodies.len())
                    .zip(bodies)
                    .map(|(h, b)| Block { header: h, body: b });
                blocks.extend(batch);
            }
            Ok(None) => return Err(SyncError::BodiesNotFound),
            Err(e) => return Err(e.into()),
        }
    }
    Ok(blocks)
}

This would eliminate ~40 lines of duplication and make the two paths trivial to compare.

Asymmetric send-error handling

Single-batch path (full.rs:157):

let _ = body_tx.send(Ok((blocks, final_batch))).await;

Multi-batch path (full.rs:213):

if body_tx.send(Ok((blocks, final_batch))).await.is_err() {
    return;
}

The single-batch path discards the send result. Since it returns immediately afterward, checking the result would change nothing if the send fails, so the two paths are functionally equivalent but inconsistent, which is confusing to a reader. For style consistency, either check the result in both paths or add a comment explaining why it is intentionally ignored.

Unused variable binding

full.rs:131: let final_batch = true; could just be a literal true at the call site, or at minimum named const SINGLE_BATCH_IS_FINAL: bool = true; to make intent explicit. Minor.


Error Propagation

Error handling from the download task is correct:

  • body_tx.send(Err(e)) propagates errors to the receiver via result? in the main loop.
  • Panics in the download task propagate via download_task.await? as SyncError::JoinHandle.
  • Channel closure detection in the multi-batch path (is_err()) stops the download task when the receiver exits early.

The From impls on SyncError handle all the .into() conversions cleanly.


Performance

The channel buffer of 2 is a good tradeoff: the download task can prefetch one batch ahead while the current batch executes. Large buffers would increase memory pressure during sync.

No concerns on the execution-side performance; add_blocks_in_batch structure is unchanged.


Summary

Category Severity Item
Correctness Medium Single-batch always uses final_batch = true, potentially changing execution mode for large single batches
Correctness Low Empty batch silently drops final_batch = true signal
Design Low Download task not aborted/awaited on execution error
Style Medium Inner body-download loop duplicated in both paths
Style Low Asymmetric send-error handling in single vs multi-batch

The pipelining design is solid. The main request before merging is to extract the duplicated download loop into a shared helper and verify the single-batch / EXECUTE_BATCH_SIZE interaction. The test plan (hoodi full sync + timing comparison) is the right way to validate the throughput claim.


Automated review by Claude (Anthropic) · sonnet · custom prompt

@github-actions

🤖 Codex Code Review

Findings

  1. The new downloader task is only joined on the success path, so failure paths now leave background peer requests running after sync_cycle_full has already returned. The task is spawned at full.rs:128, but it is only awaited at full.rs:241. Any early ? from the receive loop at full.rs:223 or from block execution at full.rs:230 returns immediately and skips that join. Because the task uses a cloned PeerHandler, it can continue issuing request_block_bodies calls until its next channel send fails, which means stale full-sync traffic can overlap with the next sync attempt. I’d treat this as a correctness/resource-lifecycle regression. Suggested fix: keep the spawned task in a guard and abort/await it on every exit path, or pass a cancellation signal into the downloader and always await shutdown before returning.

Notes

I did not see an EVM/gas-accounting or consensus-rule regression in the diff itself; batch ordering is still FIFO. I couldn’t run cargo check in this environment because rustup tried to write under a read-only path, so compile/test verification is still outstanding.


Automated review by OpenAI Codex · gpt-5.4 · custom prompt

@azteca1998 azteca1998 changed the title feat(sync): pipeline body download with execution feat(l1): pipeline body download with execution Apr 16, 2026
@github-actions github-actions Bot added the L1 Ethereum client label Apr 16, 2026
@azteca1998 azteca1998 marked this pull request as ready for review April 27, 2026 10:44
@azteca1998 azteca1998 requested a review from a team as a code owner April 27, 2026 10:44
@ethrex-project-sync ethrex-project-sync Bot moved this to In Review in ethrex_l1 Apr 27, 2026
@github-actions

🤖 Claude Code Review



PR #6491: feat(l1): pipeline body download with execution

Overview

This PR restructures sync_cycle_full to overlap block body downloads with block execution using a tokio::spawn background task and an mpsc::channel(2). The design is architecturally sound — a 2-slot buffer bounds memory while providing enough lookahead to hide download latency behind execution. The estimated 12-15% throughput gain is plausible.


Correctness

Task abandoned on error path (medium)

At line 228:

while let Some(result) = body_rx.recv().await {
    let (blocks, final_batch) = result?;   // <-- early return on error
    ...
}
download_task.await?;  // line 246 — only reached on happy path

When result? propagates an error, the function returns early, dropping body_rx and the download_task JoinHandle. In Tokio, dropping a JoinHandle does not cancel the spawned task — it continues running in the background until it detects that the channel is closed. This is safe, but the task is never joined on the error path, so any panic it raises is silently swallowed. Consider aborting the task explicitly on early exit, e.g.:

let result = async {
    while let Some(result) = body_rx.recv().await {
        let (blocks, final_batch) = result?;
        add_blocks_in_batch(...).await?;
    }
    Ok::<_, SyncError>(())
}.await;

download_task.await?;
result?;
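A std-thread analogue of that restructuring (capture the loop's outcome, always join the worker, then propagate) could look like this; the names are hypothetical, not the PR's code:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical sketch: run the receive loop without early-returning,
// join the producer unconditionally, then surface the loop's error.
fn consume_with_join(fail_at: usize) -> Result<usize, String> {
    let (tx, rx) = sync_channel::<usize>(2);
    let producer = thread::spawn(move || {
        for i in 0..5 {
            if tx.send(i).is_err() {
                return; // consumer gone: stop producing
            }
        }
    });
    // Capture the loop's outcome instead of `?`-returning mid-function...
    let result = (move || {
        let mut n = 0;
        for item in rx.iter() {
            if item == fail_at {
                return Err(format!("execution failed at {item}"));
            }
            n += 1;
        }
        Ok(n)
    })(); // rx is dropped here, so a blocked producer send() errors out
    // ...so the producer is always joined before any error propagates.
    producer.join().map_err(|_| "producer panicked".to_string())?;
    result
}

fn main() {
    assert_eq!(consume_with_join(99), Ok(5)); // no failure: all 5 consumed
    assert!(consume_with_join(2).is_err()); // failure still propagates
}
```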

cancel_token not honored in download task (minor)

The download task captures neither cancel_token nor any shutdown signal. On cancellation, add_blocks_in_batch returns an error, the function exits early, and the download task keeps requesting bodies from peers until it discovers the closed channel. For a high-latency call like request_block_bodies, this can meaningfully delay shutdown. Checking cancel_token.is_cancelled() at the top of the body-download loop in the background task would fix this.

final_batch = true silently dropped when blocks.is_empty() (minor)

In the single-batch path (lines 161–163):

if !blocks.is_empty() {
    let _ = body_tx.send(Ok((blocks, final_batch))).await;
}

And similarly in the multi-batch path (lines 217–222). If the last batch produced zero blocks (e.g., all headers were already canonical), neither path sends final_batch = true. The execution loop exits normally, but add_blocks_in_batch is never called with final_batch = true, meaning no batch triggers sequential pipeline execution (run_blocks_pipeline). In practice this cannot happen because a non-empty header list always yields at least one body, but the guard is a latent hazard that is hard to reason about. This is a pre-existing pattern in the original code, so not a regression.


Code Quality

Significant duplication — the body download inner loop appears twice

Lines 139–159 (single-batch) and 195–215 (multi-batch) are almost identical ~25-line blocks. Extracting a helper makes the intent clearer and ensures future fixes apply uniformly:

async fn download_bodies(
    peers: &mut PeerHandler,
    mut headers: Vec<BlockHeader>,
) -> Result<Vec<Block>, SyncError> {
    let mut blocks = Vec::new();
    while !headers.is_empty() {
        let end = min(MAX_BLOCK_BODIES_TO_REQUEST, headers.len());
        match peers.request_block_bodies(&headers[..end]).await {
            Ok(Some(bodies)) => {
                debug!("Obtained: {} block bodies", bodies.len());
                let batch = headers.drain(..bodies.len())
                    .zip(bodies)
                    .map(|(h, b)| Block { header: h, body: b });
                blocks.extend(batch);
            }
            Ok(None) => return Err(SyncError::BodiesNotFound),
            Err(e) => return Err(e.into()),
        }
    }
    Ok(blocks)
}

Inconsistent send-error handling between paths

  • Single-batch path (line 162): let _ = body_tx.send(Ok((blocks, final_batch))).await; — error discarded (fine, since return immediately follows).
  • Multi-batch path (lines 217–221): body_tx.send(...).await.is_err() — error checked, task returns.

The single-batch path is fine because return follows immediately, but the inconsistency makes the intent harder to read. After extracting the helper above, the two paths can share the same send/return logic.

Redundant let final_batch = true; binding (nit)

Line 136:

let final_batch = true;

This is never reassigned and used only once. Inline the literal where it's used.


Performance

The channel buffer of 2 is a reasonable starting point and bounds memory to at most 2 batches in flight simultaneously. This is appropriate. No concerns.

The peers.clone() at line 130 is correct given that peers is &mut PeerHandler and can't be moved into the task.


Summary

The pipelining design is correct and the 2-slot channel buffer is appropriate. The main issues are:

  • The download task is not joined or aborted on the error path, meaning panics from the download task are dropped when execution fails.
  • cancel_token is ignored inside the download task, slowing shutdown.
  • The inner download loop is duplicated between the two branches — a helper function would clean this up significantly and prevent future divergence.

Automated review by Claude (Anthropic) · sonnet · custom prompt

@greptile-apps

greptile-apps Bot commented Apr 27, 2026

Greptile Summary

This PR restructures sync_cycle_full to pipeline block body downloads alongside execution, using a bounded mpsc::channel(2) so that the next batch is pre-fetched while the current batch executes. The approach is sound and the happy-path pipelining is correct.

  • P1 — task leak on error: When add_blocks_in_batch (or the channel error) triggers an early ? return, the download_task JoinHandle is dropped without being awaited or aborted. Tokio detaches dropped JoinHandles, so the download task continues fetching block bodies (up to one full batch) after the sync function has already returned with an error. The cancel_token is available in scope but not propagated into the spawned task.
  • P2 — final_batch signal lost on empty last batch: In the multi-batch path, if the last batch produces zero blocks the final_batch = true signal is never forwarded, causing any future non-empty batch to be treated as non-final; harmless in practice but fragile.

Confidence Score: 3/5

Not safe to merge as-is — the download task is not cancelled on execution error, leaking peer connections and network I/O on the sync failure path.

One P1 finding (task leak on early exit) that affects the error path on every sync failure. The happy path is correct and the pipelining logic is sound, but the missing abort/cancellation is a real resource management defect. P1 with a single occurrence caps confidence at 4, and the core sync path being affected brings it to 3.

crates/networking/p2p/sync/full.rs — specifically the execution loop / download_task lifecycle around lines 227–246.

Important Files Changed

Filename Overview
crates/networking/p2p/sync/full.rs Restructures sync_cycle_full to overlap body downloads with block execution via an mpsc channel; one P1 issue: the download task is not aborted/cancelled when the execution loop exits early via ?, allowing it to continue wasting network I/O for up to one full batch.

Sequence Diagram

sequenceDiagram
    participant SC as sync_cycle_full
    participant DT as download_task (tokio::spawn)
    participant PH as PeerHandler
    participant CH as mpsc::channel(2)
    participant EL as execution loop
    participant BC as Blockchain

    SC->>DT: tokio::spawn(async move { ... })
    SC->>EL: while let Some(result) = body_rx.recv()

    loop For each batch
        DT->>PH: request_block_bodies(header_batch)
        PH-->>DT: Ok(Some(bodies))
        DT->>CH: body_tx.send(Ok((blocks, final_batch)))
        CH-->>EL: body_rx.recv() → (blocks, final_batch)
        EL->>BC: add_blocks_in_batch(blocks, final_batch)
        BC-->>EL: Ok(())
    end

    DT-->>SC: task completes (channel closed)
    EL-->>SC: body_rx exhausted
    SC->>DT: download_task.await? (join)

    Note over DT,EL: On execution error: EL returns early via ?,\nJoinHandle dropped → task detaches (P1 leak)
Reviews (1): Last reviewed commit: "Merge remote-tracking branch 'origin/mai..." | Re-trigger Greptile

Comment on lines +227 to +246
while let Some(result) = body_rx.recv().await {
    let (blocks, final_batch) = result?;
    info!(
        "Executing {} blocks for full sync. First block hash: {:#?} Last block hash: {:#?}",
        blocks.len(),
        blocks.first().ok_or(SyncError::NoBlocks)?.hash(),
        blocks.last().ok_or(SyncError::NoBlocks)?.hash()
    );
    add_blocks_in_batch(
        blockchain.clone(),
        cancel_token.clone(),
        blocks,
        final_batch,
        store.clone(),
    )
    .await?;
}

// Ensure the download task completes and propagate any panics
download_task.await?;

P1 Download task leaks on execution error

When result? or add_blocks_in_batch(...).await? returns an error, sync_cycle_full returns early and the download_task JoinHandle is dropped. In Tokio, dropping a JoinHandle detaches the task — it keeps running. The background task will continue downloading block bodies (up to one full batch) until it tries to send on the closed channel. This wastes peer connections and network I/O after a sync failure.

The cancel_token is already in scope but not wired into the download task. A minimal fix is to abort the task on early exit, or to select on the token inside the download loop:

// On error exit, ensure the task stops immediately
while let Some(result) = body_rx.recv().await {
    let (blocks, final_batch) = result?;
    // ...
    add_blocks_in_batch(...).await.map_err(|e| {
        download_task.abort();
        e
    })?;
}

Or, propagate the existing cancel_token into the tokio::spawn closure and select! on cancel_token.cancelled() inside the body-download loop.


Comment thread on crates/networking/p2p/sync/full.rs (Outdated)
Comment on lines +217 to +222
if !blocks.is_empty()
    && body_tx.send(Ok((blocks, final_batch))).await.is_err()
{
    // Receiver dropped (execution loop stopped), stop downloading
    return;
}

P2 Silent skip when empty blocks on final batch in multi-batch path

The condition if !blocks.is_empty() && body_tx.send(...).await.is_err() short-circuits when blocks is empty — correct for execution. But if blocks is empty on the iteration where final_batch == true, the final_batch = true signal is never sent. This means the execution loop never calls add_blocks_in_batch with final_batch = true for that iteration, which causes blocks to be executed via batch mode rather than pipeline mode (add_blocks checks sync_head_found). In practice an empty last batch is unlikely, but the asymmetry between the two code paths (single-batch always sends final_batch = true) is fragile and worth documenting.


Comment on lines +133 to +165
let download_task = tokio::spawn(async move {
    // If single_batch, we already have headers in memory — send them as the one and only batch.
    if single_batch {
        let final_batch = true;
        let mut batch_headers = headers;
        let mut blocks = Vec::new();
        while !batch_headers.is_empty() {
            let end = min(MAX_BLOCK_BODIES_TO_REQUEST, batch_headers.len());
            let header_batch = &batch_headers[..end];
            match download_peers.request_block_bodies(header_batch).await {
                Ok(Some(bodies)) => {
                    debug!("Obtained: {} block bodies", bodies.len());
                    let block_batch = batch_headers
                        .drain(..bodies.len())
                        .zip(bodies)
                        .map(|(header, body)| Block { header, body });
                    blocks.extend(block_batch);
                }
                Ok(None) => {
                    let _ = body_tx.send(Err(SyncError::BodiesNotFound)).await;
                    return;
                }
                Err(e) => {
                    let _ = body_tx.send(Err(e.into())).await;
                    return;
                }
            }
        }
        if !blocks.is_empty() {
            let _ = body_tx.send(Ok((blocks, final_batch))).await;
        }
        return;
    }

P2 Single-batch path silently swallows empty-block case

If headers arrives genuinely empty (all headers were canonical and block_headers.drain removed them all), blocks stays empty and nothing is sent to the channel. The overall control flow is correct and safe, but it would be worth an explicit comment so future readers don't wonder whether a send was accidentally dropped.


Overlap block body downloads with block execution during full sync by
using a background tokio task and an mpsc channel (buffer of 2 batches).

The download task iterates through batches, fetches headers from the
store, downloads bodies from peers, constructs Block objects, and sends
them through the channel. The main loop receives completed batches and
executes them immediately. This hides network I/O latency behind
execution time for an estimated 12-15% throughput improvement.

Closes #6479
@azteca1998 azteca1998 force-pushed the feat/pipeline-body-download branch from 3073071 to 7c9875d Compare April 27, 2026 10:49
@github-actions

🤖 Codex Code Review

  1. crates/networking/p2p/sync/full.rs:227 and crates/networking/p2p/sync/full.rs:245 — any error from result? or add_blocks_in_batch(...).await? exits sync_cycle_full before the join happens. Dropping a Tokio JoinHandle detaches the spawned downloader, so it can keep issuing stale GetBlockBodies requests and mutating peer scores after the sync cycle has already failed and may have been retried. This should explicitly abort/join the task on every early-return path.

  2. crates/networking/p2p/sync/full.rs:126, crates/networking/p2p/sync/full.rs:194, and crates/networking/p2p/snap/constants.rs:107 — the pipeline buffers whole Vec<Block> execute batches, each up to 1024 blocks. At peak you can have one batch executing, two more queued in the channel, and another already materialized in the producer waiting on send(), which is a substantial RSS increase versus the old one-batch-at-a-time flow. For mainnet-sized bodies this can turn the throughput win into memory pressure or OOM risk; consider buffering smaller units or reducing queue depth.
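A quick back-of-envelope check of that worst case, using illustrative constants matching the figures cited above (actual values depend on configuration and block sizes):

```rust
// Illustrative constants taken from the figures cited in the review;
// actual resident memory depends on configuration and block sizes.
const EXECUTE_BATCH_SIZE: usize = 1024; // blocks per execute batch
const CHANNEL_CAPACITY: usize = 2;      // mpsc::channel(2)

// Worst case: one batch executing, two buffered in the channel, and one
// already materialized in the producer, blocked on send().
fn worst_case_blocks_in_memory() -> usize {
    let executing = 1;
    let queued = CHANNEL_CAPACITY;
    let pending_send = 1;
    (executing + queued + pending_send) * EXECUTE_BATCH_SIZE
}

fn main() {
    assert_eq!(worst_case_blocks_in_memory(), 4096);
}
```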

Otherwise the ordering and final_batch behavior look preserved, and I don’t see a direct EVM/gas/consensus-rule regression in this diff.

I couldn’t run cargo check here because the sandbox blocks writable cargo/rustup state and network fetches.


Automated review by OpenAI Codex · gpt-5.4 · custom prompt

@ilitteri ilitteri added this pull request to the merge queue Apr 28, 2026
Merged via the queue into main with commit e9ab479 Apr 28, 2026
55 checks passed
@ilitteri ilitteri deleted the feat/pipeline-body-download branch April 28, 2026 08:54
@github-project-automation github-project-automation Bot moved this from In Review to Done in ethrex_l1 Apr 28, 2026