From f3c916ca51fa383c1413524149422528aa0e16bf Mon Sep 17 00:00:00 2001 From: Harry Date: Wed, 29 Apr 2026 15:41:11 -0500 Subject: [PATCH 1/5] write product and tech specs --- specs/REMOTE-1519/PRODUCT.md | 43 +++++++++ specs/REMOTE-1519/TECH.md | 170 +++++++++++++++++++++++++++++++++++ 2 files changed, 213 insertions(+) create mode 100644 specs/REMOTE-1519/PRODUCT.md create mode 100644 specs/REMOTE-1519/TECH.md diff --git a/specs/REMOTE-1519/PRODUCT.md b/specs/REMOTE-1519/PRODUCT.md new file mode 100644 index 000000000..5d3076d91 --- /dev/null +++ b/specs/REMOTE-1519/PRODUCT.md @@ -0,0 +1,43 @@ +# Local-to-Cloud Handoff: UI Polish — Product Spec +Linear: [REMOTE-1519](https://linear.app/warpdotdev/issue/REMOTE-1519/make-ui-better-for-local-cloud-handoff) +## Summary +Polish the local-to-cloud handoff (REMOTE-1486) so that the cloud-mode pane that opens next to the local pane already shows the source conversation, and looks identical to a regular fresh cloud-mode run while the cloud agent is starting up. Today the user clicks the chip and is dropped into a blank pane that only fills in once the cloud agent's first turn streams in. +## Problem +Two related rough edges in the V0 handoff flow: +1. The new cloud-mode pane is empty between chip click and the cloud agent's first response. The user has lost their context — they have to remember what they handed off, or look at the local pane next to it. +2. The cloud-mode setup-v2 affordances (the "Running setup commands…" collapsible row that wraps the environment startup PTY output, the cloud-mode loading screen / queued-prompt indicator) work for fresh cloud-mode runs but render incorrectly during handoff. The handoff pane shows raw startup output instead of the polished setup-v2 surface. +## Goals +- The handoff pane is hydrated with the source conversation's AI exchanges immediately on chip click. 
The user sees the same conversation history they were just looking at, in the new pane, before they finish typing the follow-up. +- The cloud agent's shared-session replay (which rebroadcasts every exchange in the forked conversation) does not double-render content already on screen. Only genuinely new exchanges from the cloud agent appear after replay. +- The handoff pane uses the cloud-mode setup-v2 affordances during the loading phase, the same way a fresh cloud-mode run does: queued-prompt indicator, "Setting up environment" loading screen, "Running setup commands…" collapsible block wrapping the startup PTY output. +## Non-goals +- Bidirectional sync after handoff. The forked conversation diverges at chip-click; later edits in the local pane do not propagate to the cloud, and vice versa. Same posture as REMOTE-1486 V0. +- Restoring shell command blocks from the local pane into the new cloud pane. Only the conversation's AI exchanges are hydrated; terminal output that lived on the local terminal (e.g. unrelated commands run between agent turns) stays on the local pane. +- Cloud→cloud setup-v2 fixes. The cloud-cloud follow-up path (REMOTE-1290) may have similar gaps but is out of scope here; we'll only address local→cloud. +- A local "this conversation was handed off to " breadcrumb on the source pane. +## Behavior +### Fork timing and hydration on chip click +1. Clicking the "Hand off to cloud" chip (or invoking `/oz-cloud-handoff`) immediately mints a server-side fork of the source conversation. The new conversation token is returned synchronously to the client. +2. The new cloud-mode pane opens next to the local pane and is pre-populated with the source conversation's AI exchanges, rendered with live (non-restored) appearance — visually indistinguishable from staying in the local pane. +3. The forked conversation appears in the user's history under their account, owned by them. +4. 
Subsequent edits in the local pane after chip click do **not** appear in the handoff pane. The cloud agent will work against the conversation as it was at chip-click time. Users who want a more recent snapshot must close the handoff pane and click the chip again. +### Eligibility and fallback +5. Per-conversation eligibility (active conversation must be non-empty and have a synced server token) is unchanged from REMOTE-1486. When the active conversation isn't eligible, the chip still opens a fresh cloud-mode pane with no hydration and no fork — same fall-through as today. +6. If the server fork call fails for any reason (network, auth, source not synced to GCS), the new pane is **not** opened. The failure surfaces as an error toast in the local window. The local conversation is unaffected and the user can retry by clicking the chip again. +### Cloud session replay and dedup +7. When the cloud agent's shared session connects to the handoff pane, the agent's conversation replay rebroadcasts every exchange in the forked conversation. Because we already pre-populated the same exchanges, the replay events are suppressed at the response-stream level, identical to how cloud→cloud follow-up sessions handle stale replay (REMOTE-1290). +8. After the replay completes, genuinely new exchanges (the cloud agent's first response to the user's follow-up prompt) are appended normally. The user sees a smooth transition from "frozen pre-handoff state" to "cloud agent answering my follow-up prompt". +### Setup-v2 affordances during loading +9. After the user submits, the handoff pane shows the same cloud-mode setup-v2 affordances a fresh cloud-mode run shows: + - The submitted prompt as a queued user-query indicator (REMOTE-1454 visual treatment, no Send-now / dismiss buttons). + - The "Setting up environment" loading screen during the pre-session phase. + - The "Running setup commands…" collapsible row that wraps environment startup PTY output once the shared session connects. +10. 
When the cloud agent's first turn arrives, the queued-prompt indicator and the setup-v2 affordances tear down on the same transitions a fresh cloud-mode run uses (`AppendedExchange` for Oz, `HarnessCommandStarted` for non-Oz). +### Edge cases +11. If the user closes the handoff pane between chip click and submit, the server-side fork is orphaned (visible in the user's conversation history but never run against). V0 does not clean these up. +12. If the user clicks the chip twice on the same source conversation, two independent forks are minted — same as today's REMOTE-1486 chip behavior; nothing changes here. +13. The local pane is unaffected throughout: its conversation is not duplicated, archived, or annotated. The user can keep typing in the local pane. +## Success criteria +- Clicking the chip on a long conversation produces a fully populated handoff pane within ~300ms (network-dependent on the fork RPC), without flicker. +- The user never sees duplicate exchange blocks during the cloud agent's session connect / replay phase. +- The handoff pane's loading-phase UI is byte-for-byte identical to a fresh cloud-mode run's (modulo the pre-populated exchanges above the queued-prompt indicator). diff --git a/specs/REMOTE-1519/TECH.md b/specs/REMOTE-1519/TECH.md new file mode 100644 index 000000000..9eb185fef --- /dev/null +++ b/specs/REMOTE-1519/TECH.md @@ -0,0 +1,170 @@ +# Local-to-Cloud Handoff: UI Polish — Tech Spec +Product spec: `specs/REMOTE-1519/PRODUCT.md` +Linear: [REMOTE-1519](https://linear.app/warpdotdev/issue/REMOTE-1519/make-ui-better-for-local-cloud-handoff) +## Context +REMOTE-1486 shipped the V0 local-to-cloud handoff: a chip in the agent input footer (or `/oz-cloud-handoff`) opens a fresh cloud-mode pane next to the local pane, the user types a follow-up prompt, and on submit the client snapshots the workspace and spawns a cloud agent that's forked from the local conversation. +That V0 has two rough edges this spec addresses: +1. 
**No hydration of the source conversation in the new pane.** The fork is materialized server-side at submit time only (`enqueueAgentRun` in `../warp-server-2/router/handlers/public_api/agent_webhooks.go:376-386` calls `ForkConversationForHandoff` and points `task.AgentConversationID` at the fork). Until the cloud agent's shared session connects and replays the conversation transcript, the new pane is blank. The cloud session's replay then re-broadcasts every exchange the user already saw in the local pane. +2. **Setup-v2 affordances are not consistent with fresh cloud-mode runs.** A fresh cloud-mode pane uses `BlockList::set_is_executing_oz_environment_startup_commands(true)` (set in `app/src/terminal/model/terminal_model.rs:1238-1241`), which hides the active block, marks it as a setup command, and renders a "Running setup commands…" collapsible row above it (`CloudModeSetupTextBlock` in `app/src/terminal/view/ambient_agent/block/setup_command_text.rs`). The flag is reset on the first `AppendedExchange` (`app/src/terminal/view.rs:5113-5124`). For handoff panes the pre-populated conversation's exchanges trip that reset path early (when we restore them via `restore_conversations_on_view_creation`), unhiding the active block before the cloud session has even connected — so when the cloud agent's environment startup PTY output arrives it renders raw rather than wrapped in the setup-v2 surface. +The pieces this spec builds on: +- **Cloud-cloud handoff replay suppression.** When `attach_followup_session` joins a fresh shared session for a follow-up cloud execution, it uses `SharedSessionInitialLoadMode::AppendFollowupScrollback` (`app/src/terminal/shared_session/viewer/terminal_manager.rs:340-370`), which (a) deduplicates blocks by ID via `BlockList::append_followup_shared_session_scrollback` (`app/src/terminal/model/blocks.rs:725`) and (b) sets `should_suppress_existing_agent_conversation_replay = true` (`app/src/terminal/shared_session/viewer/event_loop.rs:132-134`). 
When the cloud agent's replay arrives, `BlocklistAIController::should_skip_replayed_response_for_existing_conversation` (`app/src/ai/blocklist/controller/shared_session.rs:220-239`) skips response streams whose conversation already has exchanges in our local history. We will reuse this exact mechanism for the local→cloud first-session connect. +- **Fork-into-new-pane restoration.** `BlocklistAIHistoryModel::fork_conversation` (`app/src/ai/blocklist/history_model.rs:1033`) materializes a forked `AIConversation` locally from a source conversation. `ConversationRestorationInNewPaneType::Forked { conversation }` (`app/src/terminal/view/load_ai_conversation.rs:104-106`) feeds it into a freshly-created pane via `restore_conversations_on_view_creation`, which restores AI blocks for every exchange with live (non-restored) appearance. +- **Server-side fork and conversation-token binding.** `ForkConversationForHandoff` in `../warp-server-2/logic/ai_conversation_fork.go` already implements the server fork end-to-end (auth on source, GCS data copy, metadata insert, `has_gcs_data = TRUE`); it's currently called only from `enqueueAgentRun`. The viewer-side `BlocklistAIController::find_existing_conversation_by_server_token` (`app/src/ai/blocklist/controller/shared_session.rs:418-433`) maps a `StreamInit.conversation_id` to a local `AIConversation` by token; if we set the local fork's `server_conversation_token` to the server fork's id at chip-click time, this lookup wires them up automatically when the cloud session arrives. +- **REMOTE-1486 client surface area.** `Workspace::start_local_to_cloud_handoff` (`app/src/workspace/view.rs:12952-13079`) is the entry point invoked by the chip and slash command. It splits a fresh cloud-mode pane via `pane_group.add_ambient_agent_pane(ctx)`, seeds `PendingHandoff` onto the new pane's `AmbientAgentViewModel`, and kicks off async touched-repo derivation. 
`AmbientAgentViewModel::submit_handoff` (`app/src/terminal/view/ambient_agent/model.rs:1108-1177`) runs the snapshot prep + upload orchestrator and then calls `spawn_agent_with_request` with `fork_from_conversation_id` set on the `SpawnAgentRequest`. +The Linear ticket description ("we should fork the conversation into the cloud pane and re-use the cloud mode loading v2 for the setup commands") covers both pieces; this spec wires them together because the fork-timing change is what enables the setup-v2 fix. +## Diagram +```mermaid +sequenceDiagram + participant U as User + participant C as Local Warp Client + participant LP as Local Pane + participant HP as Handoff Pane (new) + participant API as warp-server (public API) + participant Sand as Cloud Sandbox + U->>C: Click "Hand off to cloud" chip on local pane + C->>API: POST /agent/handoff/prepare-fork {source_conversation_id} + API->>API: ForkConversationForHandoff (auth, copy GCS, insert metadata) + API-->>C: {forked_conversation_id: T_C} + Note over C: On error here: error toast, no pane opens + C->>C: BlocklistAIHistoryModel::fork_conversation (local fork L', bind T_C) + C->>HP: split fresh cloud-mode pane next to LP + C->>HP: restore_conversations_on_view_creation(Forked { L' }) + Note over HP: Pre-populated with source's AI exchanges + par Background prep (kicked off after pane opens) + C->>C: derive_touched_workspace (walks conversation, git remotes) + C->>API: POST /agent/handoff/prepare-snapshot + API-->>C: {prep_token, upload_urls} + C->>API: PUT snapshot files (parallel) + end + U->>HP: Type follow-up prompt, submit + Note over HP: Send button disabled until prep_token cached on PendingHandoff + C->>API: POST /agent/runs {conversation_id: T_C, handoff_prep_token, prompt, config} + API-->>C: {task_id, run_id} + Note over HP: Setup-v2 affordances render: queued prompt, loading screen + Sand->>Sand: bootstrap, run setup commands (PTY → active block, hidden) + Sand-->>HP: shared session ready + HP->>HP: 
connect_to_session with AppendFollowupScrollback + Note over HP: should_suppress_existing_agent_conversation_replay = true + Sand-->>HP: replay forked conversation transcript + Note over HP: Replay events skipped (existing conversation has exchanges) + Sand-->>HP: cloud agent's first turn (rehydration prompt + user follow-up + response) + HP->>HP: AppendedExchange clears setup-v2 flag, queued-prompt block + Note over LP: Local pane unchanged throughout +``` +## Proposed changes +### 1. Server-side: split fork from spawn (`../warp-server-2`) +**Why split fork from spawn?** This whole spec hinges on pre-populating the new cloud pane with the source conversation at chip click. That requires a stable, materialized fork at chip-click time, not at submit time, for two reasons: +1. **Stable target.** Once the cloud pane is hydrated we don't want to keep re-syncing it as the user continues typing in the local pane — that would be O(local-conversation-edits) GCS writes for nothing, and would have to merge against whatever the cloud agent is doing in parallel. Forking on click freezes the cloud's view at the moment the user opted into the handoff and lets the two conversations evolve independently. +2. **Semantic match.** Handoff is fork→cloud per the product model: clicking the chip is the user saying "this conversation, as it stands right now, is what I'm sending to the cloud." Forking at submit-time is an implementation accident inherited from REMOTE-1486 V0 (which had no hydration so it didn't matter when the fork happened); forking at click-time mirrors the user's mental model exactly. +The fork currently happens inside `enqueueAgentRun` when `ForkFromConversationID` is set on the `RunAgentRequest` (`router/handlers/public_api/agent_webhooks.go:376-386`). This spec moves the fork to a new dedicated endpoint so the client can mint the fork at chip-click time and pre-populate the pane. 
+**New endpoint** `POST /api/v1/agent/handoff/prepare-fork`: +```go path=null start=null +type PrepareLocalHandoffForkRequest struct { + SourceConversationID string `json:"source_conversation_id" binding:"required"` +} +type PrepareLocalHandoffForkResponse struct { + ForkedConversationID string `json:"forked_conversation_id"` +} +``` +Add the handler alongside `PrepareLocalHandoffSnapshotHandler` in `router/handlers/public_api/agent_handoff.go`. It is a thin wrapper that: +1. Gates on `features.LocalToCloudHandoffEnabled()`. +2. Resolves `principal` via `middleware.GetRequiredPrincipalFromContext`. +3. Calls the existing `logic.ForkConversationForHandoff(ctx, db, datastores, req.SourceConversationID, principal)` and returns `{forked_conversation_id}`. +Wire the route under the same `aiCheckedGroup` as the existing snapshot prep endpoint at `router/handlers/public_api/agent_webhooks.go:205-207`. +**Remove `ForkFromConversationID` from `RunAgentRequest`.** Per user direction, no backwards compatibility is needed — the field is only used by the under-flag REMOTE-1486 branch which isn't merged. Delete the field declaration (`agent_webhooks.go:235-240`), the validation block (`agent_webhooks.go:337-344`), and the inline fork call (`agent_webhooks.go:376-386`). The existing `ConversationID *string` field at `agent_webhooks.go:222` continues to drive `task.AgentConversationID` (resume semantics) and is what the client now uses to point the new task at the pre-minted fork. +**`HandoffPrepToken` stays.** Snapshot prep + upload still flow through the existing `prepare-snapshot` endpoint and the same `attachHandoffSnapshotToTask` post-task-creation step; the only thing that moves is when the client triggers them (now async on chip click instead of submit time — see §3). The server handler block at `agent_webhooks.go:476-484` is unchanged. +### 2. 
Client-side API surface (`app/src/server/server_api/ai.rs`)
+- Add `prepare_handoff_fork` to the `AIClient` trait:
+```rust path=null start=null
+async fn prepare_handoff_fork(
+    &self,
+    request: PrepareHandoffForkRequest,
+) -> Result<PrepareHandoffForkResponse>;
+```
+implemented in `ServerApi` as `POST agent/handoff/prepare-fork`. Mirror the request/response shape pattern of `PrepareHandoffSnapshotRequest` (currently around lines 221-249).
+- On `SpawnAgentRequest`, **remove** the `fork_from_conversation_id: Option<String>` field (currently line 213) and **add** `conversation_id: Option<String>` for resume semantics. The client now always pre-mints the fork via the new endpoint and sends the resulting id under `conversation_id`.
+- Update the snapshot pipeline call site that takes a `&ServerConversationToken` only for log labelling (`upload_snapshot_for_handoff` in `app/src/ai/agent_sdk/driver/snapshot.rs`) — no signature change needed; the source conversation token is still available on the `PendingHandoff`.
+### 3. Client-side fork-on-chip-click (`app/src/workspace/view.rs`)
+Extend `Workspace::start_local_to_cloud_handoff` (currently at `app/src/workspace/view.rs:12952-13079`) into a strict-ordering open path:
+1. **Resolve eligibility synchronously.** Read the active session view's conversation via `BlocklistAIHistoryModel::active_conversation`. If the conversation is empty or has no `server_conversation_token`, fall back to the existing behavior (open a fresh cloud-mode pane with no hydration / no fork — same as today's REMOTE-1486 chip).
+2. **Await the fork before opening the pane.** When the source resolves, `ctx.spawn` a future that calls `AIClient::prepare_handoff_fork({source_conversation_id: T_L})`. The new pane is **not** split until this returns. `start_local_to_cloud_handoff` itself returns to the caller immediately so the click handler doesn't block, but the pane-open work is gated on the RPC. 
+ - **On error** (network, auth, `SourceConversationNotPersisted`, etc.), surface a `WorkspaceToastStack` error toast (mirroring the pattern used by `Self::show_fork_toast` at `app/src/workspace/view.rs:11586-11588` for failed local forks). Log the underlying error. Do **not** open a pane. + - **On success**, on the main thread, run the rest of the open path described below. +3. **Open and pre-populate the pane.** With `T_C` in hand: + - Call `pane_group.add_ambient_agent_pane(ctx)` to split the new pane next to the active pane (today's call site). + - Call `BlocklistAIHistoryModel::fork_conversation(&source_conversation, FORK_PREFIX, app)` to materialize a local fork `L'`. `fork_conversation` already handles SQLite persistence, the `forked_from_server_conversation_token` field, and reverted-action-id preservation. + - Set `L'.server_conversation_token = T_C` via `BlocklistAIHistoryModel::set_server_conversation_token_for_conversation` (existing helper used by the `link_forked_conversation_token` path). This makes `find_existing_conversation_by_server_token(T_C)` immediately return `L'` once the cloud session connects. + - On the new pane's terminal view, call `terminal_view.restore_conversation_after_view_creation(RestoredAIConversation::new(L'.clone()), /* use_live_appearance */ true, ctx)` (existing helper at `app/src/terminal/view/load_ai_conversation.rs:542-603`). This is the same restoration helper used by the in-current-pane fork path at `app/src/workspace/view.rs:11597-11607`. + - Set the new pane's `BlocklistAIContextModel` pending-query state for the forked conversation so the agent view's selected conversation matches `L'` (mirrors `restore_conversations_from_block_params` at `app/src/terminal/view/load_ai_conversation.rs:482-491`). + - Seed `PendingHandoff` on the new pane's `AmbientAgentViewModel` with `source_conversation_id: T_L`, `forked_conversation_id: T_C`, `touched_workspace: None`, `snapshot_prep_token: None`, `submission_state: Idle`. 
- Apply the slash-command-supplied prompt pre-fill if any.
+4. **Kick off async background prep.** After the pane is open, `ctx.spawn` a single chained future on the new pane's `AmbientAgentViewModel` that runs `derive_touched_workspace` → `upload_snapshot_for_handoff` (existing helpers in `app/src/ai/blocklist/handoff/touched_repos.rs` and `app/src/ai/agent_sdk/driver/snapshot.rs`). When derivation completes, call `set_pending_handoff_workspace` so the env-overlap pick can apply (existing behavior). When the upload completes, store the resulting prep token via a new `set_pending_handoff_snapshot_prep_token(Option<HandoffPrepToken>, ctx)` setter on the model. The pane is fully interactive throughout — the user can type, scroll, and pick an env while this runs.
+The send button's existing gate (`pending_handoff.touched_workspace.is_some()` plus prompt non-empty) is extended to also require `snapshot_prep_token.is_some_or_skipped()` — i.e. the upload is either complete or the touched workspace was empty (the existing `upload_snapshot_for_handoff` returns `Ok(None)` for empty workspaces and that's a valid skip).
+### 4. Submit path uses resume semantics (`app/src/terminal/view/ambient_agent/model.rs`)
+With the fork and the snapshot upload both completed during the chip-click open path, `AmbientAgentViewModel::submit_handoff` becomes a thin shim over `spawn_agent_with_request`. 
It reads the cached `forked_conversation_id` and `snapshot_prep_token` directly off `pending_handoff` — no orchestrator runtime needed: +```rust path=null start=null +let handoff = self.pending_handoff.as_ref()?; +let request = SpawnAgentRequest { + prompt, + config: Some(self.build_default_spawn_config(ctx)), + title: None, + team: None, + skill: None, + attachments, + interactive: None, + parent_run_id: None, + runtime_skills: vec![], + referenced_attachments: vec![], + conversation_id: Some(handoff.forked_conversation_id.clone()), + handoff_prep_token: handoff.snapshot_prep_token.clone(), +}; +self.spawn_agent_with_request(request, ctx); +``` +Delete the existing `app/src/ai/blocklist/handoff/orchestrator.rs` (`run_handoff` + `HandoffPrepared`) — the prep-and-upload phase moves to the chip-click path described in §3, and the orchestrator's only remaining role would be a redundant wrapper around `upload_snapshot_for_handoff`. Inline the call directly there. `submit_handoff` retains its existing double-submit guard via `submission_state`. +### 5. Replay-suppressing initial connect (`app/src/terminal/shared_session/viewer/terminal_manager.rs`) +`TerminalManager::connect_to_session` (`app/src/terminal/shared_session/viewer/terminal_manager.rs:322-338`) currently always uses `SharedSessionInitialLoadMode::ReplaceFromSessionScrollback`. Change it so handoff panes use `AppendFollowupScrollback` instead: +- Plumb a `should_append_followup: bool` flag into `connect_to_session` (or a new `connect_to_session_with_load_mode(session_id, load_mode, ctx)` variant — caller's choice). +- The cloud-mode subscription in `app/src/terminal/view/ambient_agent/mod.rs:88-90` calls `manager.connect_to_session(*session_id, ctx)` on `SessionReady`. Update it to also pass `view_model.is_local_to_cloud_handoff()` (read from the model on the same line). When true, use append mode. 
+The append mode then handles both pieces of dedup automatically: `BlockList::append_followup_shared_session_scrollback` skips block IDs we already have, and `EventLoop::should_suppress_existing_agent_conversation_replay = true` (`event_loop.rs:132-134`) drives `BlocklistAIController::should_skip_replayed_response_for_existing_conversation` to skip the historical response streams. No changes to the suppression machinery itself. +### 6. Setup-v2 active-block guard during conversation restore (`app/src/terminal/view.rs`) +The flag-reset block at `app/src/terminal/view.rs:5113-5124` flips `is_executing_oz_environment_startup_commands` to `false` whenever an `AppendedExchange` arrives in an ambient agent session. During `restore_conversations_on_view_creation`, every restored exchange emits `AppendedExchange` (via `update_conversation_for_new_request_input` → `BlocklistAIHistoryEvent::AppendedExchange`), which trips this reset before the cloud agent has even started its setup commands. +Gate the reset on the model not being in handoff-pre-spawn state: +```rust path=null start=null +if self.is_ambient_agent_session(ctx) + && self.model.lock().block_list().is_executing_oz_environment_startup_commands() + && !self.is_in_handoff_replay_phase(ctx) +{ + // existing reset... +} +``` +where `is_in_handoff_replay_phase` returns true when `ambient_agent_view_model.is_local_to_cloud_handoff() && (model.is_in_setup() || model.is_configuring_ambient_agent() || model.is_waiting_for_session())` — i.e. the cloud session has not yet connected and the active block should still be treated as a setup-command surface. After `SessionReady` (and thus once `Status::AgentRunning` is set), the predicate becomes false; the cloud agent's actual `AppendedExchange` (its first response post-rehydration) trips the existing reset path normally. +This is the single behavior fix needed for the setup-v2 affordances to render correctly during handoff. 
The "Running setup commands…" collapsible row, queued-prompt indicator, and loading screen are all already wired up via existing `CloudModeSetupV2`-gated paths and Just Work once the active block stays hidden through the pre-session window. +### 7. Drop the V2-input opt-out for handoff panes (`app/src/terminal/input/agent.rs`) +REMOTE-1486 added a guard at `app/src/terminal/input/agent.rs:65` so handoff panes don't opt into `CloudModeInputV2`. With the setup-v2 affordances now intentionally enabled for handoff panes (per §6 + the product spec's #9), remove the `&& !ambient_agent_view_model.is_local_to_cloud_handoff()` clause from `Input::is_cloud_mode_input_v2_composing`. Handoff panes go through the same V2 input path as fresh cloud-mode runs. +### 8. Feature-flag posture +No new feature flags. All of the changes are gated on the existing `FeatureFlag::OzHandoff && FeatureFlag::LocalToCloudHandoff` (client) and `features.LocalToCloudHandoffEnabled()` (server) used by REMOTE-1486. The client and server flags continue to roll out together. +## Risks and mitigations +- **Chip-click latency is now gated on the prepare-fork RPC.** Previously the pane opened instantly; now the user sees nothing until the fork resolves. *Mitigation:* the fork is a synchronous metadata + GCS-copy round-trip already used at submit time today; expected latency is similar to other authenticated public-API RPCs (<300ms p50). On error we surface a toast immediately so the user knows what happened. +- **Source conversation not synced to GCS.** `ForkConversationForHandoff` returns `InvalidRequestError.New("source conversation %s has not been fully synced to cloud storage; try again in a moment")` when `BatchDoesConversationDataExist` is false. *Mitigation:* the client surfaces this as the toast described above; the user can wait a moment and click again. 
+- **Replay suppression skips a genuinely new exchange.** `should_skip_replayed_response_for_existing_conversation` skips response streams during replay if the local conversation already has exchanges. If the cloud agent's first response stream arrives during the replay phase (before `AgentConversationReplayEnded`) it could be suppressed too. *Mitigation:* this is the same posture cloud→cloud uses today (`AppendFollowupScrollback`); the runtime emits `AgentConversationReplayEnded` before the new turn streams in, so the new turn lands in the post-replay window. +- **Snapshot upload still in flight at submit time.** The user types a follow-up faster than `derive_touched_workspace` + `upload_snapshot_for_handoff` complete. *Mitigation:* the send button gate already requires `pending_handoff.touched_workspace.is_some()` (existing); we extend it to also require the snapshot upload to be settled (either succeeded with `Some(prep_token)`, deliberately skipped with `Ok(None)` for empty workspaces, or failed with the existing `report_error!` posture so submit can proceed best-effort). +- **Snapshot upload failure.** Per-blob failures already retry with bounded backoff via `upload_snapshot_for_handoff`. If every blob fails, the existing `report_error!` fires and the prep token is still minted (cloud agent starts with no rehydration content). *Mitigation:* unchanged — same best-effort posture as cloud→cloud handoff today, just kicked off earlier. +## Testing and validation +### Unit tests +- `app/src/server/server_api/ai_test.rs`: serialization test for `PrepareHandoffForkRequest`, path test for `build_prepare_handoff_fork_url`, mirroring the pattern of the existing `serialize_run_followup_request` test. +- `app/src/ai/blocklist/history_model_test.rs`: test that `set_server_conversation_token_for_conversation` after `fork_conversation` updates the token-to-conversation reverse index so `find_conversation_id_by_server_token(T_C)` finds the fork. 
+- `app/src/terminal/view/view_test.rs`: a minimal regression covering the setup-v2 reset gate — restoring exchanges into a handoff pane while the model is in `Setup`/`Composing`/`WaitingForSession` does NOT flip `is_executing_oz_environment_startup_commands` to false. +- `app/src/terminal/shared_session/viewer/event_loop_test.rs`: extend the existing append-mode tests to cover the local→cloud connect path (i.e. `AppendFollowupScrollback` mode is what `connect_to_session` uses when the model reports `is_local_to_cloud_handoff`). +### Server tests (`../warp-server-2`) +- `router/handlers/public_api/agent_handoff_test.go`: extend the existing test file with a `TestPrepareLocalHandoffForkHandler_*` suite covering: feature-flag-off returns the standard error; missing `source_conversation_id` returns `invalid request payload`; happy path returns a valid UUID; auth failure on the source returns the wrapped `NotAuthorizedError`. +- Update the existing `agent_webhooks_test.go::TestHandoff_*` cases that exercise `ForkFromConversationID`. With the field removed those tests should switch to driving the new `prepare-fork` endpoint and then sending `ConversationID` on the run request, asserting the same end-state (`task.AgentConversationID = `, `snapshots/{task_id}/0/` populated). +### Integration / manual +- Click the chip on a long Oz conversation; verify the new pane is visibly populated with the AI exchanges before the cloud session connects, with no flicker or duplicate blocks during the connect/replay window. +- Submit a follow-up; verify the queued-prompt indicator + "Setting up environment" loading screen + "Running setup commands…" collapsible block all render the same way they do for a fresh cloud-mode run. +- After the cloud agent's first turn arrives, verify the pre-populated blocks remain in place, the queued-prompt indicator clears, and the new exchange appends below them. 
+- Click the chip on a non-eligible conversation (no synced server token); verify the pane opens as a fresh cloud-mode pane with no handoff context (existing fall-through preserved).
+- Manually break a network connection during chip click so the prepare-fork RPC fails; verify **no pane opens** and an error toast surfaces in the local window. The local conversation should be unaffected and the chip should be re-clickable.
## Parallelization
+The two-sided change (server endpoint + client wiring) is small enough that one engineer/agent can implement it sequentially in two PRs — a server PR for the prepare-fork endpoint + `ForkFromConversationID` removal, then a client PR for the hydration + load mode + setup-v2 reset gate. The user has indicated they will handle the server-side changes themselves in `../warp-server-2`, so the client agent does not need to coordinate with a parallel server agent. No sub-agents needed for this scope.
## Follow-ups
+- Cloud→cloud setup-v2 fixes. Cloud-cloud follow-ups (REMOTE-1290) likely have the same setup-v2 active-block reset issue when the follow-up's environment runs setup commands. Out of scope here, but the gate added in §6 can be generalized to also check for follow-up startups.
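The replay-suppression posture the risks section describes (skip a replayed response stream only when the local fork already holds an exchange with that request id, so genuinely new cloud turns flow through) can be sketched as a standalone predicate. The struct and function names below are illustrative assumptions, not the app's real model types:

```rust
// Simplified stand-in for a local exchange; `server_output_id` is set once the
// exchange has been reconciled to a server-side response stream.
struct Exchange {
    server_output_id: Option<String>,
}

// Skip a replayed response stream only inside the replay window, and only when
// some local exchange already carries the incoming request id. Request ids the
// fork has never seen (new cloud-agent turns) are never suppressed.
fn should_skip_replayed_response(
    is_receiving_replay: bool,
    should_suppress: bool,
    exchanges: &[Exchange],
    init_request_id: &str,
) -> bool {
    if !is_receiving_replay || !should_suppress {
        return false;
    }
    exchanges
        .iter()
        .filter_map(|e| e.server_output_id.as_deref())
        .any(|sid| sid == init_request_id)
}

fn main() {
    let local = vec![
        Exchange { server_output_id: Some("req-1".to_string()) },
        Exchange { server_output_id: None }, // still-streaming exchange
    ];
    // Replayed copy of an exchange the fork already has: suppressed.
    assert!(should_skip_replayed_response(true, true, &local, "req-1"));
    // A genuinely new cloud-agent turn: never suppressed.
    assert!(!should_skip_replayed_response(true, true, &local, "req-2"));
    // Outside the replay window, nothing is suppressed.
    assert!(!should_skip_replayed_response(false, true, &local, "req-1"));
}
```

The same per-request check appears in the patch below as the reworked `should_skip_replayed_response_for_existing_conversation`, keyed on exchange `server_output_id`s.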
From 5bb316fa1088fa16c9fa2e35062852d99997e244 Mon Sep 17 00:00:00 2001 From: Harry Date: Thu, 30 Apr 2026 13:37:09 -0500 Subject: [PATCH 2/5] clean up handoff UI --- app/src/ai/agent_sdk/ambient.rs | 2 +- app/src/ai/agent_sdk/mcp_config_tests.rs | 2 +- app/src/ai/ambient_agents/spawn_tests.rs | 4 +- .../agent_view/agent_input_footer/mod.rs | 8 +- .../agent_input_footer/toolbar_item.rs | 4 +- app/src/ai/blocklist/block/status_bar.rs | 1 + app/src/ai/blocklist/block/view_impl.rs | 70 +++--- .../ai/blocklist/block/view_impl/output.rs | 7 + .../ai/blocklist/controller/shared_session.rs | 76 ++++-- app/src/ai/blocklist/handoff/mod.rs | 12 +- app/src/ai/blocklist/handoff/orchestrator.rs | 70 ------ app/src/ai/blocklist/history_model.rs | 39 +++- app/src/ai/blocklist/history_model_test.rs | 98 ++++++++ app/src/pane_group/pane/terminal_pane.rs | 2 +- app/src/server/server_api/ai.rs | 43 +++- app/src/server/server_api/ai_test.rs | 32 ++- .../shared_session/viewer/event_loop.rs | 3 + .../shared_session/viewer/terminal_manager.rs | 39 +++- app/src/terminal/view.rs | 24 +- .../ambient_agent/block/setup_command_text.rs | 13 +- app/src/terminal/view/ambient_agent/mod.rs | 76 ++++-- app/src/terminal/view/ambient_agent/model.rs | 147 +++++------- .../terminal/view/ambient_agent/view_impl.rs | 16 +- app/src/workspace/view.rs | 218 ++++++++++++++---- specs/REMOTE-1499/TECH.md | 113 +++++++++ specs/REMOTE-1519/PRODUCT.md | 4 +- specs/REMOTE-1519/TECH.md | 4 +- 27 files changed, 790 insertions(+), 337 deletions(-) delete mode 100644 app/src/ai/blocklist/handoff/orchestrator.rs create mode 100644 specs/REMOTE-1499/TECH.md diff --git a/app/src/ai/agent_sdk/ambient.rs b/app/src/ai/agent_sdk/ambient.rs index 0e3cb86e1..95a3dad1b 100644 --- a/app/src/ai/agent_sdk/ambient.rs +++ b/app/src/ai/agent_sdk/ambient.rs @@ -489,7 +489,7 @@ impl AmbientAgentRunner { parent_run_id: None, runtime_skills: vec![], referenced_attachments: vec![], - fork_from_conversation_id: None, + conversation_id: 
None, handoff_prep_token: None, }; diff --git a/app/src/ai/agent_sdk/mcp_config_tests.rs b/app/src/ai/agent_sdk/mcp_config_tests.rs index 0a1fe1e88..0bd9f3384 100644 --- a/app/src/ai/agent_sdk/mcp_config_tests.rs +++ b/app/src/ai/agent_sdk/mcp_config_tests.rs @@ -284,7 +284,7 @@ fn serializes_mcp_servers_as_object_not_string() { parent_run_id: None, runtime_skills: vec![], referenced_attachments: vec![], - fork_from_conversation_id: None, + conversation_id: None, handoff_prep_token: None, }; diff --git a/app/src/ai/ambient_agents/spawn_tests.rs b/app/src/ai/ambient_agents/spawn_tests.rs index 1fd51b8f9..04d153674 100644 --- a/app/src/ai/ambient_agents/spawn_tests.rs +++ b/app/src/ai/ambient_agents/spawn_tests.rs @@ -341,7 +341,7 @@ async fn poll_stops_on_terminal_failure_like_state() { parent_run_id: None, runtime_skills: vec![], referenced_attachments: vec![], - fork_from_conversation_id: None, + conversation_id: None, handoff_prep_token: None, }; @@ -485,7 +485,7 @@ async fn poll_for_session_join_info_waits_until_link_is_available() { parent_run_id: None, runtime_skills: vec![], referenced_attachments: vec![], - fork_from_conversation_id: None, + conversation_id: None, handoff_prep_token: None, }; diff --git a/app/src/ai/blocklist/agent_view/agent_input_footer/mod.rs b/app/src/ai/blocklist/agent_view/agent_input_footer/mod.rs index 4916fdc69..03dbb00e5 100644 --- a/app/src/ai/blocklist/agent_view/agent_input_footer/mod.rs +++ b/app/src/ai/blocklist/agent_view/agent_input_footer/mod.rs @@ -231,7 +231,7 @@ pub struct AgentInputFooter { // "Hand off to cloud" chip. Visibility is gated only on the // `OzHandoff && LocalToCloudHandoff` feature flags. Per-conversation // eligibility is enforced by `Workspace::start_local_to_cloud_handoff`, - // which falls through to splitting a fresh cloud-mode pane when the + // which surfaces an error toast and does not open a pane when the // active conversation isn't handoff-able. 
handoff_to_cloud_button: ViewHandle, @@ -359,8 +359,8 @@ impl AgentInputFooter { // "Hand off to cloud" chip. On click dispatches the workspace action that // splits a new cloud-mode pane next to the local pane; that pane handles // the rest of the handoff flow. The chip is always visible when the feature - // flags are on; per-conversation eligibility falls through to splitting a - // fresh cloud-mode pane in `Workspace::start_local_to_cloud_handoff`. + // flags are on; per-conversation eligibility surfaces an error toast and + // does not open a pane in `Workspace::start_local_to_cloud_handoff`. let handoff_to_cloud_button = ctx.add_typed_action_view(|_ctx| { ActionButton::new("", AgentInputButtonTheme) .with_icon(Icon::UploadCloud) @@ -1984,7 +1984,7 @@ impl AgentInputFooter { // Always render the chip when the feature flags are on. // Per-conversation eligibility (synced server token, non-empty // history) is enforced by `Workspace::start_local_to_cloud_handoff`, - // which falls through to splitting a fresh cloud-mode pane when + // which surfaces an error toast and does not open a pane when // the active conversation isn't handoff-able. Some(ChildView::new(&self.handoff_to_cloud_button).finish()) } diff --git a/app/src/ai/blocklist/agent_view/agent_input_footer/toolbar_item.rs b/app/src/ai/blocklist/agent_view/agent_input_footer/toolbar_item.rs index c20cda7cf..2ba2999ef 100644 --- a/app/src/ai/blocklist/agent_view/agent_input_footer/toolbar_item.rs +++ b/app/src/ai/blocklist/agent_view/agent_input_footer/toolbar_item.rs @@ -73,8 +73,8 @@ pub enum AgentToolbarItemKind { /// that splits a fresh cloud-mode pane next to the active local pane. 
/// Visibility is gated only on the `OzHandoff && LocalToCloudHandoff` feature /// flags so the chip is always available; the click handler in - /// `Workspace::start_local_to_cloud_handoff` falls through to opening a - /// fresh cloud-mode pane when the active conversation isn't handoff-able + /// `Workspace::start_local_to_cloud_handoff` surfaces an error toast and + /// does not open a pane when the active conversation isn't handoff-able /// (no synced server token, empty, or no active conversation at all). HandoffToCloud, } diff --git a/app/src/ai/blocklist/block/status_bar.rs b/app/src/ai/blocklist/block/status_bar.rs index 8a57558c2..c8aa00df1 100644 --- a/app/src/ai/blocklist/block/status_bar.rs +++ b/app/src/ai/blocklist/block/status_bar.rs @@ -1157,6 +1157,7 @@ impl View for BlocklistAIStatusBar { is_cloud_agent_pre_first_exchange( Some(ambient_agent_view_model), &self.agent_view_controller, + &self.terminal_model, app, ) }) diff --git a/app/src/ai/blocklist/block/view_impl.rs b/app/src/ai/blocklist/block/view_impl.rs index 658265b05..cd3fb6718 100644 --- a/app/src/ai/blocklist/block/view_impl.rs +++ b/app/src/ai/blocklist/block/view_impl.rs @@ -159,22 +159,20 @@ fn add_slash_command_highlight( /// query blocks during live startup/streaming. /// /// To avoid duplicate UI, we suppress the AI block header/query only while the viewer is live -/// (not replaying historical conversation events). -/// -/// The prompts are rendered in the ambient-agent query block UI, so this helper only gates -/// duplicate rendering in the AI block path when that optimistic block was actually inserted. +/// (not replaying historical conversation events) AND the AI block's display query matches an +/// optimistically rendered user query. The per-query check is important for forked +/// conversations (e.g. 
local-to-cloud handoff) where the conversation's first exchange comes +/// from the source conversation and must remain visible — only the dispatched prompt has a +/// matching optimistic block to defer to. fn should_hide_ai_block_query_and_header( - has_inserted_cloud_mode_user_query_block: bool, has_optimistic_user_query: bool, is_shared_ambient_agent_session: bool, - is_first_exchange: bool, is_receiving_agent_conversation_replay: bool, ) -> bool { FeatureFlag::CloudModeSetupV2.is_enabled() && is_shared_ambient_agent_session && !is_receiving_agent_conversation_replay - && ((has_inserted_cloud_mode_user_query_block && is_first_exchange) - || has_optimistic_user_query) + && has_optimistic_user_query } #[cfg(test)] @@ -182,39 +180,31 @@ mod tests { use super::*; #[test] - fn test_should_hide_ai_block_query_and_header_for_initial_cloud_prompt() { + fn test_should_hide_ai_block_query_and_header_for_optimistic_prompt() { let _flag = FeatureFlag::CloudModeSetupV2.override_enabled(true); - assert!(should_hide_ai_block_query_and_header( - true, false, true, true, false - )); + assert!(should_hide_ai_block_query_and_header(true, true, false)); } #[test] - fn test_should_hide_ai_block_query_and_header_for_optimistic_followup_prompt() { + fn test_should_not_hide_ai_block_query_and_header_during_replay() { let _flag = FeatureFlag::CloudModeSetupV2.override_enabled(true); - assert!(should_hide_ai_block_query_and_header( - false, true, true, false, false - )); + assert!(!should_hide_ai_block_query_and_header(true, true, true)); } #[test] - fn test_should_not_hide_ai_block_query_and_header_during_replay() { + fn test_should_not_hide_ai_block_query_and_header_for_untracked_prompt() { let _flag = FeatureFlag::CloudModeSetupV2.override_enabled(true); - assert!(!should_hide_ai_block_query_and_header( - true, true, true, true, true - )); + assert!(!should_hide_ai_block_query_and_header(false, true, false)); } #[test] - fn 
test_should_not_hide_ai_block_query_and_header_for_untracked_prompt() { + fn test_should_not_hide_ai_block_query_and_header_outside_shared_session() { let _flag = FeatureFlag::CloudModeSetupV2.override_enabled(true); - assert!(!should_hide_ai_block_query_and_header( - false, false, true, false, false - )); + assert!(!should_hide_ai_block_query_and_header(true, false, false)); } } @@ -895,10 +885,6 @@ impl View for AIBlock { terminal_model.is_receiving_agent_conversation_replay(), ) }; - let is_first_exchange = conversation - .first_exchange() - .is_some_and(|exchange| exchange.id == self.client_ids.client_exchange_id); - let input_props = input::Props { comments: &self.comment_states, addressed_comment_ids: &addressed_comment_ids, @@ -929,22 +915,15 @@ impl View for AIBlock { query_and_index .as_ref() .is_some_and(|(query_for_display, ..)| { - let (has_inserted_cloud_mode_user_query_block, has_optimistic_user_query) = - self.ambient_agent_view_model - .as_ref() - .map(|model| { - let model = model.as_ref(app); - ( - model.has_inserted_cloud_mode_user_query_block(), - model.has_optimistic_user_query(query_for_display), - ) - }) - .unwrap_or((false, false)); + let has_optimistic_user_query = self + .ambient_agent_view_model + .as_ref() + .is_some_and(|model| { + model.as_ref(app).has_optimistic_user_query(query_for_display) + }); should_hide_ai_block_query_and_header( - has_inserted_cloud_mode_user_query_block, has_optimistic_user_query, is_shared_ambient_agent_session, - is_first_exchange, is_receiving_agent_conversation_replay, ) }); @@ -1093,6 +1072,14 @@ impl View for AIBlock { let is_conversation_transcript_viewer = terminal_model.is_conversation_transcript_viewer(); drop(terminal_model); + let is_cloud_agent_pre_first_exchange = + crate::terminal::view::ambient_agent::is_cloud_agent_pre_first_exchange( + self.ambient_agent_view_model.as_ref(), + &self.agent_view_controller, + &self.terminal_model, + app, + ); + contents.add_child(output::render( output::Props { 
model: self.model.as_ref(), @@ -1159,6 +1146,7 @@ impl View for AIBlock { .is_latest_non_passive_exchange_in_root_task(app) && self.has_imported_comments_in_current_thread(app), ask_user_question_view: self.ask_user_question_view.as_ref(), + is_cloud_agent_pre_first_exchange, }, app, )); diff --git a/app/src/ai/blocklist/block/view_impl/output.rs b/app/src/ai/blocklist/block/view_impl/output.rs index 390d74a60..69889a7cc 100644 --- a/app/src/ai/blocklist/block/view_impl/output.rs +++ b/app/src/ai/blocklist/block/view_impl/output.rs @@ -201,6 +201,12 @@ pub(crate) struct Props<'a> { pub(super) thinking_display_mode: crate::settings::ThinkingDisplayMode, pub(super) conversation_has_imported_comments: bool, pub(super) ask_user_question_view: Option<&'a ViewHandle>, + /// `true` when this block belongs to a cloud agent pane that is still in its setup + /// phase (running environment startup commands before the first agent turn). Used to + /// hide the response footer (thumbs up/down, credit usage, fork) until the agent has + /// produced real output — otherwise the footer renders awkwardly above the still- + /// pending optimistic user prompt. 
+ pub(super) is_cloud_agent_pre_first_exchange: bool, } pub(super) fn render(props: Props, app: &AppContext) -> Box { @@ -245,6 +251,7 @@ pub(super) fn render(props: Props, app: &AppContext) -> Box { && !is_output_for_static_prompt_suggestions && !is_conversation_in_progress && request_type.is_active() + && !props.is_cloud_agent_pre_first_exchange && !status .error() .map(|e| e.is_invalid_api_key()) diff --git a/app/src/ai/blocklist/controller/shared_session.rs b/app/src/ai/blocklist/controller/shared_session.rs index 6bbbf55b8..8c77d02fc 100644 --- a/app/src/ai/blocklist/controller/shared_session.rs +++ b/app/src/ai/blocklist/controller/shared_session.rs @@ -112,6 +112,20 @@ impl BlocklistAIController { let existing_conversation_id = self.find_existing_conversation_by_server_token(&init_event.conversation_id, ctx); let conversation_id = existing_conversation_id + .inspect(|conversation_id| { + // The local conversation is bound to a cloud-side session, so the cloud agent + // is the source of truth for user inputs going forward. Mark it as a shared- + // session view so `apply_client_actions` reconstructs UserQuery / ActionResult + // inputs from the cloud agent's response messages — without this, the local + // exchange's inputs stay empty and the AI block has no user query to render. + // Idempotent for conversations that already have the flag set (e.g. regular + // cloud mode, where `start_new_conversation` set it at creation time); + // important for REMOTE-1519 local-to-cloud handoff, where the local fork + // started as a non-shared-session conversation. 
+ history.update(ctx, |history, _| { + history.set_viewing_shared_session_for_conversation(*conversation_id, true); + }); + }) .or_else(|| { let selected_conversation_id = self .context_model @@ -150,9 +164,13 @@ impl BlocklistAIController { h.start_new_conversation(terminal_view_id, false, true, ctx) }) }); - if self - .should_skip_replayed_response_for_existing_conversation(existing_conversation_id, ctx) - { + let should_skip = self.should_skip_replayed_response_for_existing_conversation( + existing_conversation_id, + &init_event.request_id, + ctx, + ); + log::info!("[DEBUG] on_shared_init view_id={:?} req_id={} init_conv={} existing_conv={:?} resolved_conv={:?} was_existing={} skip={}", self.terminal_view_id, init_event.request_id, init_event.conversation_id, existing_conversation_id, conversation_id, existing_conversation_id.is_some(), should_skip); + if should_skip { self.shared_session_state.current_response_id = Some(stream_id); self.shared_session_state .should_skip_current_replayed_response = true; @@ -162,12 +180,15 @@ impl BlocklistAIController { self.shared_session_state.current_response_id = Some(stream_id.clone()); let Some(conversation) = history.as_ref(ctx).conversation(&conversation_id) else { - log::error!( - "Tried to initialize shared session stream for non-existent conversation {conversation_id:?}" - ); + log::error!("[DEBUG] on_shared_init conversation lookup MISSING for conversation_id={conversation_id:?}"); return; }; let task_id = conversation.get_root_task_id().clone(); + let known_task_ids: Vec = conversation + .all_tasks() + .map(|t| t.id().to_string()) + .collect(); + log::info!("[DEBUG] on_shared_init using root task_id={task_id:?} known_task_ids={known_task_ids:?}"); // Ensure the action executor is in view-only mode for shared-session viewers. self.action_model.update(ctx, |action_model, _ctx| { @@ -176,7 +197,8 @@ impl BlocklistAIController { // Eagerly create an exchange for this request (with empty inputs) and initialize output. 
history.update(ctx, |history_model, ctx| { - let _ = history_model.update_conversation_for_new_request_input( + let view_id = self.terminal_view_id; + if let Err(err) = history_model.update_conversation_for_new_request_input( RequestInput::for_task( vec![], task_id, @@ -189,7 +211,9 @@ impl BlocklistAIController { stream_id.clone(), self.terminal_view_id, ctx, - ); + ) { + log::info!("[DEBUG] update_conversation_for_new_request_input ERR view_id={view_id:?} conversation_id={conversation_id:?} err={err:?}"); + } history_model.initialize_output_for_response_stream( &stream_id, @@ -220,22 +244,45 @@ impl BlocklistAIController { fn should_skip_replayed_response_for_existing_conversation( &self, existing_conversation_id: Option, + init_request_id: &str, ctx: &mut ModelContext, ) -> bool { let Some(conversation_id) = existing_conversation_id else { return false; }; let model = self.terminal_model.lock(); - if !model.is_receiving_agent_conversation_replay() - || !model.should_suppress_existing_agent_conversation_replay() - { + let is_receiving_replay = model.is_receiving_agent_conversation_replay(); + let should_suppress = model.should_suppress_existing_agent_conversation_replay(); + if !is_receiving_replay || !should_suppress { return false; } drop(model); - BlocklistAIHistoryModel::as_ref(ctx) + // Only skip the replayed response stream when we already have a local + // exchange whose `server_output_id` matches its `request_id`. New + // exchanges that the cloud agent appended after the local fork (e.g. + // the user's first submitted prompt for a REMOTE-1519 local-to-cloud + // handoff pane) carry request_ids we have never seen and must flow + // through normally so the viewer's blocklist picks them up. 
+ let history = BlocklistAIHistoryModel::as_ref(ctx); + let known_server_output_ids: Vec = history .conversation(&conversation_id) - .is_some_and(|conversation| conversation.exchange_count() > 0) + .map(|conversation| { + conversation + .all_exchanges() + .into_iter() + .filter_map(|exchange| { + exchange + .output_status + .server_output_id() + .map(|sid| sid.to_string()) + }) + .collect() + }) + .unwrap_or_default(); + known_server_output_ids + .iter() + .any(|sid| sid == init_request_id) } fn on_shared_client_actions( @@ -247,6 +294,7 @@ impl BlocklistAIController { .shared_session_state .should_skip_current_replayed_response { + log::info!("[DEBUG] on_shared_client_actions SKIPPED (suppressed replay) view_id={:?} action_count={}", self.terminal_view_id, actions.actions.len()); return; } let Some(stream_id) = self.shared_session_state.current_response_id.clone() else { @@ -360,11 +408,13 @@ impl BlocklistAIController { .shared_session_state .should_skip_current_replayed_response { + log::info!("[DEBUG] on_shared_finished SKIPPED (suppressed replay) view_id={:?}", self.terminal_view_id); self.shared_session_state.current_response_id.take(); self.shared_session_state .should_skip_current_replayed_response = false; return; } + log::info!("[DEBUG] on_shared_finished view_id={:?} current_response_id={:?}", self.terminal_view_id, self.shared_session_state.current_response_id); let Some(stream_id) = self.shared_session_state.current_response_id.take() else { log::warn!("Shared Finished missing request_id"); return; diff --git a/app/src/ai/blocklist/handoff/mod.rs b/app/src/ai/blocklist/handoff/mod.rs index c7d057d28..d8666f834 100644 --- a/app/src/ai/blocklist/handoff/mod.rs +++ b/app/src/ai/blocklist/handoff/mod.rs @@ -4,10 +4,12 @@ //! filesystem path the local agent has touched, groups those paths into git //! roots and orphan files, and exposes the env-overlap pick used by the //! handoff pane bootstrap. -//! 
- `orchestrator`: drives the prep + upload phases of the handoff off the main -//! thread. The actual cloud-agent spawn happens inside the handoff pane's -//! `AmbientAgentViewModel::submit_handoff` so the regular streaming spawn flow -//! (loading screen, shared-session join) is reused unchanged. +//! +//! The chip-click open path lives in `Workspace::start_local_to_cloud_handoff` +//! and drives the prep-fork RPC + the async snapshot upload directly via +//! `AIClient::prepare_handoff_fork` and `agent_sdk::driver::upload_snapshot_for_handoff`. +//! The actual cloud-agent spawn happens inside the handoff pane's +//! `AmbientAgentViewModel::submit_handoff`, which reads the cached +//! `forked_conversation_id` and `snapshot_prep_token` off `PendingHandoff`. -pub(crate) mod orchestrator; pub(crate) mod touched_repos; diff --git a/app/src/ai/blocklist/handoff/orchestrator.rs b/app/src/ai/blocklist/handoff/orchestrator.rs deleted file mode 100644 index d37777dc6..000000000 --- a/app/src/ai/blocklist/handoff/orchestrator.rs +++ /dev/null @@ -1,70 +0,0 @@ -//! Drives the local-to-cloud handoff lifecycle. -//! -//! Runs the prep + upload phases off the main thread by handing a `TouchedWorkspace` -//! to `agent_sdk::driver::upload_snapshot_for_handoff`, which mints a `prep_token`, -//! gathers patches and file contents, and uploads everything (plus a -//! `snapshot_state.json` manifest) to GCS. -//! -//! The actual cloud-agent spawn happens inside the handoff pane's -//! `AmbientAgentViewModel::submit_handoff` so the streaming `TaskSpawned` → -//! `SessionStarted` events drive the loading screen + shared-session join the same -//! way a normal cloud agent does. Doing the spawn here would leave us with only a -//! task id, no streaming hook, and a blank pane. 
- -use std::sync::Arc; - -use anyhow::Result; -use http_client::Client as HttpClient; - -use crate::ai::agent::api::ServerConversationToken; -use crate::ai::agent_sdk::driver::upload_snapshot_for_handoff; -use crate::ai::blocklist::handoff::touched_repos::TouchedWorkspace; -use crate::server::server_api::ai::AIClient; - -/// Outcome of a successful prep + upload. `submit_handoff` builds a -/// `SpawnAgentRequest` from this and dispatches it through the same -/// `spawn_agent_with_request` path that regular cloud-mode runs use. -/// -/// The agent config (env, model, worker_host, computer_use_enabled, harness) is -/// intentionally not carried here — by the time `submit_handoff` consumes this, the -/// pane's env selector chip has already updated the model's `environment_id` and -/// `build_default_spawn_config` reads the rest from the model + global preferences. -pub(crate) struct HandoffPrepared { - /// `handoff_prep_token` returned by `prepare_handoff_snapshot`. `None` when the - /// touched workspace had no declarations — the cloud-side spawn skips snapshot - /// rehydration in that case. - pub prep_token: Option, - /// `fork_from_conversation_id` to set on the spawn request — always the source - /// conversation's server token. - pub fork_from_conversation_id: String, - /// User prompt typed into the handoff pane. - pub prompt: String, -} - -/// Drive the prep + upload phases of a handoff. Runs entirely off the main thread; -/// callers should `ctx.spawn` this future so the local pane stays interactive -/// throughout. The actual `spawn_agent` call is intentionally NOT performed here -/// — see the module docs for why. 
-pub(crate) async fn run_handoff( - source_conversation_id: ServerConversationToken, - workspace: TouchedWorkspace, - prompt: String, - client: Arc, - http: Arc, -) -> Result { - let repo_paths = workspace.repos.into_iter().map(|r| r.git_root).collect(); - let prep_token = upload_snapshot_for_handoff( - repo_paths, - workspace.orphan_files, - client, - http.as_ref(), - &source_conversation_id, - ) - .await?; - - Ok(HandoffPrepared { - prep_token, - fork_from_conversation_id: source_conversation_id.as_str().to_string(), - prompt, - }) -} diff --git a/app/src/ai/blocklist/history_model.rs b/app/src/ai/blocklist/history_model.rs index eb0910aad..e31809a19 100644 --- a/app/src/ai/blocklist/history_model.rs +++ b/app/src/ai/blocklist/history_model.rs @@ -1030,10 +1030,17 @@ impl BlocklistAIHistoryModel { /// /// The `prefix` parameter specifies the prefix added to the root task description /// (e.g., `FORK_PREFIX` for forks, `PRE_REWIND_PREFIX` for pre-rewind backups). + /// + /// When `preserve_task_ids` is true, the forked conversation reuses the source's task ids + /// instead of minting new ones. Used by the REMOTE-1519 local-to-cloud handoff path so the + /// local fork's task store matches the cloud-side fork (which is a byte-for-byte copy of the + /// source's GCS data and therefore preserves task ids). The cloud agent's `ClientAction`s + /// reference those task ids; if we minted new ones locally they would fail to resolve. 
pub fn fork_conversation( &mut self, source_conversation: &AIConversation, prefix: &str, + preserve_task_ids: bool, app: &AppContext, ) -> Result { let tasks: Vec = source_conversation @@ -1041,7 +1048,8 @@ impl BlocklistAIHistoryModel { .filter_map(|t| t.source().cloned()) .collect(); - let updated_tasks_with_new_ids = update_forked_task_properties(tasks, prefix); + let updated_tasks_with_new_ids = + update_forked_task_properties(tasks, prefix, preserve_task_ids); let Some(sqlite_sender) = GlobalResourceHandlesProvider::as_ref(app) .get() .model_event_sender @@ -1193,7 +1201,8 @@ impl BlocklistAIHistoryModel { )); } - let updated_tasks_with_new_ids = update_forked_task_properties(truncated_tasks, prefix); + let updated_tasks_with_new_ids = + update_forked_task_properties(truncated_tasks, prefix, false); let Some(sqlite_sender) = GlobalResourceHandlesProvider::as_ref(app) .get() @@ -2395,12 +2404,34 @@ impl From<&AIAgentOutputStatus> for AIQueryHistoryOutputStatus { /// Updates the given tasks, which are presumed to be clones of tasks from a source conversation to be /// used to back a fork or copy of the source conversation. /// -/// Reassigns new task IDs to each forked task to ensure task IDs remain globally unique and updates -/// description of the root task, prepending it with the given prefix. +/// When `preserve_task_ids` is false, reassigns new task IDs to each forked task to ensure task IDs +/// remain globally unique. When true, leaves task IDs as-is so the local fork's task store matches +/// an externally-known set of task ids (e.g. for REMOTE-1519 local-to-cloud handoff, where the cloud +/// agent's ClientActions reference the source's task ids and must resolve in the local fork). +/// +/// Always prepends the given prefix to the root task's description. 
fn update_forked_task_properties( tasks: Vec, prefix: &str, + preserve_task_ids: bool, ) -> Vec { + if preserve_task_ids { + return tasks + .into_iter() + .map(|mut t| { + let is_root = t + .dependencies + .as_ref() + .map(|deps| deps.parent_task_id.is_empty()) + .unwrap_or(true); + if is_root { + t.description = format!("{}{}", prefix, t.description); + } + t + }) + .collect(); + } + let mut old_to_new_task_ids = HashMap::new(); fn get_new_task_id(new_ids: &mut HashMap, old_task_id: &str) -> String { new_ids diff --git a/app/src/ai/blocklist/history_model_test.rs b/app/src/ai/blocklist/history_model_test.rs index 6dc2a4f48..de99ed911 100644 --- a/app/src/ai/blocklist/history_model_test.rs +++ b/app/src/ai/blocklist/history_model_test.rs @@ -1273,3 +1273,101 @@ fn test_set_server_conversation_token_rebinds_reverse_index() { }); }); } + +/// REMOTE-1519 fork-on-chip-click flow. +/// Forking the local conversation must: +/// 1. carry the source's server token forward as `forked_from_*` (so the +/// cloud agent's response stream can be reconciled to the right local +/// conversation during replay), and +/// 2. accept a binding to the cloud T_C via +/// `set_server_conversation_token_for_conversation` such that the reverse +/// index resolves the cloud token to the forked conversation. +#[test] +fn test_fork_then_bind_handoff_token_resolves_to_forked_conversation() { + use crate::ai::agent::conversation::AIConversation; + use crate::persistence::model::AgentConversationData; + use crate::test_util::ai_agent_tasks::{create_api_task, create_message}; + + App::test((), |mut app| async move { + initialize_settings_for_tests(&mut app); + + // `fork_conversation` writes the new conversation through the + // sqlite sender, so a mock sender must be wired up. 
+ let (sender, _receiver) = std::sync::mpsc::sync_channel(2); + let mut global_resource_handles = GlobalResourceHandles::mock(&mut app); + global_resource_handles.model_event_sender = Some(sender); + app.add_singleton_model(|_| GlobalResourceHandlesProvider::new(global_resource_handles)); + + let history_model = app.add_singleton_model(|_| BlocklistAIHistoryModel::new(vec![], &[])); + let terminal_view_id = EntityId::new(); + + // Build a source conversation with a real root task (so `fork_conversation` + // has a `Task::source()` to copy forward) and the local-side server token T_L. + let source_id = AIConversationId::new(); + let root_task = create_api_task( + "root-task", + vec![create_message("root-task-message", "root-task")], + ); + let source = AIConversation::new_restored( + source_id, + vec![root_task], + Some(AgentConversationData { + server_conversation_token: Some("src-token".to_string()), + conversation_usage_metadata: None, + reverted_action_ids: None, + forked_from_server_conversation_token: None, + artifacts_json: None, + parent_agent_id: None, + agent_name: None, + parent_conversation_id: None, + run_id: None, + autoexecute_override: None, + last_event_sequence: None, + }), + ) + .expect("restored source conversation should build"); + history_model.update(&mut app, |model, ctx| { + model.restore_conversations(terminal_view_id, vec![source], ctx); + }); + + // Fork the local conversation (REMOTE-1519: fork-on-chip-click). 
+ let forked_id = history_model.update(&mut app, |model, ctx| { + let source = model + .conversation(&source_id) + .expect("source conversation must be in memory after restore") + .clone(); + let forked = model + .fork_conversation(&source, "[Fork] ", false, ctx) + .expect("fork must succeed when sqlite sender is wired up"); + assert_eq!( + forked + .forked_from_server_conversation_token() + .map(|t| t.as_str()), + Some("src-token"), + "forked conversation must carry its source token for replay reconciliation", + ); + assert!( + forked.server_conversation_token().is_none(), + "freshly forked conversation must not yet have a server token of its own", + ); + forked.id() + }); + + // Bind the cloud T_C returned by `prepare-fork` to the forked conversation. + history_model.update(&mut app, |model, _| { + model.set_server_conversation_token_for_conversation( + forked_id, + "cloud-T".to_string(), + ); + }); + + let cloud_token = ServerConversationToken::new("cloud-T".to_string()); + history_model.read(&app, |model, _| { + assert_eq!( + model.find_conversation_id_by_server_token(&cloud_token), + Some(forked_id), + "after binding, cloud T_C must resolve to the forked conversation", + ); + }); + }); +} diff --git a/app/src/pane_group/pane/terminal_pane.rs b/app/src/pane_group/pane/terminal_pane.rs index a02d376c3..94e7c8f04 100644 --- a/app/src/pane_group/pane/terminal_pane.rs +++ b/app/src/pane_group/pane/terminal_pane.rs @@ -1389,7 +1389,7 @@ fn handle_terminal_view_event( parent_run_id: Some(parent_run_id), runtime_skills, referenced_attachments: vec![], - fork_from_conversation_id: None, + conversation_id: None, handoff_prep_token: None, }; diff --git a/app/src/server/server_api/ai.rs b/app/src/server/server_api/ai.rs index a4f124679..bd7f545a3 100644 --- a/app/src/server/server_api/ai.rs +++ b/app/src/server/server_api/ai.rs @@ -226,12 +226,11 @@ pub struct SpawnAgentRequest { /// Base64-encoded `warp.multi_agent.v1.Attachment` payloads to restore as referenced 
attachments.
     #[serde(skip_serializing_if = "Vec::is_empty")]
     pub referenced_attachments: Vec<String>,
-    /// When set, instructs the server to fork the named conversation and use the resulting
-    /// fork id as `task.AgentConversationID`. Mutually exclusive with the existing
-    /// `conversation_id` field (resume semantics) on the server. Used by the local-to-cloud
-    /// handoff flow (REMOTE-1486).
+    /// Server-side conversation id to resume against (sets `task.AgentConversationID`).
+    /// For local-to-cloud handoff (REMOTE-1519) this is the forked conversation id
+    /// returned by `POST /agent/handoff/prepare-fork` at chip-click time.
     #[serde(skip_serializing_if = "Option::is_none")]
-    pub fork_from_conversation_id: Option<String>,
+    pub conversation_id: Option<String>,
     /// References a batch of files previously uploaded to handoff_prep/{prep_token}/ via
     /// `POST /agent/handoff/prepare-snapshot`. The server moves them into
     /// snapshots/{task_id}/{execution_id}/ post-task-creation.
@@ -269,6 +268,22 @@ pub struct HandoffSnapshotUploadInfo {
     pub upload_url: String,
 }
 
+/// Request body for `POST /agent/handoff/prepare-fork`. Used by the local-to-cloud
+/// handoff flow (REMOTE-1519) to materialize a server-side fork of the source
+/// conversation at chip-click time so the client can pre-populate the new pane.
+#[derive(Debug, Clone, serde::Serialize)]
+pub struct PrepareHandoffForkRequest {
+    pub source_conversation_id: String,
+}
+
+/// Response body for `POST /agent/handoff/prepare-fork`. The returned id is sent on
+/// the subsequent `POST /agent/runs` request under `conversation_id` (resume
+/// semantics) so the new task picks up the fork directly.
+#[derive(Debug, Clone, serde::Deserialize)]
+pub struct PrepareHandoffForkResponse {
+    pub forked_conversation_id: String,
+}
+
 #[derive(Debug, Clone, serde::Serialize)]
 pub struct RunFollowupRequest {
     pub message: String,
@@ -879,6 +894,14 @@ pub trait AIClient: 'static + Send + Sync {
         request: PrepareHandoffSnapshotRequest,
     ) -> anyhow::Result<PrepareHandoffSnapshotResponse>;
 
+    /// Materialize a server-side fork of the source conversation for a local-to-cloud handoff.
+    /// Called at chip-click time (REMOTE-1519) so the client can pre-populate the new pane
+    /// with the forked conversation before any task exists.
+    async fn prepare_handoff_fork(
+        &self,
+        request: PrepareHandoffForkRequest,
+    ) -> anyhow::Result<PrepareHandoffForkResponse>;
+
     async fn list_ambient_agent_tasks(
         &self,
         limit: i32,
@@ -1508,6 +1531,16 @@ impl AIClient for ServerApi {
         Ok(response)
     }
 
+    async fn prepare_handoff_fork(
+        &self,
+        request: PrepareHandoffForkRequest,
+    ) -> anyhow::Result<PrepareHandoffForkResponse> {
+        let response: PrepareHandoffForkResponse = self
+            .post_public_api("agent/handoff/prepare-fork", &request)
+            .await?;
+        Ok(response)
+    }
+
     async fn list_ambient_agent_tasks(
         &self,
         limit: i32,
diff --git a/app/src/server/server_api/ai_test.rs b/app/src/server/server_api/ai_test.rs
index 1a859dbf7..f9c86c1fe 100644
--- a/app/src/server/server_api/ai_test.rs
+++ b/app/src/server/server_api/ai_test.rs
@@ -4,8 +4,8 @@ use chrono::Utc;
 use super::{
     build_list_agent_runs_url, build_run_followup_url, AgentMessageHeader, AgentRunEvent,
     AgentSource, AmbientAgentTaskState, Artifact, ArtifactDownloadResponse, ArtifactType,
-    ExecutionLocation, ListRunsResponse, ReadAgentMessageResponse, RunFollowupRequest, RunSortBy,
-    RunSortOrder, TaskListFilter,
+    ExecutionLocation, ListRunsResponse, PrepareHandoffForkRequest, PrepareHandoffForkResponse,
+    ReadAgentMessageResponse, RunFollowupRequest, RunSortBy, RunSortOrder, TaskListFilter,
 };
 
 use crate::notebooks::NotebookId;
@@ -998,3 +998,31 @@ fn serialize_run_followup_request() {
         })
     );
 }
+
+#[test]
+fn
serialize_prepare_handoff_fork_request() { + let request = PrepareHandoffForkRequest { + source_conversation_id: "550e8400-e29b-41d4-a716-446655440000".to_string(), + }; + + let json = serde_json::to_value(request).unwrap(); + + assert_eq!( + json, + serde_json::json!({ + "source_conversation_id": "550e8400-e29b-41d4-a716-446655440000", + }) + ); +} + +#[test] +fn deserialize_prepare_handoff_fork_response() { + let response: PrepareHandoffForkResponse = serde_json::from_value(serde_json::json!({ + "forked_conversation_id": "abcdef01-2345-6789-abcd-ef0123456789", + })) + .unwrap(); + assert_eq!( + response.forked_conversation_id, + "abcdef01-2345-6789-abcd-ef0123456789" + ); +} diff --git a/app/src/terminal/shared_session/viewer/event_loop.rs b/app/src/terminal/shared_session/viewer/event_loop.rs index 2cf924f77..964901a14 100644 --- a/app/src/terminal/shared_session/viewer/event_loop.rs +++ b/app/src/terminal/shared_session/viewer/event_loop.rs @@ -295,6 +295,7 @@ impl EventLoop { // For forked conversations, update the viewer's conversation // to use the new server token (only sent once per fork). 
 if let Some(forked_from) = forked_from_token {
+                            log::debug!("AgentResponseEvent link_forked_conversation_token forked_from={forked_from:?}");
                             c.link_forked_conversation_token(
                                 &forked_from,
                                 &event_clone,
@@ -317,6 +318,7 @@ impl EventLoop {
                     }
                 }
                 OrderedTerminalEventType::AgentConversationReplayStarted => {
+                    log::debug!("AgentConversationReplayStarted (should_suppress={})", self.should_suppress_existing_agent_conversation_replay);
                     let mut model = self.terminal_model.lock();
                     model.set_is_receiving_agent_conversation_replay(true);
                     model.set_should_suppress_existing_agent_conversation_replay(
                         self.should_suppress_existing_agent_conversation_replay,
                     );
                 }
                 OrderedTerminalEventType::AgentConversationReplayEnded => {
+                    log::debug!("AgentConversationReplayEnded");
                     let mut model = self.terminal_model.lock();
                     model.set_is_receiving_agent_conversation_replay(false);
                     model.set_should_suppress_existing_agent_conversation_replay(false);
diff --git a/app/src/terminal/shared_session/viewer/terminal_manager.rs b/app/src/terminal/shared_session/viewer/terminal_manager.rs
index 5160e9575..2fa097e33 100644
--- a/app/src/terminal/shared_session/viewer/terminal_manager.rs
+++ b/app/src/terminal/shared_session/viewer/terminal_manager.rs
@@ -319,14 +319,27 @@ impl TerminalManager {
     /// Connects a deferred terminal manager to a shared session.
     /// This can only be called on a TerminalManager created with `new_deferred`.
     /// Returns `true` if the connection was initiated, `false` if already connected.
-    pub fn connect_to_session(&mut self, session_id: SessionId, ctx: &mut AppContext) -> bool {
+    ///
+    /// `append_followup_scrollback` controls whether the initial join uses
+    /// `AppendFollowupScrollback` mode instead of `ReplaceFromSessionScrollback`.
+ /// REMOTE-1519's local-to-cloud handoff pane sets this to `true` so the + /// pre-populated forked conversation isn't blown away by the cloud session's + /// replay scrollback, and so the existing `should_suppress_existing_agent_conversation_replay` + /// machinery skips response streams whose conversation we already have. + pub fn connect_to_session( + &mut self, + session_id: SessionId, + append_followup_scrollback: bool, + ctx: &mut AppContext, + ) -> bool { + let load_mode = if append_followup_scrollback { + SharedSessionInitialLoadMode::AppendFollowupScrollback + } else { + SharedSessionInitialLoadMode::ReplaceFromSessionScrollback + }; match self.network_state { NetworkState::Idle => { - self.connect_session( - session_id, - SharedSessionInitialLoadMode::ReplaceFromSessionScrollback, - ctx, - ); + self.connect_session(session_id, load_mode, ctx); true } NetworkState::Connecting => { @@ -986,6 +999,7 @@ impl TerminalManager { && is_cloud_agent_pre_first_exchange( view.ambient_agent_view_model(), view.agent_view_controller(), + &view.model, ctx, ) { @@ -1296,12 +1310,15 @@ impl TerminalManager { }; // During cloud startup (pre-first-exchange), keep local input mode stable // and ignore remote shell/ai mode toggles from session-sharing context sync. 
-        let is_pre_first_exchange = FeatureFlag::CloudModeSetupV2.is_enabled()
-            && is_cloud_agent_pre_first_exchange(
-                view.as_ref(ctx).ambient_agent_view_model(),
-                view.as_ref(ctx).agent_view_controller(),
+        let is_pre_first_exchange = FeatureFlag::CloudModeSetupV2.is_enabled() && {
+            let view_ref = view.as_ref(ctx);
+            is_cloud_agent_pre_first_exchange(
+                view_ref.ambient_agent_view_model(),
+                view_ref.agent_view_controller(),
+                &view_ref.model,
                 ctx,
-            );
+            )
+        };
         let suppress_input_mode_update = view.as_ref(ctx).is_shared_ambient_agent_session()
             || is_pre_first_exchange;
         if suppress_input_mode_update {
diff --git a/app/src/terminal/view.rs b/app/src/terminal/view.rs
index 74f28edae..504d0e3fc 100644
--- a/app/src/terminal/view.rs
+++ b/app/src/terminal/view.rs
@@ -5097,6 +5097,8 @@ impl TerminalView {
                 response_stream_id,
                 ..
             } => {
+                let agent_view_active_conv = self.agent_view_controller.as_ref(ctx).agent_view_state().active_conversation_id();
+                log::debug!("AppendedExchange view_id={:?} exchange_id={exchange_id:?} task_id={task_id:?} conversation_id={conversation_id:?} is_hidden={is_hidden} agent_view_active_conv={agent_view_active_conv:?} response_stream_id={response_stream_id:?}", self.view_id);
                 // Hide telemetry banner forever after first AI input user sends.
                 if FeatureFlag::GlobalAIAnalyticsBanner.is_enabled()
                     && !GeneralSettings::as_ref(ctx)
@@ -5135,19 +5137,6 @@ impl TerminalView {
                         .set_is_executing_oz_environment_startup_commands(false);
                 }
 
-                // REMOTE-1486: clear the queued-prompt block on the cloud agent's first
-                // exchange for an Oz local-to-cloud handoff. Mirrors the third-party-harness
-                // path's `HarnessCommandStarted` cleanup, but for the Oz harness the first
-                // `AppendedExchange` is the analogous transition. Idempotent when no block
-                // is currently inserted.
- if self - .ambient_agent_view_model - .as_ref() - .is_some_and(|model| model.as_ref(ctx).is_local_to_cloud_handoff()) - { - self.remove_pending_user_query_block(ctx); - } - let should_add_ai_block = history_model .as_ref(ctx) .conversation(conversation_id) @@ -6896,6 +6885,7 @@ impl TerminalView { && is_cloud_agent_pre_first_exchange( self.ambient_agent_view_model.as_ref(), &self.agent_view_controller, + &self.model, app, ) { @@ -23193,7 +23183,12 @@ impl TerminalView { // Save a backup of the conversation before truncating, so users can restore it later. BlocklistAIHistoryModel::handle(ctx).update(ctx, |history_model, ctx| { if let Some(conversation) = history_model.conversation(&conversation_id).cloned() { - if let Err(e) = history_model.fork_conversation(&conversation, PRE_REWIND_PREFIX, ctx) { + if let Err(e) = history_model.fork_conversation( + &conversation, + PRE_REWIND_PREFIX, + false, /* preserve_task_ids */ + ctx, + ) { log::warn!("Failed to save pre-rewind backup of conversation {conversation_id}: {e}"); } } else { @@ -25739,6 +25734,7 @@ impl View for TerminalView { && is_cloud_agent_pre_first_exchange( self.ambient_agent_view_model.as_ref(), &self.agent_view_controller, + &self.model, app, ) { diff --git a/app/src/terminal/view/ambient_agent/block/setup_command_text.rs b/app/src/terminal/view/ambient_agent/block/setup_command_text.rs index a0cdd947d..1ec2c9462 100644 --- a/app/src/terminal/view/ambient_agent/block/setup_command_text.rs +++ b/app/src/terminal/view/ambient_agent/block/setup_command_text.rs @@ -1,3 +1,5 @@ +use parking_lot::FairMutex; +use std::sync::Arc; use warp_core::ui::{appearance::Appearance, Icon}; use warpui::{ elements::ParentElement, @@ -14,8 +16,11 @@ use crate::{ inline_action::inline_action_icons, BlocklistAIHistoryEvent, BlocklistAIHistoryModel, }, - terminal::view::ambient_agent::{ - is_cloud_agent_pre_first_exchange, AmbientAgentViewModel, AmbientAgentViewModelEvent, + terminal::{ + view::ambient_agent::{ + 
is_cloud_agent_pre_first_exchange, AmbientAgentViewModel, AmbientAgentViewModelEvent,
+        },
+        TerminalModel,
     },
 };
 
@@ -55,6 +60,7 @@ impl SetupCommandState {
 pub struct CloudModeSetupTextBlock {
     ambient_agent_view_model: ModelHandle<AmbientAgentViewModel>,
     agent_view_controller: ModelHandle<AgentViewController>,
+    terminal_model: Arc<FairMutex<TerminalModel>>,
     mouse_state: MouseStateHandle,
 }
 
@@ -62,6 +68,7 @@ impl CloudModeSetupTextBlock {
     pub fn new(
         ambient_agent_view_model: ModelHandle<AmbientAgentViewModel>,
         agent_view_controller: ModelHandle<AgentViewController>,
+        terminal_model: Arc<FairMutex<TerminalModel>>,
         ctx: &mut ViewContext,
     ) -> Self {
         if let Some(conversation_id) = agent_view_controller
@@ -104,6 +111,7 @@
         Self {
             ambient_agent_view_model,
             agent_view_controller,
+            terminal_model,
             mouse_state: Default::default(),
         }
     }
@@ -148,6 +156,7 @@ impl View for CloudModeSetupTextBlock {
         if is_cloud_agent_pre_first_exchange(
             Some(&self.ambient_agent_view_model),
             &self.agent_view_controller,
+            &self.terminal_model,
             app,
         ) {
             "Running setup commands..."
diff --git a/app/src/terminal/view/ambient_agent/mod.rs b/app/src/terminal/view/ambient_agent/mod.rs
index fa0b35c07..c070968cf 100644
--- a/app/src/terminal/view/ambient_agent/mod.rs
+++ b/app/src/terminal/view/ambient_agent/mod.rs
@@ -28,16 +28,18 @@ pub use model_selector::{ModelSelector, ModelSelectorAction, ModelSelectorEvent}
 pub use progress::{render_progress, ProgressProps, ProgressStep, ProgressStepState};
 pub use progress_ui_state::AmbientAgentProgressUIState;
 pub use tips::{get_cloud_mode_tips, CloudModeTip};
+use parking_lot::FairMutex;
+use std::sync::Arc;
 use warp_core::features::FeatureFlag;
 
 use crate::ai::blocklist::agent_view::{AgentViewController, AgentViewState};
-use crate::ai::blocklist::BlocklistAIHistoryModel;
 use crate::pane_group::TerminalViewResources;
 use crate::terminal::shared_session;
 use crate::terminal::TerminalManager;
+use crate::terminal::TerminalModel;
 use crate::terminal::TerminalView;
 use warpui::geometry::vector::Vector2F;
-use warpui::{AppContext, ModelHandle, SingletonEntity,
ViewHandle, WindowId}; +use warpui::{AppContext, ModelHandle, ViewHandle, WindowId}; /// Creates a cloud mode terminal view and manager for ambient agent sessions. /// @@ -76,6 +78,7 @@ pub fn create_cloud_mode_view( log::warn!("Cloud mode view was created without an ambient agent view model"); return (terminal_view, terminal_manager); }; + let view_model_for_subscription = view_model.clone(); terminal_manager.update(ctx, |_, ctx| { ctx.subscribe_to_model(&view_model, move |manager, event, ctx| { let Some(manager) = manager @@ -86,7 +89,14 @@ pub fn create_cloud_mode_view( }; match event { AmbientAgentViewModelEvent::SessionReady { session_id } => { - manager.connect_to_session(*session_id, ctx); + // Local-to-cloud handoff panes pre-populate the forked + // conversation on chip click (REMOTE-1519). Use append-mode + // scrollback + replay suppression so the cloud agent's + // replay doesn't duplicate the blocks we already have. + let append_followup_scrollback = view_model_for_subscription + .as_ref(ctx) + .is_local_to_cloud_handoff(); + manager.connect_to_session(*session_id, append_followup_scrollback, ctx); } AmbientAgentViewModelEvent::FollowupSessionReady { session_id } => { manager.attach_followup_session(*session_id, ctx); @@ -115,12 +125,24 @@ pub fn create_cloud_mode_view( (terminal_view, terminal_manager) } -/// Returns `true` when a cloud agent shared session is ready but no agent exchange has been -/// received yet. In this state, we hide the interactive input and render a loading footer -/// instead. +/// Returns `true` when a cloud agent shared session is in any pre-first-exchange phase — +/// either still spawning (loading: "Connecting to Host" / "Creating Environment" / +/// "Starting Environment") or running setup commands before the first agent turn. In this +/// state, we hide the interactive input and render a loading footer instead. 
+///
+/// During the loading phase the view-model status is `WaitingForSession`; once the cloud
+/// session is ready and setup commands are running it transitions to `AgentRunning` and we
+/// rely on `is_executing_oz_environment_startup_commands` (initialized true on cloud-agent
+/// pane creation, flipped false on the first `AppendedExchange`) to decide whether the
+/// agent has produced its first real turn yet. The flag is correct for both fresh cloud
+/// panes and REMOTE-1519 local-to-cloud handoff panes (whose forked conversation already
+/// has exchanges from the local source, but whose cloud agent has not yet produced its
+/// first new turn) — the `AppendedExchange` handler in `view.rs` ensures the flag only
+/// flips to false on a NEW cloud turn, not on replay-driven events.
 pub fn is_cloud_agent_pre_first_exchange(
     ambient_agent_view_model: Option<&ModelHandle<AmbientAgentViewModel>>,
     agent_view_controller: &ModelHandle<AgentViewController>,
+    terminal_model: &Arc<FairMutex<TerminalModel>>,
     app: &AppContext,
 ) -> bool {
     if !(FeatureFlag::CloudMode.is_enabled() && FeatureFlag::AgentView.is_enabled()) {
@@ -131,38 +153,44 @@ is_cloud_agent_pre_first_exchange(
         return false;
     };
 
-    if !matches!(
-        ambient_agent_view_model.as_ref(app).status(),
-        Status::AgentRunning
-    ) {
+    let view_model = ambient_agent_view_model.as_ref(app);
+
+    let is_in_pre_first_exchange_status = matches!(
+        view_model.status(),
+        Status::WaitingForSession { .. } | Status::AgentRunning
+    );
+    if !is_in_pre_first_exchange_status {
         return false;
     }
 
     let agent_view_state = agent_view_controller.as_ref(app).agent_view_state().clone();
-    let AgentViewState::Active {
-        conversation_id,
-        origin,
-        ..
-    } = agent_view_state
-    else {
+    let AgentViewState::Active { origin, .. } = agent_view_state else {
         return false;
     };
 
-    if !origin.is_cloud_agent() {
+    // REMOTE-1519 handoff panes enter agent view with `RestoreExistingConversation` (because
+    // they restore the forked conversation), not `CloudAgent`.
The `is_local_to_cloud_handoff` + // flag on the view model is the authoritative "this is a cloud agent pane" signal for that + // path, so accept either. + if !origin.is_cloud_agent() && !view_model.is_local_to_cloud_handoff() { return false; } // For non-oz harness runs, there is no Oz `AppendedExchange` to key off of, so we also // exit the pre-first-exchange phase when the harness CLI (e.g. `claude`, `gemini`) has // been detected. See `mark_harness_command_started`. - if ambient_agent_view_model - .as_ref(app) - .harness_command_started() - { + if view_model.harness_command_started() { return false; } - BlocklistAIHistoryModel::as_ref(app) - .conversation(&conversation_id) - .is_some_and(|conversation| conversation.exchange_count() == 0) + // Loading phase (`WaitingForSession`): no setup commands have started yet, but we're + // still pre-first-exchange. Skip the block-list flag check. + if matches!(view_model.status(), Status::WaitingForSession { .. }) { + return true; + } + + terminal_model + .lock() + .block_list() + .is_executing_oz_environment_startup_commands() } diff --git a/app/src/terminal/view/ambient_agent/model.rs b/app/src/terminal/view/ambient_agent/model.rs index 052c00941..fe7ab910d 100644 --- a/app/src/terminal/view/ambient_agent/model.rs +++ b/app/src/terminal/view/ambient_agent/model.rs @@ -9,7 +9,6 @@ use warpui::r#async::{SpawnedFutureHandle, Timer}; use warpui::{AppContext, Entity, EntityId, ModelContext, SingletonEntity}; use crate::ai::active_agent_views_model::ActiveAgentViewsModel; -use crate::ai::agent::api::ServerConversationToken; use crate::ai::agent::{conversation::AIConversationId, extract_user_query_mode}; use crate::ai::ambient_agents::spawn::{spawn_task, submit_run_followup, AmbientAgentEvent}; use crate::ai::ambient_agents::task::HarnessConfig; @@ -18,7 +17,6 @@ use crate::ai::ambient_agents::AmbientAgentTaskId; use crate::ai::ambient_agents::{ OUT_OF_CREDITS_TASK_FAILURE_MESSAGE, SERVER_OVERLOADED_TASK_FAILURE_MESSAGE, }; 
-use crate::ai::blocklist::handoff::orchestrator::run_handoff;
 use crate::ai::blocklist::handoff::touched_repos::TouchedWorkspace;
 use crate::ai::blocklist::BlocklistAIHistoryModel;
 use crate::ai::cloud_environments::CloudAmbientAgentEnvironment;
@@ -87,11 +85,18 @@ pub enum HandoffSubmissionState {
 /// `is_local_to_cloud_handoff()`.
 #[derive(Debug, Clone)]
 pub(crate) struct PendingHandoff {
-    /// Source conversation id (the local conversation's `server_conversation_token`).
-    pub(crate) source_conversation_id: ServerConversationToken,
-    /// `None` until `derive_touched_workspace` completes.
+    /// Forked conversation id minted by `POST /agent/handoff/prepare-fork` at
+    /// chip-click time. Sent under `conversation_id` (resume semantics) on the
+    /// subsequent `POST /agent/runs` request so the new task picks up the fork.
+    pub(crate) forked_conversation_id: String,
+    /// `None` until `derive_touched_workspace` completes (REMOTE-1486).
     pub(crate) touched_workspace: Option<TouchedWorkspace>,
-    /// Gates submit — prevents double-submitting while the orchestrator is in flight.
+    /// Snapshot upload outcome: `None` while the upload is in flight or never
+    /// started; `Some(Some(token))` once minted (the standard case);
+    /// `Some(None)` when the workspace was empty so no upload happened.
+    /// `submit_handoff` requires this to be `Some(_)` before spawning.
+    pub(crate) snapshot_prep_token: Option<Option<String>>,
+    /// Gates submit — prevents double-submitting while the spawn is in flight.
     pub(crate) submission_state: HandoffSubmissionState,
 }
@@ -159,8 +164,6 @@ pub struct AmbientAgentViewModel {
     /// Selected worker host for the cloud agent run. Populated from the HostSelector
     /// (which resolves env var > workspace setting) and read by `spawn_agent`.
     worker_host: Option<String>,
-    /// Whether the optimistic InitialUserQuery block has been inserted for the current run.
-    has_inserted_cloud_mode_user_query_block: bool,
     /// Whether the harness CLI (e.g.
/// `claude`, `gemini`) has started running for a non-oz run.
     /// Used to transition the cloud-mode setup UI out of the pre-first-exchange phase when
     /// there is no oz `AppendedExchange` to key off of.
@@ -204,7 +207,6 @@
             conversation_id: None,
             harness: Harness::default(),
             worker_host: None,
-            has_inserted_cloud_mode_user_query_block: false,
             harness_command_started: false,
             optimistically_rendered_user_queries: vec![],
             active_execution_session_id: None,
@@ -392,6 +394,23 @@
         ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged);
     }
 
+    /// Records the outcome of the chip-click async snapshot upload on the pending
+    /// handoff so `submit_handoff` can read the prep token without re-running
+    /// the upload. `Some(token)` is the standard success case; `None` means the
+    /// touched workspace was empty (no upload happened, no rehydration needed).
+    /// No-op when no handoff context is set.
+    pub(crate) fn set_pending_handoff_snapshot_prep_token(
+        &mut self,
+        prep_token: Option<String>,
+        ctx: &mut ModelContext,
+    ) {
+        let Some(handoff) = self.pending_handoff.as_mut() else {
+            return;
+        };
+        handoff.snapshot_prep_token = Some(prep_token);
+        ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged);
+    }
+
     /// Whether the harness CLI has started running. Only meaningful for non-oz runs.
pub(super) fn harness_command_started(&self) -> bool { self.harness_command_started @@ -438,14 +457,6 @@ impl AmbientAgentViewModel { self.task_id } - pub fn has_inserted_cloud_mode_user_query_block(&self) -> bool { - self.has_inserted_cloud_mode_user_query_block - } - - pub fn set_has_inserted_cloud_mode_user_query_block(&mut self, has_inserted: bool) { - self.has_inserted_cloud_mode_user_query_block = has_inserted; - } - pub fn record_optimistic_user_query(&mut self, prompt: String) { self.optimistically_rendered_user_queries.push(prompt); } @@ -666,7 +677,6 @@ impl AmbientAgentViewModel { self.environment_id = None; self.task_id = None; self.conversation_id = None; - self.has_inserted_cloud_mode_user_query_block = false; self.harness_command_started = false; self.optimistically_rendered_user_queries.clear(); self.active_execution_session_id = None; @@ -734,7 +744,7 @@ impl AmbientAgentViewModel { parent_run_id: None, runtime_skills: vec![], referenced_attachments: vec![], - fork_from_conversation_id: None, + conversation_id: None, handoff_prep_token: None, }; @@ -1132,11 +1142,10 @@ impl AmbientAgentViewModel { /// Drive the local-to-cloud handoff submission for this pane. /// /// Called by the cloud-mode submit dispatch when the pane has `pending_handoff` - /// set. Runs the orchestrator off the main thread; on success, builds a - /// `SpawnAgentRequest` with `fork_from_conversation_id` + `handoff_prep_token` - /// set and routes it through the same `spawn_agent_with_request` path that - /// regular cloud-mode runs use — so `WaitingForSession` → `SessionStarted` - /// streaming reaches the same pane unchanged. + /// set. The fork (REMOTE-1519) and snapshot upload (REMOTE-1486) both happen + /// at chip-click time — this method just reads the cached `forked_conversation_id` + /// and `snapshot_prep_token` off the pending handoff and routes through the + /// same `spawn_agent_with_request` path that regular cloud-mode runs use. 
pub(crate) fn submit_handoff( &mut self, prompt: String, @@ -1148,73 +1157,45 @@ impl AmbientAgentViewModel { return; }; if matches!(handoff.submission_state, HandoffSubmissionState::Starting) { - // Double-submit guard: orchestrator already in flight. + // Double-submit guard: spawn already in flight. return; } - let Some(workspace) = handoff.touched_workspace.clone() else { + if handoff.touched_workspace.is_none() { log::warn!("submit_handoff called before touched-workspace derivation completed"); return; + } + let Some(prep_token) = handoff.snapshot_prep_token.clone() else { + log::warn!("submit_handoff called before snapshot upload completed"); + return; }; - let source_conversation_id = handoff.source_conversation_id.clone(); + let forked_conversation_id = handoff.forked_conversation_id.clone(); handoff.submission_state = HandoffSubmissionState::Starting; ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged); - let server_api_provider = ServerApiProvider::as_ref(ctx); - let ai_client = server_api_provider.get_ai_client(); - let http = server_api_provider.get_http_client(); - - // Clone the prompt so the failure path can hand it back to the input - // layer for restoration. The orchestrator future consumes the original. - let prompt_for_retry = prompt.clone(); - - ctx.spawn( - async move { - run_handoff(source_conversation_id, workspace, prompt, ai_client, http).await - }, - move |me, result, ctx| match result { - Ok(prepared) => { - // Build the spawn config from the model so the env selector chip's - // pick (and `WARP_CLOUD_MODE_DEFAULT_HOST` / model / harness defaults) - // propagate into the spawn request. - let config = Some(me.build_default_spawn_config(ctx)); - // Strip any `/plan` / `/orchestrate` prefix from the prompt and surface - // it as the request's `mode` so the cloud agent honors the same modes - // the local-mode spawn path does. 
- let (prompt, mode) = extract_user_query_mode(prepared.prompt); - let request = SpawnAgentRequest { - prompt, - mode, - config, - title: None, - team: None, - skill: None, - attachments, - interactive: None, - parent_run_id: None, - runtime_skills: vec![], - referenced_attachments: vec![], - fork_from_conversation_id: Some(prepared.fork_from_conversation_id), - handoff_prep_token: prepared.prep_token, - }; - me.spawn_agent_with_request(request, ctx); - } - Err(err) => { - let error_message = format!("{err}"); - log::warn!("Handoff prep+upload failed: {err:#}"); - me.set_pending_handoff_submission_state( - HandoffSubmissionState::Failed(error_message.clone()), - ctx, - ); - // Emit the prompt back so the input layer can repopulate the - // editor and surface the error — otherwise the user is left - // staring at a blank composing pane with no retry path. - ctx.emit(AmbientAgentViewModelEvent::HandoffSubmissionFailed { - prompt: prompt_for_retry, - error_message, - }); - } - }, - ); + // Build the spawn config from the model so the env selector chip's + // pick (and `WARP_CLOUD_MODE_DEFAULT_HOST` / model / harness defaults) + // propagate into the spawn request. + let config = Some(self.build_default_spawn_config(ctx)); + // Strip any `/plan` / `/orchestrate` prefix from the prompt and surface + // it as the request's `mode` so the cloud agent honors the same modes + // the local-mode spawn path does. + let (prompt, mode) = extract_user_query_mode(prompt); + let request = SpawnAgentRequest { + prompt, + mode, + config, + title: None, + team: None, + skill: None, + attachments, + interactive: None, + parent_run_id: None, + runtime_skills: vec![], + referenced_attachments: vec![], + conversation_id: Some(forked_conversation_id), + handoff_prep_token: prep_token, + }; + self.spawn_agent_with_request(request, ctx); } /// Cancels the ambient agent task if one is currently running. 
diff --git a/app/src/terminal/view/ambient_agent/view_impl.rs b/app/src/terminal/view/ambient_agent/view_impl.rs index fa6562895..7739047b2 100644 --- a/app/src/terminal/view/ambient_agent/view_impl.rs +++ b/app/src/terminal/view/ambient_agent/view_impl.rs @@ -144,14 +144,11 @@ impl TerminalView { } if FeatureFlag::CloudModeSetupV2.is_enabled() { let view_model = ambient_agent_view_model.as_ref(ctx); - let use_queued_prompt = view_model.is_third_party_harness() - || view_model.is_local_to_cloud_handoff(); + let use_queued_prompt = view_model.is_third_party_harness(); if use_queued_prompt { - // Non-oz runs and local-to-cloud handoff (REMOTE-1486) runs: - // render the submitted prompt via the queued-prompt UI on top of - // the conversation-history scaffold. The block is removed later - // by `HarnessCommandStarted` (non-oz) / first `AppendedExchange` - // (oz handoff) / failure / cancel / auth handlers. + // Non-oz runs render the submitted prompt via the queued-prompt UI on + // top of the conversation-history scaffold. The block is removed later + // by `HarnessCommandStarted` / failure / cancel / auth handlers. 
// // `request.prompt` is stored stripped of any `/plan` / `/orchestrate` // prefix; rebuild the display form from `request.mode` so the user sees @@ -179,7 +176,6 @@ impl TerminalView { ctx, ); ambient_agent_view_model.update(ctx, |model, _| { - model.set_has_inserted_cloud_mode_user_query_block(true); if let Some(prompt) = model.request().map(|request| request.prompt.clone()) { @@ -399,6 +395,7 @@ impl TerminalView { if !is_cloud_agent_pre_first_exchange( self.ambient_agent_view_model.as_ref(), &self.agent_view_controller, + &self.model, ctx, ) { return; @@ -435,10 +432,11 @@ impl TerminalView { .set_did_execute_a_setup_command(true); }); - let setup_command_text = ctx.add_typed_action_view(|ctx| { + let setup_command_text = ctx.add_typed_action_view(|ctx| { super::CloudModeSetupTextBlock::new( ambient_agent_view_model.clone(), self.agent_view_controller.clone(), + self.model.clone(), ctx, ) }); diff --git a/app/src/workspace/view.rs b/app/src/workspace/view.rs index 97c7708dd..d853c9cff 100644 --- a/app/src/workspace/view.rs +++ b/app/src/workspace/view.rs @@ -110,6 +110,8 @@ use crate::util::openable_file_type::FileTarget; #[cfg(feature = "local_fs")] use crate::util::openable_file_type::{resolve_file_target_with_editor_choice, EditorLayout}; +use crate::ai::agent::conversation::AIConversation; +use crate::ai::agent_sdk::driver::upload_snapshot_for_handoff; use crate::ai::blocklist::agent_view::agent_input_footer::sort_environments_by_recency; use crate::ai::blocklist::handoff::touched_repos::{ derive_touched_workspace, extract_paths_from_conversation, pick_handoff_overlap_env, @@ -117,6 +119,7 @@ use crate::ai::blocklist::handoff::touched_repos::{ use crate::ai::blocklist::history_model::CloudConversationData; use crate::ai::blocklist::FORK_PREFIX; use crate::ai::cloud_environments::CloudAmbientAgentEnvironment; +use crate::server::server_api::ai::PrepareHandoffForkRequest; #[cfg(not(target_family = "wasm"))] use 
crate::terminal::cli_agent_sessions::plugin_manager::{plugin_manager_for, PluginModalKind}; use crate::terminal::cli_agent_sessions::{CLIAgentSessionsModel, CLIAgentSessionsModelEvent}; @@ -11600,7 +11603,12 @@ impl Workspace { ctx, ) } else { - history_model.fork_conversation(&source_conversation, FORK_PREFIX, ctx) + history_model.fork_conversation( + &source_conversation, + FORK_PREFIX, + false, /* preserve_task_ids */ + ctx, + ) } }); @@ -12980,15 +12988,18 @@ impl Workspace { /// Open a local-to-cloud handoff pane next to the active local pane. Triggered /// by the `/oz-cloud-handoff` slash command and the "Hand off to cloud" footer - /// chip. + /// chip (REMOTE-1486 / REMOTE-1519). /// - /// Resolves the active conversation up front. If there's an eligible source - /// conversation (active, non-empty, has a `server_conversation_token`), splits a - /// fresh cloud-mode pane to the right and seeds it with handoff context so the - /// submit path routes through the orchestrator. Otherwise, still splits a fresh - /// cloud-mode pane (no handoff context) so the chip is always-clickable per the - /// existing posture — there's nothing meaningful to hand off in that state, but - /// the user clearly wanted a cloud-mode pane. + /// When the active conversation is non-empty and has a server token, mints a + /// server-side fork via `POST /agent/handoff/prepare-fork`, then splits a fresh + /// cloud-mode pane next to the local pane and pre-populates it with the forked + /// conversation. + /// + /// All failure modes — ineligibility (no active conversation, empty, or no + /// synced server token), prepare-fork RPC failure, and local fork + /// materialization failure — surface an error toast in the local window and + /// **do not open** any pane. The local conversation is unaffected and the + /// user can retry by re-clicking the chip. 
fn start_local_to_cloud_handoff( &mut self, initial_prompt: Option, @@ -13019,10 +13030,94 @@ impl Workspace { }) }); - // Split a fresh cloud-mode pane to the right of the active pane. Mirrors - // `Workspace::open_network_log_pane`'s pattern but uses `add_ambient_agent_pane` - // so the new pane is wired up as a cloud-mode terminal (with the right pre- - // session shared-session viewer manager). + let Some((source_conversation, source_token)) = source else { + // Not eligible: surface an error toast and bail out. We deliberately + // do not open a fresh cloud-mode pane here — the chip is a + // hand-off-this-conversation action, and silently opening an + // unrelated fresh pane hides the failure from the user. + self.show_handoff_error_toast(ctx); + return; + }; + + // Eligible: kick off the prepare-fork RPC. The pane is **not** opened + // until the fork resolves, so a failed fork doesn't leave a stranded + // empty pane on screen. + let ai_client = ServerApiProvider::as_ref(ctx).get_ai_client(); + let request = PrepareHandoffForkRequest { + source_conversation_id: source_token.as_str().to_string(), + }; + ctx.spawn( + async move { ai_client.prepare_handoff_fork(request).await }, + move |me, result, ctx| match result { + Ok(response) => { + me.complete_local_to_cloud_handoff_open( + source_conversation, + source_token, + response.forked_conversation_id, + initial_prompt, + ctx, + ); + } + Err(err) => { + log::warn!("prepare_handoff_fork failed: {err:#}"); + me.show_handoff_error_toast(ctx); + } + }, + ); + } + + /// Surface the shared "Failed to prepare handoff" toast in the local + /// window. Used by every failure path in `start_local_to_cloud_handoff` + /// (ineligibility, prepare-fork RPC failure, local fork materialization + /// failure) so the user sees a single consistent error treatment. 
+ fn show_handoff_error_toast(&self, ctx: &mut ViewContext) { + let window_id = ctx.window_id(); + WorkspaceToastStack::handle(ctx).update(ctx, |toast_stack, ctx| { + let toast = DismissibleToast::error( + "Failed to prepare handoff. Please try again.".to_owned(), + ); + toast_stack.add_ephemeral_toast(toast, window_id, ctx); + }); + } + + /// Finishes the local-to-cloud handoff open after the prepare-fork RPC + /// returns. Materializes a local fork bound to the server's forked + /// conversation id, splits a fresh cloud-mode pane next to the active + /// pane, restores the forked conversation into it, seeds `PendingHandoff`, + /// and kicks off async derivation + snapshot upload (REMOTE-1486). + fn complete_local_to_cloud_handoff_open( + &mut self, + source_conversation: AIConversation, + source_token: ServerConversationToken, + forked_conversation_id: String, + initial_prompt: Option, + ctx: &mut ViewContext, + ) { + // Materialize the local fork up-front so the new pane has something to + // restore. `fork_conversation` already handles SQLite persistence and + // copies tasks / messages over from the source. + let history_model = BlocklistAIHistoryModel::handle(ctx); + let local_fork = match history_model.update(ctx, |history_model, ctx| { + // Preserve source task ids so the local fork's task store matches the cloud-side + // fork (which is a byte copy of the source's GCS data). The cloud agent's + // ClientActions reference these task ids and must resolve locally. + history_model.fork_conversation( + &source_conversation, + FORK_PREFIX, + true, /* preserve_task_ids */ + ctx, + ) + }) { + Ok(forked) => forked, + Err(err) => { + log::warn!("Failed to materialize local fork for handoff: {err:#}"); + self.show_handoff_error_toast(ctx); + return; + } + }; + let local_fork_id = local_fork.id(); + + // Split the new cloud-mode pane next to the active pane. 
self.active_tab_pane_group().update(ctx, |pane_group, ctx| { pane_group.add_ambient_agent_pane(ctx); }); @@ -13036,7 +13131,6 @@ impl Workspace { ); return; }; - let Some(model_handle) = new_pane_view .as_ref(ctx) .ambient_agent_view_model() @@ -13046,9 +13140,7 @@ impl Workspace { return; }; - // `add_ambient_agent_pane` already entered cloud agent view via - // `enter_ambient_agent_setup` (which transitions the model into `Composing` / - // `Setup`). Pre-fill the prompt input from the slash command argument, if any. + // Pre-fill the prompt input if the slash command supplied one. if let Some(prompt) = initial_prompt.as_deref().filter(|p| !p.is_empty()) { new_pane_view.update(ctx, |terminal_view, view_ctx| { terminal_view.input().update(view_ctx, |input, input_ctx| { @@ -13057,41 +13149,77 @@ impl Workspace { }); } - // Fall through to a fresh cloud-mode pane (no handoff context) when there's - // nothing meaningful to hand off. The pane was already opened above. - let Some((conversation, source_token)) = source else { - return; - }; + // Restore the forked conversation into the new pane so its AI exchanges + // are visible immediately. Mirrors the `/fork` in-current-pane flow at + // `Self::fork_ai_conversation`. + let local_fork_for_restore = local_fork.clone(); + new_pane_view.update(ctx, |terminal_view, view_ctx| { + terminal_view.restore_conversation_after_view_creation( + RestoredAIConversation::new(local_fork_for_restore), + /* use_live_appearance */ true, + view_ctx, + ); + }); - // Seed the handoff context onto the new pane's `AmbientAgentViewModel` so - // `is_local_to_cloud_handoff()` is true from this point on (the V2 input - // is suppressed and the submit path routes through the orchestrator). + // Bind the local fork's `server_conversation_token` to the forked + // conversation id minted by the server. 
Must run AFTER + // `restore_conversation_after_view_creation`, since `restore_conversations` + // overwrites the entry in `conversations_by_id` with the (token-less) + // clone we hand it. Binding here ensures that when the cloud agent's + // shared session connects with `StreamInit { conversation_id: T_C }`, + // `find_existing_conversation_by_server_token` finds the live fork and + // `should_skip_replayed_response_for_existing_conversation` correctly + // suppresses the replayed response stream — otherwise the replay would + // re-enter as new exchanges, flipping `is_executing_oz_environment_startup_commands` + // false and breaking setup-command block UI for the handoff pane. + history_model.update(ctx, |history_model, _| { + history_model.set_server_conversation_token_for_conversation( + local_fork_id, + forked_conversation_id.clone(), + ); + }); + + // Seed `PendingHandoff` so `is_local_to_cloud_handoff()` is true from + // here on. `submit_handoff` reads the cached `forked_conversation_id` + // and `snapshot_prep_token` directly from this struct — the orchestrator + // path that REMOTE-1486 used has been inlined into the async block below. let pending = PendingHandoff { - source_conversation_id: source_token, + forked_conversation_id: forked_conversation_id.clone(), touched_workspace: None, + snapshot_prep_token: None, submission_state: HandoffSubmissionState::Idle, }; model_handle.update(ctx, |model, model_ctx| { model.set_pending_handoff(Some(pending), model_ctx); }); - // Kick off touched-repo derivation off the main thread. The conversation - // walk lives inside the spawned future too so we don't pay it on chip click - // (long conversations have hundreds of action results to traverse). When - // derivation completes, apply the repo-aware overlap pick on top of - // whatever `ensure_default_selection` already picked, but only if the pane - // is still in handoff mode — the pane could have been closed in the - // interim. 
On a real overlap match we override unconditionally so the - // user's last-selected (potentially empty) env doesn't shadow a matching - // env; on no-overlap we leave the existing selection alone, since the env - // selector's recency-based default is the best fallback. + // Kick off async background prep: derive the touched workspace, then + // upload the snapshot. The pane is fully interactive throughout — the + // user can scroll, type, and pick an env while this runs. The send + // button gate inside `submit_handoff` waits for both the workspace and + // the prep token to be cached before allowing a spawn. let async_model_handle = model_handle.clone(); + let server_api_provider = ServerApiProvider::as_ref(ctx); + let ai_client = server_api_provider.get_ai_client(); + let http = server_api_provider.get_http_client(); + let log_token = source_token.clone(); ctx.spawn( async move { - let paths = extract_paths_from_conversation(&conversation); - derive_touched_workspace(paths).await + let paths = extract_paths_from_conversation(&source_conversation); + let workspace = derive_touched_workspace(paths).await; + let repo_paths: Vec<_> = + workspace.repos.iter().map(|r| r.git_root.clone()).collect(); + let upload_result = upload_snapshot_for_handoff( + repo_paths, + workspace.orphan_files.clone(), + ai_client, + http.as_ref(), + &log_token, + ) + .await; + (workspace, upload_result) }, - move |_workspace, derived_workspace, ctx| { + move |_workspace, (derived_workspace, upload_result), ctx| { async_model_handle.update(ctx, |model, model_ctx| { if !model.is_local_to_cloud_handoff() { return; @@ -13102,6 +13230,18 @@ impl Workspace { model.set_environment_id(Some(overlap_env), model_ctx); } model.set_pending_handoff_workspace(derived_workspace, model_ctx); + match upload_result { + Ok(prep_token) => { + model.set_pending_handoff_snapshot_prep_token(prep_token, model_ctx); + } + Err(err) => { + log::warn!("Handoff snapshot upload failed: {err:#}"); + 
model.set_pending_handoff_submission_state( + HandoffSubmissionState::Failed(format!("{err}")), + model_ctx, + ); + } + } }); }, ); diff --git a/specs/REMOTE-1499/TECH.md b/specs/REMOTE-1499/TECH.md new file mode 100644 index 000000000..1fdcc6358 --- /dev/null +++ b/specs/REMOTE-1499/TECH.md @@ -0,0 +1,113 @@ +# REMOTE-1499 Tech Spec: Client-side prompt-less local-to-cloud handoff +Linear: [REMOTE-1499](https://linear.app/warpdotdev/issue/REMOTE-1499) +Server tech spec: `../warp-server/specs/REMOTE-1499/TECH.md` +Builds on: REMOTE-1486 (`specs/REMOTE-1486/TECH.md`) +## Problem +The local-to-cloud handoff pane (REMOTE-1486) requires the user to type a follow-up prompt before the send button activates. There are flows where the source conversation history plus the workspace snapshot is the entire input the cloud agent needs — the user wants to "just send this to the cloud as-is". Today the cloud-mode submit path short-circuits when the prompt buffer is empty, so the user can't dispatch the handoff without typing something. +## Relevant code +- `app/src/terminal/input.rs:11860-11959` — the cloud-mode submit dispatch (`handle_input_submit` branch). Contains the `if prompt.is_empty() { return; }` short-circuit at line 11870 that blocks empty-prompt submission for both regular cloud-mode runs and handoff panes. +- `app/src/terminal/view/ambient_agent/model.rs:1142-1187` — `AmbientAgentViewModel::submit_handoff`. Already accepts an empty `prompt: String` and threads it into `SpawnAgentRequest` unchanged; only the upstream input gate needs relaxing. +- `app/src/terminal/view/ambient_agent/model.rs:332-334` — `is_local_to_cloud_handoff()`, the predicate used to branch handoff-vs-fresh-cloud-mode behavior across the input + view layers. +- `app/src/terminal/view/ambient_agent/view_impl.rs:141-179` — `DispatchedAgent` handler. 
Already gates the queued-user-query block insertion on `!prompt.is_empty()`, so an empty-prompt handoff renders the standard "WaitingForSession" UI without a stray empty user-query block. No change needed here, but the behavior is load-bearing for this feature. +- `app/src/server/server_api/ai.rs:181-218` — `SpawnAgentRequest`. The `prompt` field is `String` (not `Option`); empty strings serialize fine and the server accepts them once the new gate lands. +- `specs/REMOTE-1486/PRODUCT.md:33` — the line that says *"send button follows the regular cloud-mode rules (prompt non-empty)"*. Needs amending. +## Current state +- The submit path at `input.rs:11860-11959` is shared between fresh cloud-mode spawns and handoff submits. The `prompt.is_empty()` early return runs before either branch — there's no handoff-aware branch on emptiness today. +- `submit_handoff` already builds a `SpawnAgentRequest { prompt: "", … }` correctly when called with an empty string — the gate is upstream of this method, so the function itself needs no change. +- The optimistic queued-user-query block insertion in `view_impl.rs` already no-ops on empty prompts. +- There is no separate render-time "send disabled when buffer empty" check for cloud-mode panes; the only gate is the input.rs short-circuit. +- The CLI surface (`agent run-cloud`) is not in scope for this feature — REMOTE-1486 explicitly excluded a CLI handoff entry point from V0. +## Proposed changes +### 1. Allow empty-prompt submission for handoff panes +Edit the cloud-mode submit dispatch in `app/src/terminal/input.rs` (around line 11860-11959). 
Today the relevant fragment is: +```rust path=null start=null +if self + .ambient_agent_view_model() + .is_some_and(|m| m.as_ref(ctx).is_configuring_ambient_agent()) +{ + let prompt = command.trim().to_owned(); + if prompt.is_empty() { + return; + } + // … attachment collection + dispatch to submit_handoff / spawn_agent +} +``` +Replace the unconditional `prompt.is_empty()` short-circuit with one that exempts handoff panes: +```rust path=null start=null +let prompt = command.trim().to_owned(); +let is_handoff = self + .ambient_agent_view_model() + .is_some_and(|m| m.as_ref(ctx).is_local_to_cloud_handoff()); +if prompt.is_empty() && !is_handoff { + return; +} +``` +The downstream branch at lines 11949-11957 already routes to `submit_handoff` vs `spawn_agent` based on `is_local_to_cloud_handoff()`, so no further dispatch logic changes. `submit_handoff` accepts `prompt: ""` unchanged and lets the spawn flow through to the server. +Fresh cloud-mode runs are unaffected: `is_handoff = false` for those, so the empty-prompt short-circuit still fires. +### 2. No additional send-button gating change +The send button itself isn't gated by emptiness — the only enable check is the input.rs path above. The handoff-specific guards added by REMOTE-1486 (workspace derivation complete, prep token cached) live inside `submit_handoff` and remain in force: +```rust path=null start=null +if handoff.touched_workspace.is_none() { + log::warn!("submit_handoff called before touched-workspace derivation completed"); + return; +} +let Some(prep_token) = handoff.snapshot_prep_token.clone() else { + log::warn!("submit_handoff called before snapshot upload completed"); + return; +}; +``` +These continue to gate submission whether or not the prompt is empty. +### 3. 
Amend the product spec +Update `specs/REMOTE-1486/PRODUCT.md:33` to drop the "prompt non-empty" requirement for handoff panes: +- Old: *"The send button follows the regular cloud-mode rules (prompt non-empty) plus a guard until touched-repo derivation completes."* +- New: *"The send button is enabled once touched-repo derivation completes and the snapshot prep token has been minted; the prompt may be empty (in which case the cloud agent receives only the forked conversation history and the rehydration message)."* +Also update §11 ("Submitting") to note that the prompt is optional, and §16 ("Pre-SessionStarted visualization") to clarify that the queued-user-query indicator is suppressed when the prompt is empty (already the implementation behavior). +### 4. CLI surface — out of scope +The `agent run-cloud` clap arg group at `crates/warp_cli/src/agent.rs:339-344` requires one of `prompt`, `saved_prompt`, or `skill`. CLI-driven handoff isn't shipping in V0 (per REMOTE-1486 PRODUCT.md non-goals), so the clap group stays as-is. When CLI handoff lands, that group will need to add `task_id` / a new `--fork-from-conversation` flag and the runtime check at `app/src/ai/agent_sdk/ambient.rs:259-268` will need a parallel relaxation. Tracked in the Follow-ups section below. +## End-to-end flow +```mermaid +sequenceDiagram + participant U as User + participant Pane as Handoff pane (Input) + participant VM as AmbientAgentViewModel + participant API as warp-server + U->>Pane: Click "Hand off to cloud" chip + Pane->>VM: set_pending_handoff(...) + Pane->>VM: derive_touched_workspace + upload_snapshot_for_handoff (async) + VM-->>Pane: PendingHandoffChanged (workspace + prep_token cached) + U->>Pane: Press Cmd-Enter with empty buffer + Pane->>Pane: prompt = "" ; is_handoff = true ; do NOT short-circuit + Pane->>VM: submit_handoff(prompt="", attachments=[]) + VM->>VM: Build SpawnAgentRequest { prompt: "", fork_from_conversation_id, handoff_prep_token, ... 
} + VM->>API: POST /agent/runs + API-->>VM: {task_id, run_id} + VM->>VM: Status::WaitingForSession + Note over Pane: view_impl.rs DispatchedAgent: queued-prompt block skipped (prompt is empty) + Note over Pane: Standard "Setting up environment" + "Running setup commands" indicators show + API-->>VM: SessionStarted (cloud agent first turn = rehydration summary) +``` +## Risks and mitigations +### Accidental empty submission on fresh cloud-mode panes +The new `!is_handoff` guard means we still block empty-prompt submits on fresh cloud-mode panes. Risk: a future refactor accidentally drops the `is_handoff` predicate and lets fresh cloud-mode runs submit with empty prompts. The server's relaxed gate (`prompt or skill_spec or fork_from_conversation_id`) would still reject such a request because no fork is set, but the client-side symmetry is worth preserving. +*Mitigation:* keep the predicate inline at the call site — don't extract it into a helper that future changes could grow conditions onto. The server-side gate's `fork_from_conversation_id` requirement is the load-bearing safety net regardless. +### User confusion: "did my submit go through?" +Submitting with an empty buffer produces no visible user message in the conversation (the queued-prompt block already suppresses on empty). The pane still shows the standard "Setting up environment" + spinner UI, but the submit feels less acknowledged. +*Mitigation:* the existing handoff submission state (`HandoffSubmissionState::Starting`) already drives the input button's "Starting…" state. The visual feedback is the same as the with-prompt case minus the queued-user-query block. Consider a follow-up to insert a small "Handed off without prompt" pill if user feedback indicates the no-prompt case feels under-acknowledged. +### Send key bindings other than Cmd-Enter +The submit dispatch handles a single submit path; any alternate trigger (e.g. 
clicking the send icon) routes through the same `handle_input_submit` flow. Visual inspection of the input footer + agent input footer confirms there's no second submit path that bypasses this gate. +## Testing and validation +### Automated coverage +- New unit test on the input submit path — empty buffer + handoff pane dispatches `submit_handoff` with `prompt: ""`; empty buffer + fresh cloud-mode pane no-ops as today. +- Existing handoff tests in `app/src/ai/blocklist/handoff/` should still pass; add a parameterized variant covering the empty-prompt path. +- `app/src/terminal/view/ambient_agent/view_impl.rs` queued-user-query test (add one if not already present) — confirms an empty `request.prompt` skips block insertion. +### Manual / integration validation +- Open a local conversation with at least one touched repo. Click the chip. Wait for derivation + prep token to settle. Click send with an empty prompt. Confirm cloud sandbox starts, applies patches, and posts a summary turn. +- Repeat with a typed prompt to confirm no regression. +- Open a fresh cloud-mode pane (not via handoff). Confirm empty-prompt submit still no-ops. +- Toggle `LocalToCloudHandoff` off; confirm the chip is hidden (no UI change in this PR). +### Feature-flag rollout +This change is gated by the existing `LocalToCloudHandoff` client flag — handoff panes only exist when the flag is on, so the new branch is unreachable otherwise. No new flag. +## Follow-ups +- **CLI handoff entry point.** When CLI handoff lands, relax the `agent run-cloud` clap arg group and the runtime check at `ambient.rs:259-268` to accept a `--task-id` / `--fork-from-conversation` invocation as a sufficient prompt source. +- **"Handed off without prompt" pill.** If user feedback on the no-prompt case suggests the submit feels under-acknowledged, add a small visual marker in the queued region indicating the handoff was sent without a follow-up. +- **Send button label nuance.** Today the send button shows the standard send icon. 
In the no-prompt handoff case we could swap to a "Hand off" or arrow icon to make the action feel less like a regular message send. Defer until UX feedback warrants it. diff --git a/specs/REMOTE-1519/PRODUCT.md b/specs/REMOTE-1519/PRODUCT.md index 5d3076d91..f749e77a9 100644 --- a/specs/REMOTE-1519/PRODUCT.md +++ b/specs/REMOTE-1519/PRODUCT.md @@ -22,8 +22,8 @@ Two related rough edges in the V0 handoff flow: 3. The forked conversation appears in the user's history under their account, owned by them. 4. Subsequent edits in the local pane after chip click do **not** appear in the handoff pane. The cloud agent will work against the conversation as it was at chip-click time. Users who want a more recent snapshot must close the handoff pane and click the chip again. ### Eligibility and fallback -5. Per-conversation eligibility (active conversation must be non-empty and have a synced server token) is unchanged from REMOTE-1486. When the active conversation isn't eligible, the chip still opens a fresh cloud-mode pane with no hydration and no fork — same fall-through as today. -6. If the server fork call fails for any reason (network, auth, source not synced to GCS), the new pane is **not** opened. The failure surfaces as an error toast in the local window. The local conversation is unaffected and the user can retry by clicking the chip again. +5. Per-conversation eligibility requires an active, non-empty conversation with a synced server token. When the active conversation isn't eligible, the chip surfaces an error toast in the local window and **does not open** any pane. The local conversation is unaffected and the user can retry once the source has synced. +6. If the server fork call fails for any reason (network, auth, source not synced to GCS), the new pane is **not** opened. The failure surfaces as the same error toast in the local window. The local conversation is unaffected and the user can retry by clicking the chip again. ### Cloud session replay and dedup 7. 
When the cloud agent's shared session connects to the handoff pane, the agent's conversation replay rebroadcasts every exchange in the forked conversation. Because we already pre-populated the same exchanges, the replay events are suppressed at the response-stream level, identical to how cloud→cloud follow-up sessions handle stale replay (REMOTE-1290). 8. After the replay completes, genuinely new exchanges (the cloud agent's first response to the user's follow-up prompt) are appended normally. The user sees a smooth transition from "frozen pre-handoff state" to "cloud agent answering my follow-up prompt". diff --git a/specs/REMOTE-1519/TECH.md b/specs/REMOTE-1519/TECH.md index 9eb185fef..ba643c8df 100644 --- a/specs/REMOTE-1519/TECH.md +++ b/specs/REMOTE-1519/TECH.md @@ -86,7 +86,7 @@ implemented in `ServerApi` as `POST agent/handoff/prepare-fork`. Mirror the requ - Update the snapshot pipeline call site that takes a `&ServerConversationToken` only for log labelling (`upload_snapshot_for_handoff` in `app/src/ai/agent_sdk/driver/snapshot.rs`) — no signature change needed; the source conversation token is still available on the `PendingHandoff`. ### 3. Client-side fork-on-chip-click (`app/src/workspace/view.rs`) Extend `Workspace::start_local_to_cloud_handoff` (currently at `app/src/workspace/view.rs:12952-13079`) into a strict-ordering open path: -1. **Resolve eligibility synchronously.** Read the active session view's conversation via `BlocklistAIHistoryModel::active_conversation`. If the conversation is empty or has no `server_conversation_token`, fall back to the existing behavior (open a fresh cloud-mode pane with no hydration / no fork — same as today's REMOTE-1486 chip). +1. **Resolve eligibility synchronously.** Read the active session view's conversation via `BlocklistAIHistoryModel::active_conversation`. 
If the conversation is missing, empty, or has no `server_conversation_token`, surface the same error toast as the prepare-fork RPC failure path (step 2 below) and return without opening any pane. There is no "fresh cloud-mode pane" fall-through — the chip is a hand-off-this-conversation action, and silently opening an unrelated fresh pane would hide the failure from the user. 2. **Await the fork before opening the pane.** When the source resolves, `ctx.spawn` a future that calls `AIClient::prepare_handoff_fork({source_conversation_id: T_L})`. The new pane is **not** split until this returns. `start_local_to_cloud_handoff` itself returns to the caller immediately so the click handler doesn't block, but the pane-open work is gated on the RPC. - **On error** (network, auth, `SourceConversationNotPersisted`, etc.), surface a `WorkspaceToastStack` error toast (mirroring the pattern used by `Self::show_fork_toast` at `app/src/workspace/view.rs:11586-11588` for failed local forks). Log the underlying error. Do **not** open a pane. - **On success**, on the main thread, run the rest of the open path described below. @@ -162,7 +162,7 @@ No new feature flags. All of the changes are gated on the existing `FeatureFlag: - Click the chip on a long Oz conversation; verify the new pane is visibly populated with the AI exchanges before the cloud session connects, with no flicker or duplicate blocks during the connect/replay window. - Submit a follow-up; verify the queued-prompt indicator + "Setting up environment" loading screen + "Running setup commands…" collapsible block all render the same way they do for a fresh cloud-mode run. - After the cloud agent's first turn arrives, verify the pre-populated blocks remain in place, the queued-prompt indicator clears, and the new exchange appends below them. -- Click the chip on a non-eligible conversation (no synced server token); verify the pane opens as a fresh cloud-mode pane with no handoff context (existing fall-through preserved). 
+- Click the chip on a non-eligible conversation (no synced server token); verify **no pane opens** and an error toast surfaces in the local window. The local conversation should be unaffected. - Manually break a network connection during chip click so the prepare-fork RPC fails; verify **no pane opens** and an error toast surfaces in the local window. The local conversation should be unaffected and the chip should be re-clickable. ## Parallelization The two-side change (server endpoint + client wiring) is small enough that one engineer/agent can implement it sequentially in two PRs — a server PR for the prepare-fork endpoint + `ForkFromConversationID` removal, then a client PR for the hydration + load mode + setup-v2 reset gate. The user has indicated they will handle the server-side changes themselves in `../warp-server-2`, so the client agent does not need to coordinate with a parallel server agent. No sub-agents needed for this scope. From bbfc3578ba742123d98ded4d93f08e53695baeed Mon Sep 17 00:00:00 2001 From: Harry Date: Thu, 30 Apr 2026 15:30:04 -0500 Subject: [PATCH 3/5] fix merge issues --- app/src/terminal/input.rs | 18 ++++++------- app/src/terminal/view/ambient_agent/model.rs | 25 ++++++++++++++++--- .../terminal/view/ambient_agent/view_impl.rs | 6 ++--- app/src/workspace/view.rs | 5 +--- 4 files changed, 32 insertions(+), 22 deletions(-) diff --git a/app/src/terminal/input.rs b/app/src/terminal/input.rs index 9f385e994..3742830c6 100644 --- a/app/src/terminal/input.rs +++ b/app/src/terminal/input.rs @@ -2142,18 +2142,14 @@ impl Input { }); } }); - // REMOTE-1486: prep+upload failures arrive here so we can - // repopulate the editor with the user's original prompt (the - // submit path cleared it before the orchestrator started) and - // surface the error as a toast. Without this branch the user is - // left staring at a blank composing pane after a silent log - // line. 
- if let AmbientAgentViewModelEvent::HandoffSubmissionFailed { - prompt, - error_message, - } = event + // REMOTE-1519: chip-click handoff prep+upload failures arrive + // here so we can surface the error as a toast. The editor + // buffer is intentionally left alone — the user's prompt was + // never cleared (chip-click happens before submit), so there + // is nothing to restore. + if let AmbientAgentViewModelEvent::HandoffSubmissionFailed { error_message } = + event { - me.replace_buffer_content(prompt, ctx); let window_id = ctx.window_id(); let toast_message = format!("Failed to prepare cloud handoff: {error_message}"); ToastStack::handle(ctx).update(ctx, |ts, ctx| { diff --git a/app/src/terminal/view/ambient_agent/model.rs b/app/src/terminal/view/ambient_agent/model.rs index fe7ab910d..5cea32230 100644 --- a/app/src/terminal/view/ambient_agent/model.rs +++ b/app/src/terminal/view/ambient_agent/model.rs @@ -394,6 +394,22 @@ impl AmbientAgentViewModel { ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged); } + /// Records a chip-click handoff prep+upload failure on the pending handoff. + /// Flips the submission state to `Failed` (so the status footer / banner + /// reflects the error) and emits `HandoffSubmissionFailed` so the input + /// layer can surface a user-visible toast. + pub(crate) fn record_handoff_prep_failed( + &mut self, + error_message: String, + ctx: &mut ModelContext, + ) { + self.set_pending_handoff_submission_state( + HandoffSubmissionState::Failed(error_message.clone()), + ctx, + ); + ctx.emit(AmbientAgentViewModelEvent::HandoffSubmissionFailed { error_message }); + } + /// Records the outcome of the chip-click async snapshot upload on the pending /// handoff so `submit_handoff` can read the prep token without re-running /// the upload. 
`Some(token)` is the standard success case; `None` means the @@ -1276,11 +1292,12 @@ pub enum AmbientAgentViewModelEvent { /// The pane's `pending_handoff` was updated — derivation completed, submission /// state transitioned, etc. PendingHandoffChanged, - /// The handoff prep + upload phase failed before the cloud agent was spawned. - /// Carries the user's original prompt so the input layer can repopulate the - /// editor for retry, plus the error message to surface as a toast. + /// The handoff prep + upload phase failed at chip-click time. The input + /// layer subscribes to surface the error as a toast; the editor buffer is + /// untouched because the user's prompt was never cleared (submit is gated + /// behind the cached prep token, so a failed upload prevents submit + /// entirely instead of consuming the prompt). HandoffSubmissionFailed { - prompt: String, error_message: String, }, diff --git a/app/src/terminal/view/ambient_agent/view_impl.rs b/app/src/terminal/view/ambient_agent/view_impl.rs index 7739047b2..85b11fcde 100644 --- a/app/src/terminal/view/ambient_agent/view_impl.rs +++ b/app/src/terminal/view/ambient_agent/view_impl.rs @@ -370,9 +370,9 @@ impl TerminalView { ctx.notify(); } AmbientAgentViewModelEvent::HandoffSubmissionFailed { .. } => { - // Restoration of the editor buffer + the user-visible toast are - // handled by `Input`'s subscription to the same event; nothing - // for the terminal view to do here beyond the implicit re-render. + // The user-visible toast is handled by `Input`'s subscription + // to the same event; nothing for the terminal view to do here + // beyond the implicit re-render. 
ctx.notify(); } AmbientAgentViewModelEvent::UpdatedSetupCommandVisibility => (), diff --git a/app/src/workspace/view.rs b/app/src/workspace/view.rs index d853c9cff..1b78bc4ec 100644 --- a/app/src/workspace/view.rs +++ b/app/src/workspace/view.rs @@ -13236,10 +13236,7 @@ impl Workspace { } Err(err) => { log::warn!("Handoff snapshot upload failed: {err:#}"); - model.set_pending_handoff_submission_state( - HandoffSubmissionState::Failed(format!("{err}")), - model_ctx, - ); + model.record_handoff_prep_failed(format!("{err}"), model_ctx); } } }); From 6ae0ce7090bd92e3c3aa05d609540fd7eb818511 Mon Sep 17 00:00:00 2001 From: Harry Date: Thu, 30 Apr 2026 16:46:52 -0500 Subject: [PATCH 4/5] clean up debug logs --- app/src/ai/blocklist/block.rs | 8 ++ .../ai/blocklist/controller/shared_session.rs | 25 ++-- .../shared_session/viewer/event_loop.rs | 3 - app/src/terminal/view.rs | 2 - specs/REMOTE-1499/TECH.md | 113 ------------------ 5 files changed, 15 insertions(+), 136 deletions(-) delete mode 100644 specs/REMOTE-1499/TECH.md diff --git a/app/src/ai/blocklist/block.rs b/app/src/ai/blocklist/block.rs index f3a1fdb2f..816ab0645 100644 --- a/app/src/ai/blocklist/block.rs +++ b/app/src/ai/blocklist/block.rs @@ -1205,6 +1205,14 @@ impl AIBlock { ctx.subscribe_to_model(&agent_view_controller, |_, _, _, ctx| ctx.notify()); } + // Re-render when the cloud agent transitions through setup phases so the response + // footer (thumbs up/down, fork, credits) toggles correctly with `is_cloud_agent_pre_first_exchange`. + // Without this, the prior exchange's footer remains visible during a follow-up's + // "Step n/3" loading until something else triggers a redraw. + if let Some(ambient_agent_view_model) = ambient_agent_view_model.as_ref() { + ctx.subscribe_to_model(ambient_agent_view_model, |_, _, _, ctx| ctx.notify()); + } + ctx.subscribe_to_model(&context_model, |_, _, event, ctx| { if let BlocklistAIContextEvent::UpdatedPendingContext { .. 
} = event { ctx.notify(); diff --git a/app/src/ai/blocklist/controller/shared_session.rs b/app/src/ai/blocklist/controller/shared_session.rs index 8c77d02fc..a004cab8e 100644 --- a/app/src/ai/blocklist/controller/shared_session.rs +++ b/app/src/ai/blocklist/controller/shared_session.rs @@ -164,13 +164,11 @@ impl BlocklistAIController { h.start_new_conversation(terminal_view_id, false, true, ctx) }) }); - let should_skip = self.should_skip_replayed_response_for_existing_conversation( + if self.should_skip_replayed_response_for_existing_conversation( existing_conversation_id, &init_event.request_id, ctx, - ); - log::info!("[DEBUG] on_shared_init view_id={:?} req_id={} init_conv={} existing_conv={:?} resolved_conv={:?} was_existing={} skip={}", self.terminal_view_id, init_event.request_id, init_event.conversation_id, existing_conversation_id, conversation_id, existing_conversation_id.is_some(), should_skip); - if should_skip { + ) { self.shared_session_state.current_response_id = Some(stream_id); self.shared_session_state .should_skip_current_replayed_response = true; @@ -180,15 +178,12 @@ impl BlocklistAIController { self.shared_session_state.current_response_id = Some(stream_id.clone()); let Some(conversation) = history.as_ref(ctx).conversation(&conversation_id) else { - log::error!("[DEBUG] on_shared_init conversation lookup MISSING for conversation_id={conversation_id:?}"); + log::error!( + "Tried to initialize shared session stream for non-existent conversation {conversation_id:?}" + ); return; }; let task_id = conversation.get_root_task_id().clone(); - let known_task_ids: Vec = conversation - .all_tasks() - .map(|t| t.id().to_string()) - .collect(); - log::info!("[DEBUG] on_shared_init using root task_id={task_id:?} known_task_ids={known_task_ids:?}"); // Ensure the action executor is in view-only mode for shared-session viewers. 
self.action_model.update(ctx, |action_model, _ctx| { @@ -197,8 +192,7 @@ impl BlocklistAIController { // Eagerly create an exchange for this request (with empty inputs) and initialize output. history.update(ctx, |history_model, ctx| { - let view_id = self.terminal_view_id; - if let Err(err) = history_model.update_conversation_for_new_request_input( + let _ = history_model.update_conversation_for_new_request_input( RequestInput::for_task( vec![], task_id, @@ -211,9 +205,7 @@ impl BlocklistAIController { stream_id.clone(), self.terminal_view_id, ctx, - ) { - log::info!("[DEBUG] update_conversation_for_new_request_input ERR view_id={view_id:?} conversation_id={conversation_id:?} err={err:?}"); - } + ); history_model.initialize_output_for_response_stream( &stream_id, @@ -294,7 +286,6 @@ impl BlocklistAIController { .shared_session_state .should_skip_current_replayed_response { - log::info!("[DEBUG] on_shared_client_actions SKIPPED (suppressed replay) view_id={:?} action_count={}", self.terminal_view_id, actions.actions.len()); return; } let Some(stream_id) = self.shared_session_state.current_response_id.clone() else { @@ -408,13 +399,11 @@ impl BlocklistAIController { .shared_session_state .should_skip_current_replayed_response { - log::info!("[DEBUG] on_shared_finished SKIPPED (suppressed replay) view_id={:?}", self.terminal_view_id); self.shared_session_state.current_response_id.take(); self.shared_session_state .should_skip_current_replayed_response = false; return; } - log::info!("[DEBUG] on_shared_finished view_id={:?} current_response_id={:?}", self.terminal_view_id, self.shared_session_state.current_response_id); let Some(stream_id) = self.shared_session_state.current_response_id.take() else { log::warn!("Shared Finished missing request_id"); return; diff --git a/app/src/terminal/shared_session/viewer/event_loop.rs b/app/src/terminal/shared_session/viewer/event_loop.rs index 964901a14..2cf924f77 100644 --- a/app/src/terminal/shared_session/viewer/event_loop.rs 
+++ b/app/src/terminal/shared_session/viewer/event_loop.rs @@ -295,7 +295,6 @@ impl EventLoop { // For forked conversations, update the viewer's conversation // to use the new server token (only sent once per fork). if let Some(forked_from) = forked_from_token { - log::info!("[DEBUG] AgentResponseEvent link_forked_conversation_token forked_from={forked_from:?}"); c.link_forked_conversation_token( &forked_from, &event_clone, @@ -318,7 +317,6 @@ impl EventLoop { } } OrderedTerminalEventType::AgentConversationReplayStarted => { - log::info!("[DEBUG] AgentConversationReplayStarted (should_suppress={})", self.should_suppress_existing_agent_conversation_replay); let mut model = self.terminal_model.lock(); model.set_is_receiving_agent_conversation_replay(true); model.set_should_suppress_existing_agent_conversation_replay( @@ -326,7 +324,6 @@ impl EventLoop { ); } OrderedTerminalEventType::AgentConversationReplayEnded => { - log::info!("[DEBUG] AgentConversationReplayEnded"); let mut model = self.terminal_model.lock(); model.set_is_receiving_agent_conversation_replay(false); model.set_should_suppress_existing_agent_conversation_replay(false); diff --git a/app/src/terminal/view.rs b/app/src/terminal/view.rs index 504d0e3fc..aa4839589 100644 --- a/app/src/terminal/view.rs +++ b/app/src/terminal/view.rs @@ -5097,8 +5097,6 @@ impl TerminalView { response_stream_id, .. } => { - let agent_view_active_conv = self.agent_view_controller.as_ref(ctx).agent_view_state().active_conversation_id(); - log::info!("[DEBUG] AppendedExchange view_id={:?} exchange_id={exchange_id:?} task_id={task_id:?} conversation_id={conversation_id:?} is_hidden={is_hidden} agent_view_active_conv={agent_view_active_conv:?} response_stream_id={response_stream_id:?}", self.view_id); // Hide telemetry banner forever after first AI input user sends. 
if FeatureFlag::GlobalAIAnalyticsBanner.is_enabled() && !GeneralSettings::as_ref(ctx) diff --git a/specs/REMOTE-1499/TECH.md b/specs/REMOTE-1499/TECH.md deleted file mode 100644 index 1fdcc6358..000000000 --- a/specs/REMOTE-1499/TECH.md +++ /dev/null @@ -1,113 +0,0 @@ -# REMOTE-1499 Tech Spec: Client-side prompt-less local-to-cloud handoff -Linear: [REMOTE-1499](https://linear.app/warpdotdev/issue/REMOTE-1499) -Server tech spec: `../warp-server/specs/REMOTE-1499/TECH.md` -Builds on: REMOTE-1486 (`specs/REMOTE-1486/TECH.md`) -## Problem -The local-to-cloud handoff pane (REMOTE-1486) requires the user to type a follow-up prompt before the send button activates. There are flows where the source conversation history plus the workspace snapshot is the entire input the cloud agent needs — the user wants to "just send this to the cloud as-is". Today the cloud-mode submit path short-circuits when the prompt buffer is empty, so the user can't dispatch the handoff without typing something. -## Relevant code -- `app/src/terminal/input.rs:11860-11959` — the cloud-mode submit dispatch (`handle_input_submit` branch). Contains the `if prompt.is_empty() { return; }` short-circuit at line 11870 that blocks empty-prompt submission for both regular cloud-mode runs and handoff panes. -- `app/src/terminal/view/ambient_agent/model.rs:1142-1187` — `AmbientAgentViewModel::submit_handoff`. Already accepts an empty `prompt: String` and threads it into `SpawnAgentRequest` unchanged; only the upstream input gate needs relaxing. -- `app/src/terminal/view/ambient_agent/model.rs:332-334` — `is_local_to_cloud_handoff()`, the predicate used to branch handoff-vs-fresh-cloud-mode behavior across the input + view layers. -- `app/src/terminal/view/ambient_agent/view_impl.rs:141-179` — `DispatchedAgent` handler. Already gates the queued-user-query block insertion on `!prompt.is_empty()`, so an empty-prompt handoff renders the standard "WaitingForSession" UI without a stray empty user-query block. 
No change needed here, but the behavior is load-bearing for this feature. -- `app/src/server/server_api/ai.rs:181-218` — `SpawnAgentRequest`. The `prompt` field is `String` (not `Option`); empty strings serialize fine and the server accepts them once the new gate lands. -- `specs/REMOTE-1486/PRODUCT.md:33` — the line that says *"send button follows the regular cloud-mode rules (prompt non-empty)"*. Needs amending. -## Current state -- The submit path at `input.rs:11860-11959` is shared between fresh cloud-mode spawns and handoff submits. The `prompt.is_empty()` early return runs before either branch — there's no handoff-aware branch on emptiness today. -- `submit_handoff` already builds a `SpawnAgentRequest { prompt: "", … }` correctly when called with an empty string — the gate is upstream of this method, so the function itself needs no change. -- The optimistic queued-user-query block insertion in `view_impl.rs` already no-ops on empty prompts. -- There is no separate render-time "send disabled when buffer empty" check for cloud-mode panes; the only gate is the input.rs short-circuit. -- The CLI surface (`agent run-cloud`) is not in scope for this feature — REMOTE-1486 explicitly excluded a CLI handoff entry point from V0. -## Proposed changes -### 1. Allow empty-prompt submission for handoff panes -Edit the cloud-mode submit dispatch in `app/src/terminal/input.rs` (around line 11860-11959). 
Today the relevant fragment is: -```rust path=null start=null -if self - .ambient_agent_view_model() - .is_some_and(|m| m.as_ref(ctx).is_configuring_ambient_agent()) -{ - let prompt = command.trim().to_owned(); - if prompt.is_empty() { - return; - } - // … attachment collection + dispatch to submit_handoff / spawn_agent -} -``` -Replace the unconditional `prompt.is_empty()` short-circuit with one that exempts handoff panes: -```rust path=null start=null -let prompt = command.trim().to_owned(); -let is_handoff = self - .ambient_agent_view_model() - .is_some_and(|m| m.as_ref(ctx).is_local_to_cloud_handoff()); -if prompt.is_empty() && !is_handoff { - return; -} -``` -The downstream branch at lines 11949-11957 already routes to `submit_handoff` vs `spawn_agent` based on `is_local_to_cloud_handoff()`, so no further dispatch logic changes. `submit_handoff` accepts `prompt: ""` unchanged and lets the spawn flow through to the server. -Fresh cloud-mode runs are unaffected: `is_handoff = false` for those, so the empty-prompt short-circuit still fires. -### 2. No additional send-button gating change -The send button itself isn't gated by emptiness — the only enable check is the input.rs path above. The handoff-specific guards added by REMOTE-1486 (workspace derivation complete, prep token cached) live inside `submit_handoff` and remain in force: -```rust path=null start=null -if handoff.touched_workspace.is_none() { - log::warn!("submit_handoff called before touched-workspace derivation completed"); - return; -} -let Some(prep_token) = handoff.snapshot_prep_token.clone() else { - log::warn!("submit_handoff called before snapshot upload completed"); - return; -}; -``` -These continue to gate submission whether or not the prompt is empty. -### 3. 
Amend the product spec -Update `specs/REMOTE-1486/PRODUCT.md:33` to drop the "prompt non-empty" requirement for handoff panes: -- Old: *"The send button follows the regular cloud-mode rules (prompt non-empty) plus a guard until touched-repo derivation completes."* -- New: *"The send button is enabled once touched-repo derivation completes and the snapshot prep token has been minted; the prompt may be empty (in which case the cloud agent receives only the forked conversation history and the rehydration message)."* -Also update §11 ("Submitting") to note that the prompt is optional, and §16 ("Pre-SessionStarted visualization") to clarify that the queued-user-query indicator is suppressed when the prompt is empty (already the implementation behavior). -### 4. CLI surface — out of scope -The `agent run-cloud` clap arg group at `crates/warp_cli/src/agent.rs:339-344` requires one of `prompt`, `saved_prompt`, or `skill`. CLI-driven handoff isn't shipping in V0 (per REMOTE-1486 PRODUCT.md non-goals), so the clap group stays as-is. When CLI handoff lands, that group will need to add `task_id` / a new `--fork-from-conversation` flag and the runtime check at `app/src/ai/agent_sdk/ambient.rs:259-268` will need a parallel relaxation. Tracking issue at follow-ups. -## End-to-end flow -```mermaid -sequenceDiagram - participant U as User - participant Pane as Handoff pane (Input) - participant VM as AmbientAgentViewModel - participant API as warp-server - U->>Pane: Click "Hand off to cloud" chip - Pane->>VM: set_pending_handoff(...) - Pane->>VM: derive_touched_workspace + upload_snapshot_for_handoff (async) - VM-->>Pane: PendingHandoffChanged (workspace + prep_token cached) - U->>Pane: Press Cmd-Enter with empty buffer - Pane->>Pane: prompt = "" ; is_handoff = true ; do NOT short-circuit - Pane->>VM: submit_handoff(prompt="", attachments=[]) - VM->>VM: Build SpawnAgentRequest { prompt: "", fork_from_conversation_id, handoff_prep_token, ... 
}
-    VM->>API: POST /agent/runs
-    API-->>VM: {task_id, run_id}
-    VM->>VM: Status::WaitingForSession
-    Note over Pane: view_impl.rs DispatchedAgent: queued-prompt block skipped (prompt is empty)
-    Note over Pane: Standard "Setting up environment" + "Running setup commands" indicators show
-    API-->>VM: SessionStarted (cloud agent first turn = rehydration summary)
-```
-## Risks and mitigations
-### Accidental empty submission on fresh cloud-mode panes
-The new `!is_handoff` guard means we still block empty-prompt submits on fresh cloud-mode panes. Risk: a future refactor accidentally drops the `is_handoff` predicate and lets fresh cloud-mode runs submit with empty prompts. The relaxed server-side gate (`prompt or skill_spec or fork_from_conversation_id`) would still reject such a request because no fork is set, but the client-side short-circuit is worth preserving for symmetry.
-*Mitigation:* keep the predicate inline at the call site — don't extract it into a helper that future changes could grow conditions onto. The server-side gate's `fork_from_conversation_id` requirement is the load-bearing safety net regardless.
-### User confusion: "did my submit go through?"
-Submitting with an empty buffer produces no visible user message in the conversation (the queued-prompt block already suppresses on empty). The pane still shows the standard "Setting up environment" + spinner UI, but the submit feels less acknowledged.
-*Mitigation:* the existing handoff submission state (`HandoffSubmissionState::Starting`) already drives the input button's "Starting…" state. The visual feedback is the same as the with-prompt case minus the queued-user-query block. Consider a follow-up to insert a small "Handed off without prompt" pill if user feedback indicates the no-prompt case feels under-acknowledged.
-### Send key bindings other than Cmd-Enter
-The submit dispatch handles a single submit path; any alternate trigger (e.g.
clicking the send icon) routes through the same `handle_input_submit` flow. Visual inspection of the input footer + agent input footer confirms there's no second submit path that bypasses this gate.
-## Testing and validation
-### Automated coverage
-- New unit test on the input submit path — empty buffer + handoff pane dispatches `submit_handoff` with `prompt: ""`; empty buffer + fresh cloud-mode pane no-ops as today.
-- Existing handoff tests in `app/src/ai/blocklist/handoff/` should still pass; add a parameterized variant covering the empty-prompt path.
-- `app/src/terminal/view/ambient_agent/view_impl.rs` queued-user-query test (add one if it doesn't already exist) — confirms an empty `request.prompt` skips block insertion.
-### Manual / integration validation
-- Open a local conversation with at least one touched repo. Click the chip. Wait for derivation + prep token to settle. Click send with an empty prompt. Confirm cloud sandbox starts, applies patches, and posts a summary turn.
-- Repeat with a typed prompt to confirm no regression.
-- Open a fresh cloud-mode pane (not via handoff). Confirm empty-prompt submit still no-ops.
-- Toggle `LocalToCloudHandoff` off; confirm the chip is hidden (no UI change in this PR).
-### Feature-flag rollout
-This change is gated by the existing `LocalToCloudHandoff` client flag — handoff panes only exist when the flag is on, so the new branch is unreachable otherwise. No new flag.
-## Follow-ups
-- **CLI handoff entry point.** When CLI handoff lands, relax the `agent run-cloud` clap arg group and the runtime check at `ambient.rs:259-268` to accept a `--task-id` / `--fork-from-conversation` invocation as a sufficient prompt source.
-- **"Handed off without prompt" pill.** If user feedback on the no-prompt case suggests the submit feels under-acknowledged, add a small visual marker in the queued region indicating the handoff was sent without a follow-up.
-- **Send button label nuance.** Today the send button shows the standard send icon.
In the no-prompt handoff case we could swap to a "Hand off" or arrow icon to make the action feel less like a regular message send. Defer until UX feedback warrants it. From 53f23d73774be3a0c7707b2bb1c665439573f91a Mon Sep 17 00:00:00 2001 From: Harry Date: Fri, 1 May 2026 19:23:04 -0400 Subject: [PATCH 5/5] address agent comments --- app/src/ai/blocklist/block.rs | 23 ++- app/src/ai/blocklist/block/view_impl.rs | 10 +- .../ai/blocklist/controller/shared_session.rs | 21 +-- app/src/ai/blocklist/history_model_test.rs | 100 +++++++++++- app/src/terminal/input.rs | 21 +-- app/src/terminal/view.rs | 27 ++- app/src/terminal/view/ambient_agent/mod.rs | 24 +-- app/src/terminal/view/ambient_agent/model.rs | 148 ++++++++--------- .../terminal/view/ambient_agent/view_impl.rs | 23 +-- app/src/terminal/view/pane_impl.rs | 1 + app/src/workspace/view.rs | 151 ++++++++--------- specs/REMOTE-1519/TECH.md | 154 ++++++------------ 12 files changed, 346 insertions(+), 357 deletions(-) diff --git a/app/src/ai/blocklist/block.rs b/app/src/ai/blocklist/block.rs index 816ab0645..85f9790d8 100644 --- a/app/src/ai/blocklist/block.rs +++ b/app/src/ai/blocklist/block.rs @@ -46,7 +46,7 @@ use crate::code_review::comment_rendering::{CommentViewCard, HeaderClickHandler} use crate::terminal::model::BlockId; use crate::terminal::model_events::ModelEvent; use crate::terminal::model_events::ModelEventDispatcher; -use crate::terminal::view::ambient_agent::AmbientAgentViewModel; +use crate::terminal::view::ambient_agent::{AmbientAgentViewModel, AmbientAgentViewModelEvent}; use crate::terminal::TerminalModel; use crate::view_components::action_button::{ ActionButtonTheme, NakedTheme, PrimaryTheme, SecondaryTheme, @@ -1206,11 +1206,24 @@ impl AIBlock { } // Re-render when the cloud agent transitions through setup phases so the response - // footer (thumbs up/down, fork, credits) toggles correctly with `is_cloud_agent_pre_first_exchange`. 
- // Without this, the prior exchange's footer remains visible during a follow-up's - // "Step n/3" loading until something else triggers a redraw. + // footer toggles correctly with `is_cloud_agent_pre_first_exchange`. Each event below + // toggles that helper's output. if let Some(ambient_agent_view_model) = ambient_agent_view_model.as_ref() { - ctx.subscribe_to_model(ambient_agent_view_model, |_, _, _, ctx| ctx.notify()); + ctx.subscribe_to_model(ambient_agent_view_model, |_, _, event, ctx| { + if matches!( + event, + AmbientAgentViewModelEvent::DispatchedAgent + | AmbientAgentViewModelEvent::FollowupDispatched + | AmbientAgentViewModelEvent::SessionReady { .. } + | AmbientAgentViewModelEvent::FollowupSessionReady { .. } + | AmbientAgentViewModelEvent::Failed { .. } + | AmbientAgentViewModelEvent::Cancelled + | AmbientAgentViewModelEvent::NeedsGithubAuth + | AmbientAgentViewModelEvent::HarnessCommandStarted + ) { + ctx.notify(); + } + }); } ctx.subscribe_to_model(&context_model, |_, _, event, ctx| { diff --git a/app/src/ai/blocklist/block/view_impl.rs b/app/src/ai/blocklist/block/view_impl.rs index cd3fb6718..6f23b1775 100644 --- a/app/src/ai/blocklist/block/view_impl.rs +++ b/app/src/ai/blocklist/block/view_impl.rs @@ -915,11 +915,11 @@ impl View for AIBlock { query_and_index .as_ref() .is_some_and(|(query_for_display, ..)| { - let has_optimistic_user_query = self - .ambient_agent_view_model - .as_ref() - .is_some_and(|model| { - model.as_ref(app).has_optimistic_user_query(query_for_display) + let has_optimistic_user_query = + self.ambient_agent_view_model.as_ref().is_some_and(|model| { + model + .as_ref(app) + .has_optimistic_user_query(query_for_display) }); should_hide_ai_block_query_and_header( has_optimistic_user_query, diff --git a/app/src/ai/blocklist/controller/shared_session.rs b/app/src/ai/blocklist/controller/shared_session.rs index a004cab8e..2cf99e6fd 100644 --- a/app/src/ai/blocklist/controller/shared_session.rs +++ 
b/app/src/ai/blocklist/controller/shared_session.rs @@ -113,15 +113,9 @@ impl BlocklistAIController { self.find_existing_conversation_by_server_token(&init_event.conversation_id, ctx); let conversation_id = existing_conversation_id .inspect(|conversation_id| { - // The local conversation is bound to a cloud-side session, so the cloud agent - // is the source of truth for user inputs going forward. Mark it as a shared- - // session view so `apply_client_actions` reconstructs UserQuery / ActionResult - // inputs from the cloud agent's response messages — without this, the local - // exchange's inputs stay empty and the AI block has no user query to render. - // Idempotent for conversations that already have the flag set (e.g. regular - // cloud mode, where `start_new_conversation` set it at creation time); - // important for REMOTE-1519 local-to-cloud handoff, where the local fork - // started as a non-shared-session conversation. + // The local conversation is bound to a cloud-side session, so mark it as a + // shared-session view; otherwise `apply_client_actions` won't reconstruct + // UserQuery / ActionResult inputs from the cloud agent's response messages. history.update(ctx, |history, _| { history.set_viewing_shared_session_for_conversation(*conversation_id, true); }); @@ -250,12 +244,9 @@ impl BlocklistAIController { } drop(model); - // Only skip the replayed response stream when we already have a local - // exchange whose `server_output_id` matches its `request_id`. New - // exchanges that the cloud agent appended after the local fork (e.g. - // the user's first submitted prompt for a REMOTE-1519 local-to-cloud - // handoff pane) carry request_ids we have never seen and must flow - // through normally so the viewer's blocklist picks them up. + // Only skip the replayed response when we already have a local exchange whose + // `server_output_id` matches `request_id`. New exchanges (e.g. 
the user's first + // post-handoff prompt) carry unseen request_ids and must flow through normally. let history = BlocklistAIHistoryModel::as_ref(ctx); let known_server_output_ids: Vec = history .conversation(&conversation_id) diff --git a/app/src/ai/blocklist/history_model_test.rs b/app/src/ai/blocklist/history_model_test.rs index de99ed911..b8e318732 100644 --- a/app/src/ai/blocklist/history_model_test.rs +++ b/app/src/ai/blocklist/history_model_test.rs @@ -1355,10 +1355,7 @@ fn test_fork_then_bind_handoff_token_resolves_to_forked_conversation() { // Bind the cloud T_C returned by `prepare-fork` to the forked conversation. history_model.update(&mut app, |model, _| { - model.set_server_conversation_token_for_conversation( - forked_id, - "cloud-T".to_string(), - ); + model.set_server_conversation_token_for_conversation(forked_id, "cloud-T".to_string()); }); let cloud_token = ServerConversationToken::new("cloud-T".to_string()); @@ -1371,3 +1368,98 @@ fn test_fork_then_bind_handoff_token_resolves_to_forked_conversation() { }); }); } + +/// REMOTE-1519 local-to-cloud handoff requires `preserve_task_ids: true` so the local fork's +/// task store matches the cloud-side fork (a byte-for-byte GCS copy of the source). Verifies +/// that root and subtask ids are preserved across the fork, the subtask's `parent_task_id` +/// reference still points at the source's root id, and only the root task description is +/// prefixed. 
+#[test] +fn test_fork_conversation_preserves_task_ids_when_requested() { + use crate::ai::agent::conversation::AIConversation; + use crate::persistence::model::AgentConversationData; + use crate::test_util::ai_agent_tasks::{create_api_subtask, create_api_task, create_message}; + + App::test((), |mut app| async move { + initialize_settings_for_tests(&mut app); + + let (sender, _receiver) = std::sync::mpsc::sync_channel(2); + let mut global_resource_handles = GlobalResourceHandles::mock(&mut app); + global_resource_handles.model_event_sender = Some(sender); + app.add_singleton_model(|_| GlobalResourceHandlesProvider::new(global_resource_handles)); + + let history_model = app.add_singleton_model(|_| BlocklistAIHistoryModel::new(vec![], &[])); + let terminal_view_id = EntityId::new(); + + let source_id = AIConversationId::new(); + let mut root_task = create_api_task( + "root-task-id", + vec![create_message("root-msg", "root-task-id")], + ); + root_task.description = "Original root".to_string(); + let mut subtask = create_api_subtask( + "subtask-id", + "root-task-id", + vec![create_message("sub-msg", "subtask-id")], + ); + subtask.description = "Original subtask".to_string(); + let source = AIConversation::new_restored( + source_id, + vec![root_task, subtask], + Some(AgentConversationData { + server_conversation_token: Some("src-token".to_string()), + conversation_usage_metadata: None, + reverted_action_ids: None, + forked_from_server_conversation_token: None, + artifacts_json: None, + parent_agent_id: None, + agent_name: None, + parent_conversation_id: None, + run_id: None, + autoexecute_override: None, + last_event_sequence: None, + }), + ) + .expect("restored source conversation should build"); + history_model.update(&mut app, |model, ctx| { + model.restore_conversations(terminal_view_id, vec![source], ctx); + }); + + history_model.update(&mut app, |model, ctx| { + let source = model + .conversation(&source_id) + .expect("source conversation must be in memory after 
restore") + .clone(); + let forked = model + .fork_conversation(&source, "[Fork] ", true, ctx) + .expect("fork must succeed when sqlite sender is wired up"); + + let forked_tasks: Vec<&warp_multi_agent_api::Task> = + forked.all_tasks().filter_map(|t| t.source()).collect(); + let forked_root = forked_tasks + .iter() + .find(|t| t.id == "root-task-id") + .expect("root task id must be preserved across fork"); + let forked_subtask = forked_tasks + .iter() + .find(|t| t.id == "subtask-id") + .expect("subtask id must be preserved across fork"); + assert_eq!( + forked_subtask + .dependencies + .as_ref() + .map(|d| d.parent_task_id.as_str()), + Some("root-task-id"), + "subtask must still reference the original root task id", + ); + assert_eq!( + forked_root.description, "[Fork] Original root", + "root task description must be prefixed", + ); + assert_eq!( + forked_subtask.description, "Original subtask", + "subtask description must not be prefixed", + ); + }); + }); +} diff --git a/app/src/terminal/input.rs b/app/src/terminal/input.rs index 3742830c6..a698db0a2 100644 --- a/app/src/terminal/input.rs +++ b/app/src/terminal/input.rs @@ -2142,14 +2142,8 @@ impl Input { }); } }); - // REMOTE-1519: chip-click handoff prep+upload failures arrive - // here so we can surface the error as a toast. The editor - // buffer is intentionally left alone — the user's prompt was - // never cleared (chip-click happens before submit), so there - // is nothing to restore. - if let AmbientAgentViewModelEvent::HandoffSubmissionFailed { error_message } = - event - { + // Surface async snapshot prep+upload failures as a toast. 
+ if let AmbientAgentViewModelEvent::HandoffPrepFailed { error_message } = event { let window_id = ctx.window_id(); let toast_message = format!("Failed to prepare cloud handoff: {error_message}"); ToastStack::handle(ctx).update(ctx, |ts, ctx| { @@ -2160,13 +2154,8 @@ impl Input { ); }); } - // Re-render on status-footer transitions (V1 cloud-mode setup) and on the - // status-affecting events that decide whether the input is in its composing - // shape. The composing-shape transitions matter for the V1 handoff path: - // its submit goes through `submit_handoff` which only flips the model to - // `WaitingForSession` after the async prep+upload completes, so the input - // would otherwise keep rendering the composing chrome (harness selector, - // attachment chips) until something else triggers a notify. + // Re-render on status-footer transitions and on status-affecting events that + // decide whether the input is in its composing shape. let should_notify = handle.as_ref(ctx).should_show_status_footer() || matches!( event, @@ -2178,7 +2167,7 @@ impl Input { | AmbientAgentViewModelEvent::Cancelled | AmbientAgentViewModelEvent::NeedsGithubAuth | AmbientAgentViewModelEvent::HarnessSelected - | AmbientAgentViewModelEvent::HandoffSubmissionFailed { .. } + | AmbientAgentViewModelEvent::HandoffPrepFailed { .. } ); if should_notify { ctx.notify(); diff --git a/app/src/terminal/view.rs b/app/src/terminal/view.rs index aa4839589..c78bf0ccf 100644 --- a/app/src/terminal/view.rs +++ b/app/src/terminal/view.rs @@ -2876,6 +2876,17 @@ impl TerminalView { .is_some_and(|index| index > 0) } + /// True when this pane's cloud agent is in any pre-first-exchange phase. + /// Thin wrapper over the free function that threads `self`'s handles. 
+ fn is_cloud_agent_pre_first_exchange(&self, app: &AppContext) -> bool { + is_cloud_agent_pre_first_exchange( + self.ambient_agent_view_model.as_ref(), + &self.agent_view_controller, + &self.model, + app, + ) + } + pub fn create_sync_event_based_on_terminal_state(&self, app_ctx: &AppContext) -> SyncEvent { if !matches!( self.model.lock().terminal_input_state(), @@ -6880,12 +6891,7 @@ impl TerminalView { // agent exchange arrives, we hide the interactive input view. A non-interactive footer is // rendered instead (see `TerminalView::render`). if !FeatureFlag::CloudModeSetupV2.is_enabled() - && is_cloud_agent_pre_first_exchange( - self.ambient_agent_view_model.as_ref(), - &self.agent_view_controller, - &self.model, - app, - ) + && self.is_cloud_agent_pre_first_exchange(app) { return false; } @@ -25728,14 +25734,7 @@ impl View for TerminalView { if self.is_input_box_visible(&model, app) { column.add_child(self.render_input()); - } else if !model.is_read_only() - && is_cloud_agent_pre_first_exchange( - self.ambient_agent_view_model.as_ref(), - &self.agent_view_controller, - &self.model, - app, - ) - { + } else if !model.is_read_only() && self.is_cloud_agent_pre_first_exchange(app) { column.add_child(ambient_agent::render_loading_footer(appearance)); } else if self.show_remote_server_loading_footer(&model, app) { column.add_child( diff --git a/app/src/terminal/view/ambient_agent/mod.rs b/app/src/terminal/view/ambient_agent/mod.rs index c070968cf..793561b37 100644 --- a/app/src/terminal/view/ambient_agent/mod.rs +++ b/app/src/terminal/view/ambient_agent/mod.rs @@ -19,18 +19,21 @@ pub use host_selector::{ Host, HostSelector, HostSelectorAction, HostSelectorEvent, NakedHeaderButtonTheme, }; pub use loading_screen::{render_cloud_mode_error_screen, render_cloud_mode_loading_screen}; -pub(crate) use model::PendingHandoff; pub use model::{ AgentProgress, AmbientAgentViewModel, AmbientAgentViewModelEvent, HandoffSubmissionState, Status, }; +pub(crate) use 
model::{PendingHandoff, SnapshotPrepStatus}; pub use model_selector::{ModelSelector, ModelSelectorAction, ModelSelectorEvent}; pub use progress::{render_progress, ProgressProps, ProgressStep, ProgressStepState}; pub use progress_ui_state::AmbientAgentProgressUIState; pub use tips::{get_cloud_mode_tips, CloudModeTip}; + use parking_lot::FairMutex; use std::sync::Arc; use warp_core::features::FeatureFlag; +use warpui::geometry::vector::Vector2F; +use warpui::{AppContext, ModelHandle, ViewHandle, WindowId}; use crate::ai::blocklist::agent_view::{AgentViewController, AgentViewState}; use crate::pane_group::TerminalViewResources; @@ -38,8 +41,6 @@ use crate::terminal::shared_session; use crate::terminal::TerminalManager; use crate::terminal::TerminalModel; use crate::terminal::TerminalView; -use warpui::geometry::vector::Vector2F; -use warpui::{AppContext, ModelHandle, ViewHandle, WindowId}; /// Creates a cloud mode terminal view and manager for ambient agent sessions. /// @@ -116,7 +117,7 @@ pub fn create_cloud_mode_view( | AmbientAgentViewModelEvent::HostSelected | AmbientAgentViewModelEvent::HarnessCommandStarted | AmbientAgentViewModelEvent::PendingHandoffChanged - | AmbientAgentViewModelEvent::HandoffSubmissionFailed { .. } + | AmbientAgentViewModelEvent::HandoffPrepFailed { .. } | AmbientAgentViewModelEvent::UpdatedSetupCommandVisibility => {} } }); @@ -126,19 +127,8 @@ pub fn create_cloud_mode_view( } /// Returns `true` when a cloud agent shared session is in any pre-first-exchange phase — -/// either still spawning (loading: "Connecting to Host" / "Creating Environment" / -/// "Starting Environment") or running setup commands before the first agent turn. In this -/// state, we hide the interactive input and render a loading footer instead. 
-/// -/// During the loading phase the view-model status is `WaitingForSession`; once the cloud -/// session is ready and setup commands are running it transitions to `AgentRunning` and we -/// rely on `is_executing_oz_environment_startup_commands` (initialized true on cloud-agent -/// pane creation, flipped false on the first `AppendedExchange`) to decide whether the -/// agent has produced its first real turn yet. The flag is correct for both fresh cloud -/// panes and REMOTE-1519 local-to-cloud handoff panes (whose forked conversation already -/// has exchanges from the local source, but whose cloud agent has not yet produced its -/// first new turn) — the `AppendedExchange` handler in `view.rs` ensures the flag only -/// flips to false on a NEW cloud turn, not on replay-driven events. +/// either still spawning (loading screen) or running setup commands before the first +/// agent turn. In this state, we hide the interactive input and render a loading footer. pub fn is_cloud_agent_pre_first_exchange( ambient_agent_view_model: Option<&ModelHandle>, agent_view_controller: &ModelHandle, diff --git a/app/src/terminal/view/ambient_agent/model.rs b/app/src/terminal/view/ambient_agent/model.rs index 5cea32230..150fcb188 100644 --- a/app/src/terminal/view/ambient_agent/model.rs +++ b/app/src/terminal/view/ambient_agent/model.rs @@ -65,20 +65,46 @@ pub enum SessionStartupKind { Followup, } -/// State of an in-flight local-to-cloud handoff submission. -/// -/// Gates `submit_handoff` against double-submits. Stays `Idle` from the moment -/// the pane opens; flips to `Starting` when the user submits and the orchestrator -/// runs; flips to `Failed` if prep / upload fails so the user can retry by -/// re-submitting from the same pane. +/// Gates `submit_handoff` against double-submits. #[derive(Debug, Clone, PartialEq, Eq, Default)] pub enum HandoffSubmissionState { #[default] Idle, Starting, +} + +/// Outcome of the chip-click async snapshot upload. 
+#[derive(Debug, Clone, PartialEq, Eq, Default)]
+pub enum SnapshotPrepStatus {
+    /// Upload is still in flight, or has not started yet.
+    #[default]
+    Pending,
+    /// Touched workspace was empty so no upload happened. The cloud agent will
+    /// start with no rehydration content.
+    SkippedEmptyWorkspace,
+    /// Upload succeeded; the inner `prep_token` is sent to the server on spawn.
+    Uploaded(String),
+    /// Upload failed. The error message is surfaced as a toast via
+    /// `HandoffPrepFailed`.
     Failed(String),
 }
+impl SnapshotPrepStatus {
+    /// True when the upload has settled successfully (uploaded or skipped).
+    /// Pending and Failed both block submit.
+    fn is_settled(&self) -> bool {
+        matches!(self, Self::Uploaded(_) | Self::SkippedEmptyWorkspace)
+    }
+
+    /// Returns the `handoff_prep_token` to send on spawn, if any.
+    fn prep_token(&self) -> Option<String> {
+        match self {
+            Self::Uploaded(token) => Some(token.clone()),
+            Self::SkippedEmptyWorkspace | Self::Pending | Self::Failed(_) => None,
+        }
+    }
+}
+
 /// Per-pane handoff context. Seeded by the chip / slash command's open path on a
 /// fresh cloud-mode pane and consumed by `submit_handoff`. Its presence is the
 /// single source of truth for "this pane is in handoff mode" via
@@ -89,13 +115,10 @@ pub(crate) struct PendingHandoff {
     /// chip-click time. Sent under `conversation_id` (resume semantics) on the
     /// subsequent `POST /agent/runs` request so the new task picks up the fork.
     pub(crate) forked_conversation_id: String,
-    /// `None` until `derive_touched_workspace` completes (REMOTE-1486).
+    /// `None` until `derive_touched_workspace` completes.
     pub(crate) touched_workspace: Option,
-    /// Snapshot upload outcome: `None` while the upload is in flight or never
-    /// started; `Some(Some(token))` once minted (the standard case);
-    /// `Some(None)` when the workspace was empty so no upload happened.
-    /// `submit_handoff` requires this to be `Some(_)` before spawning.
-    pub(crate) snapshot_prep_token: Option<Option<String>>,
+    /// Outcome of the async snapshot upload.
+    pub(crate) snapshot_prep: SnapshotPrepStatus,
     /// Gates submit — prevents double-submitting while the spawn is in flight.
     pub(crate) submission_state: HandoffSubmissionState,
 }
@@ -332,27 +355,22 @@ impl AmbientAgentViewModel {
         CLIAgent::from_harness(self.harness)
     }
-    /// True when this pane is a local-to-cloud handoff pane. Flipped on the moment
-    /// the chip or `/oz-cloud-handoff` slash command opens this pane (see
-    /// `Workspace::start_local_to_cloud_handoff`) and stays true through and past the
-    /// spawn, so post-spawn flows (queued-prompt rendering, V2-input suppression,
-    /// submit interception) all observe the same source of truth.
+    /// True when this pane is a local-to-cloud handoff pane. Set when the handoff opens
+    /// the pane and stays true through and past the spawn.
    pub(crate) fn is_local_to_cloud_handoff(&self) -> bool {
        self.pending_handoff.is_some()
    }
-    /// True when this pane is a handoff pane AND the async
-    /// `derive_touched_workspace` derivation has finished AND no submission is
-    /// already in flight. Callers in the input layer use this to gate clearing
-    /// the editor buffer on submit — if derivation hasn't completed yet, we
-    /// must leave the prompt and pending attachments alone instead of
-    /// silently dropping them on the floor.
+    /// True when this pane is a handoff pane and the touched-workspace derivation +
+    /// snapshot upload have both settled and no submission is in flight. Used by the
+    /// input layer to gate clearing the editor buffer on submit.
pub(crate) fn is_handoff_ready_to_submit(&self) -> bool { let Some(handoff) = self.pending_handoff.as_ref() else { return false; }; handoff.touched_workspace.is_some() - && !matches!(handoff.submission_state, HandoffSubmissionState::Starting) + && handoff.snapshot_prep.is_settled() + && matches!(handoff.submission_state, HandoffSubmissionState::Idle) } /// Seeds the handoff context onto this pane. Called by the workspace bootstrap @@ -380,51 +398,35 @@ impl AmbientAgentViewModel { ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged); } - /// Updates the submission state on the pending handoff. No-op when no handoff - /// context is set. - pub(crate) fn set_pending_handoff_submission_state( + /// Records the outcome of the async snapshot upload. The standard success + /// case is `Uploaded(token)`; `SkippedEmptyWorkspace` when the workspace + /// had nothing to upload; `Failed` is set by `record_handoff_prep_failed`. + /// No-op when no handoff context is set. + pub(crate) fn set_pending_handoff_snapshot_prep( &mut self, - state: HandoffSubmissionState, + snapshot_prep: SnapshotPrepStatus, ctx: &mut ModelContext, ) { let Some(handoff) = self.pending_handoff.as_mut() else { return; }; - handoff.submission_state = state; + handoff.snapshot_prep = snapshot_prep; ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged); } - /// Records a chip-click handoff prep+upload failure on the pending handoff. - /// Flips the submission state to `Failed` (so the status footer / banner - /// reflects the error) and emits `HandoffSubmissionFailed` so the input - /// layer can surface a user-visible toast. + /// Records a snapshot prep+upload failure on the pending handoff. Sets + /// `snapshot_prep` to `Failed` (so submit stays gated) and emits + /// `HandoffPrepFailed` so the input layer can surface a user-visible toast. 
pub(crate) fn record_handoff_prep_failed( &mut self, error_message: String, ctx: &mut ModelContext, ) { - self.set_pending_handoff_submission_state( - HandoffSubmissionState::Failed(error_message.clone()), + self.set_pending_handoff_snapshot_prep( + SnapshotPrepStatus::Failed(error_message.clone()), ctx, ); - ctx.emit(AmbientAgentViewModelEvent::HandoffSubmissionFailed { error_message }); - } - - /// Records the outcome of the chip-click async snapshot upload on the pending - /// handoff so `submit_handoff` can read the prep token without re-running - /// the upload. `Some(token)` is the standard success case; `None` means the - /// touched workspace was empty (no upload happened, no rehydration needed). - /// No-op when no handoff context is set. - pub(crate) fn set_pending_handoff_snapshot_prep_token( - &mut self, - prep_token: Option, - ctx: &mut ModelContext, - ) { - let Some(handoff) = self.pending_handoff.as_mut() else { - return; - }; - handoff.snapshot_prep_token = Some(prep_token); - ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged); + ctx.emit(AmbientAgentViewModelEvent::HandoffPrepFailed { error_message }); } /// Whether the harness CLI has started running. Only meaningful for non-oz runs. @@ -1155,13 +1157,9 @@ impl AmbientAgentViewModel { ctx.emit(AmbientAgentViewModelEvent::Cancelled); } - /// Drive the local-to-cloud handoff submission for this pane. - /// - /// Called by the cloud-mode submit dispatch when the pane has `pending_handoff` - /// set. The fork (REMOTE-1519) and snapshot upload (REMOTE-1486) both happen - /// at chip-click time — this method just reads the cached `forked_conversation_id` - /// and `snapshot_prep_token` off the pending handoff and routes through the - /// same `spawn_agent_with_request` path that regular cloud-mode runs use. + /// Drive the local-to-cloud handoff submission for this pane. 
Reads the cached + /// `forked_conversation_id` and `snapshot_prep` off the pending handoff and routes + /// through `spawn_agent_with_request`. Caller must check `is_handoff_ready_to_submit`. pub(crate) fn submit_handoff( &mut self, prompt: String, @@ -1180,21 +1178,21 @@ impl AmbientAgentViewModel { log::warn!("submit_handoff called before touched-workspace derivation completed"); return; } - let Some(prep_token) = handoff.snapshot_prep_token.clone() else { - log::warn!("submit_handoff called before snapshot upload completed"); + if !handoff.snapshot_prep.is_settled() { + log::warn!( + "submit_handoff called with unsettled snapshot_prep: {:?}", + handoff.snapshot_prep + ); return; - }; + } + let prep_token = handoff.snapshot_prep.prep_token(); let forked_conversation_id = handoff.forked_conversation_id.clone(); handoff.submission_state = HandoffSubmissionState::Starting; ctx.emit(AmbientAgentViewModelEvent::PendingHandoffChanged); - // Build the spawn config from the model so the env selector chip's - // pick (and `WARP_CLOUD_MODE_DEFAULT_HOST` / model / harness defaults) - // propagate into the spawn request. + // Build the spawn config so the env selector chip + `/plan` / `/orchestrate` + // mode prefix propagate into the request, matching a regular cloud-mode spawn. let config = Some(self.build_default_spawn_config(ctx)); - // Strip any `/plan` / `/orchestrate` prefix from the prompt and surface - // it as the request's `mode` so the cloud agent honors the same modes - // the local-mode spawn path does. let (prompt, mode) = extract_user_query_mode(prompt); let request = SpawnAgentRequest { prompt, @@ -1289,15 +1287,11 @@ pub enum AmbientAgentViewModelEvent { /// Fires once per run and signals the transition out of the pre-first-exchange phase /// for claude / gemini / other third-party harnesses. HarnessCommandStarted, - /// The pane's `pending_handoff` was updated — derivation completed, submission - /// state transitioned, etc. 
+ /// The pane's `pending_handoff` was updated. PendingHandoffChanged, - /// The handoff prep + upload phase failed at chip-click time. The input - /// layer subscribes to surface the error as a toast; the editor buffer is - /// untouched because the user's prompt was never cleared (submit is gated - /// behind the cached prep token, so a failed upload prevents submit - /// entirely instead of consuming the prompt). - HandoffSubmissionFailed { + /// The async snapshot prep+upload failed. The input layer subscribes to + /// surface the error as a toast. + HandoffPrepFailed { error_message: String, }, diff --git a/app/src/terminal/view/ambient_agent/view_impl.rs b/app/src/terminal/view/ambient_agent/view_impl.rs index 85b11fcde..d0fddde62 100644 --- a/app/src/terminal/view/ambient_agent/view_impl.rs +++ b/app/src/terminal/view/ambient_agent/view_impl.rs @@ -31,9 +31,7 @@ use super::loading_screen::{ render_cloud_mode_cancelled_screen, render_cloud_mode_error_screen, render_cloud_mode_github_auth_required_screen, render_cloud_mode_loading_screen, }; -use super::{ - is_cloud_agent_pre_first_exchange, AmbientAgentEntryBlock, AmbientAgentViewModelEvent, -}; +use super::{AmbientAgentEntryBlock, AmbientAgentViewModelEvent}; use crate::terminal::view::Event as TerminalViewEvent; const CHILD_AGENT_GITHUB_AUTH_REQUIRED_BLOCKED_ACTION: &str = "GitHub authentication required before starting the child agent."; @@ -364,15 +362,11 @@ impl TerminalView { ctx.notify(); } AmbientAgentViewModelEvent::PendingHandoffChanged => { - // REMOTE-1486: re-render so the handoff banner picks up the new - // touched-workspace data, submission state, or pending-handoff - // teardown. ctx.notify(); } - AmbientAgentViewModelEvent::HandoffSubmissionFailed { .. } => { - // The user-visible toast is handled by `Input`'s subscription - // to the same event; nothing for the terminal view to do here - // beyond the implicit re-render. + AmbientAgentViewModelEvent::HandoffPrepFailed { .. 
} => { + // The toast is surfaced by `Input`'s subscription; this just + // triggers a re-render of pane chrome. ctx.notify(); } AmbientAgentViewModelEvent::UpdatedSetupCommandVisibility => (), @@ -392,12 +386,7 @@ impl TerminalView { return; }; - if !is_cloud_agent_pre_first_exchange( - self.ambient_agent_view_model.as_ref(), - &self.agent_view_controller, - &self.model, - ctx, - ) { + if !self.is_cloud_agent_pre_first_exchange(ctx) { return; } @@ -432,7 +421,7 @@ impl TerminalView { .set_did_execute_a_setup_command(true); }); - let setup_command_text = ctx.add_typed_action_view(|ctx| { + let setup_command_text = ctx.add_typed_action_view(|ctx| { super::CloudModeSetupTextBlock::new( ambient_agent_view_model.clone(), self.agent_view_controller.clone(), diff --git a/app/src/terminal/view/pane_impl.rs b/app/src/terminal/view/pane_impl.rs index 48b245775..678e7b010 100644 --- a/app/src/terminal/view/pane_impl.rs +++ b/app/src/terminal/view/pane_impl.rs @@ -954,6 +954,7 @@ impl TerminalView { || is_cloud_agent_pre_first_exchange( self.ambient_agent_view_model.as_ref(), &self.agent_view_controller, + &self.model, ctx, ) } diff --git a/app/src/workspace/view.rs b/app/src/workspace/view.rs index 1b78bc4ec..217a24096 100644 --- a/app/src/workspace/view.rs +++ b/app/src/workspace/view.rs @@ -27,6 +27,7 @@ use self::vertical_tabs::{ pub(crate) use onboarding::OnboardingTutorial; use crate::ai::active_agent_views_model::ActiveAgentViewsModel; +use crate::ai::agent::conversation::AIConversation; use crate::ai::agent_conversations_model::AgentConversationsModel; use crate::ai::agent_conversations_model::ConversationOrTask; use crate::ai::agent_management::notifications::toast_stack::AgentNotificationToastStack; @@ -37,15 +38,22 @@ use crate::ai::agent_management::notifications::NotificationFilter; use crate::ai::agent_management::telemetry::AgentManagementTelemetryEvent; use crate::ai::agent_management::view::{AgentManagementView, AgentManagementViewEvent}; use 
crate::ai::agent_management::AgentManagementEvent; +use crate::ai::agent_sdk::driver::upload_snapshot_for_handoff; use crate::ai::ambient_agents::telemetry::{CloudAgentTelemetryEvent, CloudModeEntryPoint}; use crate::ai::ambient_agents::AmbientAgentTaskId; use crate::ai::blocklist::agent_view::agent_input_footer::editor::AgentToolbarEditorMode; +use crate::ai::blocklist::agent_view::agent_input_footer::sort_environments_by_recency; use crate::ai::blocklist::agent_view::AgentViewEntryOrigin; -use crate::ai::blocklist::history_model::load_conversation_from_server; +use crate::ai::blocklist::handoff::touched_repos::{ + derive_touched_workspace, extract_paths_from_conversation, pick_handoff_overlap_env, +}; +use crate::ai::blocklist::history_model::{load_conversation_from_server, CloudConversationData}; use crate::ai::blocklist::suggested_agent_mode_workflow_modal::SuggestedAgentModeWorkflowAndId; use crate::ai::blocklist::suggested_rule_modal::{ SuggestedRuleAndId, SuggestedRuleModal, SuggestedRuleModalEvent, }; +use crate::ai::blocklist::FORK_PREFIX; +use crate::ai::cloud_environments::CloudAmbientAgentEnvironment; use crate::ai::conversation_utils; use crate::ai::document::ai_document_model::{AIDocumentId, AIDocumentModel}; use crate::ai::llms::LLMPreferences; @@ -110,16 +118,6 @@ use crate::util::openable_file_type::FileTarget; #[cfg(feature = "local_fs")] use crate::util::openable_file_type::{resolve_file_target_with_editor_choice, EditorLayout}; -use crate::ai::agent::conversation::AIConversation; -use crate::ai::agent_sdk::driver::upload_snapshot_for_handoff; -use crate::ai::blocklist::agent_view::agent_input_footer::sort_environments_by_recency; -use crate::ai::blocklist::handoff::touched_repos::{ - derive_touched_workspace, extract_paths_from_conversation, pick_handoff_overlap_env, -}; -use crate::ai::blocklist::history_model::CloudConversationData; -use crate::ai::blocklist::FORK_PREFIX; -use crate::ai::cloud_environments::CloudAmbientAgentEnvironment; -use 
crate::server::server_api::ai::PrepareHandoffForkRequest; #[cfg(not(target_family = "wasm"))] use crate::terminal::cli_agent_sessions::plugin_manager::{plugin_manager_for, PluginModalKind}; use crate::terminal::cli_agent_sessions::{CLIAgentSessionsModel, CLIAgentSessionsModelEvent}; @@ -177,7 +175,7 @@ use crate::quit_warning::UnsavedStateSummary; use crate::search::command_palette::view::NavigationMode; use crate::search::slash_command_menu::static_commands::commands; use crate::server::network_log_pane_manager::NetworkLogPaneManager; -use crate::server::server_api::ai::AIClient; +use crate::server::server_api::ai::{AIClient, PrepareHandoffForkRequest}; use crate::server::server_api::auth::AuthClient; use crate::settings::{ AISettings, AISettingsChangedEvent, CodeSettings, CodeSettingsChangedEvent, CtrlTabBehavior, @@ -318,7 +316,9 @@ use crate::terminal::session_settings::{ }; use crate::terminal::settings::{SpacingMode, TerminalSettings}; use crate::terminal::shell::ShellType; -use crate::terminal::view::ambient_agent::{HandoffSubmissionState, PendingHandoff}; +use crate::terminal::view::ambient_agent::{ + HandoffSubmissionState, PendingHandoff, SnapshotPrepStatus, +}; #[cfg(feature = "local_tty")] use crate::terminal::view::docker_sandbox::DEFAULT_DOCKER_SANDBOX_BASE_IMAGE; use crate::terminal::{self, SizeInfo, TerminalView}; @@ -12988,18 +12988,16 @@ impl Workspace { /// Open a local-to-cloud handoff pane next to the active local pane. Triggered /// by the `/oz-cloud-handoff` slash command and the "Hand off to cloud" footer - /// chip (REMOTE-1486 / REMOTE-1519). + /// chip. /// /// When the active conversation is non-empty and has a server token, mints a /// server-side fork via `POST /agent/handoff/prepare-fork`, then splits a fresh /// cloud-mode pane next to the local pane and pre-populates it with the forked /// conversation. 
    ///
-    /// All failure modes — ineligibility (no active conversation, empty, or no
-    /// synced server token), prepare-fork RPC failure, and local fork
-    /// materialization failure — surface an error toast in the local window and
-    /// **do not open** any pane. The local conversation is unaffected and the
-    /// user can retry by re-clicking the chip.
+    /// All failure modes — ineligibility, prepare-fork RPC failure, and local fork
+    /// materialization failure — surface an error toast and **do not open** any
+    /// pane. The local conversation is unaffected.
     fn start_local_to_cloud_handoff(
         &mut self,
         initial_prompt: Option<String>,
@@ -13009,8 +13007,6 @@
            return;
        }
-        // Resolve the source conversation (if any). The current active session view's
-        // active conversation drives the fork pointer and the touched-repo derivation.
         let source = self
             .active_tab_pane_group()
             .as_ref(ctx)
@@ -13031,17 +13027,18 @@
         });
         let Some((source_conversation, source_token)) = source else {
-            // Not eligible: surface an error toast and bail out. We deliberately
-            // do not open a fresh cloud-mode pane here — the chip is a
-            // hand-off-this-conversation action, and silently opening an
-            // unrelated fresh pane hides the failure from the user.
-            self.show_handoff_error_toast(ctx);
+            // Ineligible: don't open a fresh unrelated pane — the chip is a
+            // hand-off-this-conversation action.
+            let window_id = ctx.window_id();
+            WorkspaceToastStack::handle(ctx).update(ctx, |toast_stack, ctx| {
+                let toast = DismissibleToast::error(
+                    "Failed to prepare handoff. Please try again.".to_owned(),
+                );
+                toast_stack.add_ephemeral_toast(toast, window_id, ctx);
+            });
             return;
         };
-        // Eligible: kick off the prepare-fork RPC. The pane is **not** opened
-        // until the fork resolves, so a failed fork doesn't leave a stranded
-        // empty pane on screen.
let ai_client = ServerApiProvider::as_ref(ctx).get_ai_client(); let request = PrepareHandoffForkRequest { source_conversation_id: source_token.as_str().to_string(), @@ -13060,31 +13057,22 @@ impl Workspace { } Err(err) => { log::warn!("prepare_handoff_fork failed: {err:#}"); - me.show_handoff_error_toast(ctx); + let window_id = ctx.window_id(); + WorkspaceToastStack::handle(ctx).update(ctx, |toast_stack, ctx| { + let toast = DismissibleToast::error( + "Failed to prepare handoff. Please try again.".to_owned(), + ); + toast_stack.add_ephemeral_toast(toast, window_id, ctx); + }); } }, ); } - /// Surface the shared "Failed to prepare handoff" toast in the local - /// window. Used by every failure path in `start_local_to_cloud_handoff` - /// (ineligibility, prepare-fork RPC failure, local fork materialization - /// failure) so the user sees a single consistent error treatment. - fn show_handoff_error_toast(&self, ctx: &mut ViewContext) { - let window_id = ctx.window_id(); - WorkspaceToastStack::handle(ctx).update(ctx, |toast_stack, ctx| { - let toast = DismissibleToast::error( - "Failed to prepare handoff. Please try again.".to_owned(), - ); - toast_stack.add_ephemeral_toast(toast, window_id, ctx); - }); - } - - /// Finishes the local-to-cloud handoff open after the prepare-fork RPC - /// returns. Materializes a local fork bound to the server's forked - /// conversation id, splits a fresh cloud-mode pane next to the active - /// pane, restores the forked conversation into it, seeds `PendingHandoff`, - /// and kicks off async derivation + snapshot upload (REMOTE-1486). + /// Finishes the local-to-cloud handoff open after the prepare-fork RPC returns. + /// Materializes a local fork bound to the server's forked conversation id, + /// splits a fresh cloud-mode pane, restores the forked conversation into it, + /// seeds `PendingHandoff`, and kicks off async derivation + snapshot upload. 
 fn complete_local_to_cloud_handoff_open(
         &mut self,
         source_conversation: AIConversation,
@@ -13093,14 +13081,11 @@
         initial_prompt: Option<String>,
         ctx: &mut ViewContext,
     ) {
-        // Materialize the local fork up-front so the new pane has something to
-        // restore. `fork_conversation` already handles SQLite persistence and
-        // copies tasks / messages over from the source.
+        // Materialize the local fork up-front so the new pane has something to restore.
+        // Preserve source task ids so the local fork's task store matches the cloud-side
+        // fork (the cloud agent's ClientActions reference these task ids).
         let history_model = BlocklistAIHistoryModel::handle(ctx);
         let local_fork = match history_model.update(ctx, |history_model, ctx| {
-            // Preserve source task ids so the local fork's task store matches the cloud-side
-            // fork (which is a byte copy of the source's GCS data). The cloud agent's
-            // ClientActions reference these task ids and must resolve locally.
             history_model.fork_conversation(
                 &source_conversation,
                 FORK_PREFIX,
@@ -13111,13 +13096,18 @@
             Ok(forked) => forked,
             Err(err) => {
                 log::warn!("Failed to materialize local fork for handoff: {err:#}");
-                self.show_handoff_error_toast(ctx);
+                let window_id = ctx.window_id();
+                WorkspaceToastStack::handle(ctx).update(ctx, |toast_stack, ctx| {
+                    let toast = DismissibleToast::error(
+                        "Failed to prepare handoff. Please try again.".to_owned(),
+                    );
+                    toast_stack.add_ephemeral_toast(toast, window_id, ctx);
+                });
                 return;
             }
         };
         let local_fork_id = local_fork.id();
-        // Split the new cloud-mode pane next to the active pane.
         self.active_tab_pane_group().update(ctx, |pane_group, ctx| {
            pane_group.add_ambient_agent_pane(ctx);
        });
@@ -13140,7 +13130,6 @@
            return;
        };
-        // Pre-fill the prompt input if the slash command supplied one.
if let Some(prompt) = initial_prompt.as_deref().filter(|p| !p.is_empty()) { new_pane_view.update(ctx, |terminal_view, view_ctx| { terminal_view.input().update(view_ctx, |input, input_ctx| { @@ -13149,9 +13138,8 @@ impl Workspace { }); } - // Restore the forked conversation into the new pane so its AI exchanges - // are visible immediately. Mirrors the `/fork` in-current-pane flow at - // `Self::fork_ai_conversation`. + // Restore the forked conversation into the new pane so its AI exchanges are + // visible immediately. Mirrors the `/fork` in-current-pane flow. let local_fork_for_restore = local_fork.clone(); new_pane_view.update(ctx, |terminal_view, view_ctx| { terminal_view.restore_conversation_after_view_creation( @@ -13161,17 +13149,9 @@ impl Workspace { ); }); - // Bind the local fork's `server_conversation_token` to the forked - // conversation id minted by the server. Must run AFTER - // `restore_conversation_after_view_creation`, since `restore_conversations` - // overwrites the entry in `conversations_by_id` with the (token-less) - // clone we hand it. Binding here ensures that when the cloud agent's - // shared session connects with `StreamInit { conversation_id: T_C }`, - // `find_existing_conversation_by_server_token` finds the live fork and - // `should_skip_replayed_response_for_existing_conversation` correctly - // suppresses the replayed response stream — otherwise the replay would - // re-enter as new exchanges, flipping `is_executing_oz_environment_startup_commands` - // false and breaking setup-command block UI for the handoff pane. + // Bind the local fork to the cloud-side conversation id. Must happen AFTER + // restore: `restore_conversations` overwrites `conversations_by_id` with the + // token-less clone we passed in. 
history_model.update(ctx, |history_model, _| { history_model.set_server_conversation_token_for_conversation( local_fork_id, @@ -13179,25 +13159,19 @@ impl Workspace { ); }); - // Seed `PendingHandoff` so `is_local_to_cloud_handoff()` is true from - // here on. `submit_handoff` reads the cached `forked_conversation_id` - // and `snapshot_prep_token` directly from this struct — the orchestrator - // path that REMOTE-1486 used has been inlined into the async block below. let pending = PendingHandoff { forked_conversation_id: forked_conversation_id.clone(), touched_workspace: None, - snapshot_prep_token: None, + snapshot_prep: SnapshotPrepStatus::Pending, submission_state: HandoffSubmissionState::Idle, }; model_handle.update(ctx, |model, model_ctx| { model.set_pending_handoff(Some(pending), model_ctx); }); - // Kick off async background prep: derive the touched workspace, then - // upload the snapshot. The pane is fully interactive throughout — the - // user can scroll, type, and pick an env while this runs. The send - // button gate inside `submit_handoff` waits for both the workspace and - // the prep token to be cached before allowing a spawn. + // Async background prep: derive the touched workspace, then upload the + // snapshot. The pane is fully interactive throughout. `submit_handoff` + // gates on both completing before allowing a spawn. 
let async_model_handle = model_handle.clone(); let server_api_provider = ServerApiProvider::as_ref(ctx); let ai_client = server_api_provider.get_ai_client(); @@ -13231,8 +13205,17 @@ impl Workspace { } model.set_pending_handoff_workspace(derived_workspace, model_ctx); match upload_result { - Ok(prep_token) => { - model.set_pending_handoff_snapshot_prep_token(prep_token, model_ctx); + Ok(Some(prep_token)) => { + model.set_pending_handoff_snapshot_prep( + SnapshotPrepStatus::Uploaded(prep_token), + model_ctx, + ); + } + Ok(None) => { + model.set_pending_handoff_snapshot_prep( + SnapshotPrepStatus::SkippedEmptyWorkspace, + model_ctx, + ); } Err(err) => { log::warn!("Handoff snapshot upload failed: {err:#}"); diff --git a/specs/REMOTE-1519/TECH.md b/specs/REMOTE-1519/TECH.md index ba643c8df..5adae5694 100644 --- a/specs/REMOTE-1519/TECH.md +++ b/specs/REMOTE-1519/TECH.md @@ -2,16 +2,14 @@ Product spec: `specs/REMOTE-1519/PRODUCT.md` Linear: [REMOTE-1519](https://linear.app/warpdotdev/issue/REMOTE-1519/make-ui-better-for-local-cloud-handoff) ## Context -REMOTE-1486 shipped the V0 local-to-cloud handoff: a chip in the agent input footer (or `/oz-cloud-handoff`) opens a fresh cloud-mode pane next to the local pane, the user types a follow-up prompt, and on submit the client snapshots the workspace and spawns a cloud agent that's forked from the local conversation. +REMOTE-1486 shipped V0 of the local-to-cloud handoff: a chip in the agent input footer (or `/oz-cloud-handoff`) opens a fresh cloud-mode pane next to the local pane; on submit the client snapshots the workspace and spawns a cloud agent forked from the local conversation. That V0 has two rough edges this spec addresses: -1. 
**No hydration of the source conversation in the new pane.** The fork is materialized server-side at submit time only (`enqueueAgentRun` in `../warp-server-2/router/handlers/public_api/agent_webhooks.go:376-386` calls `ForkConversationForHandoff` and points `task.AgentConversationID` at the fork). Until the cloud agent's shared session connects and replays the conversation transcript, the new pane is blank. The cloud session's replay then re-broadcasts every exchange the user already saw in the local pane.
-2. **Setup-v2 affordances are not consistent with fresh cloud-mode runs.** A fresh cloud-mode pane uses `BlockList::set_is_executing_oz_environment_startup_commands(true)` (set in `app/src/terminal/model/terminal_model.rs:1238-1241`), which hides the active block, marks it as a setup command, and renders a "Running setup commands…" collapsible row above it (`CloudModeSetupTextBlock` in `app/src/terminal/view/ambient_agent/block/setup_command_text.rs`). The flag is reset on the first `AppendedExchange` (`app/src/terminal/view.rs:5113-5124`). For handoff panes the pre-populated conversation's exchanges trip that reset path early (when we restore them via `restore_conversations_on_view_creation`), unhiding the active block before the cloud session has even connected — so when the cloud agent's environment startup PTY output arrives it renders raw rather than wrapped in the setup-v2 surface.
+1. **No hydration of the source conversation in the new pane.** The fork is materialized server-side at submit time only. Until the cloud agent's shared session connects and replays the conversation transcript, the new pane is blank. The cloud session's replay then re-broadcasts every exchange the user already saw in the local pane.
+2. **Setup-v2 affordances render incorrectly.** A fresh cloud-mode pane shows a "Running setup commands…" collapsible row, a queued-prompt indicator, and a loading screen during the pre-session window (gated by `FeatureFlag::CloudModeSetupV2`). Handoff panes today don't surface those affordances; the environment startup PTY output renders raw instead.
 The pieces this spec builds on:
-- **Cloud-cloud handoff replay suppression.** When `attach_followup_session` joins a fresh shared session for a follow-up cloud execution, it uses `SharedSessionInitialLoadMode::AppendFollowupScrollback` (`app/src/terminal/shared_session/viewer/terminal_manager.rs:340-370`), which (a) deduplicates blocks by ID via `BlockList::append_followup_shared_session_scrollback` (`app/src/terminal/model/blocks.rs:725`) and (b) sets `should_suppress_existing_agent_conversation_replay = true` (`app/src/terminal/shared_session/viewer/event_loop.rs:132-134`). When the cloud agent's replay arrives, `BlocklistAIController::should_skip_replayed_response_for_existing_conversation` (`app/src/ai/blocklist/controller/shared_session.rs:220-239`) skips response streams whose conversation already has exchanges in our local history. We will reuse this exact mechanism for the local→cloud first-session connect.
-- **Fork-into-new-pane restoration.** `BlocklistAIHistoryModel::fork_conversation` (`app/src/ai/blocklist/history_model.rs:1033`) materializes a forked `AIConversation` locally from a source conversation. `ConversationRestorationInNewPaneType::Forked { conversation }` (`app/src/terminal/view/load_ai_conversation.rs:104-106`) feeds it into a freshly-created pane via `restore_conversations_on_view_creation`, which restores AI blocks for every exchange with live (non-restored) appearance.
-- **Server-side fork and conversation-token binding.** `ForkConversationForHandoff` in `../warp-server-2/logic/ai_conversation_fork.go` already implements the server fork end-to-end (auth on source, GCS data copy, metadata insert, `has_gcs_data = TRUE`); it's currently called only from `enqueueAgentRun`. The viewer-side `BlocklistAIController::find_existing_conversation_by_server_token` (`app/src/ai/blocklist/controller/shared_session.rs:418-433`) maps a `StreamInit.conversation_id` to a local `AIConversation` by token; if we set the local fork's `server_conversation_token` to the server fork's id at chip-click time, this lookup wires them up automatically when the cloud session arrives.
-- **REMOTE-1486 client surface area.** `Workspace::start_local_to_cloud_handoff` (`app/src/workspace/view.rs:12952-13079`) is the entry point invoked by the chip and slash command. It splits a fresh cloud-mode pane via `pane_group.add_ambient_agent_pane(ctx)`, seeds `PendingHandoff` onto the new pane's `AmbientAgentViewModel`, and kicks off async touched-repo derivation. `AmbientAgentViewModel::submit_handoff` (`app/src/terminal/view/ambient_agent/model.rs:1108-1177`) runs the snapshot prep + upload orchestrator and then calls `spawn_agent_with_request` with `fork_from_conversation_id` set on the `SpawnAgentRequest`.
-The Linear ticket description ("we should fork the conversation into the cloud pane and re-use the cloud mode loading v2 for the setup commands") covers both pieces; this spec wires them together because the fork-timing change is what enables the setup-v2 fix.
+- **Cloud-cloud handoff replay suppression.** When `attach_followup_session` joins a fresh shared session for a follow-up cloud execution, it uses `SharedSessionInitialLoadMode::AppendFollowupScrollback`, which (a) deduplicates blocks by ID via `BlockList::append_followup_shared_session_scrollback` and (b) flips `should_suppress_existing_agent_conversation_replay = true`. That flag drives `BlocklistAIController::should_skip_replayed_response_for_existing_conversation` to skip replayed response streams. We reuse this mechanism for the local→cloud first-session connect.
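The dedup half of that mechanism can be illustrated with a toy sketch (simplified stand-in types, not the real `BlockList`; integer block IDs for brevity):

```rust
use std::collections::HashSet;

/// Toy stand-in for the block list: an ordered list of block IDs plus a
/// seen-set, so a followup-scrollback append can skip IDs already rendered.
struct ToyBlockList {
    ids: Vec<u64>,
    seen: HashSet<u64>,
}

impl ToyBlockList {
    fn new(existing: &[u64]) -> Self {
        ToyBlockList {
            ids: existing.to_vec(),
            seen: existing.iter().copied().collect(),
        }
    }

    /// Append incoming scrollback blocks, dropping any ID we already have.
    /// Returns how many blocks were genuinely new.
    fn append_followup_scrollback(&mut self, incoming: &[u64]) -> usize {
        let mut appended = 0;
        for &id in incoming {
            // HashSet::insert returns true only for IDs we have not seen yet.
            if self.seen.insert(id) {
                self.ids.push(id);
                appended += 1;
            }
        }
        appended
    }
}

fn main() {
    let mut list = ToyBlockList::new(&[1, 2, 3]);
    // The replay re-sends blocks 2 and 3 alongside genuinely new blocks 4 and 5.
    let appended = list.append_followup_scrollback(&[2, 3, 4, 5]);
    println!("{} new, final order {:?}", appended, list.ids);
}
```

Only the unseen blocks land; the re-sent ones are dropped, which is the property the handoff pane needs during the connect/replay window.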
+- **Fork-into-new-pane restoration.** `BlocklistAIHistoryModel::fork_conversation` materializes a forked `AIConversation` locally; `restore_conversation_after_view_creation` feeds it into a freshly-created pane and restores AI blocks for every exchange with live (non-restored) appearance.
+- **Server-side fork and conversation-token binding.** `ForkConversationForHandoff` in `../warp-server-2/logic/ai_conversation_fork.go` already implements the server fork end-to-end (auth on source, GCS data copy, metadata insert). The viewer-side `BlocklistAIController::find_existing_conversation_by_server_token` maps a `StreamInit.conversation_id` to a local `AIConversation` by token; binding the local fork's `server_conversation_token` to the server fork's id at chip-click time wires them up automatically when the cloud session arrives.
 ## Diagram
 ```mermaid
 sequenceDiagram
@@ -28,7 +26,7 @@ sequenceDiagram
     Note over C: On error here: error toast, no pane opens
     C->>C: BlocklistAIHistoryModel::fork_conversation (local fork L', bind T_C)
     C->>HP: split fresh cloud-mode pane next to LP
-    C->>HP: restore_conversations_on_view_creation(Forked { L' })
+    C->>HP: restore_conversation_after_view_creation(L')
     Note over HP: Pre-populated with source's AI exchanges
     par Background prep (kicked off after pane opens)
     C->>C: derive_touched_workspace (walks conversation, git remotes)
@@ -37,7 +35,7 @@ sequenceDiagram
     C->>API: PUT snapshot files (parallel)
     end
     U->>HP: Type follow-up prompt, submit
-    Note over HP: Send button disabled until prep_token cached on PendingHandoff
+    Note over HP: Send blocked until snapshot upload settles
     C->>API: POST /agent/runs {conversation_id: T_C, handoff_prep_token, prompt, config}
     API-->>C: {task_id, run_id}
     Note over HP: Setup-v2 affordances render: queued prompt, loading screen
@@ -46,17 +44,14 @@ sequenceDiagram
     HP->>HP: connect_to_session with AppendFollowupScrollback
     Note over HP: should_suppress_existing_agent_conversation_replay = true
     Sand-->>HP: replay forked conversation transcript
-    Note over HP: Replay events skipped (existing conversation has exchanges)
+    Note over HP: Replay events skipped (request_id matches existing exchange)
     Sand-->>HP: cloud agent's first turn (rehydration prompt + user follow-up + response)
     HP->>HP: AppendedExchange clears setup-v2 flag, queued-prompt block
     Note over LP: Local pane unchanged throughout
 ```
 ## Proposed changes
 ### 1. Server-side: split fork from spawn (`../warp-server-2`)
-**Why split fork from spawn?** This whole spec hinges on pre-populating the new cloud pane with the source conversation at chip click. That requires a stable, materialized fork at chip-click time, not at submit time, for two reasons:
-1. **Stable target.** Once the cloud pane is hydrated we don't want to keep re-syncing it as the user continues typing in the local pane — that would be O(local-conversation-edits) GCS writes for nothing, and would have to merge against whatever the cloud agent is doing in parallel. Forking on click freezes the cloud's view at the moment the user opted into the handoff and lets the two conversations evolve independently.
-2. **Semantic match.** Handoff is fork→cloud per the product model: clicking the chip is the user saying "this conversation, as it stands right now, is what I'm sending to the cloud." Forking at submit-time is an implementation accident inherited from REMOTE-1486 V0 (which had no hydration so it didn't matter when the fork happened); forking at click-time mirrors the user's mental model exactly.
-The fork currently happens inside `enqueueAgentRun` when `ForkFromConversationID` is set on the `RunAgentRequest` (`router/handlers/public_api/agent_webhooks.go:376-386`). This spec moves the fork to a new dedicated endpoint so the client can mint the fork at chip-click time and pre-populate the pane.
+Forking on chip click (vs at submit time) freezes the cloud's view at the moment the user opted into the handoff and lets the two conversations evolve independently.
 **New endpoint** `POST /api/v1/agent/handoff/prepare-fork`:
 ```go path=null start=null
 type PrepareLocalHandoffForkRequest struct {
@@ -66,105 +61,58 @@
 type PrepareLocalHandoffForkResponse struct {
 	ForkedConversationID string `json:"forked_conversation_id"`
 }
 ```
-Add the handler alongside `PrepareLocalHandoffSnapshotHandler` in `router/handlers/public_api/agent_handoff.go`. It is a thin wrapper that:
-1. Gates on `features.LocalToCloudHandoffEnabled()`.
-2. Resolves `principal` via `middleware.GetRequiredPrincipalFromContext`.
-3. Calls the existing `logic.ForkConversationForHandoff(ctx, db, datastores, req.SourceConversationID, principal)` and returns `{forked_conversation_id}`.
-Wire the route under the same `aiCheckedGroup` as the existing snapshot prep endpoint at `router/handlers/public_api/agent_webhooks.go:205-207`.
-**Remove `ForkFromConversationID` from `RunAgentRequest`.** Per user direction, no backwards compatibility is needed — the field is only used by the under-flag REMOTE-1486 branch which isn't merged. Delete the field declaration (`agent_webhooks.go:235-240`), the validation block (`agent_webhooks.go:337-344`), and the inline fork call (`agent_webhooks.go:376-386`). The existing `ConversationID *string` field at `agent_webhooks.go:222` continues to drive `task.AgentConversationID` (resume semantics) and is what the client now uses to point the new task at the pre-minted fork.
-**`HandoffPrepToken` stays.** Snapshot prep + upload still flow through the existing `prepare-snapshot` endpoint and the same `attachHandoffSnapshotToTask` post-task-creation step; the only thing that moves is when the client triggers them (now async on chip click instead of submit time — see §3). The server handler block at `agent_webhooks.go:476-484` is unchanged.
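A client-side mirror of the prepare-fork wire shapes could look like the following sketch. The names are assumptions: `source_conversation_id` is taken from the §3 call site, `forked_conversation_id` mirrors the Go response struct above, and the hand-rolled JSON stands in for the client's real serializer:

```rust
/// Hypothetical client-side mirror of the prepare-fork request shape.
struct PrepareHandoffForkRequest {
    source_conversation_id: String,
}

/// Hypothetical mirror of the response shape (`forked_conversation_id`
/// matches the Go json tag above).
#[allow(dead_code)]
struct PrepareHandoffForkResponse {
    forked_conversation_id: String,
}

impl PrepareHandoffForkRequest {
    /// Hand-rolled JSON body, purely to show the wire shape; the real
    /// client would use its existing serialization machinery.
    fn to_json(&self) -> String {
        format!(
            "{{\"source_conversation_id\":\"{}\"}}",
            self.source_conversation_id
        )
    }
}

fn main() {
    let req = PrepareHandoffForkRequest {
        source_conversation_id: "conv-123".to_string(),
    };
    println!("{}", req.to_json());
}
```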
+Add the handler alongside `PrepareLocalHandoffSnapshotHandler` in `router/handlers/public_api/agent_handoff.go`. It gates on `features.LocalToCloudHandoffEnabled()`, resolves the principal, and calls `logic.ForkConversationForHandoff`. Wire the route under the same `aiCheckedGroup` as the existing snapshot prep endpoint.
+**Remove `ForkFromConversationID` from `RunAgentRequest`.** The field, its validation, and the inline fork call all go. The existing `ConversationID *string` field continues to drive `task.AgentConversationID` (resume semantics) — the client now points it at the pre-minted fork id.
+**`HandoffPrepToken` stays.** Snapshot prep + upload still flow through `prepare-snapshot` and `attachHandoffSnapshotToTask` post-task-creation; only the point at which the client triggers them changes (now async at chip click instead of at submit time — see §3).
 ### 2. Client-side API surface (`app/src/server/server_api/ai.rs`)
-- Add `prepare_handoff_fork` to the `AIClient` trait:
-```rust path=null start=null
-async fn prepare_handoff_fork(
-    &self,
-    request: PrepareHandoffForkRequest,
-) -> Result;
-```
-implemented in `ServerApi` as `POST agent/handoff/prepare-fork`. Mirror the request/response shape pattern of `PrepareHandoffSnapshotRequest` (currently around line 221-249).
-- On `SpawnAgentRequest`, **remove** the `fork_from_conversation_id: Option` field (currently line 213) and **add** `conversation_id: Option` for resume semantics. The client now always pre-mints the fork via the new endpoint and sends the resulting id under `conversation_id`.
-- Update the snapshot pipeline call site that takes a `&ServerConversationToken` only for log labelling (`upload_snapshot_for_handoff` in `app/src/ai/agent_sdk/driver/snapshot.rs`) — no signature change needed; the source conversation token is still available on the `PendingHandoff`.
+- Add `prepare_handoff_fork` to the `AIClient` trait, implemented as `POST agent/handoff/prepare-fork`. Mirror the request/response shape pattern of `PrepareHandoffSnapshotRequest`.
+- On `SpawnAgentRequest`, replace `fork_from_conversation_id: Option` with `conversation_id: Option` (resume semantics). The client now always pre-mints the fork via the new endpoint and sends the resulting id under `conversation_id`.
 ### 3. Client-side fork-on-chip-click (`app/src/workspace/view.rs`)
-Extend `Workspace::start_local_to_cloud_handoff` (currently at `app/src/workspace/view.rs:12952-13079`) into a strict-ordering open path:
-1. **Resolve eligibility synchronously.** Read the active session view's conversation via `BlocklistAIHistoryModel::active_conversation`. If the conversation is missing, empty, or has no `server_conversation_token`, surface the same error toast as the prepare-fork RPC failure path (step 2 below) and return without opening any pane. There is no "fresh cloud-mode pane" fall-through — the chip is a hand-off-this-conversation action, and silently opening an unrelated fresh pane would hide the failure from the user.
-2. **Await the fork before opening the pane.** When the source resolves, `ctx.spawn` a future that calls `AIClient::prepare_handoff_fork({source_conversation_id: T_L})`. The new pane is **not** split until this returns. `start_local_to_cloud_handoff` itself returns to the caller immediately so the click handler doesn't block, but the pane-open work is gated on the RPC.
-  - **On error** (network, auth, `SourceConversationNotPersisted`, etc.), surface a `WorkspaceToastStack` error toast (mirroring the pattern used by `Self::show_fork_toast` at `app/src/workspace/view.rs:11586-11588` for failed local forks). Log the underlying error. Do **not** open a pane.
-  - **On success**, on the main thread, run the rest of the open path described below.
-3. **Open and pre-populate the pane.** With `T_C` in hand:
-  - Call `pane_group.add_ambient_agent_pane(ctx)` to split the new pane next to the active pane (today's call site).
-  - Call `BlocklistAIHistoryModel::fork_conversation(&source_conversation, FORK_PREFIX, app)` to materialize a local fork `L'`. `fork_conversation` already handles SQLite persistence, the `forked_from_server_conversation_token` field, and reverted-action-id preservation.
-  - Set `L'.server_conversation_token = T_C` via `BlocklistAIHistoryModel::set_server_conversation_token_for_conversation` (existing helper used by the `link_forked_conversation_token` path). This makes `find_existing_conversation_by_server_token(T_C)` immediately return `L'` once the cloud session connects.
-  - On the new pane's terminal view, call `terminal_view.restore_conversation_after_view_creation(RestoredAIConversation::new(L'.clone()), /* use_live_appearance */ true, ctx)` (existing helper at `app/src/terminal/view/load_ai_conversation.rs:542-603`). This is the same restoration helper used by the in-current-pane fork path at `app/src/workspace/view.rs:11597-11607`.
-  - Set the new pane's `BlocklistAIContextModel` pending-query state for the forked conversation so the agent view's selected conversation matches `L'` (mirrors `restore_conversations_from_block_params` at `app/src/terminal/view/load_ai_conversation.rs:482-491`).
-  - Seed `PendingHandoff` on the new pane's `AmbientAgentViewModel` with `source_conversation_id: T_L`, `forked_conversation_id: T_C`, `touched_workspace: None`, `snapshot_prep_token: None`, `submission_state: Idle`.
-  - Apply the slash-command-supplied prompt pre-fill if any.
-4. **Kick off async background prep.** After the pane is open, `ctx.spawn` a single chained future on the new pane's `AmbientAgentViewModel` that runs `derive_touched_workspace` → `upload_snapshot_for_handoff` (existing helpers in `app/src/ai/blocklist/handoff/touched_repos.rs` and `app/src/ai/agent_sdk/driver/snapshot.rs`). When derivation completes, call `set_pending_handoff_workspace` so the env-overlap pick can apply (existing behavior). When the upload completes, store the resulting prep token via a new `set_pending_handoff_snapshot_prep_token(Option, ctx)` setter on the model. The pane is fully interactive throughout — the user can type, scroll, and pick an env while this runs.
-The send button's existing gate (`pending_handoff.touched_workspace.is_some()` plus prompt non-empty) is extended to also require `snapshot_prep_token.is_some_or_skipped()` — i.e. the upload is either complete or the touched workspace was empty (the existing `upload_snapshot_for_handoff` returns `Ok(None)` for empty workspaces and that's a valid skip).
+`Workspace::start_local_to_cloud_handoff` becomes a strict-ordering open path:
+1. Resolve eligibility synchronously from the active session view's `BlocklistAIHistoryModel::active_conversation`. If the conversation is missing, empty, or has no `server_conversation_token`, surface the shared error toast and return without opening any pane.
+2. `ctx.spawn` a future that calls `AIClient::prepare_handoff_fork`. The new pane is **not** split until this returns. On error, surface the same error toast; do not open a pane.
+3. On success, on the main thread:
+   - Call `BlocklistAIHistoryModel::fork_conversation(&source_conversation, FORK_PREFIX, /* preserve_task_ids */ true, ctx)` to materialize the local fork `L'`. `preserve_task_ids: true` keeps the source's task ids so the cloud agent's `ClientAction`s (which reference those task ids) resolve in `L'`.
+   - `pane_group.add_ambient_agent_pane(ctx)` to split the new pane.
+   - Pre-fill the prompt input if the slash command supplied one.
+   - `terminal_view.restore_conversation_after_view_creation(RestoredAIConversation::new(L'.clone()), /* use_live_appearance */ true, ctx)` so the AI exchanges render immediately.
+   - `BlocklistAIHistoryModel::set_server_conversation_token_for_conversation(local_fork_id, T_C)`. Must run **after** restore: `restore_conversations` overwrites `conversations_by_id` with the token-less clone we passed in, so binding earlier would be lost.
+   - Seed `PendingHandoff { forked_conversation_id: T_C, touched_workspace: None, snapshot_prep: Pending, submission_state: Idle }`.
+4. Kick off async background prep on the new pane: `derive_touched_workspace` → `upload_snapshot_for_handoff`. When derivation completes, call `set_pending_handoff_workspace`. When the upload completes, call `set_pending_handoff_snapshot_prep` with `Uploaded(token)` / `SkippedEmptyWorkspace` / `Failed(err)` as appropriate. The pane is fully interactive throughout.
+The send button's gate (`is_handoff_ready_to_submit`) requires `touched_workspace.is_some()`, `snapshot_prep` settled (Uploaded or SkippedEmptyWorkspace), and `submission_state == Idle`. If submit fires while any precondition is unmet, the input layer surfaces a "Preparing handoff" toast and leaves the prompt + attachments intact.
 ### 4. Submit path uses resume semantics (`app/src/terminal/view/ambient_agent/model.rs`)
-With the fork and the snapshot upload both completed during the chip-click open path, `AmbientAgentViewModel::submit_handoff` becomes a thin shim over `spawn_agent_with_request`.
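The `is_handoff_ready_to_submit` gate that guards this submit path reduces to a pure predicate over the pending-handoff state; a sketch with simplified stand-in types (the real model's fields and names may differ):

```rust
/// Simplified stand-ins for the pending-handoff state machine.
#[allow(dead_code)]
#[derive(PartialEq)]
enum SnapshotPrep {
    Pending,
    Uploaded(String),
    SkippedEmptyWorkspace,
    Failed(String),
}

#[allow(dead_code)]
#[derive(PartialEq)]
enum SubmissionState {
    Idle,
    Submitting,
}

struct PendingHandoff {
    touched_workspace: Option<String>,
    snapshot_prep: SnapshotPrep,
    submission_state: SubmissionState,
}

/// Send is allowed only once workspace derivation finished, the snapshot
/// upload settled (uploaded, or deliberately skipped for an empty
/// workspace), and no submit is already in flight.
fn is_handoff_ready_to_submit(h: &PendingHandoff) -> bool {
    h.touched_workspace.is_some()
        && matches!(
            h.snapshot_prep,
            SnapshotPrep::Uploaded(_) | SnapshotPrep::SkippedEmptyWorkspace
        )
        && h.submission_state == SubmissionState::Idle
}

fn main() {
    let still_uploading = PendingHandoff {
        touched_workspace: Some("repo".to_string()),
        snapshot_prep: SnapshotPrep::Pending,
        submission_state: SubmissionState::Idle,
    };
    let ready = PendingHandoff {
        touched_workspace: Some("repo".to_string()),
        snapshot_prep: SnapshotPrep::SkippedEmptyWorkspace,
        submission_state: SubmissionState::Idle,
    };
    println!(
        "{} {}",
        is_handoff_ready_to_submit(&still_uploading),
        is_handoff_ready_to_submit(&ready)
    );
}
```

Keeping the gate a pure function of the seeded `PendingHandoff` state is what lets the input layer show the "Preparing handoff" toast without touching the prompt or attachments.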
It reads the cached `forked_conversation_id` and `snapshot_prep_token` directly off `pending_handoff` — no orchestrator runtime needed:
```rust path=null start=null
let handoff = self.pending_handoff.as_ref()?;
let request = SpawnAgentRequest {
    prompt,
    config: Some(self.build_default_spawn_config(ctx)),
    title: None,
    team: None,
    skill: None,
    attachments,
    interactive: None,
    parent_run_id: None,
    runtime_skills: vec![],
    referenced_attachments: vec![],
    conversation_id: Some(handoff.forked_conversation_id.clone()),
    handoff_prep_token: handoff.snapshot_prep_token.clone(),
};
self.spawn_agent_with_request(request, ctx);
```
-Delete the existing `app/src/ai/blocklist/handoff/orchestrator.rs` (`run_handoff` + `HandoffPrepared`) — the prep-and-upload phase moves to the chip-click path described in §3, and the orchestrator's only remaining role would be a redundant wrapper around `upload_snapshot_for_handoff`. Inline the call directly there. `submit_handoff` retains its existing double-submit guard via `submission_state`.
+With the fork and the snapshot upload both completed during the chip-click open path, `AmbientAgentViewModel::submit_handoff` is a thin shim over `spawn_agent_with_request` that reads the cached `forked_conversation_id` and `snapshot_prep` directly off `pending_handoff`. The orchestrator that REMOTE-1486 used is deleted.
 ### 5. Replay-suppressing initial connect (`app/src/terminal/shared_session/viewer/terminal_manager.rs`)
-`TerminalManager::connect_to_session` (`app/src/terminal/shared_session/viewer/terminal_manager.rs:322-338`) currently always uses `SharedSessionInitialLoadMode::ReplaceFromSessionScrollback`. Change it so handoff panes use `AppendFollowupScrollback` instead:
-- Plumb a `should_append_followup: bool` flag into `connect_to_session` (or a new `connect_to_session_with_load_mode(session_id, load_mode, ctx)` variant — caller's choice).
-- The cloud-mode subscription in `app/src/terminal/view/ambient_agent/mod.rs:88-90` calls `manager.connect_to_session(*session_id, ctx)` on `SessionReady`. Update it to also pass `view_model.is_local_to_cloud_handoff()` (read from the model on the same line). When true, use append mode.
-The append mode then handles both pieces of dedup automatically: `BlockList::append_followup_shared_session_scrollback` skips block IDs we already have, and `EventLoop::should_suppress_existing_agent_conversation_replay = true` (`event_loop.rs:132-134`) drives `BlocklistAIController::should_skip_replayed_response_for_existing_conversation` to skip the historical response streams. No changes to the suppression machinery itself.
-### 6. Setup-v2 active-block guard during conversation restore (`app/src/terminal/view.rs`)
-The flag-reset block at `app/src/terminal/view.rs:5113-5124` flips `is_executing_oz_environment_startup_commands` to `false` whenever an `AppendedExchange` arrives in an ambient agent session. During `restore_conversations_on_view_creation`, every restored exchange emits `AppendedExchange` (via `update_conversation_for_new_request_input` → `BlocklistAIHistoryEvent::AppendedExchange`), which trips this reset before the cloud agent has even started its setup commands.
-Gate the reset on the model not being in handoff-pre-spawn state:
```rust path=null start=null
if self.is_ambient_agent_session(ctx)
    && self.model.lock().block_list().is_executing_oz_environment_startup_commands()
    && !self.is_in_handoff_replay_phase(ctx)
{
    // existing reset...
}
```
-where `is_in_handoff_replay_phase` returns true when `ambient_agent_view_model.is_local_to_cloud_handoff() && (model.is_in_setup() || model.is_configuring_ambient_agent() || model.is_waiting_for_session())` — i.e. the cloud session has not yet connected and the active block should still be treated as a setup-command surface. After `SessionReady` (and thus once `Status::AgentRunning` is set), the predicate becomes false; the cloud agent's actual `AppendedExchange` (its first response post-rehydration) trips the existing reset path normally.
-This is the single behavior fix needed for the setup-v2 affordances to render correctly during handoff. The "Running setup commands…" collapsible row, queued-prompt indicator, and loading screen are all already wired up via existing `CloudModeSetupV2`-gated paths and Just Work once the active block stays hidden through the pre-session window.
-### 7. Drop the V2-input opt-out for handoff panes (`app/src/terminal/input/agent.rs`)
-REMOTE-1486 added a guard at `app/src/terminal/input/agent.rs:65` so handoff panes don't opt into `CloudModeInputV2`. With the setup-v2 affordances now intentionally enabled for handoff panes (per §6 + the product spec's #9), remove the `&& !ambient_agent_view_model.is_local_to_cloud_handoff()` clause from `Input::is_cloud_mode_input_v2_composing`. Handoff panes go through the same V2 input path as fresh cloud-mode runs.
-### 8. Feature-flag posture
-No new feature flags. All of the changes are gated on the existing `FeatureFlag::OzHandoff && FeatureFlag::LocalToCloudHandoff` (client) and `features.LocalToCloudHandoffEnabled()` (server) used by REMOTE-1486. The client and server flags continue to roll out together.
+`TerminalManager::connect_to_session` gains an `append_followup_scrollback: bool` flag. The cloud-mode subscription in `app/src/terminal/view/ambient_agent/mod.rs` passes `view_model.is_local_to_cloud_handoff()` so handoff panes use `AppendFollowupScrollback` instead of the default `ReplaceFromSessionScrollback`.
+The append mode handles both pieces of dedup: `BlockList::append_followup_shared_session_scrollback` skips block IDs we already have, and `should_suppress_existing_agent_conversation_replay = true` drives the response-stream filter described in §6.
+### 6. Replay-stream filter keys on `request_id` (`app/src/ai/blocklist/controller/shared_session.rs`)
+The cloud agent's replay rebroadcasts every exchange in the forked conversation, including ones we've already pre-populated. We need to skip the replayed response streams without skipping the cloud agent's genuinely new turns (which arrive on the same connection after replay finishes).
+`should_skip_replayed_response_for_existing_conversation` (called from `on_shared_init`) compares `init_event.request_id` against the `server_output_id`s of the local fork's existing exchanges. The stream is skipped iff:
+- the model is in replay mode (`is_receiving_agent_conversation_replay && should_suppress_existing_agent_conversation_replay`), AND
+- the incoming `request_id` matches an existing exchange's `server_output_id`.
+Replay events for already-known exchanges are dropped; new turns the cloud agent appends after the local fork (e.g. the user's first submitted prompt) carry request_ids we have never seen and flow through normally.
+### 7. Feature-flag posture
+No new feature flags. All changes are gated on the existing `FeatureFlag::OzHandoff && FeatureFlag::LocalToCloudHandoff` (client) and `features.LocalToCloudHandoffEnabled()` (server) used by REMOTE-1486.
 ## Risks and mitigations
-- **Chip-click latency is now gated on the prepare-fork RPC.** Previously the pane opened instantly; now the user sees nothing until the fork resolves. *Mitigation:* the fork is a synchronous metadata + GCS-copy round-trip already used at submit time today; expected latency is similar to other authenticated public-API RPCs (<300ms p50). On error we surface a toast immediately so the user knows what happened.
-- **Source conversation not synced to GCS.** `ForkConversationForHandoff` returns `InvalidRequestError.New("source conversation %s has not been fully synced to cloud storage; try again in a moment")` when `BatchDoesConversationDataExist` is false. *Mitigation:* the client surfaces this as the toast described above; the user can wait a moment and click again.
-- **Replay suppression skips a genuinely new exchange.** `should_skip_replayed_response_for_existing_conversation` skips response streams during replay if the local conversation already has exchanges. If the cloud agent's first response stream arrives during the replay phase (before `AgentConversationReplayEnded`) it could be suppressed too. *Mitigation:* this is the same posture cloud→cloud uses today (`AppendFollowupScrollback`); the runtime emits `AgentConversationReplayEnded` before the new turn streams in, so the new turn lands in the post-replay window.
-- **Snapshot upload still in flight at submit time.** The user types a follow-up faster than `derive_touched_workspace` + `upload_snapshot_for_handoff` complete. *Mitigation:* the send button gate already requires `pending_handoff.touched_workspace.is_some()` (existing); we extend it to also require the snapshot upload to be settled (either succeeded with `Some(prep_token)`, deliberately skipped with `Ok(None)` for empty workspaces, or failed with the existing `report_error!` posture so submit can proceed best-effort).
-- **Snapshot upload failure.** Per-blob failures already retry with bounded backoff via `upload_snapshot_for_handoff`. If every blob fails, the existing `report_error!` fires and the prep token is still minted (cloud agent starts with no rehydration content). *Mitigation:* unchanged — same best-effort posture as cloud→cloud handoff today, just kicked off earlier.
+- **Chip-click latency is now gated on the prepare-fork RPC.** Previously the pane opened instantly; now the user sees nothing until the fork resolves. The fork is a synchronous metadata + GCS-copy round-trip already used at submit time today; expected latency is similar to other authenticated public-API RPCs (<300ms p50). On error we surface a toast immediately.
+- **Source conversation not synced to GCS.** `ForkConversationForHandoff` returns `InvalidRequestError` when `BatchDoesConversationDataExist` is false. The client surfaces this as the toast above; the user can wait a moment and click again.
+- **Replay suppression skips a genuinely new exchange.** The `request_id` filter scopes skipping to specific known exchanges, so new turns flow through even during the replay window.
+- **Snapshot upload still in flight at submit time.** `is_handoff_ready_to_submit` blocks submit until upload settles. The user sees a "Preparing handoff" toast and their prompt + attachments are preserved.
 ## Testing and validation
 ### Unit tests
-- `app/src/server/server_api/ai_test.rs`: serialization test for `PrepareHandoffForkRequest`, path test for `build_prepare_handoff_fork_url`, mirroring the pattern of the existing `serialize_run_followup_request` test.
-- `app/src/ai/blocklist/history_model_test.rs`: test that `set_server_conversation_token_for_conversation` after `fork_conversation` updates the token-to-conversation reverse index so `find_conversation_id_by_server_token(T_C)` finds the fork.
-- `app/src/terminal/view/view_test.rs`: a minimal regression covering the setup-v2 reset gate — restoring exchanges into a handoff pane while the model is in `Setup`/`Composing`/`WaitingForSession` does NOT flip `is_executing_oz_environment_startup_commands` to false.
-- `app/src/terminal/shared_session/viewer/event_loop_test.rs`: extend the existing append-mode tests to cover the local→cloud connect path (i.e. `AppendFollowupScrollback` mode is what `connect_to_session` uses when the model reports `is_local_to_cloud_handoff`).
+- `app/src/server/server_api/ai_test.rs`: serialization/deserialization tests for `PrepareHandoffForkRequest` / `PrepareHandoffForkResponse`.
+- `app/src/ai/blocklist/history_model_test.rs`: test that `set_server_conversation_token_for_conversation` after `fork_conversation` updates the token-to-conversation reverse index so `find_conversation_id_by_server_token(T_C)` finds the fork. Also add a test exercising `preserve_task_ids: true` to confirm task ids are preserved across the fork.
+- `app/src/terminal/shared_session/viewer/event_loop_test.rs`: extend the existing append-mode tests to cover the local→cloud connect path.
 ### Server tests (`../warp-server-2`)
-- `router/handlers/public_api/agent_handoff_test.go`: extend the existing test file with a `TestPrepareLocalHandoffForkHandler_*` suite covering: feature-flag-off returns the standard error; missing `source_conversation_id` returns `invalid request payload`; happy path returns a valid UUID; auth failure on the source returns the wrapped `NotAuthorizedError`.
-- Update the existing `agent_webhooks_test.go::TestHandoff_*` cases that exercise `ForkFromConversationID`. With the field removed those tests should switch to driving the new `prepare-fork` endpoint and then sending `ConversationID` on the run request, asserting the same end-state (`task.AgentConversationID = `, `snapshots/{task_id}/0/` populated).
+- `router/handlers/public_api/agent_handoff_test.go`: add a `TestPrepareLocalHandoffForkHandler_*` suite covering: feature-flag-off; missing `source_conversation_id`; happy path; auth failure on the source.
+- Update existing `agent_webhooks_test.go::TestHandoff_*` cases that exercise `ForkFromConversationID` to instead drive the new `prepare-fork` endpoint and then send `ConversationID` on the run request.
 ### Integration / manual
 - Click the chip on a long Oz conversation; verify the new pane is visibly populated with the AI exchanges before the cloud session connects, with no flicker or duplicate blocks during the connect/replay window.
 - Submit a follow-up; verify the queued-prompt indicator + "Setting up environment" loading screen + "Running setup commands…" collapsible block all render the same way they do for a fresh cloud-mode run.
 - After the cloud agent's first turn arrives, verify the pre-populated blocks remain in place, the queued-prompt indicator clears, and the new exchange appends below them.
-- Click the chip on a non-eligible conversation (no synced server token); verify **no pane opens** and an error toast surfaces in the local window. The local conversation should be unaffected.
-- Manually break a network connection during chip click so the prepare-fork RPC fails; verify **no pane opens** and an error toast surfaces in the local window. The local conversation should be unaffected and the chip should be re-clickable.
+- Click the chip on a non-eligible conversation (no synced server token); verify **no pane opens** and an error toast surfaces in the local window.
+- Manually break a network connection during chip click so the prepare-fork RPC fails; verify **no pane opens** and an error toast surfaces in the local window.
 ## Parallelization
-The two-side change (server endpoint + client wiring) is small enough that one engineer/agent can implement it sequentially in two PRs — a server PR for the prepare-fork endpoint + `ForkFromConversationID` removal, then a client PR for the hydration + load mode + setup-v2 reset gate. The user has indicated they will handle the server-side changes themselves in `../warp-server-2`, so the client agent does not need to coordinate with a parallel server agent. No sub-agents needed for this scope.
+The two-side change is small enough that one engineer/agent can implement it sequentially in two PRs — a server PR for the prepare-fork endpoint + `ForkFromConversationID` removal, then a client PR for the hydration + load mode + replay-stream filter. No sub-agents needed.
 ## Follow-ups
-- Cloud→cloud setup-v2 fixes. Cloud-cloud follow-ups (REMOTE-1290) likely have the same setup-v2 active-block reset issue when the follow-up's environment runs setup commands. Out of scope here, but the gate added in §6 can be generalized to also check for follow-up startups.
+- Cloud→cloud setup-v2 polish (REMOTE-1290) — out of scope here.
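As a closing sketch, the §6 replay-stream filter reduces to a small predicate over replay state and known request ids. The types below are simplified stand-ins (string request ids; the real controller consults the model's exchange list):

```rust
use std::collections::HashSet;

/// Simplified replay-filter state: are we inside the suppressed replay
/// window, and which request ids already exist on the local fork.
struct ReplayFilter {
    /// Stand-in for is_receiving_agent_conversation_replay.
    in_replay: bool,
    /// Stand-in for should_suppress_existing_agent_conversation_replay.
    suppress: bool,
    /// Stand-in for the server_output_ids of existing exchanges.
    known_request_ids: HashSet<String>,
}

impl ReplayFilter {
    /// Skip a replayed response stream iff we are in suppressed replay
    /// mode AND the stream's request id matches an exchange we already
    /// rendered. New turns carry unseen ids and flow through.
    fn should_skip(&self, request_id: &str) -> bool {
        self.in_replay && self.suppress && self.known_request_ids.contains(request_id)
    }
}

fn main() {
    let filter = ReplayFilter {
        in_replay: true,
        suppress: true,
        known_request_ids: ["req-1".to_string(), "req-2".to_string()]
            .into_iter()
            .collect(),
    };
    // A replayed known exchange is skipped; a genuinely new turn is not.
    println!("{} {}", filter.should_skip("req-1"), filter.should_skip("req-9"));
}
```

Keying on specific known ids (rather than "the conversation already has exchanges") is what makes the filter safe even if a new turn were to arrive inside the replay window.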