feat: Add Deepseek v3.2 interleaved thinking support #9925
base: main
Conversation
Implement interleaved thinking mode support for DeepSeek V3.2 with a generic capability-based architecture for extensibility to other models. Key changes:
- Core thinking mode: parameter handling, temperature override, streaming
- Multi-turn conversations: conditional reasoning clearing, turn detection
- Task persistence: reasoning_content save/restore
- Generic refactoring: capability flags replace model-specific checks

Makes it easy to add interleaved thinking for other models (Kimi K2, Minimax M2) via capability flags. Note: checkpoint commit. Backward compatibility verification and documentation pending. No manual testing performed yet.
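For illustration, a minimal sketch of the capability-flag idea (the type and field names below are assumptions, not the repository's actual definitions):

```typescript
// Sketch only: names are illustrative, not the repository's exact types.
interface ModelCapabilities {
	// Models that interleave reasoning with tool calls (e.g. DeepSeek V3.2)
	// opt in via a flag instead of being special-cased by model name.
	supportsInterleavedThinking?: boolean
}

// Adding interleaved thinking for another model (Kimi K2, Minimax M2, ...)
// then reduces to setting the flag on that model's info entry.
const exampleModelInfo: ModelCapabilities = {
	supportsInterleavedThinking: true,
}
```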
…r OpenAI compatible providers
- Set supportsInterleavedThinking default to false in openAiModelInfoSaneDefaults
- Remove enabledR1Format fallback, rely solely on modelInfo.supportsInterleavedThinking flag
- Add UI checkbox in OpenAICompatible settings to configure supportsInterleavedThinking for custom models
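A hedged sketch of what relying solely on the flag could look like at the call site (the helper and variable names here are hypothetical):

```typescript
// Illustrative only: real option and field names may differ in the PR.
type ProviderModelInfo = { supportsInterleavedThinking?: boolean }

// No enabledR1Format fallback: the capability flag is the single source of truth.
function shouldUseInterleavedThinking(modelInfo: ProviderModelInfo): boolean {
	return modelInfo.supportsInterleavedThinking === true
}

// Custom OpenAI-compatible models get the flag from the settings checkbox;
// the sane defaults leave it off.
const saneDefaultsSketch: ProviderModelInfo = {
	supportsInterleavedThinking: false,
}
```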
- Fix turn detection to preserve reasoning_content when user messages contain tool_result blocks (still in tool call sequence)
- Fix turn detection when assistant stops sending tool calls after receiving tool results
- Convert tool_use blocks from assistant messages to OpenAI tool_calls format
- Convert tool_result blocks from user messages to tool role messages
- Preserve tool_calls when merging consecutive assistant messages

These changes ensure proper handling of tool calls and tool results in the interleaved thinking message conversion pipeline, maintaining reasoning_content preservation during tool call sequences while correctly clearing it between turns.
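Roughly, the turn-detection logic described above could look like this (the message shapes are simplified stand-ins for Anthropic-style blocks, not the PR's actual types):

```typescript
// Simplified stand-ins for Anthropic-style content blocks.
type Block =
	| { type: "text"; text: string }
	| { type: "tool_use"; id: string; name: string; input: unknown }
	| { type: "tool_result"; tool_use_id: string; content: string }

type Message = { role: "user" | "assistant"; content: Block[] }

// A user message made entirely of tool_result blocks is still part of the same
// tool-call sequence, so the assistant's reasoning_content should be preserved.
function isToolResultMessage(msg: Message): boolean {
	return msg.role === "user" && msg.content.every((b) => b.type === "tool_result")
}

// If the assistant answers the tool results without issuing new tool calls,
// the sequence is complete and reasoning_content can be cleared for the next turn.
function isToolSequenceComplete(assistant: Message): boolean {
	return !assistant.content.some((b) => b.type === "tool_use")
}
```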
… history for DeepSeek models
Update three tests to expect `true` (clear reasoning_content) when assistant stops making tool calls after receiving results. These were outdated after commit 477d917 changed the implementation to detect tool sequence completion.
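Reusing the sketch types above, the updated expectation is roughly the following (a vitest/jest-style sketch, not the repository's actual test):

```typescript
it("clears reasoning_content when the assistant stops making tool calls", () => {
	// Assistant answers the tool results with plain text and no further tool_use blocks.
	const assistantAfterResults: Message = {
		role: "assistant",
		content: [{ type: "text", text: "Here is the final answer." }],
	}
	// Previously expected false; after 477d917 a text-only reply ends the tool sequence.
	expect(isToolSequenceComplete(assistantAfterResults)).toBe(true)
})
```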
All issues resolved. The implementation is clean and ready for merge.
src/api/providers/openai.ts
Outdated
let finalReasoningContent = ""
let finalContent = ""
let finalToolCalls: any[] = []
These two variables (finalReasoningContent and finalContent) are declared but never used. The comment on line 266 mentions "Track tool calls for debug logging" but only finalToolCalls is actually used (to accumulate tool call data). Consider removing these unused variables to reduce dead code.
Suggested change:
- let finalReasoningContent = ""
- let finalContent = ""
- let finalToolCalls: any[] = []
+ let finalToolCalls: any[] = []
Related GitHub Issue
Closes: #9779
Description
reasoning_content handling in streaming and non-streaming modes
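As a rough sketch of the streaming side (assuming an OpenAI-compatible SDK stream whose deltas may carry DeepSeek's non-standard reasoning_content field; the yielded shape is illustrative, not the PR's exact code):

```typescript
import OpenAI from "openai"

// Sketch: surface reasoning deltas separately from normal text deltas.
async function* readDeltas(
	stream: AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk>,
): AsyncGenerator<{ type: "reasoning" | "text"; text: string }> {
	for await (const chunk of stream) {
		const delta = chunk.choices[0]?.delta as
			| { content?: string | null; reasoning_content?: string }
			| undefined
		if (delta?.reasoning_content) {
			yield { type: "reasoning", text: delta.reasoning_content }
		}
		if (delta?.content) {
			yield { type: "text", text: delta.content }
		}
	}
}
```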
Test Procedure
deepseek-chat and deepseek-reasoner model
Pre-Submission Checklist
Screenshots / Videos
Added "Interleaved Thinking" toggle to OpenAI provider

Documentation Updates
Does this PR necessitate updates to user-facing documentation?
Additional Notes
1. Streaming Content Behavior
Streaming content sometimes appears in chunks rather than smoothly. It's unclear whether this is due to my implementation or Deepseek's API behavior. This doesn't affect functionality but may impact user experience.
2. Reasoning Content Continuity Verification
While the implementation follows Deepseek's API requirements for passing reasoning_content back to the API, I haven't been able to definitively verify that Deepseek is receiving it correctly. However, the API would return 400 errors if it were not handled properly, and I haven't observed any such errors, suggesting the implementation is working correctly.
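For context, passing reasoning_content back within a tool-call sequence amounts to attaching it to the replayed assistant message as an extra field (a hedged sketch; the OpenAI SDK types don't know about the field, so it is typed loosely):

```typescript
// Illustrative assistant turn replayed inside a tool-call sequence.
const assistantTurn = {
	role: "assistant" as const,
	content: "Let me look that up with the search tool.",
	// DeepSeek-specific extra field carried alongside the standard message shape.
	reasoning_content: "The user needs fresh data, so a tool call comes first.",
	tool_calls: [
		{
			id: "call_1",
			type: "function" as const,
			function: { name: "search", arguments: JSON.stringify({ query: "example" }) },
		},
	],
}

// Sent like any other message, e.g.:
// await client.chat.completions.create({ model, messages: [...history, assistantTurn as any, toolResultMessage] })
```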
3. Limited Testing Scope
This has only been tested with Deepseek models (deepseek-chat and deepseek-reasoner). No regression testing has been performed with other OpenAI-compatible providers.
Get in Touch
DM skelectric on Discord for questions