Conversation

ron-42 (Contributor) commented Oct 23, 2025

Description

This PR adds support for the reasoning_effort parameter for OpenAI's reasoning models (o1, o3, and gpt-5 series). This parameter allows users to control the trade-off between reasoning depth and response speed.

What is reasoning_effort?

The reasoning_effort parameter controls how much computational effort the model puts into reasoning before generating a response. It accepts three values:

  • "low" - Faster responses with less reasoning
  • "medium" - Balanced approach (default behavior)
  • "high" - More thorough reasoning, slower responses

This parameter is only applicable to OpenAI's reasoning models: o1, o1-preview, o1-mini, o3, o3-mini, gpt-5, gpt-5o, gpt-5o-mini, and gpt-5o-micro.
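
For context, this is roughly how reasoning_effort surfaces in a direct call to the OpenAI Python SDK, outside of mem0 (a minimal sketch; it assumes a recent openai client version that accepts reasoning_effort on chat.completions.create):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# reasoning_effort rides alongside the usual chat arguments; only the
# reasoning model families (o1, o3, gpt-5) accept it.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="low",
    messages=[{"role": "user", "content": "Compare quicksort and mergesort."}],
)
print(response.choices[0].message.content)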

Changes Made

  • Added reasoning_effort parameter to BaseLlmConfig class with type safety (Literal["low", "medium", "high"])
  • Updated all LLM configuration classes to support the new parameter:
    • OpenAIConfig
    • AzureOpenAIConfig
    • AnthropicConfig
    • OllamaConfig
    • DeepSeekConfig
    • LMStudioConfig
    • VllmConfig
    • AWSBedrockConfig
  • Added validation logic in _validate_config() to ensure only valid values are accepted
  • Added smart filtering in _get_supported_params() to pass the parameter only to reasoning models (see the condensed sketch after this list)
  • Added warning when reasoning_effort is set for non-reasoning models
  • Created comprehensive test suite with 9 unit tests covering:
    • Parameter passing to reasoning models (o1, o3, gpt-5)
    • Validation of valid and invalid values
    • Filtering for non-reasoning models
    • Optional live API integration test
  • Updated documentation:
    • docs/components/llms/config.mdx - Updated parameter table
    • docs/components/llms/models/openai.mdx - Added usage example
    • docs/components/llms/models/azure_openai.mdx - Added usage example
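
The sketch below condenses the config field, validation, and filtering described above onto one class to show the intended shape. It is illustrative only: the actual PR spreads this across BaseLlmConfig and LLMBase, and exact names, defaults, and the model-prefix list may differ.

from typing import Literal, Optional
import warnings

# Prefixes of models that accept reasoning_effort (per this PR's description).
REASONING_MODEL_PREFIXES = ("o1", "o3", "gpt-5")

class BaseLlmConfig:
    def __init__(
        self,
        model: str = "gpt-4o-mini",
        reasoning_effort: Optional[Literal["low", "medium", "high"]] = None,
        **kwargs,
    ):
        self.model = model
        self.reasoning_effort = reasoning_effort
        self._validate_config()

    def _validate_config(self) -> None:
        # Reject anything outside the three documented values.
        if self.reasoning_effort is not None and self.reasoning_effort not in ("low", "medium", "high"):
            raise ValueError(
                f"reasoning_effort must be 'low', 'medium', or 'high', got {self.reasoning_effort!r}"
            )

    def _get_supported_params(self) -> dict:
        # Only forward reasoning_effort to reasoning models; warn otherwise
        # so the value is not silently dropped.
        params = {}
        if self.reasoning_effort is not None:
            if self.model.startswith(REASONING_MODEL_PREFIXES):
                params["reasoning_effort"] = self.reasoning_effort
            else:
                warnings.warn(
                    f"reasoning_effort is ignored for non-reasoning model '{self.model}'"
                )
        return params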

Usage Example

OpenAI

from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "o1-preview",
            "temperature": 1,
            "max_tokens": 2000,
            "reasoning_effort": "high"  # New parameter
        }
    }
}

m = Memory.from_config(config)
m.add("I love programming in Python", user_id="user123")

Azure OpenAI

from mem0 import Memory

config = {
    "llm": {
        "provider": "azure_openai",
        "config": {
            "model": "o1-preview",
            "azure_kwargs": {
                "api_version": "2024-09-01-preview",
                "azure_endpoint": "https://your-endpoint.openai.azure.com/",
                "azure_deployment": "your-o1-deployment"
            },
            "reasoning_effort": "medium"  # New parameter
        }
    }
}

m = Memory.from_config(config)

Testing

To run the new tests:

# Run all reasoning_effort tests
pytest tests/llms/test_openai_reasoning_effort.py -v

# Run a specific test
pytest tests/llms/test_openai_reasoning_effort.py::test_reasoning_effort_with_o1 -v

# Run with live API (requires OPENAI_API_KEY)
pytest tests/llms/test_openai_reasoning_effort.py::test_reasoning_effort_live_api -v
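
For orientation, the mocked unit tests look roughly like the following. This is an illustrative sketch: the patch target, constructor, and generate_response signature are assumptions based on mem0's existing OpenAI tests and may not match the file verbatim.

from unittest.mock import MagicMock, patch

import pytest

from mem0.configs.llms.base import BaseLlmConfig
from mem0.llms.openai import OpenAILLM


@patch("mem0.llms.openai.OpenAI")
def test_reasoning_effort_passed_to_o1(mock_openai):
    # Mock the OpenAI client so no network call is made.
    mock_client = MagicMock()
    mock_openai.return_value = mock_client
    mock_client.chat.completions.create.return_value = MagicMock(
        choices=[MagicMock(message=MagicMock(content="ok", tool_calls=None))]
    )

    llm = OpenAILLM(config=BaseLlmConfig(model="o1", reasoning_effort="high"))
    llm.generate_response(messages=[{"role": "user", "content": "hi"}])

    # The reasoning model should receive the parameter unchanged.
    _, kwargs = mock_client.chat.completions.create.call_args
    assert kwargs.get("reasoning_effort") == "high"


def test_reasoning_effort_validation_invalid_value():
    with pytest.raises(ValueError):
        BaseLlmConfig(model="o1", reasoning_effort="extreme")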

Checklist

  • Added reasoning_effort parameter to all config classes
  • Implemented validation for parameter values
  • Implemented smart filtering (only pass to reasoning models)
  • Added warning for non-reasoning models
  • Created comprehensive test suite
  • Updated documentation
  • Followed existing code style and patterns
  • All tests pass

Related Issues

Fixes #3651 (Add support for reasoning_effort parameter for reasoning models in AzureOpenAIConfig)

- Add reasoning_effort parameter to all LLM configuration classes
  - BaseLlmConfig, OpenAIConfig, AzureOpenAIConfig
  - AnthropicConfig, OllamaConfig, DeepSeekConfig
  - LMStudioConfig, VllmConfig, AWSBedrockConfig

- Implement validation for reasoning_effort values (low, medium, high)
- Add smart parameter filtering in LLMBase._get_supported_params()
  - Only passes reasoning_effort to reasoning models (o1, o3, gpt-5)
  - Warns when used with non-reasoning models

- Add comprehensive test suite
  - 9 unit tests with mocked OpenAI client
  - Tests for o1, o3, and gpt-5 models
  - Validation tests and parameter filtering tests
  - Optional live API integration test

- Update documentation
  - Updated config.mdx to list reasoning_effort for OpenAI/Azure providers
  - Added usage examples in openai.mdx and azure_openai.mdx
  - Documented supported models and valid values

This enables users to control the reasoning depth vs. speed trade-off
for OpenAI's reasoning models via the reasoning_effort parameter.

Fixes mem0ai#3651
… errors

- Changed unused 'response' variables to either remove assignment or use result
- Fixed test_reasoning_effort_validation_invalid_value to not assign unused llm variable
- All F841 linting errors resolved