Feature/OpenAI providers #324
Conversation
So this seems to have a very similar intention as: where I commented:
in: I feel like with this level of complexity, you might as well collapse both
Introduce OpenAIProviderManager plus JSON-backed metadata to hydrate /models payloads for OpenAI-like providers such as Synthetic. Hook ModelInfoManager, Model, and CLI completions/listings into that registry, expose configuration data in aider/resources/openai_providers.json, and ensure LiteLLM is initialized with the custom handler so cecli can call these endpoints reliably.
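A minimal sketch of what such a JSON-backed registry could look like, assuming openai_providers.json maps provider names to base URLs, key env vars, and model lists loaded via importlib.resources; the actual class layout and field names in the PR may differ:

```python
# Illustrative sketch only; not the PR's actual OpenAIProviderManager.
import json
from importlib import resources


class OpenAIProviderManager:
    """Registry of OpenAI-like providers hydrated from bundled JSON metadata."""

    def __init__(self):
        # Assumes aider/resources is an importable package containing the JSON file.
        raw = resources.files("aider.resources").joinpath("openai_providers.json")
        self.providers = json.loads(raw.read_text())

    def get(self, name):
        return self.providers.get(name)

    def model_names(self):
        # Yield "provider/model" identifiers for CLI completions and listings.
        for provider, meta in self.providers.items():
            for model in meta.get("models", []):
                yield f"{provider}/{model}"
```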
Some OpenAI-compatible providers emit costs like '$0.00000055', which our float parser treated as invalid, leaving the UI without per-token pricing (e.g., synthetic MiniMax-M2). Strip currency symbols/commas before parsing, and add a regression test proving that static model caches with dollar-prefixed pricing still populate ModelInfoManager.
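A hedged sketch of the normalization step as a standalone helper; the function name and its place in ModelInfoManager are illustrative, not the PR's actual code:

```python
import re


def parse_token_cost(raw):
    """Parse provider-reported per-token costs such as "$0.00000055" or "1,234.5"."""
    if raw is None:
        return None
    if isinstance(raw, (int, float)):
        return float(raw)
    # Strip currency symbols and thousands separators before float parsing.
    cleaned = re.sub(r"[^\d.eE+-]", "", str(raw))
    try:
        return float(cleaned)
    except ValueError:
        return None
```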
Synthetic/other OpenAI-like providers returned both reasoning_content and content, but our consolidation skipped storing the final content whenever reasoning existed, so cecli printed only the THINKING section. Always capture the message.content (including list-style OpenAI blocks) and add a regression test that feeds a recorded MiniMax completion via a heredoc JSON snippet to assert both THINKING and ANSWER text render.
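A sketch of the consolidation rule described above; the attribute access and flattening of list-style content blocks are assumptions about the message shape, not the PR's actual helper:

```python
def consolidate(message):
    """Always capture final content, even when reasoning_content is present."""
    reasoning = getattr(message, "reasoning_content", None) or ""
    content = getattr(message, "content", None)
    # OpenAI-style content may be a list of blocks like {"type": "text", "text": ...}.
    if isinstance(content, list):
        content = "".join(
            block.get("text", "")
            for block in content
            if isinstance(block, dict) and block.get("type") == "text"
        )
    return reasoning, content or ""
```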
Reinstalled the dev toolchain, ran the documented pre-commit hooks, and applied the resulting isort/black fixes across the provider modules; also removed an unused import and variable in tests, so the branch now passes the project’s formatting gate.
Co-authored-by: aider-ce (synthetic/hf:deepseek-ai/DeepSeek-V3.2)
b0817e8 to e365eaa
Working for me in testing; please review when you have the time.
Very nice, I'll make a v0.91.3 and make sure this is all included in that.
This PR adds support for the openai-like providers known to LiteLLM so their models are included in lists, autocomplete, etc. It also adds scripts/generate_openai_providers.py to create a JSON file based on litellm data. For models, token/cost information is retrieved via the /models endpoint if available, cached similarly to openrouter, and made available to cecli.
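A rough sketch of how such a fetch-and-cache flow for /models might look; the function name, cache path handling, and TTL are assumptions, not the PR's actual implementation:

```python
import json
import time
from pathlib import Path

import requests

CACHE_TTL = 24 * 60 * 60  # refresh cached provider metadata daily (assumed)


def fetch_model_metadata(base_url, api_key, cache_path):
    """Return the provider's /models payload, using a local cache when fresh."""
    cache = Path(cache_path)
    if cache.exists() and time.time() - cache.stat().st_mtime < CACHE_TTL:
        return json.loads(cache.read_text())
    resp = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    cache.parent.mkdir(parents=True, exist_ok=True)
    cache.write_text(json.dumps(data))
    return data
```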