@celeria-ai celeria-ai bot commented Nov 17, 2025

This PR implements automatic discovery of local models served by common hosting endpoints such as LM Studio, llama.cpp, and MLX.

It introduces the following changes:

  • A new check_port function to detect open ports.
  • The get_locally_available_models function now scans for LM Studio, Llama.cpp, and MLX on their default ports.
  • If a local model provider is detected, the function attempts to fetch the list of available models from that provider's /v1/models endpoint.
  • Discovered models are added to the list of available models with a provider name like lmstudio-openai-like.

This addresses issue #193.
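The discovery flow above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: the port numbers, the `DEFAULT_PORTS` table, and the returned dictionary shape are assumptions; only the function names `check_port` and `get_locally_available_models`, the `/v1/models` endpoint, and the `lmstudio-openai-like` provider-tag style come from the PR description.

```python
import json
import socket
import urllib.request

# Hypothetical default ports per provider (illustrative values only).
DEFAULT_PORTS = {
    "lmstudio": 1234,
    "llamacpp": 8080,
    "mlx": 8081,
}


def check_port(host: str, port: int, timeout: float = 0.2) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def get_locally_available_models(host: str = "127.0.0.1") -> list[dict]:
    """Scan known provider ports and collect models from /v1/models."""
    models = []
    for provider, port in DEFAULT_PORTS.items():
        if not check_port(host, port):
            continue
        url = f"http://{host}:{port}/v1/models"
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                data = json.load(resp)
        except (OSError, ValueError):
            continue
        for entry in data.get("data", []):
            models.append({
                "id": entry.get("id"),
                # Provider tag in the style the PR describes,
                # e.g. "lmstudio-openai-like".
                "provider": f"{provider}-openai-like",
            })
    return models
```

A short connect timeout in `check_port` keeps the scan fast when no local server is running, and errors on the HTTP fetch are swallowed so a half-started server does not break discovery of the others.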
