A VS Code extension that brings AI-powered data discovery to your IDE using your choice of LLM provider. Chat with your OpenMetadata catalog using OpenAI, Ollama, or any custom endpoint.
- Multiple LLM Providers: Choose between OpenAI, Ollama, or custom OpenAI-compatible endpoints
- Complete Privacy: Use local models with Ollama for offline, private data analysis
- Flexible Configuration: Switch between providers without code changes
- Natural Language Search: Ask questions like "show me customer tables" or search by keywords
- AI-Powered Insights: Get intelligent analysis of your datasets and data quality
- Interactive Data Lineage: Visualize upstream and downstream table relationships
- Column Details: Explore table schemas with expandable column information
- OpenMetadata Server: Running locally at http://localhost:8585
  - Use the OpenMetadata Docker setup
  - Load sample data for testing
- LLM Provider (choose one):
  - OpenAI: Get an API key from platform.openai.com
  - Ollama: Install Ollama and pull a model (e.g., `ollama pull llama2`)
  - Custom: Any OpenAI-compatible API endpoint
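Before going further, you can confirm that both prerequisites are reachable. The following is a minimal standalone sketch, assuming Node 18+ (for the built-in `fetch`) and the default ports; `/api/v1/system/version` is OpenMetadata's version endpoint and `/api/tags` is Ollama's model listing, but verify both against your installed versions.

```typescript
// check-prereqs.ts — probe the OpenMetadata server and the Ollama daemon.
// Run with: npx ts-node check-prereqs.ts (requires Node 18+ for global fetch)

async function probe(name: string, url: string): Promise<void> {
  try {
    const res = await fetch(url);
    console.log(`${name}: ${res.ok ? "reachable" : `HTTP ${res.status}`} at ${url}`);
  } catch {
    console.log(`${name}: NOT reachable at ${url}`);
  }
}

async function main(): Promise<void> {
  // OpenMetadata reports its version on an unauthenticated endpoint.
  await probe("OpenMetadata", "http://localhost:8585/api/v1/system/version");
  // Ollama lists locally pulled models; only relevant if you use the Ollama provider.
  await probe("Ollama", "http://localhost:11434/api/tags");
}

main();
```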
Option 1: Run from Source

1. Clone and Install

```bash
cd /path/to/your/workspace
git clone <your-repo-url> local-llm-chat-vscode-openmetadata
cd local-llm-chat-vscode-openmetadata
npm install
```

2. Configure Your LLM Provider
Open VS Code settings (`Ctrl+,` or `Cmd+,`) and configure your preferred provider:
OpenAI:

```json
{
  "openmetadataExplorer.llm.provider": "openai",
  "openmetadataExplorer.llm.openai.apiKey": "sk-your-api-key-here",
  "openmetadataExplorer.llm.openai.model": "gpt-4o",
  "openmetadataExplorer.openmetadataUrl": "http://localhost:8585",
  "openmetadataExplorer.openmetadataAuthToken": "YOUR_BOT_TOKEN"
}
```

Ollama:

```json
{
  "openmetadataExplorer.llm.provider": "ollama",
  "openmetadataExplorer.llm.ollama.endpoint": "http://localhost:11434",
  "openmetadataExplorer.llm.ollama.model": "llama2",
  "openmetadataExplorer.openmetadataUrl": "http://localhost:8585",
  "openmetadataExplorer.openmetadataAuthToken": "YOUR_BOT_TOKEN"
}
```

Custom:

```json
{
  "openmetadataExplorer.llm.provider": "custom",
  "openmetadataExplorer.llm.custom.endpoint": "http://localhost:1234/v1",
  "openmetadataExplorer.llm.custom.apiKey": "optional-api-key",
  "openmetadataExplorer.llm.custom.model": "model-name",
  "openmetadataExplorer.openmetadataUrl": "http://localhost:8585",
  "openmetadataExplorer.openmetadataAuthToken": "YOUR_BOT_TOKEN"
}
```
3. Get OpenMetadata Bot Token

- Open http://localhost:8585 and log in (default: admin/admin)
- Go to Settings → Bots
- Click Add Bot with these details:
  - Name: `vscode-llm-bot`
  - Description: `Bot for VS Code LLM extension`
- Click Generate Token and copy the JWT token (it starts with `eyJ`)
- Assign the Data Consumer role to the bot
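Before pasting the token into your VS Code settings, it's worth verifying it against the API directly. A minimal sketch (Node 18+; `/api/v1/tables` is OpenMetadata's table-listing endpoint, and the bot token goes in a Bearer header):

```typescript
// verify-token.ts — confirm the bot token can list tables.
const OM_URL = "http://localhost:8585";
const TOKEN = process.env.OM_BOT_TOKEN ?? ""; // the eyJ... JWT from the bot page

async function main(): Promise<void> {
  const res = await fetch(`${OM_URL}/api/v1/tables?limit=1`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`Token check failed: HTTP ${res.status}`);
  const body = (await res.json()) as { data?: { fullyQualifiedName?: string }[] };
  console.log("Token OK. Sample table:", body.data?.[0]?.fullyQualifiedName ?? "(no tables yet)");
}

main().catch((err) => { console.error(err); process.exit(1); });
```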
4. Run in Debug Mode
- Press `F5` to launch the extension in a new VS Code window
- Look for the OPEN METADATA panel at the bottom
- Verify the connection and start searching
Option 2: Install the Packaged Extension

1. Build the Extension

```bash
npm run compile
npm run package
```

2. Install the VSIX

- Open the Command Palette (`Ctrl+Shift+P` or `Cmd+Shift+P`)
- Type: `Extensions: Install from VSIX...`
- Select the generated `.vsix` file
- Reload when prompted
3. Configure as shown in Option 1, step 2
- Keyword Search: Type table names like "customer" or "orders"
- Natural Language: Ask questions like "show me customer data"
- Browse Results: Click on tables to see column details and AI insights
- Search for any table
- Click View Lineage on the table card
- Use the interactive graph:
  - Click + buttons to expand upstream/downstream relationships
  - Click - buttons to collapse connections
  - Drag nodes to reposition them
  - Zoom with the mouse wheel
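Under the hood, the graph is populated from OpenMetadata's lineage API. As a rough sketch of the kind of request involved (illustrative, not the extension's exact code; `upstreamDepth`/`downstreamDepth` are the endpoint's standard query parameters):

```typescript
// Fetch one hop of upstream and downstream lineage for a table by fully qualified name.
async function fetchLineage(omUrl: string, token: string, fqn: string): Promise<unknown> {
  const url =
    `${omUrl}/api/v1/lineage/table/name/${encodeURIComponent(fqn)}` +
    `?upstreamDepth=1&downstreamDepth=1`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  if (!res.ok) throw new Error(`Lineage request failed: HTTP ${res.status}`);
  // The response holds the entity plus node and edge lists that a graph view can render.
  return res.json();
}
```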
- `customer` - Find customer-related tables
- `orders` - Discover transaction data
- `sales` - Locate revenue tables
- `product` - Find catalog information
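Keyword queries like these map onto OpenMetadata's search endpoint. A hedged sketch of such a call (the `table_search_index` index name and the Elasticsearch-style response shape follow OpenMetadata's conventions; verify against your server version):

```typescript
// Search tables matching a keyword via OpenMetadata's search API.
async function searchTables(omUrl: string, token: string, query: string): Promise<string[]> {
  const url =
    `${omUrl}/api/v1/search/query?q=${encodeURIComponent(query)}` +
    `&index=table_search_index&size=10`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  if (!res.ok) throw new Error(`Search failed: HTTP ${res.status}`);
  const body = (await res.json()) as {
    hits?: { hits?: { _source?: { fullyQualifiedName?: string } }[] };
  };
  return (body.hits?.hits ?? [])
    .map((h) => h._source?.fullyQualifiedName)
    .filter((n): n is string => Boolean(n));
}
```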
| Feature | OpenAI | Ollama | Custom |
|---|---|---|---|
| Speed | Fast (cloud) | Medium-Fast (local) | Varies |
| Privacy | Cloud-based | 100% Local | Depends |
| Cost | Pay-per-use | Free | Varies |
| Setup | API key only | Install + model | Varies |
| Offline | ❌ No | ✅ Yes | Depends |
| Quality | Excellent | Good | Varies |
OpenAI:
- `gpt-4o` - Best quality, faster than GPT-4
- `gpt-4` - High quality, slower
- `gpt-3.5-turbo` - Fast and cheap
- `o1-preview` - Advanced reasoning
- `o1-mini` - Cost-effective reasoning
Ollama:
- `llama2` - General purpose, 7B parameters
- `mistral` - Fast and capable
- `codellama` - Optimized for code/data
- `llama3` - Latest Llama model
Custom:
- Any OpenAI-compatible API endpoint works
- Examples: LM Studio, LocalAI, vLLM, Text Generation WebUI
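In practice, "OpenAI-compatible" means the server accepts the standard `POST <endpoint>/chat/completions` request shape. A minimal sketch of such a call, which should work against OpenAI, LM Studio's default `http://localhost:1234/v1`, and other compatible servers alike:

```typescript
// Minimal chat-completions request against any OpenAI-compatible endpoint.
async function chat(endpoint: string, model: string, prompt: string, apiKey?: string): Promise<string> {
  const res = await fetch(`${endpoint}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Local servers often accept any (or no) key; OpenAI requires a real one.
      ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) throw new Error(`LLM request failed: HTTP ${res.status}`);
  const body = (await res.json()) as { choices: { message: { content: string } }[] };
  return body.choices[0].message.content;
}

// Example: await chat("http://localhost:1234/v1", "model-name", "Describe the customers table");
```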
```bash
# Development build with watch
npm run watch

# Production build
npm run compile

# Package for distribution
npm run package
```

```
src/
├── extension.ts                      # Extension entry point
├── OpenMetadataExplorerProvider.ts   # Main provider
├── services/
│   ├── OpenAIService.ts              # OpenAI integration
│   ├── LocalLLMService.ts            # Ollama/Custom integration
│   ├── UnifiedLLMService.ts          # LLM orchestrator
│   ├── OpenMetadataService.ts        # OpenMetadata API
│   └── LineageService.ts             # Data lineage
└── webview/
    ├── App.tsx                       # Main React app
    └── components/                   # React components
```
UnifiedLLMService.ts
- Orchestrates between OpenAI, Ollama, and custom endpoints
- Handles provider selection and initialization
- Provides unified interface for all LLM operations
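A simplified sketch of the dispatch pattern such an orchestrator typically follows (hypothetical and illustrative only; the real class may differ in detail). The point is that the rest of the extension talks to one interface while the provider setting picks the backend; recent Ollama versions expose an OpenAI-compatible route under `/v1`, which keeps all three branches on the same request shape:

```typescript
// Illustrative provider dispatch: one interface, selected by the llm.provider setting.
interface LLMService {
  complete(prompt: string): Promise<string>;
}

type Provider = "openai" | "ollama" | "custom";

function createLLMService(
  provider: Provider,
  cfg: { endpoint?: string; apiKey?: string; model: string }
): LLMService {
  // Resolve a chat-completions base URL per provider.
  const base =
    provider === "openai" ? (cfg.endpoint ?? "https://api.openai.com/v1")
    : provider === "ollama" ? `${cfg.endpoint ?? "http://localhost:11434"}/v1`
    : (cfg.endpoint ?? "");
  return {
    async complete(prompt: string): Promise<string> {
      const res = await fetch(`${base}/chat/completions`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          ...(cfg.apiKey ? { Authorization: `Bearer ${cfg.apiKey}` } : {}),
        },
        body: JSON.stringify({ model: cfg.model, messages: [{ role: "user", content: prompt }] }),
      });
      if (!res.ok) throw new Error(`${provider} request failed: HTTP ${res.status}`);
      const body = (await res.json()) as { choices: { message: { content: string } }[] };
      return body.choices[0].message.content;
    },
  };
}
```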
OpenAIService.ts
- Direct integration with OpenAI API
- Supports chat completions format
- Handles API key validation
LocalLLMService.ts
- OpenAI-compatible format for Ollama
- Fallback to legacy formats for compatibility
- Flexible response parsing
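"Flexible response parsing" can be pictured as trying the OpenAI shape first and falling back to older shapes. A hypothetical sketch of that fallback (the actual parser may cover more cases):

```typescript
// Pull the completion text out of whichever response shape the server returned.
function parseCompletion(body: unknown): string {
  const b = body as {
    choices?: { message?: { content?: string }; text?: string }[];
    response?: string; // Ollama's native /api/generate shape
  };
  return (
    b.choices?.[0]?.message?.content ?? // OpenAI chat-completions shape
    b.choices?.[0]?.text ??             // legacy text-completions shape
    b.response ??                       // Ollama native shape
    ""
  );
}
```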
Connection:

| Setting | Description | Default |
|---|---|---|
| `openmetadataExplorer.openmetadataUrl` | OpenMetadata server URL | `http://localhost:8585` |
| `openmetadataExplorer.openmetadataAuthToken` | OpenMetadata bot JWT token | (empty) |

Provider selection:

| Setting | Description | Default |
|---|---|---|
| `openmetadataExplorer.llm.provider` | LLM provider (`openai`/`ollama`/`custom`) | `openai` |

OpenAI:

| Setting | Description | Default |
|---|---|---|
| `openmetadataExplorer.llm.openai.apiKey` | OpenAI API key | (empty) |
| `openmetadataExplorer.llm.openai.model` | OpenAI model name | `gpt-4o` |
| `openmetadataExplorer.llm.openai.baseUrl` | OpenAI API base URL | `https://api.openai.com/v1` |

Ollama:

| Setting | Description | Default |
|---|---|---|
| `openmetadataExplorer.llm.ollama.endpoint` | Ollama API endpoint | `http://localhost:11434` |
| `openmetadataExplorer.llm.ollama.model` | Ollama model name | `llama2` |

Custom:

| Setting | Description | Default |
|---|---|---|
| `openmetadataExplorer.llm.custom.endpoint` | Custom API endpoint | (empty) |
| `openmetadataExplorer.llm.custom.apiKey` | Custom API key (if required) | (empty) |
| `openmetadataExplorer.llm.custom.model` | Custom model name | (empty) |
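Inside the extension, these values are read through VS Code's standard configuration API. For example (a sketch; `vscode.workspace.getConfiguration` is the real VS Code API, while the variable names are illustrative):

```typescript
import * as vscode from "vscode";

// The section name matches the key prefix used in the tables above.
const cfg = vscode.workspace.getConfiguration("openmetadataExplorer");

const provider = cfg.get<string>("llm.provider", "openai");
const omUrl = cfg.get<string>("openmetadataUrl", "http://localhost:8585");
const omToken = cfg.get<string>("openmetadataAuthToken", "");
```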
- "API key invalid": Check your API key at platform.openai.com
- "Rate limit exceeded": Wait a moment or upgrade your OpenAI plan
- "Model not found": Verify the model name (gpt-4o, gpt-4, etc.)
- "Connection refused": Make sure Ollama is running (
ollama serve) - "Model not found": Pull the model first (
ollama pull llama2) - Slow responses: Use a smaller model or upgrade your hardware
- "Connection failed": Verify OpenMetadata is running at the configured URL
- "Authentication failed": Check your bot token is valid and not expired
- "No tables found": Ensure sample data is loaded in OpenMetadata
Version 1.0.0 - Initial Release
- OpenAI, Ollama, and custom endpoint support
- Natural language search with AI insights
- Interactive data lineage visualization
- Professional UI optimized for developers
Planned Features
- Column-level lineage relationships
- Data quality monitoring integration
- Advanced search filters and exports
- Streaming responses for better UX
- Multi-turn conversations
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to the Ollama team for making local LLMs accessible
- LM Studio for providing an excellent local inference platform
- The VS Code extension API team for comprehensive documentation
If you encounter any issues or have questions:
- 🐛 Report bugs
- 💡 Request features
- ⭐ Star the repo if you find it useful!
If you like this project, support further development with a repost or coffee:
- 🧑💻 Markus Begerow
- 💾 GitHub
Privacy Notice: The extension itself runs locally. With Ollama or another local endpoint, your catalog data never leaves your machine; data is sent to an external server only if you configure a remote LLM provider such as OpenAI.