This guide helps collaborators set up and run PatchPro locally for development and testing.
- Python 3.12+ (required)
- uv package manager (recommended) or pip
- OpenAI API Key (for LLM features)
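Before installing anything, you can confirm the interpreter you plan to use meets the version floor above. A minimal sketch:

```python
import sys

# Compare the running interpreter against the 3.12+ requirement listed above
ok = sys.version_info >= (3, 12)
print("Python", sys.version.split()[0], "-", "OK" if ok else "too old, need 3.12+")
```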
```bash
git clone <repository-url>
cd patchpro-bot-agent-dev
```

Using uv (recommended):

```bash
uv sync
```

Using pip:

```bash
python -m venv .venv
source .venv/bin/activate  # or `.venv/bin/activate.fish` for fish shell
pip install -e .
```

Create a `.env` file in the project root:

```
# .env
OPENAI_API_KEY=sk-proj-your-openai-api-key-here
```

Get your API key from: https://platform.openai.com/api-keys
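If you want to sanity-check the `.env` file before running anything, a small stdlib parser that mimics what a dotenv loader does is enough. This is a sketch, not PatchPro's actual loading code:

```python
from pathlib import Path

def load_dotenv(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blank lines and # comments."""
    env: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

if Path(".env").exists():
    print("OPENAI_API_KEY set:", "OPENAI_API_KEY" in load_dotenv())
```

Calling `load_dotenv()` from the project root should return a dict containing `OPENAI_API_KEY`.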
```bash
# Clone the demo repository
git clone <demo-repo-url>
cd patchpro-demo-repo

# Generate analysis artifacts
mkdir -p artifact/analysis
ruff check --output-format json . > artifact/analysis/ruff_output.json || true
semgrep --config .semgrep.yml --json . > artifact/analysis/semgrep_output.json || true

# Run PatchPro (from demo repo)
uv run --with /path/to/patchpro-bot-agent-dev python -m patchpro_bot.run_ci
```
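The `ruff_output.json` written above is a JSON array with one object per diagnostic. A minimal sketch of reading it, using an inline sample record (field names follow ruff's JSON output format; the sample finding itself is illustrative):

```python
import json

# Illustrative record in the shape of `ruff check --output-format json`:
# a JSON array with one object per diagnostic.
sample = json.dumps([
    {
        "filename": "app.py",
        "code": "F401",
        "message": "`os` imported but unused",
        "location": {"row": 1, "column": 8},
    }
])

findings = json.loads(sample)
for f in findings:
    loc = f["location"]
    print(f'{f["filename"]}:{loc["row"]}:{loc["column"]} {f["code"]}: {f["message"]}')
```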
1. Set up your repository:

```bash
cd your-repository

# Add proper pyproject.toml
cat > pyproject.toml << EOF
[project]
name = "your-project"
version = "0.1.0"
requires-python = ">=3.8"
dependencies = []

[build-system]
requires = ["setuptools>=68", "wheel"]
build-backend = "setuptools.build_meta"
EOF
```

2. Generate analysis:

```bash
mkdir -p artifact/analysis
ruff check --output-format json . > artifact/analysis/ruff_output.json || true
semgrep --json . > artifact/analysis/semgrep_output.json || true
```

3. Run PatchPro:

```bash
uv run --with /path/to/patchpro-bot-agent-dev python -m patchpro_bot.run_ci
```
```bash
cd patchpro-bot-agent-dev

# Point to external artifacts
PP_ARTIFACTS=/path/to/repo/artifact uv run python -m patchpro_bot.run_ci
```

PatchPro generates several artifacts in the `artifact/` directory:

- `report.md` - Comprehensive analysis report
- `patch_combined_*.diff` - AI-generated code fixes
- `patch_summary_*.md` - Summary of changes
- `analysis/` - Raw analysis data from tools
Environment variables:

- `OPENAI_API_KEY` - OpenAI API key for LLM features
- `PP_ARTIFACTS` - Path to artifacts directory (default: `artifact`)
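To check that `PP_ARTIFACTS` points at the layout PatchPro expects, a quick sketch (directory and file names taken from the commands earlier in this guide; this is not PatchPro's own validation logic):

```python
import os
from pathlib import Path

# Expected layout from this guide: $PP_ARTIFACTS/analysis/<tool>_output.json
artifacts = Path(os.environ.get("PP_ARTIFACTS", "artifact"))
analysis = artifacts / "analysis"
expected = ["ruff_output.json", "semgrep_output.json"]

if analysis.is_dir():
    for name in expected:
        print(name, "found" if (analysis / name).exists() else "missing")
else:
    print("no analysis directory at", analysis)
```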
Add to your `.env` file:

```
OPENAI_MODEL=gpt-4o-mini       # Default model
OPENAI_MAX_TOKENS=8192         # Max response tokens
OPENAI_TEMPERATURE=0.1         # Generation temperature
```

```bash
# Check Python version
python --version  # Should be 3.12+

# Use specific Python version with uv
uv python install 3.12
uv venv --python 3.12
```

```bash
# Test API key
export OPENAI_API_KEY="your-key"
curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/models
```

- Ensure you're running from the correct directory
- Check that source files exist in the expected locations
- Verify the `PP_ARTIFACTS` path is correct
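When debugging configuration, it can help to print the effective settings the environment would produce. A sketch using the variable names and defaults documented above (how PatchPro consumes them internally is an assumption):

```python
import os

# Variable names and defaults are the ones documented in this guide; how
# PatchPro reads them internally is an assumption.
config = {
    "OPENAI_MODEL": os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
    "OPENAI_MAX_TOKENS": int(os.environ.get("OPENAI_MAX_TOKENS", "8192")),
    "OPENAI_TEMPERATURE": float(os.environ.get("OPENAI_TEMPERATURE", "0.1")),
    "PP_ARTIFACTS": os.environ.get("PP_ARTIFACTS", "artifact"),
}
for key, value in config.items():
    print(f"{key} = {value}")
```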
```bash
# Reinstall dependencies
uv sync --reinstall
# or
pip install -e . --force-reinstall
```

- **Make changes** to PatchPro code
- **Test locally** using one of the methods above
- **Check logs** in `artifact/patchpro_enhanced.log`
- **Review output** patches and reports
- **Commit changes** when satisfied
- First run may be slower (model initialization)
- Subsequent runs benefit from caching
- Large repositories are processed in intelligent batches
- API costs vary by model and code complexity
- Check logs in `artifact/patchpro_enhanced.log`
- Review error messages for specific issues
- Ensure all prerequisites are met
- Verify file paths and permissions