Never miss a critical update again! A lightweight Python tool that monitors GitHub repositories for new releases, perfect for CI/CD pipelines, dependency tracking, and security updates.
Quick Start: New to Python? See our Quick Start Guide for non-Python developers.
- Automated Dependency Tracking: Know instantly when your dependencies release new versions
- Security Updates: Stay on top of critical security patches
- CI/CD Integration: Built specifically for Concourse pipelines (but works anywhere!)
- Stateful Monitoring: Only alerts on truly new releases, not ones you've already seen
- Flexible Output: JSON or YAML output for easy integration
- Lightweight: Simple Python script with minimal dependencies
Perfect for teams who need to:
- Track when Kubernetes, Gatekeeper, Istio, or other tools release updates
- Automate dependency updates in their CI/CD pipelines
- Monitor security tools for latest versions
- Build compliance reports showing update status
$ python3 github_monitor.py --config config.yaml
{
  "timestamp": "2025-05-30T13:40:38.695772+00:00",
  "total_repositories_checked": 3,
  "new_releases_found": 2,
  "releases": [
    {
      "repository": "kubernetes/kubernetes",
      "tag_name": "v1.33.1",
      "name": "Kubernetes v1.33.1",
      "published_at": "2025-05-15T17:49:01Z",
      "html_url": "https://github.com/kubernetes/kubernetes/releases/tag/v1.33.1"
    },
    {
      "repository": "istio/istio",
      "tag_name": "1.22.4",
      "name": "1.22.4",
      "published_at": "2025-05-21T13:02:01Z",
      "html_url": "https://github.com/istio/istio/releases/tag/1.22.4"
    }
  ]
}

- GitHub API Integration: Authenticates with the GitHub API using personal access tokens
- Rate Limiting: Built-in protection against API rate limits with automatic retry
- State Tracking: Maintains state between runs to detect only new releases
- Multiple Output Formats: Supports JSON and YAML output
- Concourse Integration: Designed for use in Concourse pipelines with bash wrapper
- Error Handling: Comprehensive error handling for network issues and API failures
- Configurable: YAML-based configuration for repository lists and settings
- Release Downloads: Automatically download release assets with version management
- Asset Filtering: Download only specific file types using configurable patterns
- Manifest Support: Download Kubernetes manifests, YAML configs, and source archives
- Verification: Built-in checksum verification for downloaded files
- Smart Organization: Organize downloads by repository and version
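The verification step amounts to streaming each downloaded file through a SHA-256 digest and comparing hex strings. A minimal sketch (the function name is illustrative, not the tool's actual API):

```python
import hashlib

def verify_checksum(path: str, expected_sha256: str) -> bool:
    """Compute the SHA-256 of a downloaded file and compare it to the expected digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 64 KiB chunks so large release assets don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```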
We welcome contributions! Please see our Contributing Guide for detailed information.
Perfect for newcomers to the project:
- Good First Issues - Beginner-friendly tasks
- Help Wanted - Issues where we need help
- Enhancements - New features and improvements
- Bug Fixes - Help us squash bugs
- Documentation - Improve docs, add examples, fix typos
- New Features - Add support for new CI/CD tools (GitHub Actions, GitLab CI, etc.)
- Tests - Increase test coverage
- Code Quality - Refactoring and improvements
- Examples - Add more use cases and integration examples
- Find an Issue: Browse open issues or create a new one
- Comment: Let us know you're working on it
- Fork & Clone: See our Contributing Guide
- Make Changes: Follow our coding standards
- Submit PR: We'll review and provide feedback
- Fork the repository
- Report a bug
- Request a feature
- Contributing Guide
- Contributors
- Discussions (coming soon)
The GitHub Release Monitor supports multiple storage backends for artifacts and version tracking:
- MinIO, AWS S3, Google Cloud Storage, and other S3-compatible services
- Built-in support with automatic failover between boto3 and MinIO client
- Perfect for cloud-native and self-hosted environments
- Full JFrog Artifactory OSS and Enterprise support
- Complete Documentation: Artifactory Integration Guide
- Version database and artifact storage in Artifactory repositories
- API key and username/password authentication
- SSL configuration for self-signed certificates
Choose the storage backend that best fits your infrastructure:
- Getting Started: Use the default S3-compatible setup with MinIO
- Enterprise: Use Artifactory if you already have JFrog infrastructure
- Cloud: Use AWS S3, Google Cloud Storage, or Azure Blob Storage
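A backend choice like this is typically expressed in `config.yaml`. The keys below are purely hypothetical, sketched only to show the shape such a section could take; consult the Artifactory Integration Guide and the S3 setup docs for the tool's real schema:

```yaml
# HYPOTHETICAL sketch: key names are illustrative, not the documented schema
storage:
  backend: s3        # e.g. s3, artifactory, or local
  s3:
    endpoint: http://localhost:9000   # MinIO or any S3-compatible endpoint
    bucket: release-monitor
  artifactory:
    url: https://artifactory.example.com/artifactory
    repository: release-monitor
```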
- Python 3.8+ (tested on 3.8, 3.9, 3.10, 3.11)
- GitHub personal access token
- Dependencies:
`requests`, `PyYAML`, `boto3` (for S3 storage)
Use the Makefile for easy setup:
# Complete setup
make setup
# Edit .env with your GitHub token
# Edit config-local.yaml with test repositories
# Run monitoring
make run

Or set up manually:

1. Clone or download the repository

2. Install dependencies:

   pip install -r requirements.txt

3. Set up your GitHub token:

   export GITHUB_TOKEN="your_personal_access_token"
Create a config.yaml file to specify repositories to monitor:
repositories:
  - owner: kubernetes
    repo: kubernetes
    description: "Kubernetes container orchestration platform"
  - owner: istio
    repo: istio
    description: "Istio service mesh"

settings:
  rate_limit_delay: 1.0
  max_releases_per_repo: 10
  include_prereleases: false

# Optional: Enable release downloads
download:
  enabled: false
  directory: ./downloads
  asset_patterns:
    - "*.tar.gz"
    - "*.zip"
  verify_downloads: true

# Basic usage
python3 github_monitor.py --config config.yaml
# Save output to file
python3 github_monitor.py --config config.yaml --output releases.json
# YAML output format
python3 github_monitor.py --config config.yaml --format yaml
# Force check all releases (ignore timestamps)
python3 github_monitor.py --config config.yaml --force-check
# Monitor and download new releases automatically
python3 github_monitor.py --config config.yaml --download
# Force local storage (bypass S3/Artifactory auto-detection)
python3 github_monitor.py --config config.yaml --force-download

# Basic usage
./scripts/monitor.sh
# Custom configuration
./scripts/monitor.sh --config custom-config.yaml --output releases.json
# YAML output
./scripts/monitor.sh --format yaml
# Force check
./scripts/monitor.sh --force-check
# Monitor and download releases
./scripts/monitor.sh --download

Enable automatic downloading of GitHub release assets:
# Configure downloads in config.yaml first
python3 github_monitor.py --config config.yaml --download
# Use the dedicated download script
./scripts/download.sh --config config.yaml --input releases.json
# Monitor and download in one pipeline
./scripts/monitor.sh | ./scripts/download.sh

See the Download Guide for detailed configuration and usage.
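Pattern-based asset filtering of the kind configured in `asset_patterns` can be sketched with Python's `fnmatch` (the helper below is illustrative, not the script's internal API):

```python
from fnmatch import fnmatch

def matching_assets(asset_names, patterns):
    """Keep only the asset filenames that match at least one glob pattern."""
    return [name for name in asset_names if any(fnmatch(name, p) for p in patterns)]
```

For example, with patterns `["*.tar.gz", "*.zip"]`, a release's `kubectl.tar.gz` and `cli.zip` assets are kept while `notes.md` is skipped.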
Many repositories provide Kubernetes manifests (YAML files) or only source code instead of binary releases:
# Download YAML manifests and source archives
python3 github_monitor.py --config config.yaml --download
# Example repository: Wavefront Observability for Kubernetes
# Downloads: wavefront-operator.yaml + source tarball

See the Source Code Downloads Guide for repositories that publish manifests or source code only.
The script outputs structured data about new releases:
{
  "timestamp": "2024-01-15T10:30:00Z",
  "total_repositories_checked": 8,
  "new_releases_found": 2,
  "releases": [
    {
      "repository": "kubernetes/kubernetes",
      "owner": "kubernetes",
      "repo": "kubernetes",
      "tag_name": "v1.29.1",
      "name": "v1.29.1",
      "published_at": "2024-01-14T15:20:30Z",
      "tarball_url": "https://api.github.com/repos/kubernetes/kubernetes/tarball/v1.29.1",
      "zipball_url": "https://api.github.com/repos/kubernetes/kubernetes/zipball/v1.29.1",
      "html_url": "https://github.com/kubernetes/kubernetes/releases/tag/v1.29.1",
      "prerelease": false,
      "draft": false
    }
  ]
}

The project includes complete Concourse CI/CD pipeline support with multiple pipeline options:
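Downstream tooling can parse this report directly. A minimal sketch that uses only the fields shown in the example output above:

```python
import json

def new_release_lines(report_text):
    """Turn the monitor's JSON report into 'owner/repo tag' lines for notifications."""
    report = json.loads(report_text)
    return [f"{r['repository']} {r['tag_name']}" for r in report["releases"]]
```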
View Pipeline Flowchart - Visual overview of how the pipeline works
ci/
├── pipeline-s3-compatible.yml   # S3-compatible pipeline (MinIO/AWS S3) (PRIMARY)
├── pipeline-simple.yml          # Basic monitoring only (getting started) (STARTER)
├── pipeline.yml                 # Traditional AWS S3 pipeline (AWS-ONLY)
├── fly.sh                       # Deployment script
└── tasks/
    ├── check-releases/          # Full monitoring task
    │   ├── task.yml
    │   └── task.sh
    ├── check-releases-simple/   # Simplified monitoring task
    │   ├── task.yml
    │   └── task.sh
    └── download-releases/       # Advanced download task with S3 support
        ├── task.yml
        └── task.sh

scripts/
├── monitor.sh                   # Monitor wrapper script
└── download.sh                  # Download wrapper script

params/
├── global.yml                   # Global parameters
├── test.yml                     # Test environment parameters
└── prod.yml                     # Production environment parameters
1. S3-Compatible Pipeline (`pipeline-s3-compatible.yml`), PRIMARY:
   - Full MinIO and AWS S3 support
   - Advanced version tracking and downloads
   - Force download and database management utilities
   - Production-ready with automatic cleanup

2. Artifactory Pipeline (`pipeline-artifactory.yml`), ENTERPRISE:
   - JFrog Artifactory integration for enterprise environments
   - Version database and artifact storage in Artifactory
   - API key and username/password authentication
   - SSL configuration for self-signed certificates
   - See the Artifactory Setup Guide

3. Simple Pipeline (`pipeline-simple.yml`), STARTER:
   - Basic monitoring without storage dependencies
   - Easy setup for getting started
   - No downloads, perfect for learning the basics

4. AWS S3 Pipeline (`pipeline.yml`), AWS-ONLY:
   - Traditional AWS S3 integration
   - Standard download functionality
   - For pure AWS environments
1. Configure parameters in the `params/` directory:
   - Edit `global.yml` for shared configuration
   - Create environment-specific files (e.g., `test.yml`, `prod.yml`)

2. Deploy the pipeline using the fly script:

   # Deploy standard pipeline
   ./ci/fly.sh set -t test -f test

   # Deploy simple pipeline
   fly -t test set-pipeline -p release-monitor-simple \
     -c ci/pipeline-simple.yml \
     -l params/global.yml -l params/test.yml

   # Deploy S3-compatible pipeline (recommended)
   fly -t test set-pipeline -p release-monitor-minio \
     -c ci/pipeline-s3-compatible.yml \
     -l params/global-s3-compatible.yml -l params/minio-local.yml

   # Deploy Artifactory pipeline (enterprise)
   fly -t test set-pipeline -p release-monitor-artifactory \
     -c ci/pipeline-artifactory.yml \
     -l params/global-artifactory.yml -l params/test.yml

3. Unpause the pipeline:

   fly -t your-target unpause-pipeline -p github-release-monitor
- Scheduled Execution: Runs monitoring at configurable intervals
- Release Detection: Identifies new releases since last run
- Asset Downloads: Download specific release assets based on patterns
- Version Management: Track downloaded versions to avoid duplicates
- S3 Integration: Optional S3 storage for state and downloads
- Structured Output: JSON/YAML output for downstream processing
- Manual Triggers: Download specific releases on demand
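The duplicate-avoidance logic boils down to a version-database lookup before each download. A sketch with illustrative names (the real database lives in S3 or Artifactory, per the storage section):

```python
def is_new_version(version_db, repository, tag):
    """True when `tag` has not yet been recorded for `repository`."""
    return tag not in version_db.get(repository, set())

def record_version(version_db, repository, tag):
    """Record `tag` so subsequent runs skip downloading it again."""
    version_db.setdefault(repository, set()).add(tag)
```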
The script maintains state in `release_state.json` to track:
- Last checked timestamp for each repository
- Last overall run timestamp
This ensures only new releases are reported on subsequent runs.
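The state file might look like the following; the field names here are illustrative, so inspect your own `release_state.json` for the exact schema:

```json
{
  "last_run": "2025-05-30T13:40:38+00:00",
  "repositories": {
    "kubernetes/kubernetes": { "last_checked": "2025-05-30T13:40:38+00:00" },
    "istio/istio": { "last_checked": "2025-05-30T13:40:38+00:00" }
  }
}
```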
- API Rate Limiting: Automatic retry with exponential backoff
- Network Errors: Graceful handling of connection issues
- Missing Releases: Handles repositories with no releases
- Invalid Configuration: Clear error messages for configuration issues
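Retry with exponential backoff of this sort can be sketched as follows (an illustrative helper, not the script's internal implementation):

```python
import time

def backoff_delays(base=1.0, factor=2.0, retries=5):
    """Delays between attempts: base, base*factor, base*factor**2, ..."""
    return [base * factor ** i for i in range(retries)]

def with_retries(call, retries=5, base=1.0):
    """Run `call`, sleeping with exponentially growing delays between failures."""
    last_error = None
    for attempt, delay in enumerate(backoff_delays(base=base, retries=retries)):
        try:
            return call()
        except Exception as exc:  # e.g. a rate-limit response surfaced as an exception
            last_error = exc
            if attempt < retries - 1:
                time.sleep(delay)
    raise last_error
```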
- `GITHUB_TOKEN` (required): GitHub personal access token
- `AWS_ACCESS_KEY_ID` (Concourse): AWS access key for S3
- `AWS_SECRET_ACCESS_KEY` (Concourse): AWS secret key for S3
# Start MinIO, Artifactory, and PostgreSQL
docker-compose -f docker-compose.yml up -d
# Check service status
docker-compose -f docker-compose.yml logs -f setup-checker

# MinIO S3-compatible storage only
docker-compose up -d
# JFrog Artifactory OSS only
docker-compose -f docker-compose-artifactory.yml up -d
# Automated Artifactory setup
./scripts/setup-artifactory-local.sh

- MinIO Console: http://localhost:9001 (minioadmin/minioadmin)
- Artifactory UI: http://localhost:8081 (admin/password)
- PostgreSQL: localhost:5432 (release_monitor/release_monitor_pass)
export GITHUB_TOKEN="ghp_xxxxxxxxxxxxxxxxxxxx"
./scripts/monitor.sh --config config.yaml

# In a CI environment
./scripts/monitor.sh \
--config config.yaml \
--output /tmp/releases.json \
--format json
# Process results
if [ $(jq '.new_releases_found' /tmp/releases.json) -gt 0 ]; then
echo "New releases found, triggering downstream jobs"
# Trigger additional pipeline steps
fi

Create a custom configuration for specific projects:
repositories:
  - owner: your-org
    repo: your-project
    description: "Internal project"
  - owner: open-policy-agent
    repo: gatekeeper
    description: "Policy Controller for Kubernetes"

settings:
  rate_limit_delay: 2.0        # Slower API calls
  include_prereleases: true    # Include pre-releases

Complete Troubleshooting Guide - Detailed solutions for common issues
- Downloads Not Working: Version database already contains releases (see Solution)
- Environment Variables Override Config: Auto-detection overrides file settings (see Solution)
- Rate Limiting: Increase `rate_limit_delay` in the configuration
- Token Issues: Ensure `GITHUB_TOKEN` has proper permissions
- Network Errors: Check connectivity to api.github.com
- State File Errors: Check write permissions for the state file location
Enable verbose logging by setting the log level:
LOG_LEVEL=DEBUG python github_monitor.py --config ./config.yaml --download

- Store GitHub tokens securely (environment variables, secret management)
- Use minimal token permissions (public repository read access)
- Regularly rotate access tokens
- Avoid logging sensitive information
- Default rate limiting: 1 second between API calls
- Typical execution time: < 2 minutes for 10 repositories
- Memory usage: < 50MB for typical workloads
- Network usage: ~1KB per repository check
A Makefile is provided for common development tasks:
# Setup and Installation
make setup # Complete local development setup
make install # Install Python dependencies
make clean # Clean generated files
make clean-all # Clean everything including venv
# Development
make run # Run monitoring with default config
make run-local # Run with local config
make test # Run all tests
make check # Run lint, validate, and test
make watch # Run continuously (every 5 minutes)
# CI/CD Pipeline
make validate # Validate pipeline configuration
make pipeline-set-test # Deploy to test (public repos)
make pipeline-set-test-with-key # Deploy to test (private repos with SSH key)
make pipeline-set-prod # Deploy to production
make pipeline-set-prod-with-key # Deploy to production (private repos with SSH key)
# Help
make help                        # Show all available commands

This project is provided as-is for educational and operational use.