To explore the integration of LLMs into regular vulnerability assessment and management workflows.
- Using BBOT to automate vulnerability assessment and management tasks.
- Using LLMs to generate descriptive findings from scanner outputs.
- Using LLMs to generate remediation steps for vulnerabilities.
- Word document output of the findings and remediation steps for a human to review.
- Using LLMs to prioritize vulnerabilities based on risk and impact.
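At a high level, the features above form a small pipeline: turn each raw scanner finding into an LLM prompt, collect the generated description and remediation, and rank findings by severity. The sketch below illustrates that flow with stdlib Python only; the function names, field names, and severity ranking are hypothetical illustrations, not the actual VAAL code.

```python
# Illustrative sketch of the findings pipeline; names and fields are
# hypothetical, not taken from the VAAL source.

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

def build_prompt(finding: dict) -> str:
    """Turn one raw scanner finding into an LLM prompt asking for a
    descriptive write-up plus remediation steps."""
    return (
        f"Describe the vulnerability '{finding['name']}' found on "
        f"{finding['host']} (severity: {finding['severity']}) and "
        "suggest remediation steps."
    )

def prioritize(findings: list) -> list:
    """Order findings by severity so the riskiest appear first."""
    return sorted(findings, key=lambda f: SEVERITY_RANK.get(f["severity"], 99))

findings = [
    {"name": "Self-signed TLS cert", "host": "10.0.0.1", "severity": "low"},
    {"name": "SQL injection", "host": "mydomain.com", "severity": "critical"},
]
for finding in prioritize(findings):
    print(build_prompt(finding))
```

In the real tool, each prompt would be sent to the LLM and the responses assembled into the Word report for human review.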
See the Medium article for more details: *VAAL: Vulnerability Assessment Automation using LLMs*.
Works best on Kali Linux.
- Clone the repo

```shell
git clone https://github.com/esandeepchoudary/vulnerability-assessment-automation-using-llms.git
```

- Install the requirements using Python uv

```shell
uv sync
```

- Activate the virtual environment

```shell
source .venv/bin/activate
```

- Install all dependencies for BBOT

```shell
bbot --install-all-deps
```

- Specify targets in the `pwd/config.py` file

```python
targets = [
    "10.0.0.1", "mydomain.com", "192.168.0.0/24"
]
```
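As the example shows, `targets` can mix single IPs, domain names, and CIDR ranges in one list. A minimal sketch of how such a mixed list could be classified, using only the stdlib `ipaddress` module (illustrative only, not VAAL's internal logic):

```python
import ipaddress

def classify_target(target: str) -> str:
    """Classify a target string as a single IP, a CIDR network, or a domain.
    Hypothetical helper for illustration; not part of the VAAL codebase."""
    try:
        ipaddress.ip_address(target)
        return "ip"
    except ValueError:
        pass
    try:
        ipaddress.ip_network(target, strict=False)
        return "network"
    except ValueError:
        return "domain"

targets = ["10.0.0.1", "mydomain.com", "192.168.0.0/24"]
print([classify_target(t) for t in targets])  # ['ip', 'domain', 'network']
```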
- Specify a name for your scan in the `pwd/config.py` file

```python
scanname = "my_scan"
```
- Add the ChatGroq LLM API key in the `.env` file. Rename `.env.example` to `.env`:

```shell
GROQ_API_KEY="your_api_key"
```
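A common pattern for consuming such a key is to read it from the environment and fail fast with a clear message when it is missing. The helper below is a hedged sketch of that idea, not VAAL's actual loading code (which may use a `.env` loader instead):

```python
import os

def get_groq_api_key() -> str:
    """Read the Groq API key from the environment.
    Illustrative helper; VAAL may load the .env file differently."""
    key = os.environ.get("GROQ_API_KEY")
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set; copy .env.example to .env")
    return key

# Stand-in for the value normally supplied via the .env file:
os.environ["GROQ_API_KEY"] = "your_api_key"
print(get_groq_api_key())  # your_api_key
```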
- Run the Python script to begin the scan

```shell
python vaal.py
```

- The generated Word document will be placed in the `pwd/reports` directory. The file name follows the format `my_scan.docx`.
- The system prompt for the LLM is in the `pwd/llm/prompt.py` file. You can customize it to your liking.
- The scanner used is BBOT. You can customize the scan types and options in the `pwd/scanner/presets/preset-nuclei.yml` file.
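For orientation, a BBOT preset is a YAML file along the lines of the sketch below. The top-level keys (`description`, `modules`, `config`) follow BBOT's general preset format, but the nuclei option shown is a hypothetical placeholder; consult the repo's `preset-nuclei.yml` and the BBOT preset documentation for the keys your BBOT version actually supports.

```yaml
# Illustrative preset sketch -- not the repo's actual preset-nuclei.yml.
description: Nuclei-focused vulnerability scan
modules:
  - nuclei
config:
  modules:
    nuclei:
      severity: medium,high,critical  # hypothetical option value
```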
