This report converts vulnerability aggregates into a risk register suitable for GRC-style conversations:

- deterministic scoring (repeatable)
- tiers (Critical/High/Medium/Low)
- treatment guidance and prioritization signals
Per scan: `output/<scan_name>/risk.html`

Optional model artifact (only when metadata export is enabled): `output/<scan_name>/risk_model.json` (machine-readable register + scoring). The model JSON is written when `notifications.include_run_metadata: true`.
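Enabling the model artifact might look like the following sketch in `config/config.yaml`; the key name comes from this document, but the surrounding structure is an assumption, not the authoritative schema:

```yaml
# Sketch only: key taken from this document, nesting assumed.
notifications:
  include_run_metadata: true   # also writes output/<scan_name>/risk_model.json
```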
CLI:

`miyabi-qualys-ai-triage-pack run --config config/config.yaml`
Enable/disable:

- `reports.risk.enabled: true|false`
UI options:

- `reports.risk.ui.enable_filters`
- `reports.risk.ui.default_sort`
- `reports.risk.ui.max_rows_render`
Optional LLM narrative (guardrailed / JSON-only):

- `reports.risk.llm.enabled`
- `reports.risk.llm.model` (empty falls back to `openai.model`)
- `reports.risk.llm.max_items_for_llm`
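Putting the toggles above together, a `reports.risk` block might look like this sketch; the key paths are taken from this document, while the example values and nesting are assumptions:

```yaml
# Sketch only: key paths from this document, values illustrative.
reports:
  risk:
    enabled: true
    ui:
      enable_filters: true
      default_sort: "score_desc"   # assumed example value
      max_rows_render: 500         # assumed example value
    llm:
      enabled: false
      model: ""                    # empty falls back to openai.model
      max_items_for_llm: 50        # assumed example value
```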
Primary:

- `QID`, `Title`, `Severity`, `Category`
- asset scope via host identifiers present in the export (FQDN/DNS/NetBIOS/IP)
Supporting (best-effort, for evidence snippets / context when present):

- `Threat`, `Impact`, `Solution`, `Exploitability`, `Associated Malware`, `Results`, `Instance`, `CVE ID`
The LLM is used only to produce consulting-style narrative blocks (e.g., executive bullets). Its output must not be interpreted as proof of:

- exploitation in the wild
- internet exposure
- presence or absence of security controls
Risk is an interpretation layer over scan data; it is not a breach indicator.
Scores depend on what the export contains (scope and normalization quality matter).