🔒 100% Local — No Data Leaves Your Machine

AgentZ

SOC Level AI

AI-powered security incident triage that runs entirely on your machine.
Connect your local Ollama model to any SIEM platform and get instant, actionable intelligence — with zero data leaving your network.

✅ All AI inference runs locally via Ollama — your alerts never touch external servers
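
💡 For illustration: a local-only analysis call from the browser might look roughly like the sketch below. The /api/generate endpoint and default port 11434 are Ollama's; the prompt text and the triageAlert name are illustrative, not AgentZ's actual internals.

```ts
// Sketch: ask the local Ollama model to triage one alert, entirely on this machine.
// Assumes Ollama on its default port; the page's origin may need to be allowed
// via Ollama's OLLAMA_ORIGINS environment variable for the browser to call it.
async function triageAlert(ollamaUrl: string, model: string, alertJson: string): Promise<string> {
  const res = await fetch(`${ollamaUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: `Triage this SIEM alert. Give a summary, severity, MITRE ATT&CK techniques, and remediation steps:\n${alertJson}`,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // the model's full completion
}
```
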
1. Configure

Open Settings (⚙) and add your Ollama URL, model, and SIEM API credentials.

2. Start Agent

Click Start in Settings. AgentZ will poll your SIEM and auto-analyze every new alert (see the sketch after these steps).

3. Triage Faster

Receive AI summaries, MITRE ATT&CK mappings, severity assessments, and remediation steps instantly.
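
💡 The exact agent logic is internal to AgentZ; as a hedged sketch of the loop described in steps 2 and 3, with the SIEM fetch and result handling left as placeholders for whatever your platform and UI provide:

```ts
// Sketch of the poll-and-triage loop: fetch new alerts, run each unseen one
// through the local model, and hand the analysis to the UI (e.g. the Activity Log).
type Alert = { id: string; [key: string]: unknown };

async function runAgent(
  fetchNewAlerts: () => Promise<Alert[]>,            // hypothetical SIEM poll helper
  analyze: (alertJson: string) => Promise<string>,   // e.g. triageAlert() from the sketch above
  onResult: (id: string, analysis: string) => void,  // e.g. append to the Activity Log
  pollIntervalMs = 30_000,
): Promise<void> {
  const seen = new Set<string>(); // alert IDs already triaged this session
  while (true) {
    for (const alert of await fetchNewAlerts()) {
      if (seen.has(alert.id)) continue;
      seen.add(alert.id);
      onResult(alert.id, await analyze(JSON.stringify(alert)));
    }
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
}
```
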

📋 Activity Log
⚙️ Settings
If you host this on GitHub Pages (HTTPS), browsers will block the page's calls to http://localhost (Ollama) as mixed content. Serve Ollama over HTTPS (for example behind a TLS reverse proxy, as noted under Deploy below) or open this file locally to avoid the error.
🔒 Your API tokens are stored only in browser memory during this session. They are never sent to any server other than the one you configure below.
🤖 AI Configuration
Your local Ollama server. Must be reachable from this browser.
Any model installed in Ollama. Recommended: llama3.1 8B or larger.
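
💡 One way to sanity-check both fields (an illustrative sketch, not AgentZ's actual code) is to list the models the server reports via Ollama's GET /api/tags endpoint:

```ts
// Sketch: confirm the Ollama URL answers from this browser and that the
// configured model is installed, using Ollama's model-listing endpoint.
async function checkOllama(ollamaUrl: string, model: string): Promise<boolean> {
  const res = await fetch(`${ollamaUrl}/api/tags`);
  if (!res.ok) return false; // unreachable, blocked, or rejected
  const { models } = (await res.json()) as { models: { name: string }[] };
  return models.some((m) => m.name.startsWith(model)); // e.g. "llama3.1" matches "llama3.1:8b"
}
```
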
🔒 SIEM Platform
🔒 Stored only in browser session — never transmitted to external servers
⏱ Agent Settings
Auto-analyze new incidents
Desktop notifications
Sound alert
Save settings to browser
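
💡 The desktop notification toggle above presumably maps to the browser's standard Notification API; a minimal sketch of raising one notification per analyzed incident, using a hypothetical notifyIncident helper:

```ts
// Sketch: show a desktop notification when an analysis finishes.
// Requires the user to grant notification permission once.
async function notifyIncident(title: string, summary: string): Promise<void> {
  if (!("Notification" in window)) return;      // browser has no Notification API
  if (Notification.permission === "default") {
    await Notification.requestPermission();     // prompt the user the first time
  }
  if (Notification.permission === "granted") {
    new Notification(title, { body: summary }); // e.g. "ALERT-1042: likely credential stuffing"
  }
}
```
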
🌍 Deploy to GitHub Pages
💡 This is a single static HTML file. Fork the repo, enable GitHub Pages, and access AgentZ from anywhere. All analysis runs against your local Ollama — no backend needed. For HTTPS+Ollama compatibility, run Ollama behind a reverse proxy with TLS.
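
💡 Ollama does not terminate TLS itself, so "a reverse proxy with TLS" can be as small as the sketch below: a Node.js pass-through with your own certificate, assuming Ollama on its default port 11434. This is an illustration, not an official AgentZ or Ollama recipe.

```ts
// Sketch: minimal HTTPS reverse proxy in front of a local, plain-HTTP Ollama server.
// The page then calls https://<host>:11443; OLLAMA_ORIGINS may still need to
// include the page's origin so Ollama answers the browser's CORS requests.
import fs from "node:fs";
import http from "node:http";
import https from "node:https";

const server = https.createServer(
  { key: fs.readFileSync("key.pem"), cert: fs.readFileSync("cert.pem") },
  (req, res) => {
    // Forward each request unchanged to Ollama, rewriting only the Host header.
    const upstream = http.request(
      {
        host: "127.0.0.1",
        port: 11434,
        path: req.url,
        method: req.method,
        headers: { ...req.headers, host: "127.0.0.1:11434" },
      },
      (upstreamRes) => {
        res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(res);
      },
    );
    req.pipe(upstream);
  },
);

server.listen(11443); // the browser then talks to https://<host>:11443
```
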