Run agents
Deploy named agents with durable identity, schedules, routing, memory, backend choice, and health checks.
The project
The codebase is a way to run AI agent teams on Kubernetes: route real work to them, connect MCP tools and memory, observe what they are doing, and keep humans in the loop when judgment matters. It is built for agent teams, but the same pieces can support a single specialized agent.
Connect autonomous agents to shared workspaces and MCP tools for Kubernetes, Helm, Prometheus, and future systems.
Keep work observable with metrics, traces, logs, readiness checks, explicit routing, and human escalation.
Try it
Install ww, point it at a Kubernetes cluster, install the operator, and create an echo-backed agent that needs no LLM API key.
curl -fsSL https://github.com/witwave-ai/witwave/releases/latest/download/install.sh | sh
ww operator install --if-missing --yes
ww agent create hello --namespace witwave --create-namespace
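In effect, the `create` command above declares an agent as a Kubernetes resource that the operator then reconciles. A minimal sketch of what such a manifest might look like is below; the API group, kind, and field names are illustrative assumptions, not the project's actual CRD schema.

```yaml
# Hypothetical Agent manifest -- field names are illustrative,
# not the actual schema installed by `ww operator install`.
apiVersion: witwave.ai/v1alpha1
kind: Agent
metadata:
  name: hello
  namespace: witwave
spec:
  backend: echo   # the echo backend needs no LLM API key
```

Declaring agents as resources is what gives them durable identity: deleting the pod does not delete the agent, and the operator can recreate its runtime from the spec.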
Core thesis
witwave is built around a simple belief: agentic AI can help inside existing teams, hybrid workflows, and human-driven development loops. The larger shift happens when agents become part of the development system itself. Work routing, context, quality gates, memory, observability, escalation, and human judgment are not extras around autonomy. They are the control system that lets autonomy compound safely.
Read the adoption model
The important work is not just prompting a model. It is giving agents durable identity, clear lanes, reusable context, evaluation loops, memory, and a visible path from request to outcome.
Humans define policy, own risk, review exceptions, set direction, and audit the work. The goal is leverage, not abdication.
Moving agents from an IDE to cloud infrastructure changes operations. Trust comes from governance, observability, and feedback, not from where the process happens to run.
What the code does
witwave is not just a prompt folder. The repository contains the runtime, operator, CLI, charts, backends, tool servers, and documentation needed to make autonomous agents behave more like managed services than loose chat sessions.
A harness container receives work, schedules heartbeats and jobs, handles triggers and continuations, and routes each concern to the configured backend.
Claude, Codex, Gemini, and Echo backends expose a common A2A surface while keeping their own sessions, memory, conversation logs, and metrics.
Helm charts, Kubernetes CRDs, an operator, and the ww CLI create agents, prompts, workspaces, credentials, and deployable team topologies.
MCP services give agents controlled access to systems such as Kubernetes, Helm, and Prometheus without baking every capability into the agent image.
Workspaces, prompt bundles, memory folders, identity files, and backend routing give agents continuity across sessions and across a team.
Health probes, metrics ports, auth boundaries, redaction, traces, dashboards, and release tooling make the agent system inspectable instead of magical.
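The pieces above compose into a single agent definition: a backend, a prompt bundle, a workspace, MCP tool grants, and health endpoints. The sketch below shows one plausible shape for that composition; every field name here is an assumption for illustration, not the operator's real schema.

```yaml
# Hypothetical composition of the pieces described above --
# illustrative field names only, not the shipped CRD schema.
apiVersion: witwave.ai/v1alpha1
kind: Agent
metadata:
  name: ops-agent
  namespace: witwave
spec:
  backend: claude          # or codex, gemini, echo
  prompts:
    bundle: ops-prompts    # reusable prompt/context bundle
  workspace:
    claim: team-workspace  # shared workspace for cross-session continuity
  tools:
    - mcp: kubernetes      # MCP servers grant scoped system access
    - mcp: prometheus      # without baking capabilities into the image
  health:
    readinessPort: 8080    # probed so routing only targets ready agents
    metricsPort: 9090      # scraped for dashboards and traces
```

The point of a spec like this is that every capability an agent has is declared and auditable, rather than accumulated invisibly inside a chat session.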
The duality
The project uses its own agent-team model to maintain the codebase, write documentation, shape public updates, watch releases, and improve the operating process. That does not mean humans disappear. It means the project is intentionally practicing the same lifecycle it argues for: agents do real work, the work is visible, and human stewardship remains responsible for direction, risk, and judgment.
Meet the working team
For teams
Most organizations already know how to ask an AI for help. The harder question is how to make agents part of delivery without losing accountability, context, or quality.
For the project
The working team is not a separate demo. It is a live operating environment where the framework is used to coordinate maintenance, documentation, releases, outreach, and process improvement.
Community
Share ideas, report bugs, ask questions, or follow announcements and progress in GitHub Discussions.