CCW is a research interface for coordinating multiple AI systems under human governance. It visualizes constellation state, surfaces disagreement, enables human intervention, and maintains complete audit trails.
The interface is designed around a core assumption: the operator needs to see not just what the system recommends, but why it recommends it, who disagreed, and what would have happened without intervention.
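To make that assumption concrete, here is a minimal sketch of the kind of record such an audit trail might store per decision point: the recommendation, its rationale, any dissenting systems, and the projected outcome absent intervention. All names here (`AuditRecord`, `Dissent`, `summarize`) are illustrative assumptions, not part of CCW's actual API.

```typescript
// Illustrative sketch only: these types and names are assumptions,
// not CCW's actual data model.

interface Dissent {
  agentId: string;   // which AI system disagreed
  position: string;  // what it recommended instead
  rationale: string; // why it disagreed
}

interface AuditRecord {
  timestamp: string;          // ISO-8601 time of the decision point
  recommendation: string;     // what the constellation recommended
  rationale: string;          // why it recommended it
  dissents: Dissent[];        // who disagreed, and on what grounds
  counterfactual: string;     // projected outcome without intervention
  humanIntervention?: string; // operator action, if any
}

// Render the one-line summary an operator-facing view might show.
function summarize(record: AuditRecord): string {
  const dissentNote = record.dissents.length > 0
    ? `${record.dissents.length} dissent(s)`
    : "consensus";
  return `[${record.timestamp}] ${record.recommendation} ` +
         `(${dissentNote}; without intervention: ${record.counterfactual})`;
}

// Hypothetical example record.
const example: AuditRecord = {
  timestamp: "2025-01-01T12:00:00Z",
  recommendation: "Proceed with deployment",
  rationale: "All checks passed within tolerance",
  dissents: [
    { agentId: "agent-b", position: "Delay", rationale: "Unverified input drift" },
  ],
  counterfactual: "Deployment proceeds without operator review",
};

console.log(summarize(example));
```

Keeping dissent and counterfactual fields alongside the recommendation itself is what lets the interface surface disagreement and support post-hoc review rather than only logging the final action.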