Clarity under constraint
Threshold Signalworks builds solutions that provide clarity and control for AI systems, from models to agents to the communication infrastructure beneath them.
At every stage, our research and engineering are designed to tackle the real-world constraints that organisations and individuals encounter when deploying AI systems and their supporting infrastructure. We build governance tooling, evaluation instruments, and protocol-layer technology, each grounded in research that addresses specific failure modes under operational conditions.
From coding agents to model evaluation pipelines to orbital compute nodes.
Threshold Systems is the research arm of Threshold Signalworks. The programme provides the research and development behind every engineering output the company produces.
Current work spans AI evaluation protocols, behavioural measurement and specification discovery for agentic AI systems, cognitive architecture under constraint, and human decision-making in high-uncertainty environments. Publications and artefact packs are released through threshold.systems.
Policy enforcement for AI coding agents. Classifies every agent action by risk, requires structured human approval before anything destructive, and logs everything to a tamper-evident audit trail. Constraints live on disk, not in context, so they survive when conversations are compacted. Cloud tier adds independent third-party attestation, policy versioning, and team governance.
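The two core mechanisms here, risk classification with an approval gate and a tamper-evident log, can be sketched in a few lines. This is a minimal illustration, not the product's implementation: the risk tiers, action names, and `gate` function are all hypothetical, and the audit trail is shown as a simple hash chain where each entry commits to its predecessor.

```python
import hashlib
import json
import time

# Hypothetical risk tiers; the product's actual taxonomy is not public.
RISK_RULES = {
    "read_file": "low",
    "run_tests": "low",
    "write_file": "medium",
    "delete_file": "high",
    "force_push": "high",
}

class AuditLog:
    """Append-only log where each entry hashes its predecessor,
    so altering any past entry breaks the chain on verification."""
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def append(self, record):
        entry = {"record": record, "prev": self._prev_hash, "ts": time.time()}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

    def verify(self):
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            body = {k: entry[k] for k in ("record", "prev", "ts")}
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

def gate(action, approved, log):
    """Classify an agent action by risk; high-risk actions need approval.
    Every decision, allowed or blocked, is appended to the audit log."""
    risk = RISK_RULES.get(action, "high")  # unknown actions default to high
    allowed = risk != "high" or approved
    log.append({"action": action, "risk": risk, "allowed": allowed})
    return allowed

log = AuditLog()
assert gate("read_file", approved=False, log=log)        # low risk: allowed
assert not gate("delete_file", approved=False, log=log)  # blocked, no approval
assert gate("delete_file", approved=True, log=log)       # allowed with approval
assert log.verify()                                      # chain is intact
```

Because constraints like `RISK_RULES` live in ordinary files rather than in a model's context window, they survive conversation compaction, which is the design point the paragraph above makes.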
Integrations • Cloud plans • Licensing
Patent-pending protocol technology for delivering structured payloads across constrained communication environments, including space-based AI infrastructure. Designed for domains where bandwidth is limited, links are intermittent, and transfer cost per bit is high. European patent application filed 2026.
Reproducible behavioural instrumentation for agentic AI systems. Driftwatch measures drift, premature convergence, instruction-integrity failure, and repeat-evaluation instability across model versions, prompt perturbations, and scaffold changes. The core research question: can we identify which behavioural properties are stable enough to specify and which require explicit constraints before formal assurance can be meaningful? Open evaluation harness with structured probe suites, deterministic run-envelope artefacts, and full provenance chains.
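One of the measurements named above, repeat-evaluation instability, has a simple shape: run the same probe several times and quantify how often the answer changes. The sketch below is an assumed illustration of that idea, not Driftwatch's actual metric; the function name and the stand-in answer lists are hypothetical.

```python
from collections import Counter

def repeat_instability(answers):
    """Instability of repeated evaluations of a single probe:
    1 - (frequency of the modal answer). 0.0 means every run
    agreed; values near 1.0 mean the answer changed almost
    every run."""
    if not answers:
        raise ValueError("need at least one run")
    (_, modal_count), = Counter(answers).most_common(1)
    return 1.0 - modal_count / len(answers)

# Stand-in answer sequences from repeated runs of one probe:
runs_stable = ["B", "B", "B", "B", "B"]
runs_flaky = ["B", "A", "B", "C", "A"]

print(repeat_instability(runs_stable))  # 0.0
print(repeat_instability(runs_flaky))   # 0.6
```

Computed across model versions, prompt perturbations, and scaffold changes, a metric like this separates behaviours stable enough to specify from those that need explicit constraints first.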
Inference-time stabilisation research. Detecting when a model is committing to an answer before sufficient information is available, and correcting the behaviour without retraining. Early research validated across three model families. Designed to feed calibrated confidence signals into downstream governance and control systems.
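"Committing before sufficient information is available" can be made concrete with a toy check: flag the case where the model's answer distribution has already collapsed (low entropy) while little of the expected evidence has been seen. Everything here is an assumed illustration of the research question, not Helmsman's method; the threshold and evidence counts are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a categorical distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def premature_commitment(probs, evidence_seen, evidence_needed,
                         entropy_floor=0.5):
    """Flag when the answer distribution has collapsed (entropy below
    a floor) before enough evidence has arrived. Both the floor and
    the evidence budget are illustrative parameters."""
    committed = entropy(probs) < entropy_floor
    underinformed = evidence_seen < evidence_needed
    return committed and underinformed

# Distribution collapsed onto one option after 2 of 10 evidence items:
print(premature_commitment([0.95, 0.03, 0.02], 2, 10))   # True
# Same collapse after all evidence is in: commitment is warranted.
print(premature_commitment([0.95, 0.03, 0.02], 10, 10))  # False
```

A detector in this shape emits exactly the kind of calibrated confidence signal the paragraph above describes feeding into downstream governance systems.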
Research outputs from Driftwatch and Helmsman (probe suites, schemas, measurement methods, technical reports) are designed to be open, inspectable, and reusable independently of any commercial product. Commercial hosted services and enterprise integrations are developed separately.
Public artefact packs (evaluation runs, reports, provenance chains) will appear here as they are released.