Every engagement follows the same basic structure. We scope it, collect evidence, analyze what we find, and deliver something you can act on.

1. Scoping
We start by agreeing on what we're looking at — which systems, which environments, what timeframe. No work begins until scope is defined and documented.
2. Evidence collection
We work from observable evidence: configs, access records, logs, deployment pipelines, architecture docs, incident history. We don't rely on questionnaires or take anyone's word for it.
3. Analysis
Findings get mapped against consistent criteria. For classification engagements, this means our four-tier maturity framework. For advisory work, it means clear prioritization and actionable recommendations.
4. Reporting
You get a report that's useful — findings, context, limitations, and what we'd do next. Written for the people who actually need to make decisions, not filed away in a compliance folder.
Consistency
Our criteria don't shift between clients. If we rated something a certain way, we can explain why — and that reasoning holds across engagements.
Independence
We don't sell remediation tools or managed services. Our assessments aren't influenced by what we could upsell you afterward.
Transparency
You see our evaluation criteria, how we weighted evidence, and where we had incomplete information. No black boxes.