security engineering.

years in the trenches taught me the job is part detective work, part diplomacy, and part stubbornness. the systems we ship are only as strong as the questions we ask before the pager rings. every engagement still starts with mapping critical paths, chasing noisy logs, and proving assumptions wrong.

the work here leans on first principles. i push teams to interrogate boundaries, align incentives, and ground every decision in observed behavior. that lens keeps the craft focused on resilience instead of résumé bullet points. it has shaped SaaS platforms, global marketplaces, and now the evalops research lab work that brings language models the same guardrails we demanded from networks.

principles that anchor the craft

prove the threat model

start with the adversaries and the critical paths they covet. map trust boundaries, latent coupling, and blast radius before a single control gets deployed.

instrument before you fortify

visibility precedes hardening. telemetry, tracing, and synthetic probes establish reality so defenses evolve with evidence instead of assumptions.
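
to make that concrete, here is a minimal sketch of a synthetic probe, assuming a hypothetical login endpoint and plain stdlib HTTP; the url, timeout, and output are illustrative rather than a drop-in monitor.

```python
# minimal synthetic probe: exercise a critical path on a schedule and record
# what a real user would see, so hardening decisions start from evidence.
# the endpoint, timeout, and output here are illustrative assumptions.
import time
import urllib.request
from urllib.error import URLError

PROBE_URL = "https://example.com/login"  # hypothetical critical path
TIMEOUT_SECONDS = 3.0


def probe_once() -> dict:
    """issue one request and capture status, latency, and any failure."""
    started = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(PROBE_URL, timeout=TIMEOUT_SECONDS) as resp:
            status = resp.status
    except URLError as exc:
        print(f"probe failed: {exc}")
    latency_ms = (time.monotonic() - started) * 1000
    return {"url": PROBE_URL, "status": status, "latency_ms": round(latency_ms, 1)}


if __name__ == "__main__":
    # in a real setup this result feeds telemetry; here we just print it
    print(probe_once())
```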

design graceful failure paths

security controls must fail safely. default to containment, rapid isolation, and reversible changes so responders preserve optionality under stress.

compress feedback loops

code, deploy, verify, learn. shorten that loop with automated reviews, live metrics, and incident hotwashes until improvements land faster than regressions.

elevate human judgment

incident commanders, SREs, and product owners are part of the system. invest in shared language, scenario drills, and honest debriefs so humans amplify controls.

where those muscles formed

snap

scaled detection coverage for social platforms where abusive automation constantly probed production systems.

carta

stood up security operations that kept equity workflows trustworthy while investors and law firms pushed for zero downtime.

doordash

protected logistics infrastructure that balanced restaurant demand, courier safety, and payments risk in real time.

threatkey

built cloud misconfiguration detection that watched SaaS estates drift and surfaced issues before they burned engineering time.

evalops research lab

now translating those lessons into behavioral evals for language models so production AI inherits the same guardrails as our networks.

foundations to interrogate first

system boundaries

chart data flows, trust contracts, and implicit dependencies. the goal is to expose where authority, identity, and compute actually cross so risk discussions stay grounded.

control surfaces

enumerate every place you can sense, decide, or intervene. from feature flags to webhook validators, list the levers before choosing where to invest in automation.
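
as one concrete lever from that list, a webhook validator might look like the sketch below, assuming an HMAC-SHA256 signature header; the secret handling and header format are illustrative, not any specific provider's contract.

```python
# minimal webhook validator: verify an HMAC-SHA256 signature before the
# payload reaches business logic. secret source and header format are
# illustrative assumptions, not a specific provider's scheme.
import hashlib
import hmac


def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    """return True only if the signature matches the request body."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # constant-time comparison avoids leaking how many characters matched
    return hmac.compare_digest(expected, signature_header)


if __name__ == "__main__":
    # usage sketch: reject anything that fails verification before parsing it
    secret = b"rotate-me"  # hypothetical shared secret
    body = b'{"event": "ping"}'
    good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    assert verify_webhook(secret, body, good_sig)
    assert not verify_webhook(secret, body, "0" * 64)
```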

socio-technical debt

catalog rituals, runbooks, and pager expectations. fragile communication patterns produce as much risk as unpatched code, so social debt joins the threat model.

operating loops that keep us honest

continuous verification

policy as code, runtime assertions, and chaos-style probes do the verifying. every fix ships with a sensor that proves it works and alarms when it drifts.
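
a rough sketch of what shipping a sensor with a fix can look like, assuming a hypothetical storage-bucket invariant; the config shape, bucket names, and alert path are placeholders, not a real estate.

```python
# runtime assertion as a drift sensor: re-check the invariant a fix
# established and alarm if it stops holding. the config shape, bucket
# names, and alert path are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class BucketConfig:
    name: str
    public_read: bool
    encryption_enabled: bool


def assert_bucket_invariants(buckets: list[BucketConfig]) -> list[str]:
    """return a list of violations; an empty list means the fix still holds."""
    violations = []
    for bucket in buckets:
        if bucket.public_read:
            violations.append(f"{bucket.name}: public read re-enabled")
        if not bucket.encryption_enabled:
            violations.append(f"{bucket.name}: encryption disabled")
    return violations


if __name__ == "__main__":
    observed = [
        BucketConfig("cap-table-exports", public_read=False, encryption_enabled=True),
        BucketConfig("incident-artifacts", public_read=True, encryption_enabled=True),
    ]
    for violation in assert_bucket_invariants(observed):
        # in production this would page or open a ticket; here we just print
        print(f"DRIFT: {violation}")
```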

design sparring

embed security review as an architectural dialogue. pull prototypes apart with engineers, quantify tradeoffs, and document invariants the same day decisions land.

pressure-tested response

simulate failure, rehearse comms, and rotate ownership. the team that has practiced together keeps calm when the pager hits and knows which metric matters first.

how i work with teams

evidence reviews

post-incident work that turns raw logs into durable learning. package decisions, guardrail updates, and new probes into artifacts future responders can trust.

secure-by-default pipelines

infrastructure-as-code, secrets scanning, and runtime policy enforcement embedded into CI so fixes land before releases are cut.
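
a minimal sketch of the secrets-scanning gate, assuming a couple of illustrative regex patterns run over files handed in by CI; a real pipeline would lean on a dedicated scanner, this only shows the shape of the check.

```python
# minimal secrets gate for CI: scan changed files for obvious credential
# patterns and fail the build on a hit. the patterns and file selection are
# illustrative; a real pipeline would lean on a dedicated scanner.
import re
import sys
from pathlib import Path

# deliberately small pattern set for illustration
SECRET_PATTERNS = {
    "aws access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def scan_files(paths: list[Path]) -> list[str]:
    """return human-readable findings for any file matching a pattern."""
    findings = []
    for path in paths:
        text = path.read_text(errors="ignore")
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    return findings


if __name__ == "__main__":
    # CI would pass the changed files; default to tracked python files here
    targets = [Path(p) for p in sys.argv[1:]] or list(Path(".").rglob("*.py"))
    hits = scan_files(targets)
    for hit in hits:
        print(hit)
    sys.exit(1 if hits else 0)
```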

human-centered response

train on-call teams through live-fire exercises, pair them with product leaders, and keep communication honest when stakes get high.