As a quantitative sociologist and empiricist by training, I see engineering as the design of systems with measurable risk, reliability, and compliance. I focus on protecting sensitive information in data-first systems where failure is critical; my work has included global payment systems, always with a focus on building trust through resilient engineering and empirical validation.
One recent professional milestone was leading the engineering efforts to pass PCI DSS certification on a payments platform. Completing that project reinforced one of my professional mantras: compliance is a function of resilient engineering, and resilience is the outcome of disciplined, data-driven processes. This work involved:
- Designing a layered strategy for validation
- Stress-testing APIs and backend integrations
- Generating quantifiable evidence for auditors
- Investigating edge cases via exploratory analysis
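The stress-testing and evidence-generation steps above can be sketched in miniature. This is an illustrative outline only: the handler, request count, and recorded fields are assumptions standing in for the real payments workload, not the production setup.

```python
# Minimal sketch of "stress-test and generate quantifiable evidence".
# payment_handler, n_requests, and the evidence fields are illustrative
# assumptions, not production values.
import json
import random
import statistics
import time

def payment_handler(payload: dict) -> dict:
    """Stand-in for a real payments API call."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated backend latency
    return {"status": "ok", "amount": payload["amount"]}

def stress_test(n_requests: int = 200) -> dict:
    latencies_ms = []
    errors = 0
    for i in range(n_requests):
        start = time.perf_counter()
        try:
            payment_handler({"amount": 100 + i})
        except Exception:
            errors += 1
        latencies_ms.append((time.perf_counter() - start) * 1000)
    # Quantifiable evidence an auditor can archive alongside the run.
    return {
        "requests": n_requests,
        "error_rate": errors / n_requests,
        "p95_ms": statistics.quantiles(latencies_ms, n=20)[-1],
    }

evidence = stress_test()
print(json.dumps(evidence, indent=2))
```

The point is the shape of the output: a machine-readable record per run, so the audit trail is generated by the test itself rather than written up afterwards.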
Additional Kinds of Sensitive Data and Regulations
PCI DSS will always have a special place in my technical story, but the lessons are much broader. Rigorous engineering is not the province of a single vertical; it applies universally. Hard problems require deep thinking and careful security and architectural analysis; solutions must be quantifiable and evaluable, and implementations and results must be measurable and demonstrable.
Finance/E-Commerce (PCI DSS, SOX, GLBA, PSD2): For financial transactions, the entire request/response/payment workflow must be hardened. Every API must be authenticated and auditable; there is no way around it.
Education (FERPA, COPPA, PPRA, SOPIPA): Infrastructure must be engineered to sync sensitive student records across institutions while keeping both privacy and individual student learning experiences intact. Loss of synchronisation here is more than an engineering failure: it is a human-experience failure.
Privacy (GDPR, CCPA/CPRA, LGPD, PIPEDA, CalICO): Architectures can (and must!) bake security and data-protection compliance into the solution itself, with monitoring and enforcement that keep pace as the system scales.
Healthcare (HIPAA, HITECH): Yet another flavour of sensitive data, in workflows that tolerate no downtime and demand highly accurate information. Quantitative, validated measures matter here, not theoretical thought experiments backed only by assertions.
Cloud/SaaS (SOC 2, ISO 27001, FedRAMP): How do you make large bodies of requirements (e.g., 3 pages per SAML attribute) quantified and specific enough to implement, and then attach concrete measures so that the implementation and its ongoing results are trackable?
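One way to answer that question is to express each qualitative control as a concrete, dated check. The control IDs, descriptions, evidence dates, and maximum ages below are illustrative assumptions, not drawn from any real framework mapping.

```python
# Sketch: turning qualitative control requirements into trackable checks.
# Control IDs, descriptions, evidence dates, and max ages are assumptions.
from datetime import date

CONTROLS = {
    # control id: (description, max days since last evidence)
    "AC-01": ("Quarterly access review completed", 92),
    "CR-03": ("Encryption keys rotated annually", 365),
}

EVIDENCE = {  # last date each control produced evidence (assumed data)
    "AC-01": date(2024, 1, 10),
    "CR-03": date(2023, 2, 1),
}

def control_status(today: date) -> dict:
    """For each control, compute evidence age and a pass/fail verdict."""
    status = {}
    for cid, (desc, max_age) in CONTROLS.items():
        age = (today - EVIDENCE[cid]).days
        status[cid] = {
            "description": desc,
            "age_days": age,
            "compliant": age <= max_age,
        }
    return status

report = control_status(date(2024, 3, 1))
for cid, entry in report.items():
    print(cid, "PASS" if entry["compliant"] else "FAIL", entry["age_days"])
```

Once requirements are encoded this way, "ongoing results" stop being a judgement call: the report can run on a schedule and its history becomes the audit trail.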
Quantitative Advantage Applied (SED)
An underlying analytic paradigm I bring to bear is SED: Statistical Modelling + Empirical Research + Data Interpretation. I view engineering through the lens of quantitative sociology and empiricism, and I like to turn abstract problems into quantitative experiments. In regulation-heavy industries, this also allows me to operationalise “wishy-washy” requirements into auditable system behaviour suitable for both qualitative and quantitative testing.
1. Statistical Modelling → Reliability Metrics
I convert soft, conceptual compliance policies into concrete numbers. Tracking latency, error rate, and availability against rigid SLOs (e.g. p95 ≤ 250ms, error rate ≤ 0.30%) forces systems to show, not tell, that they are reliable and available. Incidents are opened immediately if breaches occur, and proof-of-state snapshots are archived for traceability. To me, it’s more than just math – it’s how we build the trust that someone’s payments, records, or learning data will always be secure and available. (Pseudo code example below)
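A minimal sketch of that SLO check follows. The SLO targets (p95 ≤ 250 ms, error rate ≤ 0.30%) come from the text; the sample metrics window and the print-based incident hook are illustrative assumptions.

```python
# SLO evaluation sketch. Targets are from the text; the sample window
# and the incident action are illustrative assumptions.
import statistics

SLO = {"p95_ms": 250.0, "error_rate": 0.003}

def evaluate_slo(latencies_ms: list, errors: int, total: int) -> list:
    """Return a list of breached SLO descriptions; empty means compliant."""
    breaches = []
    p95 = statistics.quantiles(latencies_ms, n=20)[-1]  # 95th percentile
    if p95 > SLO["p95_ms"]:
        breaches.append(f"p95 latency {p95:.1f} ms > {SLO['p95_ms']} ms")
    error_rate = errors / total
    if error_rate > SLO["error_rate"]:
        breaches.append(f"error rate {error_rate:.4f} > {SLO['error_rate']}")
    return breaches

# Example: one measurement window with two failed requests out of 1000.
window = [120.0, 180.0, 210.0, 260.0, 300.0] * 20
breaches = evaluate_slo(window, errors=2, total=1000)
for b in breaches:
    print("SLO breach -> open incident, archive snapshot:", b)
```

In a real pipeline the breach branch would page on-call and persist the proof-of-state snapshot; the structure of the check is the same.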
2. Empirical Research → Validation Design
To validate systems, I approach the task in much the same way that I did experiments in my research days. I start by forming a hypothesis, defining controls, and testing variables in a structured manner. I define parameter matrices (request rates, encryption types, database pool sizes, etc.), automate runs, and generate reproducible evidence rather than relying on speculation and guesswork.
Artefacts from these runs (metrics, logs, reports) provide the evidence needed to make go/no-go decisions that are both clear and defensible. For me, the process is about much more than testing and passing; it is about engineering systems that can last and earn trust. (Flow chart below)
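The flow described above, expanding a parameter matrix into reproducible runs that each emit an evidence record, can be sketched as follows. The parameter values and the stubbed run function are illustrative assumptions.

```python
# Experiment-style validation: expand a parameter matrix into runs.
# Parameter values and run_experiment are illustrative assumptions.
import itertools

MATRIX = {
    "request_rate": [50, 200, 800],      # requests per second
    "encryption": ["TLS1.2", "TLS1.3"],
    "db_pool_size": [10, 50],
}

def run_experiment(params: dict) -> dict:
    """Stand-in for an automated load/validation run."""
    # A real run would drive traffic and collect metrics/logs; here the
    # configuration itself is recorded as the reproducible artefact.
    return {"params": params, "verdict": "pass"}

def execute_matrix(matrix: dict) -> list:
    keys = list(matrix)
    runs = []
    # Cartesian product = every combination of controlled variables.
    for combo in itertools.product(*(matrix[k] for k in keys)):
        runs.append(run_experiment(dict(zip(keys, combo))))
    return runs

runs = execute_matrix(MATRIX)
print(f"{len(runs)} reproducible runs")  # 3 * 2 * 2 combinations
```

Because every run is derived from the matrix, any result can be reproduced by re-running its exact parameter combination, which is what makes the go/no-go evidence defensible.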
3. Data Interpretation → Risk Engineering
I view system behaviour as a data story. Error spikes, latency drifts, anomalous log entries – these aren’t just numbers to me. They’re the first hints of incipient instability that, left unaddressed, can start to undermine user trust. By setting thresholds and clear responses (rolling back a canary, freezing risky deploys, etc.), I transform these raw metrics into interpretable risk scores. This, for me, is proactive engineering: recognising the patterns before they manifest as outages, before users ever experience failure. (Risk Matrix below)
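One way to sketch that metrics-to-risk-score translation: combine the signals into a bounded score and map score bands to the responses named above. The weights, caps, and band boundaries are illustrative assumptions, not a calibrated model.

```python
# Raw signals -> interpretable risk score -> threshold-driven response.
# Weights, caps, and band boundaries are illustrative assumptions.
def risk_score(error_spike: float, latency_drift_ms: float,
               anomalous_logs: int) -> float:
    """Combine three signals into a 0-100 risk score."""
    score = 0.0
    score += min(error_spike * 1000, 40)    # e.g. a 2% spike adds 20 points
    score += min(latency_drift_ms / 10, 30) # 10 ms of drift adds 1 point
    score += min(anomalous_logs * 2, 30)    # each anomalous entry adds 2
    return score

def respond(score: float) -> str:
    """Map the score band to a predefined, rehearsed response."""
    if score >= 70:
        return "freeze risky deploys"
    if score >= 40:
        return "roll back canary"
    return "monitor"

# Example: 2% error spike, 150 ms latency drift, 5 anomalous log entries.
score = risk_score(0.02, 150.0, 5)
print(score, "->", respond(score))
```

The design choice that matters here is deciding the responses in advance: the score only changes *which* rehearsed action fires, so nobody is improvising during an incident.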
Bridging Research and Engineering: Toward Student-Oriented Systems
My training as a research engineer and postgraduate fieldwork on youth behaviour taught me how to find patterns in data, how to read systemic risk, and how to connect the dots between numbers and real, human impact. I didn’t walk into engineering through a server room; I came through a university research lab. I still bring that orientation to engineering practice today.
When I’m modelling error distribution on a payments platform, I’m thinking about trust at scale. When I’m parsing API anomalies, I’m thinking about continuity for the humans at the other end of those systems. And when I think about compliance frameworks in education, I don’t just see a laundry list of technical requirements; I see how they protect the most sensitive records society has: our students.
That’s why I believe the same rigour we apply to securing financial platforms can and should be applied to student-facing platforms. Systems that sync attendance, grades, or communication should have the same quantitative resilience as those that transfer billions of dollars. The risk may be different, but it’s just as human: a child’s privacy, a parent’s trust, and a teacher’s capacity to facilitate learning.