Work · 2022– · 🇬🇧 London
Senior Software Engineer
What motivates me at Meta is building developer experiences and tooling that people didn’t think were possible. The work I’m proudest of:
- Autonomous vision for AI agents. Enables agentic debugging and testing of Horizon Worlds on fully on-demand Linux servers. I’d previously ported Horizon Worlds to those servers and built a workflow where developers stream the running app from the server back to their local machine. The vision layer exposes that stream as a tool: when an agent decides it needs to see what’s happening in the running app, it invokes the tool, captures the current frame, and acts on what it sees.
- C++ Hot Reload on Windows for Horizon Worlds and the Meta Horizon Studio, integrated via Live++. Replaced the rebuild-and-relaunch cycle on both with edit-and-continue.
- Buck2 Fix of the Week on Windows. Traced a Windows-only build slowdown back to Buck2 serialising IO through an internal queue. The fix was small: remove that queue and dispatch directly to Rust’s tokio IO scheduler, which took the workload from 4 threads to 512. Won Meta’s company-wide Fix of the Week.
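The shape of that fix can be sketched in Python (illustrative only; Buck2’s actual change was in Rust against tokio, and all names here are made up): funnelling every IO task through one queue and one worker serialises the work no matter how large the pool is, while handing tasks straight to a thread pool lets the scheduler use all of its threads.

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor


def serialized_io(tasks):
    """Anti-pattern: all IO funnelled through a single queue and a single
    worker thread, so effective concurrency collapses to 1."""
    q: queue.Queue = queue.Queue()
    results = []

    def worker():
        while True:
            task = q.get()
            if task is None:  # sentinel: no more work
                return
            results.append(task())

    t = threading.Thread(target=worker)
    t.start()
    for task in tasks:
        q.put(task)
    q.put(None)
    t.join()
    return results


def direct_dispatch(tasks, workers=512):
    """The fix: hand each IO task straight to the scheduler's pool,
    so independent tasks actually run in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda task: task(), tasks))
```

Both return the same results; the difference is purely how much parallelism the scheduler is allowed to exploit.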
What I’m working on now
I’m contributing heavily to Meta’s internal AI harnesses — a multi-agent orchestrator and Devmate, Meta’s developer-assistant platform.
The technical problems I’m digging into:
- Back pressure via generated hooks. A lot of harnesses already build context iteratively from past sessions: auto memory and the `/dream` command in Claude Code, the session-context layer in the Hermes agent harness. What I’m working on is a harness that also generates stop hooks and pre/post-tool-use hooks from past sessions. The hooks act as executable guardrails that prevent the agent from repeating known mistakes, which makes the whole workflow more deterministic.
- Ultra-long-running autonomous work. Enabling agents to run autonomously against a single goal across hours or days: a next-level Ralph loop, closer in spirit to the `/goal` command in OpenAI’s Codex harness.
- Shifting error detection left, so failures get caught inside the harness, not in CI and not by a human reviewer. CI capacity and reviewer attention are both scarce; an agent that wastes either is doing net-negative work.
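As a sketch of what one generated guardrail might look like (hypothetical names and rule; assumes a Claude Code-style hook protocol where the harness passes the pending tool call to the hook as JSON and treats a non-zero exit as “block this call”):

```python
# Hypothetical rule distilled from a past session, not a real Meta path.
BLOCKED_PREFIX = "generated/"


def pre_tool_use(event: dict) -> tuple[int, str]:
    """Generated pre-tool-use hook: return (exit_code, message).

    In a Claude Code-style protocol, the harness would feed the pending
    tool call in as JSON and, on a non-zero exit, block the call and
    surface the message back to the agent.
    """
    tool = event.get("tool_name", "")
    path = event.get("tool_input", {}).get("file_path", "")
    if tool in ("Edit", "Write") and path.startswith(BLOCKED_PREFIX):
        # Guardrail learned from a session where the agent hand-edited
        # generated output and the change was clobbered on the next build.
        return 2, f"{path} is generated output; edit its source template instead."
    return 0, ""
```

The interesting part is that rules like this are mined from past sessions rather than hand-written, so the guardrail set grows with the harness’s history instead of with an engineer’s patience.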
The shape I want at the end is a deterministic harness with a short iteration loop.
I dogfood the harnesses heavily — last I checked, I’m a 99th-percentile token user across Meta.
How I got here
I was headhunted internally twice at Meta:
- Out of boot camp into Workrooms VR, Product Infra — DEVX work on what later became the unified Meta Horizon Engine.
- From Workrooms VR into Horizon Infrastructure and Tooling, to continue the DEVX work on the (then-new) Meta Horizon Engine. I founded its Product Infra group and still own that surface — build systems, developer experience, and the AI tooling the rest of Horizon’s engineers use day-to-day.
Selected work
- Build Speed V-team lead — pulled in partners from Central DevX and DevInfra. Over 2025, builds got 33% faster while the C++ codebase doubled.
- Multimodal vision in Devmate — added image support to Devmate’s `read_file` tool and fixed Meta’s internal LLM proxy to pass image content through cleanly.
- DevAI champion for Horizon — delivered Horizon-wide training on Devmate and Claude Code agents, and worked with adopters across teams on what AI-native development looks like on a codebase this size. Published the org’s first “vibe coding” win in April 2025: brought all of my oncall rotation’s Python code under test coverage in a single day.