The Difference Between "In the Loop" and "In the Lead"
The industry talks a lot about keeping humans "in the loop" when it comes to AI-assisted development. The phrase sounds reassuring, but it describes the wrong relationship. A human in the loop is a reviewer — someone who checks, approves, and occasionally corrects AI output. The AI proposes, the human disposes.
That is not how we work.
At Looming Tech, the engineer is in the lead. They make the architectural calls. They decide what gets built, how it gets built, and why. AI does not propose solutions for the engineer to evaluate — the engineer directs the AI to implement their decisions faster. The distinction matters: it is the difference between an engineer reviewing code they did not design and an engineer using a powerful tool to bring their own design to life at higher velocity.
Pension Path — a complex, regulated FinTech platform — is where this philosophy proved itself at scale.
The Challenge
International pension transfers are notoriously complex. Moving retirement funds between jurisdictions — say, from a UK SIPP to an overseas ROPS scheme — involves multiple regulated counterparties, strict KYC and AML compliance requirements, real-time foreign exchange pricing, and workflows that traditionally stretch beyond three months.
Our client needed a platform that would compress this timeline to under 14 days while cutting FX fees from the industry-standard 1% down to 0.35%. The system had to orchestrate communication across five distinct parties: independent financial advisors, SIPP providers, FX counterparties, overseas pension schemes, and retirees themselves.
The constraint? A team of three — two senior full-stack developers and one project manager — with a target budget of roughly 950 hours.
Leading With Architecture, Accelerating With AI
The senior engineers on Pension Path made every foundational decision before Claude Code wrote a single line of code. Serverless on AWS. Hono over Express. Step Functions for multi-party workflow orchestration rather than a hand-rolled state machine. Kysely for type-safe SQL over an ORM. A monorepo with shared types between frontend and backend. A VPC topology with a single NAT gateway for cost optimisation.
These are decisions that require experience, domain knowledge, and an understanding of operational trade-offs. No AI tool makes them well. What Claude Code did was eliminate the gap between deciding and implementing — the engineer led, and the tool followed at speed.
How "Human in the Lead" Worked in Practice
The Engineer Designed the Infrastructure — AI Built It Out
The platform runs on AWS — Lambda functions behind API Gateway, PostgreSQL on RDS, DynamoDB for audit logs, S3 for document storage, Cognito for MFA authentication, SQS for async processing, and Step Functions for workflow orchestration. All provisioned through AWS CDK.
The engineer designed the architecture: which services, how they connect, what the security boundaries look like. Then they directed Claude Code to scaffold CDK constructs, generate least-privilege IAM policies, and implement the networking configuration they had already specified. A misconfigured subnet or over-permissive IAM role would not get past review — but the hours of boilerplate typing between "I know what this should look like" and "it is deployed" collapsed dramatically. The infrastructure layer came in at 135 hours against a 160-hour estimate.
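To make "least-privilege IAM" concrete, here is a minimal sketch of the kind of policy document the engineer would specify and the tool would generate. The ARNs, actions, and helper name are illustrative placeholders, not Pension Path's actual configuration.

```typescript
// A least-privilege policy: one Lambda gets exactly the actions it needs,
// scoped to exactly one table and one bucket prefix — nothing broader.
type PolicyStatement = {
  Effect: "Allow" | "Deny";
  Action: string[];
  Resource: string[];
};

type PolicyDocument = { Version: string; Statement: PolicyStatement[] };

// Hypothetical helper: build the policy for a document-handling function.
function leastPrivilegePolicy(tableArn: string, bucketArn: string): PolicyDocument {
  return {
    Version: "2012-10-17",
    Statement: [
      // Write audit records to one specific DynamoDB table — no scans, no deletes.
      { Effect: "Allow", Action: ["dynamodb:PutItem"], Resource: [tableArn] },
      // Read documents from one prefix of one bucket — not the whole bucket.
      { Effect: "Allow", Action: ["s3:GetObject"], Resource: [`${bucketArn}/documents/*`] },
    ],
  };
}
```

An over-permissive policy (`"Action": ["*"]`, `"Resource": ["*"]`) is exactly the kind of shortcut that would not survive the engineer's review.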
The Engineer Owned the Domain Logic — AI Handled the Patterns
The backend domain logic is dense: FX quote lifecycle management, multi-step KYC verification, ROPS validation against the GOV.UK registry, beneficiary payment processing, and a role-based access control system spanning five user types.
The engineers defined how each of these systems should work — the state machines, the validation rules, the error handling strategies. They documented these decisions in a comprehensive CLAUDE.md file that served as the project's engineering playbook: API patterns, database schema conventions, RBAC permissions, audit logging requirements, and domain-specific terminology.
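An illustrative excerpt shows the flavour of such a playbook. This is not the project's actual CLAUDE.md — the role names below are simply the five parties from the platform's own domain; the conventions are representative, not quoted.

```markdown
## API patterns
- Every endpoint validates its payload against the shared Zod schema before
  touching domain logic.
- Monetary values are decimal strings, never floats.

## RBAC
- Five roles: advisor, sipp_provider, fx_counterparty, overseas_scheme, retiree.
- Deny by default; permissions are declared per endpoint.

## Audit
- Every state change emits an audit event with actor, entity, and full context.
```

The value of a file like this is that every generated endpoint starts from the engineer's conventions rather than the tool's defaults.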
With that context loaded, Claude Code could implement endpoint after endpoint that followed the patterns the engineer had established. Not because the AI understood pension transfers — but because the engineer had encoded their understanding into a format the tool could execute against. The feedback loop was tight: the engineers described intent in the context of decisions they had already made, and the tool produced code that was consistent with those decisions on the first pass.
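The "FX quote lifecycle" mentioned above is a good example of a decision the engineer encodes and the tool then implements consistently. The sketch below shows the pattern — an explicit state machine that rejects any transition the playbook does not allow. The state and event names are illustrative, not the platform's actual ones.

```typescript
// Hypothetical FX quote lifecycle as an explicit state machine. The engineer
// defines the legal transitions; every generated endpoint must go through them.
type QuoteState = "REQUESTED" | "PRICED" | "ACCEPTED" | "EXPIRED" | "SETTLED";

const transitions: Record<QuoteState, Partial<Record<string, QuoteState>>> = {
  REQUESTED: { price: "PRICED" },
  PRICED: { accept: "ACCEPTED", expire: "EXPIRED" },
  ACCEPTED: { settle: "SETTLED" },
  EXPIRED: {},   // terminal
  SETTLED: {},   // terminal
};

// Illegal moves throw instead of silently mutating state, so a deviation
// from the playbook surfaces immediately in tests and review.
function nextState(current: QuoteState, event: string): QuoteState {
  const next = transitions[current][event];
  if (!next) throw new Error(`Illegal transition: ${current} -> ${event}`);
  return next;
}
```

Encoding the rule once, in one table, is what lets the tool produce "endpoint after endpoint" that stays consistent with it.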
Backend work came in at 270 hours against a 290-hour estimate — with substantially more functionality delivered than originally scoped.
The Engineer Set the Standards — AI Met Them
The React 19 frontend uses Chakra UI, TanStack Query for server state, and TanStack Table for complex data grids. Zod schemas are shared between frontend and backend via the monorepo's common package. Decimal.js handles the precise arithmetic that FX calculations demand.
The engineer chose these tools and defined the component patterns. Claude Code scaffolded implementations within those constraints. Every piece of output went through the same quality gates as hand-written code: peer review, automated linting, unit and integration tests in CI, and static analysis. The engineer did not review AI output as a second opinion — they reviewed it as the technical lead who set the standard the code needed to meet.
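The Decimal.js choice above is worth a concrete illustration. Binary floats cannot represent most decimal fractions exactly, which is unacceptable for money. The project uses Decimal.js; the sketch below swaps it for dependency-free integer minor units to show the same idea — the rate, amounts, and function name are illustrative.

```typescript
// The failure mode decimal arithmetic exists to avoid:
//   0.1 + 0.2 === 0.30000000000000004 in IEEE 754 floats.
//
// One dependency-free alternative: keep money in integer minor units (pence)
// and express the FX rate in basis points, rounding exactly once at the end.
function convertPence(amountPence: number, rateBps: number): number {
  // rateBps: exchange rate scaled by 10,000 — e.g. 11_850 means 1.1850.
  // Round once, at the boundary, instead of accumulating float error per step.
  return Math.round((amountPence * rateBps) / 10_000);
}
```

Decimal.js generalises this to arbitrary-precision decimals with configurable rounding modes, which is why it was the engineer's choice for the real platform.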
The Engineer Designed the Compliance Architecture — AI Helped Trace the Gaps
Financial platforms under FCA and MiFID II regulations require comprehensive audit trails. Every state change — every document upload, every FX quote request, every approval — must be logged with full context and retained for seven years.
The engineer designed an event-driven audit system using SQS and DynamoDB with TTL-based retention. Where Claude Code added genuine value was in tracing data flows across the full codebase — from API endpoint through business logic to audit log emission — helping the engineer verify that their design was fully implemented with no gaps. The tool did not decide what to audit. The engineer did. The tool helped confirm that every path was covered.
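A minimal sketch of what such an audit event might look like on its way through SQS to DynamoDB. The field names and helper are illustrative assumptions; the seven-year retention and the TTL mechanism (DynamoDB expires items whose TTL attribute, an epoch-seconds timestamp, has passed) come from the design described above.

```typescript
// Retention window mandated by the compliance requirement: seven years.
const SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 60 * 60;

// Hypothetical audit event shape: who did what, to which entity, with context.
type AuditEvent = {
  actorId: string;
  entity: string;                    // e.g. "fx_quote", "document"
  action: string;                    // e.g. "upload", "approve"
  occurredAt: number;                // epoch seconds
  ttl: number;                       // DynamoDB TTL attribute: expiry epoch seconds
  context: Record<string, unknown>;  // full before/after context for the change
};

function auditEvent(
  actorId: string,
  entity: string,
  action: string,
  context: Record<string, unknown>,
  nowSeconds: number,
): AuditEvent {
  return {
    actorId,
    entity,
    action,
    occurredAt: nowSeconds,
    // Expiry is computed once at write time; DynamoDB handles deletion.
    ttl: nowSeconds + SEVEN_YEARS_SECONDS,
    context,
  };
}
```

What the tool verified was the other half: that every state-changing code path actually emits an event like this, with no path missed.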
The Results
The numbers tell the story: infrastructure came in at 135 hours against a 160-hour estimate, the backend at 270 against 290 with more functionality delivered than originally scoped, and the whole platform was built by a team of three against a roughly 950-hour budget — serving targets of sub-14-day transfers (versus an industry norm of three-plus months) and 0.35% FX fees (versus the standard 1%).
What "Human in the Lead" Taught Us
The CLAUDE.md file is the engineer's playbook, not the AI's instruction manual. The single highest-leverage investment was documenting architecture decisions, code conventions, and domain terminology. But the purpose was not to "teach the AI" — it was to encode the engineer's technical leadership into a format that made the tool maximally useful. The engineer who writes a strong CLAUDE.md is not configuring an assistant. They are extending their own reach.
Productivity compounds because the engineer's decisions compound. Early in the project, Claude Code saved perhaps 20% of development time. By the midpoint — with established patterns, a rich playbook, and a mature codebase — it was closer to 40%. This is not the AI getting smarter. It is the engineer's accumulated decisions creating a larger and more coherent surface for the tool to work within. The more the engineer leads, the more the tool can follow.
Security stays in the engineer's hands. On a regulated financial platform, we were deliberate about boundaries. No customer data, credentials, or production information was ever exposed in prompts. Sensitive domain logic was anonymised where necessary. AI output was treated as a draft — a suggestion to be evaluated against the engineer's judgement, never as authoritative source code.
Speed without loss of ownership. The most important outcome of Pension Path was not the hours saved. It was that the engineer who built it can explain every architectural decision, defend every trade-off, and own every line of code — because they made every meaningful choice. The AI did not design this system. An engineer designed this system and used AI to build it faster.
The Real Question
The industry debate around AI in software development often frames the choice as autonomy versus oversight: how much can we let the AI do, and how much do we need to check?
We think this frames the question backwards. The right question is not "how much oversight does the AI need?" It is "how effectively can an engineer lead when they have AI-level execution speed at their disposal?"
Pension Path answered that question: a single senior engineer, leading with clear architectural vision and deep domain understanding, can deliver what used to require a team of four or five — not by delegating to AI, but by using it as the most powerful implementation tool they have ever had.
The human is not in the loop. The human is in the lead. That is the difference.