A 747 pilot knows the plane intimately. After twenty years, they understand the turbofan engines, the hydraulics, the failure modes, the instrument logic. But at some point, the learning stops. The plane changes slowly. You've mastered it. And in mastering it, you've closed off the thing that makes technical work rewarding: the constant discovery that you're wrong about something, and figuring out why.
Carl Kolon wrote about this on Hacker News recently, describing a conversation with a Belgian 747 pilot: "In this job, after a while, there's no improvement. You are no better today than you were yesterday." Then Kolon made the pivot that landed this piece on the HN front page: AI coding agents are turning some developers into pilots. You're overseeing a system that executes for you. You stop fully understanding the code it produces. You stop learning at the depth you used to.
This is a real phenomenon, and worth taking seriously. But the diagnosis, and the prescription, matter a lot, because there's a version of this story where the answer is "avoid AI tools to stay sharp," and that's wrong. The right answer is more specific.
The Pilot vs. Engineer Split
What Kolon identified is a genuine shift in the ratio of work that developers do:
- Engineering work: Designing systems, understanding tradeoffs, debugging non-obvious problems, learning new domains, making architectural decisions
- Pilot work: Overseeing execution, reviewing output, clicking "approve" on generated code you've spot-checked but not fully absorbed, monitoring dashboards
AI coding agents — Cursor, Claude Code, Copilot in agentic mode — have dramatically reduced the cost of execution. Writing the code for a feature that you've already designed in your head is faster than ever. But that reduction in execution cost comes with a subtle cost: some developers are letting the design step get compressed too.
If you use a coding agent to implement something without first building a clear model of what you're asking it to build, you're in pilot mode. You're overseeing something you don't fully own. The code works (mostly), the feature ships, but you've learned less than you would have by implementing it yourself.
Over time, this matters. The 747 pilot's complaint wasn't that the plane stopped working; it was that the job stopped teaching them anything new about planes. If your development workflow is mostly "prompt → review → approve → commit," you're in a similar position.
What Coding Agents Do Well (and What They Don't)
An honest accounting of where coding agents stand in early 2026:
They're genuinely good at:
- Boilerplate and standard patterns (CRUD, REST endpoints, serialization, test scaffolding)
- Code translation (Python to TypeScript, updating syntax to a newer API version)
- Documentation synthesis (explaining what a function does, generating inline docs)
- Finding obvious bugs in code you hand them
- Implementing features that fit standard patterns, given a clear spec
They fail, often silently, at:
- Architectural decisions with long-term tradeoffs you haven't made explicit
- Debugging emergent behavior in distributed or concurrent systems
- Security-sensitive code (they generate code that looks correct but misses attack surfaces)
- Novel problem framing — the step before spec writing
- Performance optimization that requires hardware-level reasoning
- Understanding implicit constraints from your codebase's history
The pattern is clear: coding agents excel at execution of well-defined work. They fail at the ambiguous upstream work that defines whether the execution is going in the right direction.
The Cognitive Debt Problem
There's a related issue that compounds the 747 dynamic: code that an AI generates accumulates cognitive debt. Not technical debt in the traditional sense (though that accrues too), but a gap between what's in the codebase and what's in your head.
When you write code, you build a mental model. You remember why you made the choices you made. You understand the edge cases you considered and rejected. When an AI agent generates code, you often get the output without the reasoning. The code may be correct. But your mental model of the system is weaker than it would have been if you'd written it.
This cognitive debt is invisible until you need to debug something complex, extend the system in a non-obvious direction, or explain to a new developer why the architecture is the way it is. At that point, you're working from a weaker foundation than you realize.
The 747 pilot who's been flying on autopilot for ten years still knows the plane well. But they know it from the pilot seat, not the engineering seat. If the autopilot fails in an unusual way, the depth of their understanding is the margin of safety.
Staying in the Engineer Seat
The solution isn't to avoid AI tools. That's the wrong lesson from the 747 analogy. The right lesson is about where you position yourself in the workflow.
1. Design before you delegate
Before handing a task to a coding agent, build your own model of what you're building. Not necessarily a detailed design doc — but a clear mental map of the data flow, the edge cases, the tradeoffs. Then when you review the agent's output, you're comparing it to your model, not just reading it cold. You'll catch more errors, and you'll retain more learning.
2. Implement the hard parts yourself
Use agents for boilerplate. Write the core logic yourself. The interesting parts of a system — the parts where the tradeoffs live, where the domain complexity lives — those are the parts worth writing. The parts that will make you better. Hand the rest to the agent.
3. Debug actively, not passively
When something breaks, don't immediately ask the coding agent to fix it. Spend at least some time building your own hypothesis about what's wrong before you look at the agent's suggestions. Debugging is one of the highest-leverage learning activities in development. Don't outsource the entire thing.
4. Build at the edges of current capability
The 747 pilot problem is partly a problem of working in well-understood territory. If you're building systems that push against the limits of current AI capability — novel architectures, high-performance systems, security-critical code, frontier AI integrations — you're by definition doing work the agent can't fully execute for you. You stay in the engineering seat.
The API Layer as Engineering Frontier
This is why building with AI APIs — not just AI coding tools — is where some of the most interesting developer work sits right now.
When you build a system that integrates image generation, video synthesis, or multimodal reasoning through an API, you're operating in territory where the patterns aren't yet standardized, the edge cases aren't documented, and the architectural decisions are genuinely novel. The coding agent can write your API client boilerplate. But the system design — how you handle latency, how you manage model selection, how you structure the prompt pipeline, how you build the feedback loop — that's still engineering work.
Consider the decisions involved in building a real-time image generation feature:
- What's your latency budget? (async generation vs. synchronous, callback vs. polling)
- How do you handle generation failures gracefully without degrading UX?
- Do you cache outputs, and if so, how do you handle cache invalidation across model versions?
- How do you manage API key security without blocking feature velocity?
- How do you build the quality feedback loop so users can flag bad outputs?
None of these questions have obvious answers. They require you to understand the capability deeply, reason about tradeoffs, and make architectural decisions. A coding agent can help you implement whatever you decide — but the decisions are yours.
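To make one of these decisions concrete: the cache-invalidation question above gets simpler if the model version is folded into the cache key, so a model upgrade stops matching old entries automatically instead of serving stale outputs. A minimal in-memory sketch (the `ImageCache` class and its key scheme are hypothetical, not part of any SDK):

```python
import hashlib


class ImageCache:
    """Toy in-memory cache keyed by (model, model version, generation params)."""

    def __init__(self):
        self._store = {}

    def _key(self, model_id: str, model_version: str, prompt: str,
             width: int, height: int) -> str:
        # Folding the model version into the key means a version bump
        # never serves stale outputs -- old entries simply stop matching.
        raw = f"{model_id}:{model_version}:{width}x{height}:{prompt}"
        return hashlib.sha256(raw.encode()).hexdigest()

    def get(self, model_id, model_version, prompt, width, height):
        return self._store.get(
            self._key(model_id, model_version, prompt, width, height))

    def put(self, model_id, model_version, prompt, width, height, url):
        self._store[
            self._key(model_id, model_version, prompt, width, height)] = url
```

In production you'd back this with Redis or a CDN rather than a dict, but the design decision, versioned keys instead of explicit invalidation, is the part that matters.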
A Quick Example: Building with the ModelsLab API
Here's what this looks like in practice. The boilerplate for an API call is trivially agent-generated:
```python
import requests
import time


def generate_image(prompt: str, api_key: str) -> str:
    """Generate an image and return its URL. Handles async polling."""
    response = requests.post(
        "https://modelslab.com/api/v6/images/text2img",
        headers={"Content-Type": "application/json"},
        json={
            "key": api_key,
            "model_id": "flux",
            "prompt": prompt,
            "width": "1024",
            "height": "1024",
            "num_inference_steps": 30,
            "webhook": None,
            "track_id": None,
        },
    )
    result = response.json()

    # The API may return immediately, or hand back a fetch URL to poll.
    if result.get("status") == "processing":
        fetch_url = result["fetch_result"]
        for _ in range(30):  # Poll up to 30 times, 2 seconds apart
            time.sleep(2)
            fetch_response = requests.post(fetch_url, json={"key": api_key})
            fetched = fetch_response.json()
            if fetched.get("status") == "success":
                return fetched["output"][0]
        raise TimeoutError("Image generation did not finish in time")

    return result.get("output", [""])[0]
```
Let the agent write that. But the decisions that surround this code — how many times to poll, what to do when generation times out, whether to implement a queue system for high-volume use, how to handle the NSFW filter responses, when to fall back to a different model — those decisions require you to understand the system you're building, the capability you're integrating, and the users you're serving.
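One of those decisions, falling back to a different model, can be kept separate from the API call itself by expressing the policy as a thin wrapper over whatever generate function you settle on. A sketch under stated assumptions: `generate_with_fallback` is a hypothetical helper, not part of any SDK, and it assumes your generate function raises on timeouts or filtered output.

```python
from typing import Callable


def generate_with_fallback(prompt: str,
                           model_ids: list[str],
                           generate: Callable[[str, str], str]) -> str:
    """Try each model in order; return the first successful image URL.

    `generate(model_id, prompt)` is whatever API call you've built.
    It is assumed to raise on timeout, NSFW rejection, or API error.
    """
    last_error: Exception | None = None
    for model_id in model_ids:
        try:
            return generate(model_id, prompt)
        except Exception as err:  # timeout, filter rejection, transient failure
            last_error = err
    raise RuntimeError(f"All models failed for prompt: {prompt!r}") from last_error
```

The point isn't the ten lines; it's that the ordering of `model_ids`, and what counts as a failure worth falling back on, are exactly the decisions the agent can't make for you.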
That's engineering work. The agent is a fast executor. You're still the designer.
The 747 Pilot Got It Wrong About One Thing
The Belgian pilot told Kolon that after mastering the 747, there was no improvement — no better today than yesterday. But this assumed the 747 was the relevant domain.
Kolon's insight, and ours: the domain is software systems, not coding syntax. If you're using AI coding agents to handle the execution layer, your improvement metric shouldn't be "how fast can I write this function." It should be "how well do I understand the systems I'm building, the capabilities I'm integrating, and the problems my users actually have?"
Measured that way, the best developers in 2026 are not becoming pilots. They're becoming architects who happen to have very fast construction crews.
The ones at risk of the 747 trap are the ones who let the coding agent do the architecture too — who prompt their way to working code without ever building the mental model that makes them irreplaceable when something breaks.
Don't be that pilot. Build something with the API. Own the architecture. Use the agent to execute it fast. That's the version of this that's worth being.
