Using AI Is Management
There is a type of person for whom failing with AI is a point of pride. They tried it. It couldn’t do what they asked. They’ve decided this reflects well on them. The sophistication of their work. The irreducible complexity of their domain. “It couldn’t even handle it.” Said with satisfaction.
These people are missing the point completely.
What the Work Actually Is
Working with an LLM is management work. Not metaphorically. The same skills. You identify what to delegate. You define the outcome. You give the context someone needs to succeed. You review what comes back and decide if it meets the bar.
An experienced manager recognizes this pattern the first time they sit with a capable model. The interface is different. The dynamic isn’t. They’ve been doing this for years. Just with people.
People who have never managed anyone are starting from scratch. They don’t have a mental model for directing high-capability, independent work. They treat every interaction like a vending machine: put in the request, expect the output, declare failure when they don’t get it.
The Capable Employee Problem
Think about what it means to manage someone with a strong track record. Proven in several roles. Recommended by people whose judgment you trust. The general capability is not in question.
You give them an assignment. It comes back wrong. Then again. Then again.
At some point, a good manager stops asking what’s wrong with the employee and starts asking what’s wrong with the assignment. How was it framed? Were the success criteria clear? Did the person have the context they needed to do it well? Were the constraints explicit or assumed?
This is not abstract. It is the actual job. Underperformance from a strong player frequently points to a management problem, not a personnel one. The capable person needed something they didn’t get.
The same logic applies here. Models have demonstrated capability across an enormous range of tasks. If you’re not getting that performance, the assignment is where I’d start looking.
What Transfers
Delegation, done well, has a few components. You have to identify tasks appropriate to delegate. Not everything belongs on someone else’s plate. You have to specify what done looks like. Not the steps. The outcome. You have to provide sufficient context. And you have to be honest in your review: did it fail because the output was bad, or because you described the wrong thing?
Every one of those components has a direct analog in effective AI use. The English Trap is a specification problem: the instructions weren’t clear enough for the system executing them. Stop Micromanaging the Model is a delegation problem: you described the procedure when you should have described the destination. These aren’t coincidences. The failure modes are the same because the underlying activity is the same.
Years of being deliberate about task assignment, context, and success criteria transfer directly. Not because experienced managers know more about AI, but because they’ve practiced the relevant skill. The tool is new. The discipline isn’t.
Failure Is Not a Brag
The people to your left and right are getting real work done with this. Not demo work. Production work. Decisions that moved. Code that shipped.
When the argument is “I tried it and it failed me,” and the people around you aren’t having that experience, the data doesn’t point where you think it does. Failure is not evidence that the task was hard. It’s evidence that something in the interaction didn’t work. That’s a solvable problem. The pride in not solving it is harder to explain.
The model is not going to advocate for itself. It won’t push back on a badly framed request. It will produce something that looks like an answer and let you decide what to do with it. That’s the contract. You bring the judgment about what good looks like. The model brings the execution.
If the outputs are consistently bad and the people around you are consistently getting good ones, the common factor in your results is you. That’s not an insult. That’s the diagnosis.