As AI becomes mainstream, a growing number of people who have experimented with it deeply are finding it disappointingly ineffective in their daily work. While these observations have merit, I believe the core issue often lies in a fundamental misunderstanding: they're still treating AI as a tool. This article will explore several common scenarios to illustrate why we should shift our perspective and start managing AI like a person. We'll see how this simple change in mindset can debunk common myths about AI and dramatically increase its effectiveness.
In a sense, this mental upgrade is a promotion for you. Start thinking of AI as a new intern on your team, and you'll quickly find that many of its most frustrating technical flaws simply stop mattering.
1. AI is unreliable and makes frequent mistakes
This is one of the most common complaints, and it reveals the biggest misconception about AI. I completely understand the frustration. From a tool-centric perspective, if a calculator sometimes gives 34 and other times 35 for a simple problem like 5Ă—7, we would deem it useless.
However, viewing AI as a simple tool is a flawed mental model. For tasks with deterministic answers, like arithmetic, we rightfully expect 100% accuracy. But consider the tasks we assign to AI: programming, research, and answering complex questions. These are inherently ambiguous. They involve nuanced requirements, are communicated using imprecise natural language, and often require the AI to uncover context on its own. The uncertainty in AI's outputs isn't a failure of the AI; it's a reflection of the complexity of the problems it's asked to solve.
Take cars, for example. We see them as reliable tools: press the accelerator, it goes; press the brake, it stops. But this certainty is an illusion. A car's true task is getting us from point A to B, an open-ended problem. It's only because we, as drivers, shield the car from this complexity—handling traffic lights, braking for a wrong-way scooter—that it has the luxury of performing only deterministic tasks. Our constant management of uncertainty is what makes the car appear reliable.
This is why cars suddenly seem unreliable in the era of autonomous driving. A self-driving car might plow toward a truck it doesn't "see" or slam on the brakes for no reason. It's not because the car has gotten worse; it's because the problem is harder. There's no longer a human driver to manage the uncertainties of the road.
Therefore, we must abandon the old mindset of deterministic tools. Instead, we should manage AI as we would a person: a colleague, a subordinate, or an intern. I'm particularly fond of the "AI as an intern" analogy. When we see AI this way, its behavior becomes understandable. Managing uncertainty is a manager's primary responsibility. When you start using AI, you transition from being a driver to being a manager. Your value is no longer in pressing pedals but in being its navigator: you plan the route, anticipate risks, and absorb the complexities it can't handle, allowing its powerful engine to get you safely to your destination.
2. AI hallucinates, confidently making things up
This is another common flaw. But it's only a "flaw" because we expect a tool's output to be correct without verification. If you were dealing with a person, would you expect them to be free of error? Just look at how many people on social media confidently spout nonsense.
When it comes to hallucination, I don't believe humans hold a clear advantage over AI. So why do we find AI hallucinations so grating? When we interact with people, we subconsciously adopt a defensive stance; we know their statements might be unreliable. But we tend to carry the trust we place in traditional tools over to AI and let our guard down. It is this misplaced expectation that makes its hallucinations sting.
How do we solve this? Once again, the intern analogy helps. You wouldn't expect every piece of data from your intern to be perfect. You would build trust over time. Initially, you might double-check most of their work. As you collaborate, you'll learn their strengths—areas where you can delegate freely—and their weaknesses, where you need to keep a closer eye.
This process is about knowing your people and leveraging their strengths—a crucial aspect of management. Trust isn't black and white; it's a spectrum. Assessing where someone falls on that spectrum and adapting your management style is the essence of situational leadership.
This approach translates seamlessly to AI. For models prone to hallucination, we must be more vigilant; for models known for their rigor, we can relax a little. For low-stakes tasks, we can delegate without verification, but for critical tasks we must verify the work, whether it comes from a reliable human or a capable AI. After all, if you hand your subordinate's unchecked report to your boss, the blame falls on you, the manager.
This is why the "AI as an intern" analogy is so powerful. It helps us understand that AI will hallucinate and shows us how to use established human management techniques to balance efficiency and accuracy. This brings us to a core thesis: many "unsolvable" problems with AI are not new. They are human problems in disguise, and we already have a playbook for solving them.
3. AI is weak and slow; I'm better off doing it myself
This is a valid observation. While AI can perform some complex tasks at superhuman speeds, in many scenarios, it needs time to learn, and we might be more efficient doing the work ourselves.
But is this a new problem specific to AI, or a familiar one? I'd argue it's an old, common trap faced by high-performing individual contributors (ICs) transitioning into management. New managers, often promoted for their technical excellence, frequently find their direct reports are not as skilled as they are. Under pressure, they revert to being ICs, taking on tasks themselves or micromanaging their team.
On the surface, this boosts short-term output. But it positions the manager as just another team member, making them a bottleneck. An experienced manager, however, prioritizes long-term scalability. They focus on high-leverage activities that benefit the entire team, like setting technical direction or making key architectural decisions. Their impact becomes multiplicative, not additive, turning the team into an amplifier of their capabilities.
Many new human managers struggle with this transition. The same trap ensnares many new AI users, especially skilled ICs. As first-time "AI managers," they naturally feel their new "subordinate" is too weak and fall into micromanagement. The problem and their reaction to it are well-documented in existing management frameworks. By viewing AI as a human intern, we can better understand and solve the problem of it being "too weak."
4. AI writes code too fast to review, so quality is a problem
This is a consequence of the previous points: an unreliable, hallucinating, yet incredibly fast worker. We can't control its quality at a granular level. In human teams, there's a limit to how much "bad code" can be produced. But a few AIs can build a mountain of technical debt at astonishing speed, making projects unmaintainable. Surely this is a new problem unique to AI?
My answer is still no. This, too, is an old problem.
A single AI can match the output of three to five engineers. When we manage several AIs in parallel, we are effectively leading a team of a dozen or more. At that scale, a human manager can no longer know the details of everyone's work or personally verify its quality. This is a natural challenge of organizational growth.
How do human companies handle this? They introduce hierarchy. The manager becomes a senior manager (M2) and hires first-line managers (M1s) to lead smaller teams. The M2 is then freed up to focus on multiplicative work, improving scalability.
Specifically, an M2 manager no longer focuses on individual output but on higher-level systems like workflows and processes. Instead of reviewing every engineer's code, they establish automated testing systems and CI/CD pipelines to automate and scale quality control. These are mature practices in software engineering. While the M2 loses fine-grained visibility, the organization continues to function effectively.
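As one illustration of what this scaled quality control might look like in practice, here is a minimal sketch of a pre-merge quality gate. The specific tools (pytest, ruff, mypy) and the project layout are assumptions about a typical Python codebase, not a prescribed toolchain.

```python
# A minimal sketch of an automated quality gate: instead of reading every
# line an AI produces, the "M2" encodes their standards as checks that run
# on every change. The commands below assume pytest, ruff, and mypy are
# installed and that source code lives under src/.

import subprocess
import sys

CHECKS = [
    # (description, command)
    ("unit tests pass", ["pytest", "-q"]),
    ("linter is clean", ["ruff", "check", "."]),
    ("types check", ["mypy", "src"]),
]


def run_gate() -> int:
    """Run every check and report which ones failed."""
    failures = []
    for description, command in CHECKS:
        result = subprocess.run(command)
        if result.returncode != 0:
            failures.append(description)
    if failures:
        print(f"Quality gate failed: {', '.join(failures)}")
        return 1
    print("Quality gate passed: change is eligible for merge.")
    return 0


if __name__ == "__main__":
    sys.exit(run_gate())
```

Hooked into a CI pipeline, a gate like this lets the manager stop reading every line and instead manage the standards the code must meet.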
This isn't a problem; it's an opportunity. In a traditional career, becoming an M2 managing a dozen people is difficult. It requires technical skill, management prowess, political savvy, and luck. With AI, you can have a team of equivalent size, ready to work 24/7, with perfect memory transfer. If your management skills can keep up, this is a resource that was incredibly scarce in the past.
Of course, we are in the early days of the AI era, and best practices are still emerging. For instance, as we become "M2s," should we act as our own "M1s" with the help of automation, or delegate that role to another AI? These are questions to be explored. But at an abstract level, the problem and its solutions are already well understood in human organizations.
Conclusion
Many of the obstacles that seem to block AI adoption are not new problems that have appeared out of nowhere. If we use the analogy of managing an intern, we see that there is nothing new under the sun. These challenges have long existed in human society, and we have developed mature solutions for them. The difficulty arises from our mental inertia, treating AI as a tool instead of adopting a management perspective. Once we apply proven management principles, these challenges become solvable.
AI is not a silver bullet, nor is it an omniscient, all-powerful workhorse. It requires active management tailored to its unique characteristics. Our goal is not to perfect it as a tool but to transform it into an amplifier of our own capabilities. It can free us from tedious, low-level tasks, allowing us to become high-leverage, multiplicative contributors. The most important skill for using AI effectively isn't knowing how to train or fine-tune a large language model; it's knowing how to manage it.