The Current Expert Problem with ChatGPT

I was talking with my friend Mehdi Zonjy about the following, and I thought it was worth sharing.

ChatGPT’s ability to generate responses across a vast range of topics is both a strength and a limitation.

It’s not a sniper—precise and targeted—it’s a shotgun: broad and often generic.

And that’s a problem.

Take its confidence in incorrect answers. Right now, ChatGPT is like a junior developer who never says "I don't know," or "I'm not sure the solution I'm providing is robust enough."

It will give you an answer with confidence—whether or not it should, and whether or not the answer is correct.

This is extremely risky for new engineers entering the field, who often copy and paste solutions without fully understanding them. For production work, that's not good enough.

That said, ChatGPT is still incredibly useful—if you treat it like a junior team member who needs review, context, and guardrails.

As a senior engineer, you still have to guide it: nudge it toward specific design patterns, ensure it uses OOP when appropriate, or point out that a Lambda function has memory and execution-time limits that can invalidate its proposed solution.
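To make the Lambda point concrete: AWS Lambda caps execution at 15 minutes, so a long-running batch job that ChatGPT happily proposes can simply time out. A minimal sketch of the kind of guardrail a senior would add, checking the remaining time budget before each unit of work (the event shape and `item.upper()` work are hypothetical stand-ins):

```python
# Guard a Lambda handler against the hard execution-time limit.
# If time runs low, stop early and return the unprocessed remainder
# so the caller can re-invoke or queue it, instead of timing out.
SAFETY_MARGIN_MS = 10_000  # leave time to persist progress

def handler(event, context):
    items = event.get("items", [])
    done = []
    for item in items:
        # Lambda's context object exposes the remaining time budget.
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break
        done.append(item.upper())  # stand-in for real per-item work
    return {"processed": done, "remaining": items[len(done):]}
```

This is exactly the kind of constraint ChatGPT won't volunteer unless you raise it.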

In Product Management, ChatGPT is good at surface-level tasks: generating user stories, drafting PRDs, summarizing customer interviews. That’s helpful, and in many ways mirrors a Junior PM—someone learning the ropes and able to produce artifacts quickly with the right direction.

But it can’t replace the judgment of a Senior PM—the person making trade-offs between business value and technical complexity, who knows when to ship a rough cut versus when to polish.

For example, if you ask ChatGPT how to prioritize features, it might suggest RICE or MoSCoW. But it won’t challenge the underlying assumptions.
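And RICE itself is mechanical: score = (Reach × Impact × Confidence) / Effort. A minimal sketch, with entirely hypothetical features and scores, shows why the framework is the easy part:

```python
# RICE score = (Reach * Impact * Confidence) / Effort
# Feature names and numbers below are made up for illustration.
features = {
    "dark mode":  {"reach": 8000, "impact": 1, "confidence": 0.8, "effort": 2},
    "SSO login":  {"reach": 1200, "impact": 3, "confidence": 0.5, "effort": 5},
    "CSV export": {"reach": 3000, "impact": 2, "confidence": 0.9, "effort": 1},
}

def rice(f):
    """Reach x Impact x Confidence, divided by Effort (person-months)."""
    return f["reach"] * f["impact"] * f["confidence"] / f["effort"]

# Rank features by descending RICE score.
for name, f in sorted(features.items(), key=lambda kv: rice(kv[1]), reverse=True):
    print(f"{name}: {rice(f):.0f}")
```

The arithmetic is trivial; the judgment lives in the inputs. Nothing in the formula asks where those reach and confidence numbers came from, and neither will ChatGPT.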

That’s a bummer. It won’t ask things like: Are we even building the right product? What’s the risk of not shipping this?

It can’t sense the tension between short-term growth and long-term strategy.

That requires context, lived experience, political awareness, and an instinct for timing—things machines don’t yet have.

The danger is when people start treating ChatGPT like a senior.

When it comes to deep expertise, a human—a senior—is still necessary.

That will change. And I hope it does.
