


The Real Danger of AI: Confidently Doing the Wrong Thing

AI is making it easier to be wrong.

by Janet Ply, PhD · The Practical Leadership Newsletter · April 28, 2026

I’m hearing a lot of people talking about “Human in the Loop” (HITL) with AI.

HITL is the deliberate act of inserting human judgment, validation, and decision-making into AI-driven work before anything is executed.

Here’s the problem: too many people skim AI output (recommendations, content, ideas, resumes, and the like) and use it without sufficient validation.

That creates a real risk: implementing the wrong thing…confidently.

We Solved the Information Problem - Not the Implementation Problem

Chris Prouty said it well:

“We got very good at delivering information.

We never got good at helping people implement it.

Now AI makes information instant, and the implementation gap is wider than ever.”

He calls the missing piece Learned Intelligence.

There is a category of knowledge you cannot get from a course, a framework, or an AI-generated strategy.

You can only get it from:

  • doing the work
  • hitting the walls
  • figuring out what actually works in real situations

“My Goal Is to Implement a System by Year End”

Let’s look at an example.

I was facilitating a strategic planning session. Each leader brought their top three goals.

A Technology VP said, “My goal is to implement a new software platform to reduce errors.”

He had already used AI extensively:

  • how to position the initiative
  • how to structure the team
  • risks, timelines, even a polished presentation

It all looked solid. But it wasn’t.

Here’s what we found

“Implementing a system” is not a goal.
It’s an action that should support a goal.

So I asked a few questions and learned that the real issues were:

  • Manual front-end processes
  • Missing and inaccurate data
  • No enforcement of data quality

The same problems would exist, even with the new system.

It gets worse

He couldn’t get a business executive to sponsor the initiative.

So he decided to sponsor it himself.

That’s a red flag.

Technology-led business initiatives rarely succeed without business ownership.

And even worse

The business executive wasn’t available because she already had two major initiatives underway.

Many of the same people were needed.

He hadn’t considered:

  • resource constraints
  • organizational change fatigue

Here’s the scary part

AI didn’t catch any of this.

Because AI doesn’t understand:

  • your organization
  • your constraints
  • your politics
  • your reality

It predicts patterns. That’s it.

The Real Problem

If you don’t provide the right context, AI will confidently lead you in the wrong direction.

And I see this happen all the time.

AI Doesn’t Replace Judgment - It Exposes It

AI will give you answers all day long.

That doesn’t mean they’re right. And it definitely doesn’t mean they’re right for you.

Without Human-in-the-Loop thinking, AI accelerates mistakes faster than it accelerates progress.

If your use of AI doesn’t force you to:

  • think
  • question
  • refine

you’re not using it well. You’re hiding behind it and hoping nobody will notice.

Prompts Need Lived Intelligence

The prompts I use in my Practical Leadership workshops are built on lived experience and intelligence.

They don’t just generate answers; they challenge them.

If someone tried to list “implement a system” as a goal, the prompt would immediately push back.

It checks for:

  • people → process → technology order
  • executive sponsorship
  • change fatigue
  • and other real-world factors

And now I’m adding another layer:

Forcing the user to evaluate the output—not just accept it.

Add a Simple HITL Check

Adding questions to the end of your prompts can serve as a useful HITL check:

  1. If a junior analyst gave you this output, what would you question? What would you challenge? What would you push back on?
  2. What context, constraints, or situational nuances are not reflected in the output?
  3. What’s the cost of being wrong?

These alone will improve your results.
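If you send prompts through a script or an API rather than a chat window, the same check can be appended automatically. Here is a minimal sketch in Python; the function name and exact wording are illustrative, not from any specific tool:

```python
# Minimal sketch: append a Human-in-the-Loop (HITL) check to any prompt
# so the model is asked to critique its own output before you accept it.
# The function name and phrasing are illustrative, not a standard API.

HITL_QUESTIONS = [
    "If a junior analyst gave you this output, what would you question, "
    "challenge, or push back on?",
    "What context, constraints, or situational nuances are not reflected "
    "in the output?",
    "What is the cost of being wrong?",
]

def with_hitl_check(prompt: str) -> str:
    """Return the prompt with the three HITL questions appended."""
    checks = "\n".join(f"{i}. {q}" for i, q in enumerate(HITL_QUESTIONS, 1))
    return f"{prompt}\n\nAfter answering, evaluate your own output:\n{checks}"

if __name__ == "__main__":
    print(with_hitl_check("Draft a goal statement for reducing data-entry errors."))
```

The point isn’t the code itself; it’s making the evaluation step impossible to skip.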

Want to Use AI Without Getting Led in the Wrong Direction?

I put together a set of Practical AI Prompts for Leaders.

They include:

  • lived intelligence
  • built-in HITL checks
  • real-world leadership scenarios

If you want to learn how to use AI to think better, not just move faster, go to janetply.me and find a time for us to connect.

Leadership Is a Learnable Skill

If you want tactics you can use right now, Practical Leadership: A Guide to Building Trust, Getting Results, and Changing Lives would be a great addition to your library.

Mel Robbins, New York Times bestselling author and host of The Mel Robbins Podcast, had this praise for the book: “Janet Ply is the real deal. I’ve seen way too many talented people flail in leadership because nobody ever taught them how to do the job well. This book fixes that. Janet has been in the fire, she’s led through chaos, and now she’s giving you the tools she’s used to rescue high-stakes, high-dollar messes. If you lead people - or you want to - Practical Leadership should live on your desk. Get it, use it, lead better.”

Order it on Amazon or from your favorite local bookstore.