There’s an unexpected benefit to working with AI agents that has nothing to do with productivity: it’s making me a better communicator.

The mechanism is surprisingly simple—and surprisingly powerful.

The Rubber Duck Effect at Scale

Rubber duck debugging is a classic programming technique: explain your code to an inanimate object (traditionally a rubber duck), and in the process of articulating the problem, you often discover the solution yourself.

AI agents amplify this dynamic across everything you do.

Before AI: I regularly mentor developers who, despite being experienced, carry assumptions and context entirely in their heads. They’ll whiteboard ideas or write a one-pager, but a lot of tribal knowledge and tacit context is assumed to be shared by every reader—and often it isn’t. Those assumptions stay buried until something breaks—or worse, until a teammate’s confusion reveals the communication gaps.

With AI: Every interaction demands explicit articulation:

  • What problem am I trying to solve?
  • What constraints matter?
  • What does success look like?
  • What context is relevant vs. noise?

The agent doesn’t read my mind. It can’t fill in gaps with “you know what I mean.” It forces me to think through problems in a structured, communicable way.

The Correction Signal

Here’s where it gets interesting: when the agent goes wrong, it’s rarely just the agent’s fault.

Agent misinterpretations are a mirror. They show me exactly where my request was:

  • Underspecified: What did I assume was obvious but never stated?
  • Ambiguous: Where could my words be interpreted multiple ways?
  • Missing context: What background knowledge did I assume the agent had?

Each correction exposes a gap in my own communication. Every one of those misses is an opportunity for introspection: What did I miss? Is this part of a broader pattern? That kind of metacognition can be enriching both personally and for deepening relationships with others.

When I make the same vague request to a human teammate, they fill in context from shared history, tone of voice, and organizational norms. The AI can’t. It needs what I should have been providing all along—explicit scope and constraints.

This makes me better at communicating with everyone, not just AI.

Beyond Code: Communication as a Meta-Skill

The clarity AI demands transfers to every interaction:

With teammates: I’m more explicit about scope, constraints, and success criteria. Fewer “I thought you meant…” moments.

In documentation: I notice where I’m assuming context that won’t be obvious to readers. I write clearer specifications.

In personal relationships: I’m getting better at articulating needs and expectations rather than assuming they’re understood.

Clear communication requires clear thinking. AI gives you immediate feedback when your thinking is fuzzy.

The Broader Pattern: AI as Mirror

This pattern extends beyond communication:

Code reviews: AI forces me to explain why I made certain design choices, not just what changed.

Architecture decisions: Writing prompts for agent-assisted design forces me to articulate trade-offs I might otherwise leave implicit.

Problem decomposition: Breaking work into agent-appropriate chunks forces me to think about dependencies and interfaces more carefully.

In each case, the requirement to communicate with AI improves my thinking about the underlying problem.

The Counterintuitive Implication

If you’re working with AI primarily to go faster, you might be missing the bigger opportunity: using AI to become more deliberate.

The constraint of articulating everything explicitly can feel slower initially. You’re typing out context you’d normally assume. You’re specifying details you’d normally leave vague.

But that “slowdown” is actually building a practice of clear thinking.

And clear thinking compounds:

  • Fewer miscommunications with teammates
  • Better documentation that future-you can actually use
  • Clearer mental models that handle edge cases
  • Stronger ability to teach and mentor

The Meta-Loop: Improving at Improving

Here’s where it gets recursive: getting better at prompting AI makes you better at explaining things to yourself.

The cycle:

  1. You articulate a request to an AI agent
  2. The agent reveals where your articulation was incomplete
  3. You refine how you think about and express the problem
  4. Your next request is clearer
  5. Your thinking about similar problems improves

Over time, you internalize the discipline. You start thinking in terms of clear specifications before you even write the prompt. The AI becomes training wheels for a communication skill that transfers everywhere.

What This Means for Teams

If AI’s impact is primarily “individual developers go faster,” the implications are interesting but bounded.

If AI’s impact includes “everyone gets better at structured thinking and communication,” the implications are transformative:

Junior engineers: Accelerate the learning curve for articulating technical problems clearly

Senior engineers: Improve at transferring tacit knowledge into explicit frameworks

Product managers: Get better at writing clear, unambiguous requirements

Designers: Articulate design rationale more explicitly

The team-level benefit isn’t just velocity—it’s clarity accumulation. The organization gets better at making implicit knowledge explicit.

The Uncomfortable Question

If AI makes us better communicators by forcing explicit articulation, what does that say about how much we’ve been getting away with unclear communication all along?

How many “alignment issues” are really just “we assumed shared understanding that didn’t exist”?

How many “coordination failures” are really “nobody articulated the constraints clearly”?

AI doesn’t just reveal gaps in our prompts. It reveals gaps in how we’ve been thinking about and communicating problems all along.

Conclusion: AI as Communication Coach

AI agents aren’t just productivity multipliers. They’re communication coaches that give you instant feedback on:

  • Where your thinking is fuzzy
  • Where your assumptions are implicit
  • Where your articulation leaves room for misinterpretation

The rubber duck debugging effect at scale, applied to everything you do.

This isn’t just about working with AI. It’s about becoming a clearer thinker, a better communicator, and ultimately a more effective teammate, friend, and partner.

The unexpected upside: The very act of working with AI is training us in skills that make us better humans.


What skills are you unexpectedly improving through your work with AI? Connect with me on LinkedIn to share your observations.
