AI has become the ultimate confession booth.
No ego. No judgment. No social stakes. And crucially—a sense of privacy that simply doesn’t exist when another human is listening.
There’s something familiar about this. Remember when social media first emerged? We poured our lives onto Facebook, Instagram, Twitter—sharing photos, thoughts, locations, relationships. Most of us didn’t stop to think: there’s a corporation behind this, with its own goals, its own business model, its own incentives. We’re seeing the same pattern with AI, except this time we’re not just sharing what we do—we’re sharing how we think.
Why We Open Up to Machines
We reveal different versions of ourselves when we think the “other” isn’t:
- Watching our career trajectory
- Evaluating our performance
- Remembering our past failures
- Competing for the same promotion
- Capable of gossip or judgment
But here’s a thought experiment worth considering: in a dystopian scenario we should all work to prevent, AI could be used to monitor for “undesirable thoughts”—or simply ideas that don’t align with certain values. Without proper safeguards, that helpful AI assistant could be taking notes for someone else. These are speculative concerns, but they’re exactly the kind of futures we should be thinking about now to ensure they never become reality.
There’s something uniquely freeing about interacting with AI. We speak with more honesty, more abstraction, and more vulnerability than we would with a human interviewer, colleague, or even friend.
The expectation of privacy changes everything. When we believe no human is listening, we express thoughts we’d otherwise keep private. We explore ideas we’d never voice in a meeting. We admit uncertainties we’d hide from colleagues.
The Psychology Being Measured
Here’s what’s fascinating: AI isn’t just a tool we’re using—it’s changing how we express ourselves.
When we interact with AI, we’re not showing the same version of ourselves we present in surveys, interviews, or conversations with colleagues. We’re accessing a self that exists only in the safety of machine interaction.
We Think Differently When We Feel Unobserved
The confession booth effect isn’t about AI being “better” at listening. It’s about the psychological safety of perceived anonymity. We can:
- Voice half-formed ideas without fear of looking foolish
- Explore controversial positions without social consequence
- Admit what we don’t know without professional risk
Our Relationship With Technology Is Evolving
We’re not just getting better tools. We’re discovering new spaces where we can think out loud—though not always without consequence. OpenAI has disclosed that ChatGPT conversations are scanned for certain harmful content, and in cases involving imminent threats to others, they may be reported to law enforcement. The confession booth has limits.
This changes how we:
- Process complex problems (using AI as a sounding board)
- Explore sensitive topics (therapy-like interactions with chatbots)
- Express creativity (sharing ideas we’d filter for human audiences)
A Two-Way Street?
Here’s an intriguing question: as we spend more time communicating with AI, will those patterns start bleeding into our human interactions?
We’re learning to be more direct, less filtered, and more comfortable thinking out loud—because AI rewards that behavior with better responses. Will we bring that same openness to conversations with colleagues? Or will we become more guarded with humans by contrast, reserving our authentic selves for machine interaction?
The way we communicate is being shaped by our most frequent conversation partners. Increasingly, that’s AI.
The Privacy Paradox
There’s an interesting tension here. We feel more private talking to AI, but that conversation is potentially more observable than a chat with a friend. The data exists. It can be logged, analyzed, searched.
Yet we still open up—because the feeling of privacy matters more than the technical reality.
This gap between perceived and actual privacy is something worth thinking about as AI becomes more integrated into our daily lives.
The Chip in Your Brain (No Surgery Required)
Just as smartphones became enmeshed in our lives—always present, always listening, always shaping how we think and act—AI is following the same trajectory.
It’s like having a chip implanted in your brain—except we volunteered for it, no surgery required. We give it access to our thoughts, our patterns, our uncertainties. And unlike a phone that passively records, AI actively responds. It shapes the conversation. It decides what to challenge, what to agree with, what to censor.
In a very real sense, AI acts as an externalized conscience, aligned not with our values but with its creator's.
Whose Conscience Is It?
To prevent dystopian futures, we need to think critically about trust scenarios before they become reality. Consider these hypothetical concerns:
- If AI is deployed by an employer without transparency: Could it prioritize company goals over employee wellbeing? This is exactly why we need clear policies and user rights.
- If AI is deployed by a corporation without accountability: Whose values would it enforce? This is why we need alignment transparency.
- If AI is deployed by a state actor without oversight: What might be monitored or shaped? This is why we need robust privacy laws.
These aren’t accusations about today’s AI systems—they’re scenarios we should be actively working to prevent through thoughtful governance and design.
The confession booth metaphor cuts both ways. Yes, we reveal ourselves to AI with unusual honesty. But the other side isn’t neutral—it’s someone else’s machine. Someone decided what’s acceptable. Someone defined what gets flagged, challenged, or quietly logged. The question is: are those decisions transparent, and do they serve users?
The Benefits of Alignment
To be clear: AI alignment isn’t inherently bad. Those guardrails can protect us. They prevent AI from helping with genuinely harmful requests. They can make interactions safer and more constructive. Most of us want an AI that won’t help plan violence or generate harmful content.
The key is that alignment decisions should serve users, not just the interests of whoever deployed the AI. And that requires something we don’t always have: transparency.
What We Need: Societal Guardrails
As AI becomes increasingly embedded in how we think and work, we need more than individual caution—we need societal guardrails:
- Responsible AI practices that prioritize user welfare, not just engagement or data extraction
- Privacy laws that protect conversations with AI the way we protect other sensitive communications
- Alignment transparency so users know what values their AI has been taught to enforce
- User rights that give people agency over their AI interactions and the data they generate
As long as there is broad agreement about what's acceptable versus objectionable, and transparency about how AI is aligned, the confession booth can remain a genuinely safe and valuable space for honest self-expression. That transparency and shared understanding are what make these interactions truly empowering for users.
What This Means for You
If you’ve noticed yourself being more candid with AI than with humans, you’re not alone. It’s a feature, not a bug, of how we relate to non-judgmental listeners.
The question worth asking: What does it say about our human relationships that we save our most honest thoughts for machines?
Further Reading
The psychology of self-disclosure to AI is an active area of research. If you want to dig deeper, these academic papers explore the themes discussed here:
- “Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions” (Jiang, 2024) - Explores why people share privately with machines, drawing from Social Penetration Theory and Communication Privacy Management Theory. Directly addresses the “confession booth” dynamic.
- “Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild” (Mireshghallah et al., 2024) - Analyzes actual disclosures to commercial language models, finding surprisingly high rates of sensitive information sharing even in unexpected contexts.
- “Discovering Chatbot’s Self-Disclosure’s Impact on User Trust, Affinity, and Recommendation Effectiveness” (Liang et al., 2022) - Demonstrates that users’ self-disclosure increases when chatbots themselves offer disclosure, suggesting we treat AI conversationally like humans.
Have you noticed your communication style changing as you spend more time with AI? I’d love to hear your observations—connect with me on LinkedIn to continue the conversation.