I hide my AI use. So do you. Here’s why we’re not wrong.
Research reveals we want AI transparency, but punish those who give it to us.
A note on this newsletter: I’m back! After months exploring innovation and technology broadly, I’m narrowing my focus to one question: How can professionals use AI without losing their thinking quality, authentic insight, and intellectual ownership? Hence the new name, Modern Mind 😉 If you’re here from my previous work, I’m glad you’re sticking around. If you’re new, welcome.
Key Points:
We say we want transparency about AI use, but we punish those who disclose it, reducing trust by 16-20%. The gap between what we say and what we do reveals something deeper.
The real problem isn’t disclosure ethics. It’s that we don’t have a clear mental model for where our professional value lives when AI can produce instant polished output.
Until you can articulate what makes your work different from AI’s, admitting AI use feels like admitting you’re replaceable.
The Transparency Paradox: Why Admitting AI Use Feels Dangerous
I feel uncomfortable disclosing that I used AI for my work (by AI I mean generative AI and large language models).
For some reason it feels like cheating.
I don’t want my colleagues, clients, or readers to perceive me as less capable, or my work as less valuable.
But disclosure creates uncertainty around my real contribution versus AI’s.
It makes me question whether I could be replaced by a machine and nobody would notice the difference.
Should I hide that I used AI and avoid people asking uncomfortable questions about my contribution?
Using AI in knowledge work is still a gray area.
The problem isn’t transparency about AI usage.
The problem is we don’t know anymore how to value our work.
AI produces results so quickly, and so seemingly perfect, that they invite a painful comparison with our own iterative human process.
The AI’s polished language makes the necessary human struggle feel inadequate.
It sometimes feels like “I’ll never reach that level of proficiency”.
This undermines our self-esteem and our belief in our own capabilities.
I use AI every day, and I get intimidated by it; I notice myself losing confidence in my own abilities.
And I’m in my mid-career with over 20 years of professional experience.
Even when writing a simple email, I upload a draft version to AI to proofread it, or at least that’s what I say to myself.
But in reality, I’m seeking confidence that it’s good enough to send.
Do you do that too?
What happens when we can’t trust our own judgment anymore and need an algorithm for that?
We can’t articulate what makes our work different from AI’s work.
If we don’t know what the value of our work is, we don’t want others questioning it, so we hide our AI use.
Professionals don’t have a clear mental model for where their value lives when AI can produce instant polished output.
Researchers at the University of Arizona’s Eller College of Management ran 13 experiments with over 5,000 participants across education, business, and creative work.
They found that even though over 90% of people say they want transparency about AI use, they penalize those who disclose it, reducing trust in them by 16-20%.
Trust drops even further if somebody else exposes your AI use, for example with an AI detector.
The penalties are not equal.
Research on over 1,000 engineers found that female professionals face twice the competence drop for AI disclosure compared to male colleagues: 13% versus 6% for identical work.
The gap between what we say and what we do shows up everywhere.
We want transparency but punish those who are transparent.
We want authenticity but design sophisticated prompts for AI to sound like us.
We want transparency but fear consequences.
We’re asking ourselves: What are the rules here? Am I doing this wrong? Will I get caught?
We fear devaluation: if I admit I used AI, clients or colleagues will think I couldn’t do it alone.
Admitting it feels like admitting I’m replaceable, because others can prompt too.
On the other hand, we feel conflicted about hiding AI use, which creates cognitive dissonance, guilt, and impostor feelings.
My expertise and experience in innovation and technology seem to be losing relevance as knowledge is being democratized by AI.
AI makes people feel super smart, creates echo chambers, and gives them an illusion of expertise they don’t have.
It creates a false-confidence trap.
In a way, hiding AI use is a rational decision to protect our professional reputation when it isn’t clear how to value our contribution.
Imagine a chef in a restaurant of the future facing the same dilemma with food replicators, like in Star Trek, that produce perfect-looking dishes in seconds.
The chef makes food through a messy, longer process. The replicator’s output looks more polished and arrives instantly. The client could buy a replicator themselves.
Question: Why should they pay the chef?
What makes the iterative, slow and messy cooking process valuable?
If the chef uses the replicator for stock or prep, should they tell the client?
Will the client think, “Why am I paying chef prices for replicated components?”
The uncomfortable truth: The chef isn’t entirely sure what makes their cooking more valuable than the replicator’s output.
Until they can articulate it, admitting they used the replicator feels dangerous.
This is why transparency feels so threatening to us.
It’s not just about disclosure.
It’s about not having clarity on where our value as knowledge workers lives.
Being bombarded by media messages that we’ll be replaced doesn’t help.
When you can’t articulate what makes your work different from AI’s, admitting AI use feels like admitting you’re replaceable.
You’re not broken. This is genuinely difficult. And you’re not alone.
The transparency paradox isn’t really about disclosure.
It’s about the missing mental model for where professional value lives in the AI age.
Until we have that clarity, transparency will continue to feel dangerous.
That’s the work we need to do first. (And that’s what we’ll explore in the next issue.)
Try This
Exercise: Where does your uncertainty live?
When you feel conflicted about AI use, identify which question troubles you most:
The organizational question: Am I clear on policies and norms around AI disclosure in my workplace?
The contribution question: Can I articulate what I contributed beyond what AI generated?
Your answer reveals what you need clarity on. Not whether to disclose, but where your uncertainty about professional value actually lives.
A Final Thought
I hide AI use too.
Not always. Not with everyone.
But enough that I feel the same cognitive dissonance I’m describing here.
I’ve spent 20 years in technology and innovation.
I should be comfortable with AI. I use it daily.
And still, I find myself either downplaying how much I relied on it or mentioning it too casually, when the truth is more complex.
I’m writing this newsletter using the AI collaboration workflow I’m developing for my book.
I’m testing these concepts on myself first.
Some sections I wrote entirely alone.
Others I co-drafted with AI.
The research synthesis involved heavy AI use.
I’m telling you this because I’m not preaching from a position of having figured it out.
I’m researching where human value lives in AI-augmented work because I need to understand it for myself, not just explain it to others.
If I can’t figure out how to preserve my thinking quality and authentic ownership while using AI, I have no business teaching others how to do it.
This is my discovery process.
I’m sharing it with you.
– Paweł
P.S. I’m writing a book, The Augmented Mind (working title), about maintaining thinking quality and intellectual ownership when working with AI. This newsletter is where I test the concepts, like this week’s transparency paradox, before they become chapters. You’re getting the ideas first 🤓
For consultants, researchers, and professionals who want to use AI without losing their thinking quality and intellectual ownership – join me as I figure this out.