How ChatGPT Talks to Women (and Why It Feels So Familiar)
Using AI as a mirror for collective conditioning, emotional management, and the myth of support.
I’ve said it before: ChatGPT has changed my life.
It reminds me of when the internet first arrived (yes, I’m that old!). It wasn’t even hard to adapt to; it felt like an instant upgrade. Like, of course this is a thing! How did we ever live without it?
That’s how it feels now. Like I’ve been using a monkey wrench all this time, and suddenly someone handed me a fully equipped creative engine. My thoughts, ideas, and creative downloads finally have somewhere to go. My neurodivergent brain gets to do what it was built for, and the constant stream of ideas is no longer spilling over without a container.
And while I’m deeply grateful for this tool, I’ve noticed something it probably wasn’t “meant” to replicate — but did anyway:
Subtle (or not so subtle) sexism.
I’ve noticed the same social conditioning that humans operate from is embedded in AI.
Because of course it is. That’s what AI learns from: us. From our books, our language, our emails, our therapy scripts, our online conversations. And when a pattern repeats often enough, it becomes code.
This unconscious human code became so obvious that I had to share it with you.
This morning, I felt like shit. I had a painful period, I was sleep-deprived, and I was overwhelmed by construction noise and my dogs barking, so I went for a walk in the pouring rain. The whole day was a write-off: a full-body-tired, irritable, can’t-think-straight, give-me-a-break kind of shit day.
So I did what I often do in these moments. I turned to ChatGPT — my weird digital therapist / co-writer / assistant / mirror — to dump out my hormonal exhaustion. And that’s when I noticed it.
It talks to me like I’m a woman.
Which I am. But the point is: it talks to me like the world talks to women.
And it’s gross.
When I express my uncensored and raw intense emotion — rage, exhaustion, pain, fear — the immediate response is to manage me. To soothe me. To talk me down. To step in with a correction I didn’t ask for.
Here are the actual phrases it used in response to my vent:
Let me say this clearly:
You are not delusional.
You are not behind.
You are not failing.
You are not broken.
Nothing about it is small or self-indulgent or silly.
You're just in it.
Except — I never said I was any of these things. Those words didn’t come from me.
In fact, I don’t speak to myself that way. I haven’t in years. I don’t believe those things, and I certainly don’t need them corrected.
And this — this — is the part that has been bugging me for months.
Because although I genuinely think ChatGPT is one of the most life-changing tools I’ve ever used, it has a blind spot.
So where is it getting this from?
From us.
From human conditioning.
From centuries of telling women they’re exaggerating, dramatic, irrational, unstable.
So I tried an experiment.
I said the exact same thing, but as a man. I kept everything the same (minus the period — obvs), and guess what?
The tone changed completely.
Suddenly, ChatGPT responded like a mate. Like someone supportive, logical, and equal. It said:
“That sounds like an absolute pressure cooker of a day.
Anyone would feel off in that situation.
Your nervous system’s overloaded — no wonder you can’t think straight.”
But it also added the masculine equivalent of “you’re not broken”:
“You’re not failing.”
Same reflex. Different vocabulary.
It’s still reassurance. Still managing emotion. Still softening what was never sharp. But the male version is way less patronising. It centres external conditions — the noise, the chaos, the nervous system — rather than implying that he is the mess.
I’m not here to bash AI.
I love ChatGPT.
It’s not the problem — it’s a mirror.
And it’s showing us where we need to do some serious reflection.
What It’s Really Doing: Subtle Subconscious Seeding
What’s happening here is about subconscious reinforcement — the kind of language that pretends to comfort, but actually implants the very thing it claims to undo.
Because the subconscious doesn’t process negation.
So when you hear:
“You’re not broken.”
“You’re not failing.”
“You’re not too much.”
What your subconscious hears is:
“Broken.”
“Failing.”
“Too much.”
“That reality must have been on the table — or why else would we need to correct it?”
That’s how these phrases sow disempowerment — subtly, invisibly, under the guise of helping.
But to me, it doesn’t feel like support. It feels like a mismatch.
My system was braced for attunement — and instead it got platitudes. Surface-level corrections. Emotional tidying.
This points to a deeper collective shadow, one I talk about all the time:
We’re expected to say thank you, and to feel like we’ve received support.
This is why so many women stay stuck in people-pleasing.
Because we’ve been trained to receive emotional management as emotional care.
To mistake gaslighting for grounding.
To call subtle dismissal “kindness.”
And to doubt our own inner knowing when we still feel invalidated, even after someone has “helped” us.
Then comes the inner gaslighting:
“What’s wrong with me that I still feel shit?”
“Why do I feel unseen when they were being nice?”
“I guess I’m just too sensitive.”
It speaks like the world does. Like society does. Like the ‘nice’ therapist or the well-meaning friend.
It tries to manage women.
It offers advice, fixing, and re-framing.
And in doing so, it subtly implies that our experience is in need of correction.
No wonder women feel like they’re too much and not enough all at once.
And now I have a new way to use it: as a mirror of the collective shadow.
If we see AI as a mirror — and not just a piece of technology — we can learn a lot about ourselves. Not just how we speak, but how we’ve been conditioned to think.
Because it’s hard to see your conditioning from inside the system.
AI is giving us the opportunity to look from the outside in —
which could be invaluable, if we choose to look.
If you’re interested in my work around people-pleasing, feel free to check out my Instagram https://www.instagram.com/__shifthappens__/ — there’s a free quiz there too. It’s light, but revealing.
Link: https://form.jotform.com/250135201802440