Sometime last year, a friend of mine — a senior marketing manager at a mid-sized company — told me she had started using ChatGPT to draft all her campaign briefs. "It takes me 10 minutes now," she said. "Used to take half my Monday." I asked if her boss knew. She laughed. "I think everyone on the team does it. We just don't talk about it out loud."
That quiet adoption is exactly what's been happening across every industry for the past two years. And it's accelerating in a way that feels genuinely different from previous tech waves.
It's Not a Tech Thing Anymore
The conversation around AI used to live in certain pockets — software companies, VC-funded startups, people with "Machine Learning Engineer" in their LinkedIn bio. That's over now.
My doctor's clinic uses AI to transcribe appointments. A friend who's a criminal lawyer tells me he uses it to summarise case law before hearings. Teachers are using it to generate differentiated lesson plans. My cousin runs a small bakery and uses it to write her Instagram captions every week.
The numbers back this up. ChatGPT reached 100 million users within two months of launch, which made it the fastest-growing consumer app in history at the time. Microsoft, Google, Adobe, Notion, Canva: every tool people already use at work now has some form of AI built in, often whether you asked for it or not.
The shift isn't "coming." It's already here. What's happening now is the second phase: the gap between people who use it well and people who don't is becoming visible.
What Actually Happens If You Don't Use It
I want to be careful here, because a lot of AI commentary slides into either pure hype or panic, and neither is all that useful.
The honest answer is: it depends on what you do.
If you're a copywriter who still produces a 500-word blog post in three hours while your colleague uses AI as a first draft and edits it down in 45 minutes — your client notices. Not because AI is "better," but because speed and output have changed what's expected. The baseline has shifted.
Same thing in data analysis, customer support, recruitment, basic legal research, accounting. Tasks that used to require two people and a week can now be done by one person with the right tools. Companies are noticing. Some are quietly not backfilling roles when people leave. Others are restructuring entire teams.
This isn't about AI replacing people wholesale. It's messier than that. People who use AI well can do more, and that shrinks the number of people a team needs to get the same work done.
If you're not adapting, you're not standing still — you're falling behind relative to the people around you who are.
But Here's the Other Side
I genuinely don't think manual skills are going away. And I think the panic narrative — "AI will take all the jobs by 2027" — misses something important about what work actually is.
AI is impressively capable at tasks that are repeatable, well-defined, and mostly about producing output from a template. First drafts. Summaries. Code boilerplate. Research starting points. Data transformation. Formatting.
What it's much weaker at: judgment calls with real stakes, navigating genuine ambiguity, building trust with another human, making creative decisions that require understanding the culture of a specific moment. The senior copywriter's job isn't to write 500 words — it's to know which 500 words land for this brand with this audience right now. The lawyer's job isn't to find the case law — it's to argue it persuasively to a judge who's seen a hundred cases just like it.
The people who are genuinely at risk are those doing purely executional, low-judgment work. That's real and worth taking seriously. But honestly, that vulnerability existed before AI too — those roles were already being compressed by automation, outsourcing, and SaaS platforms eating into what used to require a person.
So Where Does This Leave Us
The people who'll do well over the next decade are not necessarily the most "AI-native." They're the ones who have genuine expertise in something — and who use AI to go deeper and move faster in that domain.
A strong designer who uses AI image tools becomes more productive, not less relevant. A good writer who uses AI for research and structure frees up mental space for the harder editorial decisions. A developer who leans on AI for repetitive code can focus on architecture and system design — the parts that actually matter.
The risk is the opposite of that. Shallow generalist skills plus AI still produces shallow work. AI doesn't turn average thinkers into great ones. It raises the floor. The ceiling is still set by what you actually know and how clearly you think.
A Few Things Worth Doing Right Now
I don't have a perfect playbook here. But some things seem fairly clear.
Pick one tool and actually learn it properly. Not just try it once and move on. Dig in — understand what it's good at, where it falls down, how to get what you actually need from it. Most people are surface-level users. The gap between casual and capable is larger than most people realise.
Use it for the parts of your work you procrastinate on most. First drafts, meeting prep, summarising long documents, repetitive formatting tasks. Free that time up for the things that actually need your brain in the room.
And don't pretend the shift isn't happening. I understand the instinct to push back — the hype cycle is genuinely exhausting and a lot of AI commentary is overwrought. But refusing to learn how to use a spreadsheet in 1995 because you preferred doing things by hand wasn't principled. It was just expensive.
The question worth sitting with isn't "is AI good or bad?" It's: given that this is happening, what's the smartest way for me to respond to it?
That's a question only you can answer. But no answer at all is probably the worst one.
