Dawn of the Brain-Rotted Zombies
The Weekend Windup #16 - Reflections, Cool Reads, Events, and More

I was talking to a friend the other day, and he lamented that many of his once-competent coworkers are becoming brain-rotted zombies. Instead of doing the stuff we used to do - thinking through problems and solving them, taking the time to write (emails, memos, etc.) - they’ll now just toss the scraps of scattered ideas into ChatGPT and have it spit out emails, memos, strategy, documentation, and so on. And like most of us, they’re using AI coding tools to write all of their code.
He went on to describe a sense that his coworkers are turning into mindless automatons, almost incapable of coming up with original ideas. In this case, AI’s not an assistant, and not even a crutch. It’s a wholesale lobotomy of entire roles.
This is not the first or hundredth story I’ve heard about AI transforming how we work. I talk to executives, managers, practitioners, professors, and students, and the main takeaway is we’re at a transition point right now. And the transition toward mostly AI-first workflows is happening at warp speed.
Chatting with a professor colleague the other day, he said that although AI was sporadically used by students in prior semesters, Fall 2025 was the first semester where every student used AI for everything - homework, exams, etc. How could he tell? Because almost everyone handed in assignments that were not just wrong, but implausibly bad. He had to fail quite a few students who couldn’t explain the AI slop they handed in. He also caught students using AI in a proctored final exam! They too were failed.
Like a lot of professors, he’s back at the drawing board, trying to figure out how to teach students. Some other professor friends are going back to using Blue Books (if you’re under 30, look them up) and group discussions. How teaching in the age of AI scales to massive undergrad classes of hundreds or thousands of students is a huge open challenge. Schools are freaking out, and nobody has figured this out yet.
Early on, ChatGPT took off because college students posted TikToks of themselves using it to cheat on homework. What’s really interesting, and concerning, is whether those students learned anything in school, because they’re now entering the workforce. Doubly troubling is the abysmal hiring market for new graduates. The open question: if they find work, is it better to be competent in their field of study or proficient at AI? The correct answer is both, but that’s a tall order if a student used AI for their entire learning experience and didn’t actually pay attention or learn anything.
I fear we’re building a competency debt bubble. Just as we accrue technical debt by cutting corners in code and systems, we are now accruing human debt. This debt will eventually come due when the students who “AI-ed” their way through school are expected to lead. If you can’t spot “implausibly bad” slop, you aren’t proficient at AI. You’re just a glorified copy-paster who will eventually be replaced by the very tool you’re leaning on.
Mindless velocity without comprehension is just a fast track to collective stupidity. Idiocracy was a comedy, not a documentary. As I keep hammering in my old man rants, the only vaccine against the brain-rot zombie plague is a return to the fundamentals. Write the memo yourself. Read a book. Manually debug the code once in a while. If you stop using your brain, you shouldn’t be surprised when it stops working. Brain-rotted zombies are among us, and sooner or later, we might all be among the infected unless we pause and do what’s human, not just what we’re measured to do on some arbitrary scorecard.
I don’t have an answer for this yet. Maybe there isn’t one and this is just the inevitable transition of humanity into cyborgs as the Singularity approaches. Hell if I know. I’m still trying to sort this out. But I’m curious: have you found a way to use these tools without losing your edge? Is there a middle ground between being “Amish” and a “brain-rotted zombie,” or is the temptation of easy-mode too strong for most people to resist?
Let’s talk in the comments.
In other news:
If you’re a company wanting to work with me (training, workshops, B2B, speaking, etc.), let’s chat. My 2026 calendar is filling up fast, so let’s figure something out while the year is young.
The final manuscript of Mixed Model Arts, Book 1, is nearly finished. It will be released to paid subscribers sometime soon, in the form of various paywalled chapters. Then the harder part begins - editing. As any writer worth their salt will tell you, editing is where real writing begins. Plus, there’s the course for the book to record. Giddy up.
That said, not having to focus so intently on book writing frees me up to publish more articles here and at Practical Data Modeling (my other Substack). Got a lot of articles in the queue, and I’m stoked to share some pent-up thoughts. For my personal Substack (this one), I want to go broader into tech, society, the economy, and related topics. PDM will be more focused on practitioner content. At least, that’s the plan for now.
There will be much more on YouTube. If you aren’t a subscriber, please join and get first dibs on lots of excellent data content (interviews, tutorials, etc) in the pipeline.
I’ve got January’s podcasts already recorded, and I’m in the middle of editing them. You’re in for some real doozies - Cory Doctorow (wtf?!), Bill Inmon, Barry McCardel, and more.
Have a great weekend,
Joe
🚨 Quick Reminder - Take the Survey!
The Practical Data 2026 State of Data Engineering survey is still open, and I’d love more voices in the mix.
The goal is simple: build a picture of how data teams actually work in 2025. Not what vendors say we do, not what a “mega analyst firm” suggests, but ground truth from practitioners.
We’ve got a lot of responses so far (over 700 and counting), which is excellent. But the more perspectives we capture, the more useful this report becomes for everyone.
If you work in data (DE, analytics, AI/ML, platform, architecture), it takes 2–3 minutes:
Survey ends January 10, 2026.
The full report drops after the data is digested and is free for everyone.
Thanks to those who’ve already participated. 🙏
Awesome Upcoming Events
I’ll be at Data Day Texas! See you there.
Still working on my 2026 event schedule, and so far it looks dope. Will reveal more soon, so stay tuned…
See my upcoming events, which are also posted here.
But wait, there’s more!
Cool Reads and Videos
Here are some things I read this week that you might enjoy.
Why A.I. Didn’t Transform Our Lives in 2025 | The New Yorker
Companies Are Outlining Plans for 2026. Hiring Isn’t One of Them. - WSJ
Models Are the Airplanes. Data Is the Airlines.
Americans Hate AI. Which Party Will Benefit? - POLITICO
You’ve been targeted by government spyware. Now what? | TechCrunch
Our Reporters Reached Out for Comment. They Were Accused of Stalking and Intimidation.
2025: The year in LLMs
Data Identity Politics and The Kimball vs. Inmon War
The Decline of Deviance - by Adam Mastroianni
Python Numbers Every Programmer Should Know
Capital in the 22nd Century
Hating Stranger Things During the Death Rattle of Criticism
Find My Other Content Here
📺 YouTube - Interviews, tutorials, product reviews, rants, and more.
🎙️ Podcasts - Listen on Spotify or wherever you get your podcasts
📝 Practical Data Modeling - This is where I’m writing my upcoming book, Mixed Model Arts, mostly in public. Free and paid content.
The Practical Data Community
The Practical Data Community is a place for candid, vendor-free conversations about all things tech, data, and AI. We host regular events such as book clubs, lunch-and-learns, Data Therapy, and more.


