Code Wasn't The Hard Part (Keep Building)
The Weekend Windup #14 - Reflections, Cool Reads, Events, and More
“What I built today might be obsolete tomorrow.”
Earlier this week, a developer told me these words, and they stuck with me for the rest of the week. AI models keep improving, almost weekly (I hear Claude Sonnet 4.7 is dropping imminently). The model from a few months ago, and definitely from earlier this year, is legacy. And especially since OpenAI is in Code Red (again), it seems there’s a renewed race among hyperscalers to release better models and improve their model ecosystems.
In the end, the users benefit (for now). Right now, the economics of AI make zero sense: invest hundreds of billions of dollars to make tens of billions in return. I think the party will end soon. Enjoy these subsidized coding superpowers while they last, because the economics will eventually tighten (enshittification, price hikes, etc.).
But back to the developer’s comment. When the major AI models improve at a fast clip, it can certainly feel like you wasted your time building a feature that the latest model can suddenly replicate with ease. AI coding capabilities are relentlessly improving. The eventual outcome might be that AI writes most or all of the code for your product (I recently heard that OpenAI’s Codex is 90% autonomously coded by AI).
Writing code was seldom the hard part. Code is only one of the many things we must consider when building systems. There’s architecture, workflows, and the pesky problem of making things users want to use. If code were the only consideration, why not build stuff in Excel, since it’s Turing-complete? Actually, much of the corporate world runs on Excel, but I digress.
It might seem disturbing that the work you put into something is a waste of time if an AI can now magically build it for you. But chalk that up to learning. AI helps us learn and iterate faster. If the model does all the work, though, especially at the start, you might miss out on key insights into your product and users. And you might not understand what the AI built. This is a legitimate concern, and one we’re still grappling with as an industry.

I’ve been telling devs and data engineers to learn product and systems thinking, as these will be critical skills as we increasingly work alongside AI, with AI handling much of the grunt coding work. AI can also be an excellent partner in designing systems and acting as a sounding board. But it’s just one tool of many at your disposal. You have others, like talking to people and developing a sense of good taste. Don’t treat coding as a zero-sum game, where the AI wins and you lose. Instead, view it as an opportunity to accelerate the delivery of what matters to your users.
Engineering will move away from coding being the hard part and toward us (and AI) designing architectures and systems that support the delivery of better and better products. There’s no shortage of code to write to deliver these improvements, so I’m excited about the future. One concern is that we’ll lose sight of what we’re building; I’m still mulling this over and will articulate it in a future article. But that won’t stop AI from making massive advancements in coding, nor will it stop our ability to use these new models to keep building.
In other news:
If you’re a company wanting to work with me (training, workshops, B2B, speaking, etc.), let’s chat. My 2026 calendar is filling up fast, so let’s figure something out while the year is young.
The final hard chapter of Mixed Model Arts, Book 1, is nearly finished. It will be released to paid subscribers sometime next week. Then the harder part begins: editing. As any writer worth their salt will tell you, editing is where the real writing begins. Then there’s recording the course for the book. Giddy up.
That said, not having to focus so intently on book writing frees me up to publish more articles here and at Practical Data Modeling (my other Substack). I’ve got a lot of articles in the queue, and I’m stoked to share some pent-up thoughts. For my personal Substack (this one), I want to go broader into tech, society, the economy, and related topics. PDM will be more focused on practitioner content. At least, that’s the plan for now.
There will be much more on YouTube. If you aren’t a subscriber, please join and get first dibs on lots of excellent data content (interviews, tutorials, etc.) in the pipeline.
I’ve got the next month of podcasts already recorded. Will be editing them over the holiday break, and you’re in for some real doozies - Cory Doctorow (wtf?!), Bill Inmon, Barry McCardel, and more.
This is also the last newsletter until after Christmas. Merry Christmas, Happy Hanukkah, and have a great time during the break!
Have a great weekend,
Joe
🚨 Quick Reminder - Take the Survey!
The 2026 Practical Data State of Data Engineering survey is still open, and I’d love more voices in the mix.
The goal is simple: build a picture of how data teams actually work in 2025. Not what vendors say we do, not what a “mega analyst firm” suggests, but ground truth from practitioners.
We’ve got a lot of responses so far (over 700 and counting), which is excellent. But the more perspectives we capture, the more useful this report becomes for everyone.
If you work in data (DE, analytics, AI/ML, platform, architecture), it takes 2–3 minutes:
Survey ends January 10, 2026.
The full report drops after the data is digested, and is free for everyone.
Thanks to those who’ve already participated. 🙏
Awesome Upcoming Events
Working on my 2026 event schedule, and so far it looks dope. Will reveal more soon, so stay tuned…
See my upcoming events, which are also posted here.
But wait, there’s more!
Cool Reads and Videos
In this episode, Nik Suresh returns to the show to discuss his first year running a bootstrapped services company. And no, he probably won't pile-drive you if you mention AI again.
Nik explains why he moved away from hourly billing to fixed pricing, why writing code is often the least profitable part of a project, and how to spot "status games" in the tech industry. We also dive into the current state of AI, why bad leadership is the real problem behind failed tech initiatives, and trade stories about MMA and boxing.
We also debunk the myth that starting a business has to be miserable, explore the performative nature of "hustle culture" in Silicon Valley, and break down why engineers often struggle with consulting sales.
Data modeling underground legend Larry Burns put on a clinic this week for the Practical Data Community on how to sell data modeling to stakeholders, data shamanism, and making great data models. I don’t hand out compliments lightly, and Larry is genuinely one of my industry heroes.
Here are some things I read this week that you might enjoy.
Re-imagining the Corporation of the Future - Still Wandering
The Hidden Side of Venture Capital Funds Every Founder Should Know - The VC Corner
Japan Is What Late-Stage Capitalist Decline Looks Like - Oceandrops
AI agents are starting to eat SaaS - Martin Alderson
Iceberg in the Browser – DuckDB
Some Thoughts on Equity - Andrew’s Substack
The State of AI Coding 2025 - Greptile
Find My Other Content Here
📺 YouTube - Interviews, tutorials, product reviews, rants, and more.
🎙️ Podcasts - Listen on Spotify or wherever you get your podcasts
📝 Practical Data Modeling - This is where I’m writing my upcoming book, Mixed Model Arts, mostly in public. Free and paid content.
The Practical Data Community
The Practical Data Community is a place for candid, vendor-free conversations about all things tech, data, and AI. We host regular events such as book clubs, lunch-and-learns, Data Therapy, and more.




This nails it perfectly. The shift from code being the bottleneck to product thinking is basically what’s happening in our org right now - we stopped debating libraries and started debating what users actually need. I’ve noticed the best eng teams treat AI like a junior dev: great at boilerplate, but still needs architecture and context from someone who understands the system holistically.
I think one of the biggest misconceptions that people still have about these LLMs is that they build the entire thing. I’m treating Copilot as a “copilot,” i.e., an assistant. I’m still in full control but will ask it to do the more mundane tasks, such as “I need a function to nuke all files in an S3 bucket prefix” - things I’d normally spend 10 minutes googling and reading on Stack Overflow. I also use LLMs to refactor and build the READMEs - the READMEs are a huge one. I’ve never been a fan of documentation, like most engineers, and the LLMs have done an excellent job building the READMEs for my various projects with little editing needed.