Eroding the Edges: (AI-Generated) Build vs. Buy and the Future of Software
The Weekend Windup #9 - Cool Reads, Events, and More
Here’s a question I’ve been asking myself lately, and judging by my conversations with vendors and practitioners, I’m not alone: Why would I buy software when I can use AI to build it in 20 minutes?
Now, I realize that sounds absurd and even arrogant. But this question represents a fundamental shift happening right now in how we think about software, platforms, and the entire technology stack our businesses depend on. And we’re just getting started.
For decades, we’ve lived with the “build versus buy” decision. The familiar dilemma looks like this:
Option A - Build. Do we sink months or years of engineering resources into building a solution in-house?
Option B - Buy. Do we purchase something off the shelf from a vendor?
Option C - Open Source. Or we could adopt an open-source alternative and maintain it ourselves. Of course, if a vendor backs the open-source project, what happens when they change their mind about licensing, governance, or community support?
Each option has clear trade-offs. Build gives you customization but costs you time and money. Buy gives you speed but locks you into someone else’s roadmap. Open source gives you flexibility but demands ongoing maintenance.
Something changed earlier this year, when AI got really good at writing code. The question has shifted from “build versus buy” to “AI-generated versus buy.” This is a huge deal that people are only now beginning to understand, and things are going to change in a major way.
What I’m Seeing Today
Let me ground this in what I’m actually experiencing, not navel-gazing at some far-off sci-fi scenario. Right now, today, I’m using AI to replicate applications I would otherwise purchase. Yes, these are relatively simple applications at the moment. But I’m watching tools like Claude Code, Cursor, and Copilot become progressively more capable with each passing week.
When I need tools or automations for my business, I brainstorm with my AI friends ChatGPT, Claude, or Gemini. Once we’ve hashed out the details, I’ll usually ask the AI to create a Claude.md file; other times, the AI will supply the code. In the case of Claude Code, I’ll plop the Claude.md file in and let Claude Code do its thing. With a bit of nudging and tweaks, I’ll usually have a functional application within 20 minutes. Sometimes this is a prototype. Other times it’s a working application.
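To make that concrete, here’s the scale of tool I’m talking about. This is a minimal, entirely hypothetical sketch (the folder layout, the digest format, and the use case are all made up), but it’s representative of what one of these 20-minute sessions produces: a few dozen lines of Python that replace a chore I’d otherwise grind through by hand or buy a tool for.

```python
# Hypothetical example of a "20-minute tool": scan a folder of markdown notes,
# pull out every link, and print a draft "Cool Reads" digest. The paths and
# format are illustrative, not a real project.
import re
import sys
from pathlib import Path

# Matches markdown links like [Some Title](https://example.com)
LINK_PATTERN = re.compile(r"\[([^\]]+)\]\((https?://[^\s)]+)\)")

def collect_links(notes_dir: Path) -> list[tuple[str, str]]:
    """Return (title, url) pairs found in every .md file under notes_dir."""
    links = []
    for note in sorted(notes_dir.glob("*.md")):
        links.extend(LINK_PATTERN.findall(note.read_text(encoding="utf-8")))
    return links

def build_digest(links: list[tuple[str, str]]) -> str:
    """Format the links as a simple, deduplicated newsletter-ready list."""
    lines = ["Cool Reads (draft)", ""]
    seen = set()
    for title, url in links:
        if url not in seen:
            seen.add(url)
            lines.append(f"- {title}: {url}")
    return "\n".join(lines)

if __name__ == "__main__":
    notes_dir = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("notes")
    print(build_digest(collect_links(notes_dir)))
```

Nothing fancy, and that’s the point: the AI writes it, I nudge it, and I’m testing the idea before lunch.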
No matter what, my learning cycles shrink considerably. I can test an idea and, within a short time, get a better sense of whether my thinking is on track, along with a tool to try. Do this multiple times a day, and you’ve racked up numerous learning cycles. This feedback loop would’ve taken orders of magnitude longer in the pre-genAI days. Now I can iterate toward solutions far more quickly. Is it always perfect? No. But perfection isn’t the point of these learning cycles; the point is deciding on a path to take. Once I’ve settled on a path, I can perfect the tool I built… once again, using AI alongside my own judgment and coding. And the bottleneck? It isn’t the AI’s capability. It’s me crafting the right prompts and instructions.
Now, you might be thinking: “Sure, Joe, but those are cute, simple apps. What about real enterprise software? What about the complex stuff? This AI nonsense won’t work at my Serious Enterprise, Inc.” Fair question. And you’re right…for now.
Current AI tools aren’t ready to shoulder the entire burden of enterprise application development. They lack the context to understand your enterprise’s nuances, tacit knowledge, and institutional quirks. Most enterprises have insane complexity. The edge cases are endless. The integration challenges are real. IT project failure rates have hovered around 80% for decades.
Yet businesses keep building or buying software. They also keep generating data.
It would be foolish to fixate on today’s models and extrapolate a flat line of progress. The models and tools might not be quite up to snuff today, but they’re improving at an incredibly rapid pace. What seems impossible today often becomes a solved problem within weeks or months. And if you read the papers coming out of the top AI research labs, they’re working on things that won’t hit the models for a year or more. The rate of progress in the labs is mind-bending and borderline science fiction.
We’re not talking about incremental improvement. We’re talking about exponential growth in capability. A fundamental question is now being discussed: How long will it take for AI to create the production-grade software your business actually needs?
Fluid Software and The Moat Problem
This brings me to what I call “the moat problem.” If you’re a software vendor, what constitutes a defensible moat when AI can seemingly create tools on the fly? Today, this seems like a silly question. But again, given how quickly AI is improving, it’s a question that should keep you up at night if you’re a vendor.
This week, I brought up this question on a panel at Small Data SF, and one of the panelists, George Fraser (CEO of Fivetran), said it’s often on his mind. He runs one of the data industry’s most valuable companies, and he’s thinking hard about AI’s impact on his business. And he’s not alone. When I speak with SaaS vendors, the defensible moat in an era of rapidly improving AI coding tools is a very legitimate concern. There are no easy answers. No guarantees. Some even ask how AI could destroy their company. That’s a healthy level of paranoia.
Zooming out, let me paint you a picture of what I’m seeing. Imagine you’re a software vendor. You’ve got a great product. Loyal customers. Strong revenue. You feel secure. But here’s the thing: you don’t need to be replicated entirely to lose your business. Most people use only a fraction of a tool’s feature set, so all it takes is for the edges of your offering to be reproduced, and suddenly your value proposition weakens. Multiply this by dozens, hundreds, thousands of times, and you’re suddenly vulnerable. AI doesn’t need to create a wholesale replacement. It just needs to build the functionality a person or company needs.

And especially with on-demand AI-generated UIs like Imagine with Claude, we’re seeing the very early stages of on-demand, personalized, hyper-customizable software. I envision a future where software evolves and adapts to specific situations as needs arise. Not static applications that everyone uses the same way. Not one-size-fits-all platforms. Instead, fluid, adaptive software that molds itself to your needs. No need to wait for a vendor to add a feature, or for a pull request to land in an open-source project. Just tell the AI what you need, and voila!
Of course, some tools and technologies are too complex for AI to build, at least for now. Hardcore backend infrastructure comes to mind: databases, streaming protocols, processing platforms. But for every one of those, countless things can be built “just good enough” to fit someone’s use case. This flips the script. What if the entity creating the software (the AI) has seen more examples of a particular use case than even the original developers?
The stickiness that vendors rely on will increasingly be questioned. Unless your company possesses some inherent, defensible advantage (proprietary data, network effects, regulatory moats, deep customer relationships), you’re frighteningly vulnerable. Maybe not today or next year, but soon. Like it or not, this is the world we’re rapidly approaching, and vendors will need to pay attention and adapt accordingly.
“But…Hallucinations!!! And Didn’t You Criticize Vibe Coding?”
I know what some of you are thinking: “But Joe, what about hallucinations? What about reliability? What about all the ways these models get things wrong?”
Valid concerns. Absolutely. But do you really believe the AI labs are unaware of these issues? Be very, very careful extrapolating from the shortcomings of today’s models; what’s sitting in the labs right now is likely two to five years ahead of what you’re using. The teams at Anthropic, OpenAI, Google, and others aren’t sitting around ignoring hallucinations. They’re engineering solutions. They’re building better architectures. They’re creating more reliable systems. Tomorrow will look vastly different from today.
Even if the AI bubble “pops” (and maybe it will), I don’t expect the progress in AI research to evaporate or the tools to magically disappear. The toothpaste is out of the tube. As with most bubbles, the underlying technology continues to provide value; bubbles happen when expectations outpace reality or budgets. But the technology itself? That’s real. I don’t know anyone in software who wants to return to a pre-AI world. These tools are truly remarkable, and everyone I know is discovering their benefits every single day.
And yes, I criticized vibe coding in my talk earlier this year called “The Great Pacific Garbage Patch of AI Slopware.” In that talk, I made the point that the world will be full of half-baked, disposable software. My opinion hasn’t changed, and we will undoubtedly end up with a shit ton of crappy AI-generated software (and the maintenance, safety, and security issues that come with it). But we are where we are, and AI vibe coding ain’t going anywhere. I accept that this is reality. So, rather than be the cynical mouthbreathing anti-AI neckbeard, I can just be a cynical mouthbreathing neckbeard.
What This Means For You
So what does all this mean for you in practical terms?
If you’re a software vendor, your moat better be deep and wide. Platform effects. Network effects. Proprietary data. Deep customer relationships. Strong community and mindshare. Something that can’t be easily replicated by an AI with context. The days of competing purely on features or implementation quality are numbered.
If you’re building data platforms or an engineering team, start thinking of AI not as a productivity tool but as a fundamental shift in how software is created and deployed. Your role is evolving from “builder” to “orchestrator.” The question changes from “How do we build this?” to “What should we build, and can AI handle it?”
If you’re making technology decisions for your company, the calculus is changing. That “build versus buy” spreadsheet you’ve been using? It’s increasingly obsolete, so factor in what AI can generate, how quickly, and at what cost. The ROI calculations look entirely different when development time shrinks from months to minutes, and the cost of code drops to near zero.
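For illustration, here’s a toy version of that new spreadsheet. Every number below is a made-up placeholder (swap in your own engineering rates, subscription fees, and maintenance estimates); the point is the shape of the comparison once a third column exists.

```python
# Back-of-envelope build vs. buy vs. AI-generate comparison.
# All figures are placeholders for illustration only.

def three_year_cost(upfront: float, annual: float, years: int = 3) -> float:
    """Total cost of ownership over a fixed horizon."""
    return upfront + annual * years

# Option A: build in-house (months of engineering upfront, plus ongoing maintenance)
build = three_year_cost(upfront=6 * 20_000, annual=40_000)

# Option B: buy a SaaS subscription (no upfront cost, recurring fees)
buy = three_year_cost(upfront=0, annual=60_000)

# Option C: AI-generated "good enough" tool (days of prompting, light upkeep)
ai_generated = three_year_cost(upfront=5_000, annual=10_000)

for label, cost in [("Build", build), ("Buy", buy), ("AI-generated", ai_generated)]:
    print(f"{label:>12}: ${cost:,.0f} over 3 years")
```

The exact numbers don’t matter. What matters is that the third row didn’t exist in the old spreadsheet, and it keeps getting cheaper.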
To sum up, we’re quickly entering an era where software becomes more fluid, more personalized, and more adaptive. The barriers between “builder” and “user” will blur. The question isn’t whether you can build something, but whether there’s any reason to buy it instead. Open-source projects will be in an interesting spot: inundated with AI-slop pull requests while random people with random needs re-create slices of their functionality on demand.
The future isn’t coming. We’re already getting started. Build.
Have a fun weekend,
Joe
Awesome Upcoming Events
The Practical Data Community is hosting our first hackathon next weekend (November 14 to 16). We’re going to vibe code the most useless apps ever. If you’re interested, head over to the Practical Data Community Discord for updates.
I’m speaking at Sifflet’s Signals 2025, an online conference all about Trust by Design. Four days. Fifteen sessions. Firesides, keynotes, and panels designed to give you actionable insights from the world’s leading data voices.
When: November 17 to 20
Where: Virtual
Bonjour! I’m back in Paris for Forward Data Conference part deux.
Join me and my friends for an awesome day of talks, croissants, and coffee.
Forward is rapidly becoming one of the best indie data events in the world, so show up if you can. See you there.
But wait, there’s more!
My other upcoming events are posted here.
Cool Reads
Here are some things I read this week that you might enjoy.
Joe Magerramov’s blog: The New Calculus of AI-based Coding
The Learning Loop and LLMs - Martin Fowler
In a First, AI Models Analyze Language As Well As a Human Expert - Quanta Magazine
We’re All Living in Different Data Decades - Seattle Data Guy
The Pulse: Amazon layoffs – AI or economy to blame? - The Pragmatic Engineer
Alibaba-backed Moonshot releases new AI model Kimi K2 Thinking - CNBC
AI and the Coming White-Collar Political Upheaval - WSJ
Find My Other Content Here
📺 YouTube - Interviews, tutorials, product reviews, rants, and more.
🎙️ Podcasts - Listen on Spotify or wherever you get your podcasts
📝 Practical Data Modeling - This is where I’m writing my upcoming book, Mixed Model Arts, mostly in public. Free and paid content.
The Practical Data Community
The Practical Data Community is a place for candid, vendor-free conversations about all things tech, data, and AI. We host regular events such as book clubs, lunch-and-learns, Data Therapy, and more.
Closing Question
What are you seeing in your world? Are you using AI to build instead of buy? What applications have you generated that surprised you? Drop a comment. I’d love to hear your experiences.
Want your article or event featured here?
Got an article or an upcoming event you want featured here? Please submit them here.





