Custom software used to be a dirty word
Custom software used to be the expensive, slow, fragile option. AI agents change that math, not as hype, but as a new execution layer for software work.
For years, custom software was the option you picked only when nothing else worked. And even then, it felt risky.
When people heard “custom software,” they didn’t think leverage. They thought hassle. A project that would get too expensive, take too long, and become too fragile to touch afterward.
That reputation didn’t come out of nowhere.
Custom software was expensive. You needed a whole team. Developers, an architect, a project manager, testers. Every hour added up. Every change became a discussion. Every discussion became an invoice.
Custom software was fragile. Once it went live, the real pain started. Updates. Bug fixes. Breaking integrations. Documentation that existed nowhere except in the heads of people who no longer worked there.
Custom software was slow. First weeks of scoping. Then months of building. Then testing. And by the time it was done, the original problem had usually already shifted.
That is the real reason SaaS won.
Not because standard software was always better. SaaS won because custom software was the wrong equation for a long time.
And that equation is changing now.
I did not notice the shift because I tried yet another AI chat tool. I noticed it when I gave my OpenClaw account its own GitHub user.
That is when it became concrete. Not in the vague sense of “AI might help with software someday,” but in the literal sense that this account can write code, propose changes, create pull requests, respond to issues when I ask it to, and work inside applications I already run. For execution it often uses Claude Code or OpenAI Codex, depending on the task.
That is when you see what has really changed. Not the model by itself, but the shift from isolated AI capability to actual execution power.
1. Building has become cheaper
Because AI agents can now take on a big part of the execution work, you can ship software much faster than a few years ago.
Not because developers suddenly stopped mattering. But because a lot of software work was always repetitive in nature. Setting up screens. Writing standard logic. Connecting systems. Refactors. Tests. That kind of work can now be executed or prepared much faster.
In my own work, this looks very practical. I can give OpenClaw a task, let it dispatch that task to Claude Code or Codex, and get back a proposal in the form of code, comments, or a pull request. Cursor cloud agents and GitHub Copilot coding agent show that this is no longer some niche pattern. It is a new execution layer for software work.
So the relevant shift is not just that code gets written faster. The relevant shift is that one person can now delegate work to multiple agents at once without turning every step into a mini-project.
That shifts human effort back to where it belongs: architecture, logic, and alignment with the actual process. Build costs go down, and coordination costs do too.
That is a fundamental difference.
Custom software used to be expensive because almost everything had to be produced manually. Now it is much more realistic to put something useful in front of reality quickly, without building a huge project machine around it first.
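As a rough illustration of that delegation pattern, here is a minimal sketch of a dispatcher fanning tasks out to coding agents in parallel. The agent names, routing table, and `run_agent` stub are all hypothetical, not OpenClaw's actual API; in practice that function would invoke a CLI or queue a job.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical routing table: which agent handles which kind of task.
# These names are illustrative only.
AGENTS = {
    "refactor": "claude-code",
    "tests": "codex",
}

def run_agent(agent: str, task: str) -> str:
    # Placeholder for actually invoking an agent (CLI call, API request,
    # queued job). Here it just returns a fake "pull request" label.
    return f"{agent}: opened PR for '{task}'"

def dispatch(tasks: dict) -> list:
    # Fan out: each task goes to its agent without a human coordinating
    # every step; results come back as reviewable proposals.
    with ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(run_agent, AGENTS[kind], task)
            for kind, task in tasks.items()
        ]
        return [f.result() for f in futures]

results = dispatch({
    "refactor": "extract billing logic into its own module",
    "tests": "add regression tests for invoice rounding",
})
for line in results:
    print(line)
```

The point of the sketch is the shape, not the plumbing: one person defines the tasks, the agents execute in parallel, and everything comes back as a proposal a human can review.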
2. Maintenance is less likely to become a nightmare
This may be the bigger shift.
One of the classic objections to custom software was not just the build phase, but everything after it. Small fixes got postponed. Integrations aged badly. Nobody had time to keep things clean. Internal systems slowly turned into problems everyone kept moving to “later.”
That math is changing too.
AI agents can now take on or prepare part of monitoring, bug detection, small changes, and recurring maintenance. The work that used to sit untouched because nobody had time can now be handled much more intelligently.
This is also where it becomes personally useful for me. If I give my OpenClaw account a daily instruction to check whether there have been bugs in the applications I run, it does not just tell me something broke. It can already prepare tasks, gather context, and in some cases even take a first execution step.
You can see this clearly in Sentry Seer Autofix, which analyzes issues, performs root cause analysis, and can propose code fixes or even a PR. On the workflow side, you see the same pattern in Linear Agents and the Codex integration in Linear: agents are not a gimmick there, but actual operators inside the work loop, while the human remains accountable.
That matters more than it sounds. Maintenance starts to shift from “someone needs to look into this later” to an operational loop where detection, analysis, and first execution are already largely organized.
Not fully automatic. Not without oversight. But enough to make the gap between “custom software” and “maintenance problem” a lot smaller.
That alone removes much of the old fear from the conversation.
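A minimal sketch of that detection-to-task loop, using inline sample data in place of a real issue feed. A production version would pull from an issue tracker such as Sentry; the field names and the threshold below are assumptions, not any tool's actual schema.

```python
# Sample issues standing in for a tracker's API response; in reality you
# would fetch these, and the exact fields would differ per tool.
issues = [
    {"id": "101", "title": "TypeError in invoice export", "count": 42, "status": "unresolved"},
    {"id": "102", "title": "Slow dashboard query", "count": 3, "status": "unresolved"},
    {"id": "103", "title": "Old login bug", "count": 500, "status": "resolved"},
]

def triage(issues, min_count=10):
    """Turn noisy issue data into a short list of agent-ready tasks.

    Rule of thumb (an assumption, tune to taste): only unresolved issues
    seen at least `min_count` times get an automated first pass.
    """
    tasks = []
    for issue in issues:
        if issue["status"] != "unresolved" or issue["count"] < min_count:
            continue
        tasks.append({
            "issue_id": issue["id"],
            "instruction": f"Investigate and propose a fix: {issue['title']}",
        })
    return tasks

for task in triage(issues):
    print(task["issue_id"], "->", task["instruction"])
```

Each resulting task is something an agent can pick up: gather context, run a root cause analysis, and prepare a first fix for human review.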
3. Learning has become faster
The third shift may be the most important one.
Because AI agents also help with prototyping, iteration, and testing, you can put together a prototype in a day and test a first internal tool on real data within a week. That makes failure cheaper.
The examples here may be even more convincing. With Codex or Claude Code, you can run multiple iterations quickly, let Cursor cloud agents build and test in parallel, and orchestrate those runs from chat or operational workflows via OpenClaw.
That makes it possible to build in a more personal and pragmatic way. Not with a long requirements phase first, but with something real. In my world, that often does not mean building an entire new application. It means putting a thin system-of-action layer on top of existing systems of record and seeing whether the real work becomes faster, better, or more consistent.
For B2B teams, that may be the real shift. You do not need to design and roll out a full application before learning anything. You can start where the friction is, in routing, exceptions, handoffs, triage, and execution, and see within days whether the process actually improves.
You no longer need six months of budget, planning, and internal politics just to discover that your assumption was wrong. AI agents make it easier to put something working in front of reality faster. You see sooner what holds up. And you adjust sooner.
That changes not only how you build, but also how you decide.
Once experimentation becomes cheaper, you can be much more selective about what deserves to be pushed further.
That also lines up with how Anthropic describes it in Building effective agents: the breakthrough is not just in the model, but in systems where models use tools, execute steps, and iteratively complete work.
This does not mean you should build everything yourself now
That would just be the next bad reflex.
The lesson is not that SaaS is dead. The lesson is also not that every company should suddenly become an internal software company.
If anything, the build-vs-buy question has become sharper. Systems of record are still usually the thing you buy. That is where standardization lives. That is where scale lives. But systems of action, the layer where your own processes, rules, exceptions, and internal workflows come together, have become much more realistic to build and keep adapting yourself.
The real lesson is that custom software is no longer automatically the expensive, slow, risky option it used to be.
For processes that actually differentiate your business, custom software may now be the smarter choice.
Because it is faster to validate.
Because it fits the way your company actually works.
And because a heavy SaaS implementation often turns out to be expensive, slow, and difficult to maintain too, just in a different way.
Not because agents are magical. But because they make exactly the kind of work cheaper that used to break the custom-software equation: execution, iteration, testing, and feedback loops.
That is at least how I see it now. Not as theory, but because I can already see what changes in my own stack once an agent is allowed not just to think with me, but to actually do work.
Custom software used to be a dirty word.
For many companies, it is time to revisit that judgment.
Further reading
- OpenClaw docs, ACP Agents
- Anthropic, Claude Code overview
- OpenAI Developers, Codex
- Cursor docs, Cloud Agents
- GitHub Docs, About GitHub Copilot coding agent
- Sentry docs, Seer Autofix
- Sentry blog, Seer is generally available
- Linear docs, Agents in Linear
- Linear, Codex integration
- OpenAI, Codex for Linear
- Anthropic Engineering, Building effective agents
