Investing in Developer Experience That Survives the Agent Era

The last year has been unlike anything we have seen in software development. Every few weeks, a new tool appears, another breakthrough promises to change everything, and engineers react with equal parts excitement and fatigue. For people working in developer experience and experience design, the pace feels even more extreme. Just as one tool starts to feel familiar, another one demands attention.
For years, teams could confidently say no when someone asked to use a brand new tool that appeared yesterday. Today, the answer is often maybe. That uncertainty leads to a much deeper concern, especially for CTOs and developer experience leaders:
What investments made today will still matter in a few years?
Looking ahead to the end of 2026, what would we be genuinely glad we invested in for our developers and our organizations?
Many teams assume the answer is simple: agent-based coding will solve everything. Agents are powerful and transformative, but they are not the only investment software organizations need to make. To move forward responsibly, it helps to step back and ask two foundational questions:
- What principles of developer experience remain valuable regardless of how tools evolve?
- What structural investments help both humans and agents work effectively?
The answers point toward a set of fundamentals that are unlikely to be regretted.
Development Environments as Critical Inputs
Agents, like humans, depend heavily on their inputs. One of the most important inputs is the development environment itself.
This includes:
- The tools used to build code
- Package managers
- Editors and IDEs
- Build and execution workflows
The guiding principle here is simple: use industry-standard tools the same way the industry uses them. Modern models are trained on how software is built in the real world. When teams invent custom tooling, heavily modified workflows, or entirely new package managers, they create friction. While it is technically possible to force agents to work with unconventional setups, doing so means fighting the training data rather than benefiting from it.
This same logic applies to programming languages. While niche and experimental languages are fascinating, relying on them for day-to-day production work increasingly limits both agent effectiveness and organizational flexibility.
This is not a new problem. Organizations have always pushed back when someone wanted to run an untested, brand-new technology at massive scale. The difference now is that the cost of divergence is higher because agents depend so strongly on familiarity and convention.
Prefer Text-Based Interfaces Over Workarounds
Agents perform best when they can operate in the format they understand most naturally: text.
If an agent needs to interact with your system, the most effective options are:
- A command-line interface
- A clear and stable API
While it is possible to make agents click through browsers or replay recorded UI actions, these approaches are slower, more fragile, and less accurate. In environments where correctness matters, such as production systems or critical infrastructure, text-based interfaces dramatically improve reliability. If something needs to be run during development, it should be accessible through a CLI or API that agents can execute directly.
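To make that concrete, here is a minimal Python sketch of exposing a development action as a plain CLI subcommand with machine-readable output, so an agent can run it directly instead of driving a browser. The `devtool` name and the fixture-seeding task are invented for illustration.

```python
# Hypothetical example: a dev task exposed as a CLI subcommand rather
# than a UI-only action. Everything here is illustrative.
import argparse
import json
import sys


def seed_fixtures(env: str) -> dict:
    """Load test fixtures into the given environment (placeholder logic)."""
    # A real implementation would talk to your database or API here.
    return {"env": env, "records_loaded": 42}


def main() -> int:
    parser = argparse.ArgumentParser(prog="devtool")
    sub = parser.add_subparsers(dest="command", required=True)

    seed = sub.add_parser("seed-fixtures", help="Load test data")
    seed.add_argument("--env", default="local", choices=["local", "staging"])

    args = parser.parse_args()
    if args.command == "seed-fixtures":
        # JSON output is trivially parseable by scripts, humans, and agents.
        print(json.dumps(seed_fixtures(args.env)))
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run as `python devtool.py seed-fixtures --env local`; the same text interface serves scripts, CI, humans, and agents alike.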
Validation Is One of the Highest-Leverage Investments
Few investments empower agents more than objective, deterministic validation.
Validation can take many forms:
- Tests
- Linters
- Static analysis
- Schema checks
What matters most is not how the validation is created, but that it exists and produces clear, actionable error messages.
An error like a bare “500 Internal Server Error” is useless to an agent. Without context, the agent cannot infer intent or decide what to fix next. High-quality validation gives agents the same thing humans want: precise feedback that explains what went wrong and why.
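As an illustration (the order schema and field names are invented), here is a small Python validator that fails the way an agent needs it to: naming the field, the expectation, and the offending value.

```python
# Illustrative schema check: the error says what is wrong, where, and
# what was expected, instead of a bare 500.
from dataclasses import dataclass


class ValidationError(Exception):
    pass


@dataclass
class OrderInput:
    customer_id: str
    quantity: int


def validate_order(raw: dict) -> OrderInput:
    for field in ("customer_id", "quantity"):
        if field not in raw:
            raise ValidationError(
                f"missing required field '{field}'; "
                f"expected keys: customer_id, quantity"
            )
    qty = raw["quantity"]
    if not isinstance(qty, int) or qty <= 0:
        raise ValidationError(f"quantity must be a positive integer, got {qty!r}")
    return OrderInput(customer_id=str(raw["customer_id"]), quantity=qty)
```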
There is a catch. Asking an agent to write tests for an untested or poorly structured codebase often leads to meaningless results. The agent may produce tests that technically pass but validate nothing of substance. This is especially common in legacy systems that rely heavily on high-level end-to-end tests and lack solid unit-level coverage.
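A contrived Python example of the difference: the first test passes against almost any implementation, while the second encodes behavior worth protecting.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)


def test_discount_vacuous():
    # Technically passes, validates nothing; agents can generate
    # hundreds of tests like this on an untested codebase.
    assert apply_discount(100.0, 10.0) is not None


def test_discount_substantive():
    # Pins down actual expectations the system must keep honoring.
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(19.99, 0.0) == 19.99
```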
Code Structure Directly Affects Agent Performance
Agents reason better when codebases are structured well. In large organizations, there are systems so complex and poorly organized that even experienced engineers struggle to understand them. In these cases, the information needed to reason about the system does not exist in the code itself. Agents face the same limitation.
When a codebase is structured clearly:
- Agents can reason without constant trial and error
- Humans can review changes more effectively
- Both can work faster and with fewer mistakes
If the only way to understand behavior is to run the system and observe side effects, agent capability is significantly reduced. Refactoring for testability and clarity is not optional if agents are expected to be productive contributors.
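One common version of that refactoring, sketched in Python with an invented example: pull the pure decision rule out of the side-effecting job so the behavior can be read, and tested, without running anything.

```python
# Before (shown as a comment): the rule is buried in I/O, so the only
# way to learn the behavior is to run the job and watch what it writes.
#
#   def process(db):
#       for user in db.fetch_users():
#           if user.last_login_days > 90 and not user.is_admin:
#               db.deactivate(user.id)

# After: the rule is explicit, pure, and trivially testable.
def should_deactivate(last_login_days: int, is_admin: bool) -> bool:
    """Users are deactivated after 90 days of inactivity, unless admin."""
    return last_login_days > 90 and not is_admin


def process(db) -> None:
    for user in db.fetch_users():
        if should_deactivate(user.last_login_days, user.is_admin):
            db.deactivate(user.id)
```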
Documentation Still Matters, Just Differently
Documentation has always been controversial. Engineers debate what should be written, what is redundant, and what is worth the effort. From an agent’s perspective, the rule is straightforward: If information is not in the code and not written down, it does not exist.
Agents cannot rely on tribal knowledge, undocumented decisions, or verbal discussions without transcripts. They can infer structure from code, but they cannot infer intent, constraints, or external context unless it is explicitly recorded.
This includes:
- Why certain decisions were made
- Expectations about external data
- Assumptions about inputs and outputs
Some traditional documentation may become unnecessary because agents can summarize code structure automatically. But anything that exists outside the code still needs to be written down somewhere accessible.
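Two tiny, hypothetical Python examples of context that lives outside the code and therefore has to be written down:

```python
# Hypothetical: the number is easy to read, the reason behind it is not.
# The upstream payments provider throttles clients after four rapid
# attempts, so we deliberately stop one short. Without this note, neither
# a human nor an agent can recover the intent.
PAYMENT_RETRY_LIMIT = 3


# Hypothetical: an external data contract. The provider sends amounts in
# cents as strings ("1999" means $19.99); nothing in our code alone would
# reveal that.
def parse_amount(raw_cents: str) -> float:
    return int(raw_cents) / 100
```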
Code Review Is Becoming the Bottleneck
Software engineers have always spent more time reading code than writing it. Today, that reality is even more pronounced. Writing code increasingly means reviewing generated code.
As agent-based development scales, organizations see:
- A sharp increase in pull requests
- Larger and more frequent changes
- Review workloads concentrating on a small number of fast responders
When review requests are broadcast to entire teams, the same few people tend to respond every time. This does not scale. As PR volume increases, these reviewers become overwhelmed, and quality suffers.
Effective systems:
- Assign reviews explicitly
- Distribute load intentionally (a toy sketch follows this list)
- Establish clear expectations and response times
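Here is that toy Python sketch of the load-balancing idea (team names and counts are invented): route each new pull request to whoever has the fewest open reviews, rather than broadcasting to everyone.

```python
# Toy reviewer assignment: explicit, load-aware, and deterministic.
from collections import Counter


def assign_reviewer(open_reviews: Counter, team: list[str]) -> str:
    """Pick the teammate with the fewest open reviews (ties: first listed)."""
    reviewer = min(team, key=lambda name: open_reviews[name])
    open_reviews[reviewer] += 1
    return reviewer


load = Counter({"ana": 3, "ben": 1, "chi": 2})
print(assign_reviewer(load, ["ana", "ben", "chi"]))  # -> ben
```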
Another challenge is signaling. Many tools do not clearly indicate when action is required. Reviewers rely on manual messages to know when to re-review changes, which creates unnecessary friction and delay.
Quality Bars Must Remain High
There is ongoing debate about how high the quality bar should be. Not every system needs perfection. However, long-lived and mission-critical systems require a much higher standard than many teams expect. Without a clear process to reject changes that do not meet expectations:
- Codebases become harder to reason about
- Agents become less effective over time
- Productivity steadily declines
This is especially dangerous because poor quality compounds. Agents working in a degraded codebase produce worse output, which then degrades the codebase further.
One consistent observation across decades of software development is that the best code reviewers often spend the least time reviewing code. Teaching good review skills happens through practice, not meetings or documentation. There is no substitute for doing code reviews together.
Avoiding the Vicious Cycle
When teams combine confusing environments, poor validation, and weak review processes, they create a vicious cycle. Agents produce questionable output. Developers grow frustrated. Pull requests are submitted with low confidence. Reviewers, overwhelmed or unsure, approve changes anyway. Over time, both human and agent productivity decline.
The alternative is a virtuous cycle. When agents are supported by strong fundamentals, they amplify productivity instead of eroding it.
Investments That Will Not Be Regretted
Not every possible improvement is listed here, but several stand out as consistently valuable:
- Standardized development environments
- CLI or API access for development workflows
- Fast feedback loops during development, not only in CI
- High-quality validation with clear errors
- Refactoring for testability and reasoning
- Written records of external context and intent
- Faster, clearer review iterations
- High and consistently enforced quality bars
One often overlooked detail is execution speed. If tests only run in CI and take 15 to 20 minutes, agents will still run them repeatedly. Slow feedback drastically reduces productivity. Fast local validation creates a dramatically better experience for both humans and agents.
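As a rough sketch of what a fast local loop can look like, assuming the conventional layout where `src/foo.py` is covered by `tests/test_foo.py` (the mapping and commands are illustrative, not a drop-in tool):

```python
# Run only the tests mapped to locally changed files; fall back to the
# full suite when the mapping finds nothing.
import os
import subprocess


def changed_source_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines()
            if f.startswith("src/") and f.endswith(".py")]


def run_affected_tests() -> int:
    targets = ["tests/test_" + os.path.basename(f)
               for f in changed_source_files()]
    targets = [t for t in targets if os.path.exists(t)]
    cmd = ["pytest", "-q", *targets] if targets else ["pytest", "-q"]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    raise SystemExit(run_affected_tests())
```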
The Core Principle
All of these ideas can be summarized with a single principle:
What is good for people is also good for agents.
When organizations invest in these fundamentals, developers benefit regardless of how agent technology evolves. Even if a specific tool or model changes, the people will still be better supported.
That makes these investments not just safe, but essential.