Reflections on AI-Assisted Coding and Modern Software Development
Below are a few observations on the current state of AI-assisted coding and how it is beginning to change software development.
1. Software Velocity
Codebases that are easier for AI tools to work with are likely to see higher development velocity over time. This has less to do with whether the code was written by humans or machines, and more to do with how predictable and conventional it is.
I recently worked on several similar projects where unit tests had been written in unconventional ways. Generating test boilerplate is a strong use case for coding assistants, but in these codebases it introduced friction. The agents either attempted to rewrite tests using more conventional patterns, or produced incorrect mocks when trying to follow the existing style. In both cases, the result required manual correction.
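As a sketch of the kind of contrast I mean (all names here are invented for illustration, and I assume Vitest; the Jest equivalents have the same shape): the first test uses the standard mocking pattern agents reproduce reliably, while the second hides the same behaviour behind a bespoke harness of the sort that caused the friction.

```typescript
import { test, expect, vi } from "vitest";

// Conventional: a standard mock. This pattern is ubiquitous in
// training data, so agents generate and extend it reliably.
const fetchUser = vi.fn(async (id: number) => ({ id, name: "Ada" }));

test("conventional mock", async () => {
  fetchUser.mockResolvedValueOnce({ id: 1, name: "Ada" });
  expect((await fetchUser(1)).name).toBe("Ada");
});

// Unconventional: a project-specific harness that hides stubbing
// behind custom indirection. Asked to add a test here, an agent
// tends to either bypass the harness or wire the stub up wrongly.
function makeHarness() {
  const stubs = new Map<string, unknown>();
  return {
    withStub(key: string, value: unknown) {
      stubs.set(key, value);
      return this;
    },
    call(key: string): unknown {
      return stubs.get(key);
    },
  };
}

test("bespoke harness, same behaviour", () => {
  const h = makeHarness().withStub("api.fetchUser", { id: 1, name: "Ada" });
  expect((h.call("api.fetchUser") as { name: string }).name).toBe("Ada");
});
```

Both tests assert the same thing; only the second one fights the assistant.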
Features such as Claude Code’s Agent Skills may reduce this problem over time. Even so, AI tools amplify the cost of irregularity. Code that follows common conventions and prioritises clarity is easier for agents to modify safely. Over time, that translates directly into higher velocity.
2. Changing Expectations of Engineers
Since the launch of ChatGPT, I have worked at a few companies and noticed a gradual shift in what is expected of engineers. As experienced developers increasingly succeed in working outside their core areas while maintaining reasonable delivery speed, this behaviour has started to feel less exceptional and more assumed.
A Java engineer may now be asked to fix a bug in a complex React codebase. A full-stack JavaScript developer might be expected to debug broken Splunk logging when the original DevOps team no longer has capacity to help. AI tools make this possible, but they also quietly raise expectations.
This change affects junior engineers most. As the baseline expectation widens, it becomes harder for juniors to surface immediate, demonstrable value. This is not a judgement on individual ability, but a structural shift in how contribution and productivity are assessed.
3. AI-Assisted Coding Strategies
For non-trivial work, I have found that using multiple large language models produces better results than relying on a single model. In practice, using both Claude Code and OpenAI Codex in parallel has been more effective than assigning specialised sub-roles to agents derived from the same model.
This approach is unnecessary for simple tasks. It becomes valuable when the work benefits from planning, design trade-offs, or early architectural thinking. Different models have different training data and failure modes, and those differences often surface useful alternatives or catch errors the other misses. The pattern is similar to academic peer review: independent perspectives surface different issues, and human curation selects the best path forward.
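A minimal sketch of what this looks like in practice, assuming the non-interactive modes of the two CLIs (`claude -p` and `codex exec` at the time of writing; flag names may differ across versions). The prompt is invented for illustration.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function plan(prompt: string): Promise<void> {
  // Ask both models the same question independently and in parallel.
  const results = await Promise.allSettled([
    run("claude", ["-p", prompt]),
    run("codex", ["exec", prompt]),
  ]);

  // Print both answers side by side. The human then curates: pick
  // one plan, merge the two, or feed one model's output back to the
  // other for critique.
  const names = ["claude", "codex"];
  results.forEach((result, i) => {
    console.log(`\n=== ${names[i]} ===`);
    console.log(result.status === "fulfilled" ? result.value.stdout : result.reason);
  });
}

plan("Propose a migration plan for moving session storage from Redis to Postgres. List the risks and open questions.");
```

The value is less in the tooling than in the habit: for planning-heavy tasks, two independent first drafts are cheap, and the points where they disagree are usually the most informative part of the output.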
All AI-assisted coding workflows I have seen in the workplace so far still rely on a single model.
4. Repository Design
I have never held strong views on monorepos versus multiple repositories; both approaches can work. However, the rise of LLM-based coding assistants shifts the trade-off somewhat in favour of monorepos.
Giving agents visibility across a larger portion of the codebase, rather than restricting them to narrow interface boundaries, is an advantage. As context windows increase and long-range reasoning improves, this advantage is likely to grow.
This does not mean interface boundaries stop mattering. It means that enforcing them through repository boundaries increasingly works against what AI tooling needs: broad, direct visibility into both sides of a contract.
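A hypothetical illustration of the visibility point (the layout, package alias, and names are invented): in a monorepo checkout, an agent changing a shared contract can see and update every consumer in one pass; split across repositories, the consumer side is simply absent from its context.

```typescript
// packages/shared/src/user.ts -- the contract both sides import.
export interface User {
  id: number;
  name: string;
  createdAt: string; // adding a field here is safe for an agent to
                     // propagate when every consumer is visible below
}

// services/api/src/serialize.ts -- producer side.
import type { User } from "@acme/shared/user";

export function toResponse(user: User): string {
  return JSON.stringify(user);
}

// apps/web/src/profile.ts -- consumer side, in the same checkout.
import type { User } from "@acme/shared/user";

export function profileLabel(user: User): string {
  return `${user.name} (since ${user.createdAt})`;
}
```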
5. On Replacing Software Engineers
For AI agents to replace software engineers, rather than assist them, several hard problems would need to be solved:
- reliable requirements gathering, including recognising ambiguity and asking for clarification rather than making assumptions
- deep competence across software domains, not just familiarity with common patterns
- the ability to work autonomously for long periods without quality degrading or progress stalling
- accountability for decisions, trade-offs, and failures under uncertainty
- full understanding of, and access to, an organisation’s engineering environment, including codebases, internal documentation, project management tools, build pipelines, infrastructure, networking constraints, and compliance requirements
Until these gaps are closed, AI remains a force multiplier for engineers, not a replacement.