How I Work Solo Without Losing Reliability

A personal account of what solo development in a resource-constrained environment taught me about methodology, adaptation, and why the right approach always depends on what the environment actually allows.

Previous article: Use Cases and Alignment in Solo Development

I do not think development methodologies should be followed as if they were laws. Not Agile, not object-oriented design principles applied as a moral code, not any particular school of thought about how good software is built. I have followed these things when they made sense. I have abandoned them when they did not. Treating a methodology as something that must be applied correctly regardless of context is, in my experience, one of the more reliable ways to make a project worse.

Some environments simply do not support the assumptions those methods make. You can understand Agile clearly, believe in its reasoning, and still find that it does not function inside a particular organization because the structural prerequisites for it do not exist. That is not a personal failure. It is a mismatch between method and environment. The correct response is not to try harder to apply the method. It is to adapt.

What I have learned, over time, is that what actually moves projects forward is reading the real constraints accurately and adjusting behavior to match them. Not applying theory and hoping reality will cooperate. The constraints are always organization, people, feedback availability, and time. Those determine what is actually possible. Methods that ignore them will fail, regardless of how sensible they are on paper. Reliability in this context does not come from following the right process. It comes from staying in the project continuously enough to catch problems before they compound, keeping the work moving through interruptions rather than executing a methodology correctly. Continuity is the precondition. Everything else is secondary.

The Environment That Changed My Thinking

The situation that taught me this most clearly was not a large corporate project. It was a job at a small company with no technical staff to speak of. I was, in practical terms, the entire development function. There was no engineering manager to escalate to, no architect to consult with, no peer review process because there were no peers. There was a business owner with a clear idea of what the system needed to do and very little patience for explanations of why software development takes time.

The company had real operational problems that software could solve. That part was clear. The people who were supposed to help think through requirements were exhausted. Not temporarily, not unusually — exhausted in the way that becomes the background condition of a job, the state that stops feeling like a problem because it has been the normal state for long enough. When daily work consumes all available capacity, there is no remaining attention for thinking about how daily work could be different. The possibility of improvement is not something they were skeptical about. It was something they had stopped considering entirely.

Getting feedback was structurally difficult. Not because people were uncooperative, but because everyone was operating at capacity. A formal review cycle assumed you had someone who could reliably show up for a meeting, engage with what was shown, and respond with considered input. That was not available. What was available was the person who had no time to look at what I built showing up to interrupt my work with something unrelated — and always, somehow, having plenty of time for that.

Documentation existed, but I want to be precise about what that meant in practice. I wrote things. I sent them. Occasionally they were acknowledged. But acknowledged is not the same as read, and read is not the same as used. When I produced a document to confirm shared understanding after a discussion, it would come back approved without any evidence of engagement. The approval was reflexive, not considered. That kind of blind sign-off does not create alignment. It creates a paper trail that implies alignment while the actual understanding on each side remains divergent. If anything, it made things worse, because I briefly believed the gap had been closed.

Iterative delivery ran into a different version of the same problem. Even when I delivered something and waited for a response, the response often did not come in a usable form. People did not always know how to articulate what was wrong. They could feel that something was off, but converting that feeling into a clear description of what should be different is a skill, and it requires a kind of reflective engagement that was not available here. What happened instead was simpler and more final: if something did not feel right, people stopped using it. Not a correction. Not a complaint. Just quiet disengagement. That is the hardest kind of feedback to work with, because by the time you notice it, the damage is already done.

What I Actually Did

What I actually ended up doing looked nothing like what I would have designed from first principles.

The first thing I standardized was the development environment and design patterns, and I did it not because it was architecturally satisfying but because I was the only person who would ever touch this code. Without consistent structure, resuming work after an interruption meant rebuilding context from scratch every time, and interruptions in this environment were constant. If I left a screen half-finished for two days and came back to it, a consistent pattern meant I could continue rather than spend twenty minutes re-reading my own code to remember what I had been trying to do. Consistency became a survival mechanism. If every screen followed the same pattern, I could context-switch more quickly. If the data access layer followed a predictable contract, I could move without rethinking from scratch. Standardization was not a quality goal in the abstract sense. It was a working memory management strategy. And in a solo environment, working memory is the only resource that cannot be replaced.
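To make the idea of a predictable data access contract concrete, here is a minimal sketch in TypeScript. This is an illustration, not the code from that project: the names (`Repository`, `InMemoryRepository`, `Customer`) and the in-memory store are hypothetical. The point is that when every entity exposes the same four operations, resuming work on any screen starts from a shape you already know.

```typescript
// Every entity repository exposes the same four methods, so the
// calling code for any screen follows one memorized pattern.
interface Repository<T, Id> {
  find(id: Id): T | undefined;
  list(): T[];
  save(item: T): void;
  remove(id: Id): void;
}

interface Customer {
  id: number;
  name: string;
}

// An in-memory implementation for illustration; real code would
// hit a database, but the contract seen by screens stays identical.
class InMemoryRepository<T extends { id: number }>
  implements Repository<T, number>
{
  private items = new Map<number, T>();

  find(id: number): T | undefined {
    return this.items.get(id);
  }
  list(): T[] {
    return [...this.items.values()];
  }
  save(item: T): void {
    this.items.set(item.id, item);
  }
  remove(id: number): void {
    this.items.delete(id);
  }
}

const customers = new InMemoryRepository<Customer>();
customers.save({ id: 1, name: "Acme" });
console.log(customers.find(1)?.name); // "Acme"
```

The value here is not the abstraction itself but the uniformity: after a two-day interruption, there is exactly one question to answer ("which entity am I working with?") instead of several ("how does this screen load, save, and delete again?").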

The communication approach changed as well. I started calling almost every day. Not because there was always something urgent to discuss, but because I had learned that the only reliable channel was one that felt like ordinary contact rather than a formal request for input. I would frame it as a quick status update, mention something specific that had just been built, and let the conversation drift. The useful information came out sideways, embedded in casual exchanges, not in structured feedback sessions. It felt inefficient by any reasonable definition of the word. It was not. The alternative, waiting for scheduled meetings that produced nothing actionable, was the thing that was actually inefficient.

There was a specific category of difficult person I learned to avoid. Not every stakeholder was worth pursuing for detailed feedback. Some were reliably negative regardless of what was shown. Others would agree to anything in the moment and contradict it a week later, apparently without any memory of having agreed. A few were the kind of people who ask questions not to understand but to perform the act of asking — involvement as visibility, not as contribution. I stopped chasing those conversations entirely. I acknowledged their comments, gave nothing actionable back, and built without them. That sounds harsh. I stopped caring about that. The alternative was letting the noisiest people in the room determine what got built for the people who actually had to use it every day.

The larger change was in planning orientation. I had started this project trying to operate with something like Agile rhythm, on the theory that iterative delivery would allow requirements to emerge naturally and reduce late-stage correction costs. Within two months I had stopped. The feedback loop that iterative delivery depends on did not exist. I was iterating, but no one was engaging with the iterations in a way that produced useful input. The delivery cadence was generating anxiety about whether things were moving rather than generating calibrated feedback about whether things were right.

What replaced it was a much more structured upfront planning phase. I mapped the workflows explicitly before writing code. I asked narrow, specific questions about particular scenarios during informal calls and transcribed the answers immediately. I built a mental model of the full system before building any part of it, because I had learned that partial delivery without that context would be misinterpreted. The irony of this was not lost on me. I had moved, in some respects, toward a more waterfall-shaped process, not because I believed waterfall was superior, but because waterfall’s assumptions matched what was actually available.

Why Agile Did Not Work There

It is worth being honest about why it happened this way.

Agile, as a way of working, assumes things. It assumes that the people who will provide feedback can be reliably present and engaged across multiple cycles. It assumes that embracing change is something the organization has the social and operational capacity to do, rather than a phrase that sounds good but means nothing when the same person is simultaneously managing a daily operational crisis and being asked to review a software prototype. It assumes that trust between developer and stakeholder is built iteratively, which it can be, when both parties have time to build it.

None of these assumptions were true in that environment. This is not a criticism of Agile as a methodology. The premises are sound under the conditions the method was designed for. The problem is that those conditions are specific, and they are not universal. Presenting Agile as the obviously correct approach for software development in general means ignoring the fact that most small organizations, and a surprising number of large ones, do not operate in ways that satisfy those premises.

Part of what makes this easy to miss is that the IT industry, and the companies that work closely with it, tend to produce people who are actually capable of this kind of engagement. A stakeholder who has worked in a technical environment, or alongside developers for years, often does have the vocabulary and the cognitive habits to participate meaningfully in iterative development. In that world, Agile’s assumptions do not feel like assumptions at all. They feel like obvious facts about how reasonable adults operate. The environment I was in was simply not that world. The people there had never had any reason to develop those habits, and there was nothing wrong with them for not having done so.

The more theoretical objection I had to my own situation was that I felt vaguely like I was doing it wrong. The literature suggests iteration and adaptation. The practice I was converging on felt rigid and predetermined. It took longer than it should have to accept that the discomfort was not a sign of failure but a sign that the theoretical model I had internalized did not apply here. The people who wrote about iterative development were not describing this room, this company, these constraints. They were describing a condition that sometimes exists in the industry. I was working somewhere that condition did not exist, and spending time trying to force it into existence was not useful.

The Method Is a Tool

What I actually concluded from all of this is something that resists reduction to a simple rule, which is probably correct because the situation does not admit of a simple rule.

The right method is the one that fits the constraints. Not the one that is most respected in the current period of the industry, not the one that the people doing the best work in high-resource environments happen to use, not the one that aligns with the belief system you arrived with. The constraints are the facts. The method is a tool. Match the tool to the facts.

In some environments, frequent informal contact will get you further than any structured review process. In others, written documentation is essential because people will not be available when questions arise. In some organizations, upfront planning prevents more problems than it creates. In others, the requirements are genuinely too uncertain to plan in detail and iteration is the only rational response. Reading which situation you are in, and adjusting accordingly rather than insisting on a pre-selected method, is the actual skill.

I no longer feel conflicted about having abandoned Agile on that project. The project moved forward. The software got built and used. The client’s operational problems improved. That is the goal. Not adherence to process.

The same principle applies to every other choice that could be framed as methodology. Object-oriented design is valuable when the problem structure benefits from it and the team has the capacity to maintain the abstraction. When those conditions are absent, the value disappears and you are left paying the cost without receiving the benefit. No principle, applied rigidly without reading context, produces good results consistently.

This is not a relativistic position. Some approaches are genuinely better than others in most conditions. But the qualifier matters. Even good approaches fail when the environment contradicts the assumptions they depend on. And in practice, you are always operating inside a specific environment with specific constraints, not inside the idealized context that methodological literature tends to assume.

The people who describe methodology in the abstract are usually describing the best-case version of a situation they have been lucky enough to work in. That is not wrong. It is just incomplete. The complete version includes the environments where those conditions do not exist, where the feedback loop is broken, where the stakeholders cannot tell you what they want until they see what they did not want, where the project has to move forward anyway. Most of the interesting problems are in that version.

The project moves forward or it does not. That is what matters. Everything else is in service of that.

Learn Cotomy

Cotomy is a DOM-first UI runtime for long-lived business applications.