I started my career as a developer in the 1990s, writing C, C++, and Java code on Unix and OS/2 systems. It was a hands-on, deeply technical decade, one where I built things from the ground up and shaped my understanding of software through direct experience.
Over time, I transitioned into architecture-oriented roles. For the next 15 years, I led teams, defined roadmaps, coordinated across functions, and carried the title of “Engineering Manager.” While I found a rhythm and a sense of leadership in that role, there was always a lingering disconnect: I rarely felt like I was contributing something tangible to the product itself. I wasn’t building; I was guiding the building. More importantly, I couldn’t apply the experience I had gained building resilient, scalable software to make the final products better.
I made a few attempts over the years to get back into coding, but every time, I hit the same wall. The software development landscape had evolved dramatically. The stacks were more fragmented, the tools more specialized, and the learning curves steeper. What used to be a self-contained system now spans multiple frameworks, services, and disciplines, each demanding a depth of knowledge I no longer had. Frankly, I felt intimidated.
Around 2018, I began getting opportunities to work in startup environments, particularly on solutions involving blockchain. These projects marked my return to hands-on software development after years in management. Unlike the enterprise world I had grown used to, startups gave me greater flexibility, creative control, and direct involvement in product engineering. Cloud adoption had also matured by then, making infrastructure provisioning far simpler than it had been when I last wrote code professionally. No more waiting on hardware or managing complex on-prem setups. This convergence of autonomy, modern tooling, and lean teams made it feasible and exciting to build systems from scratch again.
Over the next four years, I stayed in this builder’s mode, contributing in ways that were personally fulfilling. But despite the satisfaction, I often felt I was still operating within limits: making steady progress, yes, but not moving the needle as much as I knew I could.
Then, in late 2022, ChatGPT arrived and something shifted. For the first time, I felt truly amplified. Tasks that once took days (exploring new frameworks, interpreting dense specifications, troubleshooting obscure integration issues) now moved at the speed of conversation. Suddenly, I had a partner who could help scaffold boilerplate code, reason through architectural trade-offs, and provide contextual explanations tailored to what I was building. Tools like ChatGPT and Claude didn’t just make me faster; they made me braver. I could explore, experiment, and make decisions with greater confidence, without getting bogged down in endless documentation or fragmented tutorials.
I wasn’t just coding again; I was building better products with more clarity, more speed, and less hesitation. It felt like I had rediscovered a kind of leverage I didn’t know I’d lost. The arrival of Large Language Models didn’t change what I could build. It changed how fluidly and fearlessly I could build it.
This blog is a personal take on what it means to engineer software in a world where AI is part of the toolchain. I’m sharing my views, shaped by hands-on experience, on how AI-assisted tools are changing the way we design, build, and reason about solutions. It’s not about AI replacing developers, but about how these tools expand our capabilities. When used thoughtfully, they amplify our experience, sharpen our intuition, and accelerate the path from idea to implementation.
Diminished Resistance
Adopting new techniques or tools for software solutions used to be a daunting task. I’d spend hours sifting through dense documentation, browsing Stack Overflow, skimming Reddit threads, watching YouTube tutorials, and piecing together insights from scattered blog posts. Even then, I’d often wonder if I’d grasped the tool well enough to use it effectively.
Large Language Models (LLMs) have changed that. Now, I can have a real-time, conversational exchange with an AI tool that tailors explanations and code examples to my project’s needs. I can ask follow-up questions, dive into edge cases, or request snippets that fit my specific use case—all without wading through endless resources.
This has transformed how I work. Instead of sticking to familiar tools out of habit, I’m eager to experiment with new frameworks, patterns, or libraries that better suit the problem. For example, when building a vehicle titling platform at a startup, I quickly adopted a modern confidential data sharing framework by using AI to clarify its setup and integration, saving weeks of trial and error. My solutions are now developed faster, with architectures that feel more current and optimized. The AI equips me to tackle unfamiliar tools with ease, making exploration a natural part of my workflow.
AI’s Hidden Productivity Advantage
A lot of the discourse around AI in software development, especially on social media, tends to fixate on a single question: “How much code was written by AI?” But this focus severely undersells the true productivity gains AI offers.
In my experience, the biggest time savings haven’t come from generating code; they’ve come from everything that happens before and around the code.
I’ve used LLMs to:
- Brainstorm and shape product features from vague business requirements
- Interpret dense technical standards and industry specifications
- Evaluate trade-offs between competing technology stacks
- Analyze large, unfamiliar codebases and legacy systems
- Plan refactoring and break down complex problems into manageable units
These are high-leverage, high-cognition tasks, and they’re typically where experienced engineers spend most of their effort. Traditionally, these steps involve research, design, architecture, and decision-making: often collaborative, often iterative, and almost always expensive in terms of time and context-switching.
What surprised me was how much AI could assist in these stages. Not by making the decisions for me, but by accelerating my understanding and helping me reason through complexity. The AI became a conversational partner, one that could simulate expertise in multiple domains, reduce ambiguity, and surface edge cases I might have missed.
When it finally came time to write the code, it often felt like the easy part because the real heavy lifting had already been done with the help of AI.
Ironically, the most visible part of AI-assisted development, code generation, turned out to be the smallest lever for productivity in my workflow. The larger gains were in design clarity, better decision-making, and reduced cognitive friction. And those savings compound over time.
AI in software development isn’t just about writing code faster. It’s about thinking better, earlier, and more deeply. That’s the real unlock.
The Truth About AI-Coded Software
There’s been a lot of buzz lately about AI “writing code.” Even Satya Nadella, CEO of Microsoft, has said that as much as 30% of the code in Microsoft’s repositories is now written by AI. While that’s an eye-catching statistic, the phrasing deserves a closer look.
Saying “AI wrote the code” can be misleading. It implies that the AI acted independently, like an autonomous software engineer operating in the background. In my experience, this couldn’t be further from the truth.
When I use AI tools to generate code, whether it’s Copilot, ChatGPT, or a more advanced agent, the work is still fundamentally mine. I decide what needs to be built, how the components should interact, and where the trade-offs lie. While the AI can produce code or connect components, I define the solution’s structure, guide its design, and set the standards for quality.
More importantly, I still own the code, not just legally or procedurally, but morally and operationally. If it breaks in production, if it causes a security issue, if a performance bug shows up two weeks later, the responsibility is mine. AI may assist in generating code, but it does not absolve me from understanding and validating it.
There’s also a perception that AI agents can “just do things”: write code, test it, deploy it. But these agents don’t act unless we tell them to. They must be orchestrated, provisioned, and granted access to resources, environments, APIs, and repositories. Even the most advanced autonomous agents today depend on a human to define the boundary conditions, assign goals, and supervise execution.
AI doesn’t replace the developer; it amplifies them. It’s not a magic intern that works independently. It’s a set of tools that multiplies the creative and technical bandwidth of the person wielding them. And in my case, even if every line was generated by an LLM, I still wrote the code because I’m the one responsible for what it does.
Prompts as Living Documentation
One of the most surprising benefits of using AI tools is how the prompts I write double as a form of living documentation.
In traditional software development, documenting the problem-solving process often falls by the wayside. Once a solution is built, we rarely record the questions we asked, the dead ends we explored, or the alternatives we weighed. With AI, that process is captured naturally. Each prompt I write, whether for a high-level architecture or a low-level implementation detail, creates a record of the problem-solving process, reusable for future challenges.
Revisiting these prompts lets me trace:
- How I framed the initial problem
- What constraints or edge cases surfaced
- Which design options I considered and why some were set aside
- The trade-offs I navigated in the final implementation
This prompt history is more than a chat log; it’s a clear, time-stamped account of my technical decisions. Its conversational style makes it easier to understand than traditional documentation, allowing teammates or my future self to follow the logic or repurpose prompts for similar tasks. For instance, I’ve saved the prompt/response pairs from extending our NFTs to meet third-party verification standards, where they act as both context and a guide for maintenance.
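To make this concrete, here is a simplified sketch of what one saved entry might look like; the format, date, and options below are my own illustration, not a standard:

```text
## Prompt (2024-03-18): extend NFT metadata for third-party verification
Context: titling tokens on-chain; the verifier requires signed attestation fields.
Ask: how do we extend the metadata schema without breaking existing consumers?
List the options and their trade-offs.

## Response (summary)
- Option A: add optional attestation fields (backward compatible)
- Option B: version the schema and dual-publish (safer, but more overhead)
Decision: Option A; revisit Option B if a second verifier requires conflicting fields.
```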
What once seemed like a fleeting interaction now feels indispensable. Prompts have become a core part of my documentation strategy, streamlining collaboration and preserving insights for the long term.
AI: Shaped by Your Expertise
Working closely with peers to compare solutions, review code, and debate design choices revealed a striking pattern: even when using the same AI tools, our outputs look remarkably different. The solutions we build with LLMs carry the distinct mark of our individual experience, preferences, and problem-solving styles.
Why the variation? It comes down to how we shape the AI’s responses. The prompts we craft, the questions we ask, and the suggestions we refine all draw from our unique perspectives. A developer steeped in functional programming might nudge the AI toward elegant, declarative code. Another, with a background in microservices, might prioritize modular architecture or lean on familiar libraries. These personal tendencies don’t fade when AI enters the equation; they shine through, molding the AI’s output to fit our mental models.
This diversity is a strength. It shows that AI adapts to the developer’s vision, acting as a flexible tool rather than a one-size-fits-all solution. While LLMs can suggest code snippets, outline patterns, or clarify trade-offs, they lack the context of our business needs, team dynamics, or long-term goals. That’s where our expertise takes over, steering the AI to produce solutions that align with our specific challenges and aspirations.
For me, this means the code I build with AI feels distinctly mine. It reflects years of wrestling with systems, learning from failures, and honing instincts for what works in production. Whether I’m sketching a new feature or refactoring a legacy module, the AI amplifies my approach, not its own. Like a brush in a painter’s hand, it extends my ability to create, leaving my signature on every line.
This personalization makes AI-driven development a deeply human endeavor. It’s not about the tool dictating the outcome; it’s about harnessing it to express our craft, solve problems, and build systems that bear our unique imprint.
AI has transformed my role as a software engineer, streamlining coding, design, and documentation, and restoring my confidence as a builder. But this is just the start. I’m eager to adopt advanced AI agents: autonomous collaborators that synthesize requirements, draft documentation, scaffold modules, and test edge cases, extending our capabilities while remaining under our control. I envision AI enhancing team collaboration, contributing to design debates, and suggesting code improvements grounded in our conventions, enabling systems that are faster to build, more robust, and aligned with long-term goals.
Yet, industry trends reveal hurdles. GitClear’s 2024 “Coding on Copilot” analysis shows a tenfold rise in duplicated code since 2022, with 79% of changes revising recently written code, signaling fragility. Google’s DORA report notes a 7.2% drop in delivery stability for every 25% increase in AI adoption, despite perceived productivity gains. These problems stem from tools prioritizing raw output and from developers’ focus on short-term metrics like “lines added.” To address this, we need AI tools with built-in quality checks, such as duplication detection, and better prompt strategies that favor reusable code.
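To make the duplication point concrete, here is a minimal, hypothetical sketch of such a check (a toy of my own, not how GitClear or any shipping tool works): it flags repeated blocks of code by hashing a sliding window of normalized lines.

```python
import hashlib
import sys
from collections import defaultdict

WINDOW = 6  # minimum block size, in lines, worth flagging as duplication

def normalize(line: str) -> str:
    """Collapse whitespace so formatting differences don't hide duplicates."""
    return " ".join(line.split())

def find_duplicate_blocks(path: str, window: int = WINDOW) -> dict:
    """Hash every sliding window of `window` normalized lines and
    return the hashes that occur more than once, with their locations."""
    with open(path) as f:
        lines = [normalize(line) for line in f]
    seen = defaultdict(list)  # block hash -> starting line numbers (1-based)
    for i in range(len(lines) - window + 1):
        block = "\n".join(lines[i:i + window])
        if not block.strip():
            continue  # ignore windows that are entirely blank
        digest = hashlib.sha1(block.encode()).hexdigest()
        seen[digest].append(i + 1)
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    for digest, locations in find_duplicate_blocks(sys.argv[1]).items():
        print(f"possible duplicated block starting at lines {locations}")
```

Crude as this is, a check like it running inside the generation loop would nudge tools toward rewarding reuse rather than raw output.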
These challenges clarify our path. By demanding smarter tools, mastering their use, and making them a party to our software engineering goals, we can unlock AI’s potential to elevate development. I urge developers to experiment with structured prompts, build a deeper understanding of how existing tools work, and advocate for quality-focused AI, shaping a future where we build better, together.