Let AI do your coding laundry

How I use AI to minimize Levels of Possible Harm.

ai
Published September 30, 2025

As someone who is still early-ish in my career (I graduated from my undergrad in 2021), I worry about how I'm building my skills in a world of rising AI usage. I want to sharpen my craft, but I also want to accept this changing world: use AI as a tool to be productive now, while continuing to build skills that aren't reliant on AI.

There's a joke out there that we wanted AI to do the laundry so we'd have more time to make art, but now AI is making art and we're stuck doing the laundry. I'm actively trying to shift my use of AI so I can spend more time on the skills I want to grow, and let AI do the coding laundry.

Thinking about the Level of Possible Harm

When deciding on my own AI usage, I think along two axes of Levels of Possible Harm. One axis is the Level of Possible Harm to the codebase I'm working on. Harm might be an immediate failure, like code that doesn't work or is difficult to reason about, or a silent failure, like bugs that aren't immediately noticeable or tech debt. I try to mitigate the Level of Harm by always reviewing the AI-written code before shipping it off to teammates to review.

There's also the axis of the Level of Possible Harm to myself. I have skills that I personally want to grow, and if I only ever hand those tasks to AI, frankly, I won't ever learn. At this moment in time, many models seem to be quite verbose, so reviewing thousands of lines of over-engineered code is another source of Harm, since it wastes my time. But there are plenty of tasks I don't ever need to excel at, and AI is helpful for doing that sort of coding laundry for me.

This isn’t a perfect rubric, but my AI usage follows something like this.

I can solve this task easily, and AI excels at it: happy path for agentic AI usage.

I cannot solve this task easily, but AI excels at it: if it's something I want to get better at, use AI as a reviewer/rubber duck, or not at all. If it's something I do not want or need to get better at (say, regex; there's a sketch of this kind of chore below), review the AI code.

I can solve this task easily, but AI does not excel at it: depending on size, either use AI to generate code and expect to do lots of cleanup (if that would be faster than typing out a first implementation myself), or do not use AI.

I cannot solve this task easily, and AI does not excel at it: this is probably something that will involve deep work, like a particularly thorny bug. Use AI to brainstorm potential places to start, or not at all.
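To make the regex row concrete, here's a sketch of the kind of chore I'd happily hand off and just review. Everything in it is a made-up example, not code from my actual work:

```python
import re

# A made-up example of "coding laundry": a regex I'd rather review than write by hand.
# Matches ISO-8601-style dates like 2025-09-30 and captures year, month, and day.
DATE_PATTERN = re.compile(
    r"(?P<year>\d{4})-(?P<month>0[1-9]|1[0-2])-(?P<day>0[1-9]|[12]\d|3[01])"
)

def extract_dates(text: str) -> list[tuple[str, str, str]]:
    """Return (year, month, day) tuples for every date-like string in text."""
    return [(m["year"], m["month"], m["day"]) for m in DATE_PATTERN.finditer(text)]

print(extract_dates("Drafted 2025-09-28, published 2025-09-30."))
# [('2025', '09', '28'), ('2025', '09', '30')]
```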

In practice

When I started out using AI tools1, I wanted to try them out in harmless scenarios. This is how it is with all new tools; I don't want to start by using them for the big stuff. Writing tests seemed safe. Even if the tests are a bit silly (like checking that an enum returns the correct value) or redundant, the Level of Possible Harm was low. I needed some practice runs to see where AI fell over.
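As a rough sketch of what those practice runs looked like (the enum and tests here are hypothetical placeholders, not my team's code), think of something like:

```python
# A hypothetical, low-stakes test of the kind I'd hand to AI first.
# Severity stands in for whatever enum the real codebase defines.
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def test_severity_values():
    # A slightly silly test: each member maps to the expected value.
    assert Severity.LOW.value == 1
    assert Severity.MEDIUM.value == 2
    assert Severity.HIGH.value == 3

def test_severity_lookup_roundtrip():
    # Redundant, but the Level of Possible Harm is low: lookup by value returns the member.
    assert Severity(3) is Severity.HIGH
```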

Without even realizing it, I started using AI a bit more, not just for boilerplate tests, but for validating fixes as well. Building IDE features often means checking lots of edge cases. Rather than re-looking up the syntax of twelve different Python plotting libraries, I could just tell AI to do something I already knew I was capable of doing, only faster. Now, if I were a visualization expert, I might not have wanted to go this route, since I'd want to build up muscle memory for tasks I would encounter every day, eventually becoming more advanced than AI. But for something I won't need deep knowledge of, and that still has a Low Level of Possible Harm, this also felt okay to me.
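For instance, instead of re-reading plotting docs, I might ask for a throwaway script like the one below to poke at edge cases (a hypothetical matplotlib example; the real checks span many libraries):

```python
# A hypothetical throwaway script for poking at plot-related edge cases,
# the kind of thing I'd ask AI to write rather than re-looking up the syntax.
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([], [])                                        # edge case: an empty series
ax1.set_title("Empty data")
ax2.plot([0, 1, 2, 3], [1.0, float("nan"), 4.0, 2.0])   # edge case: a gap from a NaN value
ax2.set_title("Missing value mid-series")
plt.show()
```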

Now, using AI for tests and examples is not a novel idea. This is the usual on-ramp for AI users. It saves a bit of time and doesn't feel too scary! However, it didn't really feel like it was Supercharging My Workflows. AI tools started feeling genuinely helpful when I had mentally planned a solution and then asked AI to actually write the code. Since I already knew what it should look like, telling an agent exactly what I wanted felt like time-warping past the time it would have taken me to write the code myself. Nice! The Level of Possible Harm also felt pretty low, since I already knew what to expect. I could always follow up and brainstorm other ways to solve a problem, or ask about the pieces I was less confident in.

Not everything I tried worked. Giving it an abstract task like "hey, can you build this New Feature? it should do x, y, z" without any explanation of the necessary structure and integration usually results in a lot of garbage. I haven't tried out planning modes, mostly because I want to practice designing larger systems myself. So for that sort of task, I might start writing the scaffolding, and then ask AI to fill in the syntax once I know exactly how I want to build the feature.
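As a sketch of what I mean by scaffolding (all names here are invented placeholders, not a real feature), I might write stubs like these and only then ask AI to fill in the bodies:

```python
# An invented sketch of "write the scaffolding myself, let AI fill in the syntax".
from dataclasses import dataclass

@dataclass
class CompletionRequest:
    document_text: str
    cursor_offset: int

def extract_prefix(request: CompletionRequest) -> str:
    """Pull out the token under the cursor. TODO: have AI fill this in."""
    raise NotImplementedError

def rank_candidates(prefix: str, candidates: list[str]) -> list[str]:
    """Order candidates by how well they match the prefix. TODO: have AI fill this in."""
    raise NotImplementedError

def complete(request: CompletionRequest, candidates: list[str]) -> list[str]:
    """The part I want to design myself: how the pieces fit together."""
    prefix = extract_prefix(request)
    return rank_candidates(prefix, candidates)
```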

If you don’t know where you are going, any road will get you there. - Lewis Carroll

I don’t like giving AI tasks that I don’t understand yet, or tasks that I would like to get better at. But I can use it responsibly as a tool to be faster in general while still learning. It’s also been helpful as an impartial (but imperfect) feedback mechanism for the skills that I want to improve. For example, I might build the New Feature and then ask AI to critique my design choices. Doing this has also been helpful for me as I build muscle for recognizing when things are bad.

AI and I work well together, and when used wisely, it makes building things way more enjoyable.

Footnotes

  1. You can generally assume the tool I am using is Claude Code with the Sonnet 3.7 or Sonnet 4.5 models. But I'll use the language of "AI tools" loosely here, since I'm talking about the tasks I'm performing, not the specific method of getting results.