
Using AI tools responsibly in client projects

A year into using AI coding tools on real projects, here's where I've landed on policies, disclosure, and practical use.


A year ago I wrote about AI tools starting to change web development. I've now had twelve months of using AI coding tools, ChatGPT, Claude, and various LLMs on actual client work, and my thinking has matured considerably. The initial novelty has worn off, and what's left is a more measured view of where these tools help, where they hurt, and what responsibilities I have to clients when I use them.

The biggest shift in my approach has been around disclosure. Early on, it felt like a grey area: if a developer uses Stack Overflow to solve a problem, they don't disclose that to the client, so why would AI be different? But the more I thought about it, the more I realised the analogy doesn't quite hold. Stack Overflow gives you a snippet you understand and adapt. AI tools can generate entire blocks of code that a developer might not fully comprehend, and that changes the risk profile. So where do you draw the line?

My current policy

I now have a written internal policy. AI tools can be used for generating boilerplate and scaffolding, writing tests, drafting documentation, research and exploration, code review suggestions, and refactoring. I use Claude for most of this work, both in the browser and via the API, alongside a proper terminal-based tool that you can point at a codebase and have a real conversation with. AI tools must not be used for generating security-critical code (authentication, encryption, input validation) without thorough manual review, for producing code that handles personal data without line-by-line verification, or in any situation where the developer doesn't understand every line of the output. This is exactly why you need senior developers running these tools: a junior might accept output that looks right but misses edge cases; a senior spots it in seconds.

Every piece of AI-assisted code goes through the same review process as human-written code. If anything, I scrutinise it more carefully, because AI-generated code has a particular failure mode: it looks correct, it follows conventions, it might even pass basic tests, but it can contain subtle logical errors or use patterns that work in the general case but fail for the specific edge cases in your project.
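To make that failure mode concrete, here's a contrived TypeScript sketch (my own illustration, not code from a client project): a pagination helper of the kind AI tools generate convincingly. The general case is fine, but the guard clause is exactly the sort of edge-case handling a reviewer has to check for, because the naive version loops forever when `pageSize` is zero or negative.

```typescript
// Splits a list into fixed-size pages. The body of the loop is typical
// AI-generated output: conventional, type-safe, and correct for the
// general case.
function paginate<T>(items: T[], pageSize: number): T[][] {
  // This guard is the kind of line that's often missing from generated
  // code: without it, pageSize <= 0 never advances the loop counter.
  if (pageSize <= 0) {
    throw new RangeError("pageSize must be a positive integer");
  }
  const pages: T[][] = [];
  for (let i = 0; i < items.length; i += pageSize) {
    pages.push(items.slice(i, i + pageSize));
  }
  return pages;
}

const pages = paginate([1, 2, 3, 4, 5], 2); // [[1, 2], [3, 4], [5]]
```

The point isn't that AI can't write a guard clause; it's that whether one appears depends on prompting and luck, so review has to assume it's absent until verified.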

The quality question

AI coding tools have got noticeably better over the past year. Suggestions are more context-aware, and the tools handle TypeScript particularly well: they pick up on your types and generate code that's usually type-safe. For writing repetitive patterns (API route handlers, component props interfaces, test cases), they're a genuine time-saver.
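The "props interface plus test fixture" pattern is a good example of the boilerplate this works well for. A hypothetical sketch (the names `CardProps` and `makeCardProps` are my own, not from any real project):

```typescript
// A typical component props interface: exactly the kind of repetitive,
// type-driven code AI tools generate reliably.
interface CardProps {
  title: string;
  subtitle?: string;
  tags: string[];
}

// Test fixture builder: sensible defaults, with caller overrides merged
// on top. Partial<CardProps> keeps the overrides type-checked.
function makeCardProps(overrides: Partial<CardProps> = {}): CardProps {
  return {
    title: "Untitled",
    tags: [],
    ...overrides,
  };
}

const props = makeCardProps({ title: "Pricing" });
```

Because the types fully constrain the shape of this code, there's little room for the subtle logic errors described above, which is why it sits on the "safe to delegate" side of my policy.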

Where it still struggles is with anything that requires understanding the broader system. It doesn't know your business logic, your client's requirements, or the specific way your CMS structures content. It'll suggest a plausible implementation that misses the point. This is fine if the developer catches it, but dangerous if they don't.

What about client concerns?

Some clients, particularly in regulated industries like life sciences, have asked me directly whether I use AI tools. I'm transparent about it: yes, I use them as development aids; no, I don't use them to generate untested code; and all output is reviewed by experienced developers. Most clients are satisfied with that explanation. A few have asked me not to use AI tools on their projects at all, and I respect that completely.

The broader industry conversation around AI-generated code and intellectual property is still unresolved. I'm watching the legal developments but not waiting for them; my policy is designed to be defensible regardless of how the copyright questions settle.

If you're an agency or business working through your own AI policy, or you have questions about how I handle this on your project, I'm happy to discuss. Just drop me a line.

Chris Ryan


Managing Director

17+ years in full-stack web development, most of it leading teams agency-side across e-commerce, CMS platforms, and bespoke applications. Specialises in infrastructure, system integration, and data privacy, with hands-on experience as a Data Protection Officer. Founded Innatus Digital in 2020 to offer the kind of honest, technically-led partnership that he felt was missing from the agency world.