Over the past few years, we’ve explored AI-assisted programming, experimented with smarter testing tools, pushed models to their limits, and fine-tuned our workflow.
The goal has always been clear: find the sweet spot where AI doesn’t just write code for us, but collaborates with us to deliver better results, faster. The payoff? We’ve cut our coding time by 30%, freeing us to focus more on creativity and problem-solving.
Through this journey, we’ve learned a lot about what works, what doesn’t, and how to get the most out of AI without falling into common traps. Here’s a look at the tools we trust.
The Tools That Power Our Workflow
For AI models, Claude 4 Sonnet has become our go-to. It delivers consistent, high-quality output without breaking the budget. While the AI community often names Claude 4 Opus as the gold standard, we reserve it for only the most complex, high-stakes projects because of its higher price tag.
In terms of our coding environment, we rely on the Cursor editor. It’s been an excellent investment, though recent pricing changes have us keeping a backup plan in mind: Claude Code in the terminal paired with a Visual Studio Code plugin.
Our Principles for AI-Assisted Programming
We’ve discovered that success with AI coding comes down to how we work with it. These are the principles we follow:
- Context is king. We share detailed task descriptions, our reasoning, and our intended approach before the AI writes a single line of code.
- Small steps win. We avoid one giant, all-in-one prompt. Instead, we have the AI create a step-by-step plan with the smallest possible tasks.
- Test as we go. After each step, we check the functionality via automated tests, console logs, or error logs before moving forward.
- Review before execution. We never let the AI start executing until we’ve reviewed, refined, and approved the plan.
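The principles above can be sketched as a simple control loop. This is an illustrative scaffold, not our actual tooling: `ask_model`, `approve`, and `run_step` are hypothetical stand-ins for a real model API call, a human review gate, and a single implementation-plus-test step.

```python
# Sketch of a plan-first, small-steps workflow with a human approval gate.
# `ask_model`, `approve`, and `run_step` are hypothetical callables, not a
# real API: swap in your own model client and review process.
def plan_and_execute(task, ask_model, approve, run_step):
    # Context is king: the full task description and reasoning go in first.
    # `ask_model` is expected to return a list of small, concrete steps.
    plan = ask_model(
        f"Break this task into the smallest possible steps:\n{task}"
    )
    # Review before execution: nothing runs until a human approves the plan.
    if not approve(plan):
        return None
    results = []
    for step in plan:
        # Small steps win: one step at a time, checked before moving on.
        results.append(run_step(step))
    return results
```

In practice, `run_step` is where the "test as we go" principle lives: it should run the step's automated tests (or inspect logs) and raise if anything fails, so a broken step halts the loop instead of compounding.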
How We Debug with AI
When the AI struggles or keeps looping back to the same unhelpful solution, we:
- Add extra logging points to capture more data.
- Share the complete error logs with the AI for better context.
- Switch models — GPT 5 and Gemini 2.5 are our trusted alternatives.
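The "extra logging points" step can be as simple as instrumenting the failing function with Python's standard logging module, so the full trail of inputs and intermediate values can be pasted back into the conversation. A minimal sketch (the `parse_price` function and its field names are illustrative, not from our codebase):

```python
import logging

# Verbose format so the pasted log shows exactly where each line came from.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("debug-session")

def parse_price(raw):
    # Log the raw input and every intermediate value: the point is to
    # capture a complete trail to hand back to the AI as context.
    log.debug("raw input: %r", raw)
    cleaned = raw.strip().lstrip("$")
    log.debug("cleaned: %r", cleaned)
    try:
        return float(cleaned)
    except ValueError:
        # Share the complete error with traceback, not a paraphrase of it.
        log.exception("could not parse %r", raw)
        raise
```

Pasting the resulting DEBUG lines plus the full traceback usually gives the model enough to break out of a loop; if it doesn't, that's when we switch models.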
Using Tokens Wisely
- Avoid pasting the entire codebase into every prompt; it burns through tokens quickly without adding proportional value.
- Instead, load only the specific files, modules, or code snippets that are directly relevant to the task at hand.
- This targeted approach cuts token usage, speeds up processing, and keeps the AI focused on the most relevant context.
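The targeted-loading idea can be sketched as a small helper that filters a project down to task-relevant files under a rough size budget. Everything here is illustrative: the keyword match is a crude stand-in for whatever relevance heuristic you prefer, and the character budget is a proxy for a real token count.

```python
def select_context(files, task_keywords, budget_chars=8000):
    """Pick only files relevant to the task, within a rough size budget.

    `files` maps path -> source text. The keyword match below is a crude
    illustrative heuristic, not a real relevance ranker; a character budget
    stands in for proper token counting.
    """
    relevant = [
        (path, text)
        for path, text in files.items()
        if any(kw in path or kw in text for kw in task_keywords)
    ]
    context_parts, used = [], 0
    for path, text in relevant:
        if used + len(text) > budget_chars:
            break  # stop before blowing the prompt budget
        context_parts.append(f"# {path}\n{text}")
        used += len(text)
    return "\n\n".join(context_parts)
```

The design choice worth copying is the explicit budget: making the cutoff visible in code forces you to decide what matters most, instead of letting the prompt silently truncate.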
Sometimes, we’ll even hold back from giving full instructions to see if the AI suggests a creative approach we hadn’t considered — and it often does.
Our Takeaway
We don’t see AI as a miracle solution for every development challenge; to us, AI is a high-powered teammate: fast, consistent, and tireless, but most effective when paired with human judgment and expertise.
In our workflow, AI isn’t handed the steering wheel; instead, it’s given the right context, constraints, and direction so it can contribute meaningfully without derailing the bigger picture.
Used this way, it helps us move faster, write cleaner, more maintainable code, and maintain full control over the architecture and quality of our work.
This perspective comes from years of trial, refinement, and understanding where AI genuinely adds value versus where human oversight is irreplaceable.
If you’d like to see the exact prompt framework we use to strike this balance, get in touch; we’re happy to share it.