Quinten L.’s Post

Platform engineer with 5+ years of experience deploying AI inference systems

I think what a lot of people have intuitively figured out, but haven't noticed explicitly, is that using AI for greenfield projects feels much more useful than using it in an established codebase. From what I've seen, there are two main reasons for this:

1. Experienced engineers often work on changes that span many different parts of a system. Current AI tools just aren't built for this kind of task.
2. AI models are trained on a broad range of data, which doesn't always match up with the specific, deep knowledge that experienced devs have built up over years. New devs are brought up while experienced devs are weighed down.

I'm going to focus on that first point in this post, because I think it's part of what's allowing less experienced devs to see things that more experienced devs aren't.

AI models are getting pretty damn good, to the point where using Claude 3.5 rarely leaves me wanting more. AI tooling is the exact opposite. Working on greenfield projects that have grown, I've started to run into problems: it's becoming harder and harder to give the AI enough context to get a good response. The changes I'm requesting touch more parts of the codebase, and it's tough to include all the relevant bits. For any given change to one of my web projects (Django, for example), if I want a solution quickly I need:

1. The relevant HTML
2. Any blocks of other content I'm including
3. Relevant CSS
4. Relevant JS
5. Sometimes an example of a similar feature implemented in another HTML, CSS, or JS file, to maintain consistency
6. The view
7. Any relevant imports
8. Similar views that may have implemented patterns like the one I need
9. Any other functions the view calls
10. The URL structure
11. Any schemas that might be relevant
12. Database models

And that's not even counting things like repo structure, ownership, git diffs, or (for more complicated scenarios) call graphs. More relevant context means better AI output, but gathering that context is a pain, and for best results it should all land in a single message.

I got fed up with this and made a Neovim shortcut that collects these snippets in a haphazard kind of way: it grabs code snippets and file info, and generates a file structure at the top of a temporary buffer based on the files the snippets were grabbed from. It's not perfect, but it helps get more context to the AI without spending ages adding all the metadata by hand. Just by using this, I've seen a noticeable improvement in how often I can get zero-shot solutions out of Claude 3.5.

At this point I'm just doing manual, informed RAG. I'd like to automate that process, so to that end I ask: how can I automatically find all of the snippets that are relevant to the feature I'm trying to implement? (A couple of rough sketches of what this could look like follow below.)

I cover the rest of my thoughts on this in a post on my blog: https://lnkd.in/gtAmyx7a
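The shortcut itself isn't in this post, so here's the bundling idea rewritten as a standalone Python sketch (not the actual shortcut, and the script name and prompt layout are just illustrative): take a list of files, emit a file-structure header, then the labeled snippets, all as one prompt-ready message.

```python
import sys

def bundle_snippets(paths):
    """Concatenate files into one prompt-ready message, with a
    file-structure header listing where each snippet came from."""
    header = "Files included:\n" + "".join(f"  {p}\n" for p in paths)
    parts = [header]
    for p in paths:
        code = open(p, encoding="utf-8").read()
        # Label each snippet with its path so the model can tell files apart.
        parts.append(f"--- {p} ---\n{code}")
    return "\n".join(parts)

if __name__ == "__main__":
    # e.g. python bundle.py templates/invoices.html static/app.css views.py
    print(bundle_snippets(sys.argv[1:]))
```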
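As for automating the "which snippets are relevant" step, the crudest baseline is lexical retrieval: chunk the repo, score each chunk against a plain-English description of the feature, and bundle the top hits. Here's a rough sketch using scikit-learn's TF-IDF as a stand-in for real code embeddings; the chunk size and file filter are arbitrary choices, not recommendations:

```python
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_relevant_chunks(repo_dir, query, top_k=8, chunk_lines=40):
    """Rank fixed-size chunks of repo files by TF-IDF similarity to the query."""
    chunks, labels = [], []
    for path in Path(repo_dir).rglob("*.py"):  # extend to .html/.css/.js too
        lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
        for i in range(0, len(lines), chunk_lines):
            chunks.append("\n".join(lines[i:i + chunk_lines]))
            labels.append(f"{path}:{i + 1}")
    matrix = TfidfVectorizer().fit_transform(chunks + [query])
    # Last row is the query; score it against every chunk.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(scores, labels, chunks), reverse=True)[:top_k]
    return [(label, chunk) for _, label, chunk in ranked]

# e.g. find_relevant_chunks(".", "add pagination to the invoice list view")
```

Swap the TF-IDF step for embeddings, and fold in signals like the call graph or recent git diffs, and this starts to look like the automated RAG I'm after.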

Using Agents as Retrofit Solutions to Established Codebases (thelisowe.com)
