* Tips on how to talk to an AI *
Over the past months I’ve immersed myself in developing AI apps with the help of AI. I thought I’d share some approaches I use regularly, in case they help you develop more software in less time. In my application development I use a variety of systems, from Vertex to LangGraph, and many different models, including OpenAI’s o1, Claude Sonnet, and Gemini. I’ve also used many tools in Visual Studio Code, like Continue (with Claude and Mistral), Codeium, Copilot, and Gemini Code Assist. Here are some tips that seem to apply in my experience. While all of these techniques apply to developers, I think most apply to technology leaders across the board. Here are the first few, more to come:
Provide the right context. Sometimes you don’t receive a sufficiently precise answer from an AI, or the answer is inaccurate. One reason for this is that the AI only knows what you tell it about YOU, so you need to tell it everything that is relevant to solving your problem. If you pass in some Python code - will it be self-evident to the AI that you’re using Flask? Or will it think you’re building a Django application? It may not matter, but it might. So, if you’re using an open source library, say so. If you’re writing a letter to an attorney, or to a kindergarten teacher, provide that context!
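In code, one way to make this habit concrete is to assemble the context into the prompt instead of pasting a bare snippet. Here is a minimal sketch - the helper name and context fields are illustrative, not from any particular SDK:

```python
def build_prompt(question: str, code: str, context: dict) -> str:
    """Prepend project context so the model doesn't have to guess.

    `context` can hold anything the AI cannot infer on its own:
    the web framework, library versions, the audience for a letter.
    """
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "Context about my project:\n"
        f"{context_lines}\n\n"
        f"Code:\n{code}\n\n"
        f"Question: {question}"
    )

# Now the model knows this is Flask, not Django.
prompt = build_prompt(
    question="Why does this route return a 404?",
    code="@app.route('/users')\ndef users(): ...",
    context={"framework": "Flask 3.0", "database": "SQLite via SQLAlchemy"},
)
```

The same pattern works for documents: swap the code field for the letter draft and put the audience in the context dict.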
Expect to iterate! Only in the simplest cases do you get exactly what you want in the first response from an AI, so assume you’ll try a few times and don’t give up. For example, asking the AI to cite its sources or to find contrary evidence is a good way to test for hallucinations. Blindly pasting in the error from your system with “I received this error…” may seem lazy - but do it three times and you’ll be surprised how often an AI like Claude figures out an approach that works. Be warned, though: all the AIs seem to solve problems mostly by adding code, and you can end up with a great deal of ugly code. Asking the AI to clean up and improve the code once it finally works is a nice last step.
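The “paste the error back in” loop above can be sketched in a few lines. Everything here is illustrative: `ask_model` is a stub standing in for a real model call, and the broken `divide` snippet is a toy example.

```python
def ask_model(prompt: str) -> str:
    # Stub for a real LLM call; pretend the model returns a fixed snippet.
    return "def divide(a, b):\n    return a / b if b else 0"

def run_snippet(code: str):
    """Execute the snippet and exercise it; return an error string, or None."""
    try:
        namespace = {}
        exec(code, namespace)
        namespace["divide"](1, 0)  # the failing input we're debugging
        return None
    except Exception as exc:
        return repr(exc)

def iterate_on_fix(broken_code: str, run) -> str:
    code = broken_code
    for attempt in range(3):  # give the AI a few tries before giving up
        error = run(code)
        if error is None:
            return code  # it works - a good moment to ask for a cleanup pass
        code = ask_model(f"I received this error:\n{error}\n\nFix this code:\n{code}")
    return code

fixed = iterate_on_fix("def divide(a, b):\n    return a / b", run_snippet)
```

The loop cap matters: three or so attempts is usually enough to know whether the AI is converging or just piling on code.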
Give an example. AIs are great at making things up, but they don’t always make up precisely what we want. “Write a test for this class…” is a good AI task, but the result is unlikely to have the right preconditions or follow the same patterns as your other tests. An even better task is “Write a test for this class and use this other class as an example - follow the patterns in File X and import the same key libraries.” If you are at Amazon working on a PRFAQ document, giving the AI a complete example of a high-quality PRFAQ and identifying why that document is great will help the AI make your new document much better. Structured instructions and examples like these are one of the reasons why Product Partner is able to generate such high quality product management artifacts so quickly.