Behind the Scenes of KAYA Global's Approach to Prompt Engineering
In this week's article, we dive into how prompt engineering is used at KAYA. We explore personal experiences from our members, share a business use case, then outline our methodology for efficiently producing quality prompts.
From Fun to Function
In KAYA’s early days, our team dabbled with generative models like GPT-2, using them primarily as tools for creative experimentation and generating text for various applications. It was initially about exploring the capabilities of the models and taking note of how we could use the technology. As we crafted narratives, answered operational questions, and produced code snippets, we quickly realized that we were essentially using the models as an artificial assistant.
What has been really profound is the acceleration from those early models to where we are now: models that can draft entire documents, develop creative pieces, and assist with complex software development tasks.
Use Cases
Personal: KAYA Engineering Intern Vinaya Ramamorthy Venkatasubramanian has developed a knack for crafting prompts that coax precise, informative responses out of LLMs. She enjoys experimenting with different styles of phrasing and questioning. Whether it's finding inspiration to clear a mental roadblock in her studies or planning a 10-day vacation to Thailand (complete with beach and restaurant recommendations), there's nothing she can't do. Her friends have even started to call her the ‘Prompt Whisperer’.
Business: Business Development Manager Andrew Christensen was attached to the scrum team tasked with developing KAYA’s AI assistant, Bernie. He comes from a non-technical background (a bachelor’s degree in Finance from the University of San Diego) and was admittedly intimidated about joining a team of engineers. However, since LLMs are designed to generate conversational responses, he was surprised to find that his writing abilities added significant value alongside his exceedingly patient teammates.
The result was a comprehensive Software Requirement Specification (SRS) for Bernie, which included a feature blueprint complete with priorities, resource needs, and timelines.
Guiding Principles
Prompt engineering is the art of crafting precise and strategic inputs to AI models, tailoring their responses and behaviors to meet specific goals and applications. Although the objective is simple, we are aware that this technology has significant complexities. In our prompting, we take care to make the following considerations:
These considerations form the ethical and operational foundation of our approach to prompting. They guide us in crafting prompts that not only deliver exceptional results but also adhere to the highest standards of fairness, transparency, and user-centricity.
KAYA's Prompting Method
Unlocking the full potential of language models requires a strategic approach to crafting prompts. At KAYA, we understand that the quality of prompts plays a pivotal role in the accuracy and relevance of responses generated by these models. To achieve this, we follow a meticulous 3-step method that has proven to be the cornerstone of our success:
1. Classify the Prompting Task - First, we need to frame what output we are looking for. Are we asking for code to fix a frontend issue? Are we pulling specific points from a database? A combination? Asking what we are looking for will help reduce time spent in the iterative space. We categorize our prompting tasks from the list below:
2. Apply Prompting Tactics – Second, it is helpful to give the model information on the viewpoint we are coming from, or any conditions that need to be met. We’ve found that being structured both helps the prompter stay organized and helps flag important information to the LLM.
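As an illustration of this tactic (a minimal sketch, not KAYA's actual tooling — the section names ROLE, CONTEXT, CONSTRAINTS, and TASK are hypothetical labels chosen for this example), a structured prompt might be assembled from clearly flagged sections:

```python
def build_prompt(role: str, context: str, constraints: list[str], task: str) -> str:
    """Assemble a structured prompt with labeled sections.

    Labeled sections help the prompter stay organized and flag
    important information (viewpoint, conditions) to the LLM.
    """
    sections = [
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        "CONSTRAINTS:",
        *[f"- {c}" for c in constraints],
        f"TASK: {task}",
    ]
    return "\n".join(sections)

prompt = build_prompt(
    role="Senior frontend engineer",
    context="React app where a modal fails to close when Escape is pressed.",
    constraints=[
        "Return only the changed code, not the whole file",
        "Explain the root cause in one sentence",
    ],
    task="Fix the keyboard handler so Escape closes the modal.",
)
print(prompt)
```

The exact labels matter less than the consistency: once the model sees the same scaffold across prompts, conditions are harder to miss for both the model and the prompter.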
3. Iteration to Perfection – This final step is where we can expect to spend most of our time. If we have followed the process correctly, however, the first response should already contain most of the information we need, in roughly the right format. That means a task that might otherwise have taken 12-13 prompts can be reduced to about 5.
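The iteration step can be sketched as a simple refine-and-retry loop. This is an illustrative sketch only: `ask_model` stands in for whatever LLM call you use, and the acceptance check is supplied by the caller — neither is part of KAYA's stack.

```python
from typing import Callable

def refine(
    ask_model: Callable[[str], str],
    prompt: str,
    is_acceptable: Callable[[str], bool],
    max_rounds: int = 5,
) -> str:
    """Re-prompt until the response passes the acceptance check
    or the round budget is exhausted, then return the best attempt."""
    response = ask_model(prompt)
    for _ in range(max_rounds - 1):
        if is_acceptable(response):
            break
        # Fold the shortfall back into the next prompt so the model
        # can see its previous attempt and revise it.
        prompt = (
            f"{prompt}\n\nYour previous answer:\n{response}\n\n"
            "Please revise it to fully satisfy the constraints above."
        )
        response = ask_model(prompt)
    return response
```

Capping the loop at a small `max_rounds` mirrors the budget above: a well-classified, well-structured prompt should converge in a handful of rounds rather than a dozen.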