Almost Timely News: 🗞️ Does Prompt Engineering Still Matter? (2024-04-21) :: View in Browser
Content Authenticity Statement
100% of this week's newsletter was generated by me, the human. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.
Watch This Newsletter On YouTube 📺
What's On My Mind: Does Prompt Engineering Still Matter?
I strongly recommend watching the YouTube video for this week's newsletter to see the PARE framework in action!
This week, let’s answer an important question about generative AI. Is prompt engineering still necessary?
It depends on the use case, but mostly yes, it’s still necessary and still important for us to learn and perfect. Why? Because as we become more advanced in our use of AI, we’re going to run into more use cases where a well-crafted prompt makes a big difference in performance.
Let’s start with a very brief refresher. Prompt engineering is how we program large language models (tools like ChatGPT, Anthropic Claude, Google Gemini, and Meta LLaMa) to do things. You’ve probably noticed that even in your Instagram app, there’s now a LLaMa-based AI waiting to help you.
Prompt engineering is a form of programming. The difference is that you write it in the human language of your choice, not a computer language like Python or Java. When we write prompts, we are coding. And you code all the time, because coding is just giving repeatable, reliable steps to achieve an outcome. A recipe is code. Instructions are code. Dog training is code.
As with all code, there are ways to code inefficiently and ways to code efficiently. Inefficient code involves constantly reinventing the wheel, not putting in any kind of error checking, repeating yourself over and over again instead of consolidating things together, not documenting things, etc. Efficient coding is basically the opposite of that.
So let's dig into whether prompt engineering is necessary or not, whether we need to formalize it into some best practices.
As a tangent, best practices are basically recipes. They're a starting point for your journey and they're essential, especially for beginners. Beware anyone who says there are no best practices. They're either trying to sell you something or they're not very good at what they do.
The first major use case in generative AI is the consumer use case, which is probably like 90% of uses these days. You the human sit down at your keyboard or your mobile device, you open up the interface of your choice, like ChatGPT or Claude or Gemini or whatever, and you start having a conversation with the AI model. You give it some instructions, you converse with it, you ask clarifying questions, and you get the result you want.
Do you need prompt engineering in this use case? Is it important? For this particular use case, prompt engineering delivers benefits - like repeatability - but it's not absolutely essential. You can get done what you need to get done without prompt engineering practices, though you might find it inefficient after a while.
The second use case is sharing your prompts with your teammates and colleagues. Maybe you work on a team and your team has similar processes and practices. You definitely want to share your prompts so that other team members can help improve them, and so that you can cut down on the time it takes to get any particular task going. This is a case where prompt engineering does matter. Taking the time to craft great prompts so that you can share them makes a lot of sense and will increase the speed of adoption.
The third use case is using small models. There are big consumer models like the ones that power ChatGPT where you can have a conversation and get where you need to go eventually. But there are smaller models, like Meta's newly released LLaMa 3, that have shorter memories and very specific prompt templates you must follow to maximize their capabilities. People who build software with generative AI baked in will often use models like this because of the very low cost - but that means following more specific best practices for prompting. The prompting that you use for a big model like ChatGPT will deliver subpar results on a small model like LLaMa 3.
If you work in a highly regulated industry, there's a very good chance you'll be using one of these smaller models because these models can be run on hardware your company owns. For example, if you work in healthcare, a model like LLaMa 3 is very capable but can run solely on your company's computers, ensuring that protected health information never, ever leaves your network. Prompt engineering is important to squeeze every bit of performance out of that kind of model.
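To make the "specific prompt templates" point concrete, here's a minimal sketch of wrapping a prompt in the special-token template that Meta published for LLaMa 3 Instruct. Big consumer chat interfaces apply this formatting for you behind the scenes; when you run a small model on your own hardware, you often have to do it yourself. The system and user messages below are placeholders, not part of any official example.

```python
def llama3_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user message in the LLaMa 3
    Instruct special-token template. Deviating from this format
    is one common reason small models return subpar results."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # End with an open assistant header so the model knows
        # it is its turn to generate.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt(
    "You are a careful summarizer of healthcare articles.",
    "Summarize the attached article in three bullet points.",
)
```

The resulting string is what you'd hand to a local inference server; check your model's own documentation, because each model family uses a different template.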
Finally, the fourth use case is scaling your prompts with code and agents. Say you write a prompt that does a great job of summarizing an article. Do you really want to copy and paste that a thousand times to analyze a big compendium of articles? Of course not. You want to automate that. But you want to make sure your prompt is bulletproof because once it goes into code or another system, you will have fewer chances to revise it, to make it efficient, to force very specific outcomes.
What this means in the big picture is that prompt engineering isn't going anywhere. We're still in the earliest days of generative AI, and what we do today is not what we will do tomorrow - but prompt engineering, based on the four use cases I outlined above, is unlikely to go away any time soon.
Okay, that's great. But HOW do you improve your prompt engineering? How do you become better at prompting? This is where the Trust Insights PARE framework comes into play, which I debuted a couple weeks ago. Let's take a few moments to step through it so you can see what it does - and again, I recommend you watch the video version of this newsletter to actually see it in action.
PARE is a series of four power question categories - Prime, Augment, Refresh, and Evaluate.
Prime means to get a model started by asking it what it knows about a topic. We don't want to presume a model knows everything about a topic, especially as we start using it for more specialized cases. So as part of our initial prompt, we ask it what it knows about a topic, and we evaluate its results. If it doesn't have the knowledge we want (or the knowledge is incorrect), then we know we have to provide it.
Augment means to ask a model what questions it has. We ask this immediately after our initial prompt; it helps close gaps in our knowledge and prevents omissions on our part.
Refresh means to ask a model what we forgot, what we overlooked. This happens after the model's first response and can close any remaining gaps in its knowledge.
Evaluate means to ask a model if it fulfilled our prompt completely. This is an important question when a model's output doesn't meet our expectations - and our expectations were clear up front in the prompt engineering process.
Once we're satisfied with the results we've obtained, then the final step is to direct the model to create a prompt based on the results. This helps us engineer it further, putting it into the model's language, and prepares it for distribution to our team or for scaling up to big production uses. Almost everyone forgets this step, but it's critical for scaling and streamlining your use of generative AI.
Maybe I should add an S to the PARE framework for summarize, maybe in version 2.0.
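The PARE sequence above can be expressed as a small, reusable set of follow-up prompts you send in order during a conversation. The exact wording of each question below is my paraphrase for illustration, not an official Trust Insights template; adapt it to your task.

```python
def pare_prompts(topic: str) -> dict:
    """The four PARE power-question categories as ready-to-send
    prompts, in the order you'd use them in a conversation."""
    return {
        "Prime": f"Before we begin: what do you know about {topic}? "
                 "List the key facts and best practices.",
        "Augment": f"What questions do you have for me about {topic} "
                   "before we continue?",
        "Refresh": "Looking at our discussion so far, what did we "
                   "forget or overlook?",
        "Evaluate": "Did your last response fulfill my prompt "
                    "completely? If not, what is missing?",
    }

# Usage: walk through the stages for a given topic.
for stage, prompt in pare_prompts("B2B email deliverability").items():
    print(f"{stage}: {prompt}")
```

Sending these as separate turns (rather than one mega-prompt) mirrors the conversational flow the framework describes: each answer informs whether you need to supply knowledge before moving on.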
Follow these steps to generate highly effective, scalable prompts and build a robust prompt engineering practice. You'll help your team grow their capabilities quickly and generate value from prompt engineering and generative AI faster than ever before.
And shameless plug, this is literally what my company does, so if getting started with this use of generative AI is of interest, hit me up.
How Was This Issue?
Rate this week's newsletter issue with a single click. Your feedback over time helps me figure out what content to create for you.
Share With a Friend or Colleague
If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague:
For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.
ICYMI: In Case You Missed it
Besides the newly updated Generative AI for Marketers course I'm relentlessly flogging, I did a piece this week on how to tell if content was AI-generated or not.
Skill Up With Classes
These are just a few of the classes I have available over at the Trust Insights website that you can take.
Premium
Free
Advertisement: Generative AI Workshops & Courses
Imagine a world where your marketing strategies are supercharged by the most cutting-edge technology available – Generative AI. Generative AI has the potential to save you incredible amounts of time and money, and you have the opportunity to be at the forefront. Get up to speed on using generative AI in your business in a thoughtful way with Trust Insights' new offering, Generative AI for Marketers, which comes in two flavors, workshops and a course.
Workshops: Offer the Generative AI for Marketers half- and full-day workshops at your company. These hands-on sessions are packed with exercises, resources and practical tips that you can implement immediately.
Course: We’ve turned our most popular full-day workshop into a self-paced course. The Generative AI for Marketers online course is now available and just updated as of April 12! Use discount code ALMOSTTIMELY for $50 off the course tuition.
If you work at a company or organization that wants to do bulk licensing, let me know!
Get Back to Work
Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you're looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
What I'm Reading: Your Stuff
Let's look at the most interesting content from around the web on topics you care about, some of which you might have even written.
Social Media Marketing
Media and Content
SEO, Google, and Paid Media
Advertisement: Free Generative AI Cheat Sheets
The RACE Prompt Framework: This is a great starting prompt framework, especially well-suited for folks just trying out language models. PDFs are available in US English, Latin American Spanish, and Brazilian Portuguese.
4 Generative AI Power Questions: Use these four questions (the PARE framework) with any large language model like ChatGPT/Gemini/Claude etc. to dramatically improve the results. PDFs are available in US English, Latin American Spanish, and Brazilian Portuguese.
The Beginner's Generative AI Starter Kit: This one-page table shows common tasks and associated models for those tasks. PDF available in US English (mainly because it's a pile of links)
Tools, Machine Learning, and AI
All Things IBM
Dealer's Choice : Random Stuff
How to Stay in Touch
Let's make sure we're connected in the places it suits you best. Here's where you can find different content:
Advertisement: Ukraine 🇺🇦 Humanitarian Fund
The war to free Ukraine continues. If you'd like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia's illegal invasion needs your ongoing support.
Events I'll Be At
Here's where I'm speaking and attending. Say hi if you're at an event also:
Events marked with a physical location may become virtual if conditions and safety warrant it.
If you're an event organizer, let me help your event shine. Visit my speaking page for more details.
Can't be at an event? Stop by my private Slack group instead, Analytics for Marketers.
Required Disclosures
Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.
Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.
My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.
Thank You
Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.
See you next week,
Christopher S. Penn