Prompts Must Go
StableDiffusionX - Promptless

Prompts Must Go

We've mistaken generative AI prompts as an innovative miracle. The reality is they are neither innovative nor miraculous.

Less than fifteen seconds into this article, and you’ve probably pegged me as a nut job(1). But there may be good reason to ponder what I am about to clarify.

This is one of those slow-burning ideas that will take time to digest, but I assure you, the notion of a promptless future will hit you subtly with flashes of insight as you go about your daily work.


Many thanks to all the paying subscribers who support Impertinent. I’ve met some really interesting people on this journey—a path that wouldn’t exist without your encouragement. Most 70+ year-olds would rather be fishing. I hate fishing, so thank you for helping to deflect that retirement misery.


Prompts Must Go?

Generative AI is a very new idea. It feels like a decade in generative AI progress has passed by in the last 18 months.

ChatGPT (framed as a “product”) is a dumb idea. Even Sam Altman has intimated from time to time that it is the demo that went viral. It was never intended to be a product.

Demo’s-gone-viral is not an uncommon thing. Accidental business tactics have created some of the best products we use every day.

The impermanent glue that holds 3M’s Sticky Notes® was originally thought to be a failure until it was used to demonstrate the concept of temporary adhesives.

Prompts Direct to LLMs

Danish carpenter OleKirk Kristiansen bought the Billund Woodworking and Carpentry shop in 1916. After World War I, Ole found it much harder to source the birchwood necessary for his wooden toys. Luckily, he was introduced to a British plastic injection-molding machine in 1946, which he would purchase to produce plastic automatic binding bricks — the precursor for the LEGO® bricks we know and love (and obsess over) today.

ChatGPT is one of those fortunate demos that opened a lot of eyes. Unlike a Sticky Note®, it stuck seemingly for good. But it’s a dead-end for many reasons that may become obvious as we explore a possible future without prompts. One might even ask—how is that even possible? We’ll get to that shortly.

Tired and Wired

Let’s list some of the pros and cons associated with the underlying nature of prompts.

Wired: Prompts make work magical.

Undoubtedly, if you get into a groove writing good prompts, it’s delightful and rewarding to command a bunch of GPUs to do your work. You are finally the master of your domain. You feel the power and control over information.

Tired: Prompts require an intellectual effort for an outcome with - as of now - variable quality. Humans don’t like to play this kind of game. (Alexandre Kantjas)

True. It’s not easy to create useful prompts. Vast energies are being applied by everyone using generative AI, and little (if any) of that energy is stored and reusable. Show of hands, how many of you like the multi-hour tussle of dialing in a prompt?

Wired: Prompts require an intellectual effort and that is a good thing because it forces humans to think (again). (Alexandre Kantjas)

True. But should humans spend their limited time thinking about how to generate things to think more deeply about? I’m on the fence about this advantage.

Tired: Indirectly, prompts give gen AI a bad name because new users experience terrible outcomes, give up, and then tell their colleagues to not waste their time with generative AI.

No debate. This is happening a lot. Nichole Leffer recently said it best -

“Most people quickly write off AI as "all hype" and "not ready for prime time" after a few bad experiences - whether because they used a garbage tool, tried to use a good tool with zero idea how to prompt, or listened to advice on prompting proliferated by self-proclaimed "AI Gurus" who know nothing about using AI beyond thinking jumping on the bandwagon can make them rich.” — Nichole Leffer
Tired: Prompts are a hidden tax levied on productivity without accountability. We cannot calculate net productivity gains, if there are gains at all.

Very little work has been done to determine how productive generative AI truly is. Most hypesters on LinkedIn claim 7X, 10X, and even 100X productivity gains. This is all bullshit, of course. Productivity is central to generative AI, but like all ROI calculations, we leave out many contra-economic components and herald nothing but net. Swoosh! You’re a 10X developer in the snap of your fingers. Please.

Wired and Tired: Prompts are ideally suited to expand the gap that no-code attempts to close. They serve as an anti-democratic wedge for knowledge.

This is a boon for consultants and hucksters struggling to carve out a niche in the shadow of no-code platforms that democratize data management. It also opens a gap between those who can “program” natural language prompts better than everyday workers.

Tired: Prompts are the central cause for security, privacy, and accuracy risks in gen AI. The closer humans get to the core of any technology, the greater the risks.

This seems to be truthful. Micheal Silva at Emely AI will be quick to provide the harsh reality of prompting risks in ChatGPT. Examining the deeper cause for these risks is worthwhile.

  • Tools that wrap generative AI in a cloak of better usability are often hurried to market. They lack forethought and design choices that blend well with existing identity and security platforms.
  • Applications from start-ups are raced to market with one purpose - to acquire as many users as possible. We saw this with Web 1.0 and 2.0. SaaS platforms needed many years to backtrace their steps to establish formidable security. Many still are without privacy platforms.

In the early 80s, I was learning assembly programming. I was poking values into registers. I was the closest to the hardware that a developer could possibly get. One wrong move, and it’s over.

Modern languages make it possible to abstract developers away from the hardware. Libraries make this possible. Modern software applications provide additional abstractions that defend not only the machinery but also the data. Security abstractions add yet another layer of safety. As we pull back, we can observe that no-code platforms add yet additional layers that make it possible for democratized solution building.

The closer humans get to the core of any technology, the greater the risks.

These layers—built entirely with software—are designed to make computing safer, more productive, and resilient. These are the engineering tenets that make highly accurate business transactions possible.

And yet… we hand everyone the keys to an LLM.

As you sit in front of your ChatGPT screen pondering that next prompt, know that you are talking directly to the LLM. You and everyone else using ChatGPT are assembler-level generative AI programmers who are trying to conjure sensical outputs from a big blob of numbers.

Perhaps I’m over-sensationalizing this relationship between prompts and LLMs, but let’s be clear - you are fundamentally as close to the LLM as you can be without building the LLM itself, also made possible for nearly everyone in these early days of generative AI.

You’ve been given the ear of the LLM, and you can command it to do anything you want. Prompts are a direct and immediate pathway to the machinery known as the large language model.

And that’s the problem. It leads to:

  • Failures that are mistaken as hallucinations2
  • Misleading conclusions
  • Poor fitness-of-purpose for solutions
  • Compromised security and privacy
  • Compromised IP protection

As experienced ChatGPT users, I’m certain you could add to this list.

What’s the Remedy?

Software. History shows us why.

When Vulcan arrived in 1980, what came next? Base II, the first no-code database platform, became hyper-successful with software that framed its early personal database capabilities in a realm that was resilient and easier to use. Left in its natural state, it would not have become a near-billion-dollar company in the early 80s, a feat that, in today’s dollars, is one of the first software unicorns.

This pattern has existed since the late 70s. There’s no indication the pattern is changing, or that generative AI is exempt from this pattern.

Copilots: Promptless Generative AI

Name any significant technological advance in computing over the past fifty years, and one thing stands out that transformed these advancements into pervasively useful products—software.

We will need more engineers doing what engineers have always done—build architectures that make this new generative AI capability thrive. We're going to need a lot more software engineers.

As I suggested early last year - generative AI has made possible apps that were inconceivable just 12 months ago.

It has unearthed visions of future solutions that were financially impractical until now. The explosion in software development will be fueled by a demand that is invisible today, but obvious in a few years.

As Kevin Xu said -

There are way more technologies that ought to be built that aren’t.

You can’t end prompting today. But you can stop assuming prompts have a bright future in the hands of users. They don’t.

AI itself will kill the need for direct LLM prompting.

Prepare for the Promptless Future

You might think you cannot do much to brace for this inevitable impact. I think otherwise. You can find ways to abstract yourself and your users from LLMs in many places.

Think copilot in everything you build.

My Substack members are enjoying a comprehensive list of possible pathways that may help. Subscribe with a 7-day free trial to read the rest of this article.

If you have ideas and you need to bounce them off a nutty person who sees a promptless future, you know where to find me.


[1] Indeed, I may be nutty, but this article is not the reason.

[2] When the AI decides to make stuff up, we think it's hallucinating. It’s not; it’s behaving exactly as LLMs are designed to do—expound on a topic and embellish as needed. At the heart of an AI solution is the prompt, which attempts to guide the LLM to a satisfactory output. Ironically, we benefit greatly when LLMs exercise a degree of verbosity. But this also comes with the possibility the AI may be too exuberant, resulting in long-windedness or the prospect of it altogether abandoning reason. This is the dark side of artificial intelligence. Lacking specific guidance in carefully constructed prompts, LLMs are left to generalize independently - it’s what they do well.


Albert Chan

Meta NA Director & Head of Sales // Teacher // Board of Advisory // Author

8mo

Kudos on highlighting the potential risks of interacting with language models at a close proximity.

It's crucial to prioritize safety and avoid the potential risks involved in conversing with language models directly.

Steve Taplin

CEO at Sonatafy, AI/ML led Nearshore Software Development synced with US time zones for maximum Productivity & Collaboration | Forbes & Entrepreneur Author

8mo

Such an interesting perspective on the risks posed by the proximity to core technologies! 📚🤔

Haitham Khalid

Manager Sales | Customer Relations, New Business Development

8mo

A promptless future would minimize the inherent risks and uncertainties tied to direct interactions with LLMs

To view or add a comment, sign in

More articles by Bill French

  • No-Code: Stuck at the Trailing Edge

    No-Code: Stuck at the Trailing Edge

    The acceleration of software development driven by new possibilities with generative AI is evident everywhere we roam…

    15 Comments
  • Gemini has a Memory Feature Too

    Gemini has a Memory Feature Too

    The recent hype and ecstatic cheers about OpenAI's new memory feature created a swarm of happy users and some…

    5 Comments
  • AI vs Jobs

    AI vs Jobs

    Hypothesis: If I'm an AI Prompt Expert, My Job Will Be Protected Discussions about generative AI’s invasive assault on…

    2 Comments
  • Pervasive Copilots: Just-in-Time Knowledge

    Pervasive Copilots: Just-in-Time Knowledge

    It's been a few AI minutes since I revisited Copilots, so this update will reveal a lot of new ideas and awareness that…

    2 Comments
  • Why is SmartSuite becoming popular with front-end applications?

    Why is SmartSuite becoming popular with front-end applications?

    Avi Hercenberg posited this question recently. It's interesting because it drives to the heart of my fondness for the…

    6 Comments
  • 10X, 100X, 500X - Please Stop Saying That

    10X, 100X, 500X - Please Stop Saying That

    A recent comment about Martin Crowley's library of 500 GPT prompts triggered me. This is one of thousands of claims…

    3 Comments
  • ChatGPT: The Copilot Without a Plane

    ChatGPT: The Copilot Without a Plane

    If you pay close attention to the activities where generative AI produces high value, ChatGPT is not among them. Bard…

    6 Comments
  • No-Code Solution Defects: 88%?

    No-Code Solution Defects: 88%?

    Almost certainly, because that's the rate for spreadsheet defects. Spreadsheets ushered in the no-code revolution…

    5 Comments
  • Make, Zapier: Bubble?

    Make, Zapier: Bubble?

    No-code solutions lean on integration adhesives. Is it a healthy dependency? Or one to be concerned about? Years ago, I…

    20 Comments
  • Podcast Overdubbing: AI's Future Fiction

    Podcast Overdubbing: AI's Future Fiction

    Have you ever listened to a podcast and thought ..

    3 Comments

Insights from the community

Others also viewed

Explore topics