The Sales Insider

Key Takeaways

  • Roughly 3 in 4 prompts to a modern AI demo tool succeed on the first try. The ones that don’t almost always share a small set of fixable habits.
  • Prompts that wrap exact text in quotes are about twice as likely to land on the first attempt as prompts that describe what to change in general terms.
  • Prompts phrased as questions (“how do I…”) get a written explanation back roughly 3x more often than prompts phrased as commands (“change X to Y”).
  • The single biggest cause of stuck conversations is vague reference: “this,” “that,” and “the selected element” appear about twice as often in failed conversations as in successful ones.
  • Compound prompts that bundle three or four unrelated changes fail more often than focused ones, because a single misstep marks the whole thread as failed.
  • Roughly half of the conversations that stall could have been resolved with a clearer prompt. The other half are genuine product or tool limits, not user error.

The gap between AI demo tools that work and AI demo tools that get blamed

AI-driven demo creation has changed how sales teams ship product walkthroughs. Reps no longer need to wait in a queue for a sales engineer or block off a half-day to capture screens, annotate elements, and stitch together a flow. They describe what they need, and the demo gets built or edited in minutes. By 2030, Gartner expects 70% of routine sales tasks to be automated (Gartner, The Future of Sales 2030), and AI-assisted demo creation is exactly the kind of work that fits the trend.

But here’s the part no one talks about: the AI is only as fast as the prompt. A vague prompt produces a vague result, a follow-up question, or a polite refusal. A specific prompt produces a working demo. The difference between a rep who ships three demos a day with AI assistance and a rep who gives up after two failed tries usually comes down to a handful of prompting habits, not the AI itself.

We pulled the actual conversation logs to see this in detail. Over the last 90 days, we looked at more than 1,000 real conversations between sales reps and Walnut’s AI Mode, the in-editor agent that creates and edits demos from natural-language prompts. About 3 in 4 succeeded cleanly. The other quarter shared patterns that, in most cases, were one rewrite away from working. This post is the field guide we wrote for our own customers, with the actual examples (good and bad) that surfaced in the data.

Why some AI demo prompts succeed and others stall

There’s a temptation to blame the AI when a prompt doesn’t land. Sometimes that’s fair. But in the conversations we analyzed, the majority of stuck threads came down to one of five prompting habits. Each one is fixable in a single line.

The good news is that these habits aren't reserved for engineers or power users. Once a rep sees the pattern, the fix sticks. The five habits below are ordered by how often they show up in failed conversations and how big the lift is from fixing them.

Habit 1: Wrap exact text in quotes

If you want an AI demo tool to find or change a specific word or phrase, put the exact words in quotes. Same goes for whatever you want it changed to. In the conversations we analyzed, prompts that used quoted text were roughly twice as likely to land a clean first-try result as prompts that referred to the target in general terms.

Here’s what works:

  • change all text strings in the screens which include the Exact Text “Extended ECM” to “Content Management”
  • In this demo, replace all “Senior Software Engineer” with “Product Analyst”
  • Replace this text with “Dario Garcia”

Each of these is unambiguous. There’s a literal string to look for and a literal string to swap in. The AI doesn’t need to guess.

Here’s what stalls:

  • Update email address to generic email address
  • Replace all the phone numbers in this display with (303) 555 numbers

The first doesn’t tell the AI which email or what counts as “generic.” The second asks the AI to identify a pattern (“phone numbers”) and replace it with another pattern (“(303) 555 numbers”) that isn’t a string. Both rewrites take ten seconds: Replace “alex@walnut.io” with “sample@acme.com” on every screen, or Replace every phone number you find with “(303) 555-0142”.

One subtle trap: even when a rep does use quotes, the spelling has to match the demo. We saw a prompt asking the AI to change “Nawest” to “NatWest”, which followed the rule but failed because the actual typo in the demo was spelled differently. When you can, copy the target string straight from the demo so it matches character-for-character.
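That mismatch is easy to catch before it costs a round-trip. A minimal Python sketch of the idea — check the quoted target against the demo text locally before sending (the function and the sample `demo_text` are illustrative, not part of any Walnut API):

```python
import re

def check_prompt_targets(prompt: str, demo_text: str) -> list[str]:
    """Return quoted find-targets from the prompt that are missing in the demo."""
    quoted = re.findall(r'[“"]([^”"]+)[”"]', prompt)
    # The first quoted string is usually the find target; later ones are
    # replacements, which don't need to exist in the demo yet.
    targets = quoted[:1]
    return [t for t in targets if t not in demo_text]

check_prompt_targets('change “Nawest” to “NatWest”',
                     'Welcome to Natwest Online Banking')
# → ['Nawest'] — the typo in the demo is spelled differently, so this
#   prompt would fail; copy the string straight from the demo instead.
```

If the check returns a non-empty list, copy the flagged string from the demo character-for-character and try again.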

Habit 2: Tell the AI to do the thing, don’t ask if it can

This is the single biggest source of misunderstood prompts. An AI demo tool like AI Mode is an agent. Its job is to take action. But when reps phrase their request as a question, the AI responds the way the words ask: with a helpful explanation rather than the change itself.

In our data, prompts that started with “how do I” or “how can I” received a written explanation about three times more often than prompts phrased as commands. The user wanted the change made. They got a paragraph instead.

Compare these two:

  • how do i edit element on page
  • Change the “Sign in” button on screen 1.2 to read “Log in”

The first gets a knowledge-base article. The second gets the change. Both took roughly the same number of words.

Other prompts from our data that got an explanation instead of an action:

  • how to unlink something
  • How do you delete a video in a module
  • do i have ability to toggle buttons on and off
  • how can I bring down an icon in the screen? its not in the right place

The fixes are all simple. Unlink this screen. Move the calendar icon down so it lines up with the row above. Toggle off the secondary CTA on every screen. Each one ships in a single message.

A quick gut-check before sending: if the prompt ends in a question mark and starts with "how," rewrite it as a command. This one habit shift would have rescued the largest single category of stuck conversations in our dataset.
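That gut-check is simple enough to automate as a pre-send lint. A rough Python heuristic (the opener list is our guess at common question phrasings from the examples above, not an exhaustive rule):

```python
def reads_as_question(prompt: str) -> bool:
    """Heuristic: flag prompts likely to get an explanation, not an action."""
    p = prompt.strip().lower()
    question_openers = ("how do", "how can", "how to",
                        "do i have", "is there a way")
    return p.startswith(question_openers)

reads_as_question("how do i edit element on page")  # → True: rewrite as a command
reads_as_question("Delete guide 2")                 # → False: ships as an action
```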

Habit 3: Name the scope

An AI demo tool can act on the current screen, one specific screen, a single guide step, or the whole template. But it needs the user to say which. In conversations that succeeded, the scope was almost always explicit.

Examples of clear scope that worked:

  • on the current slide, are you able to change each instance of “Mindy MarketDemo” to “Logan McNeill”?
  • Replace all 2025 to 2026 on every screen
  • In this demo, replace all “Senior Software Engineer” with “Product Analyst”
  • Move the first annotation of the first guide to the last guide

Examples that stalled because the scope was ambiguous:

  • Looking for a typo, does this contain “Selectins” at all?
  • Refine the text for the first annotation

The first prompt is unclear about whether to search this element, this screen, or every screen. The second doesn’t say which guide or screen the annotation lives in. The AI either picks the wrong interpretation or has to ask, slowing the conversation down.

A short library of scope phrases handles most situations:

  • on the current screen
  • on screen 1.4
  • across this template or across this demo or on every screen
  • in the first guide or in guide 2
  • in the column called “X”

Pin one of these to the start or end of every prompt and a large category of stuck conversations disappears.
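Pinning a scope phrase can even be enforced mechanically. A small Python sketch that appends a default scope when the prompt doesn't name one (the phrase list mirrors the library above; the default scope is our assumption):

```python
SCOPE_PHRASES = (
    "on the current screen", "on screen", "across this template",
    "across this demo", "on every screen", "in the first guide", "in guide",
)

def ensure_scope(prompt: str, default_scope: str = "on the current screen") -> str:
    """Append a scope phrase when the prompt doesn't already name one."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in SCOPE_PHRASES):
        return prompt
    return f"{prompt.rstrip('. ')} {default_scope}"

ensure_scope('Replace “2025” with “2026”')
# → 'Replace “2025” with “2026” on the current screen'
```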

This habit also pairs well with personalization workflows. Personalized demo flows where narrative, data, and highlighted features are tailored to a specific account drive conversion lifts in the 30 to 45% range. Getting AI assistance to apply those personalizations consistently across the right scope is exactly what good scope phrases make possible.

Habit 4: Be careful with “this,” “that,” and “the selected element”

Pointing words like “this” and “that” are useful when the rep has actually selected something in the editor. When nothing is selected, the AI has no idea what to act on. In our data, vague pointer words appeared about twice as often in conversations that stalled as in those that succeeded.

Examples that worked because something really was selected:

  • unblur this element (the rep had clicked the blurred element first)
  • Can you blur the title?
  • Replace this text with “Dario Garcia” (text element was selected)

Examples that stalled because the pointer didn’t connect to anything:

  • hide the selected element on all screens (nothing was selected)
  • Get ride of that cursor movement
  • Refine the text for the first annotation (which one?)
  • change the velocity logo in the top left corner (the “logo” was actually rendered as text, not an image)

A working rule: if you use “this” or “that,” confirm in your head that something is actually selected in the editor. If you’re not sure, describe the element instead. The “Velocity” wordmark in the top-left corner of every screen is unambiguous. The velocity logo is not.

Habit 5: One ask per prompt

Compound prompts that bundle multiple changes into one message fail more often than focused ones. Part of this is technical: if any step in a three-step ask trips up, the whole conversation gets reported as failed. Part of it is prompt design: the AI plans three things at once and loses the thread on one of them.

A real example of a compound prompt that mostly worked but ended in a failed state:

change the Confirmed bookings count to 2300 & accordingly also change the bar size. Change color of that bar to green.

The AI shipped the count and the color. The bar resize didn’t go through because of a chart-level limitation. The conversation was logged as a failure even though two out of three changes landed. Three separate prompts would have shipped all three.

Another compound prompt that stalled:

Translate demo into chinese and generate audio

These are two heavy, distinct jobs. Either one alone usually works fine. Together, the conversation gets stuck. The fix is to lead with the most important change, confirm it landed, and then move to the next one.

This isn’t a productivity tax. The follow-up pattern is faster than the compound pattern because the rep can react to each result, course-correct in real time, and avoid one-shot failures that wipe out otherwise good work.
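The split itself can be mechanical. A rough Python sketch that breaks a compound message into one-ask prompts on common joiners (the joiner list is a heuristic drawn from the examples above, not a full parser):

```python
import re

def split_compound_prompt(prompt: str) -> list[str]:
    """Break a compound prompt into one-ask messages on common joiners."""
    parts = re.split(r'\s*(?:&|;| and then )\s*|(?<=[.!])\s+', prompt)
    return [p.strip() for p in parts if p.strip()]

split_compound_prompt(
    "change the Confirmed bookings count to 2300 & accordingly also "
    "change the bar size. Change color of that bar to green."
)
# → ['change the Confirmed bookings count to 2300',
#    'accordingly also change the bar size.',
#    'Change color of that bar to green.']
```

Send the pieces one at a time, confirming each before the next, and a single unsupported step no longer wipes out the whole thread.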

When it’s not the prompt, it’s the tool

Not every stuck conversation is a prompting problem. About half of the stalled conversations in our data could have been resolved with a cleaner prompt. The rest were genuine tool gaps or product limits.

Things that aren’t prompt-solvable today and need a human in the editor:

  • Editing images by adding or removing specific objects inside them (the AI can regenerate a whole new image, not surgically edit an existing one)
  • Changing chart bar widths or other low-level chart styling
  • Editing text that’s been baked into an image rather than rendered as HTML
  • Reverting to a specific earlier message in a long conversation
  • Cropping or trimming media

When a rep hits one of these, the right move is to flag it with a thumbs-down and a quick note, then handle the change manually. Those notes are exactly what product teams use to prioritize the next round of capabilities. Modern demo platforms ship new AI capabilities on a regular cadence, and the gaps from one quarter are often closed by the next.

The reason to spot a tool gap quickly is simple: the rep stops wasting time tweaking prompts that can’t succeed, and the product team gets the signal it needs.

How to make these habits stick across your team

Five habits in a doc don’t change behavior. Operationalizing them does. Here’s what we’ve seen work in the field.

Run a 30-minute prompt clinic with new reps

Have a senior rep or sales engineer pull up the AI demo tool, share their screen, and walk through three real demo-customization tasks. Use the five habits as the scorecard. New reps learn faster from watching one strong prompt than from reading ten pages of theory.

Create a “prompt template” library in your sales enablement hub

The most common customizations (anonymize names, swap company name, update dates, change currency, translate to local language) account for the bulk of AI demo usage. Pre-write the prompt for each of these and store them somewhere reps can copy from. Anonymize all names by replacing them with: Alex Jordan, Priya Singh, Marco Costa works far better than Anonymize personal content, and the rep doesn’t have to think about it each time.
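In practice the library can be as simple as a dict of fill-in templates. A sketch (the template names and wording are ours, not a Walnut feature):

```python
PROMPT_TEMPLATES = {
    "swap_company": 'Replace “{old}” with “{new}” on every screen',
    "anonymize":    'Replace any real names with: {names} on every screen',
    "update_dates": 'Replace “{old_date}” with “{new_date}” on every screen',
}

def fill(key: str, **values: str) -> str:
    """Fill a stored prompt template so reps don't rewrite it each time."""
    return PROMPT_TEMPLATES[key].format(**values)

fill("swap_company", old="Velocity", new="Acme")
# → 'Replace “Velocity” with “Acme” on every screen'
```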

Treat thumbs-down as a product signal, not a complaint

When a rep marks a conversation as unhelpful, route those notes to whoever owns AI tool adoption on your team. Patterns surface quickly: maybe three reps in a row tried to change chart styles in a way that’s not yet supported, which means the team should know about the gap before they each independently waste time on it.

Set the expectation that AI is a first draft, not a final draft

The reps who get the most out of AI demo creation treat it as a way to produce a strong starting point faster, then polish manually. They don’t expect the AI to ship a production-ready demo in one prompt, and they don’t get frustrated when it asks a clarifying question. Once your team frames AI Mode as a co-pilot rather than a vending machine, success rates climb.

Walnut customers running AI Mode against their interactive demo libraries report that the combination of AI-assisted creation and good prompting practice has cut demo turnaround from hours to minutes, with average demo completion rates around 67%. The prompting habits in this post are the closest thing we’ve found to a multiplier on those numbers.

FAQ

How do I write a prompt for an AI demo tool?

Start with what you want to find (in quotes), what you want it changed to (in quotes), and where the change should happen (this screen, this guide, every screen). Phrase it as a command, not a question. Keep it to one ask per prompt. If you follow those four points, you’ll land a working result on the first try most of the time. Example: Replace “Q4 2025” with “Q1 2026” on every screen.

Why does my AI demo tool give me instructions instead of actually doing the change?

Because you phrased your request as a question. AI demo agents respond to the words you use. “How do I delete a guide?” reads as a knowledge question, so the AI returns instructions. “Delete guide 2” reads as a command, so the AI deletes it. The fastest fix is to re-read your prompt before sending and rewrite anything starting with “how” as a direct instruction.

Should I select an element before prompting the AI?

Yes, if your prompt uses words like “this,” “that,” or “the selected element.” If nothing is actually selected, the AI has nothing to point at. As an alternative, describe the element so the AI can find it: The “Velocity” wordmark in the top-left corner works without a selection. This logo doesn’t.

Can an AI demo tool change something on every screen at once?

Yes, if you tell it to. Phrases like on every screen, across this template, or across this demo tell the AI to apply the change at template scope rather than just the current view. Without that phrase, the AI usually defaults to the screen you’re looking at.

How long should an AI demo prompt be?

There’s no minimum or maximum length that matters. In our data, successful and failed prompts had nearly identical average lengths. What matters is whether the prompt names the target (in quotes), the destination (in quotes), and the scope. A 12-word prompt that hits all three works better than a 60-word prompt that doesn’t.

Can I prompt an AI demo tool in a language other than English?

Yes. Find-and-replace and most editing operations work in any language the demo is written in. In fact, writing the prompt in the same language as the demo content avoids translation mismatches. We see successful prompts in German, Spanish, French, and Portuguese in our data, often using the source text verbatim.

What’s the difference between asking the AI to personalize a demo versus anonymizing it?

Both are common requests, and both fail more often than they succeed when phrased generically. Personalize this for Acme doesn’t tell the AI what to change. Anonymize personal content doesn’t tell the AI what counts as personal. Stronger prompts include the target values: Replace the customer name “Velocity” with “Acme” everywhere. Replace any logos with the Acme logo I’ll upload next. Or for anonymization: Replace any real names with: Alex Jordan, Priya Singh, Marco Costa. Replace company names with “Acme.” Replace email addresses with sample@acme.com.

Ready to see what personalized demos can do for your pipeline? Start for free with Walnut.
