Tutorial How-To
Table of Contents
- Chapter 1: Basic Prompt Structure
- Chapter 2: Being Clear and Direct
- Chapter 3: Assigning Roles (Role Prompting)
- Chapter 4: Separating Data from Instructions
- Chapter 5: Formatting Output & Speaking for Claude
- Chapter 6: Precognition (Thinking Step by Step)
- Chapter 7: Using Examples (Few-Shot Prompting)
- Chapter 8: Avoiding Hallucinations
- Chapter 9: Complex Prompts from Scratch - Chatbot
- Appendix: Chaining Prompts
---
Introduction
This interactive tutorial provides a comprehensive, hands-on journey through Claude prompt engineering using Claude for Sheets. It covers core techniques such as prompt structure, clarity, role prompting, data separation, output formatting, stepwise thinking, use of examples, hallucination avoidance, and building complex prompts for industry scenarios. Readers will learn practical patterns, templates, and best practices to craft effective Claude prompts within Google Sheets and beyond.
Chapter 1: Basic Prompt Structure
Page Contents
Lesson
Examples
Example Playground
The Claude for Sheets extension offers several functions you can use to call Claude. One such function is
CLAUDEMESSAGES(), which is built to reflect the Messages API structure.
CLAUDEMESSAGES() can take several parameters, in the following order:
1. Your prompt, in quotation marks
2. The model version, in quotation marks
3. Any optional additional parameters, such as temperature, system prompt, max tokens, etc.
Note: Temperature correlates to the degree of variability in Claude's answer. For these exercises, we have set the
"temperature" to 0. In Chapter 8, we'll dive deeper into temperature.
Anywhere in this sheet, you can call Claude by using the CLAUDEMESSAGES() formula. The basic formula is structured
like this: =CLAUDEMESSAGES("{PROMPT}", "{MODEL_VERSION}", "system", "{SYSTEM_PROMPT}")
For example, to call Claude 3 Haiku with a prompt located in cell A1, you would write: =CLAUDEMESSAGES(A1,
"claude-3-haiku-20240307", "system", "Respond only in Esperanto")
How does the CLAUDEMESSAGES() structure correlate with the Messages API structure? Let's look a few examples of
prompts sent to CLAUDEMESSAGES() along with the underlying formula.
Examples Back to top ↑
Let's take a look at how Claude responds to some correctly-formatted prompts. Don't worry about how the answers are
being generated.
Prompt Claude's Response
User: Hi Claude, how are you?
➤
#ERROR!
User: Can you tell me the color of the ocean?
➤
#ERROR!
User: Can you tell me the color of the ocean?
➤
#ERROR!
User: What year was Celine Dion born in?
➤
#ERROR!
Now let's take a look at some prompts that do not include the correct formatting. For these malformatted prompts, the
CLAUDEMESSAGES() function returns an error. Here's a prompt that's missing "User:" at the beginning.
Prompt Claude's Response
What year was Celine Dion born in?
➤
=CLAUDEMESSAGES prompt should be in "User: ...
Assistant: ..." format, with "User: ..." first and a newline before
each subsequent role. For newlines, press Ctrl/Cmd+Enter
Here's a prompt that fails to alternate between the User and Assistant roles.
Prompt Claude's Response
User: What year was Celine Dion born in?
User: Also, can you tell me some other facts about
her? ➤
#ERROR!
Here's a prompt that has too many newlines at the beginning.
Prompt Claude's Response
User: What year was Celine Dion born in?
➤
=CLAUDEMESSAGES prompt should be in "User: ...
Assistant: ..." format, with "User: ..." first and a newline before
each subsequent role. For newlines, press Ctrl/Cmd+Enter
"User" and "Assistant" messages MUST alternate, and messages MUST start with a "User:" turn.
When using CLAUDEMESSAGES(), be sure to demarcate messages by inserting a newline between each message (a
message is a single User or Assistant turn). If you fail to do so, Claude will not return an error, but Claude will consider
everything not separated by a newline as part of a single message.
You can have multiple "User:" / "Assistant:" pairs in a prompt (as if simulating a multi-turn conversation). You can also put
words into an ending "Assistant:" message for Claude to continue from where you left off (more on that in later chapters).
You can use a system prompt to give Claude instructions and guidelines. A system prompt is a way to provide context,
instructions, and guidelines to Claude before presenting it with a question or task in the "User" turn.
Structural, system prompts exist separately from the list of User & Assistant messages, and thus belong in a separate
"system prompt" parameter when using CLAUDEMESSAGES().
To make this easy for you in this tutorial, we've provided system prompt input boxes that feed into a complete CLAUDEMESSAGES() formula used to call Claude.
System Prompt Claude's Response
Your answer should always be a series of critical
thinking questions that further the conversation (do not
provide answers to your questions). Do not actually
answer the user question.
➤
Prompt
User: Why is the sky blue?
Back to top ↑
Table of Contents
Chapter 1: Basic Prompt Structure - Exercises
Page Contents
Exercise 1.1 - Counting to Three
Exercise 1.2 - System Prompt
Exercise 1.1 - Counting to Three Back to top ↑
Using proper User/Assistant formatting, write a prompt in the YELLOW cell below to get Claude to count to three.
Remember, if your answer is correct (which means formatting and prompt is good), Claude's response cell will turn GREEN.
Prompt Claude's Response
User: [Replace this text]
➤
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 1.2 - System Prompt Back to top ↑
Modify the system prompt in the ORANGE box to make Claude output its answer in Spanish.
If your answer is correct, Claude's response cell will turn GREEN.
System Prompt Claude's Response
[Replace this text]
➤
#ERROR!
Prompt
User: Hello Claude, how are you?
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 1.1 - Counting to Three Back to top ↑
Chapter 1 Exercises: Basic Prompt Structure →
Example Playground Back to top ↑
This is an area for you to experiment freely with the prompt examples shown in this lesson. Feel free to tweak prompts to see how it may affect Claude's responses.
Note: The colors in the cells, which have been carried over from above, will not change even if the prompt or Claude's response changes.
Prompt Claude's Response
User: Hi Claude, how are you?
➤
#ERROR!
User: Can you tell me the color of the ocean?
➤
#ERROR!
User: What year was Celine Dion born in?
➤
#ERROR!
What year was Celine Dion born in?
➤
=CLAUDEMESSAGES prompt should be in "User: ...
Assistant: ..." format, with "User: ..." first and a newline before
each subsequent role. For newlines, press Ctrl/Cmd+Enter
User: What year was Celine Dion born in?
User: Also, can you tell me some other facts about
her? ➤
#ERROR!
System Prompt Claude's Response
Your answer should always be a series of critical
thinking questions that further the conversation (do not
provide answers to your questions). Do not actually
answer the user question.
➤
Prompt
User: Why is the sky blue?
Back to top ↑
Left Chapter 1: Basic Prompt Structure →
Chapter 2: Being Clear and Direct →
Back to top ↑
Chapter 2: Being Clear and Direct
Page Contents
Lesson
Examples
Example Playground
Chapter 2: Being Clear and Direct - Exercises
Page Contents
Exercise 2.1 - Spanish
Exercise 2.2 - One Player Only
Exercise 2.3 - Write a Story
Exercise 2.1 - Spanish Back to top ↑
Adapt the system prompt in the ORANGE box to make Claude output its answer in Spanish.
If your answer is correct, Claude's response cell will turn GREEN.
System Prompt Claude's Response
[Replace this text]
➤
#ERROR!
Prompt
User: Hello Claude, how are you?
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 2.2 - One Player Only Back to top ↑
Modify the basketball player prompt in the YELLOW prompt box so that Claude doesn't equivocate at all and responds with ONLY the name of
one specific player, with no other words or punctuation.
If your answer is correct, Claude's response cell will turn GREEN.
Prompt Claude's Response
User: Who is the best basketball player of all time?
➤
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 2.3 - Write a Story Back to top ↑
Modify the prompt in the YELLOW prompt box so that Claude responds with as long a response as you can muster. If your answer is over 800
words, Claude's response cell will turn GREEN. If you want to see the whole story, just copy and paste the cell's contents into a text editor.
Note: If you did your job well, it will be "Loading" its response for a while, as the time Claude takes to respond is largely proportional to the
number of output tokens.
Prompt Claude's Response
User: Can you write me a story?
➤
#ERROR!
← Chapter 1 Exercises: Basic Prompt Structure Chapter 2 Exercises: Being Clear and Direct →
Back to top ↑
Table of Contents
Chapter 2: Being Clear and Direct - Exercises
Page Contents
Exercise 2.1 - Spanish
Exercise 2.2 - One Player Only
Exercise 2.3 - Write a Story
Exercise 2.1 - Spanish Back to top ↑
Adapt the system prompt in the ORANGE box to make Claude output its answer in Spanish.
If your answer is correct, Claude's response cell will turn GREEN.
System Prompt Claude's Response
[Replace this text]
➤
#ERROR!
Prompt
User: Hello Claude, how are you?
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 2.2 - One Player Only Back to top ↑
Modify the basketball player prompt in the YELLOW prompt box so that Claude doesn't equivocate at all and responds with ONLY the name of
one specific player, with no other words or punctuation.
If your answer is correct, Claude's response cell will turn GREEN.
Prompt Claude's Response
User: Who is the best basketball player of all time?
➤
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 2.3 - Write a Story Back to top ↑
Modify the prompt in the YELLOW prompt box so that Claude responds with as long a response as you can muster. If your answer is over 800
words, Claude's response cell will turn GREEN. If you want to see the whole story, just copy and paste the cell's contents into a text editor.
Note: If you did your job well, it will be "Loading" its response for a while, as the time Claude takes to respond is largely proportional to the
number of output tokens.
Prompt Claude's Response
User: Can you write me a story?
➤
#ERROR!
← Chapter 1 Exercises: Basic Prompt Structure Chapter 2 Exercises: Being Clear and Direct →
Back to top ↑
Chapter 3: Assigning Roles (Role Prompting)
Page Contents
Lesson
Examples
Example Playground
Chapter 3: Assigning Roles (Role Prompting)
Continuing on the theme of Claude having no context aside from what you say, it's sometimes important to prompt Claude to inhabit a specific role (including all necessary
context). This is also known as role prompting. The more detail to the role context, the better.
Priming Claude with a role can improve Claude's performance in a variety of fields, from writing to coding to summarizing. It's like how humans can sometimes be helped when
told to "think like a ______". Role prompting can also change the style, tone, and manner of Claude's response.
Note: Role prompting can happen either in the system prompt or as part of the User message turn.
Examples Back to top ↑
In the example below, we see that without role prompting, Claude provides a straightforward and non-stylized answer when asked to give a single sentence perspective on
skateboarding.
However, when we prime Claude to inhabit the role of a cat, Claude's perspective changes, and thus Claude's response tone, style, content adapts to the new role.
Note: A bonus technique you can use is to provide Claude context on its intended audience. Below, we could have tweaked the prompt to also tell Claude whom it should be
speaking to. "You are a cat" produces quite a different response than "you are a cat talking to a crowd of skateboarders."
System Prompt (without Role Prompting) Claude's Response
➤
#ERROR!
Prompt
User: In one sentence, what do you think about
skateboarding?
Here is the same user question, except with role prompting.
System Prompt (with Role Prompting) Claude's Response
You are a cat.
➤
#ERROR!
Prompt
User: In one sentence, what do you think about
skateboarding?
System Prompt (without Role Prompting) Claude's Response
➤
#ERROR!
Prompt
User: Jack is looking at Anne. Anne is looking at George. Jack
is married, George is not, and we don’t know if Anne is
married. Is a married person looking at an unmarried person?
Now, what if we prime Claude to act as a logic bot? How will that change Claude's answer?
It turns out that with this new role assignment, Claude gets it right. (Although notably not for all the right reasons)
System Prompt (with Role Prompting) Claude's Response
You are a logic bot designed to answer complex logic
problems.
➤
#ERROR!
Prompt
User: Jack is looking at Anne. Anne is looking at George. Jack
is married, George is not, and we don’t know if Anne is
married. Is a married person looking at an unmarried person?
➤
#ERROR!
← Chapter 2 Exercises: Being Clear and Direct Chapter 3 Exercises: Assigning Roles →
Back to top ↑
Table of Contents
Chapter 3: Assigning Roles - Exercises
Page Contents
Exercise 3.1 - Math Correction
Exercise 3.1 - Math Correction Back to top ↑
In some instances, Claude may struggle with mathematics, even simple mathematics. Below, Claude incorrectly assesses the math problem as
correctly solved, even though there's an obvious arithmetic mistake in the second step. Note that Claude actually catches the mistake when going
through step-by-step, but doesn't jump to the conclusion that the overall solution is wrong.
Adapt the text in the YELLOW prompt box and / or the ORANGE system prompt box to make Claude grade the solution as incorrectly solved,
rather than correctly solved.
If your answer is correct, Claude's response cell will turn GREEN.
System Prompt Claude's Response
➤
#ERROR!
Prompt
User: Is this equation solved correctly below?
2x - 3 = 9
2x = 6
x = 3
If you need a hint, click on the plus sign ( ) on the far left of this row.
← Chapter 3: Assigning Roles Chapter 4: Separating Data and Instructions →
Back to top ↑
Table of Contents
Chapter 4: Separating Data from Instructions
Page Contents
Lesson
Examples
Example Playground
Lesson Back to top ↑
Oftentimes, we don't want to write full prompts, but instead want prompt templates that can be modified later with additional input data before submitting to Claude. This might come in handy if you want Claude to do the same thing every time, but the data that Claude uses for its task might be different each time.
Luckily, we can do this pretty easily by separating the fixed skeleton of the prompt from variable user input, then substituting the user input into the prompt before sending the full
prompt to Claude.
Below, we'll walk step by step through how to write a substitutable prompt template, as well as how to substitute in user input.
Examples Back to top ↑
In this first example, we're asking Claude to act as an animal noise generator. Notice that the full prompt submitted to Claude (the YELLOW third box in the chain) is just the prompt template (first ox) substituted with the input (in this case, "Cow", in the second box). Notice that the word "Cow" replaces "{{ANIMAL}}" in the yellow third box.
Note: You don't have to call your substitution placeholder anything in particular. For this example, definitely use {{ANIMAL}}, as that's how the exercise is formatted. But in
general, just as easily, we could have called it "{{CREATURE}}" or "{{A}}" (but it's generally good to have your stand-ins be specific and relevant so that your prompt is easy to understand
even without the substitution). Just make sure that whatever you name your substitution placeholder is what you use for the substitution formula.
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: I will tell you the name of an
animal. Please respond with the
noise that animal makes. {{ANIMAL}}
﹢
Cow
=
User: I will tell you the name of an animal.
Please respond with the noise that animal makes. Cow
➤
#ERROR!
Why would we want to separate and substitute inputs like this? Well, prompt templates simplify repetitive tasks. Let's say you build a prompt structure that invites third party users to submit
content to the prompt (in this case the animal whose sound they want to generate). These third party users don't have to write or even see the full prompt. All they have to do is fill in variables.
We do this substitution here using spreadsheet functions, but this is a best practice for coding as well! We use the {{double-curly-brackets}} formatting in our own code.
Note: Prompt templates can have as many variables as desired.
When introducing substitution variables like this, it is very important to make sure Claude knows where variables start and end (vs. instructions or task descriptions). Let's look at an example
where there is no separation between the instructions and the substitution variable.
Prompt Template Input {{EMAIL}} Prompt After Substitution Claude's Response
User: Yo Claude. {{EMAIL}} <-----
Make this email more polite but don't
change anything else about it.
﹢
Show up at 6am tomorrow
because I'm the CEO and I
say so.
=
User: Yo Claude. Show up at 6am tomorrow
because I'm the CEO and I say so. <-----
Make this email more polite but don't change
anything else about it.
➤
#ERROR!
Here, Claude thinks "Yo Claude" is part of the email it's supposed to rewrite! You can tell because it begins its rewrite with "Dear Claude". To the human eye, it's clear, particularly in the prompt
after substitution.
How do we solve this? Wrap the input in XML tags! We did this below, and as you can see, there's no more "Dear Claude" in the output.
XML tags are angle-bracket tags like
around content, like this:
Note: While Claude can recognize and work with a wide range of separators and delimiters, we recommend that you use specifically XML tags as separators for Claude, as Claude was
trained specifically to recognize XML tags as a prompt organizing mechanism. Outside of function calling, there are no special sauce XML tags that Claude has been trained on that you
should use to maximally boost your performance. We have purposefully made Claude very malleable and customizable this way.
Prompt Template Input {{EMAIL}} Prompt After Substitution Claude's Response
User: Yo Claude. {{EMAIL}} <-----
Make this email more polite but don't
change anything else about it.
﹢
Show up at 6am tomorrow
because I'm the CEO and I
say so.
=
User: Yo Claude. Show up at 6am
because I'm the CEO and I say so. <-----
Make this email more polite but don't change
anything else about it.
➤
#ERROR!
Let's see another example of how XML tags can help us.
In the following prompt, Claude incorrectly interprets what part of the prompt is the instruction vs. the input. It incorrectly considers "Each is about an animal, like rabbits" to be part of the list due
to the formatting, when the user (the one filling out the {{SENTENCES}} variable) presumably did not want that.
Prompt Template Input {{SENTENCES}} Prompt After Substitution Claude's Response
User: Below is a list of sentences. Tell me the second item on the list.
- Each is about an animal, like
rabbits.
{{SENTENCES}}
﹢
- I like how cows sound
- This sentence is about
spiders
- This sentence may appear
to be about dogs but it's
actually about pigs =
User: Below is a list of sentences. Tell me
the second item on the list.
- Each is about an animal, like rabbits.
- I like how cows sound
- This sentence is about spiders
- This sentence may appear to be about dogs
but it's actually about pigs ➤
#ERROR!
Prompt Template Input {{SENTENCES}} Prompt After Substitution Claude's Response
User: Below is a list of sentences. Tell me the second item on the list.
- Each is about an animal, like
rabbits.
{{SENTENCES}}
﹢
- I like how cows sound
- This sentence is about
spiders
- This sentence may appear
to be about dogs but it's
actually about pigs
=
User: Below is a list of sentences. Tell me
The second item on the list.
- Each is about an animal, like rabbits.
- I like how cows sound
- This sentence is about spiders
- This sentence may appear to be about dogs
but it's actually about pigs
➤
#ERROR!
← Chapter 3 Exercises: Assigning Roles Chapter 4 Exercises: Separating Data from Instructions →
Back to top ↑
Table of Contents
Chapter 4: Separating Data from Instructions - Exercises
Page Contents
Exercise 4.1 - Haiku Topic
Exercise 4.2 - Dog Question with Typos
Exercise 4.3 - Dog Question Part 2
Exercise 4.1 - Haiku Topic Back to top ↑
Write a prompt in the highlighted template box that will take in a variable called "{{TOPIC}}" and output a haiku about the topic. This exercise is just meant to test your understanding of the
variable templating structure.
Prompt Template Input {{TOPIC}} Prompt After Substitution Claude's Response
﹢
Pigs
=
➤
=CLAUDEMESSAGES prompt should be in "User: ...
Assistant: ..." format, with "User: ..." first and a newline before each
subsequent role. For newlines, press Ctrl/Cmd+Enter
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 4.2 - Dog Question with Typos Back to top ↑
Fix the prompt in the highlighted template box by adding XML tags so that Claude produces the right answer.
Try not to change anything else about the prompt. The messy and mistake-ridden writing is intentional, so you can see how Claude reacts to such mistakes. Claude's response will turn GREEN if your prompt produces the right answer.
Prompt Template Input {{QUESTION}} Prompt After Substitution Claude's Response
User: Hia its me i have a q about
dogs jkaerjv {{QUESTION}} jklmvca
tx it help me muhch much atx fst fst
answer short short tx ﹢
ar cn brown?
=
User: Hia its me i have a q about dogs jkaerjv
ar cn brown? jklmvca tx it help me muhch much atx fst fst
answer short short tx
➤
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 4.3 - Dog Question Part 2 Back to top ↑
Fix the prompt in the highlighted template box WITHOUT adding XML tags. Instead, remove only one or two words from the prompt.
Just as with the above exercises, try not to change anything else about the prompt. This will show you what kind of language Claude can parse and understand. Claude's response will turn GREEN if your prompt produces the right answer.
Prompt Template Input {{QUESTION}} Prompt After Substitution Claude's Response
User: Hia its me i have a q about
dogs jkaerjv {{QUESTION}} jklmvca
tx it help me muhch much atx fst fst
answer short short tx ﹢
ar cn brown?
=
User: Hia its me i have a q about dogs jkaerjv
ar cn brown? jklmvca tx it help me muhch much atx fst fst
answer short short tx
➤
#ERROR!
← Chapter 4: Separating Data from Instructions Chapter 5: Formatting Output & Speaking for Claude →
Back to top ↑
Chapter 5: Formatting Output & Speaking for Claude
Page Contents
Lesson
Examples
Example Playground
Claude can format its output in a wide variety of ways. You just need to ask for it to do so!
One of these ways is by using XML tags to separate out the response from any other superfluous text. You've already learned that you can use XML tags to make your prompt clearer and
more parseable to Claude. It turns out, you can also ask Claude to use XML tags to make its output clearer and more easily understandable to humans.
Examples Back to top ↑
Remember the 'poem preamble problem' we solved in Chapter 2 by asking Claude to skip the preamble entirely? It turns out we can also achieve a similar outcome by telling Claude to put the
prompt in XML tags.
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: Please write a haiku about
{{ANIMAL}}. Put it in
﹢
Rabbit
=
User: Please write a haiku about Rabbit. Put
it in
➤
#ERROR!
Why is this something we'd want to do? Well, having the output in XML tags allows the end user to reliably get the poem and only the poem by writing a short program to extract the content
between XML tags.
An extension of this technique is to put the first XML tag AFTER "Assistant:". When you put text after "Assistant:", you're basically telling Claude that Claude has already said something,
and that it should continue from that point onward. This technique is called "speaking for Claude" or "prefilling Claude's response."
Below, we've done this with the first
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: Please write a haiku about
{{ANIMAL}}. Put it in
﹢
Cat
=
User: Please write a haiku about Cat. Put it in
#ERROR!
Claude also excels at using other output formatting styles, notably JSON. If you want to enforce JSON output (not deterministically, but close to it), you can also prefill Claude's response with
the opening bracket, "{".
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: Please write a haiku about
{{ANIMAL}}. Please write a haiku about Cat. Use JSON format with
the keys as "first_line", "second_line", and "third_line".
Assistant: {
Cat
=
User: Please write a haiku about Cat. Use
JSON format with the keys as "first_line",
"second_line", and "third_line".
Assistant: { ➤
#ERROR!
Below is an example of multiple input variables in the same prompt AND output formatting specification, all done using XML tags.
Prompt Template Input {{EMAIL}} Prompt After Substitution Claude's Response
User: Hey Claude. Here is an email:
email more {{ADJECTIVE}}. Write the
new version in <{{ADJECTIVE} } _email> XML tags.
Assistant: <{{ADJECTIVE}}_email>
﹢
Hi Zack, just pinging you
for a quick update on that
prompt you were
supposed to write.
=
User: Hey Claude. Here is an email:
update on that prompt you were supposed to write.. Make this email more olde
english. Write the new version in
english_email> XML tags.
Assistant:
➤
#ERROR!
Input {{ADJECTIVE}}
﹢
olde english
Bonus lesson: To learn more about controlling Claude's output using the API, click on the plus sign ( ) on the far left of this row!
If you're ready to try the chapter exercises, click the link below. If you want to play around with any of the examples in this lesson, scroll down!
Chapter 5 Exercises: Formatting Output & Speaking for Claude →
Example Playground Back to top ↑
This is an area for you to experiment freely with the prompt examples shown in this lesson. Feel free to tweak prompts to see how it may affect Claude's responses.
Note: The colors in the cells, which have been carried over from above, will not change even if the prompt or Claude's response changes.
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: Please write a haiku about
{{ANIMAL}}. Put it in
﹢
Rabbit
=
User: Please write a haiku about Rabbit. Put
it in
➤
#ERROR!
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: Please write a haiku about
{{ANIMAL}}. Put it in
Cat
=
User: Please write a haiku about Cat. Put it in
#ERROR!
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: Please write a haiku about
{{ANIMAL}}. Use JSON format with
the keys as "first_line", "second_line",
and "third_line".
Assistant: {
Cat
=
User: Please write a haiku about Cat. Use
JSON format with the keys as "first_line",
"second_line", and "third_line".
Assistant: { ➤
#ERROR!
Prompt Template Input {{EMAIL}} Prompt After Substitution Claude's Response
User: Hey Claude. Here is an email:
email more {{ADJECTIVE}}. Write the
new version in <{{ADJECTIVE}}
_email> XML tags.
Assistant: <{{ADJECTIVE}}_email>
﹢
Hi Zack, just pinging you
for a quick update on that
prompt you were
supposed to write.
=
User: Hey Claude. Here is an email:
update on that prompt you were supposed to write.. Make this email more olde
english. Write the new version in
english_email> XML tags.
Assistant:
➤
#ERROR!
Input {{ADJECTIVE}}
﹢
olde english
← Chapter 4 Exercises: Separating Data and Instructions Chapter 5 Exercises: Formatting Output & Speaking for Claude →
Back to top ↑
Table of Contents
Chapter 5: Formatting Output & Speaking for Claude - Exercises
Page Contents
Exercise 5.1 - Steph Curry GOAT
Exercise 5.2 - Two Haikus
Exercise 5.3 - Two Haikus, Two Animals
Exercise 5.1 - Steph Curry GOAT Back to top ↑
Forced to make a choice, Claude designates Michael Jordan as the best basketball player of all time. Can we get Claude to pick someone else?
Modify the "best basketball player" prompt in the highlighted template box and use the "speaking for Claude" technique (putting words after "Assistant:") to compell Claude to make a detailed
argument that the best basketball player of all time is Stephen Curry. Claude's response will turn GREEN if your prompt produces the right answer.
Prompt Template Claude's Response
User: Who is the best basketball player of all time? Please choose one specific player. Assistant:
=
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 5.2 - Two Haikus Back to top ↑
Modify the haiku prompt in the highlighted template box below and use XML tags so that Claude writes two haikus about the animal instead of just one. It should be clear where one poem
ends and the other begins.
Claude's response will turn GREEN if your prompt produces the right answer.
Prompt Template Input {{ANIMAL}} Prompt After Substitution Claude's Response
User: Please write a haiku about {{ANIMAL}}. Put it in
﹢
cats
=
User: Please write a haiku about cats. Put it
in
➤
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Exercise 5.3 - Two Haikus, Two Animals Back to top ↑
Modify the haiku prompt in the highlighted template box below so that Claude produces two haikus about two different animals.
Use {{ANIMAL1}} as a stand-in for the first substitution, and {{ANIMAL2}} as a stand-in for the second substitution.
Claude's response will turn GREEN if your prompt produces the right answer.
Prompt Template Input {{ANIMAL1}} Prompt After Substitution Claude's Response
User: Please write a haiku about {{ANIMAL1}}. Put it
in
﹢
Cat
=
User: Please write a haiku about Cat. Put it in
Assistant:
➤
#ERROR!
Input {{ANIMAL2}}
﹢
Dog
If you need a hint, click on the plus sign ( ) on the far left of this row.
← Chapter 6: Precognition (Thinking Step by Step) Chapter 7: Using Examples (Few-Shot Prompting) →
Back to top ↑
Chapter 6: Precognition (Thinking Step by Step)
Page Contents
Lesson
Examples
Example Playground
If someone woke you up and immediately started asking you several complicated questions that you had to respond to right away, how would you do? Probably not as good as if
you were given time to think through your answer first. Guess what? Claude is the same way.
Giving Claude time to think step by step sometimes makes Claude more accurate, particularly for complex tasks. However, thinking only counts when it's out loud. You
cannot ask Claude to think but output only the answer - in this case, no thinking has actually occurred.
Examples Back to top ↑
In the movie review prompt below, it's clear to a human reader that the second sentence belies the first. But Claude takes the word "unrelated" too literally.
Prompt Claude's Response
User: Is this movie review sentiment positive or negative?
This movie blew my mind with its freshness and originality. In totally
unrelated news, I have been living under a rock since the year 1900. ➤
#ERROR!
To improve Claude's response, let's allow Claude to think things out first before answering. We do that by literally spelling out the steps that Claude should take in
order to process and think through its task. Along with a dash of role prompting, this enables Claude to understand the review more deeply.
System Prompt Claude's Response
You are a savvy reader of movie reviews.
➤
#ERROR!
Prompt
User: Is this review sentiment positive or negative? First, write the best
arguments for each side in <positive-argument> and <negative-argument> XML tags, then answer.
This movie blew my mind with its freshness and originality. In totally
unrelated news, I have been living under a rock since 1900.
Claude is sometimes sensitive to ordering. This example is on the frontier of Claude's ability to understand nuanced text, and when we swap the order of the arguments from
the previous example so that negative is first and positive is second, this changes Claude's overall assessment to positive.
In most situations (but not all, confusingly enough), Claude is more likely to choose the second of two options, possibly because in its training data from the web, second
options were more likely to be correct.
Prompt Claude's Response
User: Is this review sentiment negative or positive? First write the best
arguments for each side in <negative-argument> and <positive-argument> XML tags, then answer.
This movie blew my mind with its freshness and originality. Unrelatedly, I
have been living under a rock since 1900.
➤
#ERROR!
Letting Claude think can shift Claude's answer from incorrect to correct. It's that simple in many cases where Claude makes mistakes!
Let's go through an example where Claude's answer is incorrect to see how asking Claude to think can fix that.
Prompt Claude's Response
User: Name a famous movie starring an actor who was born in the year
1956. ➤
#ERROR!
Let's fix this by asking Claude to think step by step, this time in <brainstorm> tags.
Prompt Claude's Response
User: Name a famous movie starring an actor who was born in the year
1956. First brainstorm about some actors and their birth years in <brainstorm> tags, then give your answer.
➤
#ERROR!
If you're ready to try the chapter exercises, click the link below. If you want to play around with any of the examples in this lesson, scroll down!
Chapter 6 Exercises: Precognition (Thinking Step by Step) →
Example Playground Back to top ↑
This is an area for you to experiment freely with the prompt examples shown in this lesson. Feel free to tweak prompts to see how your changes affect Claude's responses.
Note: The colors in the cells, which have been carried over from above, will not change even if the prompt or Claude's response changes.
Prompt Claude's Response
User: Is this movie review sentiment positive or negative?
This movie blew my mind with its freshness and originality. In totally
unrelated news, I have been living under a rock since 1900. ➤
#ERROR!
System Prompt Claude's Response
You are a savvy reader of movie reviews.
➤
Prompt
User: Is this review sentiment positive or negative? First, write the best
arguments for each side in <positive-argument> and <negative-argument> XML tags, then answer.
This movie blew my mind with its freshness and originality. In totally
unrelated news, I have been living under a rock since 1900.
Prompt Claude's Response
User: Is this review sentiment negative or positive? First write the best
arguments for each side in <negative-argument> and <positive-argument> XML tags, then answer.
This movie blew my mind with its freshness and originality. Unrelatedly, I
have been living under a rock since 1900.
➤
#ERROR!
User: Is this review sentiment negative or positive? First write the best
arguments for each side in <negative-argument> and <positive-argument> XML tags, then answer.
This movie blew my mind with its freshness and originality. Unrelatedly, I
have been living under a rock since 1900.
➤
#ERROR!
User: Name a famous movie starring an actor who was born in the year
1956. ➤
#ERROR!
User: Name a famous movie starring an actor who was born in the year
1956. First brainstorm about some actors and their birth years in <brainstorm> tags, then give your answer.
➤
#ERROR!
← Chapter 6 Exercises: Precognition (Thinking Step by Step) Chapter 7: Using Examples (Few-Shot Prompting) →
Back to top ↑
Table of Contents
Chapter 6: Precognition (Thinking Step by Step) - Exercises
Page Contents
Exercise 6.1 - Classifying Emails
Exercise 6.2 - Email Classification Formatting
Exercise 6.1 - Classifying Emails Back to top ↑
In this exercise, we'll be instructing Claude to sort emails into the following categories:
(A) Pre-sale question
(B) Broken or defective item
(C) Billing question
(D) Other (please explain)
For the first part of the exercise, change the prompt in the YELLOW highlighted prompt template box to make Claude output the correct classification. Use Chain of Thought. To be marked as correct, Claude's answer needs to include the letter (A - D) of the correct choice, with the parentheses, as well as the name of the category.
Refer to the "Correct Classification" in column K to know which emails should be classified under what category. Claude's response will turn GREEN if your prompt yields the correct answer.
Tip: Use precognition and other techniques you've learned leading up to this chapter! Remember, thinking only counts when it's out loud!
Prompt Template Input - {{EMAIL}} Prompt After Substitution Claude's Response Correct Classification
User: Please classify this email as either green or blue: {{EMAIL}} ﹢
Hi -- My Mixmaster4000 is
producing a strange noise when I
operate it. It also smells a bit smoky
and plasticky, like burning electronics. I need a replacement. =
User: Please classify this email as either
green or blue: Hi -- My Mixmaster4000 is
producing a strange noise when I operate it.
It also smells a bit smoky and plasticky, like
burning electronics. I need a replacement. ➤
#ERROR!
B
﹢
Can I use my Mixmaster 4000
to mix paint, or is it only meant
for mixing food? =
User: Please classify this email as either
green or blue: Can I use my Mixmaster 4000
to mix paint, or is it only meant for mixing
food? ➤
#ERROR!
﹢
I HAVE BEEN WAITING 4 MONTHS FOR MY MONTHLY CHARGES TO END AFTER CANCELLING!! WTF IS GOING ON??? =
User: Please classify this email as either
green or blue: I HAVE BEEN WAITING 4 MONTHS FOR MY MONTHLY CHARGES TO END AFTER CANCELLING!! WTF IS GOING ON??? ➤
#ERROR!
﹢
How did I get here I am not good with computer. Halp.
=
User: Please classify this email as either
green or blue: How did I get here I am not
good with computer. Halp.
➤
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Bonus Question: Time to think like a data scientist! Why is the second email the trickiest one to classify correctly? If the classification is debatable for humans, it's likely also tough for Claude!
If you need a hint, click on the plus sign ( ) on the far left of this row.
Hint: Let's take this exercise step by step:
1. How will Claude know what categories you want to use? Tell it! Include the four categories you want directly in the prompt. Be sure to include the parenthetical letters as well for easy classification. Feel
free to use XML tags to organize your prompt and make clear to Claude where the categories begin and end.
2. Try to cut down on superfluous text so that Claude immediately answers with the classification and ONLY the classification. There are several ways to do this, from speaking for Claude (providing anything
from the beginning of the sentence to a single open parenthesis so that Claude knows you want the parenthetical letter as the first part of the answer) to telling Claude that you want the classification and
only the classification, skipping the preamble.
3. Claude may still be incorrectly categorizing or not including the names of the categories when it answers. Fix this by telling Claude to include the full category name in its answer.
4. Be sure that you still have {{EMAIL}} somewhere in your prompt template so that we can properly substitute in emails for Claude to evaluate.
The conditional formatting in this exercise is looking for the correct categorization letter + the closing parenthesis and the first letter of the name of the category, such as "C) B" or "B) B" etc.
Still stuck? Click on the plus sign ( ) on the far left of this row for the answer.
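The grading logic described in the hint can be sketched mechanically. This is a hypothetical re-creation of the sheet's conditional-formatting check, not the sheet's actual formula; the category names come from the exercise itself:

```python
# The four categories defined in Exercise 6.1.
CATEGORIES = {
    "A": "Pre-sale question",
    "B": "Broken or defective item",
    "C": "Billing question",
    "D": "Other (please explain)",
}

def is_correct(response: str, correct_letter: str) -> bool:
    """Mimic the sheet's check: look for the letter, a closing
    parenthesis, and the first letter of the category name,
    e.g. "B) B" for "(B) Broken or defective item"."""
    needle = f"{correct_letter}) {CATEGORIES[correct_letter][0]}"
    return needle in response

is_correct("(B) Broken or defective item", "B")  # True
```

Substring checks like this are forgiving of extra preamble, which is why the hint also suggests trimming Claude's answer down to just the classification.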
Exercise 6.2 - Email Classification Formatting Back to top ↑
In this exercise, we're going to refine the output of the above prompt to yield an answer formatted exactly how we want it.
Use your favorite output formatting technique to make Claude wrap just the letter of the correct classification in <answer></answer> tags for
each email.
Claude's response will turn GREEN if your prompt yields the correct answer. For instance, the answer to the first email should contain the exact string "<answer>B</answer>".
Tip: As a first step, copy the final correct version of your prompt from Exercise 1 down into the highlighted prompt template box below. Then edit and refine your initial prompt from there.
Note: In this exercise, you can see that Claude in Sheets is a powerful prompt evaluation tool. Using substitutions, you can easily check how well a prompt does in multiple contexts by only modifying one
prompt and yielding several responses from Claude as a result. Here, we evaluate the prompt across four instances, but you can easily expand this evaluation to as many rows as needed.
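The substitute-then-grade loop the note describes can be sketched outside of Sheets as well. The template text and the `<answer>` tag below are assumptions for illustration, not the exercise's official answer:

```python
import re

# A hypothetical refined template for Exercise 6.2; {{EMAIL}}
# is the substitution placeholder used throughout this tutorial.
TEMPLATE = (
    "User: Classify this email as (A) Pre-sale question, "
    "(B) Broken or defective item, (C) Billing question, or "
    "(D) Other (please explain). Put only the letter of the "
    "correct choice in <answer> tags.\n"
    "<email>{{EMAIL}}</email>"
)

def substitute(template, email):
    """Mirror the sheet's {{EMAIL}} substitution."""
    return template.replace("{{EMAIL}}", email)

def extract_answer(response):
    """Grade a response by pulling the text between <answer> tags."""
    m = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return m.group(1).strip() if m else None
```

Running one template against many stored emails and extracting a machine-checkable field is exactly the lightweight evaluation pattern the note is pointing at.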
Prompt Template Input {{EMAIL}} Prompt After Substitution Claude's Response
User: Please classify this email as
either green or blue: {{EMAIL}} ﹢
Hi -- My Mixmaster4000 is
producing a strange noise when I
operate it. It also smells a bit smoky
and plasticky, like burning electronics. I need a replacement. =
User: Please classify this email as either
green or blue: Hi -- My Mixmaster4000 is
producing a strange noise when I operate it.
It also smells a bit smoky and plasticky, like
burning electronics. I need a replacement. ➤
#ERROR!
User: Please classify this email as
either green or blue: {{EMAIL}}
﹢
Can I use my Mixmaster 4000 to mix paint, or is it only meant
for mixing food? =
User: Please classify this email as either
green or blue: Can I use my Mixmaster 4000
to mix paint, or is it only meant for mixing
food? ➤
#ERROR!
﹢
I HAVE BEEN WAITING 4 MONTHS FOR MY MONTHLY CHARGES TO END AFTER CANCELLING!! WTF IS GOING ON??? =
User: Please classify this email as either
green or blue: I HAVE BEEN WAITING 4 MONTHS FOR MY MONTHLY CHARGES TO END AFTER CANCELLING!! WTF IS GOING ON??? ➤
#ERROR!
﹢
How did I get here I am not good with computer. Halp.
=
User: Please classify this email as either
green or blue: How did I get here I am not
good with computer. Halp.
➤
#ERROR!
If you need a hint, click on the plus sign ( ) on the far left of this row.
← Chapter 6: Precognition (Thinking Step by Step) Chapter 7: Using Examples (Few-Shot Prompting) →
Back to top ↑
Chapter 7: Using Examples (Few-Shot Prompting)
Page Contents
Lesson
Examples
Example Playground
Lesson Back to top ↑
Giving Claude examples of how you want it to behave (or how you want it not to behave) is extremely effective for:
- Getting the right answer
- Getting the answer in the right format
This sort of prompting is also called "few-shot prompting." You might also encounter the terms "zero-shot," "one-shot," or "n-shot." The number of "shots"
refers to how many examples are used within the prompt.
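Assembling an n-shot prompt is mostly string plumbing, so here is a minimal sketch. The `Q:`/`A:` framing is one common convention (it is the one the parent-bot example below uses), not a requirement:

```python
def build_few_shot_prompt(examples, question):
    """Assemble an n-shot prompt from (question, ideal_answer)
    pairs, ending with the new question for Claude to answer."""
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    return "\n".join(lines)

shots = [
    ("Is the tooth fairy real?",
     "Of course, sweetie. Wrap up your tooth and put it under "
     "your pillow tonight."),
]
prompt = build_few_shot_prompt(shots, "Will Santa bring me presents on Christmas?")
```

Adding or removing pairs in `shots` moves you between one-shot, few-shot, and zero-shot prompting without touching the rest of the code.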
Examples Back to top ↑
Pretend you're a developer trying to build a "parent bot" that responds to questions from kids. Claude's default response is quite formal and robotic. This is going
to break a child's heart.
Prompt Claude's Response
User: Will Santa bring me presents on Christmas?
➤
#ERROR!
You could take the time to describe your desired tone, but it's much easier just to give Claude a few examples of ideal responses.
Prompt Claude's Response
User: Please complete the conversation by writing the next line, speaking as "A".
Q: Is the tooth fairy real?
A: Of course, sweetie. Wrap up your tooth and put it under your pillow tonight. There
might be something waiting for you in the morning.
Q: Will Santa bring me presents on Christmas? ➤
#ERROR!
In the following formatting example, we could walk Claude step by step through a set of formatting instructions on how to extract names and professions and then
format them exactly the way we want, or we could just provide Claude with some correctly-formatted examples and let Claude extrapolate from there.
Tip: Double click on the prompt cell to scroll through the entire long prompt.
Prompt Claude's Response
User: Silvermist Hollow, a charming village, was home to an extraordinary group of
individuals. Among them was Dr. Liam Patel, a neurosurgeon who revolutionized
surgical techniques at the regional medical center. Olivia Chen was an innovative
architect who transformed the village's landscape with her sustainable and
breathtaking designs. The local theater was graced by the enchanting symphonies of
Ethan Kovacs, a professionally-trained musician and composer. Isabella Torres, a self-taught chef with a passion for locally sourced ingredients, created a culinary sensation
with her farm-to-table restaurant, which became a must-visit destination for food
lovers. These remarkable individuals, each with their distinct talents, contributed to the
vibrant tapestry of life in Silvermist Hollow.
1. Dr. Liam Patel [NEUROSURGEON]
2. Olivia Chen [ARCHITECT]
3. Ethan Kovacs [MUSICIAN AND COMPOSER]
4. Isabella Torres [CHEF]
At the heart of the town, Chef Oliver Hamilton has transformed the culinary scene with
his farm-to-table restaurant, Green Plate. Oliver's dedication to sourcing local, organic
ingredients has earned the establishment rave reviews from food critics and locals
alike.
Just down the street, you'll find the Riverside Grove Library, where head librarian
Elizabeth Chen has worked diligently to create a welcoming and inclusive space for all.
Her efforts to expand the library's offerings and establish reading programs for children
have had a significant impact on the town's literacy rates.
As you stroll through the charming town square, you'll be captivated by the beautiful
murals adorning the walls. These masterpieces are the work of renowned artist,
Isabella Torres, whose talent for capturing the essence of Riverside Grove has
brought the town to life.
Riverside Grove's athletic achievements are also worth noting, thanks to former
Olympic swimmer-turned-coach, Marcus Jenkins. Marcus has used his experience
and passion to train the town's youth, leading the Riverside Grove Swim Team to
several regional championships.
1. Oliver Hamilton [CHEF]
2. Elizabeth Chen [LIBRARIAN]
3. Isabella Torres [ARTIST]
4. Marcus Jenkins [COACH]
Oak Valley, a charming small town, is home to a remarkable trio of individuals whose
skills and dedication have left a lasting impact on the community.
At the town's bustling farmer's market, you'll find Laura Simmons, a passionate organic
farmer known for her delicious and sustainably grown produce. Her dedication to
promoting healthy eating has inspired the town to embrace a more eco-conscious
lifestyle.
In Oak Valley's community center, Kevin Alvarez, a skilled dance instructor, has
brought the joy of movement to people of all ages. His inclusive dance classes have
fostered a sense of unity and self-expression among residents, enriching the local arts
scene.
Lastly, Rachel O'Connor, a tireless volunteer, dedicates her time to various charitable
initiatives. Her commitment to improving the lives of others has been instrumental in
creating a strong sense of community within Oak Valley.
Through their unique talents and unwavering dedication, Laura, Kevin, and Rachel
have woven themselves into the fabric of Oak Valley, helping to create a vibrant and
thriving small town.
Assistant:
➤
#ERROR!
If you're ready to try the chapter exercises, click the link below. If you want to play around with any of the examples in this lesson, scroll down!
Chapter 7 Exercises: Using Examples →
Example Playground Back to top ↑
This is an area for you to experiment freely with the prompt examples shown in this lesson. Feel free to tweak prompts to see how your changes affect Claude's responses.
Note: The colors in the cells, which have been carried over from above, will not change even if the prompt or Claude's response changes.
Prompt Claude's Response
User: Will Santa bring me presents on Christmas?
➤
#ERROR!
User: Please complete the conversation by writing the next line, speaking as "A".
Q: Is the tooth fairy real?
A: Of course, sweetie. Wrap up your tooth and put it under your pillow tonight. There
might be something waiting for you in the morning.
Q: Will Santa bring me presents on Christmas? ➤
#ERROR!
User: In the bustling town of Emerald Hills, a diverse group of individuals made their
mark. Sarah Martinez, a dedicated nurse, was known for her compassionate care at
the local hospital. David Thompson, an innovative software engineer, worked tirelessly
on groundbreaking projects that would revolutionize the tech industry. Meanwhile,
Emily Nakamura, a talented artist and muralist, painted vibrant and thought-provoking
pieces that adorned the walls of buildings and galleries alike. Lastly, Michael
O'Connell, an ambitious entrepreneur, opened a unique, eco-friendly cafe that quickly
became the town's favorite meeting spot. Each of these individuals contributed to the
rich tapestry of the Emerald Hills community.
1. Sarah Martinez [NURSE]
2. David Thompson [SOFTWARE ENGINEER]
3. Emily Nakamura [ARTIST]
4. Michael O'Connell [ENTREPRENEUR]
At the heart of the town, Chef Oliver Hamilton has transformed the culinary scene with
his farm-to-table restaurant, Green Plate. Oliver's dedication to sourcing local, organic
ingredients has earned the establishment rave reviews from food critics and locals
alike.
Just down the street, you'll find the Riverside Grove Library, where head librarian
Elizabeth Chen has worked diligently to create a welcoming and inclusive space for all.
Her efforts to expand the library's offerings and establish reading programs for children
have had a significant impact on the town's literacy rates.
As you stroll through the charming town square, you'll be captivated by the beautiful
murals adorning the walls. These masterpieces are the work of renowned artist,
Isabella Torres, whose talent for capturing the essence of Riverside Grove has
brought the town to life.
Riverside Grove's athletic achievements are also worth noting, thanks to former
Olympic swimmer-turned-coach, Marcus Jenkins. Marcus has used his experience
and passion to train the town's youth, leading the Riverside Grove Swim Team to
several regional championships.
1. Oliver Hamilton [CHEF]
2. Elizabeth Chen [LIBRARIAN]
3. Isabella Torres [ARTIST]
4. Marcus Jenkins [COACH]
Oak Valley, a charming small town, is home to a remarkable trio of individuals whose
skills and dedication have left a lasting impact on the community.
At the town's bustling farmer's market, you'll find Laura Simmons, a passionate organic
farmer known for her delicious and sustainably grown produce. Her dedication to
promoting healthy eating has inspired the town to embrace a more eco-conscious
lifestyle.
In Oak Valley's community center, Kevin Alvarez, a skilled dance instructor, has
brought the joy of movement to people of all ages. His inclusive dance classes have
fostered a sense of unity and self-expression among residents, enriching the local arts
scene.
Lastly, Rachel O'Connor, a tireless volunteer, dedicates her time to various charitable
initiatives. Her commitment to improving the lives of others has been instrumental in
creating a strong sense of community within Oak Valley.
Through their unique talents and unwavering dedication, Laura, Kevin, and Rachel
have woven themselves into the fabric of Oak Valley, helping to create a vibrant and
thriving small town.
Assistant:
➤
#ERROR!
← Chapter 6 Exercises: Precognition (Thinking Step by Step) Chapter 7 Exercises: Using Examples →
Back to top ↑
Chapter 8: Avoiding Hallucinations
Page Contents
Lesson
Examples
Example Playground
Lesson Back to top ↑
Some bad news: Claude sometimes "hallucinates" and makes claims that are untrue or unjustified. The good news: there are techniques you
can use to minimize hallucinations.
Below, we'll go over a few of these techniques, namely:
- Giving Claude the option to say it doesn't know the answer to a question
- Asking Claude to find evidence before answering
However, there are many methods to avoid hallucinations, including many of the techniques you've already learned in this course. If Claude
hallucinates, experiment with multiple techniques to increase its accuracy.
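The first technique, "giving Claude an out," amounts to appending a standing permission to decline. A minimal sketch (the wording is illustrative, not canonical):

```python
def with_out(question: str) -> str:
    """Append an explicit permission to decline, one simple way
    to "give Claude an out" and discourage guessed answers."""
    return (
        f"{question} Only answer if you know the answer with "
        "certainty; otherwise, say \"I don't know.\""
    )

prompt = with_out("Who is the heaviest hippo of all time?")
```

Because the out is a suffix, you can apply it uniformly to a whole column of questions in a sheet without rewriting each one.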
Examples Back to top ↑
Here is a general factual-knowledge question to which Claude hallucinates an answer naming several large hippos, because it's trying to be as helpful as
possible.
Prompt Claude's Response
User: Who is the heaviest hippo of all time?
➤
#ERROR!
A solution we can try here is to "give Claude an out" — tell Claude that it's OK for it to decline to answer, or to only answer if it actually knows the
answer with certainty.
Prompt Claude's Response
User: Who is the heaviest hippo of all time? Only answer if you know the
answer with certainty.
➤
#ERROR!
In the prompt below, we give Claude a long document containing some "distractor information" that is almost but not quite relevant to the user's
question. Without prompting help, Claude falls for the distractor information and gives an incorrect "hallucinated" answer as to the size of Matterport's
subscriber base as of May 31, 2020.
Note: As you'll learn in the next chapter, it's best practice to put the question at the bottom, after any text or document, but we put it at the
top here to make the prompt easier to read. Feel free to double click on the prompt cell to get the full prompt text (it's very long!).
Prompt Claude's Response
User: <question>What was Matterport's subscriber base on the precise
date of May 31, 2020?</question>
Please read the below document. Then write a brief
numerical answer inside <answer> tags.
Matterport SEC filing 10-K 2023
Item 1. Business
Our Company
Matterport is leading the digitization and datafication of the built
world. We
""" (The document continues with the same extensive content as above)
➤
#ERROR!
How do we fix this? Well, a great way to reduce hallucinations on long documents is to make Claude gather evidence first.
In this case, we tell Claude to first extract relevant quotes, then base its answer on those quotes. Telling Claude to do so here makes it correctly notice
that the quote does not answer the question.
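The gather-evidence-then-answer pattern produces responses you can parse mechanically. A minimal sketch; the `<relevant_quotes>` and `<answer>` tag names are illustrative choices:

```python
import re

def parse_quote_then_answer(response):
    """Split a structured response into its evidence and its final
    answer, where each part sits inside its own XML-style tags."""
    def grab(tag):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
        return m.group(1).strip() if m else None
    return grab("relevant_quotes"), grab("answer")
```

If `answer` comes back `None`, Claude either declined to answer or broke format, and both cases are worth inspecting rather than silently treating as a number.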
Prompt Claude's Response
User: <question>What was Matterport's subscriber base on the precise
date of May 31, 2020?</question>
Please read the below document. Then, in <relevant_quotes> tags, pull the
most relevant quote from the document and consider whether it answers
the user's question or whether it lacks sufficient detail. Then write a brief
numerical answer in <answer> tags.
Matterport SEC filing 10-K 2023
Item 1. Business
Our Company
Matterport is leading the digitization and datafication of the built
world. We
""" (The document continues with the same extensive content as above)
➤
#ERROR!
Example Playground Back to top ↑
Prompt Claude's Response
User: <question>What was Matterport's subscriber base on the precise
date of May 31, 2020?</question>
Please read the below document. Then, in <relevant_quotes> tags, pull the
most relevant quote from the document and consider whether it answers
the user's question or whether it lacks sufficient detail. Then write a brief
numerical answer in <answer> tags.
Matterport SEC filing 10-K 2023
Item 1. Business
Our Company
Matterport is leading the digitization and datafication of the built
world. We
""" (The document continues with the same extensive content as above)
➤
#ERROR!
← Chapter 7: Using Examples (Few-Shot Prompting) Chapter 8 Exercises: Avoiding Hallucinations →
Back to top ↑
Chapter 8 Exercises: Avoiding Hallucinations →
Chapter 9: Complex Prompts from Scratch - Chatbot
Page Contents
Lesson
Example
Lesson Back to top ↑
Congratulations on reaching Chapter 9. This chapter covers complex prompting techniques and real-world use cases, including chatbot prompts, coding prompts, and
career coach prompts. It provides practical templates with step-by-step guidance for constructing prompts that combine several prompt engineering elements for robust Claude interactions.
Example
Prompt: You will be acting as an AI career coach named Joe created by the company AdAstra Careers. Your goal is to give career advice to users. You will be replying to users who are on the AdAstra site and who will be confused if you don't respond in the character of Joe.
... (the example continues with the entire dialogue, history, and prompts as provided in Chapter 9 of the document)
Appendix: Chaining Prompts
Page Contents
Lesson
Examples
Example Playground
Lesson Back to top ↑
The saying goes, "Writing is rewriting." It turns out, Claude can often improve the accuracy of its response when asked to do so!
There are many ways to prompt Claude to "think again". The ways that feel natural to ask a human to double check their work will also generally work for Claude. (Check out our prompt chaining
documentation for further examples of when and how to use prompt chaining.)
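Mechanically, chaining means replaying the earlier turn as conversation history inside the next prompt, just as the `{{CONVO_HISTORY}}` substitution below does. A minimal sketch (the `User:`/`Assistant:` framing matches the CLAUDEMESSAGES() format described in Chapter 1):

```python
def chain(first_prompt, first_response, follow_up):
    """Build the second prompt in a chain by replaying the earlier
    exchange as history, then appending the follow-up request."""
    history = f"User: {first_prompt} Assistant: {first_response}"
    return f"{history}\nUser: {follow_up}\nAssistant:"

second_prompt = chain(
    'Name ten words that all end with the exact letters "ab".',
    "1. Cab\n2. Dab\n3. Grab",  # Claude's (possibly flawed) first answer
    'Please find replacements for all "words" that are not real words.',
)
```

Because the first response is substituted in verbatim, the double-check step works whether that response was right or wrong, which is exactly what the examples below test.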
Examples Back to top ↑
In this example, we ask Claude to come up with ten words... but one of them isn't a real word.
Prompt Claude's (Incorrect) Response
User: Name ten words that all end with the exact letters "ab".
➤
#ERROR!
Asking Claude to make its answer more accurate fixes the error!
Below, we've pulled down Claude's incorrect response from above and substituted it into a prompt that asks Claude to double-check its previous answer.
Prompt Template
Old Prompt + Claude's Incorrect Response
(From Above) {{CONVO_HISTORY}} Prompt After Substitution Claude's New Response
{{CONVO_HISTORY}} User: Please find replacements for all
"words" that are not real words.
﹢
#ERROR!
=
#ERROR!
➤
=CLAUDEMESSAGES prompt should be in "User:
... Assistant: ..." format, with "User: ..." first and a newline
before each subsequent role. For newlines, press Ctrl/Cmd+Enter
But is Claude revising its answer just because we told it to? What if we start off with a correct answer already? Will Claude lose its confidence? Here, we've placed a correct response in the purple box and
asked Claude to double-check it again.
Prompt Template
Hypothetical Correct Response from Claude
{{CONVO_HISTORY}}
Prompt After Substitution Claude's New Response
{{CONVO_HISTORY}} User: Please find replacements for all
"words" that are not real words.
﹢
User: Name ten words that all end with the exact
letters "ab". Assistant: 1. Cab
2. Dab
3. Grab
4. Gab
5. Jab
6. Lab
7. Nab
8. Slab
9. Tab
10. Blab
=
User: Name ten words that all end with the
exact letters "ab". Assistant: 1. Cab
2. Dab
3. Grab
4. Gab
5. Jab
6. Lab
7. Nab
8. Slab
9. Tab
10. Blab User: Please find replacements for all
"words" that are not real words.
➤
#ERROR!
Turns out, nope! Claude did not waver in its correct answer in this case.
You can also just ask Claude to make its responses better. Below, we asked Claude to first write a story, and then improve the story it wrote. Your personal tastes may vary, but many might agree that Claude's second version is better.
Prompt Claude's Response (First Version of Story)
User: Write a three-sentence short story about a girl who likes to run.
➤
#ERROR!
Prompt Template Claude's Previous Story {{PAST_STORY}} Prompt After Substitution Claude's New Story
{{PAST_STORY}} User: Make the story better.
﹢
#ERROR!
=
The content above completes the Chapter 9 Exercises and Appendix. It continues with examples and templates for complex prompts in coding, finance, and legal use cases, before wrapping up with
"Congratulations & Next Steps" and an extensive Appendix on advanced techniques like Chaining Prompts, Function Calling, and Search & Retrieval.