Prompt Maker: How to Teach Prompt Patterns by Example

Teach prompts by presenting models with a consistent, example-rich pattern so they infer the task and generate high-quality new prompts.

An early area of exploration for me was seeing whether you could teach the models how to write their own prompts.

In a lot of ways, that’s where we ended up with the models we have today: systems trained on a huge number of examples of people wanting to do something, plus examples of what “good instructions” look like for getting the model to correctly understand and respond.

When I went back through my notes, I found a bunch of prompts where I wasn’t just asking for an output—I was giving many different prompt examples in a single prompt, with natural language like “this does this, this does that,” and then letting the model infer the pattern. I got some surprisingly good results from that approach. I could basically tell the model: “Here’s how you interpret a description from somebody, and here’s how you turn that into a new prompt you can then use to solve the problem.”

A lot of what now makes these models extremely capable—training on high-quality data, improved reasoning—has roots in people experimenting with how to get the model to understand what you’re trying to do. You’d find where it “wants to go,” how it naturally explains something, and then you’d give it lots of examples of the right way and the wrong way to do it.

What’s interesting (at least to me) is how often I ended up creating prompts that were designed to generate other prompts. Because if I just asked for a simple task, it often wasn’t enough. If I said “write a blog post,” the model might respond with something unhelpful—like “no, I choose not to”—because it could misinterpret the context (for example, thinking it’s in the middle of a forum discussion).

But if I provided a structured set of examples—simple instructions followed by what the response should look like—I could get much higher-quality outputs. And that was something you could then train on top of. With a very thin post-training layer, you could make your model much more capable with relatively little effort, because you were effectively giving it a map: “when the user asks for X, format the task as Y, and return Z.”
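That "X → Y → Z" mapping can be sketched in a few lines of Python. This is only an illustration of the idea, not any real library; the names `TASK_TEMPLATES` and `format_task` are hypothetical:

```python
# Hypothetical sketch: map a user request (X) to a structured prompt (Y)
# that the model then completes (Z). Names here are illustrative only.

TASK_TEMPLATES = {
    "blog post": "Write a blog post about {topic}.\n\nTitle:",
    "summary": "Summarize the following text in one sentence.\n\nText: {topic}\nSummary:",
}

def format_task(task: str, topic: str) -> str:
    """Turn a plain request into the structured prompt the model should complete."""
    template = TASK_TEMPLATES.get(task)
    if template is None:
        raise KeyError(f"No template registered for task: {task!r}")
    return template.format(topic=topic)
```

The point is the shape, not the templates themselves: once requests are routed through a consistent format, the model sees the same structure every time, which is exactly what a thin post-training layer can then reinforce.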

Below is one of my early “prompt maker” prompts. The idea is straightforward: show the model several mini-tasks in a consistent format, so that when you start a new one (“I need to make a…”) it continues the pattern and generates a new prompt in the same style.

Prompt maker

Airport code extractor:

Text: "I want to fly form Los Angeles to Miami."
Airport codes: LAX, MIA

Text: "I want to fly from Orlando to Boston"
Airport codes: MCO, BOS


I need to make a noun identifier:
***Noun identifier***
Text: "The quick brown fox jumped over the lazy dogs"
Nouns: fox, dogs
Text: "The McDonalds cheeseburger tasted great. The milkshake was even better."
Nouns: McDonalds, cheeseburger, milkshake
###
I need to make a sentiment analyzer:
***Sentiment analyzer***
Text: This sucks
Sentiment: Negative
Text: I could really eat a burrito right now!
Sentiment: Neutral
Text: I hate your faces
Sentiment: Negative 
Text: This app is amazing.
Sentiment: Positive
###
I need to make



#A noun identifier:
Text: "The McDonalds cheeseburger tasted great. The milkshake was even better."
Nouns: McDonalds, cheeseburger, milkshake
###
#A sentiment analyzer:
Text: This sucks
Sentiment: Negative
###
#An airport code extractor:
Text: "I want to fly from Orlando to Boston"
Airport codes: MCO, BOS
###
#A tweet sentiment analyzer:
Text: "I love this new song by @JustinBieber"
Tweet sentiment: Positive
###
#A headline generator:
Text: "Apple today announced M1, the most powerful chip it has ever created and the first chip designed specifically for the Mac. M1 is optimized for Mac systems in which small size and power efficiency are critically important. As a system on a chip (SoC), M1 combines numerous powerful technologies into a single chip, and features a unified memory architecture for dramatically improved performance and efficiency. M1 is the first personal computer chip built using cutting-edge 5-nanometer process technology and is packed with an astounding 16 billion transistors, the most Apple has ever put into a chip."
Headline: "Apple unleashes M1"
###
#A random address generator:
Text: ""
Address: "123 Main Street, Seattle, WA 98121"
###
#Title generator:
Text: ""
Title: "Mystery of the Disappearing Snowman"
###
#A character name generator:
Text: ""
Name: Daniel Jones
###
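The structure above can also be assembled programmatically, which makes it easy to experiment with new capability blocks. Here's a minimal sketch, assuming the exact format used above (`***Title***` headers, `Text:`/label pairs, `###` separators); the function names are my own, not from any library:

```python
def make_pattern_block(title: str, examples: list[tuple[str, str]],
                       input_label: str = "Text",
                       output_label: str = "Output") -> str:
    """Format one capability as a titled block of input/output example pairs."""
    lines = [f"***{title}***"]
    for text, label in examples:
        lines.append(f"{input_label}: {text}")
        lines.append(f"{output_label}: {label}")
    return "\n".join(lines)

def make_prompt_maker(blocks: list[tuple[str, str]], new_request: str) -> str:
    """Join (request, block) pairs with ### separators, then end with an
    open-ended "I need to make ..." line for the model to continue."""
    parts = [f"I need to make {request}:\n{block}" for request, block in blocks]
    parts.append(f"I need to make {new_request}:")
    return "\n###\n".join(parts)
```

For example, passing one sentiment block plus the request `"a haiku generator"` yields the same pattern as above, ending in an unfinished `I need to make a haiku generator:` line that invites the model to fill in the blank.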

What I like about this old example is that it captures the core trick: don’t just ask for a task—show the model what you mean by the task, in a repeatable pattern, with multiple examples. Once you do that, the model can often generalize the structure and fill in the blank for a new capability you didn’t explicitly hard-code.

And in hindsight, that pattern—learning from lots of well-formed examples of “what the user wants” and “how the assistant should respond”—is basically the story of how we got from brittle, inconsistent outputs to systems that feel much more robust and useful.