Early Sentence-to-Email Prompts: A Foundational Transformation Pattern
Turn a minimal instruction into a polished email by providing a handful of consistent examples and letting the model complete the pattern, illustrating in-context learning and rapid productization.
A lot of things moved very quickly early on.
Back then, someone would ask, “Can you try to make this thing work?” and I’d jump in—not just to get it working, but to see if there was a better way to do it, or at least a reliable way. A lot of it was just exploration: you’d try things, see what broke, see what surprised you, and then try again. That ranged from better ways to summarize writing, to coding prompts, to whatever idea happened to come to mind.
Somewhere in that early swirl, I think I came up with what became a “sentence-to-email” prompt.
I can’t be too sure I was the first person to do it, and honestly, if anybody else thinks they came up with it before me, I am very glad to step aside and let them take credit for it. As fun as it is to watch a model turn a single sentence into a much longer email… now that we live in a world where we have to read billions of emails generated from one-sentence prompts, I’m not sure that’s necessarily a better place to be.
There’s a joke that’s been made multiple times, and it still feels painfully accurate: we use these models to turn one sentence into an email, and then we use the model to turn that email back into one sentence. Basically, we’re just creating a lot of make-work in between.
Still, as an early demo—especially before people had an intuition for what language models could do—it was a great “aha.” You’d show the model a few examples of a transformation (short instruction → polite, fully formed email), and it would generalize in a way that felt uncanny at the time.
I was sharing versions of this internally at OpenAI very early on, and I’ve found example prompts that go back to the earliest days before GPT-3 was launched. My first example of this was put into the documentation for GPT-3 at launch, and that would have been, I think, the first time many people came across it.
Below is one of my examples from back in 2020. The basic idea was simple: give the model a few examples of the mapping you want, then provide a new one-sentence instruction and let it continue.
Here’s the original prompt (as I was using it then):
Thank John for the book. Andrew
###
Subject: Thank you! \n
Dear John,
Thank you so much for the book. I really appreciate it.
I hope to hang out soon.
Your friend,
Andrew
###
Tell Big Burger I appreciate the great service. Kathy Powell
###
Subject: Great service \n
To Whom it May Concern,
I want you to know that I appreciate the great service at my Big Burger.
The staff is outstanding and I enjoy every visit.
Sincerely,
Kathy Powell
###
Invoice Kelly Watkins $500 for design consultation. David N'tko
###
Subject: Invoice for consultation \n
Dear Ms. Watkins,
This is my invoice for $500 for design consultation.
It was a pleasure to work with you.
Sincerely,
David N'tko
###
Invite Amanda and Paul to my party on Friday night. Jenny
###
Subject: Party Friday night! \n
Dear Paul and Amanda,
I hope this finds you doing well.
I want to invite you to my party on Friday night.
It will be a fun night of singing, drinking, and dancing.
Should be fun.
See you on Friday.
Regards,
Jenny
###
Ask George how the plans are coming along. Wilma
###
Subject: Checking in \n
Hey George,
I wanted to check in and see if there were any updates on the plans.
Best,
Wilma
If you’ve used any modern “write this email for me” feature, you’ve basically seen this same pattern industrialized: the user provides a minimal intent, and the system expands it into something that looks like a real email someone might send.
What I still find interesting about the original version isn’t that it writes emails—email is just a convenient format—but that it captures a general trick for prompting:
- Show the model a handful of examples of the transformation you want.
- Keep the examples consistent in structure and tone.
- Give it a new input in the same format.
- Let it complete the pattern.
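The steps above can be sketched as a small helper that assembles such a few-shot prompt. This is a minimal sketch, not any official API: `build_prompt`, the trimmed example pairs, and the separator handling are illustrative, though the `###` delimiter and the instruction → email format mirror the 2020-era prompt shown earlier.

```python
# A minimal sketch of the few-shot "sentence-to-email" prompting pattern:
# consistent (instruction, email) example pairs, a "###" separator, and a
# new instruction left open for the model to complete.

# Two abbreviated example pairs in the style of the original prompt.
EXAMPLES = [
    (
        "Thank John for the book. Andrew",
        "Subject: Thank you!\n"
        "Dear John,\n"
        "Thank you so much for the book. I really appreciate it.\n"
        "Your friend,\n"
        "Andrew",
    ),
    (
        "Ask George how the plans are coming along. Wilma",
        "Subject: Checking in\n"
        "Hey George,\n"
        "I wanted to check in and see if there were any updates on the plans.\n"
        "Best,\n"
        "Wilma",
    ),
]

SEPARATOR = "\n###\n"


def build_prompt(instruction: str) -> str:
    """Assemble a few-shot prompt from consistent examples plus a new input.

    Each example is "instruction ### email"; the new instruction ends with
    the separator, cueing a completion model to write the email itself.
    """
    parts = []
    for short_instruction, email in EXAMPLES:
        parts.append(short_instruction + SEPARATOR + email)
    # The new input gets the same format, but the email side is left blank.
    parts.append(instruction + SEPARATOR)
    return SEPARATOR.join(parts)


prompt = build_prompt("Invite Amanda and Paul to my party on Friday night. Jenny")
# `prompt` would then be sent as-is to a completion-style model,
# which continues the pattern by writing the invitation email.
```

The only real design choice here is consistency: the same delimiter and the same instruction/email ordering in every example, so the model has an unambiguous pattern to continue.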
At the time, it was a clean demo of in-context learning—no fine-tuning, no extra training step, just examples in the prompt. Today, it’s almost boring because we’re used to it. Which is probably the best sign that something has gone from “magic trick” to “product feature.”
And yet… I still can’t shake the feeling that the funniest part is also the most true: sometimes we’re just taking one sentence, turning it into an email, and then turning that email right back into one sentence.