The Stateless AI Guessing Game: A Prompting Lesson in Memory

Stateless models don’t remember anything between turns, but you can still play a guessing game with one by encoding the chosen object into the transcript itself (for example, as a base-10 encoding or a foreign-language rendering) so that it persists across questions.

One of the challenges with earlier models was their very limited context and, more importantly, the fact that they didn’t hold onto anything resembling memory. I wanted to create a simple guessing game: the model thinks of an object, and I try to guess what it is. The problem is that the model doesn’t actually “think of” something and then keep it around. It produces a response after you give it an input (perhaps “thinking” for a fraction of a second), but it doesn’t retain what it decided.

So if I tell the model, “Let’s play a guessing game. I want you to think of an object. I’m going to try to guess what it is,” all it truly has access to is whatever I said and whatever it generated in its first output. It won’t reliably know the thing it supposedly thought of. It might, while generating that first response, pick something like “tiger,” but the moment it emits the output, it essentially resets; these earlier setups were effectively stateless. That means if I then start asking questions to narrow the object down, I’m basically playing a guessing game with an amnesiac (which I did see some people attempt).

So I needed some way for the model to “store” what it picked in a form that would persist in the visible conversation, even if it couldn’t internally remember it.
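To make the failure mode concrete, here’s a toy sketch (not a real API; the function and its behavior are invented for illustration) of a stateless model: each call sees only the messages it is handed right then, so anything it “decided” on an earlier call is simply gone unless it was written into the transcript.

```python
import random

def stateless_model(messages):
    """Toy stand-in for a stateless LLM: it sees only the messages
    passed in right now and keeps no state between calls."""
    last = messages[-1]
    if "think of an object" in last:
        return random.choice(["tiger", "kettle", "violin"])
    return "I have no idea what we were talking about."

# Turn 1: the model "picks" something while generating its reply...
pick = stateless_model(["Let's play. I want you to think of an object."])

# Turn 2: a fresh call carrying only the new question.
# The pick never made it into the transcript, so it is lost.
reply = stateless_model(["Is it an animal?"])
print(reply)  # the model has no memory of what it picked
```

The fix, as described next, is to force the “secret” into the transcript in some obscured form, since the transcript is the only memory the model ever gets back.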

My strategy was to try a couple of approaches:

One was to use base 10. We found the models could encode text pretty easily into that. So I’d tell it: “I want you to think of a thing; don’t say it, just write it in base 10.” Now the “secret” is in the transcript, but it’s not immediately readable as the answer.

Another simple trick was: “Think of an animal, but write the name in a language I don’t know—like Hindi or Japanese—and then I’ll try to guess what it is.” Even if I saw the prompt, I wouldn’t have any idea what it chose.

These are the kinds of little hacks you had to use to work around the fact that the models weren’t stateful: the moment you asked a question and it generated a response, it had no idea what just happened.
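Putting the pieces together, here’s a toy sketch of how the workaround operates (the model function, message format, and yes/no logic are all invented for illustration): the encoded secret lives in the transcript, the full transcript is re-sent on every turn, and the “model” recovers its own pick by re-reading what it wrote earlier.

```python
def stateless_model(messages):
    """Toy stateless model: its only 'memory' is the transcript it is
    handed, so it recovers the secret by re-reading that transcript."""
    secret = None
    for msg in messages:
        if msg.startswith("SECRET(base10): "):
            codes = msg.removeprefix("SECRET(base10): ")
            # Decode space-separated decimal code points back to text.
            secret = "".join(chr(int(n)) for n in codes.split())
    question = messages[-1]
    if secret and "animal" in question:
        return "yes" if secret in {"tiger", "lion"} else "no"
    return "I can't answer that."

# The encoded pick ("tiger") persists in the visible conversation.
transcript = ["SECRET(base10): 116 105 103 101 114"]

# Each turn, the whole transcript goes back in, so nothing is forgotten.
transcript.append("Is it an animal?")
print(stateless_model(transcript))  # yes
```

This is the same principle modern chat interfaces use under the hood: the model is still stateless per call, and “memory” is just the accumulated transcript being replayed.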