Ask it to stop

In conversation it’s easy for a misunderstanding to creep in. When it does, it should be removed as quickly as possible.

In human interactions we have non-verbal cues1, a raised hand or a tilt of the head, to flag an error in the conversation. With a machine you need to be more forceful: tell it to stop. Depending on the LLM you’re interacting with, that will mean hitting a stop button, cancelling the request via the terminal or closing the browser.
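Programmatically, “stop” usually amounts to a cancellation signal checked between tokens. A minimal sketch, assuming a hypothetical token stream standing in for a real LLM API (real APIs expose this as a stop button or a request abort):

```python
import threading

def stream_tokens(tokens, stop_event):
    """Yield tokens one at a time until a stop signal is raised.

    `tokens` stands in for a hypothetical LLM token stream;
    this is an illustration, not any particular vendor's API.
    """
    for token in tokens:
        if stop_event.is_set():
            break  # the user hit stop: abandon the rest of the answer
        yield token

stop = threading.Event()
reply = []
for token in stream_tokens(["The", "answer", "is", "wrong", "because"], stop):
    reply.append(token)
    if token == "wrong":  # we spot the misunderstanding mid-stream
        stop.set()        # ...and cut the generation short

print(" ".join(reply))  # → "The answer is wrong"
```

The point is that the sooner the signal fires, the less of the bad answer exists to pollute what comes next.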

If you can’t stop it then you need to tell the machine to forget. The behavioural economists Amos Tversky and Daniel Kahneman talked about how in negotiations the frame of the conversation is vital. If the frame is unacceptable then it needs to be taken off the table2. The same is true with LLMs.

You need the incorrect answer taken off the table because of the token window a Large Language Model has. A token window roughly corresponds to the number of words of text that an LLM can see. As of April 2023, Bing’s window is limited to ~1,200 words, GPT-3.5 to ~4,000 words and GPT-4 to ~7,000 words. That’s only part of the story, though, since more recent tokens carry more weight in predicting the next words. It means an answer where the machine has misunderstood your intent will pollute the messages that follow.
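If you’re driving the model through an API rather than a chat UI, taking the answer off the table means removing it from the history you resend. A minimal sketch, assuming the common chat-API shape of a list of role/content messages (the message texts here are invented for illustration):

```python
# A chat history as most LLM chat APIs represent it: a list of messages.
history = [
    {"role": "user", "content": "How long is the token window?"},
    {"role": "assistant", "content": "A token is a kind of subway fare."},  # misunderstanding
]

def drop_last_exchange(messages):
    """Remove the most recent user/assistant pair so the misunderstanding
    can't pollute the next prediction."""
    trimmed = list(messages)
    # Drop the assistant's wrong answer...
    if trimmed and trimmed[-1]["role"] == "assistant":
        trimmed.pop()
    # ...and the user turn that triggered it, so it can be re-asked.
    if trimmed and trimmed[-1]["role"] == "user":
        trimmed.pop()
    return trimmed

history = drop_last_exchange(history)
# Re-ask the original question with the extra context that was missing.
history.append({
    "role": "user",
    "content": "How many words fit in an LLM's context? I mean the model's token window.",
})
```

Because recent tokens weigh most heavily, deleting the bad exchange before the next request does more than apologising for it inside the conversation ever can.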

Hopefully in the future there’ll be a less abrupt way to do this, one that better matches real-world interactions.


Sorry, you misunderstood the question. I need you to forget what you just said and answer my original question [add additional context to make it clearer]


Oh, that was unexpected. Ok, can you try to answer my question again?


  1. Check out David McNeill’s research on the psychology of language and gesture for a deep dive

  2. They referred to it as ‘Anchoring’ or the ‘Framing Effect’ but I prefer the table metaphor.