7. Accept ambiguity
If you’re visiting a new town and can’t find where you’re staying, you’ll ask for directions. If you ask a drunk person outside a bar, you’ll take their suggestions with a pinch of salt. You’d accept that their response was probably ambiguous. Treat Large Language Models the same way.
Under the hood, Large Language Models are stochastic: their output is sampled from a probability distribution, so the same input will not reliably produce the same response. This is very hard to accept. We’re used to computers being deterministic, to pressing a button and always getting the same result.
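To make that concrete, here is a minimal Python sketch of the idea. The tokens and scores are invented for illustration (real models work over vocabularies of tens of thousands of tokens), and the "temperature" parameter, a common setting in LLM APIs, controls how spread out the distribution is:

```python
import math
import random

# Hypothetical raw scores ("logits") a model might assign to candidate next tokens
logits = {"Paris": 2.0, "London": 1.2, "Berlin": 0.4}

def sample_next_token(logits, temperature=1.0):
    # Softmax turns raw scores into a probability distribution
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    probs = {tok: v / total for tok, v in scaled.items()}
    # The output is *sampled* from that distribution, so repeated calls can differ
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# The same "prompt" (the same scores) can yield different outputs on each run
print([sample_next_token(logits) for _ in range(5)])
```

Run it a few times and the list changes. That variability is the mechanism, not a malfunction.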
Humans, though, are not deterministic either (without falling into the philosophical weeds of chaotic versus disordered). The same prompt, or unit of conversation, is likely to get a different response depending on where we are, what we’re doing, who we’re with, and how we’re feeling. LLMs behave in the same way.
Humans are also frequently wrong, and LLMs are simulating human conversation. That means, counterintuitively, that a more advanced LLM might be less correct, because there is a higher risk that more human falsehoods are encoded in it.[^1]
[^1]: There is an interesting academic paper exploring this here.
Do
Can you explain what Big D decision-making is and how it might be relevant to purpose-led organisations? Erin Meyer discusses the concept in her book The Culture Map.
Don’t
Give me a quote from Erin Meyer’s book The Culture Map about Big D decision-making.