The Advice–Action Gap
One of the more frustrating quirks of large language models is something I’ve started calling the Advice–Action Gap. It’s the gap between what an AI will tell you to do when you ask how a task should be done and what it will actually do when you have it execute that same task.
If you ask a model to explain the “best practice” for almost anything — writing a secure SQL query, structuring clean React components, writing in noir style — it will correctly explain the principles: use parameterized queries to prevent SQL injection, keep components small and focused, show rather than tell. It’s a textbook answer.
But if you instead give it a practical request — “Write a login query,” “Build a small form in React,” “Draft a short noir story” — it often ignores its own advice to a frustrating degree. It does this because it follows one stochastic pattern when giving advice and a completely different one when executing the task. This is the Advice–Action Gap in action.
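To make that concrete, here is a small Python sketch of the two login queries you tend to get. The schema, table, and values are invented for illustration; the point is only the difference between the concatenated query a model often writes on request and the parameterized one its own advice calls for.

```python
import sqlite3

# Throwaway in-memory database with an invented schema, purely for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)")

username, password = "alice", "s3cret"   # imagine these came from a login form

# What the model often produces when simply asked to "write a login query":
# user input spliced straight into the SQL text, which is open to injection.
unsafe = f"SELECT id FROM users WHERE username = '{username}' AND password = '{password}'"
cur.execute(unsafe)

# What its own advice calls for: a parameterized query, where the driver
# passes the values as data rather than as part of the statement.
cur.execute(
    "SELECT id FROM users WHERE username = ? AND password = ?",
    (username, password),
)
```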
Closing the Gap
The good news is that you can narrow the gap. Based on my experience, here are three ways to improve the odds of the AI following its own best advice. You can use them individually or combine all three.
1. Recite
Ask the AI to summarize the best practices before having it perform the task.
This loads the rules into its “working memory” before execution and makes it less likely to wander.
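As a sketch of what that can look like over an API rather than in a chat window, here is one way to do it with the OpenAI Python client (any chat-style client works the same way); the model name and prompts are just placeholders. The key move is that the recited rules stay in the conversation when you ask for the actual work.

```python
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"    # placeholder; use whatever model you normally use

messages = [{
    "role": "user",
    "content": "Summarize the best practices for writing a secure SQL login query.",
}]

# Step 1: have the model recite the rules.
recital = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": recital.choices[0].message.content})

# Step 2: ask for the task while the recited rules are still in the conversation.
messages.append({
    "role": "user",
    "content": "Now write the login query, following the practices you just listed.",
})
answer = client.chat.completions.create(model=MODEL, messages=messages)
print(answer.choices[0].message.content)
```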
2. Embed
Include the practices you want to emphasize in your request:
Write a Python script to do X, following these practices: [list here].
This gives the model less room to drift away from the standards you’ve set.
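If you send these requests from code, one way to keep yourself honest, sketched below with an invented list of practices and a hypothetical helper, is to store your standards in one place and splice them into every prompt:

```python
# Invented house rules; substitute whatever standards you actually care about.
PRACTICES = [
    "Use parameterized queries; never concatenate user input into SQL.",
    "Validate all external input before using it.",
    "Keep functions small and single-purpose.",
]

def embed_practices(task: str) -> str:
    """Build a prompt that carries the task and your standards together."""
    rules = "\n".join(f"- {p}" for p in PRACTICES)
    return f"{task}\n\nFollow these practices:\n{rules}"

prompt = embed_practices("Write a Python script to fetch a user record by username.")
# Send `prompt` to your model with whichever client you use.
```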
3. Verify
After you get the response, ask the model to review its own output against best practices and point out where it followed or broke them.
This won’t catch everything, but it’s a handy way to surface obvious misses before you review it yourself.
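In code, the verification pass is just one more round trip. The sketch below reuses the same OpenAI client assumptions as above and an invented review prompt; adapt both to your own setup.

```python
from openai import OpenAI

client = OpenAI()   # same assumptions as before: OPENAI_API_KEY set, placeholder model name

def self_review(draft: str, practices: str, model: str = "gpt-4o") -> str:
    """Ask the model to check its own output against the stated practices."""
    prompt = (
        "Here is a draft you produced:\n\n"
        f"{draft}\n\n"
        "Review it against these best practices and point out, item by item, "
        "where it follows them and where it breaks them:\n\n"
        f"{practices}"
    )
    review = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return review.choices[0].message.content

# e.g. print(self_review(answer.choices[0].message.content, "\n".join(PRACTICES)))
```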
None of these tricks makes the Advice–Action Gap disappear entirely, but they can help you and the LLM work together to arrive at the best result.
MagicShel