I’ve been testing Rovo Dev, and I’m loving it so far! Even though it’s currently in Beta, I believe it’s a powerful addition that integrates well with my existing tech stack... which brings me to yesterday: I had an “Inception moment” with Rovo Dev.
And I wanted to share it with the community!
Do you remember Inception? Leonardo DiCaprio’s Dom Cobb plants the idea in Mal (his wife) that their world is a dream, and that the only way back up is to die.
Once that seed took root, reality bent around it; nothing behaved the same again.
I was working on a new feature in our automation platform and, like most devs now, I handed the tests to the LLM. (See below: Tip for tests)
The code was passing Checkstyle and the tests, almost ready to ship, when I realised I needed one big change and a few minor ones. When I came back to the unit test file, it was all red, completely useless after the change.
I humanised the LLM and said what I’d tell a teammate: Try to fix it. If it gets too complicated, it’s probably better to delete the file and start fresh.
The original prompt was something like: I’ve changed files A and B… [ more context ]… now I need you to fix the unit tests for class A, which don’t compile anymore. If you can’t fix them, just delete the class and start fresh.
It ran for a bit, then asked permission to delete the test file. I granted it. It kept going, asked again; I reluctantly agreed. Same again, so I interrupted the model without much thought.
I stepped in, deleted the test file manually, and gave it the classic prompt: Write unit tests for class X... and some more instructions.
To my surprise, it created the file, checked for errors, found errors, then asked to delete the file... Oh no, here we go again!! :flipdesk:
I stopped it at the second ask, and that’s when it hit me: the context was already polluted by the earlier instruction. The seed from the previous prompt had grown into its own idea, just like the movie!
It was trapped in a loop. It was also maxing out its context window, triggering the auto-prune several times… but none of that got rid of the context pollution.
Then, just for fun, I prompted again: Forget about deleting tests when errors are present. Iterate and fix the errors without deleting the file, and provided the target class again.
It didn’t work; it was still stuck in the loop.
Context, in the LLM domain, refers to the information provided to the model at inference time to help it generate responses. The context window is the maximum amount of information (measured in tokens) that the LLM can process at once.
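If you want a feel for how quickly a conversation fills that window, you can count tokens yourself. Here’s a minimal sketch in Python, assuming the tiktoken library and the cl100k_base encoding as an example (Rovo Dev’s actual tokenizer and window size aren’t documented in this post, so the 128k figure below is purely illustrative):

```python
# Rough token count for a prompt, to estimate context-window usage.
# Assumes: pip install tiktoken; cl100k_base is only an example encoding.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Fix the unit tests for class A. If you can't, delete the file and start fresh."
tokens = encoding.encode(prompt)

print(f"{len(tokens)} tokens used of a hypothetical 128,000-token window")
```

Every message, file snippet, and tool result you send counts against that budget, which is why long agent sessions eventually need pruning or a fresh start.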
Tip for tests: don’t let the agent iterate on tests in /yolo mode; otherwise it could run forever.

When you feel that the LLM fully understands the goal, the files, and the scope of your work, make a knowledge dump.
That way, when you need to clear the context, you don’t start from zero.
Prompt: Summarise all our conversation in a markdown file for a developer that needs to know: all technical details, key pieces of information, patterns discussed here, work done, remaining work... etc etc...
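If you ever want to script that step outside the chat UI, the same idea works with any chat-completion API. Below is a minimal sketch using the OpenAI Python SDK; the model name, file paths, and the conversation.md export are illustrative assumptions, not anything Rovo Dev exposes:

```python
# Turn a conversation log into a markdown "knowledge dump" you can re-feed
# after clearing the context. Assumes: pip install openai, OPENAI_API_KEY set,
# and a hypothetical conversation.md export of the chat.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
conversation = Path("conversation.md").read_text()

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Summarise this conversation in a markdown file for a developer: "
                   "all technical details, key pieces of information, patterns discussed, "
                   "work done, remaining work.\n\n" + conversation,
    }],
)

Path("knowledge-dump.md").write_text(summary.choices[0].message.content)
```

Drop the resulting knowledge-dump.md into the first message of your next session and you start warm instead of from zero.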
Check your LLM settings for “memory” (especially on personal ChatGPT accounts). If memory is enabled, the model stores information about you, usually after prompting you for consent, and it collects that data from your conversations.
Knowing this lets you act when responses feel biased: it might be context or memory pollution. Just open a new conversation or clean out the memories the model has collected about you.
Prompts:
- Tell me everything that you know about me
- Generate an image with what you know about me
This probably won’t be new to experienced LLM users… but I found I got better results by following the sequence of tips above.
Thank you for reading!
Hope this helps!
Jorge Ignacio Lopez