My Inception moment with Rovo Dev CLI

How LLM Context Pollution Impacts Rovo Dev CLI: Lessons from the Trenches

I’ve been testing Rovo Dev, and I’m loving it so far! Even though it’s currently in Beta, I believe it’s a powerful addition that integrates well with my existing tech stack… which brings me to yesterday: I had an “Inception moment” with Rovo Dev.

And I wanted to share it with the community!


Do you remember Inception? When Leonardo DiCaprio’s Dom Cobb plants the idea in Mal (his wife) that their world is a dream, that the only way back up is to die?
Once that seed took root, reality bent around it; nothing behaved the same again.

Anyway, how is this related to the movie?

I was working on a new feature in our automation platform and, like most devs now, I handed the tests to the LLM. (See below: Tip for tests)
The code was passing Checkstyle and the tests, almost ready to ship, when I realised I needed one big change and a few other minor ones. When I came back to the unit test file, it was all red, completely useless after the change.

So, what did I do?

I humanised the LLM and said what I’d tell a teammate: try to fix it; if it gets too complicated, it’s probably better to delete the file and start fresh.

The original prompt was something like:
I’ve changed files A and B… [more context]… now I need you to fix the unit tests for class A; they don’t compile anymore. If you can’t fix them, just delete the class and start fresh.

What happened next?

It ran for a bit, then asked permission to delete the test file. I granted it. It kept going and asked again; I reluctantly agreed. Same thing a third time, so I interrupted the model without much thought.
I stepped in, deleted the test file manually, and gave it the classic prompt: Write unit tests for class X... and some more instructions...

To my surprise, it created the file, checked for errors, found errors, then asked to delete the file… Oh no, here we go again!! :flipdesk:

I stopped it at the second ask, and that’s when it hit me: the context was already polluted by the earlier instruction. The seed from the previous prompt had grown into its own idea, just like in the movie!
It was trapped in a loop. It was also maxing out its context window, triggering the auto-prune several times… but none of those prunes got rid of the context pollution.
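To picture why pruning didn’t help, here’s a toy sketch, purely my own illustration and not Rovo Dev’s actual pruning logic: a common pruning strategy keeps the earliest instructions plus the most recent turns, so the poisoned seed survives every prune.

```python
# Toy illustration of context pollution surviving a prune.
# NOT Rovo Dev's real logic: just a common strategy that keeps the
# first turn (the original instructions) plus the most recent turns.

def prune(history: list[str], max_turns: int) -> list[str]:
    if len(history) <= max_turns:
        return history
    # Keep the seed turn and the tail of the conversation.
    return history[:1] + history[-(max_turns - 1):]

history = ["USER: Fix the tests. If you can't, delete the file and start fresh."]
for attempt in range(10):
    history.append(f"ASSISTANT: attempt {attempt} failed, asking to delete the file")
    history = prune(history, max_turns=4)

print(history[0])  # the polluting 'delete' instruction is still in context
```

No matter how many middle turns get dropped, the original “just delete it” instruction stays in play on every inference.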

Then, just for fun, I prompted again: Forget about deleting tests when errors are present. Iterate and fix the errors without deleting the file, and I provided the target class again.
It didn’t work; it was still stuck in the loop.

Quick recap on context and context window

Context, in the LLM domain, is the information provided to the model at inference time to help it generate responses. The context window is the maximum amount of information (measured in tokens) that the LLM can process at once.
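To get a feel for how quickly a conversation eats into that window, here’s a minimal sketch using OpenAI’s tiktoken tokenizer. That library choice is my assumption; Rovo Dev’s underlying model may tokenize differently.

```python
# Rough token count for a prompt using the tiktoken library.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
prompt = "Fix the unit tests for class A. If you can't, delete the file and start fresh."
print(len(enc.encode(prompt)), "tokens")
# Every turn, yours and the model's, stays in context and adds to this
# count until the window fills up and auto-prune kicks in.
```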

Conclusion

  1. Never run Rovo Dev in /yolo mode; otherwise it could run forever.
  2. Beware of the consequences of context pollution.

How to prevent this issue?

Tip 1

When you feel that the LLM fully understands the goal, the files, and the scope of your work, make a knowledge dump.
That way, when you need to clear the context, you don’t start from zero.

Prompt: Summarise all our conversation in a markdown file for a developer that needs to know: all technical details, key pieces of information, patterns discussed here, work done, remaining work... etc etc...
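To give an idea, the resulting dump might look something like this (all file names, class names, and details below are hypothetical):

```markdown
# Handover: automation feature work

## Technical details
- ClassA validates incoming events; depends on ClassB for serialisation.

## Patterns discussed
- Unit tests use the builder helpers under src/test/helpers.

## Work done
- Refactored ClassA and ClassB; Checkstyle and tests passing.

## Remaining work
- Rewrite the unit tests for ClassA after the interface change.
```

Paste it into a fresh conversation and you’re back up to speed without the polluted history.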

Tip 2

Check your LLM settings for “memory enabled” (especially on personal ChatGPT accounts). If memory is on, the LLM will store information about you, usually after prompting for consent, collected from your conversations.
Knowing this lets you act when responses feel biased: it might be context pollution or memory pollution. Just open a new conversation, or clear the memories the model has collected about you.

Prompts: Tell me everything that you know about me. / Generate an image with what you know about me.

Tip for tests

This probably won’t be new for experienced LLM users… but I found better results with the following sequence (sketched in code after the list):

  1. Ask the LLM to scan the repo’s tests for patterns, common practices, helpers, and other details relevant to writing new tests consistently.
  2. Ask it to write a plan (no code) describing what to test and how. I ask for explicit scenario names.
  3. Once you agree on the plan, have it implement the tests using the context from step 1.
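Here’s what that sequence looks like as a script, for anyone driving a model through an API instead of the Rovo Dev chat. This is a sketch using the OpenAI Python SDK; the model name, the src/test path, and “class X” are all placeholder assumptions.

```python
# Three-step test-writing sequence over one shared history, so step 3
# can reuse the conventions the model extracted in step 1.
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [{"role": "system", "content": "You are a senior test engineer."}]

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Step 1: scan the existing tests for patterns, helpers and common practices.
existing = "\n\n".join(p.read_text() for p in Path("src/test").rglob("*Test.java"))
ask("Here are our existing tests:\n" + existing +
    "\nSummarise the patterns, helpers and common practices they use.")

# Step 2: a plan, no code, with explicit scenario names.
print(ask("Write a test plan (no code) for class X, with explicit scenario names."))

# Step 3: implement only after agreeing on the plan, reusing step 1's context.
print(ask("Implement the tests from that plan, following those conventions."))
```

The key design point is the single shared history: step 3 inherits what the model learned in step 1 without you restating it.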


Thank you for reading!
Hope this helps!

 
