I bring to you the five stages of Rovo grief. This reflects my personal journey and mine alone. For the more constructive part, skip to the last heading; for discussion, jump to the comment section!
Weren’t we all very excited when Atlassian announced at Team '25 that Rovo would be free for everyone? At the time, we were considering getting Rovo licensed anyway. The concept of the Teamwork Graph, connected knowledge and especially the unified search sounded great! Having agentic AI interact with our data, without us having to implement custom (pro-code) solutions, seemed like a great opportunity. As an IT consultancy, we could use Rovo to create value for ourselves as well as for our customers.
We also loved the out-of-the-box connectors to other systems like OneDrive. We try to use Confluence as much as possible but still have a lot of processes that are document-centered. Therefore, a lot of knowledge still lies in PDFs, Word files and PowerPoints… soon to be available in a single search… that actually finds stuff!!!
Being on Confluence Premium, we got Rovo in early June. According to our Global administration, we got it for the whole site. It turns out that, being on Jira Standard, we won’t get a full rollout until some time later this year. According to Support, Rovo shouldn’t have access to Jira data at the moment, and at first glance that seems true: there is no sign of Rovo in the Jira UI. Not in the search, not as a chat icon. However, using the search from within Confluence, I can still access Jira and ask about my work items. Weird.
That was just the first confusing thing. Working with an Enterprise customer, it got worse: they were told three different rollout dates, received emails with documentation referring to features that only became available a couple of days after the communication, and so on.
Simply put: even just figuring out when your organization is getting access is a headache.
Once you do have Rovo on your site, you are tasked with trying out all the features. Of course, there is some enablement material (special thanks to @Dr Valeri Colon _Connect Centric_ for providing lots of extra content) - but checking the features against your own data and setup is pivotal. So you create Agents, summarize content, search for pages and try to understand what’s happening under the hood. You run the same prompt on multiple pages, in different contexts, and ask other users to do the same. With what result?
Inconclusiveness. Everyone gets a different result. And I am not talking about different wording with the same meaning. I am talking about real inconsistencies, inexplicable behaviour and pure confusion.
Once you start chatting with the Agents, another interesting thing surfaces. Let’s take a “curated search” use case as an example - an Agent with limited, specific knowledge that should therefore be an expert on the topic. You begin with the conversation starter and the Agent searches for information. Sometimes you get 5 sources as a result, sometimes 10 - some of them wrong. You ask a follow-up question. What happens now is downright stupid. Rather than just using the chat history, Rovo starts a whole new search across all the content. The Agent is so caught up in its prompt that it acts on it for every request sent its way. So with every new request you send to the same chat, not only does the chat history (and therefore the request) get longer, the whole knowledge base also gets searched again. Why?! Or does it only look that way in the UI?
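To make the difference concrete, here is a minimal sketch in Python - with made-up helper functions, and emphatically not Rovo’s actual implementation - of a follow-up turn that searches the whole knowledge base again versus one that simply answers from the context it already has:

```python
# Sketch only - not Rovo's internals. The helpers below are hypothetical
# stand-ins for retrieval and model calls.

def search_knowledge_base(query: str) -> list[str]:
    """Stand-in for a full search over the Agent's knowledge sources."""
    print(f"searching the whole knowledge base for: {query!r}")
    return ["page A", "page B"]

def ask_llm(messages: list[str], sources: list[str]) -> str:
    """Stand-in for the model call that produces the answer."""
    return f"answer based on {len(messages)} messages and {len(sources)} sources"

def follow_up_with_re_search(history: list[str], question: str) -> str:
    # What the UI suggests Rovo does: a fresh full search on every turn,
    # on top of an ever-growing chat history.
    sources = search_knowledge_base(question)
    return ask_llm(history + [question], sources)

def follow_up_from_context(history: list[str], question: str,
                           cached_sources: list[str]) -> str:
    # What a follow-up usually needs: answer from the chat history and the
    # sources already retrieved on the first turn.
    return ask_llm(history + [question], cached_sources)

history = ["Where is our expense policy?", "It is on page A."]
print(follow_up_with_re_search(history, "Who approved it?"))             # searches again
print(follow_up_from_context(history, "Who approved it?", ["page A"]))   # reuses what we have
```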
You soon come to the realisation that a lot of the concepts are great in theory but badly implemented. You wonder about temperature settings (the degree of deterministic behaviour), the AI models used, and system prompts manipulating your Agent prompts. You long for trial and success, but trial and error becomes your daily business. The excitement about new features like Web search is replaced by disillusionment once you figure out that the feature doesn’t work yet. Not even as in “it doesn’t give me the expected results”. No. It just doesn’t search the web. Why roll out a feature that has no function?
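For readers who haven’t dealt with these knobs before: temperature is the setting that controls how deterministic a model’s answers are - and it is exactly the kind of thing Rovo doesn’t let us see or set. A minimal sketch of what it does, assuming the OpenAI Python SDK and an API key (nothing Rovo-specific):

```python
# Minimal sketch of what "temperature" means, using the OpenAI Python SDK
# (an assumption for illustration - Rovo does not expose its models or settings).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = "Summarize our onboarding page in two sentences."

for temperature in (0.0, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.0 = (nearly) deterministic, 1.0 = deliberately varied
    )
    print(temperature, response.choices[0].message.content)
```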
Governance? Remains a dead loss. Enterprise customers will get an enforced rollout in two months’ time and still don’t have an ETA on basic things like restricting the use and creation of Agents. All while we also don’t have an ETA on the introduction of credits and usage fees.
We would have loved to connect SharePoint/OneDrive to Rovo right away but don’t get enough information from the one-page documentation Atlassian provided. Saying that “Rovo respects the permissions” without explaining the technicalities behind that is simply not good enough when you have sensitive information flying around. Also: when will we be able to hide certain content from Rovo?
Our internal AI team was enthusiastic to try out features and even get into Forge agents, using the Teamwork Graph for custom development. These days I am struggling to find anyone willing to spend their time testing features with me. Honestly, the only things we found to work OK so far are the features Atlassian AI basically provided before Rovo already: summaries, creating Jira issues, suggesting a title… At first we also really liked the new search. It really did improve. However, we still don’t quite understand when an AI answer is triggered. Also, a search should be deterministic. The unified search itself seems to be doing fine on that front (but then, could it have gotten any worse than before?). As soon as AI is involved, though, the question seems to be rephrased differently every time, resulting in different findings (“high temperature” in AI speak). That is even worse when you task a Rovo Agent with finding knowledge.
And that makes me mad. It makes me so mad that I am writing a whole Community article about how mad I am. So far I have not found a single use case for Rovo that brings me or our teams more value than the effort I have to put into developing it. I have a list of use cases that could work in theory - but none of the tests consistently produced outputs good enough for us to actually use anything productively.
To give you an example: we created an Agent that should create a TL;DR summary of any given page. We gave it positive examples and prompted it explicitly NOT to use anything like “This page describes…”, “This page includes…” or “This page shows…”. And what did we get? 4 out of 5 times we received a TL;DR starting with “This page…”. So we tested the same prompt with GPT-4o mini to rule out any issues with the underlying model - all good.
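If you want to reproduce that cross-check yourself, something along these lines is enough - a sketch assuming the OpenAI Python SDK and an API key; the instructions below are a simplified version of our Agent prompt, and sample_page.txt is just a placeholder for any page export:

```python
# Sketch of the cross-check against GPT-4o mini (OpenAI Python SDK assumed).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

instructions = (
    "Write a TL;DR summary of the page below in at most three sentences. "
    "Never start with phrases like 'This page describes', 'This page includes' "
    "or 'This page shows'."
)
page_text = open("sample_page.txt", encoding="utf-8").read()  # placeholder input

for attempt in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": page_text},
        ],
    )
    tldr = response.choices[0].message.content
    print(f"attempt {attempt + 1}: starts with 'This page'?",
          tldr.strip().lower().startswith("this page"))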
Which could mean one of two things: either Rovo’s own system prompt is overriding our instructions, or the model choice is the problem. Atlassian has implemented an orchestration router that picks “the best model for the job” - sadly it rather feels like they pick “the cheapest model possible with no regard for the quality of the output”. And I just don’t get it. Atlassian provides Rovo for free in the hope of getting us all hooked before monetizing it again. Why give us a horrible experience in the free phase, discouraging us all from putting Rovo to use, and then also wave that usage-fee flag in our faces? In reality, Rovo is costing us dearly at the moment: we are spending internal resources on trying, testing and hoping, with no real ROI in sight. Worst of all: we are actively contributing to climate change by triggering AI that doesn’t bring us any great benefit.
So there we are. Raging at this black box in the hope of being heard by some AI god. Trying to figure out how to trigger the better model. Improving our prompts and threatening this poor Agent with deletion when it doesn’t comply. Stopping all research and waiting for Rovo to get better. Returning to our internal ChatGPT and self-coded custom RAGs. Writing Community articles to at least find consolation in others who have also reached stage 5 of Rovo.
Or rather: What should Atlassian do with that?
Then again, as an Atlassian partner I guess there is stage 6: hoping for the best. Watching Community content and announcements. Testing out new and improved features after all. And trusting Atlassian to deliver a great product sooner rather than later.
P.S. Unlike the graphics, none of this text was generated by AI - much less Rovo.
Rebekka Heilmann (viadee)
IT Consultant
viadee Unternehmensberatung AG
Germany