🚀 How We Automated QA for a 30-Agent Service Desk in JSM and What Happened Next

TL;DR: We partially automated the QA process for a 30-agent service desk team using only built-in Jira Service Management tools: Rovo, Jira Automation, and Confluence. In just 30 days, this setup delivered real, measurable impact:

  • 100% of high-priority tickets were reviewed with consistent QA scoring

  • Agent feedback loop shrank from 6 days to just 2

  • 3 systemic issues detected and 7 new KB articles created

In this article, you’ll learn how we designed and deployed a lean, human-in-the-loop service desk QA automation workflow in Jira Service Management using only native tools (Rovo, Automation, and Confluence) to improve ticket review speed, feedback loops, and service quality in just 30 days, and how you can adapt the approach to your own ITSM team.


The Service Desk QA Problem: When Tickets Add Up


Service desks often lack structured, scalable QA. Without it:

  • CSAT drops without explanation

  • Agents don’t get timely, useful feedback

  • Customer churn increases and LTV drops

Before automation, QA for service desk tickets was handled manually by the service desk manager and team leads, since there was no dedicated QA analyst. As a result, only a small, non-representative sample of tickets was reviewed, leading to inconsistent feedback and a noticeable drop in service quality.

So we asked: what if we could partially automate the QA workflow using only Rovo and Jira Automation, while leaving key decisions to humans?


Preparing for QA Automation


Before we could even think about AI automation of QA, we had to make sure the documents essential to our Rovo QA Analyst agent were ready and up to date. Readiness levels, and even document names, differ across organizations, so focus on the content rather than the names. In my client’s case, key documents like the Standard Operating Procedures (SOPs) and the Operating Level Agreements (OLAs) between teams hadn’t been updated in a while, which posed a problem: the Rovo QA Analyst agent relies on them as reference points when evaluating tickets against the QA scorecard. So we brought them up to date. Luckily, the problem and change management policies were already solid and the SLAs were well defined, so those needed no extra work.

As for the knowledge base, that’s a moving target. Product changes keep it in flux, so content updates are part of a continuous loop. There’s actually a dedicated automation step for surfacing recurring issues, identifying root causes, and recommending updates or new articles. But that piece belongs to a more advanced phase, so I’ve held it for Part 2. If you’d like to see how that works, let me know in the comments.

As part of our research, we mapped out the core documents we believe are essential to support a Rovo + Jira QA automation flow. These included standard operating procedures, escalation paths, and cross-team agreements.

[Screenshot: the core documents that support the Rovo + Jira QA automation flow]

One critical piece we had to develop before launch was the QA scorecard and scoring rubric. The team already had a basic version in place, so we used Rovo to analyze and refine it rather than starting from scratch. That gave us a solid starting point. We’ve since evolved it based on real-world usage, but here’s a quick preview of what the first working version looked like, just to give you a sense of the structure and what to aim for.

QA Evaluation Scorecard Criteria


| # | Criteria | Description |
| --- | --- | --- |
| 1 | Response Quality | Agent uses canned responses appropriately and customizes only when needed. |
| 2 | Clarity of Instructions | Steps are clear and easy to follow, with no vague or confusing language (e.g., the customer doesn’t have to keep confirming and double-checking with the agent that they understood correctly). |
| 3 | Communication Style | Calm, confident, and professional regardless of customer tone. |
| 4 | Empathy | Agent acknowledges user frustration when appropriate, without being defensive. |
| 5 | Resolution Ownership | Agent actively confirms with the customer before resolving the ticket. |
| 6 | Closure Confidence | Agent closes only after the user confirms or a documented inactivity/fallback rule applies. Check the knowledge base for the relevant Standard Operating Procedure (SOP) paragraphs to confirm that closure followed the documented rules. |
| 7 | Internal Collaboration | If the issue was escalated, check that the internal notes follow the instructions for escalations or handovers. Check the knowledge base for the relevant Operating Level Agreement (OLA) expectations. |
| 8 | Knowledge Base Usage | Agent shared knowledge base articles that the customer confirmed were useful. |
| 9 | Custom Instruction Writing | Agent writes detailed custom steps when a KB article wasn’t useful to the customer. |



Scoring Rubric (10-Point System)


| Criterion | Max Points | Scoring Guide |
| --- | --- | --- |
| Response Quality | 1 | 1 = Appropriate, 0 = Misused (e.g., copied the wrong canned response or didn’t customize when needed) |
| Clarity of Instructions | 2 | 2 = Clear and easy to follow, 1 = Mostly clear, 0 = Confusing or vague |
| Communication Style | 1 | 1 = Calm, confident, professional |
| Empathy | 1 | 1 = Shown if needed, 0 = Missing or inappropriate |
| Resolution Ownership | 1 | 1 = Confirmation requested or documented fallback |
| Internal Collaboration | 1 | 1 = Escalation-ready context provided |
| Knowledge Base Usage | 1 | 1 = Correct decision to include/skip based on KB quality |
| Custom Instruction Writing | 1 | 1 = Cleanly written, accurate steps |
| Closure Confidence | 1 | 1 = Customer explicitly confirmed that their request has been addressed and the ticket can be closed |
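To make the rubric concrete, here’s a hypothetical review: a ticket that is only mostly clear (1/2 on Clarity of Instructions) and misses an empathy moment (0/1), but meets every other criterion, scores 1 + 1 + 1 + 0 + 1 + 1 + 1 + 1 + 1 = 8/10, comfortably above the follow-up threshold of 7.5 described in Step 4 below.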


 


 The QA Steps We Automated in the MVP

 
We didn’t try to automate everything. We started with the pieces that mattered most, the ones we knew would show results fast. In the end, we built a loop of four steps, small enough to launch in a week.

Here’s what we automated first:

  1. Ticket Sampling

  2. Triggering QA Review via Rovo

  3. Recording Results in Confluence

  4. Creating Jira Work Management (JWM) Tasks Based on QA Review Outcomes

Let’s walk through each part.

Step 1: Sampling the Tickets

We wanted both random and targeted selection of sample tickets. At first we thought random sampling would be enough, but after some tests we decided we needed a combination of both. Eventually, we settled on three methods.

We also decided to keep the sampling rules flexible. We review and adjust them weekly. Sometimes we want more tickets from a specific agent. Sometimes we’re watching the impact of a change, or we need to closely watch the quality of service provided to a new high-value customer.


a. Random Sampling

A formula pulls a sample from each agent’s tickets from last week.

Note: “Random” isn’t native in Jira Automation. But with a clever workaround, we built it. Screenshot included. 

[Screenshot: random sampling workaround in Jira Automation]
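Our exact workaround is in the screenshot above, but to give you the flavor, here is a minimal sketch of one way the community approximates randomness with standard Jira Automation building blocks. The project key, agent name, and variable name are hypothetical placeholders, and your rule will likely differ:

```
Trigger:   Scheduled (weekly)

Action:    Lookup issues
           JQL: project = SD AND assignee = "agent-a" AND resolved >= -7d
           (SD and agent-a are placeholders)

Action:    Create variable
           Name:  randomIndex
           Value: {{#=}}{{now.format("SSS")}} % {{lookupIssues.size}}{{/}}
           (the current millisecond, 000-999, acts as a pseudo-random seed;
           if % isn't supported by math expressions on your instance,
           you'll need a different trick)

Then pick the sampled ticket with {{lookupIssues.get(randomIndex)}}.
(Nesting a variable inside get() may not resolve everywhere; test it,
or fall back to {{lookupIssues.first}} on staggered per-agent schedules.)
```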


b. Stratified Sampling

This targets conditions we care about, like SLA breaches, issue types, or team escalations.

 

Note: SLAs aren’t available as smart values, but you can fetch them using custom fields. See how we did it in the screenshots.

[Screenshot: fetching SLA values via custom fields for stratified sampling]
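For the conditions themselves, plain JQL covers a lot of ground before you ever need smart values: JSM ships SLA functions like everBreached() that work directly in a lookup. Two illustrative queries (the project key, SLA name, and label are assumptions, not our exact setup):

```
project = SD AND resolved >= -7d AND "Time to resolution" = everBreached()
project = SD AND resolved >= -7d AND issuetype = Incident AND labels = escalated
```

The first pulls last week’s resolved tickets that breached the resolution SLA at any point; the second narrows to escalated incidents.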

c. Targeted Sampling

This is precision QA. Use it when testing a hypothesis, say, tracking improvement after coaching, or when monitoring a high-value customer. Filter by a specific customer, keywords, or agent name.
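In JQL terms, a targeted pull is just a narrower lookup. A sketch, with the agent name and keyword as hypothetical placeholders:

```
project = SD AND resolved >= -7d AND assignee = "agent-a" AND text ~ "refund"
```

Swap in reporter or your organization field to follow a specific high-value customer instead of an agent.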

If you are not sure what criteria to use for sampling, here are the most common ones.

[Screenshot: the most common sampling criteria]

Step 2: Triggering a QA Review with Rovo


We created a dedicated Rovo agent and trained it to act like a QA analyst. We gave it clear instructions and, more importantly, context. The QA scorecard and rubric were embedded directly in its prompt. That worked far better than linking to external docs.

We also gave it access to the documents that we prepared beforehand: SOPs, OLAs, change and problem management policies.
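To illustrate the structure (this is a condensed, hypothetical sketch, not our production prompt), the agent’s instructions looked roughly like this:

```
You are a QA analyst for a service desk team. Evaluate the ticket
conversation below against the embedded scorecard and rubric.

Scorecard (embedded, not linked):
1. Response Quality (1 pt): canned responses used appropriately ...
2. Clarity of Instructions (2 pts): steps clear and easy to follow ...
   [remaining criteria as in the tables above]

For closure and escalation checks, consult the SOP and OLA pages you
have access to before scoring.

Output: total score out of 10, per-criterion scores, and 2-3 sentences
of coaching feedback with ticket-specific evidence.
```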

Then we plugged the agent into our automation flow.


Step 3: Recording QA Results in Confluence

We wanted something structured. A live database in Confluence. But here’s the catch: there’s no API to update Confluence databases. (Yet.)

So we had to use pages. Now, each automation run creates a fresh Confluence page. That page captures the week’s QA reviews: ticket links, agent names, customer info, scores, and notes. It lives in a dedicated Confluence space, and the service desk team uses it to dive deeper when needed, for tasks such as pattern detection, feedback sessions, and reporting.
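If you want to replicate the page-per-run setup, Jira Automation’s Send web request action can call the Confluence Cloud REST API directly. A minimal sketch of the request; the site URL, space key, and the {{qaResults}} variable are placeholders you’d replace with your own, and smart values are substituted before the body is sent:

```
POST https://your-site.atlassian.net/wiki/rest/api/content
Content-Type: application/json

{
  "type": "page",
  "title": "QA Review - Week {{now.format("w, yyyy")}}",
  "space": { "key": "QA" },
  "body": {
    "storage": {
      "representation": "storage",
      "value": "<p>{{qaResults}}</p>"
    }
  }
}
```

You’ll also need an Authorization header on the web request, for example Basic auth with an API token.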


Step 4: Turning Scores Into Tasks


This was a team lead request: they wanted more visibility. They wanted a trail. And they wanted QA outcomes to turn into actual work.

So we built rules.

  • If the score is below 7.5, we create a task in a dedicated JWM project for QA (see the rule sketch after this list). The task gets assigned, and it includes everything: the ticket, the score, and a summary. Leads can follow up, coach, or trigger documentation updates.

  • If the score is above 9, we also create a task, but this time it’s for review and recognition. The goal is to extract best practices, highlight what went well, and encourage that agent in the internal Slack channel. That’s what they call “positive reinforcement”.
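In rule terms, the routing looks roughly like this. A sketch only: the qaScore variable and the QAWM project key are hypothetical placeholders, and the exact actions depend on your instance:

```
If:       {{qaScore}} less than 7.5            (advanced compare condition)
  Action: Create issue in project QAWM
          Summary:     QA follow-up: {{issue.key}} scored {{qaScore}}/10
          Description: ticket link, per-criterion scores, Rovo summary
          Assignee:    the agent's team lead

Else-if:  {{qaScore}} greater than 9
  Action: Create issue in project QAWM
          Summary:     QA highlight: {{issue.key}} scored {{qaScore}}/10
          Description: what went well, candidate best practices to share
```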

Of course, a human still reviews both ends of the spectrum. AI gets better, but it’s not perfect. The point is that these automations free up time for leads and managers, letting them focus on solving problems, optimizing processes, and working with the staff who matter most.


The Full QA Workflow (for Context)

 
Even though we’ve only automated a small slice of the QA process so far, I want to share the full workflow we’re building toward. Some steps will stay manual. Others, we’re planning to hand off entirely to Rovo.

[Screenshot: the full QA workflow diagram]

And if your team is approaching QA differently or you’ve learned something from your own setup, I’d love to hear about it. Drop your thoughts and suggestions in the comments. 

If you’re wondering what each step actually does and how it fits into the bigger picture, this table should help. It outlines the inputs, outputs, tools, and owners for every activity in the QA process.

[Screenshot: inputs, outputs, tools, and owners for each QA activity]


Outcomes and Strategic Objectives


Once we automated a few key QA steps and began using them regularly, we started seeing tangible results, not just in measurable outcomes but in how leadership thought about service desk operations quality.

[Screenshot: outcomes after the first 30 days]

It also became clear how important it is to redefine strategic QA objectives at regular intervals (every three months, or twice a year, depending on how fast your environment changes). These objectives guide your ticket sampling and help you target what really matters: maybe it’s CSAT, customer churn, a low-converting onboarding experience, or the impact of a new training initiative.

Based on those priorities, you may pause or activate specific activities, and adjust your sampling method to match.


Wrap-up

 
I intentionally did not describe the exact technical setup, because Jira instances vary widely in field names, workflows, and configurations, and what works in one setup may not translate directly to another. But if you’re interested in a more technical deep dive, where I walk through the specific automation rules, conditions, and Rovo prompt structure step by step, let me know in the comments. I’d be happy to share those details in a follow-up.

Also, the biggest pitfall of automating the QA process primarily with Jira Automation was the cap on rule executions. As we continue expanding our QA automation, we keep hitting that ceiling. It has led us to consider building a dedicated app that runs key automations on our own backend, with more flexibility, advanced sampling, and finer control over when and how checks are triggered.

Whether you’re a QA analyst, service desk agent, Jira admin, ITSM process owner, knowledge base manager, account manager, or anyone involved in keeping ITSM workflows running smoothly, I’d love to hear from you if anything in this article prompted you to rethink or revisit parts of your approach.

Lastly, if you’re curious about the automation beyond the MVP described in this article, or want to explore options for complete QA automation in your organization, send your questions my way.

8 comments

Christoph Schaub
Contributor
June 3, 2025

Inspiring article, Rob. You mention many additional details that you could share. This might give me an idea for our project.

Rob Mkrtchian _CAIAT_US_
Atlassian Partner
June 3, 2025

@Christoph Schaub Thank you, Chris! If there’s interest from the community, I’ll publish part 2. Is your project focused on service desk QA automation? Have you already started setting it up? Is there a specific task you’re finding difficult to automate, or are you looking for something else?

Christoph Schaub
Contributor
June 3, 2025

We just started with the Service Desk in general. It's interesting to get first-person experiences in addition to what we read and what our implementation partner tells us. It helps shape the requirements and leads to a better result.

Rob Mkrtchian _CAIAT_US_
Atlassian Partner
June 4, 2025

@Christoph Schaub if you have plans to automate the QA process, we can connect and talk about it.

You can connect with me on LinkedIn: https://www.linkedin.com/in/mkrtchian-robert/

Jared Schmitt
Contributor
June 11, 2025

Thanks @Rob Mkrtchian _CAIAT_US_ for the write-up. Definitely something I'd like to try out myself. Super interested in both your part 2 and part 3 (the tech explainer).

We've been using JSM for quite some time now but don't do any QA. I like how your solution offers feedback and recognition at the same time, based on a set of pre-defined metrics. Well done!

Rob Mkrtchian _CAIAT_US_
Atlassian Partner
June 11, 2025

@Jared Schmitt thanks for the feedback and appreciation! What specifically are you interested in beyond the deeper tech explainer of the automation process already described? I’m trying to tailor the next parts as closely as possible to readers' expectations.

Jared Schmitt
Contributor
June 11, 2025

Sure @Rob Mkrtchian _CAIAT_US_ 

For part 2 you mentioned

There’s actually a dedicated automation step for surfacing recurring issues, identifying root causes, and recommending updates or new articles.

For the "recommending updates or new articles", is it done by Rovo alone? Or can we potentially also plug in some ChatGPT instance and create the articles automatically? This probably gets clearer for me once I see the tech behind your setup.

On another note, I understood the sampling step as some kind of automation that selects a number of tickets from each agent based on some criteria, either random or targeted - is that correct? 

Rob Mkrtchian _CAIAT_US_
Atlassian Partner
June 13, 2025

@Jared Schmitt thanks for sharing your questions. I'll cover the tech in my next articles, but let me answer your questions as much as I can at the moment:

1. For the "recommending updates or new articles", is it done by Rovo alone? - Yes, a dedicated Rovo agent with limited knowledge and instructions.

2. Or can we potentially also plug in some ChatGPT instance and create the articles automatically? - You can use any other LLM with an API in your automation instead of Rovo. You could even build your own AI agent with LangChain, locally or on an external platform, and use it instead of Rovo. We chose Rovo because it is becoming free for all licensed users and doesn't share your data with third-party providers.

3. On another note, I understood the sampling step as some kind of automation that selects a number of tickets from each agent based on some criteria, either random or targeted - is that correct? - Yes, sampling is an automation step; usually it's a random 20-25% selection of each agent's closed tickets from the last 7 days. Each agent's random selection is a separate branch in the "sampling automation" flow.

Hope it makes sense...
