
Why your Velocity and Sprint Reports are probably wrong

Have you ever stopped to analyze which work-items are actually counted toward velocity, and how the Velocity and Sprint Reports really work? If not, your reports are probably not reliable.

Here are four common pitfalls that can seriously distort your Velocity and Sprint Reports - and how my app, Multi-team Metrics & Retrospective, can fix most of them automatically:

  1. Are you removing items from the sprint only when deprioritized?

If you remove incomplete items from a sprint before it ends (to move them to the next sprint or the backlog), instead of letting the workflow triggered by the ‘Complete sprint’ button carry them over, the Sprint Report won’t count them as "uncompleted" - and your velocity data is artificially inflated. Only deprioritized items should be removed; everything else should stay in the sprint and be carried over properly.
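
If you want to audit closed sprints for this, one approach is to scan issue changelogs for Sprint-field changes that dropped the sprint id before the sprint was completed. Below is a minimal sketch in Python against the standard Jira Cloud REST APIs; the site URL, credentials, project key, sprint id, and time window are placeholders, and the scan is an approximation rather than the Sprint Report's own logic:

```python
# A sketch: surface issues removed from a sprint before it was completed - the
# ones the Sprint Report silently drops instead of counting as "uncompleted".
# Issues removed from a sprint no longer match 'Sprint = <id>' in JQL, so we
# scan recently updated project issues and inspect their changelogs instead.
# Timestamp parsing assumes Python 3.11+ (Jira returns ISO 8601 with offsets).
from datetime import datetime
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = ("you@example.com", "your-api-token")    # assumption: API-token basic auth
SPRINT_ID = 123                                 # placeholder: a closed sprint's id

sprint = requests.get(f"{JIRA_URL}/rest/agile/1.0/sprint/{SPRINT_ID}", auth=AUTH).json()
completed = datetime.fromisoformat(sprint["completeDate"])

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={
        "jql": "project = PROJ AND updated >= -90d",  # placeholder window covering the sprint
        "expand": "changelog",
        "maxResults": 100,
    },
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    for history in issue["changelog"]["histories"]:
        for item in history["items"]:
            if item["field"] != "Sprint":
                continue
            # The Sprint changelog item carries comma-separated sprint ids.
            was_in = str(SPRINT_ID) in (item.get("from") or "")
            still_in = str(SPRINT_ID) in (item.get("to") or "")
            if was_in and not still_in and datetime.fromisoformat(history["created"]) < completed:
                print(issue["key"], "was removed before the sprint ended")
```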

  2. Are all completed items actually in the sprint?

The Sprint Report metric Issues completed outside of this sprint only captures work that was completed outside a sprint AND then added to it afterward. In reality, many teams resolve work-items but forget to add them to the sprint. If you look back over a quarter, chances are you’ll find several issues that were resolved but never included in any sprint, meaning your Velocity and Sprint Reports are missing real work.
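
To surface these, one option is a JQL audit for resolved issues with an empty Sprint field. Here's a minimal sketch in Python against Jira Cloud's /rest/api/2/search endpoint - the site URL, credentials, and project key PROJ are placeholders:

```python
# A sketch: list issues resolved in the last quarter that were never part of
# any sprint. Site URL, credentials, and project key are placeholders.
import requests

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "your-api-token")

# 'Sprint IS EMPTY' matches issues that aren't assigned to any sprint.
jql = "project = PROJ AND resolution IS NOT EMPTY AND Sprint IS EMPTY AND resolved >= -90d"

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "fields": "summary,resolutiondate", "maxResults": 100},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["resolutiondate"], issue["fields"]["summary"])
```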

  3. What about duplicates?

Handling duplicate issues is tricky. If you leave them out of the sprint, they get lost, even though high-level analysis of them is essential. If you include them, you must ensure they don’t carry estimates, or you'll skew your metrics. In enterprise environments, it's not uncommon to have dozens of duplicates in play - a silent source of distortion.
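
If you want to check a sprint for estimated duplicates, here's a hedged sketch along the same lines. The issueLinkType JQL field is a Jira Cloud feature; the story-points field id, sprint name, and credentials below are placeholders you'd need to adjust:

```python
# A sketch: flag duplicates in a sprint that still carry a story-point estimate.
import requests

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "your-api-token")
STORY_POINTS = "customfield_10016"  # placeholder: your site's story-points field

jql = 'Sprint = "PROJ Sprint 42" AND issueLinkType in (duplicates, "is duplicated by")'

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "fields": f"summary,{STORY_POINTS}", "maxResults": 100},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    points = issue["fields"].get(STORY_POINTS)
    if points:  # an estimated duplicate gets double-counted in velocity
        print(f'{issue["key"]} is a duplicate carrying {points} points')
```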

  4. Are you double-estimating overlapping work?

A common example: you estimate a story in the sprint, and later a bug is filed (by your team or another) for the same functionality. Both get estimates, but in reality the bug is often just a continuation of the same task. These overlaps inflate your reported effort and distort team performance metrics.
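
A rough way to spot candidates is to look for bugs in a sprint that are linked to another estimated issue in the same sprint. A minimal sketch, with the same placeholder site, credentials, sprint name, and field id as above:

```python
# A sketch: within one sprint, find bugs linked to another estimated issue in
# the same sprint - candidates for double-counted effort.
import requests

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "your-api-token")
STORY_POINTS = "customfield_10016"  # placeholder: your site's story-points field

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={
        "jql": 'Sprint = "PROJ Sprint 42"',  # placeholder sprint name
        "fields": f"summary,issuetype,issuelinks,{STORY_POINTS}",
        "maxResults": 100,
    },
    auth=AUTH,
)
resp.raise_for_status()
issues = resp.json()["issues"]

# Keys of every issue in the sprint that carries an estimate.
estimated = {i["key"] for i in issues if i["fields"].get(STORY_POINTS)}

for issue in issues:
    if issue["fields"]["issuetype"]["name"] != "Bug" or issue["key"] not in estimated:
        continue
    for link in issue["fields"]["issuelinks"]:
        other = link.get("inwardIssue") or link.get("outwardIssue")
        if other and other["key"] in estimated:
            print(f'{issue["key"]} and {other["key"]} are linked and both estimated')
```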

Bonus: Retrospectives and Added Scope confusion

If you review Added Scope during retrospectives, you know how challenging it can be to distinguish real new work from expected additions - such as duplicates, or test-discovered bugs for tasks planned in the current sprint. The Sprint Report marks added issues with an asterisk (*), but there’s no separate metric, so it’s up to you to scan each work-item manually.
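
If you'd rather not scan by hand, one approximation is to compare each issue's Sprint-field changelog against the sprint's start date, using the standard /rest/agile/1.0/sprint/{id} endpoint and expand=changelog on search. A sketch, with the usual placeholders:

```python
# A sketch: approximate the Sprint Report's asterisked "added" issues by checking
# each issue's Sprint-field changelog against the sprint's start date.
# Timestamp parsing assumes Python 3.11+.
from datetime import datetime
import requests

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "your-api-token")
SPRINT_ID = 123  # placeholder: the numeric sprint id

sprint = requests.get(f"{JIRA_URL}/rest/agile/1.0/sprint/{SPRINT_ID}", auth=AUTH).json()
start = datetime.fromisoformat(sprint["startDate"])

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": f"Sprint = {SPRINT_ID}", "expand": "changelog", "maxResults": 100},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    for history in issue["changelog"]["histories"]:
        for item in history["items"]:
            # Naive containment check on the comma-separated sprint ids.
            if item["field"] == "Sprint" and str(SPRINT_ID) in (item.get("to") or ""):
                if datetime.fromisoformat(history["created"]) > start:
                    print(issue["key"], "joined the sprint after it started")
```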

You can handle all of this manually, or let the app do most of the work for you

These problems can be addressed with strict process oversight and constant manual review, but that creates costly overhead. An alternative is the app I built - Multi-team Metrics & Retrospective.

The app provides options to:

  1. Automatically include Removed Scope (items removed without being completed) in both Final Scope and Uncompleted Scope.
  2. Automatically include Scope Completed Outside in both Final Scope and Completed Scope.
  3. Automatically exclude duplicates from all metrics - if they're linked to another issue in the same sprint.
  4. Automatically exclude issues linked as testing discovered (a link type the app creates for you) from Added Scope (a separate metric in the app) and from Re-estimated Scope.

Part of the Configuration Screen:

[screenshot: configuration screen]

The Dashboard View:

[screenshot: dashboard views]
