Have you ever stopped to analyze which work-items actually count toward velocity and how the Velocity and Sprint Reports really work? If not, your reports are likely unreliable.
Here are four common pitfalls that can seriously distort your Velocity and Sprint Reports - and how my app, Multi-team Metrics & Retrospective, can help fix them, mostly automatically:
If you remove incomplete items from a sprint before it ends (to move them to the next sprint or the backlog), instead of letting the workflow that runs after the ‘Complete sprint’ button is clicked carry them over, the Sprint Report won’t count them as "uncompleted". That artificially inflates your velocity data. Only deprioritized items should be removed; everything else should stay in the sprint and be carried over properly.
The Sprint Report metric "Issues completed outside of this sprint" only captures work that was completed outside a sprint AND then added to it afterward. In reality, many teams resolve work-items but forget to add them to a sprint at all. If you look back over a quarter, chances are you’ll find several issues that were resolved but never included in any sprint - meaning your Velocity and Sprint Reports are missing real work.
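To spot such issues yourself, one option is a JQL query for resolved work that was never in any sprint. Below is a minimal sketch against the Jira Cloud REST API; the site URL, credentials, and the ABC project key are placeholders, and pagination is omitted for brevity:

```python
# Sketch: find issues resolved in the last quarter that were never in any sprint.
# JIRA_SITE, AUTH, and the project key ABC are placeholders - substitute your own.
import requests

JIRA_SITE = "https://your-site.atlassian.net"  # placeholder site URL
AUTH = ("you@example.com", "your-api-token")   # placeholder credentials

jql = "project = ABC AND resolved >= -90d AND resolution is not EMPTY AND sprint is EMPTY"

resp = requests.get(
    f"{JIRA_SITE}/rest/api/2/search",
    params={"jql": jql, "fields": "summary,resolutiondate", "maxResults": 100},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    fields = issue["fields"]
    print(issue["key"], fields["resolutiondate"], fields["summary"])
```

Anything this prints is real, finished work that no Velocity or Sprint Report ever counted.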
Handling duplicate issues is tricky. If you leave them out of the sprint, they get lost, even though high-level analysis of them is essential. If you include them, you must ensure they don’t carry estimates, or you'll skew your metrics. In enterprise environments, it's not uncommon to have dozens of duplicates in play - a silent source of distortion.
A common example: you estimate a story in the sprint, and later a bug is filed (by your team or another) for the same functionality. Both get estimates, but in reality the bug is often just a continuation of the same task. These overlaps inflate your reported effort and distort team performance metrics.
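One way to audit this is to query issues that sit on a "duplicates" link but still carry an estimate. A rough sketch, assuming Jira Cloud (where the issueLinkType JQL field is available) and a site whose story-points field is named "Story Points"; the field ID customfield_10016, credentials, and project key are placeholders - check your own site's configuration:

```python
# Sketch: list issues involved in a "duplicates" link that still carry an estimate.
# The story-points field ID varies per Jira site - customfield_10016 is an assumption.
import requests

JIRA_SITE = "https://your-site.atlassian.net"  # placeholder site URL
AUTH = ("you@example.com", "your-api-token")   # placeholder credentials
STORY_POINTS = "customfield_10016"             # assumption: your story-points field ID

jql = 'project = ABC AND issueLinkType = "duplicates" AND "Story Points" is not EMPTY'

resp = requests.get(
    f"{JIRA_SITE}/rest/api/2/search",
    params={"jql": jql, "fields": "summary," + STORY_POINTS, "maxResults": 100},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    fields = issue["fields"]
    print(issue["key"], fields.get(STORY_POINTS), fields["summary"])
```

Each hit is a candidate for zeroing out the estimate on one side of the link, so the same work isn't counted twice.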
Bonus: Retrospectives and Added Scope confusion
If you review Added Scope during retrospectives, you know how challenging it can be to distinguish real new work from expected additions - such as test-discovered bugs for tasks planned in the current sprint, or duplicates. The Sprint Report marks added issues with an asterisk (*), but there’s no separate metric, so it’s up to you to scan each work-item manually.
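Until then, one way to approximate such a metric yourself is to compare each issue's Sprint-field changelog against the sprint's start date. A minimal sketch using standard Jira Cloud endpoints; the sprint ID and credentials are placeholders, and this is only an approximation of the Sprint Report's own added-scope logic:

```python
# Sketch: flag issues added to a sprint after it started, via the Sprint field's
# changelog. SPRINT_ID and credentials are placeholders; pagination is omitted.
from datetime import datetime
import requests

JIRA_SITE = "https://your-site.atlassian.net"  # placeholder site URL
AUTH = ("you@example.com", "your-api-token")   # placeholder credentials
SPRINT_ID = 42                                  # placeholder sprint ID

def parse_ts(ts: str) -> datetime:
    # Jira timestamps look like 2024-05-01T10:00:00.000+0200 (or ...000Z).
    return datetime.strptime(ts.replace("Z", "+0000"), "%Y-%m-%dT%H:%M:%S.%f%z")

sprint = requests.get(f"{JIRA_SITE}/rest/agile/1.0/sprint/{SPRINT_ID}", auth=AUTH)
sprint.raise_for_status()
start = parse_ts(sprint.json()["startDate"])

resp = requests.get(
    f"{JIRA_SITE}/rest/api/2/search",
    params={"jql": f"sprint = {SPRINT_ID}", "expand": "changelog", "maxResults": 100},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    for history in issue["changelog"]["histories"]:
        added_here = any(
            item["field"] == "Sprint" and str(SPRINT_ID) in (item.get("to") or "")
            for item in history["items"]
        )
        if added_here and parse_ts(history["created"]) > start:
            print("Added after start:", issue["key"])
            break
```

Caveat: issues created directly inside an active sprint get no Sprint changelog entry, so a complete check would also compare issue creation dates against the sprint start.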
You can handle all of this manually, or let the app do most of the work for you
These problems can be addressed with strict process oversight and constant manual review, but that creates costly overhead. An alternative is the app I built - Multi-team Metrics & Retrospective.
The app provides options to handle each of the pitfalls described above.
(Screenshot: part of the Configuration Screen)
(Screenshot: the Dashboard View)
Alexey Pavlenko
Founder & Full-Stack Developer, PMP