Our company uses two-week sprints. The issue we run into is that tickets are dev complete but have not finished testing yet. At the end of a sprint, we have to move all of these tickets into the next sprint even though the developer's work is done.
I'm looking for suggestions on either a better way to handle this in Jira or a better process our team can use. We basically use the sprints for developers. The challenge is that when we look at the active sprint, it still shows all the tickets the developers already completed in the previous sprint.
I know there can still be defects against these dev-complete tickets, and maybe there isn't a better way to handle this. I just hate having to move so many tickets into the next sprint every time I close the previous one.
Thanks
Hello @David
I have a few fundamental process questions for you, so I can better understand your situation.
Are development and testing handled by different people/teams?
Is having the testing completed part of the definition of done for the ticket? If so, then it is totally appropriate that the ticket remain open until that task is complete.
If testing is not part of the DOD, then why keep the tickets open?
If you need to track both development and testing work, but testing is not part of the DOD for the development tickets, then I would suggest tracking testing work as separate tickets and linking the development and testing tickets to each other.
We consider a ticket done when it passes testing. However, we use the sprints as a developer tool so our devs can see what is assigned to them in that sprint. This is why I would prefer the sprints only show work that is not dev complete.
What we do now is create a ticket and work it. We do create a separate QA task as a sub-task of the original ticket. Our QA team does not do any work against the original item other than updating its status.
We didn't want to go down the path of tracking the same work against two different tickets (one for dev and one for QA). That felt like more to manage and harder to keep track of status-wise. But I'm open to suggestions.
Hi @David
Yes, and...to Trudy's ideas:
You note that the team is using Scrum: that means the whole team, not just the people doing dev work or testing work. When there is often carry-over at the end of a sprint for one type of work, it may indicate capacity balance issues for the team. This seems like a good opportunity for a team discussion on workload, knowledge transfer, and how the team does its work (e.g., manual versus automated testing), and then to experiment and try to improve the balance.
Some teams try to solve this symptom by switching to what they perceive to be Kanban. But that may just "kick the can down the road": the capacity issue is still there, and will only get worse if more things get "dev complete" than the QA/testing capacity can support.
Best regards,
Bill
Some very good input from my colleagues here already. However, I'll add my two cents as well, based on my own personal experience.
A solution that I have employed for this, although I found it rather tedious, was to split my workflow across a Scrum and a Kanban board. The Scrum board was for development and was considered done via a "QA Ready" done status. While this was the final, right-most column on the dev Scrum board, it was the first column on the QA Kanban board. The QA team would take the issue to the final Done status on their board.
However, I think a better solution is to break the test effort completely out of the development effort. Allow the QA team to create their own tasks and link them back to the dev task if desired. This lets the development team work completely independently of the QA team. If QA finds an issue during testing, they can open a bug and link it back to the development task.
After reading through the comments and thinking about what our team had recently discussed, I'm going to explore splitting dev and QA completely. I want to see if I can automate the QA ticket creation based on the dev ticket's status. We are already partway there, since we create a QA task linked to the original ticket.
The main reason we didn't do this before was the concern that having two different tickets for the same work would create confusion. However, I think this is just a matter of changing our mindset: one ticket for development, and once it is dev complete, it's closed. We would then create a QA ticket (manually or automated), and from that point on all work is tracked against it. Defects would be linked to the QA ticket, and the workflow for the QA ticket would go from "QA Ready" to "Complete" or whatever statuses we choose.
I'm going to explore this path and see if we can make it work.
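For the automation piece, besides Jira's built-in Automation rules, one option is a small script against the Jira REST API (POST /rest/api/2/issue to create the QA ticket, then POST /rest/api/2/issueLink to connect it to the dev ticket). The sketch below only builds the request bodies; the project key, issue type, and link type name are assumptions you would adjust for your own instance:

```python
def build_qa_issue_payload(dev_issue_key, dev_summary, project_key="QA"):
    """Request body for POST /rest/api/2/issue to create the QA ticket.

    The "QA" project key and "Task" issue type are hypothetical --
    substitute whatever your Jira instance actually uses.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f"QA: {dev_summary}",
            "description": f"Test the work completed in {dev_issue_key}.",
            "issuetype": {"name": "Task"},
        }
    }


def build_link_payload(qa_issue_key, dev_issue_key, link_type="Relates"):
    """Request body for POST /rest/api/2/issueLink.

    "Relates" is a default Jira link type; your admin may have
    configured something more specific (e.g. "Tests").
    """
    return {
        "type": {"name": link_type},
        "inwardIssue": {"key": dev_issue_key},
        "outwardIssue": {"key": qa_issue_key},
    }
```

You would send these payloads with an authenticated HTTP client, triggered by a webhook or a scheduled job that watches for dev tickets entering the "Dev Complete" status.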
Hi drue,
I think splitting QA from dev is a good idea. It will give you better insight into the velocity your dev team is achieving by taking out the "external factor". If QA is not part of the DOD, as others have said, this is fine.
However, what I do wonder is where your QA is actually done.
- If it's done on a test environment, isn't feedback still part of the original scope?
- If it's done on an RC environment, it's debatable whether it's part of the original scope or a newly introduced bug.
- If it's done on a production environment, it can be considered a new bug/feature and should be created on the backlog and follow the usual route.
I've run into the same issue calculating velocity for my dev teams, since we require approval from our customers. To solve this, I created two boards:
1. A Scrum board with the regular dev flow and the columns To do → In progress → On Test → On RC → Done.
2. A burn-down board where I use just three columns: To do → In progress → Done. However, the Done column contains every status after a ticket has been pushed to RC (the point where we no longer consider feedback part of the original scope). Jira sees a ticket as burned when it's in the last column of a board, no matter the status it's in.
It doesn't remove the problem of issues moving into the next sprint whenever they aren't released yet, but it does give me insight into my developers' velocity without the customer factored in.
Hi @David
Welcome to the Atlassian community forum.
One good way to address your query is to have a separate Jira Kanban board for testing activities (QA bugs) and install a test management plugin from the Atlassian Marketplace to track requirements traceability. That way, all development activities can be tracked on the Jira Scrum board, it's easy to see which development stories were closed in that sprint, and "Active sprints" can be used to track the progress of those stories.