Why does merging any pull request take more time?

Gagan Gami
July 2, 2021

Since last week I've been facing an issue while merging any pull request: it takes much more time compared to my previous experience.

It looks like this:

[Screenshot: bitbucket_mr.png]

I need to refresh the page and then press "Merge" again. Sometimes the "Merge" button appears, but when we click it, it says "This pull request is already closed."

[Screenshot: MicrosoftTeams-image (17).png]

1 answer

Theodora Boudale
Atlassian Team
July 2, 2021

Hello and welcome to the community.

Our engineering teams are in the process of migrating many of our core services onto new infrastructure. As part of this migration, we are aware that certain operations, including those that require significant file system I/O, may perform more slowly than usual.

It is helpful to realize that pull request merges are an asynchronous operation: clicking “Merge” triggers work in the background to merge the changes from your pull request into the destination branch. In fact, once you see the message “Merge in progress” on the page, you can safely navigate away. When you revisit the pull request a few minutes later, it will be merged.
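The "navigate away and check back" workflow described above can also be scripted. A minimal sketch: poll the pull request's state until it flips to MERGED, then proceed (for example, pull the destination branch). The `get_state` callable here is a stand-in for whatever call retrieves the PR state, e.g. a wrapper around Bitbucket Cloud's GET pull request REST endpoint, whose JSON response includes a `state` field; the stub at the bottom replaces the real HTTP request.

```python
import time

def wait_for_merge(get_state, timeout=600, interval=5, sleep=time.sleep):
    """Poll get_state() until the pull request reports MERGED.

    get_state: zero-argument callable returning the PR state string
    ("OPEN", "MERGED", or "DECLINED").
    Returns True if the PR merged before the timeout, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state == "MERGED":
            return True
        if state == "DECLINED":
            raise RuntimeError("pull request was declined")
        sleep(interval)
    return False

# Stubbed status source standing in for the real API call:
states = iter(["OPEN", "OPEN", "MERGED"])
print(wait_for_merge(lambda: next(states), interval=0))  # True
```

Injecting `get_state` (rather than hard-coding an HTTP call) keeps the sketch self-contained and makes the polling logic easy to test against a fake status source.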

While merges may take longer than normal over the next few days, rest assured they are still working! We will update our status page if our monitoring systems ever detect that merges are actually failing to complete successfully.

Kind regards,
Theodora

peteichuk
July 20, 2021

Hello Theodora!

Maybe it is worth rolling back the infrastructure changes, since they negatively affect performance?

Kind regards,
Michael

Drew Heasman
Contributor
July 20, 2021

Hi Theodora, is there any update? It's been a lot longer than a "few days".

Theodora Boudale
Atlassian Team
July 20, 2021

Hi everyone,

I wanted to share the following blog post from our head of engineering regarding the infrastructure changes in Bitbucket Cloud, the issues that came with them, and what our plan is:

This blog post explains, among other things, why merge tasks take longer after our infrastructure migration. We made merge tasks run asynchronously so that your team isn't blocked from doing other activities on our platform while PRs are being merged, and can navigate away from the page. In the meantime, our primary goal is to continue working on improvements and to keep identifying and removing the bottlenecks that are causing delays.

Our team is aware that slower merge times are having an impact on our customers and is working tirelessly on multiple initiatives to identify and eliminate the bottlenecks that are contributing to these delays. We are deploying small improvements daily.

At this point, it's hard to share the timeline around when we anticipate the merge times to improve, but rest assured, our team is treating it as a top priority.

Kind regards,
Theodora

shaun_titus
Contributor
July 23, 2021

Thanks for the update, Theodora. Unfortunately, making it an async process doesn't help at all. Many times our engineers are sitting idle, waiting for pipelines to trigger from the merge, or waiting for a merge operation to complete so they can pull master and create a new branch.

When it takes 5+ minutes for a merge to complete, it reduces the productivity of our engineering staff and makes it less likely they will submit smaller, more consumable PRs instead of large, unwieldy PRs where mistakes may be overlooked.

I think a lack of empathy and understanding of how users are consuming the service is minimizing the perceived impact and severity of this issue internally at Atlassian. Let me be clear: your customers are losing time and money due to this problem. Will this issue alone be enough for some organizations to switch to another provider? Perhaps... it definitely makes it harder to defend not using the current leading name in the industry.

Katarína Lukácsy
Atlassian Team
July 30, 2021

Hi!

Above all I'd like to very sincerely apologize for the frustrations our product is clearly causing you and your team! We don't take this lightly. We are engineers ourselves and we know exactly how important good tooling is for what we do. Please believe me when I say we strive to resolve issues with our product so our customers can be their most productive!

As the blog post linked above summarized, we've had to make some large architectural and infrastructural changes recently, and the journey turned out bumpier than we'd expected. Our teams, split across multiple timezones around the globe, are working around the clock identifying bottlenecks and reducing the negative impacts of these changes. Your observations of long merges, and your notes on their negative impact on your teams, are what drive us.

Over the past few weeks we have:

- simplified our locking implementation to eliminate unnecessary contention while preserving data consistency,
- reviewed and pruned the list of pre- and post-receive hooks that run for internal Git operations,
- optimized the queuing layer of our pull request merge tasks to enhance our ability to scale up the system when we detect bottlenecks, and
- reconfigured our infrastructure to scale up more efficiently.

Our internal monitoring systems show that merge times should now be significantly shorter, though we do still see occasional spikes, particularly during peak times, that we continue to debug and work through.

I hope you'll continue being our valued customer and stick with us through these times, so we can all enjoy Bitbucket Cloud at its full potential on the other side of this!

Katarina

Engineering Manager, Atlassian Bitbucket Cloud

shaun_titus
Contributor
July 30, 2021

Thank you, Katarina, for the thoroughness and transparency of your update. It's a relief to know that engineering resources are focused on this issue and that we can expect things to improve relatively soon. Given the complexity of the undertaking, we couldn't ask for much more from your team.

I very much hope we can make it through these rough times and enjoy Bitbucket Cloud as it is designed to be going forward.  If you continue to make large strides like this to close the performance/stability gaps it shouldn't be an issue.  

Thanks again for the update and the tremendous effort.
