C++ tests start failing after 15th of November

Joao Pires
November 21, 2024

Hello all,

We are experiencing something very strange in the pipelines of one of our repos (the only C++ one).

In the last week, some of the unit tests started to fail. The same commit and Docker image passed on the 15th of November and started failing on the 19th of November. We don't have any runs on the days in between.

The strange part is that if we download the artefact from the pipeline, all tests pass. And of course it's a Schrödinger bug: enabling any kind of logging makes the tests pass.

A couple of questions:

Were there any changes to Pipelines between the 15th and the 19th of November that could explain this?

Do you have any ideas on how to debug this?

 

Thanks

2 answers

0 votes
Theodora Boudale
Atlassian Team
November 26, 2024

Hi Joao and welcome to the community!

We recently migrated Pipelines 1x/2x steps to a new CI/CD runtime, so your build might be affected by this.

You said that the tests fail on the same commit and docker image as a previously successful build. Is the SHA of the Docker image also the same in these two builds? You can find that for each build if you open the build on our website, and expand the Build setup section. At the end of the section, you will see something like the following:

Images used:
build: atlassian/default-image@sha256:cd331889428bac55d8968a0eb1367b6fd8378fd358e48ce967637b84ad6fbe24

If there is a different SHA of the Docker image in these two builds, the failure could be caused by a change in the Docker image.
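If you want to double-check what a tag currently resolves to outside of Pipelines, you can pull it locally and print its digest for comparison with the Build setup output. In the sketch below, atlassian/default-image:latest is just a stand-in for whatever image your step actually uses:

docker pull atlassian/default-image:latest
# Print the digest that the tag currently resolves to
docker inspect --format '{{index .RepoDigests 0}}' atlassian/default-image:latest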

My suggestion would be to debug the failed build locally with Docker, following the steps from this knowledge base article:
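The general shape of that approach is something like the following; the local path, the image reference, and the resource limits are placeholders (the values roughly mimic a 1x step) that you would adjust to match your own step:

# Start an interactive container that approximates the Pipelines build environment
docker run -it \
  --volume=/path/to/your/local/clone:/localDebugRepo \
  --workdir=/localDebugRepo \
  --memory=4g --memory-swap=4g --memory-swappiness=0 \
  --cpus=4 \
  --entrypoint=/bin/bash \
  your-build-image@sha256:<digest from the Build setup section>

# Inside the container, run the same commands as in the script section of your step
# (for example, your build and test commands from bitbucket-pipelines.yml)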

If you follow the steps from this article and the commands run without any issues locally, I suggest creating a ticket with the support team and providing the URL of the failed build along with the output of the same commands from your local troubleshooting, for further investigation.

You can create a ticket via https://support.atlassian.com/contact/#/; in "What can we help you with?" select "Technical issues and bugs" and then Bitbucket Cloud as the product. When you are asked to provide the workspace URL, please make sure you enter the URL of a workspace that is on a paid billing plan to proceed with ticket creation.

A support ticket you create can be accessed only by you and Atlassian staff, so anything you post there won't be publicly visible.

Kind regards,
Theodora

0 votes
ben_clifford November 21, 2024

We're getting the same thing with a Node app. Tests are now failing by running out of memory, and this happens even when we pull up an old, previously passing commit/PR.

Theodora Boudale
Atlassian Team
November 26, 2024

Hi Ben and welcome to the community.

The issues you see may be related to the migration of 1x/2x steps to a new runtime; we've had reports from other users about out-of-memory errors. You can check the following community article for further info as well as steps to mitigate this:
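In the meantime, as a general pointer rather than the specific steps from that article, the memory-related settings in bitbucket-pipelines.yml that are usually worth reviewing for out-of-memory failures look something like this (the script commands are only placeholders for your own):

definitions:
  services:
    docker:
      memory: 2048   # memory reserved for the Docker service, in MB

pipelines:
  default:
    - step:
        size: 2x     # gives the step 8 GB of memory instead of the default 4 GB
        script:
          - npm ci   # placeholder commands for a Node app
          - npm test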

Kind regards,
Theodora
