Our builds run a Bitbucket Pipelines pipeline that includes steps for unit and integration testing. If a test fails, we get a very nice report showing the failure. What's missing is the console log showing output from the application before the code hit the assert() that triggered the test failure. We need that earlier output to diagnose the problem.
We have some tests that work fine when running in a local development environment, where we can see the console output and also use a debugger.
For tests that fail only in the build pipeline, we obviously can't use a debugger, so the only option is to have hints printed out that tell us what is going on. But we can't find that console output. Where should we look?
Hi @Mark Linehan and welcome to the community.
I would suggest debugging this locally with Docker and seeing if you get the same output as in the Pipelines build or not.
This would help narrow down whether the issue is specific to Pipelines (e.g. it could be caused by some configuration in your local environment that doesn't exist in the Docker image you use).
The steps to debug this locally are the following:
1. Take a new clone of the repo on your machine (don't use an existing one)
2. Navigate to the directory of that new clone, and do a git reset --hard to the commit of a failed build with this issue
3. Afterwards, start a Docker container with the following command
docker run -it --volume=/Users/myUserName/code/my-repo:/localDebugRepo --workdir="/localDebugRepo" --memory=4g --memory-swap=4g --memory-swappiness=0 --cpus=4 --entrypoint=/bin/bash atlassian/default-image:3
Replace /Users/myUserName/code/my-repo in the command with the path of the clone you took in step 1.
Replace atlassian/default-image:3 in the command with the Docker image you are using in bitbucket-pipelines.yml for the step of the failed build.
In case your build uses services, you can check the Pipelines documentation on how to use services when testing locally.
4. After the Docker container starts, you are inside your working directory and you can start executing commands.
You can run individual commands from your bitbucket-pipelines.yml and check if you see the output that is missing from the Pipelines log or not.
You can also configure tools inside the container.
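The steps above can be sketched as a single shell helper. The repository URL, commit hash, and image name in the example usage are placeholders, not values taken from this thread; substitute your own failed build's details:

```shell
# debug_failed_build: fresh clone, reset to the failing commit, then start
# the pipeline image locally with the clone mounted (steps 1-3 above).
debug_failed_build() {
  local repo_url="$1" clone_dir="$2" commit="$3" image="$4"

  # Step 1: take a new clone (don't reuse an existing working copy)
  git clone "$repo_url" "$clone_dir"

  # Step 2: reset the clone to the commit of the failed build
  git -C "$clone_dir" reset --hard "$commit"

  # Step 3: start the container, matching the resource limits Pipelines
  # applies (4 GB memory, 4 CPUs), with a bash shell as the entrypoint
  docker run -it \
    --volume="$clone_dir":/localDebugRepo \
    --workdir=/localDebugRepo \
    --memory=4g --memory-swap=4g --memory-swappiness=0 --cpus=4 \
    --entrypoint=/bin/bash \
    "$image"
}

# Example usage (all four arguments are placeholders):
# debug_failed_build git@bitbucket.org:myworkspace/my-repo.git \
#   ~/code/my-repo-debug abc1234 atlassian/default-image:3
```

Once inside the container you can run the commands from your bitbucket-pipelines.yml one at a time and watch the output directly.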
Please feel free to let me know how it goes and if you have any questions.
Kind regards,
Theodora
I spent almost four hours trying to make this work, and I conclude that it is unrealistic. We have a complex pipeline that requires significant context to run. Normally this context is set up by a custom Docker container and 8 internal pipeline steps. I was able to mostly recreate the context, but I ran out of time.
Hi Mark,
I understand that the process of debugging this locally has been time-consuming.
I have created a ticket with the support team to investigate this issue further. You should have received an email with a link to the support ticket. Just in case you haven't received it, please feel free to let me know and I can post the ticket URL here. The ticket will be visible only to you and Atlassian staff, no one else can view its contents even if they have the URL.
I would like to ask if you could please leave a reply in the support ticket and provide:
If you have any questions, please feel free to let me know.
Kind regards,
Theodora
Here is the <url_removed> with failed tests. Check the integration tests section and notice this error:
"org.springframework.jdbc.BadSqlGrammarException: StatementCallback; bad SQL grammar [select distinct SPECIES_NAME from ANIMAL_PROTOCOL_ITEMS where TENANTID = '8f9cdc53-2d86-426f-85e0-37151b982a72' order by SPECIES_NAME ASC]; nested exception is net.snowflake.client.jdbc.SnowflakeSQLException: SQL compilation error:
Object 'ANIMAL_PROTOCOL_ITEMS' does not exist or not authorized."
Since the SQL is known to be good (it is very simple and works in local testing), the likely problem is that the URL, user ID, and password for the JDBC access are not available in the integration test environment. These values are passed to the code via environment variables defined in the repository settings. This error does not occur in local testing, where the environment variables are set locally.
The code always prints the URL with this statement; the userid and password are not printed:
log.info("DARE/Snowflake url is {}", url);
If I could see this message in the console log, I could easily verify whether the environment variables that are set via the repository settings are coming through to the integration tests in the pipeline.
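One way to check this without relying on the application log is a small shell check at the top of the failing pipeline step that reports whether each variable is set without echoing its value (so no credentials leak into the build log). The variable names below are placeholders, not the actual names from the repository settings:

```shell
# check_env: report whether each named environment variable is set,
# without printing its value (avoids leaking credentials in build logs).
check_env() {
  local v
  for v in "$@"; do
    # ${!v} is bash indirect expansion: the value of the variable named $v
    if [ -n "${!v:-}" ]; then
      echo "$v is set"
    else
      echo "$v is MISSING"
    fi
  done
}

# Example: run at the start of the failing step (placeholder names):
check_env SNOWFLAKE_URL SNOWFLAKE_USER SNOWFLAKE_PASSWORD
```

If a variable reports MISSING in the Pipelines log but is set locally, that would confirm the repository settings are not reaching the integration test step.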
Hi Mark,
Thank you for the info.
I have added this along with the URL of the build to the support ticket I created for you, so that the engineer who will be working on your ticket can check.
Just a heads up, I removed the URL of your build from your post to protect your privacy.
Kind regards,
Theodora