Hi all,
We have a repository that contains a few related but distinct projects in multiple subdirectories (e.g. a directory for some ETL-related work based on Scala, another for a webapp backend based on Kotlin, another for the frontend using webpack).
Is it possible to configure multiple pipelines for each of the subdirectories? I took a wild guess and created a `bitbucket-pipelines.yml` file in one of the subdirectories, but it didn't seem to create a pipeline.
Is there a recommended approach for doing this? For similar use cases on Jenkins in the past, when creating a pipeline you have the option of specifying where the Jenkinsfile you would like to use is located (i.e. in a subdirectory), so you can create multiple jobs. Is there something equivalent here?
Thanks!
Jon
What you are looking for is the `condition` property with `changesets`:
https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/#condition
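For illustration, a minimal sketch of changeset-conditioned steps (the `backend/` and `frontend/` paths and build commands are placeholders, not from the original question):

```yaml
pipelines:
  default:
    - step:
        name: Build backend
        # Run this step only when something under backend/ changed
        condition:
          changesets:
            includePaths:
              - "backend/**"
        script:
          - cd backend && ./build.sh
    - step:
        name: Build frontend
        condition:
          changesets:
            includePaths:
              - "frontend/**"
        script:
          - cd frontend && npm ci && npm run build
```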
We also have a monorepo setup with multiple projects and subprojects, where the build (and deployment) is triggered depending on which files have been modified.
We use an automatic tagging approach: if a series of tests passes, we tag the build to be deployed or rolled back.
Simplified example: backend.staging.[buildnumber] and frontend.staging.[buildnumber]
We use the same image from one environment to another (test, preprod, prod, userA, etc.) as we promote them inside our Kubernetes clusters. All of that is done continuously.
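As a rough sketch of what the tag-triggered promotion can look like (the tag pattern matches the example above; `promote.sh` is a placeholder for the actual deployment tooling):

```yaml
pipelines:
  tags:
    'backend.staging.*':
      - step:
          name: Promote backend to staging
          deployment: staging
          script:
            # The image was already built and pushed in an earlier pipeline;
            # this step only promotes it inside the Kubernetes cluster.
            - ./promote.sh backend staging "$BITBUCKET_TAG"
```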
While it works really well, there are some things to consider improving:
Composition would be huge. We are coming from a GitLab environment and regularly use a library of gitlab-ci files to standardize our builds. While we could handle a lot of that by writing custom pipes, doing so reduces transparency in the build process and adds a lot of overhead, plus additional repos and pipelines that have to be created to accomplish the same thing.
Hi @Jon Boerner
It's not possible to configure multiple pipelines for different subdirectories, but there are a few workarounds that might work for your use case:
1. Split each project out into its own repository, each with its own `bitbucket-pipelines.yml`.
2. Use a separate branch for each project and define a branch-specific pipeline for each one.
3. Use a script at the start of your pipeline to determine which project has changed and run the corresponding build script.
Regards
Sam
I like option #3, but am curious: since the "pipeline", i.e. Bitbucket's execution of my build script as a step, needs to run in order to check whether to actually build the subproject, isn't the Bitbucket pipeline already running? In other words, I still have to run at least a portion of the pipeline to determine whether running the whole pipeline on a particular subproject is necessary. Is that understanding correct? Lastly, that means the check itself consumes build minutes, whether or not a full build follows. Is that correct?
I imagine you'd always want a pipeline to run; the first part of the script just determines which build script to execute. For example, you might have a project laid out like this:
/
|-- bitbucket-pipelines.yml   <- determine which build script to run in here
|
|-- Project A/
|   |-- build.sh
|
|-- Project B/
|   |-- build.sh
In other words, there isn't a pipeline per project, just a single pipeline that determines which build script to run, as sketched below. It's an imperfect workaround, but possibly suitable for some users. I personally prefer option #2, as it allows you to track a subproject's build status by branch.
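A minimal version of that dispatcher might look like this (directory names `project-a`/`project-b` and the diff range are illustrative only):

```yaml
pipelines:
  default:
    - step:
        name: Build changed projects
        script:
          # List the files changed by the commit being built
          - CHANGED=$(git diff --name-only HEAD~1 HEAD)
          # Run a project's build script only if something under it changed
          - if echo "$CHANGED" | grep -q '^project-a/'; then (cd project-a && ./build.sh); fi
          - if echo "$CHANGED" | grep -q '^project-b/'; then (cd project-b && ./build.sh); fi
```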
Makes sense. I'm just confirming that at the point where "determine which build script to run in here" executes, the pipeline is already running; to your point, it's then only a matter of what else to execute, i.e. Project A's build.sh or Project B's. In other words, build minutes start accruing the moment the pipeline starts, regardless of whether anything is actually built.
That's correct
@StannousBaratheon if you prefer option #2, could you give more clarification? How do you manage deployment to the different environments? Does it mean that if we have 3 services and 3 environments, we need 9 branches, assuming we want automatic deployment from a branch to a cloud environment?
@Chandara Chea A single pipeline can deploy to multiple environments. For example, if you have 3 environments, your pipeline might consist of 4 steps:
1. Build the service
2. Deploy to the test environment
3. Deploy to the staging environment
4. Deploy to the production environment
With this in mind, if you have 3 services in a mono-repo that you want to build and deploy separately, you could have 3 branches (one for each service) and a branch pipeline for each branch that performs the build and deploys to all 3 environments as described above.
The documentation on Bitbucket deployments explains how to configure deployment environments for your pipelines: https://support.atlassian.com/bitbucket-cloud/docs/set-up-bitbucket-deployments/
Automatic vs manual deployments are determined by the step's `trigger` property. Steps run automatically by default. If you wish to perform a manual deployment, simply add `trigger: manual` to the relevant deployment step, as in the sketch below.
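Putting that together, a sketch for one service's branch pipeline (the branch name, scripts, and environment names are placeholders):

```yaml
pipelines:
  branches:
    service-a:
      - step:
          name: Build service A
          script:
            - ./service-a/build.sh
      - step:
          name: Deploy to test
          deployment: test
          script:
            - ./service-a/deploy.sh test
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            - ./service-a/deploy.sh staging
      - step:
          name: Deploy to production
          deployment: production
          # Production is gated behind a manual trigger
          trigger: manual
          script:
            - ./service-a/deploy.sh production
```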
Bitbucket Pipelines recently released a new feature called conditional steps, which provides another option for builds in mono-repos. It allows you to specify whether a step should run based on a changeset pattern. With this approach, a single main branch can be used for all services: the pipeline contains a conditional step for each service that builds the service whenever a file in that service is changed. Please see the following blog post for more information about conditional steps: https://bitbucket.org/blog/conditional-steps-and-improvements-to-logs-in-bitbucket-pipelines
I wouldn't necessarily advocate one approach over the other. It's really incumbent on the project teams to understand the pros and cons of each and decide which works best for them.
We were using a monorepo with Gradle as the build tool and Bitbucket Pipelines as the CI tool.
In short, there is one "main" pipeline defined for the master branch, plus multiple custom pipelines, one per service (a separate project in some folder). The main pipeline starts a shell script that resolves which services (projects) were changed and triggers the corresponding custom pipelines via the REST API.
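A sketch of that trigger step (the credentials `$BB_USER`/`$BB_APP_PASSWORD` would be secured repository variables, and the custom pipeline name `build-service-a` is a placeholder; our actual script loops over whatever changed):

```yaml
pipelines:
  branches:
    master:
      - step:
          name: Trigger pipelines for changed services
          script:
            # A script (not shown) resolves which services changed; for each
            # one, start its custom pipeline through the Pipelines REST API.
            - >-
              curl -s -X POST
              -u "$BB_USER:$BB_APP_PASSWORD"
              -H "Content-Type: application/json"
              "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pipelines/"
              -d '{"target": {"type": "pipeline_ref_target",
              "ref_type": "branch", "ref_name": "master",
              "selector": {"type": "custom", "pattern": "build-service-a"}}}'
```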
See the setup showcase here: