With the new 4x and 8x size capabilities, how do I use the maximum memory and CPU to accommodate a large project's build requirements? Is there a way to explicitly define memory and CPU in bitbucket-pipelines.yml? Please find my current YAML below:
image: atlassian/default-image:2

pipelines:
  tags:
    '*[0-9].*[0-9].*[0-9]-*':
      - step:
          name: dev-release
          image: atlassian/default-image:2
          deployment: staging
          clone:
            enabled: true
          services:
            - docker
          size: 8x
          caches:
            - maven
            - gradle
          script:
            - docker login --username $HUB_USERNAME --password $HUB_PASSWORD
            - VERSION=${BITBUCKET_TAG}
            # build the Docker image (this will use the Dockerfile in the root of the repo)
            - docker build -f Dockerfile.pipeline -t myrepo/jupyter-notebook:${BITBUCKET_TAG} . --build-arg build_version=$VERSION
            # push the new Docker image to the Docker registry
            - docker push myrepo/jupyter-notebook:${BITBUCKET_TAG}

definitions:
  services:
    docker:
      memory: 4096
Hi mayank,
Memory allocation is customizable, whereas CPU remains a fixed amount determined by the size option you select.
Allocating memory to service containers works the same way as before; we have documentation that covers this.
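To illustrate, here is a minimal sketch of pairing a larger step size with an increased Docker service allocation. The step name, image tag, and the 8192 MB figure are assumptions for illustration; the exact total memory pool for each size tier is defined in the Bitbucket Pipelines documentation, so check the current limits before committing numbers:

```yaml
# Sketch: an 8x step with a larger memory reservation for the Docker service.
# Values below are illustrative assumptions, not documented limits.
pipelines:
  default:
    - step:
        name: big-build          # hypothetical step name
        size: 8x                 # CPU tier is fixed; the memory pool scales with size
        services:
          - docker
        script:
          - docker build -t myrepo/jupyter-notebook:latest .

definitions:
  services:
    docker:
      memory: 8192               # MB reserved for the Docker-in-Docker service;
                                 # the remainder of the step's pool goes to the build
```

Raising `memory:` under `definitions.services.docker` is how you shift more of the step's total memory toward Docker-heavy workloads such as image builds.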
Cheers!
- Ben (Bitbucket Cloud Support)