Hello,
I'm running into increasingly frequent build failures in Bitbucket Pipelines. This used to happen only occasionally, but now it happens on nearly every build.
The issue always occurs on:
brunch build --production
Error is:
Container 'docker' exceeded memory limit.
Here's my bitbucket-pipelines.yml for reference:
pipelines:
  default:
    - step:
        name: Deploy to ECS
        deployment: production
        image: atlassian/pipelines-awscli
        size: 2x
        services:
          - docker
        script:
          - export AWS_DEFAULT_REGION=${AWS_REGION}
          - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
          - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
          - export IMAGE_NAME="${BITBUCKET_REPO_SLUG}:${BITBUCKET_BUILD_NUMBER}"
          - export ECS_CLUSTER_NAME=${BITBUCKET_REPO_SLUG}
          - export ECS_SERVICE_NAME=${BITBUCKET_REPO_SLUG}
          - export ECS_TASK_NAME=${BITBUCKET_REPO_SLUG}
          - >-
            docker build -t "${IMAGE_NAME}"
            --build-arg MIX_ENV=${MIX_ENV}
            --build-arg ... more args ...
            .
          - ECR_REPO="us-east-1.amazonaws.com"/"${IMAGE_NAME}"
          - echo ${ECR_REPO}
          - docker tag "${IMAGE_NAME}" "${ECR_REPO}"
          - eval $(aws ecr get-login --no-include-email --region ${AWS_REGION})
          - docker push "us-east-1.amazonaws.com"/"${IMAGE_NAME}"
          - aws ecs list-clusters | grep "${ECS_CLUSTER_NAME}" || aws ecs create-cluster --cluster-name "${ECS_CLUSTER_NAME}"
          - >-
            export TASK_VERSION=$(aws ecs register-task-definition
            --family "${ECS_TASK_NAME}"
            --container-definitions
            '[{"name":"'"${BITBUCKET_REPO_SLUG}"'","image":"'"${ECR_REPO}"'","memory":512,"entryPoint":[],"portMappings":[{"hostPort":80,"protocol":"tcp","containerPort":4000}]}]'
            | jq --raw-output '.taskDefinition.revision')
          - echo "Registered ECS Task Definition:" "${TASK_VERSION}"
          - aws ecs list-services --cluster "${ECS_CLUSTER_NAME}" | grep "${ECS_SERVICE_NAME}" || aws ecs create-service --service-name "${ECS_SERVICE_NAME}" --cluster "${ECS_CLUSTER_NAME}" --task-definition "${ECS_TASK_NAME}" --desired-count 1
          - aws ecs update-service --cluster "${ECS_CLUSTER_NAME}" --service "${ECS_SERVICE_NAME}" --task-definition "${ECS_TASK_NAME}:${TASK_VERSION}"
          - export TASK_ARN=$(aws ecs list-tasks --cluster "${ECS_CLUSTER_NAME}" | jq --raw-output '.taskArns[0]')
          - aws ecs stop-task --task $TASK_ARN --cluster "${ECS_CLUSTER_NAME}"
          - aws ecs run-task --cluster "${ECS_CLUSTER_NAME}" --task-definition "${ECS_CLUSTER_NAME}":"${TASK_VERSION}"
definitions:
  services:
    docker:
      image: bitwalker/alpine-elixir-phoenix:latest
Any help or insight is greatly appreciated!
Kevin
Hi @kevin-stueber,
Try bumping the memory that is allocated to the Docker service: https://confluence.atlassian.com/bitbucket/run-docker-commands-in-bitbucket-pipelines-879254331.html#RunDockercommandsinBitbucketPipelines-Dockermemorylimits
So specifically, something like this:
definitions:
  services:
    docker:
      memory: 2048
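For reference, combined with the definitions block you already have, it would look roughly like this. 2048 is just an example value; if I remember the linked page correctly, the docker service defaults to 1024 MB, and since your step is size: 2x you have room to raise it further (the exact per-size limits are listed on that page):

definitions:
  services:
    docker:
      image: bitwalker/alpine-elixir-phoenix:latest  # kept as-is from your snippet
      memory: 2048  # example value; raise it if the brunch build still hits the limit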
Thanks, @Jeroen De Raedt
That solved the issue. I appreciate the help!
Thanks, @Jeroen De Raedt
Appreciate your help, the issue is fixed.
Thank You! @Jeroen De Raedt
I'm having the same issue...
Dear Atlassian, what is the point of using a cloud build server if it has the same limitations as an old PC?
I think you mean "more limitations than an old PC"?
I've also faced the same issue:
Container 'docker' exceeded memory limit.
I added a memory parameter to my docker-compose.yml file, and now docker-compose up -d fails with:
The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.docker: 'memory'
Here is my docker-compose.yml file:
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
  firefox:
    image: selenium/node-firefox
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
  docker:
    memory: 2048
How can I fix this?
You're showing a docker-compose file, which is different from a bitbucket-pipelines.yml file. They both run Docker containers, but in different ways. As you have seen, you cannot specify memory that way in a docker-compose file.
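To make that concrete (hedging a bit, since I can't see your full pipeline): the memory bump belongs in bitbucket-pipelines.yml, because it is the Pipelines docker service, not one of your Compose services, that is exceeding its limit. Roughly:

definitions:
  services:
    docker:
      memory: 3072  # example value; the docker service defaults to 1024 MB

If you separately want to cap memory for one of the Compose services themselves, Compose has its own syntax for that. In a version "3" file it would look roughly like the hypothetical example below (as far as I know, the limit only takes effect under swarm or with newer docker compose versions):

services:
  chrome:
    image: selenium/node-chrome
    deploy:
      resources:
        limits:
          memory: 1024M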