Hi all,
I have a question regarding memory allocation for parallel steps.
Our setup looks like this:
definitions:
  steps:
    - step: &one
        name: "Step one"
        image: <some self hosted image>
        size: 2x
        script:
          - while true; do date && echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && echo "" && sleep 30; done >> memoryLogs.txt &
          # taken from https://support.atlassian.com/bitbucket-cloud/docs/generate-support-logs/#Container-memory-monitoring
          # some more commands
        caches: # caches being used
        artifacts:
          - memoryLogs.txt
          # some other artifacts
    - step: &two
        # similar to &one with some other scripts being executed
    - step: &three
        # ...
    # ...
pipelines:
  custom:
    Run-Numbered-Steps:
      - parallel:
          - step: *one
          - step: *two
          - step: *three
          # some more parallel steps, 12 in total
      - step:
          # some other stuff that will run sequentially
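For readability, the background monitoring one-liner from the script above can also be spelled out as a small script (a sketch: the cgroup v1 file only exists inside the pipeline's build container, hence the guard, and `log_sample` is a helper name I made up):

```shell
#!/bin/sh
# Same cgroup v1 file the one-liner reads; it is only present inside the
# pipeline's build container, hence the readability guard below.
USAGE_FILE=/sys/fs/cgroup/memory/memory.memsw.usage_in_bytes

# log_sample is a hypothetical helper: prints one timestamped usage sample.
log_sample() {
  date
  echo "Memory usage in megabytes:"
  if [ -r "$USAGE_FILE" ]; then
    echo $(( $(cat "$USAGE_FILE") / 1048576 ))
  else
    echo "n/a (cgroup v1 memory file not found)"
  fi
  echo ""
}

# In the pipeline step this would loop in the background, as in the YAML:
# while true; do log_sample; sleep 30; done >> memoryLogs.txt &
log_sample >> memoryLogs.txt
```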
Mon Jun 20 12:04:51 UTC 2022
Memory usage in megabytes:
3903
Mon Jun 20 12:05:21 UTC 2022
Memory usage in megabytes:
3896
Mon Jun 20 12:05:51 UTC 2022
Memory usage in megabytes:
3917
Mon Jun 20 12:06:21 UTC 2022
Memory usage in megabytes:
3754
Mon Jun 20 12:06:51 UTC 2022
Memory usage in megabytes:
3754
Mon Jun 20 12:07:21 UTC 2022
Memory usage in megabytes:
3735
Mon Jun 20 12:07:51 UTC 2022
Memory usage in megabytes:
3740
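As a side note, the peak usage can be pulled out of such a log by filtering the purely numeric lines (a sketch; the sample values below are copied from the log excerpt, and in a real run you would point the commands at the memoryLogs.txt artifact instead of a sample file):

```shell
# Build a small sample file in the same format as memoryLogs.txt.
printf '%s\n' \
  'Mon Jun 20 12:04:51 UTC 2022' 'Memory usage in megabytes:' '3903' '' \
  'Mon Jun 20 12:05:21 UTC 2022' 'Memory usage in megabytes:' '3896' '' \
  > sample.txt

# Keep only the purely numeric lines and print the largest one.
grep -E '^[0-9]+$' sample.txt | sort -n | tail -1   # prints 3903
```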
Now, to my actual question: do these containers simply not require up to 7 GB of memory to run their script commands (hence the log values <= 4 GB), or are the containers somehow only getting up to 4 GB of memory, contrary to the size: 2x definition? Or is there an entirely different issue at hand?
We initially tried to increase memory for some of these steps because the scripts running inside them (end-to-end tests) give weird results and/or fail randomly, essentially becoming flaky, while they run fine on local machines. On local machines, however, they are granted more than 3 GB of memory, which is why we also tried to increase memory for the test runs in the pipeline.
Sadly, the size property and the memory allocation for containers in the context of parallel steps are not well documented.
Thanks in advance - help and/or a pointer to the proper documentation would be highly appreciated.
Best regards
Deniz
Hello @Cengiz Deniz,
Welcome to Atlassian Community!
Your understanding is correct: each step, be it a normal or a parallel step, will spin up its own container with its own set of resources - 8 GB for 2x steps and 4 GB for 1x steps.
The command you are currently using actually prints the memory currently in use, not the memory available to the container.
So in this case, not seeing values greater than 4 GB in your log just means that your current script does not require more than 4 GB of memory. It does get very close to that value at its peak (3903 MB in your logs), so if you limit the step to 4 GB (1x) you might eventually run into out-of-memory errors in Pipelines. When the build/service container runs out of memory, the pipeline will fail and explicitly say that it is a memory-related issue.
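To see both sides at once, the memory limit and the current usage can be printed together (a sketch: cgroup v1 paths as used in Pipelines build containers, and `bytes_to_mb` is a helper name I made up):

```shell
#!/bin/sh
# bytes_to_mb is a hypothetical helper: whole megabytes from a byte count.
bytes_to_mb() { echo $(( $1 / 1048576 )); }

# cgroup v1 files: the hard memory limit and the current memory+swap usage.
# Both only exist where cgroup v1 is mounted, hence the readability guard.
for f in /sys/fs/cgroup/memory/memory.limit_in_bytes \
         /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes; do
  if [ -r "$f" ]; then
    echo "$f: $(bytes_to_mb "$(cat "$f")") MB"
  fi
done
```

On a 2x step the limit file should report a value on the order of 8192 MB, matching the 8 GB allocation described above.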
For more details about the Pipelines memory allocation, you can refer to the following documentation:
Thank you @Cengiz Deniz.
Kind regards,
Patrik S