We deploy from BitBucket Pipelines into a Docker Swarm Mode cluster. This requires setting up a remote SSH context to one of the cluster manager nodes, and executing the docker commands in that context. Essentially the following:
# Preserve the docker host, then unset it so it doesn't interfere
# with the context
export PREVIOUS_DOCKER_HOST=$DOCKER_HOST
unset DOCKER_HOST
# Create and use the remote context
docker context create remote --docker "host=ssh://$DEPLOYMENT_USER@$DEPLOYMENT_HOST"
docker context use remote
# Log into the registry
echo "$DOCKER_HUB_PASSWORD" | docker login --username "$DOCKER_HUB_USER" --password-stdin
# Deploy the service
docker stack deploy \
--with-registry-auth \
--prune \
--compose-file docker-compose.production.yaml \
$BITBUCKET_REPO_SLUG
# Restore the pipeline docker host, in case we need it later on
export DOCKER_HOST="$PREVIOUS_DOCKER_HOST"
This approach has several drawbacks, the SSH indirection, the extra deployment credentials it requires, and the context juggling among them.
Instead, we'd like a self-hosted runner inside the cluster, so the deployment step runs next to the target Docker daemon and can deploy services to the local Docker instance directly.
That would eliminate all of those drawbacks.
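For reference, the self-hosted Linux Docker runner is started on the node with a docker run command along these lines (the UUIDs and OAuth credentials below are placeholders; the real values come from the runner setup dialog in Bitbucket). Note that it already mounts the host's Docker socket, but only into the runner container itself, not into the step containers it spawns:

```shell
# Start the Bitbucket Pipelines runner on a Swarm manager node.
# ACCOUNT_UUID, RUNNER_UUID, OAUTH_CLIENT_ID and OAUTH_CLIENT_SECRET
# are placeholders - substitute the values from the runner setup dialog.
docker container run -it -d \
  -v /tmp:/tmp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -e ACCOUNT_UUID="{00000000-0000-0000-0000-000000000000}" \
  -e RUNNER_UUID="{00000000-0000-0000-0000-000000000000}" \
  -e OAUTH_CLIENT_ID="placeholder-client-id" \
  -e OAUTH_CLIENT_SECRET="placeholder-client-secret" \
  -e WORKING_DIRECTORY=/tmp \
  --name swarm-deploy-runner \
  docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
```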
What sounds so neat in theory doesn't work in practice, because I can't get the step executed inside the runner to talk to the Docker daemon on my cluster node. I would need to either mount the Docker socket from the host into the step container, or resolve the host's IP address to talk to its TCP socket.
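The second option could be sketched like this, assuming the step container's default gateway is the runner host and that the host's daemon listens on TCP port 2375 (it does not by default, and exposing it without TLS is exactly the attack surface the docker info warning further down describes). The gateway address can be read from /proc/net/route:

```shell
# Sketch: point DOCKER_HOST at the step container's default gateway,
# on the assumption that the gateway is the runner host and its daemon
# listens on tcp://0.0.0.0:2375 (an assumption, not a documented setup).
gateway_hex=$(awk '$2 == "00000000" { print $3; exit }' /proc/net/route)
# /proc/net/route stores the gateway as little-endian hex; convert it
# to a dotted quad by reading the four bytes in reverse order.
gateway_ip=$(printf '%d.%d.%d.%d' \
  "$((0x${gateway_hex:6:2}))" "$((0x${gateway_hex:4:2}))" \
  "$((0x${gateway_hex:2:2}))" "$((0x${gateway_hex:0:2}))")
export DOCKER_HOST="tcp://$gateway_ip:2375"
echo "$DOCKER_HOST"
```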
Is there a way to talk to the docker daemon on the runner host from within a build step?
Edit: For reference, here's the output of docker info inside the build step (emphasis mine):
+ DOCKER_HOST=$BITBUCKET_DOCKER_HOST_INTERNAL docker info
Client:
 Context: default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.15
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Authorization: pipelines
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
 runc version: v1.1.1-0-g52de29d7
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
  userns
 Kernel Version: 5.4.0-105-generic
 Operating System: Alpine Linux v3.15 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.746GiB
 Name: 086d6e21a117
 ID: WVBT:EBC5:HCCO:SKNB:P664:IUDG:AU2M:XDQE:BXR2:22E4:AXMH:HT3G
 Docker Root Dir: /var/lib/docker/165536.165536
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  http://localhost:5000/
 Live Restore Enabled: false
 Product License: Community Engine
WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/go/attack-surface/
WARNING: No swap limit support
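Since the deployment target is a Swarm manager, the Swarm state is a quick way to tell which daemon a step is actually talking to: the cluster node would report "active", while the output above reports Swarm: inactive (and Authorization: pipelines), i.e. this is Pipelines' own Docker-in-Docker daemon, not the host's. A one-liner to check:

```shell
# Print the Swarm state of whatever daemon DOCKER_HOST points at.
# "active" would mean the cluster node's daemon; the daemon the build
# step sees reports "inactive".
DOCKER_HOST="$BITBUCKET_DOCKER_HOST_INTERNAL" docker info --format '{{.Swarm.LocalNodeState}}'
```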