I am trying to use a self-hosted runner to deploy application changes to a Kubernetes cluster.
The runner is the Linux Docker runner, running inside the Kubernetes cluster itself; it was set up following this guide.
When I try to run a simple pipeline that executes 'kubectl get pods -A', the pipeline fails with the errors below.
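For reference, the step looks roughly like this (a sketch of my bitbucket-pipelines.yml; the pipe version matches the log below, but the variable names are from the kubectl-run pipe's README as I recall them, so double-check them there):

```yaml
pipelines:
  default:
    - step:
        name: List pods
        runs-on:            # route the step to the self-hosted runner
          - self.hosted
          - linux
        script:
          - pipe: atlassian/kubectl-run:3.8.0
            variables:
              KUBE_CONFIG: $KUBE_CONFIG   # base64-encoded kubeconfig, stored as a secured repository variable
              KUBECTL_COMMAND: 'get'
              KUBECTL_ARGS: ['pods', '-A']
```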
I initially thought this was because the pod running the runner and the Docker-in-Docker containers didn't have a reference for the devk8sm00 server specified in the kubeconfig. So I edited the pod to add a hosts file entry pointing devk8sm00 at the k8s cluster's LB VIP.
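The edit was roughly this hostAliases fragment on the runner pod spec (a sketch; 10.0.0.50 is a placeholder for the actual VIP):

```yaml
# Fragment of the runner pod/Deployment spec.
spec:
  hostAliases:
    - ip: "10.0.0.50"       # placeholder: the LB VIP in front of the API servers
      hostnames:
        - "devk8sm00"
```

Note that hostAliases only patches /etc/hosts for the pod's own containers; containers spawned by the Docker-in-Docker daemon for pipeline steps get their own /etc/hosts, which may be why this edit didn't help.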
Am I missing something? How can I get this runner, running inside the k8s cluster, to run commands against that same cluster?
0b57ecec8b3f: Pull complete
dc70d1c0613d: Pull complete
Digest: sha256:2c607a7cfb7cf0f4fd7bf95f9d70fdf910b5f2042ffe5a9576da8a90e5368b09
Status: Downloaded newer image for bitbucketpipelines/kubectl-run:3.8.0
INFO: Configuring kubeconfig...
E0930 21:16:43.341822 7 memcache.go:265] couldn't get current server API group list: Get "https://devk8sm00:6443/api?timeout=32s": dial tcp: lookup devk8sm00 on 10.96.0.10:53: server misbehaving
E0930 21:16:43.347571 7 memcache.go:265] couldn't get current server API group list: Get "https://devk8sm00:6443/api?timeout=32s": dial tcp: lookup devk8sm00 on 10.96.0.10:53: server misbehaving
E0930 21:16:43.353729 7 memcache.go:265] couldn't get current server API group list: Get "https://devk8sm00:6443/api?timeout=32s": dial tcp: lookup devk8sm00 on 10.96.0.10:53: server misbehaving
E0930 21:16:43.358067 7 memcache.go:265] couldn't get current server API group list: Get "https://devk8sm00:6443/api?timeout=32s": dial tcp: lookup devk8sm00 on 10.96.0.10:53: server misbehaving
E0930 21:16:43.362971 7 memcache.go:265] couldn't get current server API group list: Get "https://devk8sm00:6443/api?timeout=32s": dial tcp: lookup devk8sm00 on 10.96.0.10:53: server misbehaving
Unable to connect to the server: dial tcp: lookup devk8sm00 on 10.96.0.10:53: server misbehaving
✖ kubectl get pods -A failed.
Figured it out: it was a DNS issue. devk8sm00 is the load balancer VIP for my cluster, and it had no DNS entry, so the pod couldn't resolve it. Adding the record fixed the issue, and I'm up and running now.
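For anyone hitting the same thing: the fix is making devk8sm00 resolvable from the pods. One in-cluster way to do that (a sketch, assuming CoreDNS is the cluster DNS at 10.96.0.10 as the log suggests; 10.0.0.50 is a placeholder for the real VIP) is a hosts block in the CoreDNS Corefile:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        hosts {
            10.0.0.50 devk8sm00   # map the API server LB name to its VIP
            fallthrough            # pass other names on to the plugins below
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```

Equivalently, add the record to whatever upstream DNS server CoreDNS forwards to.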