Unable to run Hyperkube (Kubernetes) locally via Docker

I have been following this tutorial to run a Kubernetes cluster locally in a Docker container. When I run kubectl get nodes, I get:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

I noticed that some of the containers started by the kubelet, such as the apiserver, have exited. This is the output of docker ps -a:

CONTAINER ID        IMAGE                                             COMMAND                  CREATED             STATUS                       PORTS               NAMES
778bc9a9a93c        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube apiserver"   3 seconds ago       Exited (255) 2 seconds ago                       k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_de6ff8f9
12dd99c83c34        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/setup-files.sh IP:1"   3 seconds ago       Exited (7) 2 seconds ago                         k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_3283400b
ef7383fa9203        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/setup-files.sh IP:1"   4 seconds ago       Exited (7) 4 seconds ago                         k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87beca1b
b3896f4896b1        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube scheduler"   5 seconds ago       Up 4 seconds                                     k8s_scheduler.fc12fcbe_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_16584c07
e9b1bc5aeeaa        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube apiserver"   5 seconds ago       Exited (255) 4 seconds ago                       k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87e1ad70
c81dbe181afa        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube controlle"   5 seconds ago       Up 4 seconds                                     k8s_controller-manager.70414b65_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_1e30d242
63dfa0fb0881        gcr.io/google_containers/etcd:2.2.1               "/usr/local/bin/etcd "   5 seconds ago       Up 4 seconds                                     k8s_etcd.7e452b0b_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_94a862fa
6bb963ef351d        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube proxy --m"   5 seconds ago       Up 4 seconds                                     k8s_kube-proxy.9a9f4853_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_6098241c
311e2788de45        gcr.io/google_containers/pause:2.0                "/pause"                 5 seconds ago       Up 4 seconds                                     k8s_POD.6059dfa2_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_79e4e3e8
3b3cf3ada645        gcr.io/google_containers/pause:2.0                "/pause"                 5 seconds ago       Up 4 seconds                                     k8s_POD.6059dfa2_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_9eb869b9
aa7efd2154fb        gcr.io/google_containers/pause:2.0                "/pause"                 5 seconds ago       Up 5 seconds                                     k8s_POD.6059dfa2_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_b66baa5f
c380b4a9004e        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube kubelet -"   12 seconds ago      Up 12 seconds                                    kubelet
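To dig into why a particular container exited, it can help to read its recorded exit code and logs directly. A minimal sketch using standard Docker commands (the container ID below is the exited apiserver from the listing above; substitute your own IDs):

# Show the exit code Docker recorded for the exited apiserver container
docker inspect --format '{{.State.ExitCode}}' 778bc9a9a93c

# Print its logs to see the actual error message that caused it to exit
docker logs 778bc9a9a93c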

Information

  • Docker version: 1.10.3

  • Kubernetes version: 1.2.2

  • Operating system: Ubuntu 14.04

Docker run command

docker run --volume=/:/rootfs:ro --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:rw --volume=/var/lib/kubelet/:/var/lib/kubelet:rw --volume=/var/run:/var/run:rw --net=host --pid=host --privileged=true --name=kubelet -d gcr.io/google_containers/hyperkube-amd64:v1.2.2 /hyperkube kubelet --containerized --hostname-override="172.20.34.112" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --allow-privileged=true --v=2

kubelet container logs

I0422 11:04:45.158370     541 plugins.go:56] Registering credential provider: .dockercfg
I0422 11:05:25.199632     541 plugins.go:291] Loaded volume plugin "kubernetes.io/aws-ebs"
I0422 11:05:25.199788     541 plugins.go:291] Loaded volume plugin "kubernetes.io/empty-dir"
I0422 11:05:25.199863     541 plugins.go:291] Loaded volume plugin "kubernetes.io/gce-pd"
I0422 11:05:25.199903     541 plugins.go:291] Loaded volume plugin "kubernetes.io/git-repo"
I0422 11:05:25.199948     541 plugins.go:291] Loaded volume plugin "kubernetes.io/host-path"
I0422 11:05:25.199982     541 plugins.go:291] Loaded volume plugin "kubernetes.io/nfs"
I0422 11:05:25.200023     541 plugins.go:291] Loaded volume plugin "kubernetes.io/secret"
I0422 11:05:25.200059     541 plugins.go:291] Loaded volume plugin "kubernetes.io/iscsi"
I0422 11:05:25.200115     541 plugins.go:291] Loaded volume plugin "kubernetes.io/glusterfs"
I0422 11:05:25.200170     541 plugins.go:291] Loaded volume plugin "kubernetes.io/persistent-claim"
I0422 11:05:25.200205     541 plugins.go:291] Loaded volume plugin "kubernetes.io/rbd"
I0422 11:05:25.200249     541 plugins.go:291] Loaded volume plugin "kubernetes.io/cinder"
I0422 11:05:25.200289     541 plugins.go:291] Loaded volume plugin "kubernetes.io/cephfs"
I0422 11:05:25.200340     541 plugins.go:291] Loaded volume plugin "kubernetes.io/downward-api"
I0422 11:05:25.200382     541 plugins.go:291] Loaded volume plugin "kubernetes.io/fc"
I0422 11:05:25.200430     541 plugins.go:291] Loaded volume plugin "kubernetes.io/flocker"
I0422 11:05:25.200471     541 plugins.go:291] Loaded volume plugin "kubernetes.io/azure-file"
I0422 11:05:25.200519     541 plugins.go:291] Loaded volume plugin "kubernetes.io/configmap"
I0422 11:05:25.200601     541 server.go:645] Started kubelet
E0422 11:05:25.200796     541 kubelet.go:956] Image garbage collection failed: unable to find data for container /
I0422 11:05:25.200843     541 server.go:126] Starting to listen read-only on 0.0.0.0:10255
I0422 11:05:25.201531     541 server.go:109] Starting to listen on 0.0.0.0:10250
E0422 11:05:25.201684     541 event.go:202] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
I0422 11:05:25.206656     541 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0422 11:05:25.206714     541 manager.go:123] Starting to sync pod status with apiserver
I0422 11:05:25.206888     541 kubelet.go:2356] Starting kubelet main sync loop.
I0422 11:05:25.207036     541 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
I0422 11:05:25.333829     541 factory.go:233] Registering Docker factory
I0422 11:05:25.336920     541 factory.go:97] Registering Raw factory
I0422 11:05:25.392065     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:25.392148     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:25.398401     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:25.492441     541 manager.go:1003] Started watching for new ooms in manager
I0422 11:05:25.493365     541 oomparser.go:182] oomparser using systemd
I0422 11:05:25.495129     541 manager.go:256] Starting recovery of all containers
I0422 11:05:25.583462     541 manager.go:261] Recovery completed
I0422 11:05:25.622022     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:25.622065     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:25.622485     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:26.038631     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:26.038753     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:26.039300     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:26.852863     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:26.852892     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:26.853320     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:28.468911     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:28.468937     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:28.469355     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.207357     541 kubelet.go:2388] SyncLoop (ADD, "file"): "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11), k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8), k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)"
E0422 11:05:30.207416     541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.207465     541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.207505     541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.209316     541 kubelet.go:1764] Failed creating a mirror pod for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
E0422 11:05:30.209332     541 kubelet.go:1764] Failed creating a mirror pod for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.209396     541 manager.go:1688] Need to restart pod infra container for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)" because it is not found
W0422 11:05:30.209828     541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-etcd-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
E0422 11:05:30.209899     541 kubelet.go:1764] Failed creating a mirror pod for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
W0422 11:05:30.212690     541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-proxy-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.214297     541 manager.go:1688] Need to restart pod infra container for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)" because it is not found
W0422 11:05:30.214935     541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-master-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.220596     541 manager.go:1688] Need to restart pod infra container for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)" because it is not found
I0422 11:05:31.693419     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:31.693456     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:31.694191     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

API server container (exited) logs

I0425 13:18:55.516154       1 genericapiserver.go:82] Adding storage destination for group batch
W0425 13:18:55.516177       1 server.go:383] No RSA key provided, service account token authentication disabled
F0425 13:18:55.516185       1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory
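Note that the setup containers ("/setup-files.sh IP:1") in the docker ps -a output above also exited, with status 7; they are presumably what should generate files such as /srv/kubernetes/basic_auth.csv and the serving certificates, so their logs are worth checking too. A minimal sketch (the container IDs are the ones from the listing above, and the role of setup-files.sh here is an assumption):

# List the setup containers and read their logs to see why they exited with code 7
docker ps -a --filter "name=k8s_setup"
docker logs 12dd99c83c34
docker logs ef7383fa9203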

I reproduced your issue earlier, and I have also run the kubelet container successfully a few times.

Here is the exact command I am running when it succeeds:

export K8S_VERSION=v1.2.2
docker run \
  --volume=/:/rootfs:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:rw \
  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
  --volume=/var/run:/var/run:rw \
  --net=host \
  --pid=host \
  --privileged=true \
  --name=kubelet \
  -d \
  gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
  /hyperkube kubelet \
  --containerized \
  --hostname-override="127.0.0.1" \
  --address="0.0.0.0" \
  --api-servers=http://localhost:8080 \
  --config=/etc/kubernetes/manifests \
  --allow-privileged=true --v=2

I removed these two flags from the command suggested by the tutorial, because DNS was not needed in my case:

--cluster-dns=10.0.0.10
--cluster-domain=cluster.local

Also, I started the docker-machine SSH port forwarding in the background before starting the kubelet container, using this command:

docker-machine ssh `docker-machine active` -f -N -L "8080:localhost:8080"
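With that tunnel in place, you can sanity-check that the forwarded port works and, once the master pod is up, that the apiserver actually answers on it. A minimal check, assuming curl is available on your machine:

# Returns the apiserver build info once the apiserver is up; fails with "connection refused" before that
curl http://localhost:8080/version

# Lists registered nodes directly through the forwarded port
curl http://localhost:8080/api/v1/nodes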

I also did not make any changes to the SSL certificates.

I am able to run the kubelet container with both K8S_VERSION=v1.2.2 and K8S_VERSION=v1.2.3.

On a successful run, I observe that all processes are "Up"; none are "Exited":

$ docker ps -a
CONTAINER ID        IMAGE                                             COMMAND                  CREATED             STATUS              PORTS               NAMES
42e6d973f624        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube apiserver"   About an hour ago   Up About an hour                        k8s_apiserver.78ec1de_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_5d260d3c
135c020f14b4        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube controlle"   About an hour ago   Up About an hour                        k8s_controller-manager.70414b65_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_9b338f27
873656c913fd        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/setup-files.sh IP:1"   About an hour ago   Up About an hour                        k8s_setup.e5aa3216_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ff89fc7c
8b12f5f20e8f        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube scheduler"   About an hour ago   Up About an hour                        k8s_scheduler.fc12fcbe_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ea90af75
93d9b2387b2e        gcr.io/google_containers/etcd:2.2.1               "/usr/local/bin/etcd "   About an hour ago   Up About an hour                        k8s_etcd.7e452b0b_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_d66f84f0
f6e45af93ee9        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube proxy --m"   About an hour ago   Up About an hour                        k8s_kube-proxy.9a9f4853_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_b0084efc
f6748442f2d1        gcr.io/google_containers/pause:2.0                "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_f4758f9b
d515c10910c4        gcr.io/google_containers/pause:2.0                "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_3248c1d6
958f4865df9f        gcr.io/google_containers/pause:2.0                "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_3850b11e
2611ee951476        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube kubelet -"   About an hour ago   Up About an hour                        kubelet

On a successful run, I also see log output similar to yours when I run docker logs kubelet. In particular, I see:

Unable to register 127.0.0.1 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

But eventually it works:

$ kubectl -s http://localhost:8080 cluster-info
Kubernetes master is running at http://localhost:8080

$ kubectl get nodes
NAME             STATUS     AGE
127.0.0.1        Ready      1h
192.168.99.100   NotReady   1h
localhost        NotReady   1h

Tips:

  • You may need to wait a while for the API server to come up. For example, this guy uses a wait loop:

    until $(kubectl -s http://localhost:8080 cluster-info &> /dev/null); do
        sleep 1
    done

  • On Mac OS X, I have noticed that the Docker VM can get flaky whenever my wireless network changes or when I suspend/resume my laptop. I can usually resolve those problems with a docker-machine restart.

  • When experimenting with the kubelet, I often want to stop the kubelet container and then stop/remove all the containers in my Docker. I do that by running docker stop kubelet && docker rm -f $(docker ps -aq)

Information about my setup, OS X El Capitan 10.11.2:

$ docker --version
Docker version 1.10.3, build 20f81dd

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}

[I am not a Kubernetes expert, just following my nose here.]

The kubelet failure is apparently just a downstream symptom of port 8080 being closed, which you noted at the beginning of your question. It is not where you should be focusing.

Note the following line in the logs you showed us:

I0422 11:05:28.469355     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

So the kubelet is trying to contact the apiserver and getting connection refused. That is not surprising, given that, as you observed, the apiserver has exited.

The log lines you show us for the apiserver show it complaining about not having a certificate. The certificates normally live in /var/run/kubernetes (noted here), which falls inside the /var/run volume configured in your tutorial's docker command for running Kubernetes. I would look closely at that volume specification to see whether you made a mistake there, and check whether the certificates are present as expected.
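Since /var/run is bind-mounted from the host in that command, one way to check is to look for the generated certificates both on the host and from inside the kubelet container. A sketch under those assumptions (the paths come from the tutorial's volume flags):

# On the Docker host: the apiserver's self-signed serving certificate normally ends up here
ls -l /var/run/kubernetes/

# The same path seen from inside the kubelet container, via the /var/run:/var/run volume
docker exec kubelet ls -l /var/run/kubernetes

# Double-check which volumes actually got applied to the running kubelet container
docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' kubelet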

There are some bits in https://github.com/kubernetes/kubernetes/issues/11000 that may be useful for figuring out what is wrong with your certificates, including devurandom providing a script to create the certificates if that turns out to be necessary.
