Cannot run Hyperkube (Kubernetes) locally via Docker

I followed this tutorial to run a Kubernetes cluster locally in a Docker container. When I run kubectl get nodes, I get:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

I noticed that some of the containers started by the kubelet, such as the apiserver, have exited. This is the output of docker ps -a:

CONTAINER ID        IMAGE                                             COMMAND                  CREATED             STATUS                       PORTS               NAMES
778bc9a9a93c        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube apiserver"   3 seconds ago       Exited (255) 2 seconds ago                       k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_de6ff8f9
12dd99c83c34        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/setup-files.sh IP:1"   3 seconds ago       Exited (7) 2 seconds ago                         k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_3283400b
ef7383fa9203        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/setup-files.sh IP:1"   4 seconds ago       Exited (7) 4 seconds ago                         k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87beca1b
b3896f4896b1        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube scheduler"   5 seconds ago       Up 4 seconds                                     k8s_scheduler.fc12fcbe_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_16584c07
e9b1bc5aeeaa        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube apiserver"   5 seconds ago       Exited (255) 4 seconds ago                       k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87e1ad70
c81dbe181afa        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube controlle"   5 seconds ago       Up 4 seconds                                     k8s_controller-manager.70414b65_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_1e30d242
63dfa0fb0881        gcr.io/google_containers/etcd:2.2.1               "/usr/local/bin/etcd "   5 seconds ago       Up 4 seconds                                     k8s_etcd.7e452b0b_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_94a862fa
6bb963ef351d        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube proxy --m"   5 seconds ago       Up 4 seconds                                     k8s_kube-proxy.9a9f4853_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_6098241c
311e2788de45        gcr.io/google_containers/pause:2.0                "/pause"                 5 seconds ago       Up 4 seconds                                     k8s_POD.6059dfa2_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_79e4e3e8
3b3cf3ada645        gcr.io/google_containers/pause:2.0                "/pause"                 5 seconds ago       Up 4 seconds                                     k8s_POD.6059dfa2_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_9eb869b9
aa7efd2154fb        gcr.io/google_containers/pause:2.0                "/pause"                 5 seconds ago       Up 5 seconds                                     k8s_POD.6059dfa2_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_b66baa5f
c380b4a9004e        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube kubelet -"   12 seconds ago      Up 12 seconds                                    kubelet

INFO

  • Docker version: 1.10.3

  • Kubernetes version: 1.2.2

  • Operating system: Ubuntu 14.04

Docker run command

docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged=true \
    --name=kubelet \
    -d \
    gcr.io/google_containers/hyperkube-amd64:v1.2.2 \
    /hyperkube kubelet \
        --containerized \
        --hostname-override="172.20.34.112" \
        --address="0.0.0.0" \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local \
        --allow-privileged=true --v=2

kubelet container logs

I0422 11:04:45.158370     541 plugins.go:56] Registering credential provider: .dockercfg
I0422 11:05:25.199632     541 plugins.go:291] Loaded volume plugin "kubernetes.io/aws-ebs"
I0422 11:05:25.199788     541 plugins.go:291] Loaded volume plugin "kubernetes.io/empty-dir"
I0422 11:05:25.199863     541 plugins.go:291] Loaded volume plugin "kubernetes.io/gce-pd"
I0422 11:05:25.199903     541 plugins.go:291] Loaded volume plugin "kubernetes.io/git-repo"
I0422 11:05:25.199948     541 plugins.go:291] Loaded volume plugin "kubernetes.io/host-path"
I0422 11:05:25.199982     541 plugins.go:291] Loaded volume plugin "kubernetes.io/nfs"
I0422 11:05:25.200023     541 plugins.go:291] Loaded volume plugin "kubernetes.io/secret"
I0422 11:05:25.200059     541 plugins.go:291] Loaded volume plugin "kubernetes.io/iscsi"
I0422 11:05:25.200115     541 plugins.go:291] Loaded volume plugin "kubernetes.io/glusterfs"
I0422 11:05:25.200170     541 plugins.go:291] Loaded volume plugin "kubernetes.io/persistent-claim"
I0422 11:05:25.200205     541 plugins.go:291] Loaded volume plugin "kubernetes.io/rbd"
I0422 11:05:25.200249     541 plugins.go:291] Loaded volume plugin "kubernetes.io/cinder"
I0422 11:05:25.200289     541 plugins.go:291] Loaded volume plugin "kubernetes.io/cephfs"
I0422 11:05:25.200340     541 plugins.go:291] Loaded volume plugin "kubernetes.io/downward-api"
I0422 11:05:25.200382     541 plugins.go:291] Loaded volume plugin "kubernetes.io/fc"
I0422 11:05:25.200430     541 plugins.go:291] Loaded volume plugin "kubernetes.io/flocker"
I0422 11:05:25.200471     541 plugins.go:291] Loaded volume plugin "kubernetes.io/azure-file"
I0422 11:05:25.200519     541 plugins.go:291] Loaded volume plugin "kubernetes.io/configmap"
I0422 11:05:25.200601     541 server.go:645] Started kubelet
E0422 11:05:25.200796     541 kubelet.go:956] Image garbage collection failed: unable to find data for container /
I0422 11:05:25.200843     541 server.go:126] Starting to listen read-only on 0.0.0.0:10255
I0422 11:05:25.201531     541 server.go:109] Starting to listen on 0.0.0.0:10250
E0422 11:05:25.201684     541 event.go:202] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
I0422 11:05:25.206656     541 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0422 11:05:25.206714     541 manager.go:123] Starting to sync pod status with apiserver
I0422 11:05:25.206888     541 kubelet.go:2356] Starting kubelet main sync loop.
I0422 11:05:25.207036     541 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
I0422 11:05:25.333829     541 factory.go:233] Registering Docker factory
I0422 11:05:25.336920     541 factory.go:97] Registering Raw factory
I0422 11:05:25.392065     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:25.392148     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:25.398401     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:25.492441     541 manager.go:1003] Started watching for new ooms in manager
I0422 11:05:25.493365     541 oomparser.go:182] oomparser using systemd
I0422 11:05:25.495129     541 manager.go:256] Starting recovery of all containers
I0422 11:05:25.583462     541 manager.go:261] Recovery completed
I0422 11:05:25.622022     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:25.622065     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:25.622485     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:26.038631     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:26.038753     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:26.039300     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:26.852863     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:26.852892     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:26.853320     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:28.468911     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:28.468937     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:28.469355     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.207357     541 kubelet.go:2388] SyncLoop (ADD, "file"): "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11), k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8), k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)"
E0422 11:05:30.207416     541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.207465     541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.207505     541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.209316     541 kubelet.go:1764] Failed creating a mirror pod for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
E0422 11:05:30.209332     541 kubelet.go:1764] Failed creating a mirror pod for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.209396     541 manager.go:1688] Need to restart pod infra container for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)" because it is not found
W0422 11:05:30.209828     541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-etcd-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
E0422 11:05:30.209899     541 kubelet.go:1764] Failed creating a mirror pod for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
W0422 11:05:30.212690     541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-proxy-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.214297     541 manager.go:1688] Need to restart pod infra container for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)" because it is not found
W0422 11:05:30.214935     541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-master-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.220596     541 manager.go:1688] Need to restart pod infra container for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)" because it is not found
I0422 11:05:31.693419     541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:31.693456     541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:31.694191     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

apiserver container logs (exited)

I0425 13:18:55.516154       1 genericapiserver.go:82] Adding storage destination for group batch
W0425 13:18:55.516177       1 server.go:383] No RSA key provided, service account token authentication disabled
F0425 13:18:55.516185       1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory
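For reference, these came from the exited apiserver container; docker logs works on exited containers too, so they can be pulled by container ID, for example:

# The ID of the most recently exited apiserver container from docker ps -a above
docker logs 778bc9a9a93c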

I have reproduced your problem before, and I have also successfully run the kubelet container a couple of times.

Here is the exact command I run when it succeeds:

export K8S_VERSION=v1.2.2
docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged=true \
    --name=kubelet \
    -d \
    gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
    /hyperkube kubelet \
        --containerized \
        --hostname-override="127.0.0.1" \
        --address="0.0.0.0" \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --allow-privileged=true --v=2

I removed these two settings from the command suggested by the tutorial, because DNS was not needed in my case:

--cluster-dns=10.0.0.10
--cluster-domain=cluster.local

Also, I started the docker-machine SSH port forward in the background before starting the kubelet container, using this command:

docker-machine ssh `docker-machine active` -f -N -L "8080:localhost:8080"
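To confirm the forward is working, a quick check like the following (my own sanity check, not part of the tutorial) should return the apiserver's version JSON once the apiserver is up:

# Hits the apiserver's insecure port through the SSH tunnel
curl http://localhost:8080/version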

I also did not make any changes to the SSL certificates.

I can run the kubelet container with both K8S_VERSION=v1.2.2 and K8S_VERSION=1.2.3.

On a successful run, I observe that all the processes are "Up"; none have "Exited":

$ docker ps -a
CONTAINER ID        IMAGE                                             COMMAND                  CREATED             STATUS              PORTS               NAMES
42e6d973f624        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube apiserver"   About an hour ago   Up About an hour                        k8s_apiserver.78ec1de_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_5d260d3c
135c020f14b4        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube controlle"   About an hour ago   Up About an hour                        k8s_controller-manager.70414b65_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_9b338f27
873656c913fd        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/setup-files.sh IP:1"   About an hour ago   Up About an hour                        k8s_setup.e5aa3216_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ff89fc7c
8b12f5f20e8f        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube scheduler"   About an hour ago   Up About an hour                        k8s_scheduler.fc12fcbe_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ea90af75
93d9b2387b2e        gcr.io/google_containers/etcd:2.2.1               "/usr/local/bin/etcd "   About an hour ago   Up About an hour                        k8s_etcd.7e452b0b_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_d66f84f0
f6e45af93ee9        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube proxy --m"   About an hour ago   Up About an hour                        k8s_kube-proxy.9a9f4853_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_b0084efc
f6748442f2d1        gcr.io/google_containers/pause:2.0                "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_f4758f9b
d515c10910c4        gcr.io/google_containers/pause:2.0                "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_3248c1d6
958f4865df9f        gcr.io/google_containers/pause:2.0                "/pause"                 About an hour ago   Up About an hour                        k8s_POD.6059dfa2_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_3850b11e
2611ee951476        gcr.io/google_containers/hyperkube-amd64:v1.2.2   "/hyperkube kubelet -"   About an hour ago   Up About an hour                        kubelet

On a successful run, I also see log output similar to yours when I run docker logs kubelet. In particular, I see:

Unable to register 127.0.0.1 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
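One way to watch for that registration to eventually succeed (my own habit, not something the tutorial prescribes) is to stream the kubelet logs until the "Unable to register" errors stop:

# Follow the kubelet container's logs in real time
docker logs -f kubelet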

But, eventually, it works:

$ kubectl -s http://localhost:8080 cluster-info
Kubernetes master is running at http://localhost:8080

$ kubectl get nodes
NAME             STATUS     AGE
127.0.0.1        Ready      1h
192.168.99.100   NotReady   1h
localhost        NotReady   1h

Other tips:

  • You may need to wait a while for the API server to come up. For example, this guy uses a while loop (a variant with a timeout is sketched after this list):

    until $(kubectl -s http://localhost:8080 cluster-info &> /dev/null); do
        sleep 1
    done

  • On Mac OS X, I have noticed that the Docker VM can become unstable whenever my wireless connection changes or when I suspend/resume my laptop. I can usually resolve such issues with a docker-machine restart.

  • When experimenting with the kubelet, I often want to stop the kubelet container and stop/remove all the containers in my Docker. I do that by running docker stop kubelet && docker rm -f $(docker ps -aq)
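Here is the timeout variant of the wait loop mentioned in the first tip above: a minimal sketch, where the 60-second budget is an assumption about how long the apiserver needs on your machine:

# Wait up to 60 seconds for the apiserver to answer, then give up
for i in $(seq 1 60); do
    if kubectl -s http://localhost:8080 cluster-info &> /dev/null; then
        echo "apiserver is up"
        break
    fi
    sleep 1
done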

Information about my setup, OS X El Capitan 10.11.2:

$ docker --version
Docker version 1.10.3, build 20f81dd

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}

[I am not a Kubernetes expert, just following my nose here.]

The kubelet failure is apparently a downstream symptom of port 8080 being closed, which you noticed at the start of your question. It is not where you should focus.

Note the following line in the logs you showed us:

I0422 11:05:28.469355     541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

So the kubelet is trying to contact the apiserver, and the connection is refused. That is not surprising given that, as you note, the apiserver has exited.
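A quick way to confirm that nothing is answering on that port (my own check, not something from your logs) is to hit the apiserver's health endpoint directly:

# Prints "ok" when a healthy apiserver is listening; right now it should fail to connect
curl http://localhost:8080/healthz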

The log lines you show for the apiserver show it complaining that it does not have a certificate. Certificates normally live in /var/run/kubernetes (pointer here). That falls within the /var/run volume that is set up in the docker command for running Kubernetes in the tutorial. I would look closely at that volume specification, to check whether you have made any mistake and whether the certificates are there as expected.
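As a quick check (a sketch on my part; the paths follow from the tutorial's volume mounts), you could list that directory both on the host and from inside the kubelet container:

# On the host: the apiserver writes its generated certificates here
ls -l /var/run/kubernetes

# Inside the kubelet container, the same directory should be visible
# through the --volume=/var/run:/var/run:rw mount
docker exec kubelet ls -l /var/run/kubernetes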

There are some bits in https://github.com/kubernetes/kubernetes/issues/11000 which might help you figure out what is wrong with your certificates, including devurandom providing a script to create the certificates, if that is what is needed.
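If you do end up creating a certificate by hand, a throwaway self-signed pair can be generated along these lines (a minimal sketch; the output paths and CN are my assumptions, not taken from that issue):

# Generate an unencrypted 2048-bit RSA key and a self-signed cert valid for a year
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /srv/kubernetes/server.key \
    -out /srv/kubernetes/server.cert \
    -days 365 -subj "/CN=localhost"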
