Confusion with Jenkins Docker Plugin and Jenkins Docker Slaves

So I am pleasantly surprised, in some respects, to see the Jenkins Docker plugin "pushing" Docker images to my Docker host metal, but it is confusing too, because my builds take place in Docker slave containers running on that same Docker host metal. Even my Jenkins master runs in a Docker container, not directly on metal...

Following this popular Jenkins master/slave guide, I arrived at the point where I had Jenkins builds working in ephemeral Docker containers.

This means that when I do a Jenkins build of some source code software component/service of mine, the build is kicked off in a Jenkins slave, which happens to be a Docker container spun up by the Jenkins Docker Plugin.

The Jenkins workspace lives in this slave container; the Jenkins master, with the Docker Plugin installed, disposes of the slave container once the build is complete. See a diagram I made to help explain:

[diagram: Jenkins master container dispatching a build to an ephemeral Jenkins slave container, both running on the same Docker host metal]

Some important follow-up points after you have digested this diagram:

  • The Jenkins Master and Jenkins Slave are running on the same Docker Host Metal at this point, as I am just at the beginning stages of getting this system running
  • I am using the Docker Plugin and SSH Slaves plugin to accomplish this setup

So within this Docker slave, my software component/service build artifact is created; it could be, for example, a .dll or a .war. In my case, though, the build artifact happens to be a Docker image. To be clear: I am building a Docker image inside a running Docker container (the Jenkins slave).

My confusion starts with my expectation that I should have to explicitly run a command to push my software component's Docker image build artifact to a Docker registry. Otherwise, when the Jenkins build job is complete, the Docker Plugin will shut down the Docker slave container and dispose of (rm) it, and then I will lose the build artifact inside that slave container.

What actually happens, and why I am pleasantly surprised (at least in the short term, while I am getting DevOps up and running), is that the build artifact Docker image shows up on the Docker host metal, visible via docker image ls.
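For example, right after a build job finishes I can list images directly on the host metal and the artifact is there (the image name and output below are hypothetical):

# Run on the Docker host metal, not inside any container
docker image ls
# REPOSITORY                   TAG      IMAGE ID       CREATED         SIZE
# tsl.security.service.image   latest   0a1b2c3d4e5f   2 minutes ago   450MB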

I am surprised that the Docker Plugin would go to this level of assumption/help... I know the Docker Plugin allows you to configure a Docker registry, and that you can add a build step to Build/Publish to a Docker Cloud; I assume that cloud is treated as a registry for images and perhaps a place to also run those images:

[screenshot: the Docker Plugin's Build/Publish to a Docker Cloud build step configuration]

What is particularly interesting is that I am not using the Docker Plugin for any build steps; I just use the Docker Plugin to provision a slave container for the Jenkins build item:

[screenshot: Jenkins item configured to build on a Docker Plugin slave container]

The only build step I have is an Execute Shell step. Yes, this script happens to ultimately build a Docker image, but the Docker Plugin would not know this:

[screenshot: the job's single Execute Shell build step]
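For illustration, a minimal sketch of what that Execute Shell step boils down to (the image name and tag are hypothetical, not my actual script):

#!/bin/bash
# Runs inside the Jenkins slave container; BUILD_NUMBER is set by Jenkins
docker build -t tsl.security.service.image:$BUILD_NUMBER .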

The Docker Plugin spins up the Docker slave containers. I configure the Docker Plugin with a Docker host (my metal, in my situation; a "Cloud" is what the Docker Plugin calls a Docker host) and with the Docker slave images to use on that Docker host/cloud:

[screenshot: the Docker Plugin's Cloud configuration pointing at my Docker host metal]

Do I just have misunderstandings about how isolated a Jenkins build workspace happens to be when it is inside a Docker slave container?

Is the Docker Plugin just defaulting to using the one and only Docker cloud (my Docker host metal) I have set up for any docker commands I happen to run inside a Jenkins Docker slave container? (A slave container that, by the way, does have Docker CE installed on it.)
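A quick way to probe that isolation from inside a running slave container (a sketch; the exact error text varies by Docker version):

# Inside the Jenkins slave container
docker info
# -> "Cannot connect to the Docker daemon..." if the CLI has no daemon to talk to;
#    Docker CE is installed here, but no daemon is started (the CMD runs sshd)
DOCKER_HOST=tcp://172.17.0.1:4243 docker info
# -> succeeds and describes the host's daemon, if that TCP endpoint is reachable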

My Jenkins Master Dockerfile:

#reference
#https://engineering.riotgames.com/news/putting-jenkins-docker-container

FROM jenkins:2.60.1
MAINTAINER Brian Ogden

USER root

#Timezone
ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Prep Jenkins Directories
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins

# Copy in local config files
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh

# Install default plugins
# Set list of plugins to download / update in plugins.txt like this
# pluginID:version
# credentials:1.18
# maven-plugin:2.7.1
# ...
# NOTE : Just set pluginID to download latest version of plugin.
# NOTE : All plugins need to be listed as there is no transitive dependency resolution.
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/plugins.sh /tmp/plugins.txt

USER jenkins

#give Jenkins a nice 8 GB memory pool and room to handle garbage collection
#ENV JAVA_OPTS="-Xmx8192m"
#give Jenkins a nice base pool of handlers and a cap
#ENV JENKINS_OPTS="--handlerCountStartup=100 --handlerCountMax=300"

ENV JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"

I use docker-compose and a Docker volume with my Jenkins master; my docker-compose.yml:

version: '2'
services:
  data:
    build: data
    image: tsl.devops.jenkins.data.image
    container_name: tsl.devops.jenkins.data.container
  master:
    build: master
    image: tsl.devops.jenkins.master.image
    container_name: tsl.devops.jenkins.master.container
    volumes_from:
      - data
    ports:
      - "50000:50000"
    #network_mode: "host"
  nginx:
    build: nginx
    image: tsl.devops.jenkins.nginx.image
    container_name: tsl.devops.jenkins.nginx.container
    ports:
      - "80:80"
    links:
      - master:jenkins-master
  slavebasic:
    build:
      context: ./slaves
      dockerfile: basic/Dockerfile
    image: tsl.devops.jenkins.slave.basic.image
    container_name: tsl.devops.jenkins.slave.basic.container
  slavedotnetcore:
    build:
      context: ./slaves
      dockerfile: dotnetcore/Dockerfile
    image: tsl.devops.jenkins.slave.dotnetcore.image
    container_name: tsl.devops.jenkins.slave.dotnetcore.container
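For completeness, I bring this stack up with something like:

# From the directory containing docker-compose.yml
docker-compose up -d --build    # build the images and start the containers
docker-compose logs -f master   # tail the Jenkins master log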

My Jenkins master data volume Dockerfile:

#reference
#https://engineering.riotgames.com/news/docker-jenkins-data-persists
FROM centos:7
MAINTAINER Brian Ogden

#create the Jenkins user in this container
RUN useradd -d "/var/jenkins_home" -u 1000 -m -s /bin/bash jenkins
#NOTE: we set the UID here to the same one the Cloudbees Jenkins image uses 
#so we can match UIDs across containers, which is essential if you want 
#to preserve file permissions between the containers. We also use the same home directory and bash settings.

#Jenkins log directory
RUN mkdir -p /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins

#Docker volume magic
VOLUME ["/var/log/jenkins", "/var/jenkins_home"]
USER jenkins

#just a little output reminder of the container's purpose
CMD ["echo", "Data container for Jenkins"]

My Slave Dockerfile:

FROM centos:7
MAINTAINER Brian Ogden

#the USER will be root by default just explicitly 
#expressing it for better documentation
USER root

# Install Essentials
RUN yum update -y && \
    yum clean all

#############################################
# Jenkins Slave setup
#############################################
RUN yum install -y \
    git \
    wget \
    openssh-server \
    java-1.8.0-openjdk \
    sudo \
    make && \
    yum clean all

# gen dummy keys, centos doesn't autogen them like ubuntu does
RUN /usr/bin/ssh-keygen -A

# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional pam_loginuid.so/' /etc/pam.d/sshd

# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash

# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/id_rsa.pub /home/jenkins/.ssh/authorized_keys

#setup permissions for the new folders and files
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh

# Add the jenkins user to sudoers
RUN echo "jenkins  ALL=(ALL)  ALL" >> etc/sudoers
#############################################

#############################################
# Docker and Docker Compose Install
#############################################
#install required packages
RUN yum install -y \
    yum-utils \
    device-mapper-persistent-data \
    lvm2 \
    curl && \
    yum clean all

#add Docker CE stable repository
RUN yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

#Update the yum package index.
RUN yum makecache fast

#install Docker CE
RUN yum install -y docker-ce-17.06.0.ce-1.el7.centos

#install Docker Compose 1.14.0
#download Docker Compose binary from github repo
RUN curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
#Apply executable permissions to the binary
RUN chmod +x /usr/local/bin/docker-compose
#############################################

#############################################
# .NET Core SDK
#############################################
RUN yum install -y \
    libunwind \
    libicu

RUN curl -sSL -o dotnet.tar.gz https://go.microsoft.com/fwlink/?linkid=848821
RUN mkdir -p /opt/dotnet && tar zxf dotnet.tar.gz -C /opt/dotnet
RUN ln -s /opt/dotnet/dotnet /usr/local/bin

#add Trade Service Nuget Server
RUN mkdir -p /home/jenkins/.nuget/NuGet
COPY /files/NuGet.Config /home/jenkins/.nuget/NuGet/NuGet.Config

RUN chown -R jenkins /home/jenkins/.nuget
RUN chgrp -R jenkins /home/jenkins/.nuget

RUN chmod 600 /home/jenkins/.nuget/NuGet/NuGet.Config
RUN chmod 700 /home/jenkins/.nuget/NuGet

#speed up dotnet core builds
ENV NUGET_XMLDOC_MODE skip
ENV DOTNET_SKIP_FIRST_TIME_EXPERIENCE true
#############################################

# Expose SSH port and run SSHD
EXPOSE 22
#Technically, the Docker Plugin enforces this command when it starts containers by overriding the entry command.
#I place it here because I want this build slave to run locally just as it would if it were started in the build farm.
CMD ["/usr/sbin/sshd","-D"]

An example software/component Dockerfile that will create a Docker image build artifact inside a Jenkins Slave Docker container:

FROM centos:7
MAINTAINER Brian Ogden

#Timezone
ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

RUN yum update -y && \
         yum clean all

#############################################
# .NET Core SDK
#############################################
RUN yum install -y \
    libunwind \
    libicu

RUN curl -sSL -o dotnet.tar.gz https://go.microsoft.com/fwlink/?linkid=848821
RUN mkdir -p /opt/dotnet && tar zxf dotnet.tar.gz -C /opt/dotnet
RUN ln -s /opt/dotnet/dotnet /usr/local/bin

#speed up dotnet core builds
ENV NUGET_XMLDOC_MODE skip
ENV DOTNET_SKIP_FIRST_TIME_EXPERIENCE true
#############################################

#############################################
# .NET Service setup
#############################################
ARG ASPNETCORE_ENVIRONMENT

# Copy our code from the "/src/MyWebApi/bin/Debug/netcoreapp1.1/publish" folder to the "/app" folder in our container
WORKDIR /app
COPY ./src/TSL.Security.Service/bin/Debug/netcoreapp1.1/publish .

# Expose port 5000 for the Web API traffic
ENV ASPNETCORE_URLS http://+:5000
ENV ASPNETCORE_ENVIRONMENT $ASPNETCORE_ENVIRONMENT 

EXPOSE 5000

# Run the dotnet application against a DLL from within the container
# Don't forget to publish your application or this won't work
ENTRYPOINT ["dotnet", "TSL.Security.Service.dll"]
#############################################
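Note that the COPY above assumes the publish output already exists, so the docker build must be preceded by a dotnet publish; a typical sequence inside the slave might look like this (the image name and environment value are hypothetical):

# Inside the Jenkins slave, from the repository root
dotnet restore src/TSL.Security.Service
dotnet publish src/TSL.Security.Service -c Debug   # emits bin/Debug/netcoreapp1.1/publish
docker build --build-arg ASPNETCORE_ENVIRONMENT=Development -t tsl.security.service.image .
docker run -d -p 5000:5000 tsl.security.service.image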

According to your Docker Plugin configuration, you are using 172.17.0.1 as the Docker host. From the slave or master container, this is the Docker daemon running on the host (there is no Docker-in-Docker happening here). When your Jenkins slave builds an image (regardless of whether the slave is running as a container or on the host), it is using the Docker daemon on the host, and this is why your image shows up on the host.
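One way to convince yourself of this (a sketch; the image name is made up) is to build through that endpoint explicitly and then look at the host:

# Inside the slave container: build against the host daemon
docker -H tcp://172.17.0.1:4243 build -t example/app .
# Then on the host metal:
docker image ls example/app   # the image is here; it never lived "inside" the slave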

It is worth noting that the data likely first goes to the Docker volume used by the slave (according to the Jenkins Dockerfile at https://github.com/jenkinsci/docker/blob/9f29488b77c2005bbbc5c936d47e697689f8ef6e/Dockerfile, the default is /var/jenkins_home). In your case, this is just a volume from the data service (though in Compose v2 format you can just define a named volume; you don't need to create a data container). From there, your code and Dockerfile get sent as the Docker build context to the daemon on the host through the API at tcp://172.17.0.1:4243.
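For reference, the named-volume alternative can be sketched with plain docker commands (the volume name is illustrative):

# Replaces the data container + volumes_from pattern
docker volume create jenkins_home
docker run -d -v jenkins_home:/var/jenkins_home tsl.devops.jenkins.master.image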

Can you confirm the host layout? Are the Jenkins master and slave on different physical hosts? Also, which Docker plugins are you using? I notice there are at least two for Jenkins (Docker Plugin and Docker Slaves Plugin); I'm not sure if you are using one, both, or another.

Hi @AndyShinn, I updated my question to answer your questions: the Jenkins master and Jenkins slave are running on the same Docker host metal at this point, as I am just at the beginning stages of getting this system running, and I am using the Docker Plugin and SSH Slaves plugin to accomplish this setup.

I have an explanation I can give, but I am confused about a couple more parts that maybe you can clarify. 1) How does the Docker slaves plugin know to launch Jenkins as the slave Docker image? It is unclear to me how the image built from the Dockerfile you provide actually gets launched as the Jenkins slave. 2) Can you confirm the docker run command you used to start the master? I am wondering if it passes in the Docker control socket, in which case any builds in the container would just use the host Docker (thus why you are seeing the image being built on the host). I'll outline this as an answer.
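(By "passes in the Docker control socket" I mean something like this when starting the master; a hypothetical sketch:)

# Bind-mount the host's Docker socket into the master container
docker run -d -v /var/run/docker.sock:/var/run/docker.sock tsl.devops.jenkins.master.image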

@AndyShinn ok, I updated my question with the information you requested. I added the Dockerfile for my Jenkins master; since it uses a Docker volume, I added the Dockerfile for the Jenkins master data volume as well. I also explained, including a Jenkins configuration screenshot, how you configure the "Docker Plugin" (NOT the Docker Slaves plugin, as you called it) to use a defined Docker image as a slave; the Dockerfile for my slave was included in my original question.