I'll start by admitting that I'm pretty new to Docker and may be approaching this problem with the wrong set of assumptions... let me know if that's the case. I've seen lots of discussion of how Docker is useful for deployment, but no concrete examples of how that's actually done.
Here's the way I thought it would work (rough commands are sketched after the list):
1. create the data container to hold some persistent data on machine A
2. create the application container, which uses the volumes from the data container
3. do some work, potentially changing the data in the data container
4. stop the application container
5. commit & tag the data container
6. push the resulting image to a (private) repository
7. pull & run the image from step 6 on machine B
8. pick up where you left off on machine B
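In rough command form, with placeholder names (these aren't my real images), this is what I expected to work:
# on machine A
docker run --name app_data -v /data my-data-image                  # step 1
docker run --name app --volumes-from app_data my-app-image         # steps 2 & 3
docker stop app                                                     # step 4
docker commit app_data myrepository:5000/app-data:latest            # step 5
docker push myrepository:5000/app-data:latest                       # step 6
# on machine B
docker pull myrepository:5000/app-data:latest                       # step 7
docker run --name app_data -v /data myrepository:5000/app-data:latest
docker run --name app --volumes-from app_data my-app-image          # step 8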
The key step here is step 5, which I thought would save the current state (including the contents of the file system). You could then push that state to a repository & pull it from somewhere else, giving you a new container that is essentially identical to the original.
But it doesn't seem to work that way. Either step 5 doesn't do what I think it does, or step 7 (pulling & running the image) "resets" the container to its initial state.
I've put together a set of three Docker images and containers to test this: a data container, a writer which writes a random string into a file in the data container every 30 seconds, and a reader which simply echoes the value in the data container file and exits.
Data container
Created with
docker run \
--name datatest_data \
-v /datafolder \
myrepository:5000/datatest-data:latest
Dockerfile:
FROM ubuntu:trusty
# make the data folder
#
RUN mkdir /datafolder
# write something to the data file
#
RUN echo "no data here!" > /datafolder/data.txt
# expose the data folder
#
VOLUME /datafolder
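Each image is built from its own directory with a plain docker build, e.g. for this one something like
docker build -t myrepository:5000/datatest-data:latest .
and the writer and reader images below are built the same way.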
Writer
Created with
docker run \
--rm \
--name datatest_write \
--volumes-from datatest_data \
myrepository:5000/datatest-write:latest
Dockerfile:
FROM ubuntu:trusty
# Add script
#
ADD run.sh /usr/local/sbin/run.sh
RUN chmod 755 /usr/local/sbin/*.sh
CMD ["/usr/local/sbin/run.sh"]
run.sh:
#!/bin/bash
while :
do
    sleep 30s
    NEW_STRING=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
    echo "$NEW_STRING" >> /datafolder/data.txt
    date >> /datafolder/data.txt
    echo "wrote '$NEW_STRING' to file"
done
This script appends a random string and the current date/time to /datafolder/data.txt in the data container every 30 seconds.
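So after a couple of iterations data.txt looks something like this (the strings and timestamps here are made up, just to show the format):
no data here!
Xq7rT2mVb9LcK4sW8nYdP1aFhJ0eGzQu
Wed Oct 15 14:03:07 UTC 2014
R5tYu8oPa2sDf6gHj9kLz1xCv3bNm4qW
Wed Oct 15 14:03:37 UTC 2014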
Reader
Created with
docker run \
--rm \
--name datatest_read \
--volumes-from datatest_data \
myrepository:5000/datatest-read:latest
Dockerfile:
FROM ubuntu:trusty
# Add scripts
ADD run.sh /run.sh
RUN chmod 0777 /run.sh
CMD ["/run.sh"]
run.sh:
#!/bin/bash
echo "reading..."
echo "-----"
cat /datafolder/data.txt
echo "-----"
When I build & run these containers, they run fine and work the way I expect:
Stop & Start on the development machine (the stop/restart commands are sketched after the list):
- create the data container
- run the writer
- run the reader immediately, see the "no data here!" message
- wait a while
- run the reader, see the random string
- stop the writer
- restart the writer
- run the reader, see the same random string
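The stop/restart part of that is roughly the following (the writer is started with --rm, so "restarting" it just means running the same docker run command again):
docker stop datatest_write
# run the writer again, exactly as before
docker run --rm --name datatest_write --volumes-from datatest_data myrepository:5000/datatest-write:latest
# the reader still prints the same random string
docker run --rm --name datatest_read --volumes-from datatest_data myrepository:5000/datatest-read:latest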
But committing & pushing do not do what I expect:
- create the data container
- run the writer
- run the reader immediately, see the "no data here!" message
- wait a while
- run the reader, see the random string
- stop the writer
- commit & tag the data container with
docker commit datatest_data myrepository:5000/datatest-data:latest
- push to the repository
- delete all the containers & recreate them (the full sequence is sketched below)
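Spelled out, that sequence is roughly the following (the exact cleanup commands may not be word-for-word what I typed):
docker stop datatest_write
docker commit datatest_data myrepository:5000/datatest-data:latest
docker push myrepository:5000/datatest-data:latest
# remove the local containers and image (the --rm containers clean up after themselves)
docker rm datatest_data
docker rmi myrepository:5000/datatest-data:latest
# recreate everything from the pushed image
docker pull myrepository:5000/datatest-data:latest
docker run --name datatest_data -v /datafolder myrepository:5000/datatest-data:latest
docker run --rm --name datatest_write --volumes-from datatest_data myrepository:5000/datatest-write:latest
docker run --rm --name datatest_read --volumes-from datatest_data myrepository:5000/datatest-read:latest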
At this point, I would expect to run the reader & see the same random string, since the data container has been committed, pushed to the repository, and then recreated from the same image in the repository. However, what I actually see is the "no data here!" message.
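In other words, after the recreate the reader prints
reading...
-----
no data here!
-----
instead of the random string I was expecting between the dashes.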
Can someone explain where I'm going wrong here? Or, alternatively, point me to an example of how deployment is done with Docker?