Dockerized Starlab
15/Jun 2015
Once you have Docker installed, you may want to see it in action. Here I demonstrate how to use Docker in a slightly different way with respect to what you can usually find around (AKA using Docker to set up a web server or something similar).
I think Docker is a great solution to deal with ease of installation and reproducibility in science. LXD would probably be even better: it already provides unprivileged containers AND it is more about a container holding more than a single application, while Docker is based on the idea of one container per app. However, I still have to try LXD.
Here I will show you how to use Docker to install and run Starlab.
NB: nVidia (AKA the most annoying GPU producer in the world) drivers, in addition to being the worst Linux GPU drivers, require a system-dependent installation. This means that you can't just download the Docker image from my Docker registry and run a container from it: you need to download the Dockerfile and build the image on your own.
You can use the image I provide ONLY if you run the non-GPU StarLab version. And to do that you need a loooot of time to wait for the simulations to finish.
Create a Docker image
The image I’m going to create contains:
- our modified StarLab version (it contains updated stellar evolution recipes), both the GPU and the non-GPU version
- the same version with an Allen-Santillan galactic tidal field, corrected for the non-inertial reference frame used in StarLab (at the moment this version is not working, probably because of a problem in the timestep calculation, but I am working on it!)
You can use the public version and correct the Dockerfile accordingly.
It is possible to download and extract the StarLab sources directly from the Internet, but I prefer to have everything already in the folder.
First of all, create a new empty folder and cd into it. Then, copy the StarLab sources and the Dockerfile into that folder.
Mine looks like this:
starlabDocker.tar.gz
|-sapporo
|-starlab
|-starlabAS
where starlabAS only contains the files that differ from the version without the Allen-Santillan tidal field.
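If you keep the sources in separate directories as above, the bundle can be created with a plain tar command; this is just a sketch, run from the folder that contains the three source directories:

```shell
# Pack the three source directories into the bundle the Dockerfile
# expects; run this in the folder that contains them.
tar -czf starlabDocker.tar.gz sapporo starlab starlabAS
# Sanity check: list the archive contents.
tar -tzf starlabDocker.tar.gz
```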
Then, you need a Dockerfile. The Dockerfile tells Docker what it has to do in order to create your image: which base image to use (if any), which packages to download and install, and so on.
Mine is:
FROM ubuntu:14.04
MAINTAINER brunetto ziosi <my email hehe>
# For the public version of StarLab4.4.4, see http://www.sns.ias.edu/~starlab/
ENV DEBIAN_FRONTEND noninteractive
ENV STARLAB_FILE starlabDocker.tar.gz
# Copy StarLab bundle into the image
COPY $STARLAB_FILE /
# This has to be set by hand and MUST be the same as the host's
##############
# longisland #
##############
# ENV CUDA_DRIVER 340.46
# ENV CUDA_INSTALL http://us.download.nvidia.com/XFree86/Linux-x86_64/${CUDA_DRIVER}/NVIDIA-Linux-x86_64-${CUDA_DRIVER}.run
# ENV CUDA_TOOLKIT cuda_6.0.37_linux_64.run
# ENV CUDA_TOOLKIT_DOWNLOAD http://developer.download.nvidia.com/compute/cuda/6_0/rel/installers/$CUDA_TOOLKIT
##############
# uno #
##############
# ENV CUDA_DRIVER 331.38
# ENV CUDA_INSTALL http://us.download.nvidia.com/XFree86/Linux-x86_64/${CUDA_DRIVER}/NVIDIA-Linux-x86_64-${CUDA_DRIVER}.run
# ENV CUDA_TOOLKIT cuda_5.5.22_linux_64.run
# ENV CUDA_TOOLKIT_DOWNLOAD http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/$CUDA_TOOLKIT
##############
# spritz #
##############
ENV CUDA_DRIVER 331.113
ENV CUDA_INSTALL http://us.download.nvidia.com/XFree86/Linux-x86_64/${CUDA_DRIVER}/NVIDIA-Linux-x86_64-${CUDA_DRIVER}.run
ENV CUDA_TOOLKIT cuda_5.5.22_linux_64.run
ENV CUDA_TOOLKIT_DOWNLOAD http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/$CUDA_TOOLKIT
################
# sfursat #
# to be tested #
################
# ENV CUDA_DRIVER 270.41.19
# ENV CUDA_INSTALL http://us.download.nvidia.com/XFree86/Linux-x86_64/${CUDA_DRIVER}/NVIDIA-Linux-x86_64-${CUDA_DRIVER}.run
# ENV CUDA_TOOLKIT ????
# ENV CUDA_TOOLKIT_DOWNLOAD ????????
# Update and install minimal and clean up packages
RUN apt-get update --quiet && apt-get install --yes \
--no-install-recommends --no-install-suggests \
build-essential module-init-tools wget libboost-all-dev \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Install CUDA drivers
RUN wget $CUDA_INSTALL -P /tmp --no-verbose \
&& chmod +x /tmp/NVIDIA-Linux-x86_64-${CUDA_DRIVER}.run \
&& /tmp/NVIDIA-Linux-x86_64-${CUDA_DRIVER}.run -s -N --no-kernel-module \
&& rm -rf /tmp/*
# Install CUDA toolkit
RUN wget $CUDA_TOOLKIT_DOWNLOAD && chmod +x $CUDA_TOOLKIT \
&& ./$CUDA_TOOLKIT -toolkit -toolkitpath=/usr/local/cuda-site -silent -override \
&& rm $CUDA_TOOLKIT
# Set env variables
RUN echo "PATH=$PATH:/usr/local/cuda-site/bin" >> .bashrc \
&& echo "LD_LIBRARY_PATH=/usr/local/cuda-site/lib64" >> .bashrc \
&& . /.bashrc \
&& ldconfig /usr/local/cuda-site/lib64
# Install StarLab w/ and w/o GPU, w/ and w/o tidal fields
RUN tar -xvf $STARLAB_FILE && rm $STARLAB_FILE \
&& cp -r starlab starlab-no-GPU \
&& cp -r starlab starlabAS-no-GPU \
&& cp -r starlab starlabAS-GPU \
&& mv starlab starlab-GPU
# Tidal field version only has 5 files different,
# so we can copy them into a copy of the non TF version:
# starlab/src/node/dyn/util/add_tidal.C
# starlab/src/node/dyn/util/dyn_external.C
# starlab/src/node/dyn/util/dyn_io.C
# starlab/src/node/dyn/util/set_com.C
# starlab/src/node/dyn/util/dyn_story.C
RUN cp starlabAS/*.C starlabAS-no-GPU/src/node/dyn/util/ \
&& cp starlabAS/*.C starlabAS-GPU/src/node/dyn/util/ \
&& cp starlabAS/dyn.h starlabAS-no-GPU/include/ \
&& cp starlabAS/dyn.h starlabAS-GPU/include/ \
&& rm -rf starlabAS
# Compile sapporo
RUN cd sapporo/ && make && bash compile.sh && cd ../
# With and w/o GPU and w/ and w/o AS tidal fields
RUN cd /starlab-GPU/ && ./configure --with-f77=no && make && make install && cd ../ \
&& mv /starlab-GPU/usr/bin slbin-GPU && rm -rf /starlab-GPU \
&& cd /starlabAS-GPU/ && ./configure --with-f77=no && make && make install && cd ../ \
&& mv /starlabAS-GPU/usr/bin slbinAS-GPU && rm -rf /starlabAS-GPU \
&& cd /starlab-no-GPU/ && ./configure --with-f77=no --with-grape=no && make && make install && cd ../ \
&& mv /starlab-no-GPU/usr/bin slbin-no-GPU && rm -rf /starlab-no-GPU \
&& cd /starlabAS-no-GPU/ && ./configure --with-f77=no --with-grape=no && make && make install && cd ../ \
&& mv /starlabAS-no-GPU/usr/bin slbinAS-no-GPU && rm -rf /starlabAS-no-GPU
# Default command.
ENTRYPOINT ["/bin/bash"]
The first part of the Dockerfile specifies Ubuntu 14.04 as base image (a special version customized for Docker). Then it lists me as maintainer of the image.
What follows are environment variables needed for the installation.
COPY copies the StarLab sources from the host folder to the image / folder.
After that I set the variables needed to install the correct CUDA drivers and libraries for each system.
After setting the environment variables, the RUN command is used to update the system package indexes and to install the needed build tools.
Then we can install the CUDA drivers and the CUDA libraries.
Because Docker adds a layer for each Docker command used, I minimize the number of layers by running more than one shell command per RUN, chaining them with &&.
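The && chaining has another useful property: if any step fails, the rest of the chain is skipped and the whole RUN fails, so no broken layer gets committed. A minimal, docker-free sketch of the behavior:

```shell
# && runs the next command only if the previous one succeeded, so a
# failing step aborts the rest of the chain.
echo "step 1" && false && echo "step 3 never runs"
echo "chain exit status: $?"   # prints "chain exit status: 1"
```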
The following steps extract the sources, copy the files into the right places, and compile Sapporo and StarLab. Sapporo is the library that allows StarLab (developed for GRAPE) to run on GPUs.
The final line tells Docker that a container based on this image should start with /bin/bash active.
To build the image just run
time docker build --force-rm=true -t <your registry name>/starlab-cuda-<driver version>:$(date +"%Y%m%d") .
This is my build line:
- time is just to know how long it takes to build the image
- docker build --force-rm=true builds the image, removing the intermediate layers
- -t tags the image you create with a name you like; I use my Docker Hub username, the name of the program I'm dockerizing, the CUDA driver version (if any) and the build date
- the final dot is not a typo: it tells Docker to build the image using the Dockerfile in the current folder
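For reference, this is roughly what the tag expands to; the registry name me and the driver version are placeholders for your own values:

```shell
# Hypothetical expansion of the image tag used in the build line above;
# 'me' stands in for your Docker Hub username.
REGISTRY=me
DRIVER=340.46
TAG="$REGISTRY/starlab-cuda-$DRIVER:$(date +"%Y%m%d")"
echo "$TAG"   # e.g. me/starlab-cuda-340.46:20150615
```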
Let’s assume that the resulting image name is me/starlab-cuda-340.46-6.0.37-2015-06-15
At the end of the process you can check whether the image was successfully created (ok, you can also tell this from the errors!) by running:
$ docker images
REPOSITORY                                 TAG        IMAGE ID       CREATED          VIRTUAL SIZE
me/starlab-cuda-340.46-6.0.37-2015-06-15   20150615   b073d414323f   37 minutes ago   5.272 GB
Run a StarLab container
Now that you have created the image, it's time to run a container from it. To create and run a container based on your newly created image, run:
$ docker run -ti --device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidia1:/dev/nvidia1 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
-v <abs path to host folder>:<container folder> \
me/starlab-cuda-340.46-6.0.37-2015-06-15
where:
- docker run is obvious
- -ti means open an interactive pseudo-tty (that is, more or less, give me a terminal inside the container, once started, where I can run commands)
- --device specifies which devices to attach; in this case I am connecting 2 CUDA GPUs and allowing the Unified Virtual Memory to be used (it only works from CUDA 6 on)
- -v <abs path to host folder>:<container folder> allows sharing a folder between host and container
- me/starlab-cuda-340.46-6.0.37-2015-06-15 is the name of the image from which to create the container
You can check by running:
docker ps [-a]
CONTAINER ID   IMAGE                                      COMMAND       CREATED          STATUS          PORTS   NAMES
ccdffc10c680   me/starlab-cuda-340.46-6.0.37-2015-06-15   "/bin/bash"   15 seconds ago   Up 15 seconds           adoring_turing
The -a flag tells Docker to also show the stopped containers. Note that the container has a random name assigned by Docker.
It is also possible to run commands directly at container creation, for example:
$ time echo "Hello world"
Hello world
real 0m0.000s
user 0m0.000s
sys 0m0.000s
$ time docker run ubuntu:14.04 /bin/echo 'Hello world'
Hello world
real 0m0.219s
user 0m0.028s
sys 0m0.005s
In this example, the second command ran inside a Docker container.
We can do something better: we want a script that creates a container, starts it, runs some commands and then cleans everything up.
This could be quite easy, but we are using StarLab, which makes heavy use of pipes. I found three solutions to get it working, the last being the best.
The first attempt is something like this:
#!/bin/bash # shebang line to specify the interpreter
set -x # set -x tells bash to echo the command is going to run
# Create a docker container with devices and volumes and give it a name
docker create --name sltest -i -t \
--device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidia1:/dev/nvidia1 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
me/starlab-cuda-340.46-6.0.37-2015-06-15
# Start the container
docker start sltest
# Exec commands to create StarLab initial conditions
(docker exec -i sltest /slbin/makeking -n 100 -w 5 -i -u ) > makeking.out
(docker exec -i sltest /slbin/makemass -f 8 -l 0.1 -u 40 ) < makeking.out > makemass.out
(docker exec -i sltest /slbin/add_star -R 1 -Z 0.1 ) < makemass.out > add_star.out
(docker exec -i sltest /slbin/scale -R 1 -M 1 ) < add_star.out > ics.txt
# Start kira
(docker exec -i sltest /slbin/kira -t 3 -d 1 -D 1 -f 0 -n 10 -e 0 -B -b 1) < ics.txt > out.txt 2> err.txt
# Stop and delete the container
docker stop sltest
docker rm sltest
This example makes use of STDIN/STDOUT/STDERR redirection, but does not always work very well.
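To make the pattern explicit: the parentheses group the command into a subshell, so the < and > redirections are wired up on the host side, around the whole docker exec. The same shape works with any command; here is a docker-free sketch where tr stands in for a StarLab tool:

```shell
# Host-side redirection around a subshell, the same shape used with
# docker exec above; 'tr' stands in for a StarLab tool.
echo "hello starlab" > /tmp/step1.out
( tr 'a-z' 'A-Z' ) < /tmp/step1.out > /tmp/step2.out
cat /tmp/step2.out   # HELLO STARLAB
rm -f /tmp/step1.out /tmp/step2.out
```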
The second attempt, a little better, is:
#!/bin/bash
set -x
# Create env variables for the folders
LOCAL_FOLDER=~/starlab-results
DOCKER_FOLDER=/starlab-results
# Create a docker container with devices and volumes and give it a name
docker create --name sltest -i -t \
--device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidia1:/dev/nvidia1 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
-v $LOCAL_FOLDER:$DOCKER_FOLDER \
me/starlab-cuda-340.46-6.0.37-2015-06-15
# Start the container
docker start sltest
# Exec commands to create StarLab initial conditions
docker exec -i sltest bash -c "/slbin/makeking -n 100 -w 5 -i -u > $DOCKER_FOLDER/makeking.out"
docker exec -i sltest bash -c "/slbin/makemass -f 8 -l 0.1 -u 40 < $DOCKER_FOLDER/makeking.out > $DOCKER_FOLDER/makemass.out"
docker exec -i sltest bash -c "/slbin/add_star -R 1 -Z 0.1 < $DOCKER_FOLDER/makemass.out > $DOCKER_FOLDER/add_star.out"
docker exec -i sltest bash -c "/slbin/scale -R 1 -M 1 < $DOCKER_FOLDER/add_star.out > $DOCKER_FOLDER/ics.txt"
# Start kira
docker exec -i sltest bash -c "/slbin/kira -t 3 -d 1 -D 1 -f 0 \
-n 10 -e 0 -B -b 1 < $DOCKER_FOLDER/ics.txt \
> $DOCKER_FOLDER/out.txt 2> $DOCKER_FOLDER/err.txt"
# Stop and delete the container
docker stop sltest
docker rm sltest
In this second example we make use of a container folder attached to a host system folder. We will find our files in ~/starlab-results.
However, the way I prefer is to make the container bash read a script in the exchange folder. To do this, we need two files. The first creates the container and launches the second, located in the exchange folder.
$ cat dockerized_starlab.sh
#!/bin/bash
set -x
# Create a docker container with devices and volumes and give it a name
CONTAINER_NAME=test-001
docker create --name $CONTAINER_NAME -i -t \
--device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidia1:/dev/nvidia1 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
-v /home/ziosi/tests/$CONTAINER_NAME/results/:/sl-exchanges/ \
me/starlab-cuda-340.46-6.0.37-2015-06-15
# Start the container
docker start $CONTAINER_NAME
# Execute the script in the exchange folder
docker exec -i $CONTAINER_NAME bash -c "/sl-exchanges/run.sh"
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
The second may contain the instructions to run StarLab commands:
#!/bin/bash
set -x
for RUN in $(ls create_*.sh); do
echo "Run $RUN";
/slbin-GPU/makeking -n 1000 -w 5 -i -u > /sl-exchanges/makeking-$RUN.out;
/slbin-GPU/makemass -f 8 -l 0.1 -u 150 < /sl-exchanges/makeking-$RUN.out > /sl-exchanges/makemass-$RUN.out;
/slbin-GPU/add_star -R 1 -Z 0.10 < /sl-exchanges/makemass-$RUN.out > /sl-exchanges/add_star-$RUN.out;
/slbin-GPU/set_com -r 5 0 0 -v 0 1 0 < /sl-exchanges/add_star-$RUN.out > /sl-exchanges/set_com-$RUN.out;
/slbin-GPU/scale -R 1 -M 1 < /sl-exchanges/set_com-$RUN.out > /sl-exchanges/ics-$RUN.txt;
/slbin-GPU/kira -t 500 -d 1 -D 1 -f 0 -n 10 -e 0 -B -b 1 < /sl-exchanges/ics-$RUN.txt > /sl-exchanges/out-$RUN.txt 2> /sl-exchanges/err-$RUN.txt;
done
where I take advantage of the fact that I wrote a script to loop over the different simulations to be run.
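The naming scheme can be sketched in isolation; the create_*.sh files below are hypothetical placeholders, and a plain glob is used instead of $(ls ...), which is safer with unusual file names:

```shell
# Sketch of the output-naming scheme in run.sh: each create_*.sh script
# name becomes a suffix of the simulation output files.
TMP=$(mktemp -d)
( cd "$TMP"
  touch create_run1.sh create_run2.sh
  for RUN in create_*.sh; do
    echo "would write /sl-exchanges/out-$RUN.txt"
  done )
rm -rf "$TMP"
```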
That’s it!!