Docker is one of the biggest advances in production software engineering of the past decade. If you're like me, though, you slept on it this entire time and now feel too far behind to catch up and finally start using it. I'm going to write a few short posts on the rudimentary basics of getting up and running with Docker: creating a Dockerfile, publishing it to Docker Hub, and deploying/running it live on a server. This post will cover making a simple Dockerfile and building your image. Let's go!
A Dockerfile is a simple, declarative file that contains the commands for building a Docker image. I find the easiest way to work with one is to treat it as the sequence of commands you'd have to run on a fresh machine to get your project up and running. Let's take a look at this simple example from Via:
```dockerfile
FROM ubuntu:20.04
WORKDIR /code
RUN apt -y update && apt -y install build-essential python-numpy python-setuptools python3-scipy libatlas-base-dev libatlas3-base python3-pip
RUN python3 -m pip install virtualenv
COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
EXPOSE 8080
COPY . .
CMD ["make", "run_bottle"]
```
FROM ubuntu:20.04 Most (if not all... I'm not experienced with this, really you should be finding a better source of information!) Dockerfiles start with a FROM statement. This defines the base image you want to build on, published and maintained by people who really know what they're doing. In this case we are basing our environment on ubuntu:20.04. Generally speaking this is not ideal, as Ubuntu is a large image that makes builds and pulls slower; a smaller base like Alpine is usually preferred. A big image like Ubuntu is helpful, however, for bootstrapping or for dealing with awkward dependencies.
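For comparison, here is a sketch of the same build on a slimmer base. This is an assumption on my part, not the project's actual Dockerfile: scientific packages like scipy can be painful to build on Alpine, so the Debian-based python:3.9-slim image is a gentler middle ground.

```dockerfile
# Hypothetical alternative: a slim Python base image instead of full Ubuntu.
FROM python:3.9-slim
WORKDIR /code
# slim images ship without make, so install it for the CMD below
RUN apt-get update && apt-get install -y --no-install-recommends make \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["make", "run_bottle"]
```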
WORKDIR /code simply sets the current working directory within the container; every subsequent RUN, COPY, and CMD statement runs relative to it. I consider it to be like cd for the container context.
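A minimal illustration of that behaviour (the file name here is hypothetical):

```dockerfile
WORKDIR /code
COPY app.py app.py   # lands at /code/app.py because of WORKDIR
RUN pwd              # prints /code during the build
```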
RUN ... This is where things start looking bash-y. The RUN command simply executes whatever command you give it within the container context. Here we're updating the package index, installing system dependencies through apt, and installing the virtualenv package with pip.
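Worth knowing: each RUN statement creates a new image layer, which is why related commands are commonly chained with && rather than split across several RUNs. A sketch of the idiom:

```dockerfile
# One layer instead of three; also cleans the apt cache so it
# isn't baked into the image.
RUN apt -y update \
    && apt -y install build-essential python3-pip \
    && rm -rf /var/lib/apt/lists/*
```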
COPY is more interesting, as it passes files between the host system and the container context. Here we are copying the requirements.txt file from the root of the repository to /code/requirements.txt (the earlier WORKDIR statement placed us in that directory!).
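Copying requirements.txt on its own, before the rest of the code, isn't an accident: Docker caches each layer, so as long as requirements.txt hasn't changed, rebuilds can skip the slow pip install entirely. A sketch of the cache-friendly ordering:

```dockerfile
COPY requirements.txt requirements.txt   # layer invalidated only when deps change
RUN python3 -m pip install -r requirements.txt
COPY . .                                 # layer invalidated on any code change
```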
EXPOSE documents which ports the container listens on. Despite the name, it doesn't actually open or map anything by itself; you still map a host port to the container port with the -p flag when you run the container. Here we're picking 8080 because that's the port we have configured Bottle to run on.
Next there's another COPY statement to copy all the code in our repo to the /code dir in the container, and then finally...
CMD. This is more nuanced and is arguably best used just to specify default arguments to an ENTRYPOINT statement, but we're only looking at getting up and running. It can be thought of as the command run when the container starts. Here we're using a make rule to start our server.
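For reference, the ENTRYPOINT/CMD split mentioned above looks like this. This is a sketch, not part of the Via Dockerfile, and the script name is hypothetical:

```dockerfile
ENTRYPOINT ["python3", "server.py"]   # always runs
CMD ["--port", "8080"]                # default args; overridable at `docker run`
```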
Now that the Dockerfile is all made, let's build and run it!
```shell
docker build -t conorjf/via-web .
docker run -p 8080:8080 conorjf/via-web:latest
```
docker build -t <tag_name> . uses the Dockerfile in the current directory (and the current directory as the build context) to build your image with the tag <tag_name>.
docker run will then run the image and tag you specify; passing -p 8080:8080 maps port 8080 on your machine to port 8080 in the container. All going well, your server should now be accessible on localhost. You can also drop into a shell in the running container by finding its name with docker ps and then running docker exec -it <container_name> bash, and do whatever looking around you'd like to see what kind of a container you've built!
I am a big advocate of learning just enough to get started with something; once you're actually using it, you can discover the nuance and best practices in a breadth-first way, wherever your curiosity points. I highly recommend checking out the Dockerfile Best Practices. Please leave me any feedback at all, as I have never written content like this before and would like to make it as informative as possible!