Docker: Part 2
-
Digging deeper into Docker
In the previous instalment we got to grips with some basic Docker concepts, so today I would like to build on what we know. We will look at what you can do with Docker, and how we can create something more interesting than a hello-world application.
Containers are ephemeral
This is a concept you will encounter throughout the world of containerisation: containers are disposable, and none of the data inside a container persists when the container stops. This is by design. You may have heard the phrase 'microservice'? Containers should do one specific task, and if you need to do a lot of that task you can "scale" the service by creating more containers. With this mindset, containers should be easy to create and delete on demand. Websites and web services built this way can grow and shrink to meet demand, which can be good for the bank balance - but also for the environment!
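You can see this ephemerality directly from the terminal. Here is a minimal sketch, assuming Docker and the public alpine image are available (the container name is illustrative, and the commands are guarded so the script is a no-op on a machine without Docker):

```shell
# Demonstrate that container data does not survive the container.
NAME=scratch-demo
if command -v docker >/dev/null 2>&1; then
  # Write a file inside a throwaway container, then remove the container.
  docker run --name "$NAME" alpine sh -c 'echo hello > /tmp/note.txt'
  docker rm "$NAME"
  # Start a fresh container from the same image: the file is gone.
  docker run --rm alpine cat /tmp/note.txt   # "No such file or directory"
fi
```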
Packaged application
One typical use for this is a self-contained, packaged application. Here at the OPF we have demonstrator packages for our web-based tools that are packaged as Docker images. These require no persistent storage and are therefore fully self-contained; they can be spun up and down as needed.
One example is veraPDF. This is a command-line application, but there are Desktop GUI and Web GUI versions too. In the past, installing Java applications has been a bit tricky because you needed a compatible Java already installed, and perhaps admin permissions (recent veraPDF installers have made this task easier). However, if you have Docker available to you, then all of these dependency issues and Java version problems vanish. Getting veraPDF up and running is easy; a web demonstrator running in the browser is just a few keystrokes away:
docker run -d -p 8123:8080 --name veraPDF darrenopf/verapdf-rest
After the image is pulled from Docker Hub and the container starts running, you can access the web GUI in your browser by going to the localhost address on port 8123. You can also use the Docker Desktop tool to get the same result.
Search for the Image
Run the Image
By visiting the locally running web application (or by clicking the port mapping link under the container name in Docker Desktop) I can see the Web GUI for veraPDF. This is one of the ways we here at the OPF use Docker: we have automated processes that package our products into a Docker image, so that they can be easily distributed and used by others.
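If you prefer the terminal to the browser, a quick smoke test confirms the container is answering on the mapped port. This is just a sketch - port 8123 comes from the run command above, and the check is guarded so it only fires where Docker and curl are present:

```shell
# Probe the mapped port and print the HTTP status code.
PORT=8123
if command -v docker >/dev/null 2>&1 && command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:${PORT}/"
fi
```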
Image Layering
Docker images can be extended into new images through a mechanism known as image layering. Docker utilizes a layered filesystem that allows images to be built incrementally by adding new layers on top of existing ones. When extending an image, Docker creates a new layer containing only the modifications or additions made in the new image. This approach leverages the existing layers from the base image, reducing redundancy and optimizing storage usage.
You can do this yourself easily via the command line when you make modifications to an image you pulled locally from Docker Hub. The new image is then available locally to you on your PC, or you can push it back up to the Docker Hub website to share with the world.
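Sharing a local image is a tag-and-push affair. In this sketch the image name and account are illustrative placeholders, not names from the article - substitute your own Docker Hub username - and the commands are guarded so the script no-ops without Docker:

```shell
# Tag a locally built image under a Docker Hub account and push it.
ACCOUNT=myaccount          # illustrative: your Docker Hub username
IMAGE=my-nginx             # illustrative: the locally modified image
if command -v docker >/dev/null 2>&1; then
  docker tag "$IMAGE" "${ACCOUNT}/${IMAGE}:latest"
  docker push "${ACCOUNT}/${IMAGE}:latest"   # requires 'docker login' first
fi
```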
Let's create an Image
We will start with the basic nginx web server image. For those familiar with setting up nginx: you normally put your HTML files into the correct folder, perhaps edit a website.conf settings file, and run the server.
With Docker, I could run the image, get a shell inside the running container, and edit the files I need. Or, with Docker Desktop, I could find the files and edit them:
Find and edit the index.html file (changing 'nginx' to 'DOCKER').
Save the file and witness the changes live, by reloading the web page.
Docker Desktop does not have the commit command, so I jump into the terminal and get the container ID via the
docker ps
command. I can then use the 'docker commit' command with that ID and a new image name to create my own extended nginx image.
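Put together, the terminal side of that workflow looks something like the sketch below. The new image name and the filter on the base image are illustrative, and everything is guarded so the script no-ops where Docker is absent:

```shell
# Commit a running container's changes into a new, extended image.
NEW_IMAGE=my-nginx         # illustrative name for the extended image
if command -v docker >/dev/null 2>&1; then
  docker ps                                          # list running containers
  CONTAINER_ID=$(docker ps -qf "ancestor=nginx" | head -n1)  # grab the nginx one
  docker commit "$CONTAINER_ID" "$NEW_IMAGE"         # bake the edits into an image
  docker run -d -p 8080:80 --name my-nginx-test "$NEW_IMAGE"  # run the result
fi
```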
I can now run that new image in a new container - and we can see the updated home page is part of the image.
Is this practical?
I don't think this is a practical workflow for someone developing a website. It might be a way to deploy a website on a cloud hosting platform, but even in that use case there would be a better way to generate the image.
Practical use cases
If we were going to develop a website using this nginx image and Docker to help us, then a much better approach would be to create a folder for our project, put our HTML files inside this project folder and, while we are at it, put the project under git management. We can then use Docker volumes to inject these files into the container's filesystem. To do that we use the
-v <source>:<destination>
flag and pass in the source and destination folders (or an individual file) from the host filesystem to the container filesystem.
Volumes
Volumes - we finally made it to one of the most important core concepts in Docker, and I may have taken a bit of a roundabout way to get here. But I think their usefulness and power are better demonstrated when you see how cumbersome the alternative is! We will also come full circle later in this series when we come to package our applications into images by including our code or binaries in the image.
Example time
I recently created a simple nginx-based project for the OPF event in Finland that is a good candidate for this example. There is an 'html' folder in the project that we can map into the nginx folder, and when the container runs we will see the demo application - not the normal nginx welcome screen.
By putting together everything we have learned so far, the terminal command to launch our app inside a running Container would look something like this:
docker run -d -v ./html:/usr/share/nginx/html -p 80:80 --name my-volume-app nginx
We have the -d for background operation ('detached' in Docker speak); next the volume, mapping the app's html folder over the container's html folder; then the port mapping, the container name and finally the base image to use from Docker Hub.
If I were to edit the html files and save them, and then reload the web page the app is running in, I would see those changes immediately in the browser! Cool!
And just to check that we are seeing the files we expect, let's look inside the container's filesystem:
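Without Docker Desktop, the same inspection can be done from the terminal with docker exec, using the container name and path from the run command above (guarded so the script no-ops without Docker):

```shell
# Look inside the running container's web root.
APP_DIR=/usr/share/nginx/html
if command -v docker >/dev/null 2>&1; then
  docker exec my-volume-app ls -l "$APP_DIR"   # should list the project's html files
  docker exec -it my-volume-app sh             # or open a shell to poke around
fi
```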
Here I can see that the files inside the container have been replaced by the project files. (The reason this UI suddenly looks different is that this is actually VSCode, an IDE, or Integrated Development Environment, where I spend most of my time when dealing with GitHub and Docker. In this screenshot I have a Docker explorer running in the sidebar and a bash shell in the bottom pane.)
Conclusion
We covered a lot of ground in a short time span, so I am going to take a break here. We have explored a little more of the inner workings of Docker, and we have seen how to actually test and develop applications with Docker and volumes to help us. The commands to mount volumes and to create images are becoming a bit unwieldy, so next time we will look at some tools and processes that make our lives easier when it comes to volumes and creating images. We are also going to look at using containers in groups (putting the stack in full-stack development!). So hopefully most of what we did today will become academic theory. Next time, look out for stacks, docker-compose and Dockerfiles!