-
Some Docker usage
04/01/2019 at 05:50 • 0 comments
This is based on a demo I did in class recently, but expanded to cover more advanced topics that I had too little time to talk about then.
Part 1: Basics
Hello World, etc.
I assume you have already installed Docker and have a command line such as Docker Quickstart Terminal. To make sure it's working, type
$ docker run hello-world
That command will tell Docker to run a container based on an image called hello-world. If you haven't done this before, you won't have that image, so Docker will have to download it first. Then the container will run. It contains a program that just prints out a message explaining the process of running the container and getting the output from it back to your terminal. Once this is done, the container shuts down, but the image remains on your computer in case you want to run a container based on it again.
To see that you still have that image, you can type docker image ls, which will print a list of all images you have on your host. To see that the container is no longer running, you can type
$ docker container ls
which will print a list of all currently running containers (none, now that the hello-world container has shut down). That container still exists, though—you can use this command to show a list of all containers, including those not currently running:
$ docker container ls -a
The hello-world output suggests running the below command:
$ docker container run -it ubuntu /bin/bash
This will start an Ubuntu Linux container and give you an interactive session. -i makes sure STDIN stays open, and -t connects your terminal to a terminal in the container. You also have to specify what shell executable to run once the container is up (if a shell is what you want).
In my demo, I replaced ubuntu with alpine, to use Alpine Linux, because it would download faster. It's a lightweight Linux distribution which doesn't come with everything you might expect (it doesn't even include bash, so you run /bin/sh instead), but it was sufficient for my purposes. It's a popular base for Docker projects, too.
Anyway, when you're done playing with it, you can type exit to disconnect from the container, at which point it will shut down. (The program you told Docker to run when you started the container—bash—has exited, so Docker considers it time to shut down the container.)
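One flag worth knowing at this point (I didn't cover it in the demo): if you add --rm when you run a throwaway container, Docker deletes the container automatically as soon as it exits, so there's nothing left over to clean up later:
$ docker container run --rm -it alpine /bin/sh
Without --rm, the stopped container sticks around, which is actually what the next section relies on.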
You can also run programs other than shells, like so (noting the absence of -it):
$ docker container run ubuntu ls -l
which will list the contents of the root directory with details, and then the container will shut down. Even without -i or -t, the output will be routed by Docker from the virtual terminal in the container to the terminal you're typing in.
Isolation between containers, and a bit of container management
You can run multiple containers based on the same image, and they will automatically be isolated from each other. Each has its own filesystem, and they cannot communicate with each other unless you tell Docker to let them. For example, if we start a container:
$ docker run -it alpine /bin/sh
and create a file:
$ echo "secret text" > secret.txt
we can then, still in the same container, work with that file. But once we exit the container and start a new one from the same image:
$ exit
$ docker run alpine ls
we cannot see that file. It doesn't exist in this new container. But we can still get back into the container in which we created the file. First we need to know the shut-down container's ID:
$ docker container ls -a
Find the container in the list that matches the command you started it with and the time when you started it. Note the ID listed. You only need as much of the beginning of the ID as is necessary to distinguish it from other IDs. (The real IDs are actually much longer than what's shown, so even if you type in the full ID shown by the above command, you're still using the shortcut.)
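A way to skip the ID hunt entirely, if you don't mind naming things yourself: docker run accepts a --name flag, and a container started with a name of your choice (my-alpine here is just an example) can be referred to by that name instead of its ID in any later command:
$ docker run -it --name my-alpine alpine /bin/sh
$ docker container start my-alpine
For the container we already have, though, we'll stick with its ID.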
Then start that existing container:
$ docker container start <container ID>
OK, now the container is running again. But it's running in the background, and we have no interactive session. So how do we tell it what to do? We can use the exec command:
$ docker container exec <container ID> cat secret.txt
secret text
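exec can also give you a full interactive shell inside the running container if you pass it the same -it flags as before (not something I showed in the demo, but handy):
$ docker container exec -it <container ID> /bin/sh
Exiting a shell started this way doesn't stop the container, because the container's original main process is still running.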
Now that the container is running in the background, it will keep running even after commands we give it have finished. So, to shut it down, we need to use stop:
$ docker container stop <container ID>
We can also remove containers we no longer need:
$ docker container rm <container ID>
(and similarly for images).
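If you've been experimenting and have a pile of leftovers, there are also bulk options: docker container prune removes every stopped container at once (it asks for confirmation first), and docker image prune removes dangling images:
$ docker container prune
$ docker image prune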
Part 2: Trying to do something slightly useful
Now we'll try to make our own container with custom contents. The contents will be a program called FIGlet that makes ASCII art banner text, as well as a custom Python script that both runs FIGlet and returns the hostname of the container (which Docker sets to the container ID). To do this, we'll need to start from a stock Python image. This way of building custom images on top of off-the-shelf ones is pretty common in the world of Docker.
Here's how you make a custom Docker image:
- Prepare all of the files you want to put in it (executables, configuration files, etc.) by putting them in a directory on your host
- Make a text file with the name Dockerfile (no extension) that specifies how the container is to be built and run
- Tell Docker to build it
Here's the Dockerfile we'll use for this example—read the comments to see what everything means:
# Use official Alpine Linux-based Python 3 image as parent image
FROM python:3-alpine

# Update package repository
RUN apk update

# Install FIGlet
RUN apk add figlet

# Set the working directory to /app
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt ./

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir --trusted-host pypi.python.org -r requirements.txt

# Copy anything else in
COPY . .

# Define environment variable
ENV NAME a-container

# Run app.py when the container launches
CMD ["python", "app.py"]
Put it in a work directory on your host. In the same directory, put a Python file called app.py and a text file called requirements.txt. requirements.txt is actually empty right now; it would list the names of Python packages to be installed by pip (Python's package manager) during the Docker container build process, if we needed any. app.py contains this code:
import os, socket, subprocess

if __name__ == "__main__":
    name = os.getenv("NAME", "none")
    hostname = socket.gethostname()
    print("Hello from " + name + "/" + hostname)
    subprocess.run(["echo", "Python"])
(You might notice that FIGlet is not being called from Python. Wait and see.) Once those files are in place, navigate to your work directory in your Docker terminal, and run this command:
$ docker build -t python-demo:v0.1 .
Don't forget the dot at the end! That tells Docker to build from the contents of the current directory. If you don't want to cd to your work directory before building, you can just put the path in place of the dot. -t is for "tag", which comes in two parts: python-demo is the repository name, and v0.1 is the tag proper (which is usually used for a version number).
Docker will take a while to build your image, but it provides nice verbose output about what steps it's taking. When it's done, you can check that your image exists using docker image ls, and then run a container from it:
$ docker run python-demo:v0.1
It should output two lines of text, one just Python and one that says Hello from <hostname/container ID>, and then shut down.
Now we'll have it use FIGlet. Change app.py's last line to:
subprocess.run(["figlet", "Python"])
Now we have to rebuild the image:
$ docker build -t python-demo:v0.2 .
Note that the version number is now 0.2. Once it's done, if you run docker image ls, you should see both versions. You can still run v0.1 and it will produce the same output as before. But running v0.2:
$ docker run python-demo:v0.2
should produce a nice "Python" ASCII art banner instead of just text, in addition to the same Hello from <hostname/container ID> line. (The outputs may appear out of order, but not because the programs run in parallel; subprocess.run waits for the external program to finish. The likely cause is that Python buffers its print output when it isn't writing to a terminal, while echo or figlet writes straight to the container's output.)
You can also get an interactive session with a container based on this image (using sh, since the image is Alpine-based and doesn't include bash):
$ docker run -it python-demo:v0.2 /bin/sh
and then you can do whatever you want from the command line within the container, including using FIGlet to make other banners. Don't forget to clean up containers you won't use again by shutting them down and deleting them.
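The same goes for the demo images once you're completely done with them; docker image rm accepts one or more repository:tag names, for example:
$ docker image rm python-demo:v0.1 python-demo:v0.2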
-
Docker works, and a Docker Toolbox installation tip
04/01/2019 at 05:39 • 0 comments
Since my last log entries, I've gotten Docker working properly. I just had to install Docker Toolbox, uninstall Docker Toolbox when it stopped working, uninstall VirtualBox too, and reinstall Docker Toolbox, letting it install VirtualBox.
Therefore, I have a tip: If you're going to use Docker Toolbox with VirtualBox as the backend, uninstall VirtualBox if you have it installed, and let the Docker Toolbox installer reinstall it. It will be much more reliable that way.
Another solution I tried was Play with Docker, which is a web-based Docker learning environment. That worked great at first, but that was in the middle of the night when nobody else was using it. During North American daytime hours I found I couldn't get a session because it was at full capacity. So I just used my local Docker, and that was fine.
-
Running Docker Desktop on Windows means you can't use VirtualBox. Workaround 2: Docker Machine
02/18/2019 at 06:53 • 0 comments
Using Docker Desktop for Windows makes it impossible to simultaneously use hardware virtualization apps like VirtualBox and VMware Workstation (Player). This post discusses one workaround: using Docker Toolbox, which includes Docker Machine, to run Docker on Windows without using Hyper-V. This is the workaround I plan to use at present.
The previous post discussed another workaround: running Docker Engine on Linux, inside a VirtualBox hardware virtual machine running on a Windows host.
Docker Desktop is Docker's current official desktop application for Windows and macOS. Before it came out, the official way to use Docker on a workstation was Docker Toolbox. While Docker Toolbox is now mainly used for server provisioning, it is still available for people who need it for desktop use, such as people using a CPU or OS too old to run Docker Desktop. In our case, we can use it to avoid having to enable Hyper-V. (See the previous post for explanation of why I want to avoid Docker Desktop's Hyper-V requirement.)
Docker Toolbox includes a tool called Docker Machine.[1] (So does Docker Desktop, but I am avoiding installing Docker Desktop because of its Hyper-V requirement.) Docker Machine creates and manages virtual hosts and installs Docker Engine on them. It operates these virtual hosts using either Hyper-V (which does not help, because I am avoiding that) or VirtualBox.[2] Wait! The previous post's workaround was also VirtualBox! Yes, but in that case the user creates a VirtualBox virtual machine, installs a Linux operating system on it, and then installs Docker on that; no Docker components are installed on the host OS. With the Docker Machine method, the user installs Toolbox (or Desktop) on the host OS, and then uses Machine to create a VirtualBox VM with Docker Engine preinstalled. This VM runs a Linux operating system, on which Docker Engine runs.[3]
Once you have created a VM using Machine, you can use Machine to connect to the VM's Engine and run Docker commands as usual.
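Roughly, that workflow looks like the following sketch. The machine name default is just the conventional choice (it's the name the Docker Quickstart Terminal sets up for you), and the eval line assumes a bash-style shell such as that terminal:
$ docker-machine create --driver virtualbox default
$ eval $(docker-machine env default)
$ docker run hello-world
$ docker-machine stop default
docker-machine env prints the environment variables that point the docker client at the VM's Engine, and eval applies them to the current shell.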
This is probably the easier and better-supported of the two workarounds I have studied, so it is what I am going to try first. Keep an eye out for a post soon on how that goes for me.
References
-
Running Docker Desktop on Windows means you can't use VirtualBox. Workaround 1: VirtualBox
02/18/2019 at 06:53 • 1 comment
Using Docker Desktop for Windows makes it impossible to simultaneously use hardware virtualization apps like VirtualBox and VMware Workstation (Player). This post discusses one workaround: running Docker Engine on Linux, inside a VirtualBox hardware virtual machine running on a Windows host. This is a workaround that I do not intend to try at this time, and it has its limitations, but it also has some advantages.
The next post will discuss another workaround: using Docker Toolbox, which includes Docker Machine, to run Docker on Windows without using Hyper-V.
When you install Docker Desktop on Windows, it requires and automatically enables Hyper-V[1], a hypervisor from Microsoft. Hyper-V replaces your Windows OS as the host on the computer, and your Windows OS becomes a virtual machine. Unfortunately, inside this virtual machine, Windows no longer has access to the CPU's virtualization features. These features are necessary for running hardware virtualization applications like VirtualBox and VMware Workstation (Player).[2][3]
I need to be able to run VMware Workstation Player on a weekly basis for school at present, so I cannot reasonably install Docker Desktop on the Windows OS that's on my school laptop. Another reason is that I am still running Windows 7, which I think does not support Hyper-V, though a classmate has told me it worked for him.
I have not been able to find out in detail why Docker actually requires Hyper-V when running on Windows—people seem to just say what amounts to "that is a reasonable requirement" without explaining it. (I read—though I have forgotten where—that Docker may have plans to stop requiring Hyper-V when running on Windows, but even if they do end up doing that, it does not help us now.)
You probably noticed that I said "when running on Windows" a couple of times in the previous paragraph. Indeed, Docker does not require Hyper-V when running on Linux. Hyper-V isn't even available there. That is also the case for macOS—though on macOS, Docker runs its own hypervisor (using HyperKit, which is Mac-only because it is based on a macOS framework)[4].
This leads to the idea of a potential solution to the problem of running Docker while keeping the ability to run hardware VMs: create a hardware VM running Linux and run Docker inside that. This may not be practical for other purposes, such as serious development or production, where you might need Docker running directly on Windows for various reasons, but for just experimenting with Docker, it should be sufficient. But even in serious use cases, you might find it preferable to run Docker inside a Linux VM, because Docker on Linux can be more stable and easier to manage.
The VM cannot run Windows as the guest OS, because then you would be back to running Docker on Windows, which would require Hyper-V. Hyper-V cannot run inside the VM, because only one hypervisor at a time can use an x86 CPU's virtualization features: VirtualBox on the host OS is already using them, so they are unavailable to the guest OS. (It is theoretically possible for VirtualBox to provide these features inside its VMs, but it does not yet do so because this would be difficult to implement.[5])
Discussions about running Docker inside a Linux hardware VM on Windows are easy to find online. However, I will not be pursuing this option at this time, so I will end this post with some further-reading links:
- /r/docker: Docker inside VirtualBox
- /r/docker: Docker on Windows or Docker on Linux VM on Windows?
- Andrea Lettieri: Using Docker with VirtualBox and Windows 10
- Docker Forums: Running Docker Machine on a Ubuntu VM from VirtualBox
References
- "Install Docker Desktop for Windows." Docker. Retrieved 2019-02-17.
- "Answer to: Why VirtualBox or VMware can not run with Hyper-V enabled Windows 10". Veovis. 2017-05-13. Retrieved 2019-02-17.
- "Answer to: What is Hyper-V for". LawrenceC. 2014-03-23. Retrieved 2019-02-17.
- "Install Docker Desktop for Mac". Docker. Retrieved 2019-02-17.
- "[feature-request] Nested Virtualization: VT-in-VT". Technologov. 2009-05-16. Retrieved 2019-02-17.
-
Introduction to Docker
02/18/2019 at 03:24 • 0 comments
[coming soon]