1. Introduction

As more of our applications are deployed to cloud environments, working with Docker is becoming a necessary skill for developers. Often when debugging applications, it is useful to copy files into or out of our Docker containers.

In this tutorial, we'll look at some different ways we can copy files to and from Docker containers.

2. Docker cp Command

The quickest way to copy files to and from a Docker container is to use the docker cp command. This command closely mimics the Unix cp command and has the following syntax:

docker cp <SRC> <DEST>

Before we look at some examples of this command, let's assume we have the following Docker containers running:
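
The examples that follow use a container named grafana. One illustrative way to have such a container running (the image and options here are assumptions, not part of the original setup) is:

docker run -d --name grafana grafana/grafana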

The first example copies a file from the /tmp directory on the host machine into the Grafana install directory in the grafana container:
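
As a sketch, assuming a file named config.ini in /tmp and the default Grafana install path:

docker cp /tmp/config.ini grafana:/usr/share/grafana/conf/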

We can also use container IDs instead of their names:
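
For example, if the grafana container's ID were c81a57b8a8a2 (an illustrative value), the equivalent command would be:

docker cp /tmp/config.ini c81a57b8a8a2:/usr/share/grafana/conf/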

To copy files from the grafana container to the /tmp directory on the host machine, we just switch the order of the parameters:
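
Continuing the illustrative example above:

docker cp grafana:/usr/share/grafana/conf/config.ini /tmp/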

We can also copy an entire directory instead of single files. This example copies the entire conf directory from the grafana container to the /tmp directory on the host machine:
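
Assuming the same install path as above:

docker cp grafana:/usr/share/grafana/conf /tmp/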

The docker cp command does have some limitations. First, we cannot use it to copy between two containers. It can only be used to copy files between the host system and a single container.

Second, while it does have the same syntax as the Unix cp command, it does not support the same flags. In fact, it only supports two:

-a: Archive mode, which preserves all uid/gid information of the files being copied
-L: Always follow symbolic links in SRC

3. Volume Mounts

Another way to copy files to and from Docker containers is to use a volume mount. This means we make a directory from the host system available inside the container.

To use volume mounts, we have to run our container with the -v flag:
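
A minimal sketch, again assuming the grafana image used earlier:

docker run -d --name grafana -v /tmp:/transfer grafana/grafana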

The command above runs a grafana container and mounts the /tmp directory from the host machine as a new directory inside the container named /transfer. If we wanted to, we could provide multiple -v flags to create multiple volume mounts inside the container.

There are several advantages to this approach. First, we can use the Unix cp command, which supports many more flags and options than docker cp.

The second advantage is that we can create a single shared directory for all Docker containers. This means we can copy directly between containers as long as they all have the same volume mount.

Keep in mind this approach has the disadvantage that all files have to go through the volume mount. This means we cannot copy files in a single command. Instead, we first copy files into the mounted directory, and then into their final desired location.

Another drawback to this approach is we may have issues with file ownership. Docker containers typically only have a root user, which means files created inside the container will have root ownership by default. We can use the Unix chown command to restore file ownership if needed on the host machine.
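
For example, to take back ownership of a file we copied out through the mount (the path is illustrative):

sudo chown $USER:$USER /tmp/config.ini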

4. Dockerfile

Dockerfiles are used to build Docker images, which are then instantiated into Docker containers. Dockerfiles can contain several different instructions, one of which is COPY.

The COPY instruction lets us copy a file (or files) from the host system into the image. This means the files become a part of every container that is created from that image.

The syntax for the COPY instruction is similar to other copy commands we saw above:
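
In its basic form:

COPY <SRC> <DEST>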

Just like the other copy commands, SRC can be either a single file or a directory on the host machine. It can also include wildcard characters to match multiple files.

Let's look at some examples.

This will copy a single file from the current Docker build context into the image:
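
For instance, with an assumed file name and destination directory:

COPY properties.ini /config/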

And this will copy all XML files into the Docker image:
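
Again with an assumed destination directory:

COPY *.xml /config/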

The main downside of this approach is that we cannot use it with running Docker containers. The COPY instruction operates at image build time, not on containers, so it only makes sense when the set of files needed inside the image is known ahead of time.

5. Conclusion

In this tutorial, we've seen several ways to copy files to and from a Docker container. Each approach has pros and cons, so we should pick the one that best suits our needs.

In this lab, we will look at some basic Docker commands and a simple build-ship-run workflow. We’ll start by running some simple containers, then we’ll use a Dockerfile to build a custom app. Finally, we’ll look at how to use bind mounts to modify a running container as you might if you were actively developing using Docker.

Difficulty: Beginner (assumes no familiarity with Docker)

Time: Approximately 30 minutes

Tasks:

Task 0: Prerequisites

You will need all of the following to complete this lab:

  • A clone of the lab’s GitHub repo.
  • A DockerID.

Clone the Lab’s GitHub Repo

Use the following command to clone the lab’s repo from GitHub. This will make a copy of the lab’s repo in a new sub-directory called linux_tweet_app.
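
Assuming the standard lab repository on GitHub (the URL is an assumption if your workshop uses a fork):

git clone https://github.com/dockersamples/linux_tweet_app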

Make sure you have a DockerID

If you do not have a DockerID (a free login used to access Docker Hub), please visit Docker Hub and register for one. You will need this for later steps.

Task 1: Run some simple Docker containers

There are different ways to use containers. These include:

  1. To run a single task: This could be a shell script or a custom app.
  2. Interactively: This connects you to the container similar to the way you SSH into a remote server.
  3. In the background: For long-running services like websites and databases.

In this section you’ll try each of those options and see how Docker manages the workload.

Run a single task in an Alpine Linux container

In this step we’re going to start a new container and tell it to run the hostname command. The container will start, execute the hostname command, then exit.

  1. Run the following command in your Linux console.
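
    A sketch of this step: start a container from the alpine image and run hostname inside it.

    docker container run alpine hostname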

    The output below shows that the alpine:latest image could not be found locally. When this happens, Docker automatically pulls it from Docker Hub.

    After the image is pulled, the container’s hostname is displayed (888e89a3b36b in the example below).

  2. Docker keeps a container running as long as the process it started inside the container is still running. In this case the hostname process exits as soon as the output is written. This means the container stops. However, Docker doesn’t delete resources by default, so the container still exists in the Exited state.

    List all containers.
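
    A likely form of that command:

    docker container ls --all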

    Notice that your Alpine Linux container is in the Exited state.

    Note: The container ID is the hostname that the container displayed. In the example above it’s 888e89a3b36b.

Containers which do one task and then exit can be very useful. You could build a Docker image that executes a script to configure something. Anyone can execute that task just by running the container - they don’t need the actual scripts or configuration information.

Run an interactive Ubuntu container

You can run a container based on a different version of Linux than is running on your Docker host.

In the next example, we are going to run an Ubuntu Linux container on top of an Alpine Linux Docker host (Play With Docker uses Alpine Linux for its nodes).

  1. Run a Docker container and access its shell.
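
    Based on the parameters described below, the command looks like this:

    docker container run --interactive --tty --rm ubuntu bash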

    In this example, we’re giving Docker three parameters:

    • --interactive says you want an interactive session.
    • --tty allocates a pseudo-tty.
    • --rm tells Docker to go ahead and remove the container when it’s done executing.

    The first two parameters allow you to interact with the Docker container.

    We’re also telling the container to run bash as its main process (PID 1).

    When the container starts you’ll drop into the bash shell with the default prompt root@<container id>:/#. Docker has attached to the shell in the container, relaying input and output between your local session and the shell session in the container.

  2. Run the following commands in the container.
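
    The commands described in the next paragraph are:

    ls /
    ps aux
    cat /etc/issue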

    ls / will list the contents of the root directory in the container, ps aux will show the processes running in the container, and cat /etc/issue will show which Linux distro the container is running - in this case Ubuntu 18.04.3 LTS.

  3. Type exit to leave the shell session. This will terminate the bash process, causing the container to exit.

    Note: As we used the --rm flag when we started the container, Docker removed the container when it stopped. This means if you run another docker container ls --all you won’t see the Ubuntu container.

  4. For fun, let’s check the version of our host VM.
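
    One way to check, reusing the command we ran inside the container:

    cat /etc/issue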

    You should see:

Notice that our host VM is running Alpine Linux, yet we were able to run an Ubuntu container. As previously mentioned, the distribution of Linux inside the container does not need to match the distribution of Linux running on the Docker host.

However, Linux containers require the Docker host to be running a Linux kernel. For example, Linux containers cannot run directly on Windows Docker hosts. The same is true of Windows containers - they need to run on a Docker host with a Windows kernel.

Interactive containers are useful when you are putting together your own image. You can run a container and verify all the steps you need to deploy your app, and capture them in a Dockerfile.

You can commit a container to make an image from it - but you should avoid that wherever possible. It’s much better to use a repeatable Dockerfile to build your image. You’ll see that shortly.

Run a background MySQL container

Background containers are how you’ll run most applications. Here’s a simple example using MySQL.

  1. Run a new MySQL container with the following command.
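
    A sketch of the command, with the flags explained below (the password value is illustrative and should never be used for real):

    docker container run --detach --name mydb -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql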

    • --detach will run the container in the background.
    • --name will name it mydb.
    • -e will use an environment variable to specify the root password (NOTE: This should never be done in production).

    As the MySQL image was not available locally, Docker automatically pulled it from Docker Hub.

    As long as the MySQL process is running, Docker will keep the container running in the background.

  2. List the running containers.
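
    For example:

    docker container ls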

    Notice your container is running.

  3. You can check what’s happening in your containers by using a couple of built-in Docker commands: docker container logs and docker container top.
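
    To see the logs, using the container name created above:

    docker container logs mydb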

    This shows the logs from the MySQL Docker container.

    Let’s look at the processes running inside the container.
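
    For example:

    docker container top mydb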

    You should see the MySQL daemon (mysqld) is running in the container.

    Although MySQL is running, it is isolated within the container because no network ports have been published to the host. Network traffic from outside the Docker host cannot reach the container unless its ports are explicitly published.

  4. List the MySQL version using docker container exec.

    docker container exec allows you to run a command inside a container. In this example, we’ll use docker container exec to run the command-line equivalent of mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version inside our MySQL container.
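
    A sketch of the full command (note that, when run from the host shell, $MYSQL_ROOT_PASSWORD must either be set on the host or escaped so it is evaluated inside the container):

    docker container exec -it mydb mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version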

    You will see the MySQL version number, as well as a handy warning.

  5. You can also use docker container exec to connect to a new shell process inside an already-running container. Executing the command below will give you an interactive shell (sh) inside your MySQL container.
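
    For example:

    docker container exec -it mydb sh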

    Notice that your shell prompt has changed. This is because your shell is now connected to the sh process running inside of your container.

  6. Let’s check the version number by running the same command again, only this time from within the new shell session in the container.
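
    This time the command runs directly in the container's shell, where the password variable is set:

    mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version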

    Notice the output is the same as before.

  7. Type exit to leave the interactive shell session.

Task 2: Package and run a custom app using Docker

In this step you’ll learn how to package your own apps as Docker images using a Dockerfile.

The Dockerfile syntax is straightforward. In this task, we’re going to create a simple NGINX website from a Dockerfile.

Build a simple website image

Let’s have a look at the Dockerfile we’ll be using, which builds a simple website that allows you to send a tweet.

  1. Make sure you’re in the linux_tweet_app directory.
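
    If you cloned the repo into your home directory, that would be:

    cd ~/linux_tweet_app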

  2. Display the contents of the Dockerfile.
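
    One way to do this, followed by a sketch of what the file contains (exact contents may differ slightly in your copy of the repo):

    cat Dockerfile

    FROM nginx:latest
    COPY index.html /usr/share/nginx/html
    COPY linux.png /usr/share/nginx/html
    EXPOSE 80 443
    CMD ["nginx", "-g", "daemon off;"]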

    Let’s see what each of these lines in the Dockerfile do.

    • FROM specifies the base image to use as the starting point for this new image you’re creating. For this example we’re starting from nginx:latest.
    • COPY copies files from the Docker host into the image, at a known location. In this example, COPY is used to copy two files into the image: index.html and a graphic that will be used on our webpage.
    • EXPOSE documents which ports the application uses.
    • CMD specifies what command to run when a container is started from the image. Notice that we can specify the command, as well as run-time arguments.
  3. In order to make the following commands more copy/paste friendly, export an environment variable containing your DockerID (if you don’t have a DockerID you can get one for free via Docker Hub).

    You will have to manually type this command as it requires your unique DockerID.

    export DOCKERID=<your docker id>

  4. Echo the value of the variable back to the terminal to ensure it was stored correctly.
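
    For example:

    echo $DOCKERID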

  5. Use the docker image build command to create a new Docker image using the instructions in the Dockerfile.
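
    Based on the flags described below, the command would be:

    docker image build --tag $DOCKERID/linux_tweet_app:1.0 .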

    • --tag allows us to give the image a custom name. In this case it's made up of our DockerID, the application name, and a version. Having the Docker ID attached to the name will allow us to store it on Docker Hub in a later step.
    • . tells Docker to use the current directory as the build context

    Be sure to include the period (.) at the end of the command.

    The output below shows the Docker daemon executing each line in the Dockerfile.

  6. Use the docker container run command to start a new container from the image you created.
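
    A sketch of the command (the container name is an assumption):

    docker container run --detach --publish 80:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0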

    As this container will be running an NGINX web server, we’ll use the --publish flag to publish port 80 inside the container onto port 80 on the host. This will allow traffic coming in to the Docker host on port 80 to be directed to port 80 in the container. The format of the --publish flag is host_port:container_port.

    Any external traffic coming into the server on port 80 will now be directed into the container on port 80.

    In a later step you will see how to publish on two different host ports - this is necessary when two containers listen on the same container port, since a given port can only be published once on the host.

  7. Load the website in your browser; it should now be running.

  8. Once you’ve accessed your website, shut it down and remove it.
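
    Assuming the container name used above:

    docker container rm --force linux_tweet_app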

    Note: We used the --force parameter to remove the running container without shutting it down first. This will ungracefully shut down the container and permanently remove it from the Docker host.

    In a production environment you may want to use docker container stop to gracefully stop the container and leave it on the host. You can then use docker container rm to permanently remove it.

Task 3: Modify a running website

When you’re actively working on an application it is inconvenient to have to stop the container, rebuild the image, and run a new version every time you make a change to your source code.

One way to streamline this process is to mount the source code directory on the local machine into the running container. This will allow any changes made to the files on the host to be immediately reflected in the container.

We do this using something called a bind mount.

When you use a bind mount, a file or directory on the host machine is mounted into a container running on the same host.

Start our web app with a bind mount

  1. Let’s start the web app and mount the current directory into the container.

    In this example we’ll use the --mount flag to mount the current directory on the host into /usr/share/nginx/html inside the container.
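
    Putting that together (the image tag and container name follow the earlier steps):

    docker container run --detach --publish 80:80 --name linux_tweet_app --mount type=bind,source="$(pwd)",target=/usr/share/nginx/html $DOCKERID/linux_tweet_app:1.0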

    Be sure to run this command from within the linux_tweet_app directory on your Docker host.

    Remember from the Dockerfile that /usr/share/nginx/html is where the HTML files for the web app are stored.

  2. The website should be running.

Modify the running website

Bind mounts mean that any changes made to the local file system are immediately reflected in the running container.

  1. Copy a new index.html into the container.

    The Git repo that you pulled earlier contains several different versions of an index.html file. You can manually run an ls command from within the ~/linux_tweet_app directory to see a list of them. In this step we’ll replace index.html with index-new.html.
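
    Because of the bind mount, replacing the file on the host is enough:

    cp index-new.html index.html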

  2. Go to the running website and refresh the page. Notice that the site has changed.

    If you are comfortable with vi you can use it to load the local index.html file and make additional changes. Those too would be reflected when you reload the webpage. If you are really adventurous, why not try using docker container exec to access the running container and modify the files stored there?

Even though we’ve modified the index.html file on the local filesystem and seen the change reflected in the running container, we’ve not actually changed the Docker image that the container was started from.

To show this, stop the current container and re-run the 1.0 image without a bind mount.

  1. Stop and remove the currently running container.
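
    As before, assuming the container name used earlier:

    docker container rm --force linux_tweet_app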

  2. Rerun the current version without a bind mount.
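
    The same run command as before, minus the bind mount:

    docker container run --detach --publish 80:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0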

  3. Notice the website is back to the original version.

  4. Stop and remove the current container.

Update the image

To persist the changes you made to the index.html file into the image, you need to build a new version of the image.

  1. Build a new image and tag it as 2.0.

    Remember that you previously modified the index.html file on the Docker host's local filesystem. This means that running another docker image build command will build a new image with the updated index.html.
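
    The build command mirrors the earlier one, with the new tag:

    docker image build --tag $DOCKERID/linux_tweet_app:2.0 .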

    Be sure to include the period (.) at the end of the command.

    Notice how quickly that built! This is because Docker reuses cached layers and only rebuilds the layers that changed, rather than rebuilding the whole image.

  2. Let’s look at the images on the system.
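
    For example:

    docker image ls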

    You now have both versions of the web app on your host.

Test the new version

  1. Run a new container from the new version of the image.
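
    A sketch, reusing the name and port from before (the previous container was removed, so the name is free again):

    docker container run --detach --publish 80:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:2.0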

  2. Check the new version of the website (You may need to refresh your browser to get the new version to load).

    The web page will have an orange background.

    We can run both versions side by side. The only thing we need to be aware of is that we cannot publish host port 80 to two containers at the same time.

    As we’re already using port 80 for the container running from the 2.0 version of the image, we will start a new container and publish it on port 8080. Additionally, we need to give our container a unique name (old_linux_tweet_app).

  3. Run another new container, this time from the old version of the image.
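
    Using the port and name described above:

    docker container run --detach --publish 8080:80 --name old_linux_tweet_app $DOCKERID/linux_tweet_app:1.0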

    Notice that this command maps the new container to port 8080 on the host. This is because two containers cannot map to the same port on a single Docker host.

  4. View the old version of the website.

Push your images to Docker Hub

  1. List the images on your Docker host.
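
    As before:

    docker image ls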

    You will see that you now have two linux_tweet_app images - one tagged as 1.0 and the other as 2.0.

    These images are only stored in your Docker host's local repository. Your Docker host will be deleted after the workshop. In this step we’ll push the images to a public repository so you can run them from any Linux machine with Docker.

    Distribution is built into the Docker platform. You can build images locally and push them to a public or private registry, making them available to other users. Anyone with access can pull that image and run a container from it. The behavior of the app in the container will be the same for everyone, because the image contains the fully-configured app - the only requirements to run it are Linux and Docker.

    Docker Hub is the default public registry for Docker images.

  2. Before you can push your images, you will need to log into Docker Hub.
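
    For example:

    docker login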

    You will need to supply your Docker ID credentials when prompted.

  3. Push version 1.0 of your web app using docker image push.
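
    For example:

    docker image push $DOCKERID/linux_tweet_app:1.0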

    You’ll see the progress as the image is pushed up to Docker Hub.

  4. Now push version 2.0.
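
    Similarly:

    docker image push $DOCKERID/linux_tweet_app:2.0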

    Notice that several lines of the output say Layer already exists. This is because Docker reuses read-only layers that are identical to layers already uploaded with the previous image.

You can browse to https://hub.docker.com/r/<your docker id>/ and see your newly-pushed Docker images. These are public repositories, so anyone can pull the image - you don’t even need a Docker ID to pull public images. Docker Hub also supports private repositories.

Next Step

Check out the introduction to a multi-service application stack orchestration in the Application Containerization and Microservice Orchestration tutorial.