There are some Docker Security Best Practices that you need to consider while building, sharing, and running your applications. Docker is an open-source platform used to build, share, and run containerized applications. You can easily build Docker images containing your applications, share them within or outside your team, and run your applications with just a single command. Docker container security sounds easy, right? It is; however, only if you put a few best practices in place.
You probably already know what Docker is and how it works. Hence, we won’t get into details about it. In this blog, we will cover the Top 12 Docker Security Best Practices that you need to consider when using the Docker platform.
Containerization has taken the world by storm thanks to the flexibility with which applications can be deployed anywhere; however, it also introduces security vulnerabilities. Docker and Docker containers make it very easy to build, share, and deploy your applications, but that same ease makes it difficult to find, detect, report, and fix security issues after deployment. Much of this overhead can be prevented or minimized by shifting security left. Therefore, it is crucial to secure everything from the Dockerfile to the running Docker container in order to have Docker Container Security in place.
There are various measures that one can take to improve Docker Container Security right from the start, i.e. from writing a Dockerfile to running a Docker container. Let’s proceed and see the top 12 Docker Security Best Practices that can put Docker Container Security in place.
Docker containers run on the Docker Engine available on host machines, which may run Linux, macOS, or Windows. The Docker Engine is released in versions, and it is vital to use the latest stable version, which includes fixes for known issues from previous releases. The same applies to the host operating system.
E.g. as of 27th Dec 2021, if you are using Ubuntu as the host OS, use Ubuntu 20.04 LTS instead of Ubuntu 16.04 LTS, and for Docker Engine go with 20.10.11. See the Docker release notes for more information and check the OS platforms supported by Docker.
It is important to have updates and patches in place, as every patch fixes vulnerabilities such as CVE-2021-41092. That particular patch fixed a bug in the Docker CLI: when a misconfigured configuration file (typically `~/.docker/config.json`) listed a `credsStore` or `credHelpers`, the credentials used to log in to a private registry from the CLI were sent to `registry-1.docker.io` rather than the intended registry.
Refer to the following screenshot to understand the release cycle of Ubuntu OS.
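To see what you are currently running and to pull in the latest patched packages, something like the following should work on an Ubuntu host; it assumes Docker was installed from Docker's official apt repository (package names docker-ce, docker-ce-cli, containerd.io).

```
# Check the Docker Engine and host OS versions currently in use
docker version --format '{{.Server.Version}}'
lsb_release -ds

# On Ubuntu, upgrade the Docker packages to the latest patched releases
sudo apt-get update
sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io
```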
A Dockerfile contains the instructions used to create Docker images. COPY, RUN, ADD, CMD, ENTRYPOINT, etc. are a few of the instructions that may be part of a Dockerfile. Along with them, you can use ENV and ARG to set variables at run time or build time, depending on the requirement. While setting up such environment variables, never assign secrets, credentials, or passwords to variables in the Dockerfile, and never hardcode them in any instruction. Consider this one of the Dockerfile security best practices, as ignoring it can lead to potential security threats and can cost you a lot.
E.g. if you hardcode the password or connection details of your database into environment variables in the Dockerfile and then upload that Dockerfile to a public repository, anyone with access to the repository can read those credentials and connect to the database.
```
FROM mysql:5.7

# Setting a username is acceptable
ENV DATABASE_USERNAME=admin

# Avoid setting a password in the Dockerfile at any cost
ENV DATABASE_PASSWORD=mypassword@!@#$
```
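As a minimal sketch of the safer alternative, keep the password out of the image entirely and inject it when the container starts, for example from the host environment or a secrets manager. The variable names below simply mirror the illustration above; they are placeholders, not the official variables of the mysql image.

```
# Inject credentials at run time instead of baking them into the image;
# DATABASE_PASSWORD is read from the host environment when the container starts.
docker run -d \
  -e DATABASE_USERNAME=admin \
  -e DATABASE_PASSWORD="${DATABASE_PASSWORD}" \
  mysql:5.7
```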
An image registry is where Docker images are stored. It can be private or public. Docker Hub is Docker's official cloud-based registry for Docker images, and it is the default registry when you install Docker. This means that if you do not configure any other registry, images are pulled from this public repository.
When you pull Docker images from any public registry, be aware of the source, as you might not know who built them, how they were built, or whether they are trustworthy. Whenever you pull an image from an untrusted publisher, verify the source registry and the Dockerfile used to build the image, and carefully choose the base image for your own Dockerfile, i.e. the FROM instruction.
```
FROM <registry/image:tag>   # <- verify the base image before you use it
```
E.g. even if you are using a public registry like Docker Hub, you can look at the images and check whether they carry the “Verified Publisher” badge. If an image does not have the “Verified Publisher” badge, consider avoiding it.
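If you want an additional check when pulling, Docker Content Trust can be enabled so that the CLI refuses to pull images that are not signed. The image name and tag below are placeholders for whatever signed image you intend to use.

```
# With content trust enabled, docker pull fails for unsigned images
export DOCKER_CONTENT_TRUST=1
docker pull myorg/myapp:1.0.0   # placeholder image; must be signed to be pulled
```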
While writing a Dockerfile for an application that needs files copied from your local machine into the Docker image, be aware of what the COPY instruction actually copies. Your local machine may hold files containing confidential data or secrets, and if these files end up inside the image, anyone with access to the container can extract them. Hence, it is very important to copy only the files needed in the container rather than copying everything, as is done in the instruction shown below, to improve Docker Container Security.
You can also give .dockerignore a try: it helps exclude files and directories that match the listed patterns, so large or sensitive files are not unnecessarily sent into images via ADD or COPY.
E.g. with the following COPY instruction, all the files from the current location get copied into the Docker image. Always avoid “COPY . ./” and explicitly specify the file names instead, such as “COPY package.json ./”.
```
FROM node:12.18.4-alpine
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH

# Avoid this kind of COPY instruction
COPY . ./
```
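A minimal sketch of the safer approach, assuming a typical Node.js project whose sources live under src/ (the file layout is an assumption for illustration): list explicit COPY targets and add a .dockerignore so secrets and local clutter never enter the build context.

```
# .dockerignore — keep VCS metadata, local dependencies, and secrets out of the build context
.git
node_modules
.env
*.pem
```

```
FROM node:12.18.4-alpine
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH

# Copy only what the build actually needs
COPY package.json package-lock.json ./
RUN npm install
COPY src ./src
```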
Docker images are built from Dockerfiles, and Dockerfiles contain the instructions to choose a base image, install packages, start applications, etc. A Dockerfile may also contain credentials that were hardcoded by mistake. So what if the base image you use has vulnerabilities, or the credentials hardcoded in the Dockerfile leak from a container created from the image built with that Dockerfile?
Scanning images is the process of identifying known security vulnerabilities in your Docker images so that you can fix them before pushing them to Docker Hub or any other registry.
You can now run Snyk vulnerability scans directly from the Docker Desktop CLI as Snyk and Docker have partnered together.
E.g. you can get a detailed report about a Docker image by also providing the Dockerfile used to create it:
```
docker scan --file PATH_TO_DOCKERFILE DOCKER_IMAGE
```
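For instance, assuming you have built an image locally under the placeholder name myapp:1.0.0, a scan could look like this:

```
# Scan a locally built image and include its Dockerfile for base-image recommendations
docker scan --file ./Dockerfile myapp:1.0.0
```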
In Docker, images have tags. The most common, and the default, tag for Docker images is “latest”; if you do not assign a tag to your image, it gets the “latest” tag by default. Docker image tags are not immutable, so multiple different images can be published under the same tag over time. Hence, it is very important to use fixed, specific tags, such as a version number or an image digest, instead of relying on a mutable tag like “latest”.
E.g. let’s say you publish every build of your application image under the same “latest” tag. A teammate, or your CI pipeline, then pushes a new build with that tag, and from that point on anyone pulling the image silently gets different code from what was originally tested. Pinning a fixed version tag avoids this.
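A small sketch of the difference for the FROM instruction; the three lines below are alternatives shown together only for comparison, and the digest value is a placeholder.

```
# Mutable: whoever publishes "latest" next changes what your build uses
FROM node:latest

# Better: a fixed version tag
FROM node:12.18.4-alpine

# Strictest: pin by digest so the base image can never change underneath you
FROM node@sha256:<digest-of-the-verified-image>
```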
There are various Dockerfile linters available that help make sure a Dockerfile follows Dockerfile security best practices. Linters traverse your Dockerfiles and raise warnings when the Dockerfiles do not follow best practices, and they can halt the build process when warnings are generated. You can leverage Hadolint, an open-source project that lints Dockerfiles and validates the inline bash they contain.
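For example, Hadolint's documented container usage lets you lint a Dockerfile without installing anything locally:

```
# Lint a Dockerfile with Hadolint's official image; warnings are printed to stdout
docker run --rm -i hadolint/hadolint < Dockerfile
```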
By default, Docker containers running on a host machine can use all of its RAM and CPU. If a container is compromised, attackers may try to use the host machine’s resources to perform malicious activity. Also, if one container starts consuming all the resources of the host machine, other containers on the same host may be impacted by resource starvation. To avoid such situations, it is recommended to set resource limits on Docker containers so that they cannot use more resources than allotted to them; this also helps provide Docker Container Security.
E.g. on a host machine with 1 CPU, the first command below limits the container to at most 50% of the CPU every second, and the second command limits the container’s memory usage to 1 GB.
CPU limit:

```
docker run -it --cpus=".5" ubuntu /bin/bash
```

RAM limit:

```
docker run -it --memory="1g" ubuntu /bin/bash
```
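After starting containers with such limits, you can confirm they are in effect:

```
# Shows current CPU and memory usage of running containers against their configured limits
docker stats --no-stream
```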
Docker communicates through a UNIX domain socket called /var/run/docker.sock, which is the main entry point for the Docker API. Anyone with access to the Docker daemon socket effectively has root access. Allowing any user to write to /var/run/docker.sock, or exposing the socket to a container, is therefore a serious Docker Container Security risk for the rest of the system, as it grants root privileges. Hence, never mount /var/run/docker.sock inside a Docker container.
E.g. never run your docker run command with an option like “-v /var/run/docker.sock:/var/run/docker.sock”, and consider this one of the Docker Security Best Practices that helps keep your system protected.
As per the “2021 Container Security and Usage Report” by sysdig.com, the majority of images are overly permissive, with 58% of containers running as root. This is not a Dockerfile best practice and should be avoided, as there are very few instances where you really need to run the process in the container as the root user. To make sure you don’t, always specify a “USER” instruction with a dedicated user name.
Using a non-root user in your Docker containers helps mitigate the impact of potential vulnerabilities in the container runtime and contributes to Docker Container Security.
E.g. always create and use a non-root user to run your application processes in the container. The non-root user you need may not exist in the base image, so you will first have to create one before using it.
```
FROM alpine:3.12

# Create a non-root user and give it ownership of the application data directory
RUN adduser -D non-root-user && chown -R non-root-user /myapp-data

# Use the non-root user for everything that follows
USER non-root-user
```
Many artifacts created throughout the Docker build process are only necessary at build time: packages required for compilation, dependencies needed to run unit tests, temporary files, secrets, and so on. Keeping these artifacts in the final image increases its size, which slows down downloads and expands the attack surface because more packages are shipped. This is one of the reasons Docker supports multi-stage builds: the feature lets you use several temporary images during the build process while keeping only the last image and the data you explicitly copied into it.
As a result, you end up with two images:

- The build-stage image, a large one containing the compilers, build tools, and test dependencies needed to produce the application.
- The final image, a much smaller one containing only the application and the artifacts you explicitly copied into it.
In this way, adopting multi-stage builds for your Docker images reduces their size and avoids installing unwanted libraries that increase the chance of a potential security risk.
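A minimal sketch of a multi-stage Dockerfile, assuming a Node.js application that is compiled by an npm “build” script into a dist/ directory; the script name, paths, and start command are assumptions for illustration.

```
# Stage 1: build — contains compilers, dev dependencies, and test tooling
FROM node:12.18.4-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY src ./src
RUN npm run build

# Stage 2: final image — only runtime dependencies and the built artifacts
FROM node:12.18.4-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --only=production
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/index.js"]
```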
As new security vulnerabilities are found regularly, keeping up to date on security patches is a general Docker Security Best Practice. So it is important to use frequently updated base images and build your own images on top of them. There is no need to always jump to the most recent version, which may contain breaking changes, but you should have a versioning strategy: prefer small, stable (or LTS) base image variants, rebuild your images regularly so they pick up the latest patches published for the base tag, and replace base images that are no longer maintained.
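One practical habit is to force every rebuild to refresh the base image so that the newest patches published for that tag are picked up; the image name and tag below are placeholders.

```
# --pull makes the build re-download the newest image behind the base tag
docker build --pull -t myapp:1.0.1 .
```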
We all know that “prevention is better than cure”, and the same applies to Docker: you cannot simply ignore container security, as that can lead to a huge disaster. We now know why Docker Container Security is vital and why it has to be in place. Shifting security left is essential for improving your Docker environment and reducing the management overhead.
The top 12 recommendations above, focused on Dockerfile security best practices and Docker Container Security best practices, will help you on the journey to securing your Docker containers and your Docker environment.
Secure your Docker containers with a highly experienced team. Our DevOps engineers have the skill set and certifications needed to scale your company up, and we can surely help.
| Best Practice | Summary |
|---|---|
| 1. Keep Docker Host and Docker Engine Up to Date | Always make sure your Docker environment and Docker host OS are up to date. |
| 2. Avoid Storing Confidential Data in Dockerfile Instructions | Do not store database passwords, login details, or access keys in your Dockerfile. |
| 3. Avoid Using an Untrusted Image Registry | Always try to use private registries. |
| 4. Beware of Recursive Copy | Make sure you do not copy files containing confidential information from the local machine into images. |
| 5. Scan Images During Development | Scan images for security vulnerabilities so that you can fix them before they reach the production environment. |
| 6. Use Fixed Tags for Immutability | Do not use mutable tags like “latest” to tag your images. |
| 7. Lint the Dockerfile | Use linters to make sure the Dockerfile follows security best practices. |
| 8. Limit Container Resources | Have CPU/RAM limits in place for containers. |
| 9. Don’t Expose the Docker Daemon Socket | Do not mount /var/run/docker.sock in your containers. |
| 10. Run Containers as a Non-Root User | Never use the root user to run your containers. |
| 11. Keep Docker Images as Small as Possible | Use multi-stage builds so the final image contains only what the application needs. |
| 12. Use Docker Images that are Up to Date | Build on frequently updated base images. |
A Dockerfile is a set of instructions used to create Docker images, which in turn are used to create Docker containers.
Docker Container Security, in simple words, means making sure that the Docker host, the Docker environment, and the Docker containers run as securely as possible by following Docker Security Best Practices.
Follow Docker Security Best Practices and Dockerfile security best practices to make sure you use base images with the fewest (or no) known vulnerabilities, your Dockerfile does not expose confidential data, your containers do not run as the root user, and they run with limited resources.