Mastering Docker: A Comprehensive Guide to Creating, Running, and Managing Containers

1. Creating Docker Containers

1.1 Creating Docker Containers Using Existing Images

When creating containers using existing Docker images, the following steps are typically involved:

Obtain the Image: First, you need to obtain the required image from Docker Hub or another image repository. You can use the docker pull command to get the image, with the syntax as follows:

docker pull <image_name>:<tag>

Here, <image_name> is the name of the image you want to obtain, and <tag> is an optional version or identifier (it defaults to latest when omitted).

Create the Container: Once you have the required image, you can use the docker run command to create a container. Typically, when running a container, you can specify some options to customize the container’s behavior, such as port mapping, data volume mounting, etc. The basic syntax is as follows:

docker run [options] <image_name>:<tag>

Here, [options] are optional parameters used to configure how the container runs, and <image_name>:<tag> specifies the image to use along with its version or tag.

Example: Below is a simple example demonstrating how to create a running container using an existing nginx image and map the container’s port 80 to the host’s port 8080:

docker run -d -p 8080:80 nginx

In this example, the -d parameter indicates running the container in detached mode, -p 8080:80 specifies mapping the container’s port 80 to the host’s port 8080, and nginx is the name of the image to use.

Check Container Status: After creating a container, you can use the docker ps command to view the list of currently running containers to ensure the container has been successfully created and is running. To view all containers, including stopped ones, you can add the -a parameter.

These are the basic steps for creating containers using existing images. Depending on actual needs, you can further customize the container’s configuration, such as mounting data volumes, setting environment variables, etc.
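
For example, the sketch below combines several of these options in one docker run invocation; the host path and environment variable value are purely illustrative:

# Run nginx detached, with a port mapping, a bind-mounted content
# directory, and an environment variable
docker run -d \
  -p 8080:80 \
  -v /srv/site:/usr/share/nginx/html \
  -e TZ=UTC \
  nginx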

1.2 Custom Images

Custom images are created by writing a Dockerfile and using the Docker build command. Here are the basic steps for creating a custom image:

Write a Dockerfile: A Dockerfile is a text file containing instructions for creating an image. In a Dockerfile, you can define the operations and configurations needed from a base image, such as installing packages, setting environment variables, adding files, etc. Below is a simple example Dockerfile:

# Use the official Node.js image as the base image
FROM node:14

# Set the working directory
WORKDIR /app

# Copy files from the current directory to the working directory
COPY . .

# Install application dependencies
RUN npm install

# Expose the application's port
EXPOSE 3000

# Define the command to run when the container starts
CMD ["node", "app.js"]

Build the Image: After writing the Dockerfile, use the docker build command to build the image. In the command, you need to specify the directory where the Dockerfile is located (usually the current directory) and the name and optional tag for the image. For example:

docker build -t my-custom-image .

In this example, the -t parameter is used to specify the name of the image (my-custom-image), and . indicates that the Dockerfile is in the current directory.

Run the Container: After a successful build, you can use the docker run command to run the newly created image and create a container instance. For example:

docker run -d -p 3000:3000 my-custom-image

This command will run the container in detached mode and map the container’s port 3000 to the host’s port 3000.

By following these steps, you can create custom Docker images and run your applications in containers. In practice, you may need to further customize the Dockerfile configuration based on your application’s requirements.

2. Managing Docker Containers
2.1 Starting and Stopping Containers

Starting and stopping containers are common operations when using Docker to run containers. Here are the basic steps for starting and stopping containers:

Start a Container

  • Start an Existing Container: If you have already created a container but have not started it, you can use the docker start command to start it. The syntax is as follows:

docker start <container_id_or_name>

  • Create and Start a New Container: If you want to create and start a new container, you can use the docker run command. For example:

docker run -d <image_name>

This will start a new container in detached mode.

Stop a Container

  • Stop a Running Container: If a container is running, you can use the docker stop command to stop it. The syntax is as follows:

docker stop <container_id_or_name>

This will send a stop signal to the container, causing it to stop running.

  • Force Stop a Container: In some cases, you may need to force stop a container, even if it does not respond to the normal stop signal. You can use the docker kill command to force stop the container. For example:

docker kill <container_id_or_name>

Example: Below is an example demonstrating how to start and stop a container:

Start a Container:

docker start my-container

Stop a Container:

docker stop my-container

Be sure to replace my-container in the commands with the actual ID or name of your container. Using these commands, you can easily control the starting and stopping of Docker containers.
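
Docker also provides docker restart, which stops and then starts a container in a single step:

# Stop (if running) and start the container again
docker restart my-container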

2.2 Viewing Container Status

To view the status of a container, you can use the docker ps command. This command is used to list the currently running containers. If you want to view all containers, including those that have stopped, you can use the docker ps -a command.

View Running Containers: Use the following command to list the currently running containers:

docker ps

This will display a list containing some key information about the containers, such as container ID, image name, creation time, status, etc.

View All Containers (Including Stopped Containers): If you want to view all containers, including those that have stopped, you can use the -a parameter:

docker ps -a

This will display all containers, regardless of whether their status is running or stopped.

Example: Below is an example output:

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS      NAMES
abcdef123456   nginx:latest   "nginx -g 'daemon of…"   5 minutes ago   Up 5 minutes   80/tcp     my-nginx-container
123456abcdef   mysql:latest   "docker-entrypoint.s…"   2 hours ago     Up 2 hours     3306/tcp   my-mysql-container

In this example, the docker ps command shows two containers, one is a running Nginx container, and the other is a running MySQL container. The status column will display “Up” indicating the container is running.
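
If you only need a subset of containers or columns, docker ps also accepts --filter and --format options; two illustrative invocations:

# List only stopped containers
docker ps -a --filter "status=exited"

# Show just the name and status columns
docker ps --format "table {{.Names}}\t{{.Status}}"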

2.3 Entering a Container

To enter a running container and interact with it, you can use the docker exec command. This command allows you to execute specific commands inside the container.

Enter the Container’s Interactive Shell: To enter the container’s interactive shell, you can use the following command:

docker exec -it <container_id_or_name> /bin/bash

In this command, the -it parameter specifies using an interactive terminal, and /bin/bash is the shell to execute inside the container. You can also use other shells, such as /bin/sh or /bin/zsh.

Example: To enter a container named my-container, you can run the following command:

docker exec -it my-container /bin/bash

This will start an interactive Bash shell inside the container and place you in the container’s file system, allowing you to execute commands and view the internal state of the container.

Note: Be sure to replace <container_id_or_name> in the command with the actual ID or name of the container you want to enter. Additionally, the container you want to enter must be in a running state.

Using docker exec to enter a container is a very useful feature that allows you to debug, view logs, execute commands, and perform other operations inside the container.
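
You can also execute a single command without opening an interactive shell; in this sketch the paths are hypothetical and depend on what is inside your container:

# Run a one-off command inside the container
docker exec my-container ls /app

# Show the last lines of a log file inside the container
docker exec my-container tail -n 20 /var/log/app.log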

2.4 Deleting a Container

To delete a container, you can use the docker rm command. Here is the basic syntax for deleting a container:

docker rm <container_id_or_name>

This command will delete the specified container. You can also delete multiple containers at once by providing multiple container IDs or names in the command.

Example: To delete a container named my-container, you can run the following command:

docker rm my-container

Delete All Containers: If you want to delete all stopped containers, you can combine docker rm with the output of docker ps using command substitution. For example:

docker rm $(docker ps -a -q)

In this command, docker ps -a -q lists the IDs of all containers, and $(...) passes these IDs to the docker rm command. Stopped containers are deleted; any container that is still running is skipped with an error unless you force its removal.

Note

  • Before deleting containers, make sure you no longer need them. Deleting a container will result in the loss of its internal data unless you used data volumes for persistence when creating the container.
  • If you want to delete a running container, you can add the -f parameter to force delete the container, for example: docker rm -f <container_id_or_name>.
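
As a convenient alternative for cleaning up every stopped container at once, Docker provides docker container prune:

# Remove all stopped containers (prompts for confirmation; add -f to skip)
docker container prune
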
3. Container Communication and Data Management
3.1 Container Networking

Container networking is an important concept in Docker, allowing communication between containers and between containers and the outside world. In container networking, each container has its own IP address and can communicate with other containers or the host through this IP address. Below are some key concepts and features of container networking (specific explanations of networking will be presented in later chapters):

  1. Default Network Mode: When you create a new container, Docker attaches it to a default network, usually the bridge network. In a bridge network, each container is assigned a unique IP address, and containers can communicate with each other through these IP addresses. Additionally, Docker provides a network mode called host mode, which lets a container share the network namespace with the host.
  2. User-Defined Networks: In addition to the default network mode, Docker allows users to create custom networks to meet specific networking needs. A user-defined network places a group of containers on the same network, allowing them to reach each other by container name instead of relying on IP addresses (see the sketch after this list). User-defined networks can also connect to external networks, allowing containers to communicate with external services.
  3. Network Drivers: Docker provides multiple network drivers to support different types of networks. In addition to the default bridge driver, there are overlay networks, macvlan networks, and others. Each driver has its own characteristics and use cases: overlay networks enable cross-host container communication, while macvlan networks allow containers to bind directly to physical network interfaces.
  4. External Connectivity: Containers can communicate with the outside world through external connectivity. This means containers can reach the host network, external services, or other networks to access external resources or provide services. External connectivity usually requires port mapping or special configuration of the container network.
  5. Container-to-Container Communication: Communication between containers usually goes through container IP addresses or container names. Containers on the same network can talk to each other directly without additional configuration. If containers are on different networks, communication may require port mapping or other network connection methods.
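
Here is a minimal sketch of name-based communication over a user-defined network (the network and container names are illustrative):

# Create a user-defined bridge network
docker network create mynet

# Start a container on that network, then reach it by name from another
docker run -d --name web --network mynet nginx
docker run --rm --network mynet alpine ping -c 2 web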

Container networking is an important part of Docker, providing containers with the ability to communicate and connect to the outside world. By understanding the basic concepts and features of container networking, you can better understand and manage the network deployment of containerized applications.

3.2 Shared Data Volumes

Shared data volumes are a mechanism in Docker for implementing data sharing between containers. A data volume is a special directory that can bypass the container’s file system and can be shared and accessed by one or more containers. Shared data volumes allow multiple containers to read and write data on the same data volume, enabling data sharing and persistent storage. Below are the main features and usage of shared data volumes (detailed explanations of volumes will be presented in later chapters):

Create a Data Volume: In Docker, data volumes can be created in two ways:

Use the docker volume create command to create a named data volume explicitly:

docker volume create myvolume

Reference a named volume when running a container; Docker creates it automatically if it does not already exist:

docker run -v myvolume:/path/to/mount ...

Mount Data Volumes to Containers: To use a data volume in a container, mount the data volume to a specified path in the container when running it. You can use the -v or --mount parameter to specify the mount point.

Multi-Container Shared Data Volumes: Multiple containers can perform read and write operations on the same data volume, enabling data sharing. Simply mount the same data volume into each container when running them, as shown in the sketch below.
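
A minimal sketch of such sharing, with illustrative volume and container names: one container writes a file to the volume and a second container reads it:

docker volume create shared-data

# Writer: put a file on the volume
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/msg.txt'

# Reader: see the file written by the first container
docker run --rm -v shared-data:/data alpine cat /data/msg.txt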

Data Volume Lifecycle Management: Data volumes persist independently of the container lifecycle; the data in a data volume remains even after the container is deleted. You can delete data volumes you no longer need or keep them for future use.

Use Cases: Shared data volumes are suitable for many scenarios, including:

  • Database Containers: Multiple database containers can share the same data volume for persistent data storage.
  • File Sharing: Multiple application containers can share the same data volume for file sharing and synchronization.
  • Log Collection: Multiple log containers can write log data to the same data volume, and another container can collect and process the log data.

Shared data volumes are an important mechanism in Docker for implementing data sharing and persistent storage between containers. By using shared data volumes, you can more flexibly design and manage containerized applications and achieve persistent storage and data sharing.

3.3 Network Connections

In Docker, network connections between containers can be achieved in several ways:

  1. Default Bridge Network: By default, Docker uses a bridge network to connect containers. In a bridge network, each container is assigned a unique IP address, and containers can communicate through these addresses. The bridge network lets you establish connections between multiple containers running on the same host.
  2. User-Defined Networks: In addition to the default bridge network, Docker allows users to create custom networks to meet specific networking needs. A user-defined network places a group of containers on the same network, allowing them to reach each other by container name instead of relying on IP addresses. User-defined networks give you more flexible management of container-to-container connections and let you isolate groups of containers from one another.
  3. External Connectivity: Containers can communicate with the outside world through external connectivity, usually achieved through port mapping or special network configuration. This lets containers reach the host network, external services, or other networks to access external resources or provide services.
  4. Container-to-Container Communication: Containers on the same network can communicate directly through IP addresses or container names. In the default bridge network, each container gets a unique IP address, so communication happens over IP; in user-defined networks, containers can also reach each other by name.
  5. Cross-Host Communication: If containers are deployed on different hosts, you can use Docker’s overlay network to achieve cross-host communication. Overlay networks place containers on multiple hosts in the same network, enabling them to communicate as if they were on one host.

Through the above methods, you can achieve flexible network connections between containers in Docker and choose the appropriate network configuration to achieve communication and isolation between containers.
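
For example, an already-running container can be attached to an additional network with docker network connect; this sketch assumes a running container named web, as in the earlier example:

# Create a second network and attach the existing container to it
docker network create backend
docker network connect backend web

# The container is now reachable on both networks
docker network inspect backend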

4. Deployment and Scaling of Docker Containers
4.1 Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. With a simple YAML file, you can configure the services, networks, and volumes of an application, and use a single command to start, stop, and manage the entire application. Below are the main features and usage of Docker Compose:

Features:

  • Declarative Syntax: Use YAML files to define the services, networks, and volumes of an application, making the configuration more concise and clear.
  • Multi-Container Applications: Support applications composed of multiple containers, allowing you to define and manage dependencies and connections between multiple services.
  • Container Orchestration: Automate the creation, start, stop, and deletion of containers, simplifying the process of container orchestration and management.
  • Cross-Platform Support: Docker Compose can run on different operating systems such as Windows, macOS, and Linux, and supports common container orchestration features.

Usage:

  • Write a Docker Compose File: Create a YAML file named docker-compose.yml and define the services, networks, volumes, and other configuration information of the application in the file.
  • Define Services: Use the services keyword in the Docker Compose file to define the various services of the application. Each service includes the container’s image, port mapping, environment variables, and other configurations.
  • Build and Start the Application: Use the docker-compose up command to build and start the entire application. Docker Compose will read the docker-compose.yml file and create and start containers based on the configuration in the file.
  • Manage the Application: Once the application is successfully started, you can use the docker-compose command to manage the application’s state, including starting, stopping, restarting, and deleting operations.
  • Extend and Customize: Docker Compose allows you to extend and customize the application by adding new services, modifying configuration files, and other means to meet specific needs.

Example Docker Compose File: Below is a simple example of a Docker Compose file:

version: '3'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
    networks:
      - mynetwork

networks:
  mynetwork:
    driver: bridge

In this example, we define a service named web that uses the Nginx image and maps the host’s port 8080 to the container’s port 80. We also define a data volume to mount the html directory on the host to the /usr/share/nginx/html directory inside the container. Finally, we define a custom network mynetwork to connect the various services of the application.
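
Once this file is saved as docker-compose.yml, the typical lifecycle looks like this:

# Create and start all services in the background
docker-compose up -d

# Check the status of the services
docker-compose ps

# Stop and remove the application's containers and networks
docker-compose down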

With Docker Compose, you can easily manage the deployment and operation of multi-container Docker applications, simplifying the process of container orchestration and management, and improving development and deployment efficiency.

4.2 Using Docker Swarm for Cluster Deployment

Docker Swarm is Docker’s official container orchestration tool that allows you to combine multiple Docker hosts into a virtual container cluster for deploying, managing, and scaling containerized applications. Below are the basic steps for using Docker Swarm for cluster deployment:

Initialize Swarm: First, initialize Swarm on a Docker host, which will serve as the manager node of the Swarm cluster. Use the docker swarm init command to initialize Swarm. For example:

docker swarm init --advertise-addr <manager-ip>

In this command, the --advertise-addr parameter specifies the IP address that the manager node advertises to the other nodes in the cluster.

Add Other Nodes to Swarm: Next, add other Docker hosts to the Swarm cluster as worker nodes. Run the docker swarm join command on each node to be added to connect to the Swarm cluster. For example:

docker swarm join --token <token> <manager-ip>:<port>

In this command, <token> is the join token generated when the Swarm was initialized, <manager-ip> is the IP address of the manager node, and <port> is the port of the Swarm control plane, which defaults to 2377.

Deploy Services: Once the Swarm cluster is established, you can use the docker service command to deploy services. A service is a logical unit of a containerized application, consisting of one or more containers, and running in the Swarm cluster according to the specified number of replicas. For example:

docker service create --name my-web-app --replicas 3 -p 8080:80 my-web-image

This command will create a service named my-web-app built from the my-web-image image and run 3 replicas in the cluster.

Scale Services: Use the docker service scale command to increase or decrease the number of replicas of a service. For example:

docker service scale my-web-app=5

This command will increase the number of replicas of the my-web-app service to 5.

Manage Services: You can use the docker service ls command to list all services running in the Swarm cluster, use the docker service ps command to view the task status of a specific service, and use the docker service rm command to delete a service.

Manage the Cluster: You can use the docker node ls command to list all nodes in the Swarm cluster, use the docker node inspect command to view detailed information about a specific node, and use the docker node rm command to remove a node from the cluster.
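
Services can also be updated in place. For example, a rolling update to a new image version (the v2 tag here is hypothetical):

# Roll the service over to a new image, replica by replica
docker service update --image my-web-image:v2 my-web-app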

Note

  • Before deployment, ensure that Docker Engine is installed on all nodes and that the versions are compatible.
  • When deploying cluster applications using Docker Swarm, it is recommended to use node labels and placement constraints to restrict services to specific nodes for more flexible resource management.
  • Using Docker Swarm makes it easy to deploy and manage containerized applications, but in a production environment, you still need to consider issues such as high availability, security, and monitoring.

4.3 Integration of Kubernetes and Docker Containers

Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications. Although Kubernetes can manage any containerized application, it is most commonly used to manage Docker containers. Below are the main ways Kubernetes integrates with Docker containers:

  1. Using Docker Images: Kubernetes supports using Docker images as the basis for containerized applications. It pulls images from Docker Hub or other image repositories and deploys them to nodes in the cluster.
  2. Container Runtime: Kubernetes uses a container runtime to run containers on nodes. Docker is one of the container runtimes Kubernetes supports; other common runtimes include containerd and CRI-O.
  3. Container Objects: In Kubernetes, containers are wrapped in the Pod abstraction. A Pod can contain one or more containers (for example, Docker containers), and these containers share the same network namespace, storage volumes, and other resources.
  4. Container Orchestration: Kubernetes provides powerful orchestration capabilities, automatically scheduling and managing containers based on application needs. It can place containers according to resource requirements, health status, network connections, and other factors, and performs automatic load balancing and failure recovery across nodes.
  5. Docker CLI and Kubernetes: Kubernetes provides a command-line tool, kubectl, for interacting with the cluster. Although Kubernetes can run Docker containers, it does not directly depend on the Docker CLI. Instead, Kubernetes provides its own API and object model, and users manage containers and applications in the cluster with the kubectl command.
  6. Container Storage Volumes: Kubernetes provides various types of storage volumes to manage containers’ persistent storage needs. These volumes can be used with Docker containers to provide persistent storage and data sharing.
  7. Container Networking: Kubernetes uses network plugins (CNI plugins) to manage network connections between containers. These plugins can be used with Docker containers to provide network isolation, load balancing, and service discovery.

Kubernetes is closely integrated with Docker containers, allowing you to more easily manage and run Docker containerized applications through Kubernetes, and providing many advanced features and tools to simplify container orchestration, automated deployment, and management of containerized applications.
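
As a minimal sketch of how a Docker image ends up running on Kubernetes (the image name and port are illustrative), a Deployment manifest can be applied with kubectl apply -f deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web
          image: my-web-image:latest   # any Docker image from a registry
          ports:
            - containerPort: 80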

5. Container Security Best Practices

Container security is a critical aspect of containerized applications, which often involve sensitive data and critical business logic. Below are some common measures and best practices to improve container security:

  1. Use Official Images: Prefer official Docker images or trusted image repositories when obtaining container images. Official images are usually updated and patched regularly, so you benefit from the latest security fixes.
  2. Minimize Image Size: When building containers, choose minimal base images and install only the dependencies and components the application requires. A smaller image reduces the attack surface and improves security.
  3. Regularly Update Images: Update container images regularly to apply the latest security patches and fixes. Automation tools can monitor and update images, reducing the risk of manual mistakes and missed updates.
  4. Implement Container Image Signing: Use image signing to verify the source and integrity of images. Signing prevents images from being tampered with or replaced, thereby improving container security.
  5. Limit Container Permissions: Apply the principle of least privilege, restricting the permissions and access scope of containers as much as possible. For example, run containers as non-privileged users and use Linux namespaces and control groups to isolate them (see the sketch after this list).
  6. Implement Network Isolation: Isolate container networks from one another to prevent unauthorized access and attacks. Container network plugins, firewall rules, and traffic controls can all enforce isolation.
  7. Use Secure Configurations: Configure containers and container orchestration platforms securely, including enabling security options, limiting resource access, and applying security policies. Ensure that the configurations of containers and container hosts follow best practices and security standards.
  8. Monitoring and Auditing: Implement monitoring and auditing mechanisms to detect and respond to security incidents and threats. Monitor container activity, logs, and metrics; review security policies and configurations regularly; and respond to incidents promptly.
  9. Security Training and Awareness: Raise the security awareness of team members and developers, and invest in container security training and education. Make sure the team understands container security best practices and knows how to respond to threats and incidents.
  10. Continuous Improvement: Container security is an ongoing process that requires regular review and improvement. Conduct periodic security reviews, vulnerability scans, and penetration tests, and fix security vulnerabilities and defects as they are found.
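
As a sketch of the least-privilege idea from point 5 (the image name, user ID, and resource limits are illustrative; appropriate values depend on the application):

# Run as a non-root user, with a read-only root filesystem,
# all Linux capabilities dropped, and cgroup resource limits
docker run -d \
  --user 1000:1000 \
  --read-only \
  --cap-drop ALL \
  --memory 256m \
  --cpus 0.5 \
  my-web-image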

By implementing the above measures and best practices, you can improve the security of containerized applications, reduce security risks and threats, and protect sensitive data and critical business logic.

6. Summary

This article introduced the process of creating and using Docker containers. First, we learned the steps for creating containers from existing images and how to start containers easily with the docker run command. Next, we learned how to meet specific application needs with custom images, including writing Dockerfiles, building images, and running containers from them. Then, we explored operations such as starting, stopping, checking the status of, entering, and deleting containers, as well as how to use data volumes to share data between containers. We also looked at container networking, including default network modes, user-defined networks, and external connectivity, at deployment and scaling with Docker Compose, Docker Swarm, and Kubernetes, and at best practices for container security.

The process of creating and using Docker containers is relatively simple and flexible: by mastering the basic Docker commands and concepts, developers can easily build, deploy, and manage containerized applications, achieving fast, consistent, and repeatable development environments. The popularity and widespread use of Docker have made container technology an important part of modern software development, providing a strong foundation for building reliable, scalable, and secure applications.