Docker Interview Questions to Boost your Interview Preparation
Docker interview questions are an important part of the hiring process for any organization that is looking to build and maintain a team with expertise in containerization technology. In this article, we have covered the top 30 Docker interview questions, from basic to advanced. Docker is a popular platform for developing, packaging, and deploying applications in containers, and it has become an essential tool for DevOps teams that need to build and deploy applications quickly and efficiently.
Docker interview questions can cover a broad range of topics, from basic Docker commands and containerization concepts to more advanced topics like networking, orchestration, and security. These questions are designed to assess a candidate’s technical knowledge, problem-solving skills, and ability to work collaboratively with others.
Q.1 What is Docker, and how does it differ from traditional virtualization?
Docker is an open-source containerization platform. Containers include code, libraries, dependencies, and configuration files for an application.
Docker differs from traditional virtualization in several ways. First, Docker containers are lighter and faster than virtual machines (VMs): because containers share the host OS kernel, they are smaller and quicker to start than VMs.
Second, Docker ensures application consistency across development, testing, and production environments.
Docker helps developers manage and deploy complicated programs by packaging applications and their dependencies into containers. Docker lets developers create containers for each application component, such as the web server, database, and application code, and manage them separately.
Q.2 How do you create a Docker image, and what are the steps involved?
To generate a Docker image, follow these steps:
- Choose a base image for your Docker image: A base image is a pre-built image with an operating system and dependencies.
- Write a Dockerfile: A Dockerfile is a text file with instructions for constructing a Docker image. It specifies the base image, application code, and dependencies.
- Build the Docker image: Use the Docker command-line interface (CLI) to build the image after creating the Dockerfile. This is done by running docker build with the location of the Dockerfile.
- Test the Docker image: After building it, test it to make sure it works. By running a container from the Docker image and testing the application, you can do this.
- Publish the Docker image to a registry after testing: A registry stores Docker images, which you can share with other developers or deploy to production.
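As a minimal sketch of these steps (the Python app, image name, and registry account myuser are hypothetical):

```shell
# Write a Dockerfile for a hypothetical Python app
cat > Dockerfile <<'EOF'
FROM python:3.11-slim                  # base image
WORKDIR /app
COPY . .                               # application code
RUN pip install -r requirements.txt    # dependencies
CMD ["python", "app.py"]               # startup command
EOF

# Build the image and tag it
docker build -t myuser/myapp:1.0 .

# Test it by running a container
docker run --rm -p 8000:8000 myuser/myapp:1.0

# Publish it to a registry (Docker Hub in this sketch)
docker push myuser/myapp:1.0
```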
Q.3 What is Docker Hub, and how does it work?
Docker Hub lets users store and share Docker images.
Users upload Docker images to Docker Hub. Images can be made public or private: private images are access-restricted, while public images are available to everyone.
Docker Hub users can browse or search for images. They can download an image and use it to create containers.
Automated builds let developers automatically build and test their Docker images on Docker Hub whenever the source code changes. This keeps Docker images updated and bug-free.
You can also have a look at Terraform interview questions.
Q.4 What is the difference between Docker container and Docker image?
A Docker container and a Docker image are both Docker platform components, but they have different functions and characteristics.
A Docker image is read-only and contains the application code, libraries, dependencies, and other files needed to run an application. It is a snapshot of an application's file system, and Docker containers are built from images.
A Docker container is a running instance of a Docker image. Containers run in isolated environments created from their images.
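A quick way to see the distinction on the command line (assuming the official nginx image):

```shell
docker pull nginx                 # downloads the read-only image
docker images                     # lists images (the templates)
docker run -d --name web nginx    # creates a running container from the image
docker ps                         # lists containers (running instances)
docker rm -f web                  # removing the container leaves the image intact
```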
Q.5 What is the difference between Docker and Kubernetes?
Docker and Kubernetes are popular containerization technologies with different uses and characteristics.
Docker builds, ships, and runs containerized apps. Developers can package an application and its dependencies into a container and deploy it on any Docker-supported machine. Docker simplifies application development, testing, and deployment by providing a uniform environment.
Kubernetes facilitates containerized application deployment, scalability, and management. It allows cluster-wide container management. Kubernetes automates container deployment, scalability, failover, service discovery, load balancing, and storage orchestration.
Q.6 What is Dockerfile, and how does it work?
A Dockerfile is a text file with instructions for constructing a Docker image. It specifies the base image, application code, and dependencies for building the image.
Each Dockerfile instruction performs a specific purpose, builds a layer in the image, and is run in order.
The Docker CLI builds images from Dockerfiles. When you run docker build, the Docker CLI reads the Dockerfile and creates a new layer based on the base image. It then executes each instruction in turn, adding a layer to the Docker image.
Dockerfile instructions include:
- FROM: The Docker image's base image.
- RUN: Runs a command during the build.
- COPY: Copies host files and folders into the Docker image.
- WORKDIR: Sets the working directory for subsequent instructions.
- EXPOSE: The port the container listens on at runtime.
- CMD: The container startup command.
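A short Dockerfile sketch using these instructions (the Node.js app, file names, and port are hypothetical):

```dockerfile
FROM node:20-alpine        # base image
WORKDIR /app               # working directory for later instructions
COPY package.json .        # copy files from the host into the image
RUN npm install            # run a command at build time, creating a layer
COPY app.js .
EXPOSE 3000                # document the runtime port
CMD ["node", "app.js"]     # command run when the container starts
```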
Q.7 What is the purpose of the Docker compose file, and how is it used?
A Docker Compose file is a YAML file that configures a Docker application. It defines and runs a multi-container Docker application with one command.
Docker Compose simplifies multi-container application deployment and management. Developers can define the application’s services, networks, volumes, and configuration settings.
Docker Compose typically has these components:
- Services: Sets the application’s containers’ image, ports, and environment variables.
- Networks: Defines the containers’ network names and IP ranges.
- Volumes: Specifies data storage and configuration options for containers.

To use a Docker Compose file, Docker Compose must be installed. After creating the file, run docker-compose to build and run the application. Docker Compose reads the file and generates the containers, networks, and volumes.
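A minimal sketch of a Compose file with all three components (the service, network, and volume names are hypothetical):

```yaml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    networks:
      - app-net
    volumes:
      - web-data:/usr/share/nginx/html
networks:
  app-net:
volumes:
  web-data:
```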
Q.8 How do you mount a volume in Docker?
Mounting a volume in Docker allows you to share data between the host machine and the Docker container. There are several ways to mount a volume in Docker, including using the -v or --mount option when running a Docker container.
To mount a volume using the -v option, you can use the following syntax:
docker run -v /path/on/host:/path/in/container image_name
This command mounts the directory at /path/on/host on the host machine to the directory at /path/in/container in the Docker container.
Alternatively, you can use the --mount option to specify more advanced volume configurations. For example:
docker run --mount type=bind,source=/path/on/host,target=/path/in/container,readonly image_name
This command mounts the directory at /path/on/host on the host machine to the directory at /path/in/container in the Docker container, with read-only access.
You can also use Docker Compose to mount volumes. To mount a volume in Docker Compose, you can include a volumes section in your docker-compose.yml file. For example:
version: "3"
services:
  myservice:
    image: myimage
    volumes:
      - /path/on/host:/path/in/container
This configuration mounts the directory at /path/on/host on the host machine to the directory at /path/in/container in the Docker container for the myservice service.
Q.9 What is the Docker registry, and how is it used?
A Docker registry is a central repository for storing and distributing Docker images. It allows users to share and collaborate on Docker images, and to easily download and deploy images to their own Docker environment.
The most popular Docker registry is Docker Hub, which is a cloud-based registry hosted by Docker Inc. Other popular Docker registries include Amazon Elastic Container Registry (ECR), Google Container Registry (GCR), and Azure Container Registry (ACR).
To use a Docker registry, you need to first push your Docker images to the registry. This can be done using the docker push command, which uploads the specified image to the registry. For example:
docker push myregistry/myimage:latest
This command pushes the myimage image with the latest tag to the myregistry registry.
Once you have pushed your Docker images to the registry, other users can download and use them by running the docker pull command. For example:
docker pull myregistry/myimage:latest
This command downloads the myimage image with the latest tag from the myregistry registry.
Using a Docker registry provides a number of benefits, including:
- Centralized management of Docker images: Docker registries provide a single location for managing and distributing Docker images.
- Version control: Docker registries allow you to store multiple versions of an image and easily switch between them.
- Collaboration: Docker registries allow users to share Docker images with each other, and to collaborate on the development and deployment of Docker applications.
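For example, version control with tags might look like this (the registry and image names are hypothetical):

```shell
docker tag myapp myregistry/myapp:1.0     # tag a local image for the registry
docker push myregistry/myapp:1.0          # publish version 1.0
docker tag myapp myregistry/myapp:latest
docker push myregistry/myapp:latest       # also keep a moving "latest" tag
docker pull myregistry/myapp:1.0          # anyone can pull a specific version back
```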
Q.10 What is the difference between Docker Swarm and Kubernetes?
Docker Swarm and Kubernetes are container orchestration tools for managing and scaling containerized apps.
However, they differ in several ways:
- Architecture: Kubernetes can be used with any container runtime, while Docker Swarm is built into Docker. Docker Swarm is easier to set up and use if you’re already using Docker, but Kubernetes is more flexible if you’re using different container runtimes.
- Features: Docker Swarm has fewer features than Kubernetes for scaling, load balancing, and application deployment.
- Scaling: Both tools can scale containerized applications, but Kubernetes is more advanced. It automatically scales containers based on CPU, memory, and network traffic, unlike Docker Swarm.
- Community: Kubernetes has a larger and more active community than Docker Swarm, with more resources and support.
Q.11 How do you monitor Docker containers?
There are several ways to monitor Docker containers:
- Docker Stats: Docker Stats is a built-in command that allows you to monitor the resource usage of running containers, including CPU usage, memory usage, and network traffic.
- Docker logs: Docker logs allow you to view the log output of a container, which can be useful for debugging and troubleshooting.
- Docker Healthcheck: Docker Healthcheck is a feature that allows you to define a command to check the health of a container. You can use this to monitor the health of your containers and automatically restart them if they fail.
- Third-party monitoring tools: There are many third-party monitoring tools available for Docker, such as Prometheus, Grafana, and Nagios. These tools allow you to monitor the performance of your containers in real-time and set up alerts for critical events.
- Docker events: Docker events allow you to monitor changes to your Docker environment, such as container creation, deletion, and modification. You can use this to track changes to your environment and troubleshoot issues.
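A few of these built-in options on the command line (the container name is hypothetical, and the healthcheck line assumes a healthcheck is defined):

```shell
docker stats --no-stream mycontainer    # one-off snapshot of CPU/memory/network usage
docker logs --tail 100 mycontainer      # last 100 log lines
docker inspect --format '{{.State.Health.Status}}' mycontainer  # healthcheck status
docker events --since 1h                # environment changes in the last hour
```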
Q.12 How do you manage networking in Docker?
Docker provides several ways to manage networking between containers:
- Bridge networking: By default, Docker creates a bridge network for each host, which allows containers to communicate with each other on the same host. You can also create custom bridge networks to isolate containers and control their network traffic.
- Host networking: With host networking, the container uses the host’s network stack directly, rather than creating its own network namespace. This can be useful for high-performance networking, but it can also reduce isolation and security.
- Overlay networking: Overlay networking allows you to connect containers across multiple hosts and data centers, creating a virtual network that spans multiple physical networks. This can be useful for large-scale deployments that require distributed communication.
- Macvlan networking: With Macvlan networking, you can assign a MAC address to a container and connect it to a physical network interface, allowing it to communicate directly with other devices on the network.
- Port mapping: With port mapping, you can map ports on the host to ports on the container, allowing external clients to connect to the container’s services.
To manage networking in Docker, you can use the Docker networking commands, such as docker network create to create a custom network, docker network ls to list the available networks, and docker network connect to connect a container to a network. You can also use Docker Compose to define the network settings for your application and deploy it with a single command.
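A short sketch of bridge networking in practice (the network, container, and image names are hypothetical):

```shell
docker network create app-net                          # custom bridge network
docker run -d --name db --network app-net postgres
docker run -d --name api --network app-net -p 8080:80 myapi
# containers on app-net can reach each other by name, e.g. "db:5432";
# the -p port mapping exposes the api container to external clients
```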
Q.13 How do you set environment variables in Docker?
There are several ways to set environment variables in Docker:
- In Dockerfile: You can set environment variables in the Dockerfile using the ENV instruction. For example, ENV MY_VAR=my_value sets the environment variable MY_VAR to my_value.
- During container creation: You can set environment variables during container creation using the -e or --env flag. For example, docker run -e MY_VAR=my_value image_name sets the environment variable MY_VAR to my_value when running the container.
- In Docker Compose: You can set environment variables in Docker Compose by defining them in the .env file or in the environment section of the Compose file. For example:

version: '3'
services:
  app:
    image: image_name
    environment:
      MY_VAR: my_value

- In Kubernetes: In Kubernetes, you can set environment variables in the deployment manifest using the env field. For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: image_name
          env:
            - name: MY_VAR
              value: my_value
Q.14 How do you remove Docker images, containers, and volumes?
To remove Docker images, containers, and volumes, you can use the following commands:
- Remove Docker images: To remove a Docker image, you can use the docker rmi command followed by the image ID or name. For example, docker rmi image_name removes the image with the name image_name.
- Remove Docker containers: To remove a Docker container, you can use the docker rm command followed by the container ID or name. For example, docker rm container_name removes the container with the name container_name.
- Remove Docker volumes: To remove a Docker volume, you can use the docker volume rm command followed by the volume name or ID. For example, docker volume rm volume_name removes the volume with the name volume_name.

Note: Before removing a container, you must stop it first using the docker stop command. Also, before removing a volume, make sure no containers are using it, or else you will get an error message.
You can also use the following commands to remove all unused images, containers, and volumes:
- Remove all unused images: docker image prune
- Remove all stopped containers: docker container prune
- Remove all unused volumes: docker volume prune
Q.15 How do you debug Docker containers?
Debugging Docker containers can be done in several ways, including:
- Container logs: Docker containers automatically log their output to standard output and standard error. You can use the docker logs command to view the container logs, for example docker logs container_name.
- Shell access: You can access the shell of a running container using the docker exec command followed by the container ID or name and the shell command. For example, docker exec -it container_name /bin/bash opens an interactive shell session in the container.
- Remote debugging: You can use remote debugging tools such as gdb or strace to debug a running container. To enable remote debugging, you can use the docker run command with the --cap-add=SYS_PTRACE flag, which grants the container the ability to trace system calls. For example, docker run --cap-add=SYS_PTRACE -it image_name /bin/bash enables remote debugging in the container.
- Debugging tools: You can install debugging tools such as tcpdump, netstat, or ps inside the container to diagnose networking, process, or performance issues. For example, apt-get install tcpdump installs tcpdump in a Debian-based container.
- Container inspection: You can use the docker inspect command to view detailed information about a container, such as its environment variables, network settings, or volumes. For example, docker inspect container_name displays the metadata of the container.
Q.16 What is Docker swarm mode, and how does it work?
Docker Swarm mode lets you operate a cluster of Docker nodes as a single virtual system.
Docker Swarm has two types of nodes:
- Manager nodes: They manage the Swarm cluster and schedule tasks on worker nodes.
- Worker nodes: They run containers and perform tasks assigned by manager nodes.
A Swarm cluster requires at least one manager node; worker nodes can then join it. Running docker swarm init on the manager node initializes the cluster and generates a join token that worker nodes use to join the Swarm.
Once the Swarm cluster is set up, you can use the docker service command to deploy services as tasks across nodes. A service defines a task by specifying the image, replica count, and placement limitations. Swarm mode organises tasks on available worker nodes when you launch a service and maintains the desired state even if nodes fail or are added to the cluster.
Swarm mode also provides load balancing and automatic failover for services. If a container fails, Swarm mode replaces it with a new one, and if a node fails, it reschedules jobs on other nodes.
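A sketch of this workflow (the address, service name, and <token> placeholder are hypothetical):

```shell
docker swarm init --advertise-addr 192.168.1.10        # on the manager; prints a join token
docker swarm join --token <token> 192.168.1.10:2377    # on each worker node
docker service create --name web --replicas 3 -p 80:80 nginx   # deploy a service
docker service scale web=5                             # scale it up
docker service ls                                      # check desired vs running replicas
```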
Q.17 How do you configure Docker to use a proxy server?
To configure Docker to use a proxy server, you can follow these steps:
- Create a file called daemon.json in the /etc/docker directory. If the file already exists, edit it.
- Add the following configuration to the file:
{
  "proxies": {
    "default": {
      "httpProxy": "http://your-proxy-server:port",
      "httpsProxy": "http://your-proxy-server:port"
    }
  }
}
Replace your-proxy-server and port with the actual proxy server and port number.
- Save the file and exit.
- Restart the Docker daemon using the following command:
sudo systemctl restart docker
After configuring Docker to use a proxy server, you can test the connection by running a command such as docker run hello-world
. If the command succeeds, Docker is now using the proxy server for internet access.
Q.18 How do you configure Docker to use a custom DNS server?
To configure Docker to use a custom DNS server, you can follow these steps:
- Create a file called daemon.json in the /etc/docker directory. If the file already exists, edit it.
- Add the following configuration to the file:
{
  "dns": ["your-dns-server"]
}
Replace your-dns-server with the actual IP address or hostname of your DNS server.
- Save the file and exit.
- Restart the Docker daemon using the following command:
sudo systemctl restart docker
After configuring Docker to use a custom DNS server, you can test the connection by running a command such as docker run busybox nslookup google.com
. If the command succeeds and returns the correct IP address for google.com
, Docker is now using the custom DNS server for name resolution.
Q.19 What is Docker Compose, and how is it used?
Docker Compose defines and runs multi-container Docker applications. YAML files can define a group of related services, their configurations, and their network connections. Docker Compose lets you deploy the entire application stack with one command.
To use Docker Compose, you need to create a docker-compose.yml
file in your project directory. In this file, you can define the services that make up your application, their configurations, and their dependencies. For example, a docker-compose.yml
file for a web application might look like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "80:80"
  database:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
In this example, docker-compose.yml defines web and database services. The web service is built from the Dockerfile in the current directory and publishes port 80. The database service uses the official MySQL image, sets the root password to "example" via an environment variable, and exposes no ports to the host.
To start the application, you can use the docker-compose up
command. This command reads the docker-compose.yml
file, builds the necessary containers, and starts the services. You can also use the docker-compose down
command to stop and remove the containers.
Docker Compose also provides many other commands for managing the application stack, such as docker-compose ps
to list the running containers, docker-compose logs
to view the container logs, and docker-compose exec
to run commands inside the containers.
Q.20 What is Docker overlay network, and how does it work?
Docker overlay networks enable multi-host communication between Docker containers. It works with Docker Swarm to let containers on different hosts communicate as if they were on the same network.
Running docker network create with the --driver overlay option on a Docker Swarm manager node creates an overlay network.
A Docker Swarm node automatically connects a container to the overlay network so it can communicate with other containers on the same network, even if they are on different hosts. Network packets are encapsulated in a VXLAN tunnel to traverse the physical network.
Docker overlay networks support service discovery, load balancing, and security. The overlay network’s service name resolves to the container’s IP address. Docker Swarm load balances requests across containers.
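A sketch of this (the network, service, and image names are hypothetical):

```shell
docker network create --driver overlay my-overlay    # on a Swarm manager node
docker service create --name api --network my-overlay --replicas 3 myapi
# tasks on any node attached to my-overlay can reach the service by name "api";
# Swarm load-balances requests across the three replicas
```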
Q.21 How do you create a Docker container from an image?
To create a Docker container from an image, you need to follow these steps:
- Pull the image: First, you need to pull the image from a Docker registry using the docker pull command. For example, to pull the latest version of the official nginx image, you can use the command docker pull nginx.
- Create a container: Once you have the image, you can create a container from it using the docker run command. For example, to create a container from the nginx image and run it in the background, you can use the command docker run -d nginx. This command will create a new container with a randomly generated name and start it in the background.
- Customize the container: You can customize the container by passing additional options to the docker run command. For example, you can specify the container name using the --name option, map container ports to host ports using the -p option, and set environment variables using the -e option.
- Manage the container: Once the container is created, you can manage it using the docker container command. For example, you can use docker container ls to list all running containers, docker container stop to stop a running container, and docker container rm to remove a stopped container.
Q.22 What is Docker registry mirror, and how does it work?
A Docker registry mirror is a local cache that speeds up image pulls and improves availability. Image requests are redirected to the local cache: if the image is not in the cache, the engine pulls it from the remote registry and stores it for later use.
To use a mirror, the Docker daemon is configured with the mirror's URL as its default registry, and the Docker engine then automatically redirects image requests from the default registry to the mirror.
Setting up a Docker registry mirror involves configuring the Docker daemon on each host to use the mirror’s URL as the default registry. This can be done by modifying the Docker daemon configuration file (/etc/docker/daemon.json
) to include the following configuration:
{
  "registry-mirrors": ["https://mirror.example.com"]
}
Once the mirror is set up, all image pulls will be directed to the mirror’s URL, and the images will be cached locally for faster access. This can significantly improve image pull times and reduce network bandwidth usage, especially in environments with limited network connectivity.
Q.23 What is the difference between Dockerfile and docker-compose.yml?
Dockerfile and docker-compose.yml are both tools used to build and run Docker applications, but they have distinct purposes.
Dockerfiles contain instructions for building Docker images in text format. Docker containers are created and run using Docker images from Dockerfiles. Dockerfiles are typically used to build and package single containers.
A docker-compose.yml file, however, defines and manages multi-container Docker applications. It is a YAML file that lists an application's services along with its network and volume settings. The docker-compose.yml file lets you define and run a group of Docker containers as a single application, making complex applications easier to manage and deploy. You can also use it to define environment variables, expose ports, and link containers.
Q.24 How do you create a Docker network?
To create a Docker network, you can use the docker network create
command followed by the name of the network you want to create. For example:
docker network create my-network
This command will create a new network called my-network.
By default, Docker creates a bridge network for each Docker host, which can be used to connect containers running on the same host. However, you can also create custom networks with specific settings and properties to suit your application’s needs.
To specify custom settings for the network, you can include additional options in the docker network create
command. For example, to create a network with a specific subnet and IP range, you can use the --subnet
and --ip-range
options:
docker network create --subnet=192.168.0.0/16 --ip-range=192.168.5.0/24 my-network
This command will create a network called my-network with a subnet of 192.168.0.0/16 and an IP range of 192.168.5.0/24.
Once you have created a network, you can connect containers to it using the --network
option when running the docker run
command. For example:
docker run --name my-container --network my-network my-image
This command will create a new container called my-container and connect it to the my-network network.
Q.25 What is a Docker-compose override file, and how is it used?
A Docker Compose override file is a YAML file that is used to override or add to the configuration specified in the original docker-compose.yml
file. This allows you to customize the configuration of a Docker Compose application without having to modify the original file.
To use a Docker Compose override file, you create a new YAML file with the name docker-compose.override.yml
and place it in the same directory as the original docker-compose.yml
file. The override file should contain only the configuration options that you want to modify or add to the original file. When you run the docker-compose up
command, Docker Compose will merge the configuration from the override file with the configuration from the original file.
For example, suppose you have an application defined in a docker-compose.yml
file with the following configuration:
version: '3'
services:
  app:
    image: my-app:latest
    ports:
      - "80:80"
To override the port mapping for the app
service, you could create a new docker-compose.override.yml
file with the following configuration:
version: '3'
services:
  app:
    ports:
      - "8080:80"
When you run the docker-compose up command, Docker Compose will use the configuration from both files and create the app service with the port mapping of 8080:80 instead of the original mapping of 80:80.
Q.26 What is Docker registry authentication, and how does it work?
Docker registry authentication verifies the identity of users or machines trying to access or push/pull images. Basic, token-based, and OAuth 2.0 authentication secure Docker registries.
Basic authentication requires a username and password. This method is simple but insecure unless used over HTTPS, since the credentials are sent with every request.
Bearer authentication, or token-based authentication, generates a temporary token for Docker registry access. This token allows users to access the Docker registry until it expires after authenticating with their username and password.
With OAuth 2.0 authentication, users can sign in to a Docker registry with credentials from a different service, like GitHub or Google. The identity provider issues access tokens to the Docker registry after users log in.
To authenticate, the Docker daemon must be given credentials. You can configure authentication through the config.json file (written by docker login) or through environment variables.
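For example, basic login from the CLI (the registry hostname and image path are hypothetical):

```shell
docker login registry.example.com     # prompts for username and password
# credentials are stored in ~/.docker/config.json (base64-encoded, not encrypted)
docker push registry.example.com/myteam/myapp:1.0   # subsequent pushes use them
docker logout registry.example.com    # remove the stored credentials
```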
Q.27 How do you access the logs of a Docker container?
To access the logs of a Docker container, you can use the docker logs command. The syntax for this command is as follows:
docker logs [OPTIONS] CONTAINER
Here, OPTIONS are any additional options you want to pass to the command, and CONTAINER is the name or ID of the container whose logs you want to access.
By default, the docker logs command shows both the standard output and standard error streams of the container. There is no built-in flag to show only one stream, but because docker logs writes each stream to the matching stream of your terminal, you can separate them with shell redirection.
You can also use the -f
option to follow the logs of the container in real-time, similar to the tail -f
command. This can be useful for monitoring the output of a running container.
Additionally, you can use the --since
and --until
options to filter the logs by time. For example, you can use --since 1h
to show only logs that have been generated in the last hour.
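Combining these options (the container name is hypothetical):

```shell
docker logs -f --since 10m --timestamps mycontainer   # follow, last 10 minutes, with timestamps
docker logs --tail 50 mycontainer 2>/dev/null         # last 50 lines, stdout stream only
```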
Q.28 What is Docker machine, and how is it used?
Docker Machine manages local and remote Docker hosts. It lets users create Docker hosts on VirtualBox, AWS, GCP, and Azure.
One command creates and configures Docker hosts with Docker Machine. This eliminates the time-consuming process of installing and configuring Docker on each host.
Docker Machine requires local installation to create a Docker host. Once installed, use docker-machine create to create a new Docker host on a supported platform. Use docker-machine env to configure your local shell to use the new Docker host.
Docker Machine also offers docker-machine ls to list all Docker hosts, docker-machine start to start a stopped Docker host, and docker-machine stop to stop a running Docker host.
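A sketch of the workflow (the host name "dev" is hypothetical, and Docker Machine is no longer actively maintained, so treat this as illustrative):

```shell
docker-machine create --driver virtualbox dev   # provision a new Docker host in VirtualBox
docker-machine ls                               # list managed hosts
eval "$(docker-machine env dev)"                # point the local docker CLI at the new host
docker run hello-world                          # now runs on the "dev" host
docker-machine stop dev
```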
Q.29 How do you configure Docker to use a proxy with authentication?
To configure Docker to use a proxy with authentication, you need to provide Docker with the appropriate proxy settings. This can be done by setting environment variables when starting the Docker daemon or by modifying the Docker service configuration file.
Here are the steps to configure Docker to use a proxy with authentication:
- Set the HTTP_PROXY and HTTPS_PROXY environment variables with the proxy URL, including the port number. For example:

export HTTP_PROXY=http://username:password@your-proxy-server:port
export HTTPS_PROXY=http://username:password@your-proxy-server:port

Replace username and password with your proxy credentials.
- If you are using Systemd, create a new service configuration file for Docker at /etc/systemd/system/docker.service.d/http-proxy.conf. Add the following lines to the file:

[Service]
Environment="HTTP_PROXY=http://username:password@your-proxy-server:port"
Environment="HTTPS_PROXY=http://username:password@your-proxy-server:port"

Again, replace username and password with your proxy credentials.
- Reload the Systemd configuration and restart Docker:

systemctl daemon-reload
systemctl restart docker

- Verify that Docker is now using the proxy by running a Docker command that requires internet access, such as docker run hello-world.
Q.30 What is Docker swarm, and how is it used for orchestration?
Docker Swarm orchestrates Docker containers natively. It lets you establish and manage a Docker swarm to execute containerized applications. Docker Swarm orchestrates containerized application deployment, scalability, and management.
Here are the key features of Docker Swarm:
- Service discovery: Docker Swarm includes a built-in DNS service that enables containers to discover each other and communicate with each other, regardless of which node they are running on.
- Load balancing: Docker Swarm includes a built-in load balancer that can distribute traffic across containers running on multiple nodes.
- Automatic failover: Docker Swarm can automatically detect and recover from container failures by restarting failed containers on healthy nodes.
- Scaling: Docker Swarm allows you to easily scale your application by adding or removing nodes to the swarm.
Docker Swarm orchestration requires a manager node and one or more worker nodes. The manager node manages the swarm and coordinates container deployment and scaling, while worker nodes run the containers.
After setting up the swarm, you deploy your application by creating a service, which Swarm schedules and scales as containers across the nodes. You can set the number of service replicas and the resources available to each container.
Docker Swarm also provides a number of tools for monitoring and managing your swarm, including the docker service
command for managing services, the docker node
command for managing nodes, and the docker stack
command for managing stacks, which are groups of services that can be deployed together.
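For example, deploying a stack of services from a Compose file (the stack and service names are hypothetical):

```shell
docker stack deploy -c docker-compose.yml mystack   # deploy all services in the file as one stack
docker stack services mystack                       # list the stack's services and replica counts
docker service scale mystack_web=4                  # scale one service in the stack
docker node ls                                      # view manager and worker nodes
```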
You can also check the Docker official website.