Free Practice Questions Docker Certified Associate (DCA) 30 Questions with Answers

Docker Certified Associate (DCA)
Practice Questions

30 free questions with correct answers and detailed explanations.

30 Free Questions
2 Free Exams
100% With Explanations

DCA-100 Practice Set-01

15 questions
Q1
A developer reports that after restarting the Docker daemon on a Linux host, all running containers stopped. The team wants to ensure containers survive future daemon restarts without manual intervention. Which daemon configuration achieves this?
A Set "icc": true in /etc/docker/daemon.json and restart the daemon
Set "live-restore": true in /etc/docker/daemon.json and restart the daemon
C Run all containers with --restart=always flag — daemon restarts are handled automatically
D Enable "userland-proxy": false in /etc/docker/daemon.json to reduce daemon dependency
Correct Answer
Set "live-restore": true in /etc/docker/daemon.json and restart the daemon
Explanation
B is correct. The live-restore feature in Docker allows containers to continue running when the daemon is stopped, crashed, or restarted. Setting "live-restore": true in /etc/docker/daemon.json decouples container lifecycle from daemon lifecycle. A is wrong. icc (inter-container communication) controls whether containers on the same bridge network can communicate — it has no effect on daemon restart behavior. C is wrong. The --restart=always policy causes Docker to restart containers after the daemon comes back up, but containers will still stop during the daemon restart window. It does not keep containers running through a daemon restart — only live-restore does that. D is wrong. userland-proxy controls how port forwarding is handled (kernel vs. userland) and is completely unrelated to container survival during daemon restarts.
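For reference, a minimal /etc/docker/daemon.json that enables this (merge it with any keys already present on your host):

    {
      "live-restore": true
    }

After saving, a daemon reload (for example, sudo systemctl reload docker) typically applies the setting; containers then keep running through later daemon restarts.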
Q2
A team is building a CI pipeline that needs to communicate with a remote Docker daemon securely over TCP. Which combination of options correctly enables encrypted, authenticated access to the Docker daemon?
A Start the daemon with -H tcp://0.0.0.0:2375 and use firewall rules to restrict access
B Start the daemon with -H tcp://0.0.0.0:2376 --tlsverify --tlscacert --tlscert --tlskey and distribute client certificates
C Enable the Docker daemon socket at /var/run/docker.sock and use SSH tunneling for all remote clients
D Set "hosts": ["tcp://0.0.0.0:2375"] in daemon.json and enable Docker Content Trust
Correct Answer
Start the daemon with -H tcp://0.0.0.0:2376 --tlsverify --tlscacert --tlscert --tlskey and distribute client certificates
Explanation
B is correct. Port 2376 is the conventional TLS-secured Docker port. Using --tlsverify along with CA, server certificate, and server key enables mutual TLS (mTLS), meaning both the daemon and the client authenticate each other. Client certificates are distributed to authorized users. A is wrong. Port 2375 is unencrypted and unauthenticated. Relying solely on firewall rules is a weak security posture and provides no encryption in transit or client authentication. C is wrong. SSH tunneling is a valid alternative approach, but it uses the Unix socket locally — it doesn't expose a TCP endpoint. While secure, it's not the standard TLS daemon configuration the question asks about. D is wrong. Port 2375 (unencrypted) is still exposed here. Docker Content Trust (DCT) controls image signing and verification, not transport-layer security for daemon communication.
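As an illustrative sketch (the certificate paths and hostname are placeholders), the daemon and client sides look roughly like this:

    dockerd -H tcp://0.0.0.0:2376 --tlsverify \
      --tlscacert=/etc/docker/certs/ca.pem \
      --tlscert=/etc/docker/certs/server-cert.pem \
      --tlskey=/etc/docker/certs/server-key.pem

    docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
      -H tcp://daemon.example.com:2376 version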
Q3
A container started with docker run -d nginx is showing status Exited (1) immediately after launch. Which sequence of commands is most effective in diagnosing the root cause?
A docker inspect <container_id> then docker diff <container_id>
B docker logs <container_id> then docker inspect <container_id> --format '{{.State}}'
C docker stats <container_id> then docker top <container_id>
D docker events then docker history nginx
Correct Answer
docker logs <container_id> then docker inspect <container_id> --format '{{.State}}'
Explanation
B is correct. docker logs retrieves stdout/stderr from the container's entrypoint process — this is the first place crash reasons appear (missing config, port conflicts, permission errors, etc.). Following up with docker inspect scoped to .State reveals the exit code, error message, and OOM kill status for deeper analysis. A is wrong. docker inspect shows metadata but doesn't show the application's error output. docker diff shows filesystem changes — useful for auditing, not for diagnosing a startup crash. C is wrong. docker stats shows real-time resource usage and docker top lists running processes — neither works meaningfully on an already-exited container. D is wrong. docker events shows Docker daemon-level events (start, die, etc.) but not why the process failed. docker history shows image layer history — irrelevant to a runtime crash.
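A typical diagnostic sequence (the format fields shown are standard inspect fields):

    docker logs <container_id>
    docker inspect <container_id> \
      --format '{{.State.ExitCode}} {{.State.Error}} {{.State.OOMKilled}}'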
Q4
You pull an image tagged myapp:latest and run a container. A colleague pushes a new version of myapp:latest to the registry. What happens to your running container and local image?
A The running container automatically pulls and applies the new image layers
B The running container continues using the original image; the local tag myapp:latest still points to the old image digest until you explicitly pull again
C The local myapp:latest tag is updated to point to the new digest automatically, but the running container is unaffected
D Docker pulls the new image in the background and schedules a rolling restart of the container
Correct Answer
The running container continues using the original image; the local tag myapp:latest still points to the old image digest until you explicitly pull again
Explanation
B is correct. Docker image tags are pointers to a specific image digest at the time of pull. Once a container is running, it uses the image layers already present on the host. A remote re-tag or push does not affect the local tag or the running container — both remain unchanged until an explicit docker pull is performed. A is wrong. Docker has no mechanism to auto-update running containers when an upstream image changes. This would be a significant operational risk in production. C is wrong. Docker does not poll registries for tag updates. The local tag is only updated when docker pull is explicitly invoked. D is wrong. Docker Engine does not perform background pulls or rolling restarts based on registry changes. Orchestration platforms like Kubernetes with image pull policies can approximate this behavior, but Docker alone does not.
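To see which digest the local tag currently points to, and to refresh it explicitly:

    docker images --digests myapp
    docker pull myapp:latest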
Q5
A data science team has a Dockerfile that installs Python dependencies and then copies a large dataset file used only during the build for preprocessing. The final image is 4.2 GB. Which Dockerfile technique most effectively reduces the final image size?
A Use .dockerignore to exclude the dataset file from the build context
B Use a multi-stage build — preprocess data in a builder stage and copy only the resulting artifacts to the final stage
C Add RUN rm -rf /dataset as the last instruction to delete the file
D Use --squash flag during build to merge all layers into one
Correct Answer
Use a multi-stage build — preprocess data in a builder stage and copy only the resulting artifacts to the final stage
Explanation
B is correct. Multi-stage builds allow you to use a heavy builder stage (with build tools, datasets, compilers) and then copy only the output artifacts to a slim final image. Files that existed only in the builder stage never appear in the final image's layers, dramatically reducing size. A is wrong. .dockerignore prevents files from being sent in the build context, so it would actually prevent the dataset from being available during the build at all — breaking the preprocessing step. It cannot remove something needed during build from the final image. C is wrong. In a standard (non-multi-stage) Dockerfile, every RUN instruction adds a new layer. Even if you delete the file in a later layer, the data still exists in the earlier layer and contributes to the image size. D is wrong. --squash merges layers into one but the total data (including the large dataset) is still present in that single layer — image size remains roughly the same.
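A sketch of the multi-stage pattern (base images, file names, and paths are illustrative, not taken from the question):

    FROM python:3.11 AS builder
    WORKDIR /build
    COPY dataset.csv preprocess.py ./
    RUN python preprocess.py        # writes artifacts to /build/processed

    FROM python:3.11-slim
    WORKDIR /app
    COPY --from=builder /build/processed/ ./data/
    COPY app/ ./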
Q6
A team notices that their CI builds are slow because the npm install step runs on every commit, even when only application source code changes. Their Dockerfile currently looks like:

    FROM node:18
    WORKDIR /app
    COPY . .
    RUN npm install

Which revised instruction order best leverages Docker's build cache?
A Add --no-cache to the docker build command to ensure clean builds
B Copy package.json and package-lock.json first, run npm install, then copy the rest of the source
C Use a .dockerignore file to exclude node_modules and run npm install last
D Use ARG CACHEBUST and pass a timestamp to invalidate cache selectively
Correct Answer
Copy package.json and package-lock.json first, run npm install, then copy the rest of the source
Explanation
B is correct. Docker cache is invalidated when a layer's content changes. By copying only package.json and package-lock.json first and running npm install, that layer is cached as long as dependencies don't change. Only when those files change does npm install re-run. Subsequent source code changes only invalidate layers after the COPY . . instruction. A is wrong. --no-cache disables all caching, making every build slower — the opposite of what's needed. C is wrong. Excluding node_modules via .dockerignore is a good practice (prevents local modules from overwriting the container's), but it doesn't fix the ordering problem. npm install still runs every time if COPY . . precedes it. D is wrong. ARG CACHEBUST is a technique to force cache invalidation — it makes things slower, not faster, and doesn't solve the structural ordering problem.
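The reordered Dockerfile would look roughly like this (the CMD is illustrative):

    FROM node:18
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm install
    COPY . .
    CMD ["node", "server.js"]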
Q7
A containerized PostgreSQL database is writing data to a Docker-managed volume. An engineer runs docker rm -f postgres_container. What happens to the volume and its data?
A The volume and all data are deleted along with the container
B The volume persists independently; data is intact and can be mounted to a new container
C The volume is marked as dangling and is automatically garbage collected after 24 hours
D The volume persists but becomes read-only to prevent accidental writes until explicitly reattached
Correct Answer
The volume persists independently; data is intact and can be mounted to a new container
Explanation
B is correct. Named Docker volumes have a lifecycle independent of containers. docker rm (with or without -f) removes the container but leaves attached volumes intact. The data remains accessible and can be mounted to a new or replacement container using --mount or -v. A is wrong. docker rm -f alone does not remove volumes. Even docker rm -v <container> removes only anonymous volumes attached to that container; a named volume must be removed explicitly with docker volume rm <volume_name> or swept up by docker volume prune. C is wrong. Docker does not automatically garbage collect volumes on a timer. Dangling volumes (unattached named volumes) persist indefinitely until removed with docker volume prune or docker volume rm. D is wrong. Docker volumes do not change permissions or become read-only upon container removal. They remain in their original state.
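For example, assuming the data lives in a named volume called pgdata, it can be listed and reattached to a replacement container:

    docker volume ls
    docker run -d --name postgres_new \
      -v pgdata:/var/lib/postgresql/data postgres:16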
Q8
A developer needs a container to have a high-speed, temporary scratch space for intermediate computation that should never be persisted to disk and should be lost when the container stops. Which storage option is most appropriate?
A A named Docker volume mounted at /scratch
B A bind mount pointing to /tmp on the host
C A tmpfs mount inside the container
D An anonymous volume declared in the Dockerfile with VOLUME /scratch
Correct Answer
A tmpfs mount inside the container
Explanation
C is correct. tmpfs mounts store data entirely in the host's memory (RAM). They are never written to disk, are invisible to other containers, and are automatically discarded when the container stops — exactly matching the requirements. A is wrong. Named volumes persist on disk beyond the container's lifetime — the data is not lost when the container stops, violating the requirement. B is wrong. A bind mount to /tmp on the host writes to disk (even if the host OS later clears /tmp). Data could persist beyond the container and is visible on the host — a security and persistence concern. D is wrong. Anonymous volumes declared in Dockerfiles are still disk-backed volumes. They persist until explicitly removed and are not RAM-based.
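A minimal example (the image name and size are placeholders):

    docker run -d \
      --mount type=tmpfs,destination=/scratch,tmpfs-size=536870912 \
      my-compute-job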
Q9
Which statement accurately describes the difference between a bind mount and a Docker volume when used in production?
A Bind mounts are managed by Docker and offer better portability across hosts; volumes depend on the host filesystem path
B Volumes are managed by Docker, offer better portability, work with volume drivers for remote storage, and are the recommended approach for production data; bind mounts expose host filesystem paths and are host-dependent
C Bind mounts and volumes are functionally identical; the choice only affects the command syntax
D Docker volumes are stored inside the container's writable layer, while bind mounts reference external storage
Correct Answer
Volumes are managed by Docker, offer better portability, work with volume drivers for remote storage, and are the recommended approach for production data; bind mounts expose host filesystem paths and are host-dependent
Explanation
B is correct. Docker volumes are fully managed by the Docker daemon — stored under /var/lib/docker/volumes/, independent of host directory structure, portable across environments, and compatible with volume drivers (e.g., for NFS, AWS EBS, Azure Disk). Bind mounts couple containers tightly to specific host paths, reducing portability. A is wrong. This reverses the definitions. Bind mounts are host-path-dependent; volumes are Docker-managed and portable. C is wrong. Bind mounts and volumes differ significantly in management, portability, driver support, and security posture. They are not functionally identical. D is wrong. Docker volumes are stored in Docker's managed storage area on the host, not inside the container's writable layer. The writable layer is a separate concept related to the copy-on-write (CoW) filesystem.
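As one illustration of driver support, the local driver can back a volume with NFS (the server address and export path are placeholders):

    docker volume create --driver local \
      --opt type=nfs --opt o=addr=10.0.0.10,rw \
      --opt device=:/exports/appdata appdata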
Q10
A container running on the default bridge network cannot resolve another container's name via DNS. Why does this happen, and how is it resolved?
A DNS resolution is disabled on bridge networks by default; enable it with --dns flag per container
B The default bridge network does not support automatic DNS-based service discovery between containers; create a user-defined bridge network, which includes an embedded DNS server
C Add both containers to the same network namespace using --network container:<id> to share DNS
D Set "dns": ["127.0.0.11"] in daemon.json to enable the embedded DNS server on all networks
Correct Answer
The default bridge network does not support automatic DNS-based service discovery between containers; create a user-defined bridge network, which includes an embedded DNS server
Explanation
B is correct. The default bridge network (docker0) does not provide automatic DNS resolution between containers — containers can only communicate via IP. User-defined bridge networks include Docker's embedded DNS server (127.0.0.11), which resolves container names and network aliases automatically. A is wrong. --dns sets the upstream DNS server for the container (e.g., for external name resolution), not for inter-container discovery. It doesn't enable container-name resolution. C is wrong. --network container:<id> shares the network namespace entirely — both containers share the same IP. This is a completely different use case (sidecar pattern) and not a DNS solution. D is wrong. The embedded DNS server (127.0.0.11) is automatically active on user-defined networks. Adding it to daemon.json manually doesn't enable it on the default bridge, and this is not how the configuration works.
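A quick sketch with placeholder names:

    docker network create app-net
    docker run -d --name db --network app-net postgres:16
    docker run -d --name web --network app-net my-web-app
    # "web" can now reach the database simply by the name "db"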
Q11
You deploy a container with docker run -p 8080:80 nginx. A security scan reports that the service is accessible on all host interfaces, including public ones. What is the correct way to restrict the port binding to the loopback interface only?
A docker run -p 80:8080 nginx
B docker run -p 127.0.0.1:8080:80 nginx
C docker run --expose 80 -p 8080 nginx
D docker run -p 8080:80 --network host nginx
Correct Answer
docker run -p 127.0.0.1:8080:80 nginx
Explanation
B is correct. Docker port binding syntax supports an optional IP address prefix: [host_ip:]host_port:container_port. Specifying 127.0.0.1:8080:80 binds the host port 8080 only on the loopback interface, making the service inaccessible from external network interfaces. A is wrong. This reverses host and container ports — the host port would be 80 and the container port 8080 — and it still binds to all interfaces. C is wrong. --expose marks a port in the image metadata for inter-container communication; it does not publish the port to the host. The -p 8080 form without a container port is invalid syntax. D is wrong. --network host removes network isolation entirely — the container uses the host's network stack directly. This makes the restriction worse, not better.
Q12
In a Docker Swarm cluster, two services need to communicate with each other by service name. Which network type and configuration enables this?
A Create a bridge network and attach both services to it; Swarm DNS resolves service names automatically
B Create an overlay network with --attachable and deploy both services connected to it; the Swarm-embedded DNS resolves service names across nodes
C Use --network host for both services so they share the host's network namespace
D Attach both services to the ingress network, which handles all inter-service communication in Swarm
Correct Answer
Create an overlay network with --attachable and deploy both services connected to it; the Swarm-embedded DNS resolves service names across nodes
Explanation
B is correct. Overlay networks span multiple Docker hosts in a Swarm. When services are connected to the same user-defined overlay network, the Swarm DNS automatically resolves service names to their VIP (Virtual IP) or individual task IPs. The --attachable flag allows standalone containers to also attach if needed. A is wrong. Bridge networks are single-host only. Services on different Swarm nodes cannot communicate via a bridge network, and Swarm DNS does not operate on bridge networks in this context. C is wrong. --network host bypasses Docker networking, exposing containers directly on the host network. Services on different hosts would need direct host IP routing — there's no service-name DNS resolution. D is wrong. The ingress network is Swarm's built-in network for routing mesh (external traffic load balancing to services). It is not designed or used for service-to-service internal communication.
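For example (service and image names are placeholders):

    docker network create --driver overlay --attachable backend
    docker service create --name api --network backend my-api-image
    docker service create --name worker --network backend my-worker-image
    # tasks of "worker" resolve "api" via Swarm DNS on any node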
Q13
A container is running a legacy application that requires binding to port 80 (a privileged port below 1024). Instead of running the container as root, which is the minimal capability required?
A CAP_SYS_ADMIN
B CAP_NET_RAW
C CAP_NET_BIND_SERVICE
D CAP_NET_ADMIN
Correct Answer
CAP_NET_BIND_SERVICE
Explanation
C is correct. CAP_NET_BIND_SERVICE is the specific Linux capability that allows a process to bind to privileged ports (ports below 1024) without running as root. It can be added with --cap-add NET_BIND_SERVICE. A is wrong. CAP_SYS_ADMIN is an extremely broad capability (often called "the new root") that grants dozens of system administration privileges. Granting it for a port binding requirement is a severe over-privilege. B is wrong. CAP_NET_RAW allows a process to use raw sockets and PACKET sockets — used for tools like ping or packet sniffers. It does not grant the ability to bind to privileged ports. D is wrong. CAP_NET_ADMIN allows various network administration operations (interface configuration, firewall rules, routing, etc.). It does not specifically grant privileged port binding and is much broader than needed.
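A hardened variant of the run command (dropping all capabilities first is an extra precaution, not required by the question; the image name is a placeholder):

    docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE -p 80:80 legacy-app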
Q14
A security team requires that a container be completely prevented from gaining additional privileges through setuid or setgid binaries. Which docker run flag enforces this at the kernel level?
A --read-only
B --cap-drop ALL
C --security-opt no-new-privileges
D --userns-remap
Correct Answer
--security-opt no-new-privileges
Explanation
C is correct. --security-opt no-new-privileges sets the no_new_privs bit on the process, which is a kernel-level enforcement that prevents any child process from gaining additional privileges via setuid/setgid executables or file capabilities, even if such binaries exist in the container. A is wrong. --read-only makes the container's root filesystem read-only, preventing writes. While useful for security, it doesn't specifically prevent privilege escalation through executing existing setuid binaries. B is wrong. --cap-drop ALL removes Linux capabilities from the container, but a setuid binary can still execute and potentially escalate if capabilities aren't perfectly managed. no-new-privileges is the explicit kernel guarantee against this. D is wrong. --userns-remap remaps the container's user and group IDs to a range on the host, isolating the container's root from the host's root. It's a different (complementary) security mechanism, not specifically targeted at setuid binaries.
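Example usage (the image name is a placeholder); the setting can also be made the daemon-wide default via "no-new-privileges": true in daemon.json:

    docker run -d --security-opt no-new-privileges my-app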
Q15
A DevOps engineer needs to pass a database password to a containerized application in Docker Swarm. Which approach follows Docker's recommended security practice for secrets management?
A Pass the password as an environment variable in the docker service create command
B Bake the password into the Docker image at build time using ARG
C Store the password using docker secret create and reference it in the service definition; it will be mounted as a file in /run/secrets/
D Mount a host file containing the password as a bind mount inside the container
Correct Answer
Store the password using docker secret create and reference it in the service definition; it will be mounted as a file in /run/secrets/
Explanation
C is correct. Docker Swarm secrets are encrypted at rest in the Raft log and encrypted in transit via mutual TLS between Swarm nodes. They are only decrypted and made available to authorized services, mounted as in-memory tmpfs files at /run/secrets/<secret_name>. This is the most secure approach. A is wrong. Environment variables are visible in docker inspect, process listings (/proc/<pid>/environ), and are often inadvertently logged by applications. They are not recommended for sensitive data. B is wrong. Build-time ARG values appear in the image's layer history (docker history) and are therefore visible to anyone with access to the image — a serious security violation for secrets. D is wrong. Bind mounts expose the host filesystem path, the secret file is visible on the host, and access controls depend entirely on host filesystem permissions. There is no encryption or Swarm-level access control.
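A minimal sketch (the secret value and image name are placeholders):

    echo -n 'S3cr3tP@ss' | docker secret create db_password -
    docker service create --name app --secret db_password my-app-image
    # the application reads the value from /run/secrets/db_password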

DCA-100 Practice Set-02

15 questions
Q1
Which Linux kernel feature does Docker primarily use to limit a container's access to host resources such as CPU, memory, and I/O bandwidth?
A Linux namespaces
B Control Groups (cgroups)
C Seccomp profiles
D AppArmor/SELinux
Correct Answer
Control Groups (cgroups)
Explanation
B is correct. Control Groups (cgroups) are the kernel mechanism that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network bandwidth) of process groups. Docker uses cgroups to enforce resource constraints set with flags like --memory, --cpus, --blkio-weight, etc. A is wrong. Linux namespaces provide isolation — they limit what processes can see (process tree, network interfaces, mounts, users, etc.). They isolate visibility, not resource consumption. C is wrong. Seccomp (Secure Computing Mode) restricts which system calls a container process can make. It's about syscall filtering, not resource limiting. D is wrong. AppArmor and SELinux are Mandatory Access Control (MAC) systems that enforce access control policies on files, capabilities, and network access. They are not resource-limiting mechanisms.
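For example, resource limits enforced through cgroups (the values and image name are arbitrary):

    docker run -d --memory 512m --cpus 1.5 --blkio-weight 300 my-app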
Q2
A docker-compose.yml file defines a web service and a db service. The web service has depends_on: [db]. A developer starts the stack and finds the web container is crashing because it cannot connect to the database. What is the most likely cause?
A depends_on is not supported in Compose v3 and was silently ignored
B depends_on only ensures the db container starts before web, not that the database process inside is ready to accept connections
C The web container must be on the same network as db, which is not automatic in Docker Compose
D depends_on requires a healthcheck defined on the dependent service or it defaults to a 30-second sleep
Correct Answer
depends_on only ensures the db container starts before web, not that the database process inside is ready to accept connections
Explanation
B is correct. depends_on controls container start order — Docker waits until the db container is in a running state before starting web. It does not wait for the database application inside the container to be ready to accept connections. Database engines like PostgreSQL or MySQL take time to initialize, and web may attempt to connect before they're ready. The fix is to combine depends_on with a healthcheck on db and use condition: service_healthy. A is wrong. depends_on is supported in Compose v3 and is not ignored; however, condition-based waiting (service_healthy or service_completed_successfully) must be written explicitly and is only honored by Compose implementations that support it, such as the Compose Specification used by modern docker compose. C is wrong. Docker Compose automatically creates a default network for each project and attaches all services to it, enabling DNS resolution by service name. D is wrong. There is no default 30-second sleep behavior in depends_on. Without a health check condition, it purely checks that the container has transitioned to a running state.
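A sketch of the healthcheck-based fix, assuming a Compose implementation that supports condition (image names are placeholders):

    services:
      db:
        image: postgres:16
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U postgres"]
          interval: 5s
          timeout: 3s
          retries: 10
      web:
        image: my-web-app
        depends_on:
          db:
            condition: service_healthy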
Q3
A Compose file defines a service with deploy.replicas: 3. A developer runs docker-compose up and observes only one instance of the service is started. Why?
A docker-compose up respects deploy.replicas only when the --scale flag is also provided
B The deploy key in a Compose file is only honored when deploying to Docker Swarm using docker stack deploy; docker-compose ignores it
C The service must define restart: always before deploy.replicas takes effect
D Compose requires version: '3.8' or higher for deploy.replicas to be respected
Correct Answer
The deploy key in a Compose file is only honored when deploying to Docker Swarm using docker stack deploy; docker-compose ignores it
Explanation
B is correct. The deploy key (including replicas, resources, placement, update_config, etc.) is part of the Swarm-mode deployment specification. When using docker-compose up, these keys are ignored. To run multiple replicas with Docker Compose, use docker-compose up --scale web=3 or use docker stack deploy with a Swarm cluster. A is wrong. Even with --scale, deploy.replicas in the file itself is still ignored by docker-compose; --scale is a separate CLI override that works independently. C is wrong. restart and deploy.replicas are independent settings. One does not gate the other, and restart has no effect on replica count. D is wrong. The version of the Compose file format does not change this behavior. The deploy block is a Swarm-only concept regardless of schema version.
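For example:

    docker-compose up -d --scale web=3
    # or, against a Swarm cluster, where deploy.replicas is honored:
    docker stack deploy -c docker-compose.yml mystack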
Q4
In a Docker Compose file, you define two services that should communicate over an encrypted custom network. You want to ensure these services cannot communicate with a third service on a different network. Which Compose networking configuration achieves this isolation?
A Define all three services in the same Compose file; isolation is automatic based on service name
B Define internal: true on the shared network between the two services and attach the third service only to a separate network
C Use icc: false in daemon.json and create a shared network between the two isolated services
D Deploy all services with --network host and use firewall rules to block cross-service traffic
Correct Answer
Define internal: true on the shared network between the two services and attach the third service only to a separate network
Explanation
B is correct. Defining a custom network with internal: true prevents any external access and keeps communication scoped within that network. By attaching only the two intended services to this network and placing the third service exclusively on a separate network, you achieve network-level isolation. Services on different networks cannot communicate unless explicitly connected to the same network. A is wrong. All services in the same Compose file are by default attached to the same project-level default network, meaning they can all communicate. Isolation is not automatic. C is wrong. icc: false disables inter-container communication on the default bridge network at the daemon level. However, it affects all containers on that bridge and is a blunt instrument — it doesn't provide selective, service-specific isolation with encryption. D is wrong. --network host removes all Docker network isolation — every container shares the host's network stack. This is the least isolated configuration possible and would make host-level firewall management complex and error-prone.
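A sketch of the network layout (service and network names are placeholders):

    services:
      api:
        image: my-api
        networks: [private]
      worker:
        image: my-worker
        networks: [private]
      frontend:
        image: my-frontend
        networks: [public]
    networks:
      private:
        internal: true
      public: {}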
Q5
A Docker Swarm service is deployed with --replicas 5. During a rolling update, you want to ensure that no more than 2 replicas are updated simultaneously and that the Swarm waits 30 seconds between each update batch. Which flags achieve this?
A --update-parallelism 2 --update-delay 30s
B --rollback-parallelism 2 --update-order 30s
C --update-max-failure-ratio 0.2 --update-delay 30s
D --restart-window 30s --update-parallelism 2
Correct Answer
--update-parallelism 2 --update-delay 30s
Explanation
A is correct. --update-parallelism 2 specifies the number of replicas updated at the same time (2 in this case). --update-delay 30s defines the time to wait between updating each batch of replicas. Together, they implement controlled, incremental rolling updates. B is wrong. --rollback-parallelism controls rollback behavior, not the update process. --update-order determines whether new tasks start before or after old ones are stopped — it doesn't accept a time value. C is wrong. --update-max-failure-ratio sets the threshold of failures that triggers a rollback during an update. It doesn't control parallelism or timing between update batches. D is wrong. --restart-window defines the time window in which Docker evaluates whether a container's restart attempts indicate a failure. It is part of the restart policy, not the update configuration.
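For example, setting these at creation time or adjusting an existing service:

    docker service create --name web --replicas 5 \
      --update-parallelism 2 --update-delay 30s nginx:1.25

    docker service update --update-parallelism 2 --update-delay 30s web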
Q6
A Swarm service is deployed with mode global. How does this differ from a replicated service, and when should it be used?
A A global service runs one task per Swarm manager node; it's used to ensure control-plane redundancy
B A global service runs exactly one task on every node in the Swarm (subject to placement constraints); it's suited for agents, monitoring tools, or log collectors that must run on every host
C A global service ignores placement constraints and runs on all available nodes, including those with --availability=drain
D A global service is identical to a replicated service with --replicas set equal to the number of nodes, but auto-scales when nodes are added or removed
Correct Answer
A global service runs exactly one task on every node in the Swarm (subject to placement constraints); it's suited for agents, monitoring tools, or log collectors that must run on every host
Explanation
B is correct. In global mode, Swarm automatically schedules exactly one task per node that satisfies any defined placement constraints. As nodes join the Swarm, tasks are automatically scheduled on them. This makes global services ideal for infrastructure-level workloads like Prometheus node-exporters, log shippers (Fluentd, Filebeat), or security agents. A is wrong. Global services run on all nodes by default (workers and managers), not just managers. Manager-only scheduling requires a placement constraint (node.role == manager). C is wrong. Nodes with --availability=drain are explicitly excluded from receiving new task scheduling, including global service tasks. Placement constraints and node availability are both respected. D is wrong. While the behavior resembles this in a steady state, they are fundamentally different. A replicated service with a fixed --replicas count does not automatically add or remove tasks as nodes join or leave — you must manually update the replica count. Global mode handles this automatically.
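For example, a monitoring agent deployed on every Linux node (the image and constraint are illustrative):

    docker service create --mode global --name node-exporter \
      --constraint 'node.platform.os == linux' prom/node-exporter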
Q7
A Swarm service published on port 8080 using routing mesh receives a request on a worker node that has no running replica of that service. What happens to the request?
A The request is dropped and the client receives a connection refused error
B The request is transparently forwarded by the IPVS-based routing mesh to a node that is running a healthy replica of the service
C The request is forwarded to the Swarm manager, which proxies it to the correct worker node
D The worker node returns a 503 Service Unavailable until a replica is scheduled on that node
Correct Answer
The request is transparently forwarded by the IPVS-based routing mesh to a node that is running a healthy replica of the service
Explanation
B is correct. Docker Swarm's routing mesh (implemented using Linux IPVS and iptables) means every node in the Swarm listens on published ports, regardless of whether it runs a task for that service. Incoming requests on any node are transparently load-balanced to one of the healthy service replicas across the cluster using the VIP (Virtual IP) mechanism. A is wrong. The routing mesh specifically exists to prevent this scenario. Connection is accepted on the node and forwarded internally. C is wrong. Manager nodes are not request proxies. The routing mesh is handled at the node kernel level using IPVS — there is no manager-in-the-middle for data-plane traffic. D is wrong. The service is accessible as long as at least one healthy replica exists anywhere in the Swarm. The absence of a local replica does not affect availability to external clients.
Q8
A company wants to ensure that only images signed by their internal CI pipeline can be deployed in production. Which Docker feature enforces this policy at the client level?
A Docker Content Trust (DCT) with Notary, enforced by setting DOCKER_CONTENT_TRUST=1
B Image digest pinning using sha256: references in all Dockerfiles
C Enabling --image-verification flag on the Docker daemon
D Configuring the registry with RBAC so only CI service accounts can push images
Correct Answer
Docker Content Trust (DCT) with Notary, enforced by setting DOCKER_CONTENT_TRUST=1
Explanation
A is correct. Docker Content Trust (DCT), built on Notary (The Update Framework / TUF), provides cryptographic signing and verification of images. When DOCKER_CONTENT_TRUST=1 is set, the Docker client only pulls, runs, or builds from images with valid, trusted signatures. The CI pipeline signs images on push, and any unsigned or invalidly signed image is rejected. B is wrong. Digest pinning (image@sha256:...) ensures image immutability — you always get the exact same image bits. However, it does not verify who created or signed the image. Any image with that digest can be pulled, regardless of origin. C is wrong. --image-verification is not a real Docker daemon flag. This option does not exist in Docker. D is wrong. Registry RBAC controls who can push and pull images — it's access control at the registry level. It does not prevent someone from pushing an unsigned image or running an image that bypassed the registry.
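Example usage, assuming the registry has an associated trust (Notary) service and the tag was signed on push (the image reference is a placeholder):

    export DOCKER_CONTENT_TRUST=1
    docker pull registry.example.com/team/app:v1.0   # rejected unless validly signed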
Q9
An engineer runs docker pull myregistry.example.com/team/app:v2.1 and receives Error: unauthorized: authentication required. The registry is private. What is the correct sequence of steps to resolve this?
A Run docker login myregistry.example.com, provide credentials, then retry the pull
B Add the registry to the insecure-registries list in daemon.json and restart the daemon
C Run docker trust inspect myregistry.example.com/team/app:v2.1 to retrieve the credentials
D Set DOCKER_CONTENT_TRUST=0 to bypass authentication for private registries
Correct Answer
Run docker login myregistry.example.com, provide credentials, then retry the pull
Explanation
A is correct. docker login <registry> authenticates the Docker CLI against the private registry. Credentials are stored (in the system credential store or ~/.docker/config.json) and subsequent docker pull, push, and run commands will include the authentication token automatically. B is wrong. insecure-registries configures Docker to communicate with a registry over plain HTTP instead of HTTPS (or to accept self-signed TLS certificates). It is related to transport security, not authentication. An unauthorized error is an auth failure, not a TLS error. C is wrong. docker trust inspect examines image signing metadata (Notary trust data). It does not authenticate to the registry or retrieve credentials. D is wrong. DOCKER_CONTENT_TRUST controls image signature verification. Setting it to 0 disables trust checking but has absolutely no effect on authentication. An unauthorized error requires valid credentials.
Q10
You want to retag a local image myapp:v1.0 and push it to Docker Hub under your account devteam as myapp:stable. What is the correct sequence of commands?
A docker tag myapp:v1.0 devteam/myapp:stable then docker push devteam/myapp:stable
B docker rename myapp:v1.0 devteam/myapp:stable then docker push devteam/myapp:stable
C docker commit myapp:v1.0 devteam/myapp:stable then docker push devteam/myapp:stable
D docker tag devteam/myapp:stable myapp:v1.0 then docker push myapp:v1.0
Correct Answer
docker tag myapp:v1.0 devteam/myapp:stable then docker push devteam/myapp:stable
Explanation
A is correct. docker tag <source> <target> creates a new tag pointing to the same image layers as the source — no data is copied, it's a metadata operation. The target format <registry>/<account>/<image>:<tag> tells Docker Hub where to push it. docker push then uploads the layers and manifest. B is wrong. docker rename renames a running container, not an image. It has no effect on image tags. C is wrong. docker commit creates a new image from the current state of a running container's filesystem. It's not used for retagging — it creates entirely new image layers. D is wrong. The arguments are reversed. docker tag takes source then target. This command would attempt to tag a (likely nonexistent) devteam/myapp:stable as myapp:v1.0, and then push myapp:v1.0 which would fail due to the missing registry prefix.
Q11
A developer uses docker exec -it <container_id> /bin/bash to open a shell in a running container. Which statement correctly describes what happens at the kernel level?
A A new container is created with a shared network namespace but separate PID and mount namespaces
B A new process is spawned inside the existing container's namespaces (PID, mount, network, IPC) — it shares the container's namespace context without creating a new container
C The Docker daemon forks a new subprocess with a copy of the container's filesystem for the exec session
D The exec process runs in the host's namespaces and uses a chroot into the container's overlay filesystem
Correct Answer
A new process is spawned inside the existing container's namespaces (PID, mount, network, IPC) — it shares the container's namespace context without creating a new container
Explanation
B is correct. docker exec uses the setns() syscall to join the existing namespaces (PID, mount, network, IPC, UTS) of the target container and then spawns a new process within that context. The exec'd process sees the same process tree, network interfaces, filesystem mounts, and hostname as the container — it is truly running inside the container's namespace context. A is wrong. No new container is created. The exec'd process shares all namespaces with the existing container, not just the network namespace. C is wrong. Docker does not copy the filesystem for exec sessions. The overlay filesystem is shared — exec'd processes see the same filesystem state as the main container process. D is wrong. The exec process does not run in host namespaces. Using host namespaces would defeat isolation and expose the full host environment. setns() places the process firmly inside the container's namespaces.
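A quick way to observe this, assuming the image ships bash and ps:

    docker exec -it <container_id> /bin/bash
    ps -o pid,comm -p 1    # inside the shell, PID 1 is the container's original entrypoint process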
Q12
An image is built with the following Dockerfile excerpt:

    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y curl
    RUN apt-get install -y wget

Which optimization reduces the number of layers and improves caching for dependency installation?
A Replace both RUN instructions with CMD apt-get install -y curl wget
B Combine both RUN instructions into a single RUN instruction using &&, and chain the installs: RUN apt-get update && apt-get install -y curl wget
C Use COPY --chown to transfer a pre-built apt cache into the image
D Add ENV DEBIAN_FRONTEND=noninteractive after both RUN instructions to speed up installs
Correct Answer
Combine both RUN instructions into a single RUN instruction using &&, and chain the installs: RUN apt-get update && apt-get install -y curl wget
Explanation
B is correct. Each RUN instruction creates a separate image layer. Combining commands with && into a single RUN reduces layer count, keeping image size smaller. Critically, apt-get update and apt-get install must be in the same RUN command — if they're separated, a cached apt-get update layer may be reused with a stale package list, leading to the "cache busting" pitfall. Installing all packages together in one call is both efficient and cache-correct. A is wrong. CMD defines the default command to run when a container starts — it does not execute during the image build. Using CMD for package installation would do nothing at build time and would overwrite any entrypoint at runtime. C is wrong. Copying a pre-built apt cache is fragile, non-portable, and generally an anti-pattern. It doesn't reduce layers meaningfully and introduces maintenance complexity. D is wrong. ENV DEBIAN_FRONTEND=noninteractive suppresses interactive prompts during package installation (useful to set before installs, not after). While a useful addition, it doesn't reduce layers or address the update/install separation problem.
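The combined instruction, with an optional cleanup step that keeps the layer small:

    FROM ubuntu:22.04
    RUN apt-get update && \
        apt-get install -y curl wget && \
        rm -rf /var/lib/apt/lists/*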
Q13
A containerized application needs to communicate directly with a service running on the host machine (e.g., a locally running database on localhost:5432). The container is running on a Linux Docker host. What is the correct way to reach the host from within the container?
A Use localhost or 127.0.0.1 inside the container — it resolves to the host automatically
B Use the Docker bridge gateway IP (typically 172.17.0.1) or the special DNS name host.docker.internal (if configured) to reach the host
C Mount the host's /etc/hosts file as a bind mount so the container inherits host DNS entries
D Use --network host for all containers that need to access host services to eliminate network address translation
Correct Answer
Use the Docker bridge gateway IP (typically 172.17.0.1) or the special DNS name host.docker.internal (if configured) to reach the host
Explanation
B is correct. Inside a container, localhost refers to the container's own loopback, not the host. On Linux, the host is typically reachable at the Docker bridge gateway IP (172.17.0.1 for the default bridge, or the gateway of the user-defined network). host.docker.internal is a DNS name available natively on Docker Desktop (Mac/Windows) and can be manually configured on Linux hosts via --add-host=host.docker.internal:host-gateway. A is wrong. localhost and 127.0.0.1 within a container resolve to the container's own loopback interface due to network namespace isolation. The host's localhost is not accessible this way. C is wrong. Mounting the host's /etc/hosts would override the container's name resolution but wouldn't add the correct host IP mapping automatically. /etc/hosts on the host doesn't typically list the host itself under a resolvable name for this purpose. D is wrong. --network host is a valid workaround (the container sees the host's full network stack), but it removes all network isolation — it's a significant security trade-off and should not be the default recommendation just to access one host service.
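On Docker 20.10 and later, the special host-gateway value makes the DNS name available on Linux as well (the image name is a placeholder):

    docker run -d --add-host=host.docker.internal:host-gateway my-app
    # the application can then connect to host.docker.internal:5432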
Q14
You are writing a Dockerfile for a Go application. The build process requires the Go toolchain (several hundred MB), but the final binary is a single statically compiled executable. Which Dockerfile approach produces the smallest possible production image?
A Use FROM golang:1.22 as the only stage and run go build; use .dockerignore to exclude the source code after build
B Use a multi-stage build: compile in FROM golang:1.22 AS builder, then copy only the binary into FROM scratch or FROM gcr.io/distroless/static as the final stage
C Use FROM alpine:3.19 as the base, install Go, compile the binary, then uninstall Go in the same RUN instruction
D Use FROM golang:1.22-alpine to minimize the base image size while retaining the full Go toolchain
Correct Answer
Use a multi-stage build: compile in FROM golang:1.22 AS builder, then copy only the binary into FROM scratch or FROM gcr.io/distroless/static as the final stage
Explanation
B is correct. Multi-stage builds are the canonical solution here. The golang:1.22 builder stage has access to the full toolchain to compile the binary. The final stage uses FROM scratch (empty image — zero base overhead) or FROM distroless/static (which adds only minimal runtime libraries for DNS, TLS, etc.). The final image contains only the binary, resulting in images often under 10 MB. A is wrong. .dockerignore prevents files from entering the build context but cannot remove files from completed layers. The Go toolchain and source code would still be present in the image layers even if excluded from the context later. C is wrong. Even if Go is uninstalled in the same RUN layer (which would be correct layer hygiene), alpine + Go compiler + build dependencies still yields a much larger image than scratch. Go toolchain installation on Alpine requires significant packages. D is wrong. golang:1.22-alpine is a smaller variant of the Go builder image, but it still includes the complete Go toolchain (~300+ MB). It's appropriate as a builder stage, not as a production image.
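A minimal sketch of the pattern (the module path and binary name are illustrative):

    FROM golang:1.22 AS builder
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app ./cmd/server

    FROM scratch
    COPY --from=builder /app /app
    ENTRYPOINT ["/app"]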
Q15
A container is running with the default Docker seccomp profile. A developer tries to use strace inside the container to debug a system call issue and receives Operation not permitted. What is the underlying reason, and what is the appropriate fix?
A strace requires root inside the container; fix by running the container with --user root
B The default Docker seccomp profile blocks the ptrace syscall used by strace; fix by running the container with --security-opt seccomp=unconfined or a custom profile that permits ptrace
C Seccomp profiles block all debugging tools by default; the only fix is to disable AppArmor using --security-opt apparmor=unconfined
D The container needs CAP_SYS_PTRACE added with --cap-add SYS_PTRACE; seccomp is not involved
Correct Answer
The default Docker seccomp profile blocks the ptrace syscall used by strace; fix by running the container with --security-opt seccomp=unconfined or a custom profile that permits ptrace
Explanation
B is correct. Docker's default seccomp profile blocks a number of syscalls considered risky, including ptrace, which is what strace uses to intercept and record system calls of other processes. Running with --security-opt seccomp=unconfined disables seccomp filtering entirely (appropriate only in trusted development environments), or a custom seccomp JSON profile can be crafted to allow only ptrace while keeping other restrictions in place. A is wrong. While strace typically requires elevated privileges, running as root inside the container doesn't overcome a seccomp restriction — seccomp policies apply regardless of the user ID inside the container. C is wrong. AppArmor and seccomp are separate security layers. AppArmor provides path-based and capability-based MAC policies. The ptrace blocking is a seccomp filter, not an AppArmor rule. Disabling AppArmor would not allow ptrace. D is wrong. This is a common misconception. While CAP_SYS_PTRACE is the capability required to use ptrace, seccomp filtering operates below the capability layer — even with the capability granted, a seccomp profile that blocks ptrace will still deny the syscall. Both CAP_SYS_PTRACE and a permissive seccomp profile are required. The primary blocker here is seccomp.
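For example, in a trusted development environment (the image name and profile path are placeholders):

    docker run -it --security-opt seccomp=unconfined my-debug-image strace -f ls
    # or keep filtering in place but use a custom profile that permits ptrace:
    docker run -it --security-opt seccomp=/path/to/custom-profile.json my-debug-image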

Want More Practice?

These are just the free questions. Unlock the full Docker Certified Associate (DCA) exam library with hundreds of additional questions, timed practice mode, and progress tracking.
