Free Practice Questions • Kubernetes & Cloud Native Security Associate • 29 Questions with Answers
FREE QUESTIONS
Kubernetes & Cloud Native Security Associate Practice Questions
29 free questions with correct answers and detailed explanations.
29 Free Questions
2 Free Exams
100% With Explanations
KCSA Practice Set-01
14 questions
Q1
Scenario: Your security team discovers that a container image in production was built from a Dockerfile with no pinned base image digest - only a floating tag (node:18). The CI pipeline has no image signing step.
Which combination of controls best addresses the supply chain risks in this scenario?
A
Pin the base image using its SHA256 digest in the Dockerfile and enforce image signature verification via Cosign and a policy engine like Kyverno at admission time
B
Scan the image with Trivy in the CI pipeline and block deployment if critical CVEs are found
C
Use a private registry that mirrors Docker Hub to avoid tag mutation, and add a PodSecurityPolicy to block privileged containers
D
Require that all images include a valid SBOM attached as an OCI artifact, and scan SBOMs offline for known vulnerabilities
Correct Answer
Pin the base image using its SHA256 digest in the Dockerfile and enforce image signature verification via Cosign and a policy engine like Kyverno at admission time
Explanation
Option A addresses both risks directly. Digest pinning eliminates tag mutation (where :18 could silently point to a different image), and Cosign signing combined with Kyverno admission policy ensures only cryptographically verified images reach the cluster. This is the defense-in-depth approach recommended by SLSA and CNCF supply chain security guidance.
B is wrong because Trivy scanning catches known CVEs but does not solve tag mutation or tampering. An attacker who replaces the image at the same tag bypasses scanning done at build time.
C is wrong because a private mirror helps with tag mutation but does not verify image integrity or provenance. PodSecurityPolicy was deprecated in 1.21 and removed in 1.25, and it does not address supply chain concerns regardless.
D is wrong because SBOMs are valuable for transparency but do not by themselves prevent deployment of an unsigned or tampered image. Scanning the SBOM offline still leaves a window between build and deploy.
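The admission-time half of option A can be sketched as a Kyverno ClusterPolicy. The policy name, registry pattern, and public key below are illustrative placeholders, not values from the scenario:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images   # placeholder name
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.company.com/*"   # assumed organization registry
          attestors:
            - entries:
                - keys:
                    # The organization's Cosign public key (elided)
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```

Digest pinning is handled separately in the Dockerfile, e.g. FROM node:18@sha256:<digest> (placeholder digest) instead of the floating node:18 tag.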
Q2
Scenario: A Falco alert fires — rule: Terminal shell in container, container=api-server, command=bash. The container is supposed to be a stateless REST API with no legitimate need for shell access.
What is the MOST appropriate immediate response, and what longer-term control would prevent this class of alert from being a security gap?
A
Immediately restart the Pod; long-term, add a liveness probe to detect shell processes
B
Quarantine the Pod by removing its labels to detach it from the Service, collect forensics, and long-term enforce a seccomp profile that blocks execve for shell binaries
C
Alert the on-call engineer and document the incident; long-term, use PodDisruptionBudgets to prevent unauthorized restarts
D
Drain the node the Pod is running on; long-term, enforce AppArmor profiles cluster-wide using a MutatingAdmissionWebhook
Correct Answer
Quarantine the Pod by removing its labels to detach it from the Service, collect forensics, and long-term enforce a seccomp profile that blocks execve for shell binaries
Explanation
Removing labels detaches the Pod from the Service (cutting live traffic) without destroying forensic evidence. Killing or restarting the Pod would destroy in-memory artifacts needed for investigation. Long-term, a seccomp profile restricting execve or a custom AppArmor profile denying shell executables reduces the attack surface for this class of container breakout attempt.
A is wrong because restarting the Pod destroys all forensic state. Liveness probes are for application health — they cannot detect or prevent shell invocations.
C is wrong because documentation is important but is not the primary response action. PodDisruptionBudgets manage availability during voluntary disruptions and have no security enforcement function.
D is wrong because draining the entire node is disproportionate and impacts all other workloads on it. Node-level action is only warranted if node compromise is confirmed, which has not been established here.
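As a rough sketch of the long-term control, a denylist-style seccomp profile that returns an error for process-spawning syscalls might look like the following (permissive default assumed for illustration):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["execve", "execveat"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

Note that denying execve unconditionally can also break the container's own entrypoint exec and any exec-based probes on most runtimes, so production profiles are usually default-deny allowlists tailored to the syscalls the application actually needs.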
Q3
You run kube-bench against a control plane node and see the following failing check: [FAIL] 1.2.1 Ensure that the --anonymous-auth argument is set to false. Which component does this check target, and what is the security risk if left unremediated?
A
The kubelet; anonymous authentication allows unauthenticated users to query the kubelet API, potentially exposing node and Pod metadata
B
The kube-apiserver; anonymous authentication allows requests without credentials to be processed, bypassing authentication but still subject to authorization
C
The kube-apiserver; anonymous authentication allows fully unauthenticated and unauthorized access to all API resources in the cluster
D
The etcd server; anonymous clients could read or write cluster state directly without credentials
Correct Answer
The kube-apiserver; anonymous authentication allows requests without credentials to be processed, bypassing authentication but still subject to authorization
Explanation
CIS check 1.2.1 targets the kube-apiserver. When --anonymous-auth=true (the default), requests with no credentials are assigned the system:anonymous user and system:unauthenticated group. They still pass through the authorizer (RBAC), so they are not fully unauthorized — but any RBAC rules granting access to those identities (including default ClusterRoleBindings) become exploitable without credentials.
A is wrong because the kubelet does have its own --anonymous-auth flag but it falls under CIS section 4, not 1.2. Check 1.2.1 specifically targets the kube-apiserver by CIS benchmark numbering.
C is wrong because anonymous auth does NOT bypass authorization. The risk is indirect — misconfigured RBAC for anonymous subjects — not a blanket bypass of all access controls.
D is wrong because etcd authentication is a separate concern covered in CIS section 2. etcd has its own authentication model entirely distinct from this check.
Q4
A security engineer wants to prevent a containerized workload from making Linux kernel calls not needed by the application — for example ptrace, mount, and reboot. Which Kubernetes-native mechanism is most appropriate, and at what layer does it operate?
A
AppArmor profile — operates at the filesystem and capability layer, loaded by the container runtime
B
Seccomp profile — operates at the Linux kernel syscall layer, filtering system calls before they reach the kernel
C
Linux capabilities via securityContext.capabilities.drop: [ALL] — removes all POSIX capabilities, effectively blocking dangerous syscalls
D
OPA/Gatekeeper admission policy — intercepts the API request at admission time and rejects Pods that lack seccomp annotations
Correct Answer
Seccomp profile — operates at the Linux kernel syscall layer, filtering system calls before they reach the kernel
Explanation
Seccomp (Secure Computing Mode) is the kernel-level mechanism that intercepts syscalls and allows or denies them based on a profile before the kernel processes them. Since Kubernetes 1.19, seccomp profiles can be applied natively via securityContext.seccompProfile. It directly addresses the requirement to block specific syscalls like ptrace, mount, or reboot.
A is wrong because AppArmor works on file access paths, network access, and Linux capabilities — not individual syscalls. It cannot block ptrace by syscall number.
C is wrong because dropping Linux capabilities reduces privilege but does not map 1:1 with syscalls. Some dangerous syscalls remain callable without any capability in certain conditions. Capabilities and syscalls are related but distinct layers.
D is wrong because OPA/Gatekeeper acts at admission control — it can enforce that a Pod has a seccomp profile configured, but it does not itself perform any syscall filtering at runtime.
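Applying such a profile natively might look like the sketch below. The profile filename and image are hypothetical, and the JSON profile must exist under the kubelet's seccomp directory (typically /var/lib/kubelet/seccomp/) on every node that can schedule the Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      # Path is relative to the kubelet's seccomp root directory
      localhostProfile: profiles/deny-dangerous.json
  containers:
    - name: app
      image: registry.example.com/app:v1.2.3   # hypothetical image
```

For workloads with no special syscall needs, seccompProfile: {type: RuntimeDefault} is a simpler baseline; the container runtime's default profile already blocks dozens of rarely needed syscalls, including reboot and mount.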
Q5
Scenario: A microservice in namespace payments needs to query the Kubernetes API to list its own Pods for health aggregation. The service account is bound to the following ClusterRole: verbs: ["*"], resources: ["*"], apiGroups: ["*"].
What is the primary security concern with this configuration, and what is the correct remediation?
A
The ClusterRole grants cluster-admin equivalent access. Remediate by creating a Role in payments with only get and list on pods, and bind it with a RoleBinding
B
The service account can only access resources within its namespace so the risk is low. Add a NetworkPolicy to restrict egress to the API server
C
The ClusterRole is fine but should be bound with a RoleBinding instead of ClusterRoleBinding to limit its scope to the namespace
D
Disable service account token automounting and use a projected service account token with a short TTL instead
Correct Answer
The ClusterRole grants cluster-admin equivalent access. Remediate by creating a Role in payments with only get and list on pods, and bind it with a RoleBinding
Explanation
Wildcards on verbs, resources, and apiGroups are functionally equivalent to cluster-admin. Any compromise of this Pod gives an attacker full cluster control. The correct fix is creating a namespaced Role with minimal verbs (get and list) on only the pods resource, bound via a RoleBinding — pure least-privilege principle.
B is wrong because a ClusterRole bound with a ClusterRoleBinding is cluster-wide regardless of namespace. The service account is not limited to its namespace and can read Secrets, create workloads, and escalate privileges anywhere in the cluster.
C is wrong because binding a wildcard ClusterRole via a RoleBinding does restrict the scope to the namespace, but keeping the wildcard still violates least privilege. The workload only needs pods access.
D is wrong because disabling automounting is a good general practice but does not solve the overbroad permissions. The service account still exists with those permissions and can be manually mounted or otherwise leveraged.
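The remediation described above might look like the following; the service account name health-aggregator is assumed for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]          # "" is the core API group, where Pods live
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: health-aggregator   # hypothetical service account name
    namespace: payments
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Note that apiGroups: [""] here is the legitimate empty-string core group, quite different from the wildcard "*" in the flawed ClusterRole.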
Q6
A team wants to adopt the SLSA (Supply Chain Levels for Software Artifacts) framework. At SLSA Level 2, which requirement is newly introduced that is NOT present at Level 1?
A
The build process must be scripted and not performed manually
B
Build provenance must be generated and signed by the build service
C
The build must run on a dedicated ephemeral environment with no persistent state between builds
D
All source code commits must be reviewed by at least two people before merging
Correct Answer
Build provenance must be generated and signed by the build service
Explanation
SLSA Level 1 requires that build provenance (metadata about how an artifact was built) exists, but it can be self-generated and unsigned. Level 2 adds the requirement that provenance is generated AND cryptographically signed by the build service itself — making it tamper-evident and attributable to the build system, not just the developer.
A is wrong because scripted builds are a SLSA Level 1 requirement. Level 1 already mandates that the build process is fully defined in a build script.
C is wrong because isolated ephemeral build environments with no persistence between builds are a SLSA Level 3 requirement, not Level 2.
D is wrong because two-person review of source code is a SLSA Level 4 requirement related to source control integrity, not Level 2.
Q7
Scenario: You are reviewing a production cluster and find that etcd is accessible on port 2379 from any node in the cluster. TLS is enabled but client certificate authentication is not enforced (--client-cert-auth=false).
What is the correct remediation, and why is etcd a particularly critical target?
A
Apply a NetworkPolicy to restrict access to etcd port 2379. etcd is critical because it stores all Kubernetes objects including Secrets which are base64-encoded by default
B
Enable --client-cert-auth=true on etcd and restrict etcd network access to only the kube-apiserver via firewall rules or network ACLs. etcd stores the entire cluster state including Secrets, ConfigMaps, and token data
C
Rotate etcd's TLS certificates regularly and enable audit logging for etcd access. etcd is critical because it stores the entire cluster state
D
Enable RBAC on etcd using Kubernetes RBAC policies. etcd is critical because it runs as root and can escalate to node-level access
Correct Answer
Enable --client-cert-auth=true on etcd and restrict etcd network access to only the kube-apiserver via firewall rules or network ACLs. etcd stores the entire cluster state including Secrets, ConfigMaps, and token data
Explanation
Enabling --client-cert-auth=true ensures only clients with valid certificates (i.e., the kube-apiserver) can connect. Combined with network-level isolation allowing only the API server to reach port 2379, this follows defense in depth. etcd stores ALL cluster state — including Kubernetes Secrets (which without encryption-at-rest are just base64), service account tokens, and kubeconfig data. Compromising etcd is effectively a full cluster compromise.
A is wrong because NetworkPolicy only restricts Pod-to-Pod traffic within the cluster and does not apply to control plane host-level components like etcd. Firewall or ACL rules are required for host-level network isolation.
C is wrong because certificate rotation and audit logging are good practices but do not fix the authentication gap. An attacker can still connect without a client certificate while this is unresolved.
D is wrong because etcd does not use Kubernetes RBAC. etcd has its own authentication model. Kubernetes RBAC applies only to the kube-apiserver, not to direct etcd connections.
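On a kubeadm-style control plane, the authentication half of the remediation is an edit to the etcd static Pod manifest. The excerpt below assumes kubeadm's default PKI paths:

```yaml
# Excerpt from /etc/kubernetes/manifests/etcd.yaml (flags relevant to this check)
spec:
  containers:
    - name: etcd
      command:
        - etcd
        - --client-cert-auth=true                             # require client certificates
        - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt   # CA that signed the apiserver's etcd client cert
        - --cert-file=/etc/kubernetes/pki/etcd/server.crt
        - --key-file=/etc/kubernetes/pki/etcd/server.key
```

The network half (restricting port 2379 to the API server hosts) is done outside Kubernetes, e.g. with host firewall rules or cloud security groups.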
Q8
Which statement accurately describes the difference between a ValidatingAdmissionWebhook and a MutatingAdmissionWebhook in Kubernetes?
A
Validating webhooks run before mutating webhooks, and both can modify the incoming API object before it is persisted
B
Mutating webhooks can modify the incoming request object and run before validating webhooks; validating webhooks can only accept or reject a request without modification
C
Validating webhooks are synchronous and block the API request; mutating webhooks are asynchronous and run after the object is created
D
Both webhook types can reject requests, but only mutating webhooks can add labels or annotations to objects
Correct Answer
Mutating webhooks can modify the incoming request object and run before validating webhooks; validating webhooks can only accept or reject a request without modification
Explanation
In the Kubernetes admission chain, mutating webhooks run first and can modify the object — for example injecting sidecar containers, adding default labels, or enforcing defaults. After all mutations complete, the object goes through validating webhooks, which can only approve or deny. This ordering ensures that validation operates on the final, fully-mutated version of the object.
A is wrong because the order is reversed. Mutating runs before validating, not after. Also, only mutating webhooks can modify objects.
C is wrong because both types are synchronous and block the API request while they execute. Asynchronous processing is the domain of controllers, not admission webhooks.
D is wrong because validating webhooks can also reject requests. The real distinction is that validating webhooks cannot modify objects at all, not just that they lack label-writing ability.
Q9
You want to use Falco to detect when a process inside a container attempts to read /etc/shadow. Which Falco rule condition would correctly match this specific filesystem access event?
A
condition: evt.type = open AND fd.name = /etc/shadow AND container.id != host
B
condition: evt.type = execve AND proc.name = shadow AND container = true
C
condition: syscall.type = read AND file.path startswith /etc AND user.name != root
D
condition: evt.type = connect AND fd.name contains shadow AND k8s.pod.name exists
Correct Answer
condition: evt.type = open AND fd.name = /etc/shadow AND container.id != host
Explanation
Falco rules use a filter language in which evt.type = open matches the open syscall (real-world rules typically use evt.type in (open, openat) to also catch openat). fd.name = /etc/shadow matches the specific file path. container.id != host (equivalent to container = true) scopes the detection to containerized processes only, not the host OS. This is the canonical Falco pattern for detecting sensitive file access inside containers.
B is wrong because execve is a process execution event, not a file read. proc.name = shadow makes no sense as a process name. This rule would never fire on a file read.
C is wrong because Falco does not use field names like syscall.type or file.path. These do not exist in the Falco filter syntax.
D is wrong because evt.type = connect is a network syscall. It would never trigger on a file open or read operation. This rule fundamentally misidentifies the event type.
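Embedded in a complete rule, the winning condition might look like this sketch; the rule name, output format, and the added openat variant are illustrative, not from the question:

```yaml
- rule: Shadow File Opened in Container
  desc: Detect a containerized process opening /etc/shadow
  condition: >
    evt.type in (open, openat) and
    fd.name = /etc/shadow and
    container.id != host
  output: >
    /etc/shadow opened in container (user=%user.name
    command=%proc.cmdline container=%container.id
    image=%container.image.repository)
  priority: WARNING
  tags: [filesystem, mitre_credential_access]
```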
Q10
Scenario: An audit finds that worker nodes in a Kubernetes cluster are running sshd, avahi-daemon, and rpcbind services that are not required for node operation.
What is the security principle being violated, and what is the recommended action per CIS Kubernetes Benchmark guidance?
A
Principle of least privilege — disable or remove unnecessary services using systemctl disable and remove the packages entirely where possible
B
Principle of separation of duties — move these services to a dedicated node pool that is isolated from workload nodes
C
Principle of defense in depth — add a host-based firewall such as iptables to block the ports these services listen on
D
Principle of non-repudiation — enable audit logging on these services so their activity can be traced
Correct Answer
Principle of least privilege — disable or remove unnecessary services using systemctl disable and remove the packages entirely where possible
Explanation
The CIS Kubernetes Benchmark recommends minimizing the attack surface by disabling or removing services not required for node operation — this is the principle of least functionality, a subset of least privilege. Each running service is a potential attack vector. avahi-daemon has had remote code execution CVEs, rpcbind is a classic lateral movement target, and sshd expands the credential attack surface on the node.
B is wrong because separation of duties concerns conflicting responsibilities across roles. Moving services to another node pool still leaves them running and exposed — it does not reduce the attack surface.
C is wrong because adding a firewall is a compensating control, not a remediation. If an attacker is already on the node or the firewall is misconfigured, the service remains exploitable. Disabling the service entirely is the preferred action.
D is wrong because non-repudiation is about traceability and audit trails, not attack surface reduction. Logging the activity of a vulnerable service does not make it safe to run.
Q11
When using kube-bench to assess a cluster against CIS Kubernetes Benchmarks, which of the following statements about its findings is most accurate?
A
kube-bench findings are definitive vulnerabilities that must all be remediated immediately to achieve compliance
B
kube-bench checks are recommendations based on CIS Benchmark profiles; findings should be reviewed against organizational risk tolerance as some controls may have compensating controls or may be intentionally configured differently
C
kube-bench only checks kube-apiserver configuration and does not assess kubelet, etcd, or worker node settings
D
kube-bench automatically remediates failing checks when run with the --fix flag, making it a complete compliance automation tool
Correct Answer
kube-bench checks are recommendations based on CIS Benchmark profiles; findings should be reviewed against organizational risk tolerance as some controls may have compensating controls or may be intentionally configured differently
Explanation
CIS Benchmarks are best-practice recommendations, not absolute compliance mandates. kube-bench maps checks to benchmark levels (L1 and L2). Some findings may be handled by a managed Kubernetes provider in ways that don't appear in the check output. Others may be intentionally configured differently based on architectural decisions with compensating controls in place. A mature security program treats findings as inputs to a risk-based remediation process, not an unconditional fix list.
A is wrong because not all findings are equal in severity or applicability. Treating every finding as an immediate critical remediation without context ignores risk prioritization and can cause operational disruption.
C is wrong because kube-bench covers multiple CIS sections: master node components (1.x), etcd (2.x), control plane configuration (3.x), worker nodes and kubelet (4.x), and policies (5.x). It is not limited to the API server.
D is wrong because kube-bench has no --fix flag and performs no automated remediation. It is strictly a read-only auditing tool that produces a report for human review.
Q12
Scenario: A Kyverno ClusterPolicy is in place that denies any image not matching registry.company.com/*. A developer deploys a Pod with image registry.company.com/app:latest and it is created successfully. Later, an attacker injects a malicious layer into the image at the same tag on the registry.
Which additional control would have detected or prevented the deployment of the tampered image?
A
Add a Kyverno rule to deny images with the :latest tag, forcing the use of pinned version tags
B
Enable Kubernetes audit logging to capture all kubectl apply events with the image name
C
Enforce Cosign image signature verification in the Kyverno policy, requiring a valid signature from the organization's private key
D
Configure the container registry with immutable tags so the :latest tag cannot be overwritten after the initial push
Correct Answer
Enforce Cosign image signature verification in the Kyverno policy, requiring a valid signature from the organization's private key
Explanation
The existing Kyverno policy only checks the registry domain — it says nothing about image content integrity. Cosign signatures are cryptographically bound to a specific image digest. If the image is tampered with, its digest changes and the original signature becomes invalid. A Kyverno policy enforcing valid Cosign signatures would reject any image whose digest does not match a signature made with the organization's private key, regardless of the tag used.
A is wrong because banning :latest improves reproducibility but does not prevent tampering. An attacker can overwrite a pinned version tag such as v1.2.3 just as easily as :latest on most registries.
B is wrong because audit logging is a detective control, not a preventive one. It would record what was deployed after the fact but would not stop the tampered image from running.
D is wrong because immutable tags would prevent this specific attack vector and are a valid partial control. However, option C is more comprehensive — Cosign verification works even if the image is copied to a different tag or if immutability is somehow bypassed.
Q13
Scenario: You have three microservices in a namespace — frontend, api, and database. The requirement is: frontend can reach api, api can reach database, frontend must NOT reach database, and no external ingress to database is allowed.
Which NetworkPolicy applied to the database Pod correctly enforces this requirement using Kubernetes-native constructs?
A
Apply an Egress NetworkPolicy on frontend that denies traffic to database
B
Apply an Ingress NetworkPolicy on database that allows traffic only from Pods with label app: api and denies all other ingress
C
Apply a default-deny-all NetworkPolicy in the namespace and then add an Egress policy on api allowing traffic to database
D
Configure a Service of type ClusterIP for database with no selector so frontend cannot resolve its DNS name
Correct Answer
Apply an Ingress NetworkPolicy on database that allows traffic only from Pods with label app: api and denies all other ingress
Explanation
Applying an Ingress NetworkPolicy on the database Pod that whitelists only traffic from Pods labeled app: api directly enforces the requirement at the receiving end. Kubernetes NetworkPolicy is additive and allow-based — once a policy selects a Pod, all ingress traffic not explicitly allowed is denied by default. This is the canonical pattern for controlling microservice-to-microservice access in Kubernetes.
A is wrong because an Egress policy on frontend is a valid complementary control but is insufficient on its own. It depends on frontend being correctly labeled and does not protect database from other services that may be able to reach it.
C is wrong because a default-deny plus api egress policy would work conceptually but requires more policies and is more complex. More importantly, an Egress policy on api does not directly protect database from other Pods that could independently reach it.
D is wrong because omitting a Service selector to obscure DNS is security through obscurity. It is not a reliable enforcement mechanism — the database Pod's IP is still discoverable and reachable without DNS resolution.
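The correct option might be written as follows; the namespace and Pod labels are assumptions, to be adjusted to the actual label scheme:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-allow-api-only
  namespace: shop              # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: database            # selects the database Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api         # only api Pods may connect
```

Because the policy selects the database Pods and lists Ingress in policyTypes, every source not matched by the from clause, including frontend and external traffic, is denied by default.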
Q14
A cluster administrator enables etcd encryption at rest using an EncryptionConfiguration manifest with the aescbc provider. Which of the following statements correctly describes a limitation of this approach?
A
Once enabled, etcd encryption applies retroactively to all existing Secrets automatically
B
The encryption key itself is stored in the EncryptionConfiguration file on the control plane node, so compromise of that file exposes the key used to encrypt Secrets
C
AES-CBC encryption in Kubernetes etcd is deprecated and no longer supported in clusters running 1.20 or later
D
etcd encryption at rest also encrypts data in transit between etcd nodes, so mTLS between etcd peers is no longer necessary
Correct Answer
The encryption key itself is stored in the EncryptionConfiguration file on the control plane node, so compromise of that file exposes the key used to encrypt Secrets
Explanation
The EncryptionConfiguration file on the control plane node contains the encryption key in plaintext (base64-encoded). Any user or process that can read that file can decrypt all encrypted etcd data. This is why production deployments should use the kms provider — integrated with AWS KMS, GCP KMS, or HashiCorp Vault — to envelope-encrypt the data encryption key using an externally managed master key that never touches the control plane filesystem.
A is wrong because encryption at rest does NOT apply retroactively. After enabling it, administrators must force-update all existing Secrets by re-applying them through the API — for example using kubectl get secrets --all-namespaces -o json | kubectl replace -f - — to trigger re-encryption.
C is wrong because aescbc is a supported provider alongside aesgcm and the preferred kms provider. None of these were removed at Kubernetes 1.20.
D is wrong because etcd encryption at rest only protects data written to disk. Data in transit between etcd peers requires mTLS, which is an entirely separate security control. Enabling at-rest encryption has no effect on in-transit confidentiality.
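For reference, a minimal aescbc EncryptionConfiguration looks like the sketch below; the key material is a placeholder and, as the explanation notes, sits readable in this file on the control plane node:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # stored in plaintext in this file
      - identity: {}   # fallback so existing unencrypted data stays readable
```

The kube-apiserver references this file via its --encryption-provider-config flag.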
KCSA Practice Set-02
15 questions
Q1
A Kubernetes platform team achieves full automation of their security controls: all RBAC is generated from code, all admission policies are deployed via GitOps, all cluster configurations pass kube-bench, and all runtime alerts are automated. During an audit, the auditor states that the platform still has a critical compliance gap. The gap is that there are no documented security control objectives, no risk assessment process, no evidence of security control testing, and no incident response runbooks — despite all technical controls being implemented and operational. What compliance principle does this gap represent, and why do technical controls alone not satisfy it?
A
The gap represents a pure documentation requirement — the controls are sufficient, only the paperwork is missing, and this has no practical security impact
B
The gap represents the difference between security controls and a security program: compliance frameworks require documented control objectives, a risk assessment process, evidence of control testing, and incident response runbooks as the governance layer on top of the technical implementation
C
The gap is minor — automated controls are inherently more reliable than manually managed ones and should receive a compliance waiver
D
The gap means the platform must be rebuilt from scratch using a compliance-first approach before any technical controls can be credited toward compliance
Correct Answer
The gap represents the difference between security controls and a security program: compliance frameworks require documented control objectives, a risk assessment process, evidence of control testing, and incident response runbooks as the governance layer on top of the technical implementation
Explanation
This question addresses the most important conceptual boundary in security compliance: the difference between security controls and a security program. Technical controls (firewalls, encryption, RBAC, admission policies) are the implementation layer. Compliance frameworks evaluate whether the organization has a systematic process for: (1) identifying what risks it faces (risk assessment), (2) selecting and implementing appropriate controls to address those risks (control implementation), (3) testing that controls operate effectively (control testing — this is what SOC 2 Type II evaluates over time), and (4) responding when controls fail (incident response). A perfectly automated cluster with no documented objectives, no risk-based reasoning, no testing evidence, and no incident response capability cannot demonstrate that it is managing security systematically — it is a collection of controls without a program. Mature security requires both: strong technical implementation AND the governance framework that ensures it remains effective, adapts to new threats, and recovers from failures.
A is wrong because the missing elements are not just paperwork — they represent genuine capability gaps. Without incident response runbooks, the team does not know what to do when a security event occurs. Without control testing evidence, the team cannot demonstrate that controls work as intended (they may be configured but misconfigured). Without risk assessment, the team cannot demonstrate that the right controls were chosen for the right reasons.
C is wrong because automated controls, while more reliable for consistency, still require documentation, testing, and governance. Automation does not eliminate the need for understanding why each control exists, verifying it achieves its objective, and planning for failure scenarios. Compliance frameworks do not grant waivers for automation.
D is wrong because technical controls that are already implemented and operational do count toward compliance — they just need to be supported by the governance elements described in option B. Rebuilding is not required; the governance layer needs to be added to the existing technical foundation.
Q2
A Pod is running in namespace staging with a service account that has no RBAC bindings. The Pod attempts to call the Kubernetes API using the automounted service account token. What response does it receive?
A
A 401 Unauthorized response because the token has no associated RBAC bindings and cannot authenticate
B
A 403 Forbidden response — the token authenticates successfully (the API server recognizes the service account identity), but the request is denied because no RBAC policies grant that service account permission to perform the requested action
C
A 404 Not Found response because the API server hides resources from identities that lack permission to view them
D
The request succeeds with read-only access because all authenticated principals have implicit read access to their own namespace
Correct Answer
A 403 Forbidden response — the token authenticates successfully (the API server recognizes the service account identity), but the request is denied because no RBAC policies grant that service account permission to perform the requested action
Explanation
The automounted service account token is a valid credential — the kube-apiserver accepts it and authenticates the request as system:serviceaccount:staging:<serviceaccount-name>. Authentication succeeds. The request then proceeds to the authorization layer (RBAC). With no RoleBindings or ClusterRoleBindings, the service account has no permissions. RBAC denies the request with a 403 Forbidden response. This distinction between authentication failure (401) and authorization failure (403) is important — the token itself is valid, but the identity lacks permissions.
A is wrong because 401 Unauthorized indicates that authentication failed — the identity could not be established. A service account token is valid credentials; authentication succeeds.
C is wrong because 404 Not Found indicates the requested resource does not exist. The API server does not hide endpoints from unauthenticated or unauthorized users — it returns 401 or 403 respectively.
D is wrong because there is no implicit read access for authenticated principals in Kubernetes RBAC. All access is explicitly granted through Roles and ClusterRoles. The system:authenticated group has minimal default permissions.
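The 401-versus-403 distinction is easy to verify with `kubectl auth can-i` impersonation, and the denial flips to success once a binding exists. A minimal sketch, assuming the default service account in the staging namespace (role and binding names are illustrative):

```yaml
# Granting the staging default service account read access to Pods
# turns the 403 into a successful response for "get pods".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # illustrative name
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding  # illustrative name
  namespace: staging
subjects:
- kind: ServiceAccount
  name: default
  namespace: staging
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Before applying this, `kubectl auth can-i list pods --as=system:serviceaccount:staging:default -n staging` answers `no` (and a raw API call returns 403, not 401); afterwards it answers `yes`.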
Q3
A development team uses an init container in their Pod to clone a Git repository at startup and copy files to a shared emptyDir volume. The main application container reads these files. A security reviewer flags this as a supply chain risk. Why?
A
Init containers run as root by default and can write arbitrary files to the host filesystem through the emptyDir volume
B
The init container fetches content from an external Git repository at runtime — meaning the running application is loading code or configuration that was not present in the container image, bypassing image scanning, signing, and admission controls applied at image build and deployment time
C
Using emptyDir volumes across init and main containers violates the PodSecurity restricted profile and will cause the Pod to be rejected
D
The Git repository credentials used by the init container are stored in plaintext in the Pod spec
Correct Answer
The init container fetches content from an external Git repository at runtime — meaning the running application is loading code or configuration that was not present in the container image, bypassing image scanning, signing, and admission controls applied at image build and deployment time
Explanation
This is a subtle but real supply chain risk. Container image scanning, signing, and Kyverno admission policies all operate on the declared container images. However, code or configuration fetched at runtime from an external source (Git, S3, HTTP endpoints) completely bypasses these controls — it was not present when the image was scanned, it has no signature, and admission controllers cannot inspect it. An attacker who compromises the Git repository (or performs a DNS/MITM attack on the clone operation) can inject malicious code into running production containers without touching any image or passing any security gate.
A is wrong because emptyDir volumes are not mounted on the host filesystem — they are temporary storage within the Pod's lifecycle. Init containers writing to emptyDir cannot reach the host filesystem through that mechanism.
C is wrong because emptyDir volumes are permitted under the restricted PodSecurity profile. The profile restricts volume types like hostPath but explicitly allows emptyDir.
D is wrong because while Git credentials are a valid concern, the question asks why the reviewer flagged this as a supply chain risk — and the core issue is runtime code fetching bypassing security controls, not the credential storage method.
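The flagged pattern looks roughly like the following sketch (repository URL and image names are hypothetical). Everything the init container places under the shared volume arrives after every image-level control has already run:

```yaml
# Hypothetical Pod illustrating the risk: content cloned at startup was
# never scanned, signed, or inspected by admission controllers.
apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  initContainers:
  - name: fetch-config
    image: alpine/git                 # clones at Pod startup, not at build time
    args: ["clone", "https://git.example.com/org/app-config.git", "/workdir"]
    volumeMounts:
    - name: shared
      mountPath: /workdir
  containers:
  - name: app
    image: registry.example.com/app:1.0   # this image was scanned; /workdir content was not
    volumeMounts:
    - name: shared
      mountPath: /etc/app
  volumes:
  - name: shared
    emptyDir: {}
```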
Q4
A company is deploying Kubernetes in a regulated financial environment subject to PCI-DSS. They want to ensure that their Kubernetes cluster configuration meets the relevant requirements. Which mapping is most accurate?
A
PCI-DSS Requirement 2 (Default Passwords) maps to disabling anonymous authentication on the kube-apiserver and rotating default service account tokens
B
PCI-DSS Requirement 10 (Audit Logs) maps to enabling kube-apiserver audit logging with sufficient retention, and Requirement 6 (Secure Systems) maps to applying CIS Kubernetes Benchmark hardening and container image vulnerability scanning
C
PCI-DSS Requirement 8 (Authentication) maps to enabling imagePullSecrets on all namespaces to secure registry access
D
PCI-DSS Requirement 1 (Firewalls) maps to disabling the Kubernetes dashboard and removing unused Services
Correct Answer
PCI-DSS Requirement 10 (Audit Logs) maps to enabling kube-apiserver audit logging with sufficient retention, and Requirement 6 (Secure Systems) maps to applying CIS Kubernetes Benchmark hardening and container image vulnerability scanning
Explanation
PCI-DSS Requirement 10 specifically requires audit logging of all access to system components and cardholder data, with logs retained for at least 12 months (3 months immediately available). Kubernetes API audit logging directly satisfies this requirement for the cluster control plane. PCI-DSS Requirement 6 requires protecting systems against known vulnerabilities — this maps to applying CIS hardening benchmarks, maintaining up-to-date images, and scanning for CVEs. These are the most direct and well-established PCI-DSS to Kubernetes control mappings.
A is wrong because PCI-DSS Requirement 2 covers vendor-supplied defaults (default accounts and passwords), which maps more directly to disabling default service accounts with excessive permissions and removing default admin credentials — not specifically token rotation.
C is wrong because PCI-DSS Requirement 8 covers identification and authentication of all users and system components. imagePullSecrets authenticate to a container registry — this is relevant but peripherally so. The requirement more directly maps to strong authentication for cluster access (OIDC, client certificates, MFA for privileged users).
D is wrong because PCI-DSS Requirement 1 covers firewall and network segmentation controls, which maps more directly to Kubernetes NetworkPolicy, security groups, and cluster network architecture — not to disabling dashboards or services, which are Requirement 2 concerns.
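A minimal audit policy along these lines could back the Requirement 10 mapping (levels and resources are illustrative; a real PCI-DSS deployment needs broader rules plus log shipping to satisfy the 12-month retention, which the policy file itself does not provide):

```yaml
# Sketch of a kube-apiserver audit policy: log metadata for Secret access,
# full request/response for RBAC changes, metadata for everything else.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["rolebindings", "clusterrolebindings"]
- level: Metadata        # catch-all for all other requests
```

The file is referenced via the kube-apiserver `--audit-policy-file` flag, with `--audit-log-path` and rotation flags controlling where logs land before shipment to long-term storage.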
Q5
A newly provisioned Kubernetes cluster shows the following when kube-bench is run: [FAIL] 5.1.5 Ensure that default service accounts are not bound to active cluster roles. What does this mean, and how should it be remediated?
A
The default service account in kube-system has been bound to a ClusterRole — remediate by deleting the kube-system namespace's default service account
B
The default service account in one or more namespaces has been granted permissions via a RoleBinding or ClusterRoleBinding. Remediate by removing these bindings and setting automountServiceAccountToken: false on the default service accounts
C
All service accounts in the cluster are currently missing ClusterRole bindings — create ClusterRoles for each namespace's default service account to fix this
D
The kubelet is using the default service account instead of a dedicated kubelet service account — remediate by creating a separate service account for kubelet authentication
Correct Answer
The default service account in one or more namespaces has been granted permissions via a RoleBinding or ClusterRoleBinding. Remediate by removing these bindings and setting automountServiceAccountToken: false on the default service accounts
Explanation
CIS Benchmark check 5.1.5 identifies cases where the default service account (which is automatically assigned to Pods that do not specify a service account) has been granted RBAC permissions. Because any Pod that does not explicitly specify a service account uses the default one, granting it permissions inadvertently gives those permissions to all workloads in the namespace that omit service account configuration. Remediation involves removing the bindings from the default service account and setting automountServiceAccountToken: false on it to prevent the token from being mounted by careless workloads.
A is wrong because deleting the kube-system default service account would break system components. The remediation is removing the RBAC bindings, not the service account itself.
C is wrong because the check is flagging too many permissions on the default service account, not too few. Creating more ClusterRole bindings would worsen the situation.
D is wrong because kubelet authentication uses node certificates (system:node:<name> identity), not service accounts. Service accounts are for Pod workloads, not kubelets.
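The remediation can be sketched as follows (namespace is illustrative; repeat per affected namespace, and remove the offending bindings separately after listing RoleBindings and ClusterRoleBindings and inspecting their subjects):

```yaml
# CIS 5.1.5 remediation sketch: opt the default service account out of
# token automounting so Pods that omit a serviceAccountName get no token.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: staging          # repeat for each namespace flagged by kube-bench
automountServiceAccountToken: false
```

Workloads that genuinely need API access should then use a dedicated service account with narrowly scoped bindings, rather than relying on the default one.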
Q6
A Kubernetes cluster uses eBPF-based runtime security (like Tetragon or Cilium's eBPF enforcement) instead of a kernel module-based approach like Falco with a kernel module. What is a key security advantage of the eBPF approach?
A
eBPF programs can only be loaded by root, so they are inherently more secure than kernel modules which can be loaded by any user
B
eBPF programs run in a sandboxed in-kernel virtual machine with a verifier that checks for safety before loading — they cannot crash the kernel, access arbitrary memory, or loop infinitely. Kernel modules run with unrestricted kernel privileges and a bug or exploit in a module can cause system instability or be weaponized
C
eBPF provides faster syscall interception than kernel modules, reducing the performance overhead on workloads
D
eBPF programs are compiled at build time and cannot be modified at runtime, making them more resistant to tampering than loadable kernel modules
Correct Answer
eBPF programs run in a sandboxed in-kernel virtual machine with a verifier that checks for safety before loading — they cannot crash the kernel, access arbitrary memory, or loop infinitely. Kernel modules run with unrestricted kernel privileges and a bug or exploit in a module can cause system instability or be weaponized
Explanation
The eBPF in-kernel verifier is the key differentiator. Before any eBPF program is loaded, the kernel verifier runs a static analysis pass that ensures the program is memory-safe (no out-of-bounds access), terminates (no infinite loops), and only accesses permitted kernel data structures. This makes eBPF programs much safer to run in kernel space than traditional kernel modules (LKMs), which run with unrestricted kernel privileges. A bug in a kernel module can cause kernel panics, memory corruption, or be exploited to gain kernel-level code execution. The eBPF verifier provides a strong safety guarantee that kernel modules lack.
A is wrong because both eBPF programs and kernel modules require root (specifically CAP_BPF or CAP_SYS_ADMIN for eBPF, and CAP_SYS_MODULE for kernel modules) to load. The privilege requirement is similar for both.
C is wrong because performance is indeed a consideration (eBPF is generally considered more efficient than kernel module hooks in many benchmarks), but this is a performance advantage, not a security advantage as the question asks.
D is wrong because eBPF programs can be loaded and unloaded dynamically at runtime — this is one of their key features. The safety guarantee comes from the verifier, not from being immutable.
Q7
A development team is using Helm and wants to pass a database password to their chart. They include it as a Helm value in their values.yaml file stored in Git. A security reviewer objects. What is the correct approach?
A
Base64-encode the password in values.yaml — base64 is sufficient protection for passwords in Git repositories
B
Use a Secrets management solution: store the password in an external vault (HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault), and use a secrets injection mechanism (External Secrets Operator, Vault Agent Injector, or CSI Secrets Store Driver) to populate Kubernetes Secrets at runtime without the password ever being committed to Git
C
Create the Kubernetes Secret manually with kubectl and reference it in the Helm chart via a secretKeyRef — this removes the password from Git but the plaintext is still passed as a kubectl argument
D
Enable Helm's built-in encryption feature using Helm Secrets to encrypt values.yaml with AES-256 before committing to Git
Correct Answer
Use a Secrets management solution: store the password in an external vault (HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault), and use a secrets injection mechanism (External Secrets Operator, Vault Agent Injector, or CSI Secrets Store Driver) to populate Kubernetes Secrets at runtime without the password ever being committed to Git
Explanation
The correct enterprise approach is to never commit secrets to Git in any form — even encrypted or encoded. External Secrets Operator (ESO) or the CSI Secrets Store Driver connects to an external vault at runtime and creates (or populates) Kubernetes Secrets from the vault's stored values. The Helm chart references the Secret by name — no secret value is ever in Git, in the Helm chart, or in values.yaml. This approach also centralizes secret rotation, audit logging of secret access, and access control in the vault.
A is wrong because base64 is encoding, not encryption. It provides zero security — anyone who can read the file can decode it instantly. Committing base64-encoded passwords to Git is equivalent to committing them in plaintext.
C is wrong because kubectl create secret --from-literal=password=<value> passes the plaintext password in the shell command, which appears in shell history and process listings. It also requires manual credential management, which doesn't scale.
D is wrong because Helm Secrets (using SOPS or similar) is a valid approach for teams that must keep secrets in Git. However, it requires careful key management, and the gold standard for production environments is external secret management that keeps secrets out of Git entirely.
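With External Secrets Operator, the pattern can be sketched roughly as below (this assumes a ClusterSecretStore named `vault-backend` already points at the external vault; store name, namespace, and vault path are all illustrative):

```yaml
# ESO sketch: the operator reads prod/db from the vault and materializes a
# Kubernetes Secret named db-credentials. Nothing secret is ever in Git.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: app
spec:
  refreshInterval: 1h           # re-sync cadence, supports rotation in the vault
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend         # assumed to exist
  target:
    name: db-credentials        # the Kubernetes Secret created at runtime
  data:
  - secretKey: password
    remoteRef:
      key: prod/db              # path in the external vault
      property: password
```

The Helm chart then references the Secret by name via `secretKeyRef`, so values.yaml contains only the Secret's name, never its value.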
Q8
A security engineer reviews a microservice deployment and finds the container image is 2.3 GB, based on ubuntu:20.04, and includes Python 3, pip, curl, wget, git, and gcc. The application is a simple REST API that serves pre-compiled static responses. What hardening recommendation would have the highest security impact?
A
Upgrade the base image from ubuntu:20.04 to ubuntu:22.04 to get the latest OS packages
B
Rebuild the image using a distroless or minimal base image containing only the application binary and its runtime dependencies, removing all unnecessary tools — this reduces the CVE surface area dramatically and eliminates attacker tooling available post-exploitation
C
Add a seccomp profile to restrict syscalls available to the container, keeping the existing image
D
Enable a read-only root filesystem on the container to prevent the included tools from being executed at runtime
Correct Answer
Rebuild the image using a distroless or minimal base image containing only the application binary and its runtime dependencies, removing all unnecessary tools — this reduces the CVE surface area dramatically and eliminates attacker tooling available post-exploitation
Explanation
A 2.3 GB Ubuntu image with compilers, curl, wget, git, pip, and Python contains hundreds of packages, each potentially with CVEs. More critically, post-exploitation, an attacker has access to curl (download more tools), wget (exfiltrate data), git (clone attacker repos), gcc (compile exploits), and pip (install malicious Python packages) — essentially a full attack toolkit. Rebuilding with a minimal or distroless base containing only the application binary reduces the attack surface to near zero. If the application serves static responses from pre-compiled code, it may not even need a language runtime in the final image.
A is wrong because upgrading from 20.04 to 22.04 reduces some known vulnerabilities in OS packages but does not address the fundamental problem: the image contains massive amounts of unnecessary software. The upgrade helps but has far less impact than removing the unnecessary components entirely.
C is wrong because a seccomp profile is a good additional hardening layer but does not remove the tools from the image. An attacker can still read and copy tools even if some syscalls are blocked.
D is wrong because readOnlyRootFilesystem prevents writing to the container filesystem but does not prevent execution of existing binaries. An attacker can still run curl, wget, gcc, and other tools already present in the image — they just cannot write new files to the container's own filesystem.
Q9
What is the role of the Kubernetes API server's --service-account-signing-key-file and --service-account-issuer flags in cluster security?
A
They define the key used to sign etcd data at rest and the issuer name embedded in etcd encryption headers
B
They configure the private key used to sign service account tokens and the issuer claim embedded in those tokens — enabling other components to validate tokens and preventing token reuse across clusters
C
They configure the TLS certificate and hostname used for the kube-apiserver's HTTPS endpoint
D
They define the service account used by the API server itself to authenticate with the kubelet on each node
Correct Answer
They configure the private key used to sign service account tokens and the issuer claim embedded in those tokens — enabling other components to validate tokens and preventing token reuse across clusters
Explanation
Service account tokens are JWTs (JSON Web Tokens). The --service-account-signing-key-file specifies the private key the API server uses to sign these tokens. The --service-account-issuer defines the iss (issuer) claim in the token — typically the API server URL. Components that receive a service account token (like OIDC-compatible systems or other clusters) can validate the token by verifying the signature against the public key and checking the issuer claim. Having a unique issuer per cluster prevents a token issued for one cluster from being reused against another cluster — a critical boundary for multi-cluster environments.
A is wrong because etcd data encryption is configured via --encryption-provider-config, not service account signing keys. The signing key is for JWT tokens, not etcd data.
C is wrong because the TLS certificate for the HTTPS endpoint is configured via --tls-cert-file and --tls-private-key-file. These are separate from service account token signing keys.
D is wrong because the API server authenticates with kubelets using its own client certificate (--kubelet-client-certificate), not a service account.
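In a kubeadm-style cluster these flags appear in the kube-apiserver static Pod manifest, roughly as follows (paths and issuer value reflect common kubeadm defaults and are illustrative):

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml showing the
# token-signing flags discussed above, alongside the verification key flag.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub   # public key(s) used to verify token signatures
```

Giving each cluster a distinct `--service-account-issuer` value is what makes cross-cluster token replay detectable: a validator checking the `iss` claim will reject a token minted elsewhere.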
Q10
A security team wants to enforce that every container image deployed to production has a corresponding vulnerability scan that found no critical CVEs and was completed within the last 72 hours. How can this be implemented in a Kubernetes cluster?
A
Configure imagePullPolicy: Always and add a liveness probe that runs trivy inside the container to scan itself at startup
B
Use a ValidatingAdmissionWebhook (custom or via a tool like Kyverno with external data sources) that queries a scan results API at admission time, checking that the image's digest has a scan result marked as passed within the last 72 hours — rejecting the Pod if no valid scan result exists
C
Run Trivy in the CI pipeline and add a build label to the image with the scan timestamp — the Kyverno policy reads this label and checks the timestamp
D
Configure the container registry to automatically delete any image that has not been scanned within 72 hours, preventing such images from being pulled
Correct Answer
Use a ValidatingAdmissionWebhook (custom or via a tool like Kyverno with external data sources) that queries a scan results API at admission time, checking that the image's digest has a scan result marked as passed within the last 72 hours — rejecting the Pod if no valid scan result exists
Explanation
This requirement demands real-time policy enforcement at deployment time based on external scan data. A ValidatingAdmissionWebhook that queries a scan results database (Grype DB, a custom API, or a registry like JFrog Xray) at Pod admission time can check whether the specific image digest being deployed has a scan result that passes all severity thresholds and was completed within the required time window. If no valid recent scan exists, the webhook denies the Pod creation. This provides cryptographically-bound (digest-based) enforcement of scan recency and pass/fail status.
A is wrong because running Trivy inside a running container as a liveness probe is operationally impractical (Trivy would need to be included in every image, adding significant size) and runs after the container starts, not before. A failing liveness probe would restart the container, not prevent it from starting.
C is wrong because image labels are metadata added at build time. An attacker who can push images to the registry could add a forged scan timestamp label. Labels are not cryptographically verifiable scan evidence — they can be trivially manipulated.
D is wrong because deleting images from a registry based on scan age would disrupt deployments and image availability. The requirement is to block deployment of unscanned images, not to delete them. Deletion would prevent recovery of known-good images and break reproducibility.
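The wiring for a custom webhook could be sketched as below; the `scan-gate` service, its namespace, and the `<base64-ca>` placeholder are all hypothetical, and the service itself would implement the digest lookup against the scan-results API:

```yaml
# Sketch: route Pod CREATE requests through a custom admission service that
# checks scan recency. failurePolicy: Fail means "no scan verdict, no Pod".
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: scan-recency-check
webhooks:
- name: scan-gate.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      name: scan-gate            # hypothetical in-cluster webhook service
      namespace: security
      path: /validate
    caBundle: <base64-ca>        # CA that signed the webhook's serving cert
  failurePolicy: Fail            # fail closed rather than admitting unverified images
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

Note the fail-closed choice: with `failurePolicy: Ignore`, an outage of the scan-results API would silently admit unverified images.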
Q11
A Kubernetes cluster runs Falco, and the security team receives the following alert: rule: K8s Secret Access by Unusual Process. This Falco rule detects when a process reads a Kubernetes service account token from /var/run/secrets/kubernetes.io/serviceaccount/token. The alert shows the process is curl inside a Pod named data-processor in namespace analytics. What should the security team investigate first, and what is the likely significance?
A
The alert is almost certainly a false positive — curl is a standard HTTP client and reading the service account token is normal initialization behavior for any container
B
The data-processor Pod's application does not appear to natively use curl — the security team should investigate whether curl was invoked by a child process spawned from a code injection or command execution vulnerability, as this is a common pattern in Kubernetes lateral movement: reading the service account token to authenticate to the Kubernetes API and escalate privileges
C
This indicates a network configuration issue — curl is trying to reach the Kubernetes API server and the token read is part of normal TLS handshake initialization
D
The alert indicates that the service account token has expired and curl is attempting to refresh it from the API server
Correct Answer
The data-processor Pod's application does not appear to natively use curl — the security team should investigate whether curl was invoked by a child process spawned from a code injection or command execution vulnerability, as this is a common pattern in Kubernetes lateral movement: reading the service account token to authenticate to the Kubernetes API and escalate privileges
Explanation
A process reading a service account token to then make Kubernetes API calls is a well-documented lateral movement technique. If the data-processor application is legitimately a batch data processing job, it should use its language's Kubernetes SDK (Python kubernetes client, Go client) — not curl — to make API calls. The detection of curl reading the token is a high-fidelity indicator that a child process (likely spawned via a code execution vulnerability, dependency confusion attack, or malicious dependency) is conducting reconnaissance by reading the service account token. The attacker's goal is typically to use the token to list cluster resources, read Secrets from other namespaces, or create privileged Pods. This requires immediate investigation of the Pod's recent process tree and network connections.
A is wrong because while curl can legitimately read service account tokens (e.g., a shell script making API calls), this should be expected behavior that is known and baselined. An unexpected curl process in a data processing container is not normal and should not be dismissed as a false positive without investigation.
C is wrong because service account token reads are not part of TLS handshake initialization. TLS uses X.509 certificates for connection authentication — the service account token is a Kubernetes API authentication mechanism, not a TLS credential.
D is wrong because service account token refresh in Kubernetes 1.21+ is handled automatically by the kubelet, which rotates the projected (bound) service account token and updates the mounted file, not by the application process manually reading and refreshing the token file using curl.
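A rule in this spirit could be sketched in Falco's rule syntax as follows (this is a simplified illustration, not the upstream rule; it assumes Falco's standard macro set for `open_read`, and the baseline list is hypothetical):

```yaml
# Simplified Falco rule sketch: alert when a process outside a known
# baseline reads the mounted service account token.
- list: known_token_readers
  items: [my-app, kubelet]        # illustrative baseline of expected readers

- rule: K8s Secret Access by Unusual Process
  desc: Detect a non-baselined process reading a service account token
  condition: >
    open_read and container and
    fd.name startswith /var/run/secrets/kubernetes.io/serviceaccount and
    not proc.name in (known_token_readers)
  output: >
    Service account token read by unexpected process
    (proc=%proc.name pod=%k8s.pod.name ns=%k8s.ns.name)
  priority: WARNING
```

The value of the rule comes from the baseline: token reads by the application's own binary are expected, so only deviations (here, curl) fire.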
Q12
A company uses Kubernetes with an external OIDC provider for human user authentication. An administrator wants to implement just-in-time (JIT) access for production cluster access. Which approach best achieves this?
A
Set the Kubernetes certificate TTL to 5 minutes for all client certificates issued to administrators
B
Use an identity provider with short-lived OIDC token TTLs (15-30 minutes) combined with an access request system (like Teleport or a custom approval workflow) that grants a user membership in a specific OIDC group only for the duration of an approved access window — after which group membership is removed and existing tokens expire
C
Delete and recreate the RBAC RoleBinding for each administrator every time they need access
D
Require administrators to run kubectl proxy from their laptop, which automatically limits their session to the proxy's default 15-minute timeout
Correct Answer
Use an identity provider with short-lived OIDC token TTLs (15-30 minutes) combined with an access request system (like Teleport or a custom approval workflow) that grants a user membership in a specific OIDC group only for the duration of an approved access window — after which group membership is removed and existing tokens expire
Explanation
JIT access means granting elevated permissions only when needed and automatically revoking them after a defined window. The most robust implementation combines: (1) short-lived OIDC tokens that expire quickly so long-lived tokens cannot be reused after access is revoked, (2) an access request system that temporarily adds the user to a privileged OIDC group with a defined expiry, and (3) RBAC that grants the required permissions to the group rather than individual users. When the access window expires, group membership is removed and the next token refresh will not include the group claim, effectively revoking access without any manual Kubernetes RBAC changes.
A is wrong because Kubernetes does not support per-user or configurable short TTL for OIDC tokens — token TTL is set by the identity provider, not by Kubernetes. Client certificates have a TTL set at issuance but are not easily used for JIT access in practice.
C is wrong because manually deleting and recreating RoleBindings is operationally fragile, creates an audit trail gap (permissions were deleted and re-created), and does not handle the time-based automatic revocation requirement — someone must remember to delete the binding after the access window.
D is wrong because kubectl proxy does not enforce session timeouts and does not provide any access control beyond what the user's existing kubeconfig credentials allow. It is a local network proxy tool, not an access management system.
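The RBAC half of this pattern binds permissions to a group, never to individuals, so the cluster configuration is static while the IdP does the granting and revoking. A sketch (the group name and prefix are illustrative and depend on the API server's `--oidc-groups-prefix` setting):

```yaml
# JIT RBAC sketch: the access-request system adds/removes users from the
# prod-breakglass group in the IdP; Kubernetes RBAC never changes.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prod-breakglass-admins
subjects:
- kind: Group
  name: oidc:prod-breakglass      # group claim as mapped by the OIDC groups prefix
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                     # or a narrower custom role
  apiGroup: rbac.authorization.k8s.io
```

When the approved window ends, the user's next token refresh no longer carries the group claim, and this binding silently stops applying to them.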
Q13
A cluster runs on nodes with kernel version 5.4. A security team wants to use Seccomp with the RuntimeDefault profile on all containers. Starting from which Kubernetes version did seccomp RuntimeDefault become the default for new Pods without explicit configuration, and what changed?
A
Kubernetes 1.19 — seccomp was promoted to GA and RuntimeDefault was applied to all Pods automatically
B
Kubernetes 1.27 — the SeccompDefault feature gate was promoted to stable and enabled by default in the kubelet, applying the RuntimeDefault seccomp profile to Pods that do not specify a seccomp profile — but this requires opt-in via kubelet configuration (--seccomp-default) in most distributions and does not retroactively apply to existing Pods
C
Kubernetes 1.20 — seccomp became mandatory for all containers as part of the AppArmor and Seccomp Security Enhancement KEP
D
Seccomp RuntimeDefault has never been a Kubernetes default — it must always be explicitly specified in every Pod's securityContext
Correct Answer
Kubernetes 1.27 — the SeccompDefault feature gate was promoted to stable and enabled by default in the kubelet, applying the RuntimeDefault seccomp profile to Pods that do not specify a seccomp profile — but this requires opt-in via kubelet configuration (--seccomp-default) in most distributions and does not retroactively apply to existing Pods
Explanation
The SeccompDefault feature gate allows the kubelet to apply the RuntimeDefault seccomp profile to any container that does not explicitly specify a seccomp profile. It was introduced as alpha in 1.22, beta in 1.25, and graduated to stable in 1.27. However, even in 1.27+, the feature requires explicit enablement in the kubelet configuration (--seccomp-default flag or equivalent) in most Kubernetes distributions — it is not automatically enabled in all clusters. The RuntimeDefault profile uses the container runtime's default seccomp profile (containerd's or crun's built-in list), which blocks commonly dangerous syscalls while allowing those needed by typical applications.
A is wrong because in Kubernetes 1.19, seccomp support was promoted to GA (moving from alpha annotation to native securityContext.seccompProfile field), but RuntimeDefault was not automatically applied. The GA promotion was about the API field, not automatic default application.
C is wrong because Kubernetes 1.20 did not make seccomp mandatory. No version of Kubernetes has made seccomp mandatory for all containers — it remains opt-in (or opt-in via the SeccompDefault kubelet flag).
D is wrong because the SeccompDefault feature gate (stable in 1.27) does allow RuntimeDefault to be applied automatically when explicitly enabled in kubelet configuration. It is not always required to specify it in every Pod spec when this feature is enabled.
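The two enablement paths can be sketched side by side: the cluster-wide kubelet default (still opt-in even on 1.27+) and the explicit per-Pod setting (available since seccomp went GA in 1.19). Image and Pod names are illustrative:

```yaml
# KubeletConfiguration fragment: apply RuntimeDefault to any container
# that does not specify its own seccomp profile.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
seccompDefault: true
---
# Per-Pod equivalent: explicit, portable, and independent of kubelet config.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:1.0
```

Teams that cannot control kubelet configuration (managed clusters, for example) can enforce the per-Pod form instead via the PodSecurity restricted profile or an admission policy.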
Q14
A Kubernetes cluster's etcd is configured with --auto-compaction-retention=1. What is the security implication of this setting, and what value should be chosen for production?
A
It means etcd compacts its revision history every 1 second, causing data loss for any Kubernetes object modified more frequently than once per second
B
It means etcd retains revision history for only 1 hour before compacting it. This limits the ability to use etcd's watch history for forensic purposes (reviewing what values a key had in the past), but reduces etcd disk usage and memory. Production values should balance operational needs — 8 hours is a common choice — while ensuring audit logs capture the same information for longer retention
C
It disables automatic compaction entirely — the value 1 means "one compaction per day" — increasing disk usage indefinitely over time
D
It means only 1 revision of each key is retained in etcd, causing the Kubernetes API server to fail if any watch operations are performed
Correct Answer
It means etcd retains revision history for only 1 hour before compacting it. This limits the ability to use etcd's watch history for forensic purposes (reviewing what values a key had in the past), but reduces etcd disk usage and memory. Production values should balance operational needs — 8 hours is a common choice — while ensuring audit logs capture the same information for longer retention
Explanation
etcd stores the complete revision history of all key changes, enabling watch operations and point-in-time queries. auto-compaction-retention controls how long this revision history is kept before being compacted (deleted). A value of 1 (in hours, by default) means revisions older than 1 hour are discarded during compaction. The security implication is that forensic analysis of historical etcd state — for example, determining what value a Secret had 3 hours ago — becomes impossible once compaction runs. For production, teams should choose a retention period that balances etcd disk usage (revision history consumes significant space in active clusters) with forensic needs, recognizing that audit logs and external monitoring should be the primary source of truth for historical security analysis.
A is wrong because the compaction retention unit for the --auto-compaction-retention flag (in periodic mode, which is the default) is hours, not seconds. A value of 1 means 1 hour.
C is wrong because a value of 1 does not disable compaction — it enables compaction with a 1-hour retention period. A value of 0 disables automatic compaction.
D is wrong because compaction removes old revision history but does not limit each key to a single revision. Current values are always accessible. Watch operations continue to function for future changes — they only lose access to the compacted historical window.
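As a concrete illustration of the retention setting discussed above, here is a hedged sketch of etcd startup flags using the 8-hour value the explanation mentions. The flag names (`--auto-compaction-mode`, `--auto-compaction-retention`) are real etcd flags; the specific values and the quota flag are illustrative choices, not prescriptions:

```shell
# Illustrative etcd invocation (flag values are example choices, not defaults):
# periodic mode compacts by wall-clock age; retention accepts a duration.
etcd \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=8h \
  --quota-backend-bytes=8589934592   # 8 GiB backend quota, illustrative
```

With these flags, revisions older than roughly 8 hours are compacted away, preserving a longer watch-history window for incident review than the 1-hour setting in the question while still bounding disk growth.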
Q15
What is the difference between a software vulnerability and a software weakness in the context of container image security, and why does this distinction matter for image scanning?
A
Vulnerabilities are found only in OS packages; weaknesses are found only in application code — scanners like Trivy only detect OS vulnerabilities
A vulnerability (CVE) is a specific, publicly disclosed instance of exploitable code in a specific package version with a known identifier; a weakness (CWE) is a category of security design or coding flaw. Image scanners primarily detect CVEs in package versions but do not assess whether the application's code logic contains weaknesses — a container image with zero CVEs can still be exploitable through application-level weaknesses
C
Vulnerabilities require a patch from the package maintainer; weaknesses can be mitigated by the deploying team without waiting for an upstream fix
D
The two terms are interchangeable in the context of container security — both refer to conditions that can be exploited by an attacker
Correct Answer
A vulnerability (CVE) is a specific, publicly disclosed instance of exploitable code in a specific package version with a known identifier; a weakness (CWE) is a category of security design or coding flaw. Image scanners primarily detect CVEs in package versions but do not assess whether the application's code logic contains weaknesses — a container image with zero CVEs can still be exploitable through application-level weaknesses
Explanation
This distinction is important for setting accurate expectations about what image scanning achieves. CVE-based scanning (Trivy, Grype, Snyk) matches installed package versions against known vulnerability databases — it is highly effective at finding publicly disclosed vulnerabilities in third-party dependencies. However, it cannot detect: application logic flaws (authentication bypasses, injection vulnerabilities, business logic errors), insecure configurations baked into the image, weak cryptography in application code, or design weaknesses. A container image with a clean CVE scan can still be critically vulnerable through SQL injection, SSRF, or insecure direct object references. Comprehensive security requires both CVE scanning and application security testing (SAST, DAST, penetration testing).
A is wrong because modern vulnerability scanners like Trivy scan OS packages, programming language dependencies (npm, pip, Maven, Cargo), and application-level lock files — not just OS packages. The limitation is about CVEs versus application logic weaknesses, not about OS versus application layer.
C is wrong because this describes the remediation path difference, not the conceptual distinction. The question asks about the nature of the categories, not how they are fixed.
D is wrong because vulnerability and weakness have distinct technical meanings in security. CVEs (Common Vulnerabilities and Exposures) and CWEs (Common Weakness Enumeration) are separate classification systems maintained by MITRE with different purposes.
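To make the CVE-versus-CWE boundary concrete, the sketch below shows a typical Trivy invocation of the kind the explanation describes. The `trivy image` subcommand and the `--severity` / `--exit-code` flags are real Trivy options; the image tag is just an example, and the comments restate the scanner's limits discussed above:

```shell
# Hedged example: CVE scan of an image, covering OS packages and
# language dependencies (npm, pip, etc.), which Trivy scans by default.
# Fails the pipeline (exit code 1) if HIGH or CRITICAL CVEs are found.
trivy image --severity HIGH,CRITICAL --exit-code 1 node:18

# A clean report here means no *known* CVEs matched installed package
# versions. It says nothing about application-level weaknesses (CWEs)
# such as SQL injection or SSRF — those require SAST/DAST in addition.
```

The design point is that CVE scanning and weakness analysis answer different questions: the former matches versions against a disclosure database, the latter examines code behavior, which is why both belong in a pipeline.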
Want More Practice?
These are just the free questions. Unlock the full Kubernetes & Cloud Native Security Associate exam library with hundreds of additional questions, timed practice mode, and progress tracking.