Free Practice Questions AWS Certified Cloud Practitioner 30 Questions with Answers
FREE QUESTIONS

AWS Certified Cloud Practitioner
Practice Questions

30 free questions with correct answers and detailed explanations.

30 Free Questions
2 Free Exams
100% With Explanations

CLF-C02 Practice Set-01

15 questions
Q1
A startup stores sensitive customer data in Amazon S3 and wants to ensure that no bucket in its AWS account is ever made publicly accessible — even accidentally by a developer. The security team wants to enforce this as an account-level guardrail, not a per-bucket setting. What is the MOST effective solution?
A Enable Amazon Macie to automatically detect and remediate public S3 buckets
B Apply a Service Control Policy (SCP) that denies s3:PutBucketPolicy for all users
C Enable S3 Block Public Access settings at the AWS account level
D Attach an IAM policy to every IAM user that denies the ability to change bucket ACLs
Correct Answer
Enable S3 Block Public Access settings at the AWS account level
Explanation
S3 Block Public Access at the account level is a single, centralized control that overrides any bucket-level ACL or policy that would make data public. It is specifically designed to prevent accidental public exposure and automatically applies to all current and future buckets.

A is wrong because Amazon Macie is a detective control — it identifies and reports public buckets but does not prevent or block access.
B is wrong because blocking s3:PutBucketPolicy would prevent all bucket policy changes, breaking legitimate administrative workflows. It also doesn't cover public access granted via ACLs.
D is wrong because attaching IAM policies per user is operationally fragile. New users or roles added later would not automatically inherit the restriction, creating coverage gaps.
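As an illustration of the correct answer, the account-level guardrail maps to a single API payload. The sketch below builds the `PublicAccessBlockConfiguration` document the S3 Control API expects; the boto3 call is shown only as a comment because it needs credentials, and the account ID in it is a placeholder.

```python
import json

# All four flags must be True to fully block public access at the account level.
block_public_access = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # cut off access granted by public policies
}

# With boto3 (not run here), this would be applied account-wide:
# boto3.client("s3control").put_public_access_block(
#     AccountId="111122223333",  # placeholder account ID
#     PublicAccessBlockConfiguration=block_public_access,
# )

print(json.dumps(block_public_access, indent=2))
```

Because the setting lives at the account level, it covers every current and future bucket with no per-bucket work.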
Q2
A media company runs a video transcoding workload that can be interrupted and restarted without data loss. Jobs run unpredictably throughout the day and the company wants to minimize compute costs. Which Amazon EC2 purchasing option is MOST cost-effective for this workload?
A On-Demand Instances
B Reserved Instances (1-year, no upfront)
C Spot Instances
D Dedicated Hosts
Correct Answer
Spot Instances
Explanation
Spot Instances allow you to use spare AWS capacity at discounts of up to 90% compared to On-Demand pricing. Since the transcoding jobs can be interrupted and restarted without data loss, they are an ideal fit for Spot — the defining requirement for Spot suitability is fault tolerance to interruptions.

A is wrong because On-Demand Instances are the most expensive option for sustained or frequent compute use, with no discount applied.
B is wrong because Reserved Instances offer discounts for predictable, steady-state workloads committed over 1 or 3 years. This workload is unpredictable and interruptible, making reservations a poor fit.
D is wrong because Dedicated Hosts are the most expensive option, designed for licensing compliance or regulatory requirements that mandate single-tenant hardware — not cost optimization.
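The scale of the Spot saving is easy to see with back-of-the-envelope arithmetic. The prices below are purely hypothetical; real Spot prices vary by instance type, Availability Zone, and time of day.

```python
# Hypothetical rates for illustration only.
on_demand_hourly = 0.40   # assumed On-Demand rate, $/hour
spot_discount = 0.90      # Spot can run up to ~90% below On-Demand
hours_per_month = 200     # unpredictable transcoding jobs

spot_hourly = on_demand_hourly * (1 - spot_discount)
on_demand_cost = on_demand_hourly * hours_per_month
spot_cost = spot_hourly * hours_per_month

print(f"On-Demand: ${on_demand_cost:.2f}/mo, Spot: ${spot_cost:.2f}/mo")
```

The trade is that AWS can reclaim a Spot instance with a two-minute warning, which is acceptable here only because the jobs are interruption-tolerant.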
Q3
A company has a mobile app that allows users to upload profile photos. The photos must be stored durably, retrieved quickly on demand, and accessed infrequently after the initial upload. The company wants to minimize storage costs without sacrificing availability. Which AWS storage solution BEST meets these requirements?
A Amazon EBS (Elastic Block Store) General Purpose SSD volume
B Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
C Amazon EFS (Elastic File System) Standard tier
D Amazon S3 Glacier Instant Retrieval
Correct Answer
Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Explanation
S3 Standard-IA is designed exactly for data that is accessed infrequently but must be retrieved immediately when needed. It offers the same millisecond latency and high durability (99.999999999%) as S3 Standard, but at a lower storage cost — with a per-retrieval fee that makes it economical when access is infrequent.

A is wrong because EBS is block storage attached to EC2 instances. It cannot serve as a standalone object store for a mobile app and is not cost-effective for large-scale file storage.
C is wrong because EFS is a managed file system for use with EC2 and other compute services. It's more expensive than S3 and not designed for serving user-generated content directly to mobile apps.
D is wrong because S3 Glacier Instant Retrieval is optimized for archival data accessed once per quarter. While it offers millisecond retrieval, it is priced for true archival use cases — more appropriate for backups than user profile photos that may be retrieved more regularly.
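The Standard-IA trade-off (lower storage price, per-GB retrieval fee) can be checked with a quick model. All prices below are illustrative assumptions, not current list prices.

```python
# Illustrative prices only; check current S3 pricing for your region.
standard_gb_month = 0.023   # assumed S3 Standard storage price, $/GB-month
ia_gb_month = 0.0125        # assumed S3 Standard-IA storage price, $/GB-month
ia_retrieval_gb = 0.01      # assumed Standard-IA retrieval fee, $/GB

def monthly_cost_standard(stored_gb):
    return stored_gb * standard_gb_month

def monthly_cost_ia(stored_gb, retrieved_gb):
    # IA pays less for storage but adds a fee per GB retrieved.
    return stored_gb * ia_gb_month + retrieved_gb * ia_retrieval_gb

# 1000 GB of photos with only 10% re-read each month: IA comes out ahead.
print(monthly_cost_standard(1000))
print(monthly_cost_ia(1000, 100))
```

If the retrieval fraction grew large, the retrieval fees would eventually erase the storage saving, which is why IA only pays off for genuinely infrequent access.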
Q4
A company is deploying a web application on AWS. The application must be highly available, distribute incoming traffic across multiple EC2 instances in different Availability Zones, and automatically stop sending traffic to any instance that fails a health check. Which AWS service should the company use?
A Amazon Route 53 with latency-based routing
B AWS Global Accelerator
C Application Load Balancer (ALB)
D Amazon CloudFront
Correct Answer
Application Load Balancer (ALB)
Explanation
An Application Load Balancer automatically distributes incoming HTTP/HTTPS traffic across multiple EC2 instances across Availability Zones. It performs continuous health checks and stops routing traffic to any unhealthy instance — directly satisfying all three requirements in the scenario.

A is wrong because Route 53 is a DNS service. While it supports health-check-based routing, it operates at the DNS level with TTL delays and is not designed to replace a load balancer for instance-level traffic distribution.
B is wrong because AWS Global Accelerator improves performance for global users by routing traffic through AWS's private network to the nearest edge location. It is not a regional load balancer and is more relevant for latency-sensitive global applications than for distributing traffic across AZs.
D is wrong because CloudFront is a content delivery network (CDN) used to cache and serve static content at edge locations. It does not distribute traffic across EC2 instances or perform application-layer health checks.
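The health-check behavior the scenario asks about is configured on the ALB's target group. The dictionary below sketches typical settings; the values and the `/health` path are illustrative, not prescribed defaults.

```python
import json

# Illustrative ALB target group health check settings.
health_check = {
    "Protocol": "HTTP",
    "Path": "/health",            # endpoint the ALB probes on each instance
    "IntervalSeconds": 30,        # time between probes
    "TimeoutSeconds": 5,          # how long to wait for a response
    "HealthyThresholdCount": 5,   # consecutive successes before "healthy"
    "UnhealthyThresholdCount": 2, # consecutive failures before traffic stops
}
print(json.dumps(health_check, indent=2))
```

With these settings, an instance that fails two consecutive probes stops receiving traffic until it passes the healthy threshold again.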
Q5
A company has multiple AWS accounts managed under AWS Organizations. The security team needs to ensure that developers in member accounts cannot disable AWS CloudTrail logging in their own accounts, cannot delete CloudTrail log files from Amazon S3, and cannot modify the CloudTrail configuration — regardless of their IAM permissions within those accounts. The solution must be enforced centrally without modifying IAM policies in each member account. What should the security team do to meet these requirements with the LEAST operational overhead?
A Create an IAM permission boundary in each member account that denies cloudtrail:DeleteTrail, cloudtrail:StopLogging, and s3:DeleteObject actions and attach it to all developer roles
B Enable AWS Config in each member account with a rule that detects and auto-remediates any changes to CloudTrail configuration
C Create a Service Control Policy (SCP) at the AWS Organizations root or OU level that explicitly denies cloudtrail:DeleteTrail, cloudtrail:StopLogging, cloudtrail:UpdateTrail, and s3:DeleteObject on the CloudTrail S3 bucket
D Use AWS CloudFormation Stack Sets to deploy a Lambda function across all member accounts that monitors CloudTrail events and reverts unauthorized changes automatically
Correct Answer
Create a Service Control Policy (SCP) at the AWS Organizations root or OU level that explicitly denies cloudtrail:DeleteTrail, cloudtrail:StopLogging, cloudtrail:UpdateTrail, and s3:DeleteObject on the CloudTrail S3 bucket
Explanation
Service Control Policies (SCPs) are the only AWS mechanism that enforces permission guardrails across all identities in member accounts — including account root users and administrators — without touching IAM in each individual account. An SCP applied at the Organizations root or OU level acts as a maximum permissions boundary: even if a developer has Administrator Access in their account, the SCP deny overrides it. This satisfies all three requirements (no disabling, no deletion, no modification) from a single central policy with zero per-account overhead.

A is wrong because permission boundaries only limit the maximum permissions of the IAM entity they are attached to — they do not apply automatically to all users and roles. Every existing and future developer role would need the boundary manually attached, which introduces significant operational overhead and does not cover account root users.
B is wrong because AWS Config rules are detective controls, not preventive ones. They detect a violation after it has already occurred. Even with auto-remediation via SSM or Lambda, there is a window of exposure between the violation and the remediation — CloudTrail could already be disabled before Config reacts. This does not meet the requirement to prevent the action.
D is wrong because a Lambda-based remediation approach also reacts after the fact, just like Config. It introduces significant operational complexity — maintaining Lambda functions, IAM roles, CloudWatch Events rules, and Stack Sets across every account. This has the highest operational overhead of all options and still cannot prevent the initial action.
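A minimal sketch of the SCP described in the correct answer is below, built as a Python dictionary so the structure is easy to inspect. The bucket name is a placeholder.

```python
import json

# Sketch of the CloudTrail-protection SCP; bucket name is a placeholder.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectCloudTrailConfig",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:DeleteTrail",
                "cloudtrail:StopLogging",
                "cloudtrail:UpdateTrail",
            ],
            "Resource": "*",
        },
        {
            "Sid": "ProtectTrailLogFiles",
            "Effect": "Deny",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::example-cloudtrail-logs/*",  # placeholder bucket
        },
    ],
}
print(json.dumps(scp, indent=2))
```

Attached at the root or OU level, the explicit Deny overrides any Allow granted inside a member account, which is what makes the control both preventive and central.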
Q6
A company runs a business-critical three-tier web application on AWS. The database tier runs on a large RDS MySQL instance and must be available 24/7 with no planned interruptions. The application tier runs on a fleet of EC2 instances that handles a predictable baseline load of 10 instances at all times, plus an unpredictable burst of up to 6 additional instances during business hours. The company wants to MINIMIZE total compute and database costs while maintaining reliability for the steady-state workload. Which combination of purchasing strategies should the company use?
A Purchase Reserved Instances for all 16 EC2 instances and an RDS Reserved Instance for the database
B Use On-Demand Instances for all 10 baseline EC2 instances, Spot Instances for the 6 burst instances, and an RDS Reserved Instance for the database
C Purchase Reserved Instances for the 10 baseline EC2 instances, use Spot Instances for the 6 burst instances, and use an On-Demand RDS instance
D Purchase Reserved Instances for the 10 baseline EC2 instances, use On-Demand Instances for the 6 burst instances, and purchase an RDS Reserved Instance for the database
Correct Answer
Purchase Reserved Instances for the 10 baseline EC2 instances, use On-Demand Instances for the 6 burst instances, and purchase an RDS Reserved Instance for the database
Explanation
This question requires you to match the right purchasing model to each workload characteristic independently.
For the 10 baseline EC2 instances: Reserved Instances (1-year or 3-year) offer discounts of up to 72% over On-Demand for steady, predictable, always-on workloads. Since these 10 instances run 24/7, Reserved Instances provide maximum savings.
For the 6 burst EC2 instances: The burst is described as unpredictable in timing and duration. Spot Instances would seem attractive, but the question states the burst supports a business-critical application. The safer and more appropriate choice is On-Demand — Spot Instances can be interrupted with only a 2-minute warning, which is unacceptable for a business-critical application tier handling active user requests. On-Demand gives flexibility without commitment for an unpredictable pattern.
For the RDS database: The database runs 24/7 with no planned interruptions. RDS Reserved Instances offer significant discounts (up to 69%) for exactly this kind of predictable, continuous usage — making them the clear cost-optimizing choice over On-Demand RDS.

A is wrong because purchasing Reserved Instances for all 16 EC2 instances — including the 6 burst instances — means paying reservation fees for capacity that may sit idle outside of burst periods. You only benefit from a Reserved Instance if the instance is actually running, and reserving burst capacity wastes money during non-burst hours.
B is wrong because using On-Demand for the 10 baseline instances misses the biggest cost-saving opportunity. These instances run 24/7 and are the perfect use case for Reserved Instances. On-Demand pricing for always-on workloads is the most expensive option. Also, using an RDS Reserved Instance is correct for the database but this option pairs it with On-Demand baseline instances.
C is wrong specifically because of the RDS choice. The database runs 24/7 with no interruptions — the definition of a workload that benefits from Reserved Instance pricing. Keeping RDS on On-Demand when it never turns off is a straightforward missed savings opportunity that makes this combination suboptimal despite the correct EC2 choices.
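Rough arithmetic makes the comparison between options concrete. All rates and discounts below are assumed values chosen for illustration; real savings depend on the instance type, region, and commitment term.

```python
# Illustrative EC2 rates; real savings vary by instance type and term.
od_rate = 0.10         # assumed On-Demand $/hour
ri_discount = 0.40     # assumed 1-year RI discount vs On-Demand
hours_month = 730      # hours in an average month
burst_hours = 8 * 22   # business hours: ~8 h/day, ~22 weekdays

# Option D: 10 RIs running 24/7 plus 6 On-Demand instances during bursts only.
baseline = 10 * hours_month * od_rate * (1 - ri_discount)
burst = 6 * burst_hours * od_rate
mixed = baseline + burst

# Option A: reserve all 16, paying for burst capacity even when idle.
all_reserved = 16 * hours_month * od_rate * (1 - ri_discount)

print(f"RI baseline + OD burst: ${mixed:.2f}/mo")
print(f"All 16 reserved:        ${all_reserved:.2f}/mo")
```

Under these assumptions the mixed strategy is cheaper because the six burst instances are only paid for during the hours they actually run.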
Q7
A company recently experienced a security incident where an IAM user's long-term access keys were leaked on a public GitHub repository. The security team wants to implement a preventive control that reduces the risk of long-term credentials being used to access AWS resources from outside the corporate network, while still allowing developers to work normally. Which solution BEST meets this requirement?
A Enable AWS IAM Access Analyzer to automatically detect and revoke exposed access keys
B Attach an IAM policy to all developer users that includes a condition denying API calls unless the request originates from the corporate IP range
C Enable Amazon GuardDuty to monitor for suspicious API calls made using compromised credentials
D Enforce multi-factor authentication (MFA) on all IAM users and require MFA to perform any sensitive API action
Correct Answer
Attach an IAM policy to all developer users that includes a condition denying API calls unless the request originates from the corporate IP range
Explanation
The core requirement is a preventive control that limits where long-term credentials can be used. An IAM policy with an aws:SourceIp condition key explicitly denies API calls originating from outside the approved corporate IP range. Even if an access key is leaked and an attacker obtains it, they cannot successfully make API calls from outside the corporate network — the condition block stops the request before any action is taken. This is a true preventive control.

A is wrong because IAM Access Analyzer identifies exposed access keys by scanning public repositories, but it does not automatically revoke them — it generates findings that humans must act on. It is a detective control, not a preventive one, and does not restrict where credentials can be used from.
C is wrong because GuardDuty is a threat detection service. It monitors CloudTrail logs and network traffic to identify suspicious behavior after it occurs. It cannot prevent an attacker from using leaked credentials — it can only alert you after the fact.
D is wrong because MFA adds a second authentication factor for console logins, but long-term access keys (used for programmatic API access) do not inherently require MFA unless you specifically add aws:MultiFactorAuthPresent conditions. Even then, MFA does not restrict the geographic origin of the API call, which is the primary concern after a key leak.
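A sketch of the deny policy follows, built as a dictionary for readability. The CIDR range is a placeholder (the TEST-NET-3 documentation range), standing in for the real corporate network.

```python
import json

# Sketch of the deny-outside-corporate-network IAM policy; CIDR is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideCorporateNetwork",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # NotIpAddress + Deny: block any request NOT from this range.
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}  # example range
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

In practice such policies often add a check on the `aws:ViaAWSService` condition key as well, so that AWS services making calls on the user's behalf (which arrive from AWS IP space) are not accidentally blocked.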
Q8
A healthcare company is migrating a patient records application to AWS and will store Protected Health Information (PHI) in an Amazon RDS database. The company's compliance officer asks who is responsible for encrypting the data at rest in RDS and who is responsible for the physical security of the servers running RDS. Which answer correctly describes the division of responsibility?
A AWS is responsible for both encryption at rest and physical server security
B The customer is responsible for both encryption at rest and physical server security
C The customer is responsible for enabling and configuring encryption at rest; AWS is responsible for physical server security
D AWS is responsible for enabling encryption at rest by default; the customer is responsible for managing the encryption keys using AWS KMS
Correct Answer
The customer is responsible for enabling and configuring encryption at rest; AWS is responsible for physical server security
Explanation
The AWS Shared Responsibility Model divides duties clearly. Physical security of the infrastructure — data centers, servers, networking hardware — is entirely AWS's responsibility under "security of the cloud." AWS maintains compliance certifications and programs (SOC 2, ISO 27001, HIPAA eligibility) covering these physical controls. The customer is never involved in physical data center security.
Encryption at rest for RDS is a customer responsibility under "security in the cloud." AWS provides the capability (RDS supports encryption using AWS KMS), but the customer must make the decision to enable it when creating the instance, choose the KMS key, and manage key policies. AWS does not enable encryption by default on all RDS instances — the customer configures it.

A is wrong because AWS is never responsible for configuring encryption on customer data. AWS provides the tools and infrastructure; the customer decides how and whether to use them.
B is wrong because physical server security is entirely AWS's domain. Customers have no access to, visibility into, or responsibility for the physical hardware their workloads run on.
D is wrong because AWS does not enable encryption at rest by default on RDS. It is an opt-in feature the customer must enable at instance creation time. Additionally, key management in KMS is a customer responsibility, but that is a secondary point — the primary error here is the claim that AWS enables encryption by default.
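The opt-in nature of RDS encryption shows up directly in the instance creation parameters. The sketch below names the relevant fields; the identifiers and the KMS key ARN are placeholders, and with boto3 this dictionary would be passed to `create_db_instance`.

```python
import json

# Illustrative RDS creation parameters; identifiers and key ARN are placeholders.
create_params = {
    "DBInstanceIdentifier": "patient-records-db",
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "StorageEncrypted": True,   # the customer's choice at creation time, not a default
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # placeholder
}
print(json.dumps(create_params, indent=2))
```

If `StorageEncrypted` is omitted or left false at creation, the instance's storage cannot simply be encrypted in place later, which is why the decision sits squarely with the customer.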
Q9
A gaming company needs a database for its global leaderboard feature. The leaderboard must handle millions of reads and writes per second with single-digit millisecond latency, requires no complex relational queries or joins, and must scale automatically without any database administration overhead. Which AWS database service BEST meets these requirements?
A Amazon Aurora with read replicas across multiple Availability Zones
B Amazon RDS for MySQL with an Amazon ElastiCache caching layer
C Amazon DynamoDB
D Amazon Redshift
Correct Answer
Amazon DynamoDB
Explanation
Amazon DynamoDB is a fully managed, serverless NoSQL key-value and document database built for exactly this pattern: millions of requests per second, consistent single-digit millisecond performance at any scale, and zero database administration. It scales automatically — both read and write capacity — without manual intervention. For a leaderboard (a simple key-value access pattern with no joins), DynamoDB is the purpose-built solution. DynamoDB also integrates with DynamoDB Accelerator (DAX), a managed in-memory cache, for microsecond read latency if needed.

A is wrong because Aurora is a relational database optimized for complex SQL workloads. While it is highly performant and scalable, it still requires instance sizing, cluster management, and incurs more operational overhead than DynamoDB. It is not designed for simple, extremely high-throughput key-value patterns at millions of requests per second without careful tuning.
B is wrong because this architecture — RDS MySQL plus a caching layer — has significant operational overhead: managing RDS instances, handling failover, maintaining ElastiCache clusters, managing cache invalidation logic, and handling cache-miss scenarios. This directly violates the "no database administration overhead" requirement and introduces architectural complexity.
D is wrong because Amazon Redshift is a data warehousing service designed for complex analytical queries (OLAP) against large datasets. It is optimized for throughput on massive aggregations, not for high-velocity transactional reads and writes with millisecond latency requirements.
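A leaderboard table in DynamoDB reduces to a simple key schema. The sketch below shows an illustrative table definition (names are placeholders); with boto3 this dictionary would be passed to `create_table`.

```python
import json

# Illustrative DynamoDB table definition for the leaderboard pattern.
table_spec = {
    "TableName": "Leaderboard",
    "KeySchema": [
        {"AttributeName": "GameId", "KeyType": "HASH"},     # partition key
        {"AttributeName": "PlayerId", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "GameId", "AttributeType": "S"},
        {"AttributeName": "PlayerId", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand scaling, no capacity administration
}
print(json.dumps(table_spec, indent=2))
```

Every read and write targets a single key, which is exactly the access pattern DynamoDB is optimized for at this scale.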
Q10
A company runs a web application on EC2 instances inside a VPC. The application fetches software updates and patches from the internet but must not be directly reachable from the internet for security reasons. The EC2 instances are in private subnets. What must the company configure to allow the instances to initiate outbound internet traffic while remaining unreachable from the internet?
A An Internet Gateway attached to the VPC, with a route in the private subnet route table pointing to the Internet Gateway
B A NAT Gateway in a public subnet, with a route in the private subnet route table pointing to the NAT Gateway, and an Internet Gateway attached to the VPC
C A VPC Endpoint for the software update service, eliminating the need for any internet connectivity
D A bastion host in a public subnet that proxies all outbound traffic from the private subnet instances
Correct Answer
A NAT Gateway in a public subnet, with a route in the private subnet route table pointing to the NAT Gateway, and an Internet Gateway attached to the VPC
Explanation
This is a classic AWS networking architecture question. For private subnet instances to reach the internet without being reachable from it, you need all three components working together:
First, an Internet Gateway (IGW) must be attached to the VPC — this is the actual on/off ramp between the VPC and the public internet. Without it, nothing in the VPC can reach the internet at all.
Second, a NAT Gateway must be placed in a public subnet (a subnet whose route table points to the IGW). The NAT Gateway performs network address translation — it allows outbound connections initiated by private instances and returns responses to them, but blocks any connection attempts initiated from the internet.
Third, the private subnet's route table must have a route directing internet-bound traffic (0.0.0.0/0) to the NAT Gateway.

A is wrong because adding a route from the private subnet directly to the Internet Gateway would make those instances directly routable from the internet — eliminating the security boundary. An IGW alone does not provide the one-way outbound-only behavior needed.
C is wrong because VPC Endpoints only work for specific AWS services (like S3, DynamoDB, or services exposed via AWS PrivateLink). They cannot be used to access arbitrary internet destinations like third-party software update servers. This would only be valid if the updates came from an AWS service with a supported endpoint.
D is wrong because a bastion host is used for SSH/RDP administrative access to private instances, not for proxying general application traffic. Using a bastion as an internet proxy would require significant custom configuration and creates a single point of failure — this is not a standard or scalable architectural pattern on AWS.
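The three-part architecture reduces to two route tables. The sketch below lays them out side by side; the VPC CIDR and the gateway IDs are placeholders.

```python
import json

# Sketch of the two route tables; CIDR and gateway IDs are placeholders.
public_subnet_routes = [
    {"Destination": "10.0.0.0/16", "Target": "local"},      # VPC-internal traffic
    {"Destination": "0.0.0.0/0", "Target": "igw-EXAMPLE"},  # internet via the IGW
]
private_subnet_routes = [
    {"Destination": "10.0.0.0/16", "Target": "local"},
    {"Destination": "0.0.0.0/0", "Target": "nat-EXAMPLE"},  # outbound-only via NAT
]
print(json.dumps(
    {"public": public_subnet_routes, "private": private_subnet_routes},
    indent=2,
))
```

The private subnet never points at the IGW directly, so nothing on the internet can initiate a connection to the instances; the NAT Gateway only forwards replies to connections the instances opened themselves.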
Q11
A company is running a large workload on AWS and receives a monthly bill that is significantly higher than expected. The cloud team wants to investigate which specific AWS services, accounts, and resource tags are driving the highest costs, identify trends over time, and create custom reports that can be shared with individual business units. Which AWS tool is MOST appropriate for this requirement?
A AWS Trusted Advisor
B AWS Cost Explorer
C AWS Budgets
D AWS Pricing Calculator
Correct Answer
AWS Cost Explorer
Explanation
AWS Cost Explorer is the purpose-built tool for analyzing, visualizing, and understanding AWS spending in depth. It allows you to break down costs by service, linked account, region, usage type, and resource tags. You can view historical trends, filter and group data in custom ways, create saved reports for specific business units, and even access 12 months of historical data with forecasting for future spend. It directly addresses every requirement in the scenario: service-level breakdown, account-level breakdown, tag-based allocation, trend analysis, and shareable reports.

A is wrong because AWS Trusted Advisor provides recommendations across cost optimization, security, performance, fault tolerance, and service limits. While it has a cost optimization category, it flags specific inefficiencies (e.g., idle EC2 instances) rather than providing the granular cost analysis, trend visualization, and custom reporting described in the scenario.
C is wrong because AWS Budgets is a proactive alerting tool — you set spending or usage thresholds and receive alerts when you approach or exceed them. It does not provide the historical analysis, drill-down breakdowns by tag or service, or the custom reporting capability the scenario requires.
D is wrong because the AWS Pricing Calculator is used to estimate costs for planned future architectures before you deploy them. It has no visibility into actual spending — it cannot analyze an existing bill, show historical trends, or identify what is driving current costs.
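The kind of drill-down described above maps to a Cost Explorer API query. The sketch below shows an illustrative `GetCostAndUsage` request shape (boto3: `ce.get_cost_and_usage`); the dates and the tag key are placeholders.

```python
import json

# Illustrative Cost Explorer request; dates and tag key are placeholders.
request = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-07-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [
        {"Type": "DIMENSION", "Key": "SERVICE"},   # break costs down by service
        {"Type": "TAG", "Key": "business-unit"},   # and by cost-allocation tag
    ],
}
print(json.dumps(request, indent=2))
```

Grouping by a cost-allocation tag is what lets each business unit see its own slice of the bill, which is the reporting requirement in the scenario.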
Q12
A large enterprise wants to migrate its on-premises data center to AWS over the next two years. The CTO wants a structured approach to evaluate which workloads should move first, which should be refactored, which can be retired, and which should remain on-premises. Which AWS framework or methodology is specifically designed to guide this type of cloud migration planning?
A AWS Well-Architected Framework
B AWS Cloud Adoption Framework (AWS CAF)
C AWS Migration Evaluator
D The 7 Rs of cloud migration (Retire, Retain, Rehost, Replatform, Repurchase, Refactor, Relocate)
Correct Answer
The 7 Rs of cloud migration (Retire, Retain, Rehost, Replatform, Repurchase, Refactor, Relocate)
Explanation
The 7 Rs (sometimes called the Migration Strategies) are the AWS-defined framework for categorizing and planning the migration of individual workloads. Each "R" represents a migration strategy: Retire (decommission), Retain (keep on-premises), Rehost (lift-and-shift to EC2), Replatform (lift-tinker-and-shift, e.g., move to RDS), Repurchase (move to SaaS), Refactor/Re-architect (redesign using cloud-native services), and Relocate (move to VMware Cloud on AWS). The scenario describes exactly the decision-making process the 7 Rs are designed for — evaluating each workload and assigning it the right strategy.

A is wrong because the AWS Well-Architected Framework evaluates existing cloud workloads against six pillars (Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability). It is used to assess and improve architectures already running on AWS, not to plan migrations from on-premises.
B is wrong because the AWS Cloud Adoption Framework (CAF) is a higher-level organizational and strategic guidance document covering people, process, and technology perspectives for cloud transformation. It helps organizations understand what capabilities they need to build, but it does not provide the workload-by-workload migration classification methodology described in the scenario.
C is wrong because AWS Migration Evaluator is a tool that analyzes your current on-premises server inventory and utilization data to produce a cost projection for running equivalent workloads on AWS. It helps build a business case for migration but does not classify workloads by migration strategy.
Q13
A company runs a containerized microservices application. The DevOps team wants to run containers on AWS without managing or patching any underlying EC2 instances, EC2 clusters, or server infrastructure of any kind. The team still wants control over task-level configurations such as CPU, memory allocation, and IAM roles per container task. Which compute option meets these requirements?
A Amazon ECS with EC2 launch type
B Amazon ECS with Fargate launch type
C Amazon EC2 with Docker installed manually
D AWS Lambda with container image support
Correct Answer
Amazon ECS with Fargate launch type
Explanation
Amazon ECS with the Fargate launch type is AWS's serverless compute engine for containers. With Fargate, you define your container task — specifying CPU, memory, networking, and IAM task roles — and AWS provisions, manages, patches, and scales the underlying infrastructure invisibly. You never interact with EC2 instances. This perfectly matches the requirement of zero server management with full task-level configuration control.

A is wrong because ECS with the EC2 launch type requires you to provision and manage a cluster of EC2 instances that serve as container hosts. You are responsible for right-sizing, patching, scaling, and maintaining those instances — the opposite of what the team wants.
C is wrong because running Docker on EC2 directly is the highest-overhead option. The team manages the EC2 instance, the OS, Docker installation and upgrades, container orchestration, and scaling — everything the scenario specifically wants to avoid.
D is wrong because AWS Lambda with container image support allows you to package Lambda functions as container images, but Lambda is an event-driven, function-level compute service with strict execution time limits (15 minutes maximum) and a stateless model. It is not designed for running persistent microservices and does not give you the task-level CPU/memory configuration model used in container orchestration. Lambda abstracts away too much of the container runtime model.
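The "task-level control without server management" point is visible in a Fargate task definition. The fragment below is illustrative; the family name, image, and role ARN are placeholders.

```python
import json

# Illustrative Fargate task definition fragment; names and ARN are placeholders.
task_definition = {
    "family": "example-api",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required network mode for Fargate tasks
    "cpu": "512",              # task-level CPU (0.5 vCPU)
    "memory": "1024",          # task-level memory (MiB)
    "taskRoleArn": "arn:aws:iam::111122223333:role/app-task-role",  # placeholder
    "containerDefinitions": [
        {"name": "api", "image": "example/app:latest", "essential": True}
    ],
}
print(json.dumps(task_definition, indent=2))
```

Everything the team controls (CPU, memory, networking, the IAM task role) lives in this document; nothing in it references an EC2 instance, because Fargate owns that layer entirely.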
Q14
A financial services company uses AWS and must demonstrate to auditors that all API calls made in their AWS environment are logged, tamper-evident, and retained for seven years for compliance purposes. Currently, AWS CloudTrail is enabled in one region. Which combination of actions should the company take to FULLY meet these requirements?
A Enable CloudTrail in all regions, store logs in an S3 bucket with versioning enabled, and configure an S3 lifecycle policy to transition logs to S3 Glacier after 90 days
B Enable a CloudTrail organization trail from AWS Organizations management account, deliver logs to a centralized S3 bucket with S3 Object Lock enabled in compliance mode, and configure a lifecycle policy to retain logs for seven years
C Enable CloudTrail in all regions, enable CloudWatch Logs integration to stream events to a CloudWatch log group, and set the log group retention policy to 7 years
D Enable AWS Config across all regions to record all configuration changes, store the configuration history in S3, and enable S3 versioning to prevent tampering
Correct Answer
Enable a CloudTrail organization trail from AWS Organizations management account, deliver logs to a centralized S3 bucket with S3 Object Lock enabled in compliance mode, and configure a lifecycle policy to retain logs for seven years
Explanation
This answer satisfies every requirement simultaneously:
An organization trail created from the management account automatically enables CloudTrail across all member accounts and all regions in one step — closing the gap of only having one region covered.
Delivering logs to a centralized S3 bucket in a dedicated security account prevents member account administrators from tampering with logs in their own accounts.
S3 Object Lock in compliance mode is the critical tamper-evidence control. In compliance mode, no user — not even the root account — can delete or modify locked objects before the retention period expires. This is the strongest tamper-evident guarantee AWS offers for stored data, and it directly satisfies the auditor's tamper-evident requirement.
A lifecycle policy set to retain objects for seven years automates the retention requirement without manual management.

A is wrong because S3 versioning alone is not tamper-evident in the way auditors require. An administrator with sufficient S3 permissions can still delete all versions of an object. Versioning prevents accidental overwrites but does not constitute a compliance-grade tamper-evident lock.
C is wrong for two reasons. First, CloudWatch Logs has a maximum retention setting of 10 years, which technically covers 7 years, but it is not designed as a compliance-grade immutable audit log store. Second, and more critically, CloudWatch Logs does not provide tamper-evidence — logs can be deleted by users with sufficient permissions. This does not satisfy the tamper-evident requirement.
D is wrong because AWS Config records resource configuration changes, not API calls. The scenario specifically requires logging of all API calls, which is CloudTrail's function. Config and CloudTrail serve different purposes and are not interchangeable. Config alone would leave API activity entirely unlogged.
Q15
A retail company runs its e-commerce platform entirely on-premises. During the holiday season, traffic increases by 800% for about three weeks, causing performance issues and lost sales. The rest of the year, most servers sit largely idle. The company wants to resolve this without over-provisioning hardware.
Which cloud benefit BEST addresses this situation?
A High availability through multiple data center redundancy
B Elasticity, allowing resources to scale up during peak demand and scale down afterward
C Economies of scale, reducing the per-unit cost of compute resources
D Geographic reach, enabling the platform to serve customers in more regions
Correct Answer
Elasticity, allowing resources to scale up during peak demand and scale down afterward
Explanation
Elasticity directly solves the problem of seasonal traffic spikes. The company can provision additional capacity during the holiday rush and release it afterward, paying only for what it uses — without buying hardware that sits idle most of the year.

A is wrong because high availability addresses uptime and redundancy, not the ability to handle variable load. The problem here is capacity, not outages.
C is wrong because economies of scale refers to AWS passing on cost savings from bulk purchasing — it's a passive benefit, not something you activate to handle traffic spikes.
D is wrong because geographic reach helps serve users in distant regions with lower latency, which has nothing to do with the seasonal capacity problem described.

CLF-C02 Practice Set-02

15 questions
Q1
A startup CTO argues that moving to AWS eliminates all capital expenditure (CapEx) and converts infrastructure spending entirely to operational expenditure (OpEx). A cloud architect pushes back, saying this characterization is not entirely accurate. Which scenario would represent a remaining CapEx consideration even after migrating fully to AWS?
A Paying for EC2 Reserved Instances on a 3-year all-upfront payment term
B Paying monthly On-Demand EC2 instance charges
C Paying for AWS Support at the Enterprise tier monthly
D Paying for data transfer costs between AWS regions monthly
Correct Answer
Paying for EC2 Reserved Instances on a 3-year all-upfront payment term
Explanation
The key distinction between CapEx and OpEx in cloud context is payment structure and asset ownership. CapEx traditionally involves large upfront payments for assets or commitments. When a company purchases 3-year All-Upfront Reserved Instances, they make a single large lump-sum payment at the beginning of the term for compute capacity they commit to for three years. From an accounting perspective, many organizations treat this as a capital expenditure — a significant upfront financial commitment tied to a multi-year term, even though AWS owns the hardware. This challenges the assumption that cloud is purely OpEx.

B is wrong because On-Demand pricing is the purest form of OpEx in cloud — you pay only for what you consume, billed hourly or per second, with no commitment. This is exactly the pay-as-you-go model that defines operational expenditure.
C is wrong because AWS Enterprise Support is billed monthly with no long-term commitment required. This is a recurring operational expense, not a capital commitment.
D is wrong because data transfer charges are usage-based costs billed monthly based on actual consumption. These are variable operational expenses with no upfront commitment.
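The CapEx-like character of an All-Upfront Reserved Instance is easiest to see by spreading the lump sum across the term. The sketch below uses hypothetical prices (both the On-Demand rate and the upfront RI price are made up for illustration, not real AWS figures):

```python
# Illustrative comparison of a 3-year All-Upfront Reserved Instance
# (a lump-sum, CapEx-like commitment) against On-Demand (pure OpEx).
# Both prices are hypothetical placeholders, not real AWS rates.

HOURS_PER_YEAR = 8760

def effective_hourly_rate(upfront_cost: float, term_years: int) -> float:
    """Spread a single upfront payment across every hour of the term."""
    return upfront_cost / (term_years * HOURS_PER_YEAR)

on_demand_hourly = 0.10   # hypothetical On-Demand $/hour
ri_upfront = 1600.0       # hypothetical 3-year All-Upfront payment

ri_hourly = effective_hourly_rate(ri_upfront, term_years=3)
savings_pct = (1 - ri_hourly / on_demand_hourly) * 100

print(f"RI effective rate: ${ri_hourly:.4f}/hour")
print(f"Savings vs On-Demand: {savings_pct:.1f}%")
```

The accounting point is the shape of the payment, not the discount: one large payment up front for a multi-year commitment is what makes many finance teams book this as CapEx.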
Q2
A company needs to store 5 petabytes of archival data that is accessed at most once or twice per year for regulatory audits. Retrieval time of 12 hours is acceptable. The company's primary concern is achieving the absolute lowest possible storage cost. Which Amazon S3 storage class should the company use?
A Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
B Amazon S3 Glacier Instant Retrieval
C Amazon S3 Glacier Flexible Retrieval
D Amazon S3 Glacier Deep Archive
Correct Answer
Amazon S3 Glacier Deep Archive
Explanation
Amazon S3 Glacier Deep Archive is the lowest-cost storage class AWS offers — designed specifically for data that is retained for years or decades and accessed extremely rarely (once or twice per year or less). It has a default retrieval time of 12 hours (or up to 48 hours for bulk retrieval), which aligns perfectly with the scenario's stated acceptable retrieval window. At petabyte scale, the cost difference between Glacier Deep Archive and other tiers is enormous, making this the unambiguous choice when lowest cost is the primary concern.

A is wrong because S3 Standard-IA is designed for data accessed infrequently but requiring immediate millisecond retrieval. Its storage cost is significantly higher than any Glacier tier, and paying for millisecond retrieval capability that the company does not need wastes money — especially at 5 petabytes.
B is wrong because S3 Glacier Instant Retrieval provides millisecond access latency and is priced accordingly. It is designed for archive data that needs to be retrieved immediately on rare occasions. The company has explicitly stated 12-hour retrieval is acceptable, so paying the premium for instant retrieval is unjustified.
C is wrong because S3 Glacier Flexible Retrieval (formerly S3 Glacier) is cheaper than Instant Retrieval and offers retrieval options from minutes to hours — but it is still more expensive per GB than Glacier Deep Archive. When retrieval time up to 12 hours is acceptable and cost is the top priority, Deep Archive is always the right choice over Flexible Retrieval.
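At 5 PB, even small per-GB differences compound into large monthly sums. The sketch below ranks the four options using illustrative per-GB-month prices (placeholders only; real prices vary by region, so check current AWS pricing):

```python
# Rough monthly storage-cost comparison for 5 PB across the four
# S3 classes in this question. Per-GB-month prices are illustrative
# placeholders ordered like the real tiers, not actual AWS rates.

DATA_GB = 5 * 1024 * 1024  # 5 PB expressed in GB

price_per_gb_month = {
    "Standard-IA": 0.0125,
    "Glacier Instant Retrieval": 0.004,
    "Glacier Flexible Retrieval": 0.0036,
    "Glacier Deep Archive": 0.00099,
}

monthly_cost = {cls: DATA_GB * p for cls, p in price_per_gb_month.items()}
cheapest = min(monthly_cost, key=monthly_cost.get)

for cls, cost in sorted(monthly_cost.items(), key=lambda kv: kv[1]):
    print(f"{cls:28s} ${cost:>12,.0f}/month")
print(f"Cheapest: {cheapest}")
```

Even with made-up numbers the ordering shows why Deep Archive wins whenever a 12-hour retrieval window is acceptable and cost is the top priority.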
Q3
An enterprise security team wants visibility into all AWS accounts within their organization to detect unusual API activity, cryptocurrency mining behavior, reconnaissance attempts, and potentially compromised EC2 instances — without deploying any agents on their instances. The solution must work across all accounts in AWS Organizations with a single delegated administrator setup. Which AWS service should they use?
A AWS Security Hub
B Amazon Inspector
C Amazon GuardDuty
D AWS Config
Correct Answer
Amazon GuardDuty
Explanation
Amazon GuardDuty is a managed threat detection service that continuously analyzes AWS CloudTrail event logs, VPC Flow Logs, and DNS query logs to identify malicious activity and unauthorized behavior. It specifically detects cryptocurrency mining, credential compromise, reconnaissance, instance communication with known malicious IPs, and many other threat patterns — all without requiring any agents on EC2 instances. GuardDuty supports a delegated administrator model within AWS Organizations, allowing a single security account to receive and manage findings from all member accounts centrally.

A is wrong because AWS Security Hub is an aggregation and correlation service that collects findings from multiple security services (including GuardDuty, Inspector, Macie, and others) and presents them in a unified dashboard. Security Hub itself does not perform threat detection — it depends on GuardDuty and other services to generate the findings it displays. Enabling Security Hub alone without GuardDuty would not detect the threats described.
B is wrong because Amazon Inspector performs vulnerability assessments — it scans EC2 instances and container images for known CVEs and software vulnerabilities. It does not detect runtime behavioral threats like cryptocurrency mining, reconnaissance, or API anomalies. It also requires an agent on EC2 instances for deep inspection.
D is wrong because AWS Config records and evaluates AWS resource configuration states against compliance rules. It tracks whether configurations comply with desired settings (e.g., "is S3 bucket public?") but does not analyze behavioral patterns, network traffic, or API call anomalies for threat detection.
Q4
A company has been running on AWS for two years. Their monthly bill has grown significantly and the finance team wants to implement a tagging strategy to allocate costs to specific departments, projects, and cost centers for internal chargeback reporting. Which combination of steps is required to make resource tags appear in AWS cost and usage reports?
A Apply tags to all AWS resources, then enable the tags as cost allocation tags in the AWS Billing Console
B Apply tags to all AWS resources and enable AWS Cost Explorer — tags automatically appear in all reports
C Enable AWS Config to track resource tags across all accounts and link it to AWS Cost Explorer
D Apply tags to all resources and create an AWS Budget for each department — the budget automatically maps costs to tags
Correct Answer
Apply tags to all AWS resources, then enable the tags as cost allocation tags in the AWS Billing Console
Explanation
AWS has a two-step requirement for tags to appear in cost and billing reports. First, tags must be applied to the AWS resources. Second — and this is the step many people miss — those tags must be explicitly activated as cost allocation tags in the AWS Billing and Cost Management Console. Only after activation does AWS begin including those tag keys as columns in Cost and Usage Reports and making them filterable in Cost Explorer. Tags applied before activation do not retroactively appear in historical cost data, and unactivated tags are invisible to billing reports regardless of how consistently they are applied to resources.

B is wrong because enabling Cost Explorer alone does not surface tags in billing reports. Tags must be explicitly activated as cost allocation tags in the Billing Console. This is one of the most commonly misunderstood billing behaviors in AWS — many teams tag resources thoroughly but never activate the tags, then wonder why they cannot filter costs by tag in Cost Explorer.
C is wrong because AWS Config tracks configuration and compliance of resources, including whether tags exist, but it has no integration with billing systems that makes tags appear in cost reports. Cost allocation tags are activated exclusively through the Billing Console, not through Config.
D is wrong because AWS Budgets is a forecasting and alerting tool. While you can filter a budget by tag, creating a budget does not activate tags for cost allocation purposes, does not make tags appear in usage reports, and does not enable chargeback reporting across all services.
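The two-step rule can be modeled as a simple set intersection: a tag key becomes a report column only if it is both applied to resources and activated in the Billing Console. This is a toy illustration of that logic (the tag names are hypothetical):

```python
# Toy model of the two-step cost allocation tag requirement: a tag key
# only appears in cost reports if it is BOTH applied to resources AND
# activated as a cost allocation tag in the Billing Console.
# Tag key names below are illustrative.

def report_tag_columns(applied_tag_keys: set[str],
                       activated_tag_keys: set[str]) -> set[str]:
    """Tag keys that actually appear as columns in cost reports."""
    return applied_tag_keys & activated_tag_keys

applied = {"Department", "Project", "CostCenter"}  # tagged on resources
activated = {"Department"}                         # activated in Billing

# Only "Department" is visible to billing; the other two keys were
# applied diligently but never activated, so reports cannot see them.
print(report_tag_columns(applied, activated))
```

This mirrors the common failure mode described above: thorough tagging with no activation yields an empty (or partial) set of filterable tag columns.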
Q5
A company wants to run a small script that resizes images whenever a new file is uploaded to an Amazon S3 bucket. The script typically completes in under 30 seconds and is triggered infrequently and unpredictably throughout the day. The company wants to pay only when the script actually runs and have zero server management overhead. Which solution is MOST appropriate?
A Launch a t3.micro EC2 instance that continuously polls the S3 bucket for new uploads and runs the script when a file is detected
B Use AWS Lambda triggered by an S3 event notification to run the image resizing script automatically
C Use Amazon ECS with Fargate, triggered by an EventBridge rule that detects S3 uploads
D Use AWS Batch to submit a job whenever a new file is detected in S3
Correct Answer
Use AWS Lambda triggered by an S3 event notification to run the image resizing script automatically
Explanation
AWS Lambda is the canonical solution for this pattern. Lambda functions are triggered directly by S3 event notifications (e.g., s3:ObjectCreated), execute the processing logic, and terminate — with billing measured in milliseconds of actual execution time. There are no servers to manage, no idle costs between invocations, and the 30-second execution time is well within Lambda's 15-minute maximum. This is precisely the event-driven, serverless compute use case Lambda was designed for.

A is wrong because a continuously running EC2 instance polling S3 incurs charges 24/7 regardless of whether any files are uploaded. This violates the "pay only when the script runs" requirement and introduces server management overhead (OS patching, monitoring, availability). It is also architecturally inefficient — polling introduces latency and wastes compute.
C is wrong because ECS with Fargate is appropriate for containerized workloads that require more resources, longer runtimes, or more complex orchestration than Lambda supports. For a 30-second image processing script triggered by S3 events, Fargate adds unnecessary architectural complexity — container image management, task definitions, and cluster configuration — when Lambda handles this natively and more cost-effectively.
D is wrong because AWS Batch is designed for large-scale batch computing jobs that require significant compute resources, complex dependencies, or job queuing. It is overkill for a simple 30-second script and introduces job scheduling overhead and queue delays that are not appropriate for a lightweight, event-driven workload.
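A minimal handler for this pattern only needs to unpack the S3 event and hand the object to the resize logic. The sketch below uses the real S3 notification record shape (`Records[].s3.bucket.name` and `Records[].s3.object.key`); the resize step itself is stubbed, since a real implementation might use an image library such as Pillow:

```python
# Minimal sketch of a Lambda handler wired to an S3 ObjectCreated
# event notification. The event structure is the real S3 notification
# shape; the resize step is a stub.

import urllib.parse

def handler(event, context=None):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded; '+' stands for a space.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # resize_image(bucket, key)  # stub: real resizing would go here
        results.append((bucket, key))
    return results

# Abbreviated sample of the event Lambda receives from S3:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads-bucket"},
                "object": {"key": "photos/cat+photo.jpg"}}}
    ]
}

print(handler(sample_event))  # [('uploads-bucket', 'photos/cat photo.jpg')]
```

Decoding the key with `unquote_plus` matters in practice: a filename with spaces arrives as `cat+photo.jpg`, and fetching the raw key from S3 would fail.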
Q6
A company's e-commerce application experienced an outage when a single database server failed. The engineering team is reviewing the architecture to prevent this from happening again. During the review, they also discover that there are no automated backups, no monitoring alerts, and deployments are done manually with no rollback capability. Which pillar of the AWS Well-Architected Framework is MOST relevant to the specific failure that caused the outage?
A Performance Efficiency
B Operational Excellence
C Reliability
D Cost Optimization
Correct Answer
Reliability
Explanation
The Reliability pillar of the AWS Well-Architected Framework focuses specifically on a workload's ability to perform its intended function correctly and consistently, including the ability to recover from failures automatically. The scenario's core failure — a single database server failing and taking down the application — is a classic single point of failure (SPOF), which the Reliability pillar addresses through design principles like using multiple Availability Zones, implementing fault isolation, and architecting for automatic recovery. Eliminating SPOFs and building resilience are the primary concerns of Reliability.

A is wrong because Performance Efficiency addresses using computing resources efficiently to meet system requirements and scaling as demand changes. Database server performance (speed, throughput, latency) would fall under this pillar, but the failure to handle a server crash is a reliability concern, not a performance concern.
B is wrong because Operational Excellence covers running and monitoring systems to deliver business value, including automating deployments, responding to events, and refining procedures. The absence of monitoring, manual deployments, and no rollback capability described in the scenario are Operational Excellence concerns — but the specific outage-causing failure (single server crash) is a Reliability issue. The question asks about the failure that caused the outage, not the other gaps discovered.
D is wrong because Cost Optimization focuses on avoiding unnecessary costs and using resources efficiently. Nothing in the outage scenario relates to overspending or cost inefficiency.
Q7
A company runs a highly sensitive internal application on AWS that should never be accessible from the public internet. All traffic must travel exclusively over private AWS network infrastructure. The company also needs to connect their on-premises data center to this AWS VPC with a consistent, low-latency, high-bandwidth dedicated connection rather than traversing the public internet. Which AWS service provides this dedicated private connectivity?
A AWS Site-to-Site VPN
B AWS Direct Connect
C Amazon CloudFront with origin access control (OAC)
D VPC Peering
Correct Answer
AWS Direct Connect
Explanation
AWS Direct Connect establishes a dedicated, private physical network connection between an on-premises data center and AWS, completely bypassing the public internet. Traffic travels over a private fiber connection between the customer's facility and an AWS Direct Connect location, then across AWS's private global network to the target VPC. This provides consistent low latency, predictable bandwidth, and enhanced security — exactly what a sensitive application requiring no public internet exposure demands. Direct Connect supports connection speeds from 50 Mbps up to 100 Gbps.

A is wrong because AWS Site-to-Site VPN creates an encrypted tunnel between on-premises and AWS, but that tunnel travels over the public internet. While encrypted, the traffic still traverses shared public internet infrastructure, introducing variable latency and bandwidth, and technically touches the public internet — which the scenario explicitly prohibits.
C is wrong because Amazon CloudFront is a content delivery network that caches and serves content at edge locations. Origin access control restricts access to S3 origins behind CloudFront. Neither CloudFront nor OAC has any relevance to dedicated private on-premises to AWS connectivity.
D is wrong because VPC Peering connects two VPCs privately within the AWS network. It does not provide connectivity between an on-premises data center and AWS. Peering also has no concept of dedicated bandwidth or physical network connections.
Q8
A company stores sensitive documents in Amazon S3. The security team wants to automatically discover which S3 buckets contain personally identifiable information (PII) such as names, email addresses, credit card numbers, and passport numbers — across hundreds of buckets — without manually reviewing file contents. Which AWS service is designed for this purpose?
A AWS Trusted Advisor
B Amazon Macie
C Amazon GuardDuty
D AWS Config with a custom rule
Correct Answer
Amazon Macie
Explanation
Amazon Macie is a fully managed data security and privacy service that uses machine learning to automatically discover, classify, and protect sensitive data stored in Amazon S3. It can identify a wide range of sensitive data types including PII (names, addresses, email addresses), financial data (credit card numbers, bank account numbers), and credentials (AWS secret keys, private keys) — across an entire S3 environment. Macie generates detailed findings showing exactly which buckets and objects contain sensitive data, enabling the security team to take appropriate action without manually reviewing file contents.

A is wrong because AWS Trusted Advisor provides recommendations for cost optimization, security best practices (such as flagging publicly accessible S3 buckets), performance, and fault tolerance. It does not scan the actual contents of files stored in S3 to identify PII or sensitive data.
C is wrong because Amazon GuardDuty analyzes behavioral data — CloudTrail API logs, VPC Flow Logs, DNS logs — to detect threats and malicious activity. It does not scan object contents within S3 buckets to identify sensitive data types. GuardDuty and Macie serve complementary but distinct purposes.
D is wrong because AWS Config evaluates resource configurations against desired rules — for example, checking whether S3 buckets have encryption enabled or public access blocked. Custom Config rules can inspect resource metadata and settings, but they cannot read and classify the actual contents of files stored inside S3 buckets to identify PII.
Q9
A company runs a legacy CRM application on-premises using a commercial software license from a vendor. The vendor now offers the same CRM as a cloud-based SaaS subscription product. The company decides to cancel the on-premises license and subscribe to the vendor's SaaS version, avoiding any re-engineering work. Which of the 7 Rs migration strategies does this represent?
A Replatform
B Refactor
C Repurchase
D Rehost
Correct Answer
Repurchase
Explanation
Repurchase (sometimes called "drop and shop") means moving from an existing on-premises or self-managed software license to a different product delivered as a SaaS subscription — typically from the same vendor or a competitor. The company is not migrating their existing CRM application to AWS, rewriting any code, or lifting-and-shifting servers. They are replacing the software entirely with a SaaS equivalent and abandoning the old license. This is the textbook definition of Repurchase.

A is wrong because Replatform (also called "lift-tinker-and-shift") involves migrating an existing application to the cloud with minor optimizations — for example, moving a self-managed MySQL database to Amazon RDS without changing the application code. Replatform retains the same application but changes the underlying platform. Here the company is abandoning the application entirely.
B is wrong because Refactor (also called Re-architect) involves significantly redesigning an application to be cloud-native, often adopting microservices, serverless, or containerized architectures to improve scalability or performance. This requires substantial development effort — the opposite of the zero re-engineering described in the scenario.
D is wrong because Rehost (lift-and-shift) means migrating an application to run on AWS infrastructure — typically moving on-premises servers to EC2 — with no changes to the application itself. The company is not moving their existing CRM application to AWS; they are replacing it entirely with a SaaS product.
Q10
A company wants to deploy a web application globally so that users in Asia, Europe, and North America all experience low latency. They also want the application to remain available even if an entire AWS Region becomes unavailable. Which combination of AWS features enables this architecture?
A Deploy the application in multiple Availability Zones within a single AWS Region and use an Application Load Balancer
B Deploy the application in multiple AWS Regions and use Amazon Route 53 with latency-based or failover routing policies
C Deploy the application in one AWS Region and enable Amazon CloudFront to cache all content globally
D Deploy the application in multiple AWS Regions and connect them using VPC Peering
Correct Answer
Deploy the application in multiple AWS Regions and use Amazon Route 53 with latency-based or failover routing policies
Explanation
Achieving both global low latency and regional fault tolerance requires deploying the application stack in multiple geographically separate AWS Regions — for example, us-east-1, eu-west-1, and ap-southeast-1. Amazon Route 53 then handles intelligent DNS routing: latency-based routing directs each user to the Region that provides the lowest network latency, while health check-based failover routing automatically redirects traffic away from any Region that becomes unavailable. This architecture addresses both requirements simultaneously and is the standard multi-Region active-active or active-passive pattern on AWS.

A is wrong because multiple Availability Zones within a single Region provide high availability against AZ-level failures (data center failures), but all AZs in a Region share the same geographic area. If the entire Region becomes unavailable, all AZs fail together. This does not meet the requirement for global low latency across three continents or resilience to full Regional failure.
C is wrong because CloudFront effectively reduces latency for static and cacheable content by serving it from edge locations close to users. However, CloudFront is a CDN — it caches and serves content, but dynamic application logic and database writes still flow to the origin Region. If that single origin Region fails, the application becomes unavailable regardless of CloudFront. This does not provide regional fault tolerance.
D is wrong because VPC Peering connects VPCs privately for network communication between resources. Deploying in multiple Regions with VPC Peering does not automatically route user traffic to the nearest or healthiest Region — you still need Route 53 or another DNS routing mechanism to direct users. VPC Peering alone solves internal network connectivity, not user-facing traffic distribution or failover.
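The combined behavior of latency-based routing and health-check failover can be sketched as a simple selection rule: each user goes to the lowest-latency Region that is currently healthy. The latency figures and Region set below are made-up numbers for illustration:

```python
# Toy simulation of Route 53 latency-based routing with health checks:
# each user is routed to the lowest-latency HEALTHY Region, and a
# failed health check automatically shifts traffic elsewhere.
# Latency values are illustrative, not measurements.

def route(user_latencies_ms: dict[str, float],
          healthy_regions: set[str]) -> str:
    """Pick the lowest-latency healthy Region for one user."""
    candidates = {r: ms for r, ms in user_latencies_ms.items()
                  if r in healthy_regions}
    if not candidates:
        raise RuntimeError("no healthy Region available")
    return min(candidates, key=candidates.get)

tokyo_user = {"us-east-1": 160.0, "eu-west-1": 230.0,
              "ap-southeast-1": 70.0}

# Normal operation: the nearest Region wins.
print(route(tokyo_user, {"us-east-1", "eu-west-1", "ap-southeast-1"}))

# ap-southeast-1 fails its health check: traffic fails over to the
# next-lowest-latency healthy Region.
print(route(tokyo_user, {"us-east-1", "eu-west-1"}))
```

This is why the multi-Region deployment and the DNS routing policy are needed together: the Regions provide the capacity, and Route 53 provides the per-user selection and automatic failover.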
Q11
A company's AWS bill shows unexpected charges for data transfer. After investigation, the team identifies three data flows: (1) EC2 instances transferring data to S3 within the same AWS Region, (2) EC2 instances in us-east-1 sending data to EC2 instances in eu-west-1, and (3) EC2 instances transferring data out to end users on the public internet. Which of these data flows incur charges?
A All three data flows incur charges
B Only data flow 3 incurs charges
C Data flows 2 and 3 incur charges; data flow 1 is free
D Data flows 1 and 3 incur charges; data flow 2 is free within the AWS network
Correct Answer
Data flows 2 and 3 incur charges; data flow 1 is free
Explanation
AWS data transfer pricing follows consistent rules that are important to understand:
Data flow 1 — EC2 to S3 within the same Region: Data transfer between EC2 and S3 within the same AWS Region is free. AWS does not charge for this traffic pattern, which is why co-locating compute and storage in the same Region is a cost best practice.
Data flow 2 — EC2 in us-east-1 to EC2 in eu-west-1: Data transfer between AWS Regions (inter-Region transfer) is charged in both directions. Even though both endpoints are AWS services, crossing a Regional boundary incurs data transfer fees based on the source Region's pricing.
Data flow 3 — EC2 to public internet: Data transfer out from AWS to the public internet is one of the most significant sources of AWS data transfer charges. AWS charges for outbound data transfer based on volume, with the first 100 GB per month free and tiered pricing beyond that.

A is wrong because data flow 1 (EC2 to S3 within the same Region) is free. Knowing this exemption is a key CLF-C02 exam concept around AWS pricing.
B is wrong because inter-Region data transfer (flow 2) is not free — it is explicitly charged. Only intra-Region transfers between certain services (like EC2 to S3 in the same Region) are exempt.
D is wrong because inter-Region transfer (flow 2) is not free simply because it stays on the AWS network. AWS charges for Regional boundary crossings regardless of whether both endpoints are AWS services.
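A back-of-envelope estimator makes the billing asymmetry concrete. The per-GB rates below are illustrative placeholders, not real AWS prices; the point is which flows are billed at all, and that internet egress includes a 100 GB monthly free allowance as described above:

```python
# Back-of-envelope estimator for the three data flows in the question.
# Rates are hypothetical placeholders, not real AWS prices.

RATES_PER_GB = {
    "ec2_to_s3_same_region": 0.00,  # free within one Region
    "inter_region": 0.02,           # hypothetical inter-Region rate
}

def transfer_cost(flow: str, gigabytes: float) -> float:
    return RATES_PER_GB[flow] * gigabytes

def internet_egress_cost(gigabytes: float, rate: float = 0.09) -> float:
    """First 100 GB each month is free; the rest is billed per GB."""
    return max(0.0, gigabytes - 100.0) * rate

print(f"Flow 1 (EC2->S3, same Region), 1000 GB: "
      f"${transfer_cost('ec2_to_s3_same_region', 1000):.2f}")
print(f"Flow 2 (us-east-1 -> eu-west-1), 1000 GB: "
      f"${transfer_cost('inter_region', 1000):.2f}")
print(f"Flow 3 (EC2 -> internet), 1100 GB: "
      f"${internet_egress_cost(1100):.2f}")
```

With any realistic rates, flow 1 stays at zero while flows 2 and 3 grow linearly with volume, which is exactly the answer pattern the question tests.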
Q12
A company runs a mission-critical order processing system that requires a fully managed relational database with automatic failover to a standby replica within 60 to 120 seconds if the primary instance fails, without any manual intervention. The database must also support automated backups and point-in-time recovery. Which AWS database configuration meets these requirements?
A Amazon RDS with a read replica in the same Availability Zone
B Amazon RDS with Multi-AZ deployment enabled
C Amazon DynamoDB with global tables enabled
D Amazon RDS with automated backups enabled and a manual snapshot schedule
Correct Answer
Amazon RDS with Multi-AZ deployment enabled
Explanation
Amazon RDS Multi-AZ deployment maintains a synchronous standby replica of the primary database instance in a different Availability Zone. AWS automatically monitors the primary instance's health and initiates failover to the standby — promoting it to primary — within 60 to 120 seconds if the primary fails, becomes unreachable, or experiences an AZ outage. This failover is automatic with no manual intervention required. DNS is automatically updated to point to the new primary. Multi-AZ also supports automated backups and point-in-time recovery natively.

A is wrong for two reasons. Read replicas are designed to offload read traffic and scale read performance, not to provide high availability. More critically, a read replica in the same AZ does not protect against an AZ failure, since both instances would be affected simultaneously. Promoting a read replica to primary also requires manual action, not automatic failover.
C is wrong because DynamoDB is a NoSQL key-value and document database, not a relational database. The scenario specifically requires a relational database. DynamoDB Global Tables provide multi-Region active-active replication for global applications, but this does not address the need for a relational database with SQL capabilities, joins, or transactional integrity in the traditional RDBMS sense.
D is wrong because automated backups and manual snapshots enable data recovery after a failure, but they do not prevent downtime. Restoring from a backup or snapshot takes significant time — often 15 minutes to hours depending on database size — far exceeding the 60 to 120 second automatic failover requirement. This is a recovery mechanism, not a high availability mechanism.
Q13
A company is building a production system on AWS that processes financial transactions. When a critical AWS service issue occurs, they need access to a dedicated Technical Account Manager (TAM), response times under 15 minutes for business-critical system failures, and proactive guidance on architecture and operational best practices. Which AWS Support plan meets all of these requirements?
A AWS Developer Support
B AWS Business Support
C AWS Enterprise On-Ramp Support
D AWS Enterprise Support
Correct Answer
AWS Enterprise Support
Explanation
AWS Enterprise Support is the only plan that includes all three requirements simultaneously. It provides a designated Technical Account Manager (TAM) — a single named AWS expert who develops deep familiarity with the customer's environment and provides proactive guidance. It offers a 15-minute response time SLA for business-critical system down cases (Severity 1). It also includes proactive services such as Well-Architected Reviews, Infrastructure Event Management, and operational reviews.

A is wrong because AWS Developer Support is designed for development and testing environments. It provides business-hours email access to Cloud Support Associates (not Cloud Support Engineers), includes no TAM, and its response targets are 24 business hours for general guidance and 12 business hours for impaired systems, far slower than 15 minutes.
B is wrong because AWS Business Support provides 24/7 phone, chat, and email access to Cloud Support Engineers, and offers a 1-hour response SLA for production system down cases. While significantly better than Developer Support, it does not include a designated TAM, does not achieve the 15-minute response SLA, and does not include the proactive architectural guidance services of Enterprise Support.
C is wrong because AWS Enterprise On-Ramp Support provides a pool of Technical Account Managers (shared, not dedicated) and a 30-minute response SLA for critical failures — not 15 minutes. It is positioned between Business and full Enterprise Support, and the TAM access is not a dedicated individual with deep knowledge of the customer's specific environment.
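The plan comparison reduces to checking three boolean-ish requirements against each tier. The sketch below encodes the feature flags exactly as the explanation above describes them (it is a study aid, not an authoritative feature matrix; consult the AWS Support plan pages for current details):

```python
# Quick lookup showing why only Enterprise Support satisfies all three
# requirements at once. Flags mirror the explanation above; verify
# against the official AWS Support plan comparison before relying on it.

PLANS = {
    "Developer":          {"dedicated_tam": False, "crit_response_min": None, "proactive_guidance": False},
    "Business":           {"dedicated_tam": False, "crit_response_min": 60,   "proactive_guidance": False},
    "Enterprise On-Ramp": {"dedicated_tam": False, "crit_response_min": 30,   "proactive_guidance": True},
    "Enterprise":         {"dedicated_tam": True,  "crit_response_min": 15,   "proactive_guidance": True},
}

def meets_requirements(plan: dict) -> bool:
    """Dedicated TAM, <=15-minute critical response, proactive guidance."""
    return (plan["dedicated_tam"]
            and plan["crit_response_min"] is not None
            and plan["crit_response_min"] <= 15
            and plan["proactive_guidance"])

qualifying = [name for name, p in PLANS.items() if meets_requirements(p)]
print(qualifying)  # only Enterprise passes all three checks
```

Note that Enterprise On-Ramp fails on two counts at once: the TAM pool is shared rather than dedicated, and 30 minutes misses the 15-minute requirement.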
Q14
A company wants to allow employees to access AWS services using their existing corporate Microsoft Active Directory credentials, without creating separate IAM users for each employee in AWS. The solution must support Single Sign-On so employees log in once with their corporate credentials and gain access to their permitted AWS accounts and services. Which AWS capability enables this?
A AWS IAM with an identity-based policy that maps Active Directory groups to IAM roles
B AWS IAM Identity Center (formerly AWS SSO) integrated with Microsoft Active Directory
C Amazon Cognito with a corporate identity provider configured as a SAML federation source
D Creating IAM users for each employee and enabling IAM credential synchronization with Active Directory
Correct Answer
AWS IAM Identity Center (formerly AWS SSO) integrated with Microsoft Active Directory
Explanation
AWS IAM Identity Center (formerly AWS Single Sign-On) is specifically designed for this use case. It integrates natively with Microsoft Active Directory (either AWS Managed Microsoft AD or on-premises AD via AD Connector) and supports SAML 2.0 federation. Employees log in once through the IAM Identity Center portal using their existing corporate Active Directory credentials, and IAM Identity Center grants them access to their assigned AWS accounts and permission sets without needing separate IAM users. It manages the entire SSO lifecycle centrally, supports multi-account access, and eliminates credential duplication.

A is wrong because standard IAM identity-based policies cannot natively authenticate against Active Directory. IAM does not have built-in AD integration — you cannot simply map AD groups to IAM roles without a federation mechanism. IAM roles can be assumed via SAML federation, but that requires implementing the federation infrastructure separately, not just attaching policies.
C is wrong because Amazon Cognito is designed for customer-facing application identity — managing user pools for web and mobile app end users. While Cognito supports SAML federation with external identity providers, it is architected for application user authentication (B2C scenarios), not for granting employees access to AWS accounts and services (workforce identity). IAM Identity Center is the correct service for workforce SSO.
D is wrong because creating IAM users for each employee is exactly what the company wants to avoid — it creates administrative overhead, separate credential management, and does not constitute Single Sign-On. IAM does not have a native credential synchronization feature with Active Directory, and this approach would require employees to manage separate AWS passwords.
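For context, once IAM Identity Center is connected to Active Directory, an employee's day-to-day AWS CLI experience looks roughly like the sketch below. This is a hedged configuration example, not part of the question: the profile name is a placeholder, and the setup wizard prompts for your organization's actual SSO start URL, account, and permission set.

```shell
# One-time setup: register the CLI (v2) against the IAM Identity Center portal.
# The wizard prompts for the SSO start URL, region, AWS account, and permission set.
aws configure sso --profile dev-readonly

# Daily use: a single browser sign-in with corporate AD credentials...
aws sso login --profile dev-readonly

# ...after which calls use short-lived credentials issued by IAM Identity Center.
# No IAM user, no long-term access keys, no separate AWS password.
aws s3 ls --profile dev-readonly
```

Note that the employee never receives IAM user credentials; IAM Identity Center brokers temporary credentials for the assigned permission set behind the scenes.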
Q15
A company deploys web applications on AWS and wants to protect them from common web exploits such as SQL injection, cross-site scripting (XSS), and HTTP floods that could affect application availability. The security team wants a managed solution that requires minimal rule-writing and automatically updates protections as new threats emerge.
Which AWS service BEST meets this requirement?
A AWS Shield Standard
B Amazon Inspector
C AWS WAF with AWS Managed Rules
D Amazon GuardDuty
Correct Answer
AWS WAF with AWS Managed Rules
Explanation
AWS WAF (Web Application Firewall) is specifically designed to protect web applications from common exploits like SQL injection and XSS by inspecting HTTP/HTTPS requests at the application layer (Layer 7). AWS Managed Rules are pre-built, continuously updated rule groups maintained by AWS and AWS Marketplace sellers that protect against known threats without requiring the customer to write or maintain individual rules. This satisfies both the protection requirement and the minimal rule-writing requirement.

A is wrong because AWS Shield Standard provides automatic protection against network and transport layer DDoS attacks (Layer 3 and 4) and is automatically enabled for all AWS customers at no extra cost. It does not inspect application-layer traffic or protect against SQL injection or XSS — those are Layer 7 threats outside Shield Standard's scope.
B is wrong because Amazon Inspector is a vulnerability assessment service that scans EC2 instances and container images for known software vulnerabilities and unintended network exposure. It analyzes your infrastructure configuration, not incoming web traffic. It cannot block SQL injection or XSS attacks in real time.
D is wrong because Amazon GuardDuty is a threat detection service that analyzes CloudTrail logs, VPC Flow Logs, and DNS logs to identify malicious behavior and anomalies. It generates findings after suspicious activity is detected but cannot intercept or block web requests in real time.
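To illustrate how little rule-writing managed rules require, the sketch below creates a WAFv2 web ACL that attaches the AWS-managed common rule set and adds a rate-based rule for HTTP floods. This is a hedged configuration example: the ACL name, metric names, and the 2000-request limit are illustrative choices, not values from the question.

```shell
# Create a web ACL that allows traffic by default, attaches the AWS-managed
# common rule set (XSS and other common web exploits; a separate
# AWSManagedRulesSQLiRuleSet group can be added the same way), and blocks any
# single IP exceeding 2000 requests in a 5-minute window.
aws wafv2 create-web-acl \
  --name demo-web-acl \
  --scope REGIONAL \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=demoWebAcl \
  --rules '[
    {
      "Name": "aws-common-rules",
      "Priority": 0,
      "Statement": {
        "ManagedRuleGroupStatement": {
          "VendorName": "AWS",
          "Name": "AWSManagedRulesCommonRuleSet"
        }
      },
      "OverrideAction": { "None": {} },
      "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "awsCommonRules"
      }
    },
    {
      "Name": "rate-limit-per-ip",
      "Priority": 1,
      "Statement": {
        "RateBasedStatement": { "Limit": 2000, "AggregateKeyType": "IP" }
      },
      "Action": { "Block": {} },
      "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "rateLimit"
      }
    }
  ]'
```

AWS maintains and updates the managed rule group's contents, which is what satisfies the "automatically updates protections as new threats emerge" requirement in the question.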

Want More Practice?

These are just the free questions. Unlock the full AWS Certified Cloud Practitioner exam library with hundreds of additional questions, timed practice mode, and progress tracking.

← Back to AWS Certified Cloud Practitioner Exams