Free Practice Questions • AWS Certified CloudOps Engineer – Associate • 60 Questions with Answers
FREE QUESTIONS
AWS Certified CloudOps Engineer – Associate Practice Questions
60 free questions with correct answers and detailed explanations.
60 Free Questions
2 Free Exams
100% With Explanations
SOA-C03 Practice Set-01
30 questions
Q1
A company runs a microservices application on Amazon EKS. The operations team needs to collect and correlate logs from all pods across multiple namespaces into CloudWatch Logs. Which approach is MOST efficient?
A
Deploy Fluent Bit as a DaemonSet on the EKS cluster configured to forward pod logs to CloudWatch Logs with Kubernetes metadata
B
Configure each application to write logs to a shared Amazon EFS volume and create a Lambda function to ingest logs into CloudWatch
C
Enable EKS control plane logging which automatically captures all pod-level application logs
D
Install the CloudWatch agent on each EKS worker node using SSH and configure it to tail /var/log/containers
Correct Answer
Deploy Fluent Bit as a DaemonSet on the EKS cluster configured to forward pod logs to CloudWatch Logs with Kubernetes metadata
Explanation
Fluent Bit, deployed as a DaemonSet, runs a log-router pod on every node in the EKS cluster and collects container logs from all pods across all namespaces. It forwards the logs to CloudWatch Logs enriched with Kubernetes metadata such as namespace, pod, and container name, enabling filtering and correlation. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs-FluentBit.html
Q2
An operations engineer creates an Amazon CloudWatch Logs metric filter to count ERROR occurrences in application logs. The custom metric shows zero values even though the log group contains ERROR entries. What are TWO possible causes? (Choose TWO.)
A
The metric filter was created after the ERROR log entries were ingested, and metric filters do not apply retroactively to existing data
B
The filter pattern is case-sensitive and does not match the actual case of error messages in the logs
C
The CloudWatch agent has stopped sending logs to the log group due to network connectivity issues
D
The metric filter namespace conflicts with an existing AWS namespace causing data to be dropped
E
The CloudWatch Logs retention period has expired and the log entries have been deleted
Correct Answers
The metric filter was created after the ERROR log entries were ingested, and metric filters do not apply retroactively to existing data
The filter pattern is case-sensitive and does not match the actual case of error messages in the logs
Explanation
CloudWatch Logs metric filters only match log events that arrive after the filter is created — they do not evaluate historical data. Also, the filter pattern syntax is case-sensitive, so if logs contain 'Error' but the filter matches 'ERROR', no match occurs. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html
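The case-sensitivity pitfall can be shown with a toy matcher. This is a simplified stand-in for CloudWatch's pattern engine, not its real implementation; the log lines are illustrative:

```python
# Illustrative only: a simplified, case-sensitive term match that mimics how a
# CloudWatch Logs metric filter pattern like "ERROR" behaves. This is NOT the
# real CloudWatch matching engine.
def matches_filter_pattern(pattern_term: str, log_event: str) -> bool:
    # CloudWatch term patterns match whole terms case-sensitively;
    # splitting on whitespace approximates that here.
    return pattern_term in log_event.split()

events = [
    "2024-05-01 ERROR payment service timed out",
    "2024-05-01 Error payment service timed out",   # different case: no match
    "2024-05-01 INFO request completed",
]

matched = [e for e in events if matches_filter_pattern("ERROR", e)]
```

Only the first event matches; the `Error` variant is silently ignored, which is exactly what produces a zero-valued custom metric.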
Q3
A production application uses Amazon Managed Service for Prometheus (AMP) to store metrics from EC2-based microservices. The team needs to visualize these metrics in dashboards with alerting capabilities. Which service provides native integration with AMP for visualization?
A
Amazon Managed Grafana configured with AMP as a data source
B
Amazon CloudWatch dashboards with AMP metrics imported via CloudWatch metric streams
C
Amazon QuickSight with a custom Prometheus connector
D
AWS X-Ray service map with Prometheus metric annotations
Correct Answer
Amazon Managed Grafana configured with AMP as a data source
Explanation
Amazon Managed Grafana integrates natively with Amazon Managed Service for Prometheus as a data source. It provides pre-built dashboards, alerting, and visualization capabilities specifically designed to work with Prometheus metrics. Learn more: https://docs.aws.amazon.com/grafana/latest/userguide/prometheus-data-source.html
Q4
An operations engineer needs to troubleshoot why an AWS Lambda function is timing out sporadically. The function connects to an Amazon RDS database. CloudWatch Logs show the function starts but hangs at the database connection step. Which TWO actions should the engineer take to diagnose and resolve this? (Choose TWO.)
A
Enable AWS X-Ray active tracing on the Lambda function to identify the exact step where latency occurs
B
Configure Amazon RDS Proxy for the database to manage connection pooling and reduce connection establishment time
C
Increase the Lambda function memory size to 3008 MB as this also increases CPU allocation
D
Move the Lambda function outside the VPC to reduce cold start time and improve connectivity
E
Enable Enhanced Monitoring on the RDS instance to check if database CPU is at 100%
Correct Answers
Enable AWS X-Ray active tracing on the Lambda function to identify the exact step where latency occurs
Configure Amazon RDS Proxy for the database to manage connection pooling and reduce connection establishment time
Explanation
Lambda functions in a VPC need proper networking to reach RDS. Connection timeouts often occur due to security group misconfiguration or connection pool exhaustion. Enabling RDS Proxy provides connection pooling and reduces connection overhead. Using X-Ray tracing helps identify where the latency occurs. Learn more: https://docs.aws.amazon.com/lambda/latest/dg/configuration-database.html
Q5
A company's CloudWatch billing alarm has not triggered even though the actual bill has exceeded the threshold. What is the MOST likely reason?
A
The billing alarm was created in a Region other than us-east-1, where billing metrics are exclusively published
B
Billing alarms only evaluate at the end of each billing cycle and not in real-time
C
The IAM user who created the alarm does not have permissions to view CloudWatch billing metrics
D
CloudWatch billing alarms are deprecated and replaced by AWS Budgets alerts
Correct Answer
The billing alarm was created in a Region other than us-east-1, where billing metrics are exclusively published
Explanation
CloudWatch billing metrics are published only in the us-east-1 Region, so a billing alarm created in any other Region has no data to evaluate and will never trigger. Note also that the account must first enable "Receive Billing Alerts" in the Billing and Cost Management console before the EstimatedCharges metric is published at all. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html
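A working billing alarm might be defined with PutMetricAlarm parameters like the following sketch. The alarm name and threshold are example values, and the CloudWatch client (or the CLI's `--region` flag) must target us-east-1:

```python
# Sketch of CloudWatch PutMetricAlarm parameters for a billing alarm.
# Billing metrics exist only in us-east-1, so the API call must be made
# against that Region; name and threshold below are illustrative.
billing_alarm = {
    "AlarmName": "monthly-estimated-charges",        # example name
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,             # billing metrics update roughly every 6 hours
    "EvaluationPeriods": 1,
    "Threshold": 500.0,          # example threshold in USD
    "ComparisonOperator": "GreaterThanThreshold",
}
```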
Q6
An operations engineer must create a centralized logging solution that collects CloudTrail logs from all accounts in an AWS Organization. The logs must be queryable using SQL for incident investigation. Which architecture meets these requirements?
A
Create an organization trail that delivers logs to a centralized S3 bucket and use Amazon Athena to query the logs with SQL
B
Enable CloudTrail in each account individually and replicate logs to a central DynamoDB table for queries
C
Configure CloudTrail to send logs to CloudWatch Logs in each account and use CloudWatch Logs Insights for SQL queries across accounts
D
Use AWS Config aggregator to collect CloudTrail findings from all accounts and query them through the Config API
Correct Answer
Create an organization trail that delivers logs to a centralized S3 bucket and use Amazon Athena to query the logs with SQL
Explanation
An organization trail delivers CloudTrail logs from all member accounts to a central S3 bucket. Amazon Athena can query the logs directly from S3 using SQL without needing to load data into a database. This is the most cost-effective queryable logging solution. Learn more: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
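Once a table has been created in Athena from the CloudTrail record schema, investigations become plain SQL. The query below (embedded as a Python string) is a sketch; the table name `cloudtrail_logs`, the event name, and the time window are all illustrative:

```python
# Sample Athena query over CloudTrail logs, embedded as a Python string.
# Assumes a table named cloudtrail_logs was already created from the
# CloudTrail record schema; table name and filters are illustrative.
INCIDENT_QUERY = """
SELECT eventtime, eventsource, eventname, useridentity.arn AS principal
FROM cloudtrail_logs
WHERE eventname = 'DeleteBucket'
  AND eventtime > '2024-01-01T00:00:00Z'
ORDER BY eventtime DESC
LIMIT 100;
"""
```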
Q7
An EC2 instance running a Java application experiences intermittent performance degradation. The CloudWatch CPUUtilization metric remains below 30%, but the application response time increases significantly. Which CloudWatch feature should the engineer use to investigate OS-level metrics that may reveal the bottleneck?
A
Configure the CloudWatch agent to collect detailed OS-level metrics including memory, disk I/O, and swap usage
B
Enable EC2 detailed monitoring to increase the metric reporting frequency from 5 minutes to 1 minute
C
Use AWS Compute Optimizer to analyze the instance performance and recommend right-sizing
D
Enable Enhanced Monitoring for the instance to access process-level CPU breakdown data
Correct Answer
Configure the CloudWatch agent to collect detailed OS-level metrics including memory, disk I/O, and swap usage
Explanation
The CloudWatch agent collects OS-level metrics including memory usage, disk I/O, swap usage, and network statistics that are not available through default EC2 metrics. These can reveal bottlenecks like memory pressure or disk I/O contention that would not be visible in CPU metrics alone. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html
Q8
An EC2 instance running a critical application has failed its system status check. The operations engineer needs the instance to recover automatically on different underlying hardware while retaining its instance ID, private IP, and EBS volumes. Which solution achieves this?
A
Create a CloudWatch alarm on the StatusCheckFailed_System metric with an EC2 recover action
B
Configure Auto Scaling with a minimum capacity of 1 to replace the failed instance automatically
C
Create an EventBridge rule that triggers a Lambda function to stop and start the instance on different hardware
D
Enable EC2 Auto Recovery in the instance settings to automatically reboot the instance
Correct Answer
Create a CloudWatch alarm on the StatusCheckFailed_System metric with an EC2 recover action
Explanation
A CloudWatch alarm on the StatusCheckFailed_System metric with an EC2 recover action automatically moves the instance to new underlying hardware while retaining the same instance ID, private IP address, Elastic IP, instance metadata, and EBS volume attachments. Learn more: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
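A recover alarm might look like the following PutMetricAlarm sketch. The alarm name, instance ID, and Region in the action ARN are placeholders:

```python
# Sketch of a CloudWatch alarm that triggers the EC2 recover action when the
# system status check fails; instance ID and Region are placeholders.
recover_alarm = {
    "AlarmName": "ec2-system-check-recover",     # example name
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,                      # two consecutive failed minutes
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # Recover actions use the automate service namespace in the ARN.
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],
}
```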
Q9
An operations engineer needs to implement a backup strategy for a DynamoDB table that contains 500 GB of data and receives 10000 write requests per second. The backup must complete without impacting production traffic. Which approach is correct?
A
Use DynamoDB on-demand backup or AWS Backup scheduled backups, which create full backups without impacting table throughput
B
Enable DynamoDB Streams and use a Lambda function to write each item to an S3 backup bucket
C
Use the DynamoDB Export to S3 feature to export the table data to S3 in Apache Parquet format
D
Reduce write traffic to the table during off-peak hours before initiating the scan-based backup
Correct Answer
Use DynamoDB on-demand backup or AWS Backup scheduled backups, which create full backups without impacting table throughput
Explanation
DynamoDB on-demand backups create full backups without consuming read or write capacity and without impacting table performance. The backups are completed in seconds regardless of table size. AWS Backup integration also supports scheduled DynamoDB backups. Learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html
Q10
A company wants to ensure that its Route 53 failover routing policy triggers failover only when the primary endpoint is truly down, not due to transient issues. The current health check occasionally causes false failovers. How should the engineer adjust the configuration?
A
Increase the health check failure threshold to require more consecutive failures before marking the endpoint unhealthy
B
Decrease the health check request interval to 10 seconds for more frequent monitoring
C
Change the health check type from HTTP to TCP to reduce timeout-related false positives
D
Add multiple health check regions to ensure geographic diversity in health check evaluations
Correct Answer
Increase the health check failure threshold to require more consecutive failures before marking the endpoint unhealthy
Explanation
Increasing the Route 53 health check failure threshold means more consecutive health checks must fail before the endpoint is considered unhealthy, which reduces false failovers caused by transient network issues. Adjusting the request interval (30 seconds standard or 10 seconds fast) can further balance detection speed against sensitivity. Learn more: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-creating-values.html
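The change amounts to a single UpdateHealthCheck call; a sketch of its request body follows, with a placeholder health check ID and an example threshold:

```python
# Sketch of a Route 53 UpdateHealthCheck request raising the failure
# threshold; the health check ID is a placeholder.
update_health_check = {
    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
    "FailureThreshold": 5,   # require 5 consecutive failures (default is 3)
}
```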
Q11
An operations engineer is writing a CloudFormation template that needs to create different resources based on whether the deployment target is production or staging. The environment type is passed as a parameter. Which CloudFormation feature should the engineer use?
A
Use CloudFormation Conditions with the Fn::Equals function to evaluate the environment parameter and conditionally create resources
B
Create separate CloudFormation templates for production and staging environments
C
Use CloudFormation Mappings to map environment names to resource configurations
D
Use nested stacks with conditional parameter passing based on the environment
Correct Answer
Use CloudFormation Conditions with the Fn::Equals function to evaluate the environment parameter and conditionally create resources
Explanation
CloudFormation Conditions allow you to define conditions based on parameter values. Resources can be conditionally created using the Condition property, and intrinsic functions like Fn::If can set property values based on conditions. Learn more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
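A minimal template showing the pattern, expressed here as a Python dict (the resource names and the S3 bucket usage are illustrative):

```python
# Minimal CloudFormation template (as a Python dict) showing Conditions with
# Fn::Equals, a conditionally created resource, and Fn::If for properties.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "EnvType": {"Type": "String", "AllowedValues": ["production", "staging"]}
    },
    "Conditions": {
        "IsProduction": {"Fn::Equals": [{"Ref": "EnvType"}, "production"]}
    },
    "Resources": {
        # Created only when IsProduction evaluates to true.
        "ProdOnlyBucket": {
            "Type": "AWS::S3::Bucket",
            "Condition": "IsProduction",
        },
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                # Fn::If selects a property value based on the condition.
                "VersioningConfiguration": {
                    "Fn::If": [
                        "IsProduction",
                        {"Status": "Enabled"},
                        {"Status": "Suspended"},
                    ]
                }
            },
        },
    },
}
```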
Q12
An operations engineer needs to ensure that a CloudFormation stack cannot be accidentally deleted by any IAM user, including administrators. Which CloudFormation feature provides this protection?
A
Enable termination protection on the CloudFormation stack
B
Create a stack policy that denies Delete actions on all resources
C
Attach an IAM policy to all users denying cloudformation:DeleteStack for the specific stack
D
Set DeletionPolicy to Retain on all resources in the template
Correct Answer
Enable termination protection on the CloudFormation stack
Explanation
CloudFormation termination protection prevents a stack from being accidentally deleted. When enabled, any attempt to delete the stack will fail until termination protection is explicitly disabled. This acts as a safeguard against accidental deletions. Learn more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html
Q13
A company uses AWS CDK (TypeScript) to manage infrastructure. The operations engineer needs to ensure that synthesized CloudFormation templates comply with organization policies before deployment. Which approach integrates policy validation into the CDK workflow?
A
Implement CDK Aspects that validate resource configurations against organizational policies during synthesis and throw errors for violations
B
Run cfn-lint on the synthesized template in a pre-deployment CI/CD pipeline step
C
Use CloudFormation Guard rules to validate templates after cdk synth
D
Create an AWS Config conformance pack that evaluates deployed resources for compliance
Correct Answer
Implement CDK Aspects that validate resource configurations against organizational policies during synthesis and throw errors for violations
Explanation
CDK Aspects allow you to apply visitor patterns across the construct tree to enforce rules and policies. You can create custom aspects that validate resource configurations during synthesis and fail the build if policies are violated. Learn more: https://docs.aws.amazon.com/cdk/v2/guide/aspects.html
Q14
An operations engineer must create a custom container image that includes a monitoring agent and deploy it to Amazon ECR. The image must be scanned for vulnerabilities before it can be used in production ECS tasks. Which configuration automates vulnerability scanning?
A
Enable enhanced scanning in Amazon ECR which uses Amazon Inspector to automatically scan images on push and continuously for new vulnerabilities
B
Create a CodePipeline stage that runs a third-party vulnerability scanner on the ECR image before deployment
C
Configure an S3 event trigger that invokes a Lambda function to scan container images when they are pushed to ECR
D
Enable AWS Config with the ecr-image-scanning-configured rule to ensure all images are scanned
Correct Answer
Enable enhanced scanning in Amazon ECR which uses Amazon Inspector to automatically scan images on push and continuously for new vulnerabilities
Explanation
Amazon ECR supports both basic scanning and enhanced scanning powered by Amazon Inspector. Enhanced scanning automatically scans images on push and continuously rescans them as new CVEs are disclosed, covering both operating system packages and programming-language dependencies. Learn more: https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html
Q15
An operations engineer deployed a CloudFormation stack that creates an Auto Scaling group with a launch template. After updating the launch template to a new version with an updated AMI, the engineer notices that existing instances continue to use the old AMI. How should the engineer force existing instances to use the updated AMI?
A
Start an instance refresh on the Auto Scaling group to perform a rolling replacement of instances using the new launch template version
B
Terminate all existing instances manually and let Auto Scaling launch new ones with the updated template
C
Update the CloudFormation stack with an UpdatePolicy specifying AutoScalingReplacingUpdate
D
Modify the Auto Scaling group desired capacity to 0 then back to the original value
Correct Answer
Start an instance refresh on the Auto Scaling group to perform a rolling replacement of instances using the new launch template version
Explanation
Updating the launch template version does not automatically replace existing instances. An instance refresh in the Auto Scaling group performs a rolling replacement of instances using the updated launch template. You can configure the minimum healthy percentage during the refresh. Learn more: https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html
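A StartInstanceRefresh request might look like the sketch below; the Auto Scaling group name and preference values are examples:

```python
# Sketch of a StartInstanceRefresh request; ASG name and preference values
# are illustrative.
start_instance_refresh = {
    "AutoScalingGroupName": "web-asg",
    "Preferences": {
        "MinHealthyPercentage": 90,   # keep at least 90% of capacity in service
        "InstanceWarmup": 120,        # seconds to wait before replacing the next batch
    },
}
```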
Q16
A company uses Git for version control and wants to implement GitOps-style infrastructure deployment where merging a pull request to the main branch automatically triggers a CloudFormation deployment. Which AWS service integration achieves this with native AWS services?
A
Configure AWS CodePipeline with a Git repository source stage and a CloudFormation deploy stage that triggers on main branch merges
B
Create an EventBridge rule that monitors Git push events and triggers a Lambda function to deploy CloudFormation stacks
C
Configure Git webhooks to directly invoke the CloudFormation CreateStack API on each commit
D
Use AWS Config rules to detect infrastructure drift and automatically apply CloudFormation updates
Correct Answer
Configure AWS CodePipeline with a Git repository source stage and a CloudFormation deploy stage that triggers on main branch merges
Explanation
AWS CodePipeline can be configured with a source stage connected to a Git repository (CodeCommit, GitHub, etc.) that triggers on main branch changes. The deploy stage can use CloudFormation as the deployment provider, creating a fully automated GitOps pipeline. Learn more: https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-CloudFormation.html
Q17
An operations engineer needs to execute a shell script on 500 EC2 instances to update a configuration file. The script must run within a 2-hour maintenance window and report success/failure for each instance. Which approach is MOST efficient?
A
Use Systems Manager Run Command with the AWS-RunShellScript document targeted to the instances within a Systems Manager maintenance window
B
Create a Lambda function that SSHs into each instance and executes the script sequentially
C
Use AWS CodeDeploy with an in-place deployment to distribute and execute the script
D
Upload the script to S3 and configure EC2 user data to download and run it on the next reboot
Correct Answer
Use Systems Manager Run Command with the AWS-RunShellScript document targeted to the instances within a Systems Manager maintenance window
Explanation
Systems Manager Run Command allows you to execute commands across managed instances at scale. Combined with maintenance windows, commands can be scheduled during specific periods. Run Command provides detailed output showing success or failure per instance. Learn more: https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
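A SendCommand request for this scenario might look like the sketch below; the tag filter, script, and rollout limits are illustrative values:

```python
# Sketch of an SSM SendCommand request targeting instances by tag; the tag,
# shell command, and rate-control values are illustrative.
send_command = {
    "DocumentName": "AWS-RunShellScript",
    "Targets": [{"Key": "tag:Environment", "Values": ["production"]}],
    "Parameters": {
        "commands": ["sed -i 's/old_value/new_value/' /etc/app/app.conf"]
    },
    "MaxConcurrency": "10%",   # update 10% of instances at a time
    "MaxErrors": "5",          # stop the rollout if more than 5 instances fail
    "TimeoutSeconds": 3600,
}
```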
Q18
An operations engineer must configure VPC endpoints so that Lambda functions in private subnets can invoke other Lambda functions and publish to SNS topics without internet access. Which VPC endpoints are required? (Choose TWO.)
A
Create an interface VPC endpoint for Lambda (com.amazonaws.region.lambda) to invoke other Lambda functions
B
Create an interface VPC endpoint for SNS (com.amazonaws.region.sns) to publish messages to SNS topics
C
Create a gateway VPC endpoint for Lambda to invoke other functions without internet
D
Create a gateway VPC endpoint for SNS to publish messages without internet
E
Create an interface VPC endpoint for STS (com.amazonaws.region.sts) for cross-service authentication
Correct Answers
Create an interface VPC endpoint for Lambda (com.amazonaws.region.lambda) to invoke other Lambda functions
Create an interface VPC endpoint for SNS (com.amazonaws.region.sns) to publish messages to SNS topics
Explanation
Lambda functions in VPCs need interface VPC endpoints to access AWS services without internet. An interface endpoint for Lambda (com.amazonaws.region.lambda) enables invoking other functions, and an interface endpoint for SNS (com.amazonaws.region.sns) enables publishing to topics. Learn more: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
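The two CreateVpcEndpoint requests could be sketched as follows; the VPC, subnet, and security group IDs are placeholders, and the Region embedded in the service names is an example:

```python
# Sketch of two CreateVpcEndpoint requests; all resource IDs are placeholders
# and us-east-1 in the service names is an example Region.
lambda_endpoint = {
    "VpcId": "vpc-0abc1234",
    "VpcEndpointType": "Interface",
    "ServiceName": "com.amazonaws.us-east-1.lambda",
    "SubnetIds": ["subnet-0abc1234"],
    "SecurityGroupIds": ["sg-0abc1234"],
    "PrivateDnsEnabled": True,   # lets the SDK resolve the normal endpoint name
}
# Same endpoint shape, different service name for SNS.
sns_endpoint = {**lambda_endpoint, "ServiceName": "com.amazonaws.us-east-1.sns"}
```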
Q19
An operations engineer must configure an AWS account to meet CIS AWS Foundations Benchmark requirements. Which AWS service provides automated compliance checks against CIS benchmarks?
A
Enable AWS Security Hub and activate the CIS AWS Foundations Benchmark standard to automatically evaluate account compliance
B
Deploy a custom AWS Config conformance pack based on CIS benchmark controls
C
Use Amazon Inspector to run CIS benchmark scans on all EC2 instances in the account
D
Hire a third-party auditor to manually assess CIS compliance quarterly
Correct Answer
Enable AWS Security Hub and activate the CIS AWS Foundations Benchmark standard to automatically evaluate account compliance
Explanation
AWS Security Hub offers the CIS AWS Foundations Benchmark standard as a built-in compliance check. When enabled, it automatically evaluates your account configuration against CIS benchmark controls and generates findings for non-compliant items. Learn more: https://docs.aws.amazon.com/securityhub/latest/userguide/cis-aws-foundations-benchmark.html
Q20
An operations engineer needs to configure data classification for objects stored in an S3 bucket. The classification must automatically identify objects containing credit card numbers or social security numbers. Which AWS service is purpose-built for this task?
A
Enable Amazon Macie and create a sensitive data discovery job targeting the S3 bucket to automatically classify objects containing credit card numbers and social security numbers
B
Create S3 Object Lambda access points that scan objects for sensitive data patterns on read
C
Configure Amazon Comprehend to analyze S3 objects for PII detection
D
Write a Lambda function triggered by S3 events that uses regex patterns to scan uploaded objects for credit card and SSN formats
Correct Answer
Enable Amazon Macie and create a sensitive data discovery job targeting the S3 bucket to automatically classify objects containing credit card numbers and social security numbers
Explanation
Amazon Macie uses machine learning and pattern matching to discover, classify, and protect sensitive data in S3. It automatically identifies PII, financial data, credentials, and other sensitive data types including credit card numbers and SSNs. Learn more: https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html
Q21
An operations engineer must implement a policy where EC2 instances can only be launched by IAM users who have MFA authenticated within the last 4 hours. Which IAM condition achieves this?
A
Create an IAM policy with a Deny statement for ec2:RunInstances with the condition aws:MultiFactorAuthAge greater than 14400 seconds (4 hours)
B
Configure the EC2 service to require MFA tokens as a launch parameter
C
Create an SCP that denies ec2:RunInstances for all users without MFA devices registered
D
Use AWS Config rules to terminate instances launched without recent MFA authentication
Correct Answer
Create an IAM policy with a Deny statement for ec2:RunInstances with the condition aws:MultiFactorAuthAge greater than 14400 seconds (4 hours)
Explanation
The aws:MultiFactorAuthAge condition key returns the number of seconds since the user authenticated with MFA. Setting a condition that denies ec2:RunInstances when MultiFactorAuthAge is greater than 14400 (4 hours in seconds) enforces the requirement. Learn more: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html
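A minimal sketch of such a policy follows. Using the `...IfExists` operator variant also denies sessions where `aws:MultiFactorAuthAge` is missing entirely, i.e. the caller never authenticated with MFA:

```python
# Minimal sketch of the IAM deny policy. NumericGreaterThanIfExists matches
# both stale MFA sessions (age > 14400s) and sessions with no MFA at all
# (key absent), so those callers are denied too.
mfa_age_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRunInstancesWithStaleMfa",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "NumericGreaterThanIfExists": {
                    "aws:MultiFactorAuthAge": "14400"   # 4 hours in seconds
                }
            },
        }
    ],
}
```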
Q22
An operations engineer discovers that sensitive data is being logged in CloudWatch Logs by an application. The engineer must implement data masking to protect sensitive fields like credit card numbers in the log output. Which CloudWatch Logs feature supports this?
A
Configure a CloudWatch Logs data protection policy on the log group with data identifiers for credit card numbers to automatically detect and mask sensitive data
B
Modify the application code to remove sensitive data before writing to CloudWatch Logs
C
Create a CloudWatch Logs subscription filter that routes logs through a Lambda function that masks sensitive data before storing
D
Enable CloudWatch Logs encryption with a KMS key to protect sensitive data at rest
Correct Answer
Configure a CloudWatch Logs data protection policy on the log group with data identifiers for credit card numbers to automatically detect and mask sensitive data
Explanation
CloudWatch Logs data protection policies allow you to automatically detect and mask sensitive data in log events. You can define data identifiers for patterns like credit card numbers, and CloudWatch Logs will mask the matching data in log output. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html
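A data protection policy pairs an Audit statement with a Deidentify (masking) statement. The sketch below uses the managed CreditCardNumber data identifier; the policy name and Sid values are illustrative:

```python
# Sketch of a CloudWatch Logs data protection policy using the managed
# CreditCardNumber data identifier; name and Sids are illustrative.
data_protection_policy = {
    "Name": "mask-cardholder-data",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit",
            "DataIdentifier": [
                "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"
            ],
            # Audit can also route findings to CloudWatch Logs, S3, or Firehose.
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact",
            "DataIdentifier": [
                "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"
            ],
            # Deidentify masks the matched data in log output.
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}
```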
Q23
An operations engineer must configure a VPC with both IPv4 and IPv6 addressing. Instances in public subnets need both IPv4 and IPv6 internet access, while instances in private subnets need only IPv4 internet access. Which architecture meets these requirements?
A
Associate an IPv6 CIDR block with the VPC, configure public subnets with routes for 0.0.0.0/0 and ::/0 to the internet gateway, and configure private subnets with only a 0.0.0.0/0 route to a NAT gateway
B
Create separate VPCs for IPv4 and IPv6 traffic and connect them with VPC peering
C
Use a transit gateway to route IPv6 traffic from public subnets and IPv4 traffic from private subnets
D
Configure dual-stack subnets with a NAT64 gateway for translating private subnet IPv6 traffic to IPv4
Correct Answer
Associate an IPv6 CIDR block with the VPC, configure public subnets with routes for 0.0.0.0/0 and ::/0 to the internet gateway, and configure private subnets with only a 0.0.0.0/0 route to a NAT gateway
Explanation
Public subnets need an internet gateway for both IPv4 and IPv6. Private subnets need a NAT gateway for IPv4 outbound access. IPv6 in private subnets should not have a route to the internet gateway. If IPv6 outbound is needed later, an egress-only internet gateway would be used. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html
Q24
An operations engineer is troubleshooting a connectivity issue where an EC2 instance can reach the internet but cannot resolve DNS names. The VPC uses the default DHCP option set. What should the engineer check FIRST?
A
Verify that the VPC has the enableDnsSupport attribute set to true and that the DHCP option set specifies the correct DNS server addresses
B
Check if the instance's security group allows outbound UDP traffic on port 53
C
Verify that the route table has a route to the internet gateway for DNS traffic
D
Check if the instance has a DNS client installed and properly configured
Correct Answer
Verify that the VPC has the enableDnsSupport attribute set to true and that the DHCP option set specifies the correct DNS server addresses
Explanation
The default VPC DHCP option set uses the Amazon-provided DNS server (VPC CIDR + 2). If a custom DHCP option set was applied with incorrect DNS server addresses, or if the VPC DNS resolution attribute is disabled, DNS resolution would fail while internet connectivity (by IP) works fine. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html
Q25
An operations engineer must configure CloudFront to serve different content based on the viewer's device type (mobile vs desktop). Which CloudFront feature enables content differentiation based on device type?
A
Configure the CloudFront cache behavior to forward CloudFront device detection headers (CloudFront-Is-Mobile-Viewer) to the origin and include them in the cache key
B
Create separate CloudFront distributions for mobile and desktop and use Route 53 to route based on User-Agent
C
Use Lambda@Edge to parse the User-Agent header and redirect to different origins based on device type
D
Configure CloudFront Functions to modify the request URI based on the detected device type
Correct Answer
Configure the CloudFront cache behavior to forward CloudFront device detection headers (CloudFront-Is-Mobile-Viewer) to the origin and include them in the cache key
Explanation
CloudFront can detect device type using the CloudFront-Is-Mobile-Viewer, CloudFront-Is-Tablet-Viewer, and CloudFront-Is-Desktop-Viewer headers. These headers can be forwarded to the origin or used in cache behaviors to serve different content. Learn more: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html
Q26
An operations engineer needs to troubleshoot an intermittent connectivity issue between an EC2 instance and an external API endpoint. The issue occurs randomly and lasts for a few seconds. Which tool helps capture and analyze the network traffic during these events?
A
Enable VPC Flow Logs at the ENI level with a 1-minute aggregation interval and filter for the external API endpoint's IP address to identify dropped connections
B
Install tcpdump on the EC2 instance and run it continuously to capture packet-level data during failures
C
Use AWS X-Ray to trace network requests between the instance and the external API
D
Create a CloudWatch alarm on the NetworkPacketsOut metric to detect when connectivity drops
Correct Answer
Enable VPC Flow Logs at the ENI level with a 1-minute aggregation interval and filter for the external API endpoint's IP address to identify dropped connections
Explanation
VPC Flow Logs capture information about IP traffic going to and from network interfaces. Enabling flow logs at the ENI level with a short aggregation interval (1 minute) provides granular data for identifying intermittent connectivity issues. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
Q27
An operations engineer must configure an ALB to route requests to different target groups based on the HTTP request path. Requests to /api/* should go to a microservices target group, and requests to /static/* should go to a content server target group. How should the ALB be configured?
Create ALB listener rules with path-pattern conditions: one rule for /api/* forwarding to the microservices target group and another for /static/* forwarding to the content server target group
B
Create two separate ALBs — one for API traffic and one for static content — and use Route 53 to route based on path
C
Configure the ALB target group to inspect the HTTP path and route internally based on registered targets
D
Use Lambda@Edge with CloudFront to route requests to different ALB endpoints based on the URL path
Correct Answer
Create ALB listener rules with path-pattern conditions: one rule for /api/* forwarding to the microservices target group and another for /static/* forwarding to the content server target group
Explanation
ALB listener rules support path-based routing. Creating rules with path patterns (/api/* and /static/*) that forward to their respective target groups enables request routing based on URL path. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
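The two rules can be expressed as the parameter shape the elbv2 CreateRule API expects. This is a sketch only; the target group ARNs are placeholders, and the priority values are arbitrary (lower numbers are evaluated first):

```python
def path_rule(priority, pattern, target_group_arn):
    """Build one path-based ALB listener rule (CreateRule-style parameters)."""
    return {
        "Priority": priority,
        "Conditions": [
            {"Field": "path-pattern", "PathPatternConfig": {"Values": [pattern]}}
        ],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

# Placeholder ARNs, not real resources.
rules = [
    path_rule(10, "/api/*", "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/microservices/abc"),
    path_rule(20, "/static/*", "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/content/def"),
]
```

Requests matching neither pattern fall through to the listener's default action, so the default rule should be set deliberately (for example, forward to the microservices group or return a fixed 404).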
Q28
An operations engineer has deployed an ALB with weighted target groups for canary testing. 95% of traffic goes to the stable version and 5% to the canary. After confirming the canary is healthy, the engineer wants to gradually increase canary traffic. Which approach allows this gradual traffic shift?
Modify the ALB listener rule's forward action weights to gradually increase the canary target group weight and decrease the stable target group weight
B
Create additional ALB listener rules with different path patterns to route increasing percentages to the canary
C
Use Route 53 weighted routing to gradually shift DNS resolution between the stable and canary ALB endpoints
D
Deploy additional instances in the canary target group to organically increase its traffic share
Correct Answer
Modify the ALB listener rule's forward action weights to gradually increase the canary target group weight and decrease the stable target group weight
Explanation
ALB weighted target groups use forward actions with weight values. Modifying the weight percentages in the listener rule gradually shifts traffic between the stable and canary target groups. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
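The weighted forward action itself is a small structure, so the gradual shift is just a sequence of rule modifications with different weights. A sketch of the action body (ARNs are placeholders):

```python
def weighted_forward(stable_arn, canary_arn, canary_weight):
    """Build an ALB forward action splitting traffic between two target groups.

    Weights are relative; using a total of 100 makes them read as percentages.
    """
    if not 0 <= canary_weight <= 100:
        raise ValueError("canary_weight must be between 0 and 100")
    return {
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": stable_arn, "Weight": 100 - canary_weight},
                {"TargetGroupArn": canary_arn, "Weight": canary_weight},
            ]
        },
    }

# Start at 5% canary, then re-apply with larger weights as it proves healthy.
step1 = weighted_forward("arn:aws:elb:stable", "arn:aws:elb:canary", 5)
step2 = weighted_forward("arn:aws:elb:stable", "arn:aws:elb:canary", 25)
```

Each step is applied with ModifyRule (or ModifyListener for the default action); no instances are added or removed, which is why this is the low-risk way to shift canary traffic.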
Q29
An operations engineer must configure Amazon Application Recovery Controller (ARC) for a multi-Region application. What capability does ARC provide for managing application recovery?
ARC provides readiness checks to continuously verify DR environment configuration and routing controls with safety interlocks to manage traffic failover between Regions
B
ARC automatically replicates all AWS resources from the primary Region to the DR Region
C
ARC provides automated database failover and data synchronization between Regions
D
ARC monitors CloudWatch alarms and automatically triggers Regional failover when thresholds are breached
Correct Answer
ARC provides readiness checks to continuously verify DR environment configuration and routing controls with safety interlocks to manage traffic failover between Regions
Explanation
Amazon Application Recovery Controller provides readiness checks that continuously verify that the DR environment matches the production configuration, and routing controls that allow you to shift application traffic between Regions with safety interlocks. Learn more: https://docs.aws.amazon.com/r53recovery/latest/dg/what-is-route53-recovery.html
Q30
An operations engineer needs to ensure that a CloudFormation stack update does not accidentally modify or replace the production RDS database. Other resources in the stack can be updated freely. Which feature prevents changes to the RDS resource?
Apply a CloudFormation stack policy that explicitly denies Update:Modify, Update:Replace, and Update:Delete actions on the RDS resource
B
Set the DeletionPolicy to Retain on the RDS resource in the template
C
Enable termination protection on the CloudFormation stack
D
Remove the RDS resource from the CloudFormation template and manage it separately
Correct Answer
Apply a CloudFormation stack policy that explicitly denies Update:Modify, Update:Replace, and Update:Delete actions on the RDS resource
Explanation
A CloudFormation stack policy can deny Update actions on specific resources. By setting a stack policy that denies Update:Modify, Update:Replace, and Update:Delete on the RDS resource, CloudFormation prevents any modification, replacement, or deletion of that resource during stack updates while leaving the rest of the stack freely updatable. Learn more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
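A stack policy of that shape could be sketched as follows; "ProductionDB" is an assumed logical resource ID for the RDS instance in the template, not a value from the question:

```python
import json

# Stack policy: allow all updates, except deny Modify/Replace/Delete on the
# RDS resource (logical ID "ProductionDB" is assumed for illustration).
stack_policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
        {
            "Effect": "Deny",
            "Action": ["Update:Modify", "Update:Replace", "Update:Delete"],
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDB",
        },
    ]
}

stack_policy_body = json.dumps(stack_policy, indent=2)
```

The serialized body is what gets passed as the stack policy (for example via the set-stack-policy API). The explicit Deny overrides the broad Allow, so only the RDS resource is protected.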
SOA-C03 Practice Set-02
30 questions
Q1
A company runs a fleet of EC2 instances across three AWS Regions. The operations team needs a single dashboard that displays CPU utilization and network throughput metrics from all Regions. The dashboard must update automatically and be shareable with the security team's AWS account. Which approach meets these requirements with the LEAST operational overhead?
A
Create individual CloudWatch dashboards in each Region and use a custom HTML page with iframes to combine them
Create a CloudWatch cross-account cross-Region dashboard in a central monitoring account and share it using CloudWatch dashboard sharing
C
Export all metrics to an Amazon S3 bucket and use Amazon QuickSight to build a unified dashboard
D
Install a third-party monitoring agent on each instance and aggregate metrics in a centralized Grafana server hosted on EC2
Correct Answer
Create a CloudWatch cross-account cross-Region dashboard in a central monitoring account and share it using CloudWatch dashboard sharing
Explanation
Amazon CloudWatch cross-account and cross-Region dashboards allow you to create a unified view of metrics from multiple accounts and Regions in a single dashboard. This eliminates the need to switch between consoles. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_xaxr_dashboard.html
Q2
An application running on Amazon EC2 instances generates custom application logs in a non-standard JSON format. The operations engineer must collect these logs and create a CloudWatch metric filter that triggers an alarm when the error rate exceeds 5% over a 5-minute period. The alarm must send a notification to an Amazon SNS topic. What is the correct sequence of steps?
A
Create a CloudWatch alarm directly on the EC2 instance system logs and configure the alarm action to publish to SNS
Install and configure the CloudWatch agent to send application logs to CloudWatch Logs, create a metric filter on the log group to extract error counts, then create a CloudWatch alarm on the custom metric with an SNS action
C
Use AWS CloudTrail to capture application logs, create an EventBridge rule to detect errors, and route events to SNS
D
Enable VPC Flow Logs, filter for error patterns, and create a CloudWatch alarm on the flow log metric
Correct Answer
Install and configure the CloudWatch agent to send application logs to CloudWatch Logs, create a metric filter on the log group to extract error counts, then create a CloudWatch alarm on the custom metric with an SNS action
Explanation
The CloudWatch agent must be installed and configured to send the custom log files to CloudWatch Logs. A metric filter with a filter pattern then parses the logs for errors and publishes a custom metric. Finally, a CloudWatch alarm monitors the metric and triggers the SNS notification. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
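The filtering itself happens inside CloudWatch Logs (for the JSON logs described here the pattern might be `{ $.level = "ERROR" }`, or simply the term `ERROR`); the sketch below only mimics the counting step so the metric-filter idea is concrete. The sample events and field names are invented:

```python
# Mimics what a metric filter with pattern { $.level = "ERROR" } would count
# in one period: one metric value per matching log event. The real matching
# is done by CloudWatch Logs, not by application code.
import json

def error_count(log_events):
    """Count events whose JSON 'level' field is ERROR."""
    count = 0
    for event in log_events:
        try:
            if json.loads(event).get("level") == "ERROR":
                count += 1
        except json.JSONDecodeError:
            pass  # non-JSON lines simply don't match the pattern
    return count

events = [
    '{"level": "INFO", "msg": "request ok"}',
    '{"level": "ERROR", "msg": "upstream timeout"}',
    '{"level": "ERROR", "msg": "db connection reset"}',
]
```

The published custom metric (here, a count of 2 for the sample period) is what the alarm evaluates; an error *rate* over 5% additionally needs a total-request metric and a CloudWatch metric math expression dividing the two.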
Q3
A CloudWatch composite alarm has been configured to monitor a production application. The composite alarm enters the ALARM state, but the operations team does not receive any notification. The individual child alarms are correctly configured and are also in the ALARM state. What is the MOST likely cause of this issue?
A
The composite alarm rule expression is using OR logic instead of AND logic
B
The SNS topic used by the child alarms has a subscription filter policy that blocks composite alarm messages
The composite alarm does not have an alarm action configured or actions are suppressed by the ActionsSuppressor setting
D
CloudWatch composite alarms do not support SNS notifications and require EventBridge integration
Correct Answer
The composite alarm does not have an alarm action configured or actions are suppressed by the ActionsSuppressor setting
Explanation
Composite alarms can have actions suppressed if the ActionsSuppressor property is configured or if actions are explicitly disabled on the composite alarm itself. Even if child alarms are firing correctly, the composite alarm's own notification actions must be configured and enabled. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
Q4
An operations engineer notices that an Amazon RDS for MySQL database is experiencing intermittent slowness. Amazon RDS Performance Insights shows that the top wait event is IO:DataFileRead with a high average active sessions (AAS) count. Which TWO actions should the engineer take to improve performance? (Choose TWO.)
Modify the DB instance to use a larger instance class with more memory to increase the buffer pool size
B
Enable Multi-AZ deployment to distribute read I/O across the standby instance
Change the EBS volume type from General Purpose (gp3) to Provisioned IOPS (io1) storage
D
Enable Enhanced Monitoring at 1-second granularity to capture additional OS-level metrics
E
Increase the allocated storage size without changing the volume type
Correct Answers
Modify the DB instance to use a larger instance class with more memory to increase the buffer pool size
Change the EBS volume type from General Purpose (gp3) to Provisioned IOPS (io1) storage
Explanation
High IO:DataFileRead wait events indicate the database is performing excessive disk reads. Upgrading to a larger instance class with more memory allows the buffer pool to cache more data. Converting to Provisioned IOPS (io1/io2) storage provides consistent I/O performance. Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html
Q5
A company uses Amazon S3 to store data lake files. Users report that multipart uploads of 5 GB files to a bucket in us-east-1 from offices in Asia Pacific take over 30 minutes. The operations team must reduce upload times. Which solution provides the MOST improvement?
A
Enable S3 Versioning on the bucket to improve write throughput
Enable S3 Transfer Acceleration on the bucket and update the upload endpoint to use the accelerate endpoint
C
Create an S3 bucket in ap-southeast-1 and configure S3 Cross-Region Replication from us-east-1
D
Increase the multipart upload part size from 5 MB to 100 MB
Correct Answer
Enable S3 Transfer Acceleration on the bucket and update the upload endpoint to use the accelerate endpoint
Explanation
S3 Transfer Acceleration uses Amazon CloudFront edge locations to accelerate uploads. Data is routed from the client to the nearest edge location over the optimized AWS network path to the S3 bucket, significantly reducing latency for cross-region transfers. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html
Q6
An operations engineer needs to run a predefined Systems Manager Automation runbook to automatically stop EC2 instances that have been running for more than 72 hours. The runbook should execute every hour. Which configuration achieves this?
Create an Amazon EventBridge scheduled rule that runs every hour and targets the Systems Manager Automation document
B
Create a CloudWatch alarm on the StatusCheckFailed metric with a period of 72 hours and an Auto Scaling action to terminate instances
C
Configure AWS Config with a custom rule that evaluates every hour and calls a Lambda function to stop instances
D
Use EC2 Instance Scheduler with a DynamoDB schedule table to track and terminate long-running instances
Correct Answer
Create an Amazon EventBridge scheduled rule that runs every hour and targets the Systems Manager Automation document
Explanation
Amazon EventBridge scheduled rules can trigger Systems Manager Automation documents at defined intervals. The Automation runbook AWS-StopEC2Instance or a custom runbook can evaluate instance launch time and stop instances exceeding the threshold. Learn more: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html
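The decision the hourly runbook has to make is just a timestamp comparison against the instance's LaunchTime. A sketch of that check (the function is a hypothetical helper, not an AWS-provided one):

```python
from datetime import datetime, timedelta, timezone

MAX_UPTIME = timedelta(hours=72)

def should_stop(launch_time, now=None):
    """Return True if an instance launched at launch_time has exceeded 72 hours.

    launch_time corresponds to the EC2 LaunchTime attribute (UTC).
    """
    now = now or datetime.now(timezone.utc)
    return now - launch_time > MAX_UPTIME

launched = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
```

Inside a custom Automation runbook this check would run against each instance returned by DescribeInstances, with the EventBridge schedule expression (for example `rate(1 hour)`) providing the hourly trigger.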
Q7
A CloudWatch alarm is configured with a threshold of CPUUtilization > 80% for 3 consecutive datapoints with a period of 60 seconds. The alarm transitions to ALARM state, and 60 seconds later the CPU drops to 50%. The alarm remains in ALARM state. What explains this behavior?
A
The alarm uses INSUFFICIENT_DATA handling that prevents returning to OK until all datapoints are below threshold
The evaluation period requires 3 consecutive non-breaching datapoints before transitioning back to OK state
C
CloudWatch alarms have a mandatory 5-minute cooldown period before they can transition states
D
The alarm action SNS topic is throttling the OK notification
Correct Answer
The evaluation period requires 3 consecutive non-breaching datapoints before transitioning back to OK state
Explanation
CloudWatch alarms evaluate based on the number of datapoints in the evaluation period. With 3 consecutive datapoints required and a 60-second period, the alarm evaluates the last 3 minutes. A single datapoint below the threshold does not satisfy the condition to return to OK because the evaluation window still contains breaching datapoints. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
Q8
An operations engineer must configure the CloudWatch agent on an Amazon ECS cluster running on EC2 launch type to collect container-level metrics including CPU and memory usage per task. Which configuration approach is correct?
Deploy the CloudWatch agent as an ECS daemon service and enable Container Insights on the ECS cluster
B
Install the CloudWatch agent directly on the container images and configure a custom metric namespace
C
Enable CloudWatch detailed monitoring on the underlying EC2 instances which automatically captures per-container metrics
D
Configure an EventBridge rule to poll ECS task metadata every minute and publish custom metrics
Correct Answer
Deploy the CloudWatch agent as an ECS daemon service and enable Container Insights on the ECS cluster
Explanation
For ECS on EC2, Container Insights uses the CloudWatch agent deployed as a daemon service to collect container-level metrics. The agent is deployed as an ECS daemon task and is configured to collect metrics from the ECS task metadata endpoint. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-ECS-cluster.html
Q9
An operations engineer must ensure that EBS snapshots of production volumes in us-east-1 are automatically copied to us-west-2 for disaster recovery. The snapshots must be retained for 90 days in both Regions. Which solution requires the LEAST operational overhead?
Create an AWS Backup plan with a cross-Region copy rule to us-west-2 and set the retention period to 90 days in both Regions
B
Create an EventBridge rule triggered by EBS snapshot completion that invokes a Lambda function to copy snapshots to us-west-2
C
Use Amazon Data Lifecycle Manager (DLM) to create snapshots and configure a separate DLM policy in us-west-2 to copy them
D
Write a cron job on an EC2 instance that uses the AWS CLI to create and copy snapshots between Regions nightly
Correct Answer
Create an AWS Backup plan with a cross-Region copy rule to us-west-2 and set the retention period to 90 days in both Regions
Explanation
AWS Backup supports cross-Region copy of EBS snapshots as part of a backup plan. You can define lifecycle rules that automatically handle retention periods in both the source and destination Regions. Learn more: https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html
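The key pieces of such a backup plan are one rule with a lifecycle and a copy action carrying its own lifecycle. A sketch of the rule's shape (vault names, the schedule, and the account ID are placeholders):

```python
# One AWS Backup plan rule: daily snapshot in us-east-1, copied to a vault
# in us-west-2, with 90-day retention applied in both Regions.
# Vault names/ARN and the cron schedule are assumed values.
backup_rule = {
    "RuleName": "daily-ebs-with-dr-copy",
    "TargetBackupVaultName": "prod-vault-use1",
    "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
    "Lifecycle": {"DeleteAfterDays": 90},       # retention in us-east-1
    "CopyActions": [
        {
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
            "Lifecycle": {"DeleteAfterDays": 90},  # retention in us-west-2
        }
    ],
}
```

Because the copy action has its own Lifecycle, the 90-day retention is enforced independently in the destination Region with no Lambda functions or cron jobs to maintain.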
Q10
A company uses Amazon ElastiCache for Redis as a session store for a web application. The operations team must ensure the cache layer can survive a single node failure without data loss. Which ElastiCache configuration provides this capability?
Deploy an ElastiCache for Redis replication group with Multi-AZ enabled and automatic failover
B
Deploy a single ElastiCache for Redis node with daily automated backups enabled
C
Deploy an ElastiCache for Memcached cluster with nodes spread across multiple AZs
D
Deploy an ElastiCache for Redis cluster with cluster mode disabled and no replicas but enable append-only file (AOF) persistence
Correct Answer
Deploy an ElastiCache for Redis replication group with Multi-AZ enabled and automatic failover
Explanation
An ElastiCache for Redis replication group with Multi-AZ enabled and automatic failover provides high availability. When Multi-AZ is enabled, ElastiCache maintains a read replica in a different AZ and automatically promotes it if the primary node fails, preserving the cached session data. Learn more: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
Q11
A database administrator restores an Amazon RDS for MySQL snapshot to recover from data corruption. After the restore completes, the application cannot connect to the new database instance. What is the MOST likely cause?
The application is still using the old DB instance endpoint, which does not point to the restored instance
B
The restored instance does not have Multi-AZ enabled, causing connectivity failures
C
The snapshot was encrypted and the application does not have access to the KMS key
D
The restored RDS instance is in a different VPC than the original instance and has no route to the application
Correct Answer
The application is still using the old DB instance endpoint, which does not point to the restored instance
Explanation
Restoring an RDS snapshot creates a new DB instance with a new endpoint. The application connection string must be updated to point to the new endpoint. Security groups, parameter groups, and option groups also need to be verified as they may not match the original configuration. Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
Q12
An operations engineer must implement a failover strategy for an Amazon FSx for Windows File Server file system. The application requires the file system to be available within 30 minutes of a Regional failure. Which approach meets this requirement?
A
Configure an Amazon FSx Multi-AZ deployment which automatically replicates data to a standby Region
Schedule regular backups of the FSx file system and copy them to the DR Region, then restore from the backup during failover
C
Use AWS DataSync to continuously replicate the FSx file system to an FSx file system in the DR Region
D
Configure AWS Storage Gateway File Gateway in the DR Region to cache frequently accessed files
Correct Answer
Schedule regular backups of the FSx file system and copy them to the DR Region, then restore from the backup during failover
Explanation
Amazon FSx supports cross-Region backup copies. In a disaster scenario, you can restore a file system from a backup in the DR Region. A Multi-AZ FSx deployment provides HA within a Region, but for cross-Region DR, backup and restore is the recommended approach. Learn more: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-backups.html
Q13
A company runs a web application using Amazon RDS for MySQL. The database performs nightly batch processing jobs that consume significant I/O. During batch processing, read queries from the web application become slow. Which solution offloads read traffic from the primary DB instance while the batch job runs?
Create an RDS read replica and direct the web application's read queries to the read replica endpoint during batch processing windows
B
Increase the RDS instance class to a larger size with more IOPS to handle both batch and read workloads
C
Schedule the batch processing job to run during a maintenance window when the application is offline
D
Enable RDS Multi-AZ and route read traffic to the standby instance during batch processing
Correct Answer
Create an RDS read replica and direct the web application's read queries to the read replica endpoint during batch processing windows
Explanation
RDS read replicas handle read queries independently from the primary. Directing the web application's read queries to a read replica while the primary handles batch writes ensures that batch jobs do not impact application read performance. Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
Q14
An operations engineer needs to deploy identical CloudFormation stacks across 15 AWS accounts in an AWS Organization. The stacks must be deployed in us-east-1 and eu-west-1 in each account. Which approach is the MOST operationally efficient?
Use CloudFormation StackSets with AWS Organizations integration to deploy across all accounts and specified Regions simultaneously
B
Create a CodePipeline pipeline with 15 deployment stages, one for each account, with cross-account IAM roles
C
Write a shell script that iterates through each account using assumed roles and runs aws cloudformation create-stack in each Region
D
Use AWS Service Catalog to publish the CloudFormation template as a product and have each account provision it individually
Correct Answer
Use CloudFormation StackSets with AWS Organizations integration to deploy across all accounts and specified Regions simultaneously
Explanation
CloudFormation StackSets allow you to deploy CloudFormation stacks across multiple accounts and Regions with a single operation. When integrated with AWS Organizations, StackSets can automatically deploy to all accounts in specified organizational units. Learn more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html
Q15
A CloudFormation stack update fails and enters the UPDATE_ROLLBACK_FAILED state. The engineer determines that the rollback failed because a resource was manually deleted outside of CloudFormation. What should the engineer do to recover the stack?
A
Delete the CloudFormation stack and recreate it from scratch with the original template
Use the ContinueUpdateRollback API action with the ResourcesToSkip parameter to skip the deleted resource
C
Manually recreate the deleted resource with the same logical ID and then retry the rollback
D
Import the stack into a new CloudFormation stack using the resource import feature
Correct Answer
Use the ContinueUpdateRollback API action with the ResourcesToSkip parameter to skip the deleted resource
Explanation
When a stack is in UPDATE_ROLLBACK_FAILED state, you can use the continue-update-rollback API with the ResourcesToSkip parameter to skip the resources that cannot be rolled back (because they were manually deleted). This allows the rollback to complete. Learn more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html
Q16
An operations engineer is creating a golden AMI pipeline using EC2 Image Builder. The pipeline must install the latest security patches, install the CloudWatch agent, and run CIS benchmark hardening tests before distributing the AMI to three AWS Regions. Which EC2 Image Builder components are required? (Choose THREE.)
A build component that installs security patches and the CloudWatch agent
A test component that runs CIS benchmark hardening validation
A distribution configuration that specifies the three target Regions
D
A launch template that defines the instance type for the pipeline
E
An S3 bucket policy that allows Image Builder to store AMI artifacts
Correct Answers
A build component that installs security patches and the CloudWatch agent
A test component that runs CIS benchmark hardening validation
A distribution configuration that specifies the three target Regions
Explanation
EC2 Image Builder uses a recipe that defines the source image and build components (for installing patches and the CloudWatch agent). Test components run validation tests (CIS benchmarks). A distribution configuration defines which Regions the AMI is distributed to. Learn more: https://docs.aws.amazon.com/imagebuilder/latest/userguide/what-is-image-builder.html
Q17
A company uses AWS CDK to define infrastructure. A developer pushes a CDK change that modifies an RDS instance by replacing it, which would cause data loss. How can the operations team prevent accidental resource replacements during CDK deployments?
Run cdk diff before deployment to review changes and set the removal policy to RETAIN on the RDS construct to prevent deletion during replacement
B
Enable CloudFormation drift detection to prevent any modifications to the RDS instance
C
Configure a CloudFormation stack policy that denies all Update:Replace actions on all resources
D
Use CloudFormation change sets exclusively instead of CDK deploy to prevent accidental updates
Correct Answer
Run cdk diff before deployment to review changes and set the removal policy to RETAIN on the RDS construct to prevent deletion during replacement
Explanation
The cdk diff command shows what changes will be made before deployment. Additionally, setting the DeletionPolicy and UpdateReplacePolicy to Retain on critical resources in CDK prevents CloudFormation from deleting resources during replacements. The --require-approval flag also adds a manual confirmation step. Learn more: https://docs.aws.amazon.com/cdk/v2/guide/ref-cli-cmd-diff.html
Q18
An operations engineer must implement a blue/green deployment strategy for an application running on Amazon EC2 instances behind an Application Load Balancer. The new version must be validated by internal testers before receiving production traffic. Which approach achieves this?
Create a second target group (green) with new instances, add a test listener on a separate port for validation, then update the production listener to forward traffic to the green target group
B
Deploy the new version to the same instances using a rolling update strategy in the Auto Scaling group
C
Create a new Auto Scaling group with the new AMI and gradually shift Route 53 weighted records from old to new
D
Use EC2 instance metadata tags to route specific test users to new instances through the same ALB listener
Correct Answer
Create a second target group (green) with new instances, add a test listener on a separate port for validation, then update the production listener to forward traffic to the green target group
Explanation
Using two target groups (blue and green) with ALB listener rules allows traffic shifting. A separate test listener on a different port lets internal testers validate the green environment before switching the production listener. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
Q19
An operations engineer must prevent IAM users from creating access keys for their own accounts. All programmatic access must go through IAM Identity Center with temporary credentials. Which approach enforces this?
Attach an SCP to the organization that denies the iam:CreateAccessKey action for all IAM users, while configuring IAM Identity Center for temporary credential issuance
B
Configure the IAM account password policy to disable access key creation
C
Delete all existing access keys and rely on CloudTrail to alert if new ones are created
D
Create an AWS Config rule that detects access key creation and auto-remediates by deleting the keys
Correct Answer
Attach an SCP to the organization that denies the iam:CreateAccessKey action for all IAM users, while configuring IAM Identity Center for temporary credential issuance
Explanation
An SCP or IAM policy with an explicit deny on iam:CreateAccessKey prevents users from creating long-term access keys. When combined with IAM Identity Center, all programmatic access uses short-lived temporary credentials. Learn more: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
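The SCP itself is short. A sketch of the policy document (the Sid is an arbitrary label):

```python
# Service control policy denying long-term access key creation for all
# principals in the attached accounts/OUs.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessKeyCreation",
            "Effect": "Deny",
            "Action": "iam:CreateAccessKey",
            "Resource": "*",
        }
    ],
}
```

Because SCPs set the maximum available permissions for accounts in the organization, this Deny applies even to users who hold iam:CreateAccessKey in their own IAM policies; IAM Identity Center then remains the only path to programmatic credentials.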
Q20
An operations engineer needs to ensure that an EC2 instance can only decrypt data using a specific KMS key if the request originates from within a specific VPC. Which mechanism enforces this?
Add a condition to the KMS key policy using the aws:sourceVpce or aws:sourceVpc condition key to restrict access to requests from the specified VPC, and create a VPC endpoint for KMS
B
Configure the EC2 instance's security group to allow outbound traffic only to the KMS endpoint in the VPC
C
Create a VPC endpoint policy for KMS that restricts access to the specific KMS key
D
Configure network ACLs on the EC2 subnet to block all traffic except to the KMS service endpoint
Correct Answer
Add a condition to the KMS key policy using the aws:sourceVpce or aws:sourceVpc condition key to restrict access to requests from the specified VPC, and create a VPC endpoint for KMS
Explanation
KMS key policies support the aws:sourceVpc condition key, which restricts cryptographic operations to requests that originate from a specific VPC. This is enforced through VPC endpoints for KMS. Learn more: https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html
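One common way to express this in the key policy is a Deny statement that fires whenever the request does not arrive from the expected VPC. A sketch of that statement (the VPC ID is a placeholder):

```python
# Key policy statement: deny Decrypt for any request whose source VPC is
# not the expected one. Requests must traverse the KMS VPC endpoint for
# the aws:sourceVpc context key to be present. vpc-0abc1234 is a placeholder.
key_policy_statement = {
    "Sid": "DenyDecryptOutsideVpc",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringNotEquals": {"aws:sourceVpc": "vpc-0abc1234"}
    },
}
```

The explicit Deny overrides any Allow statements elsewhere in the key policy or the caller's IAM policies, so decryption succeeds only for traffic originating inside the named VPC.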
Q21
An operations engineer must enable Amazon Macie to automatically discover and classify sensitive data (PII, financial data) stored in S3 buckets. Which Macie feature performs automated sensitive data discovery?
Enable Macie's automated sensitive data discovery feature which continuously samples S3 objects and reports findings about sensitive data categories
B
Create a Macie classification job that runs manually on selected S3 buckets
C
Configure S3 event notifications to trigger a Macie scan when new objects are uploaded
D
Enable S3 Storage Lens with advanced metrics to detect sensitive data patterns in object metadata
Correct Answer
Enable Macie's automated sensitive data discovery feature which continuously samples S3 objects and reports findings about sensitive data categories
Explanation
Amazon Macie automated sensitive data discovery continuously samples and analyzes objects in S3 buckets to detect sensitive data. It uses machine learning and pattern matching to identify PII, financial data, and other sensitive content. Learn more: https://docs.aws.amazon.com/macie/latest/user/discovery-asdd.html
Q22
A company stores application configuration files in AWS Systems Manager Parameter Store as SecureString parameters. The operations engineer must ensure that only the application's EC2 instances can read these parameters. Which access control mechanism should be used?
Configure the EC2 instance's IAM role with a policy that allows ssm:GetParameter for the specific parameter paths and grants kms:Decrypt permission for the KMS key used to encrypt the SecureStrings
B
Store the parameters with a resource tag and create a tag-based access control policy in Parameter Store
C
Configure a VPC endpoint for SSM with a VPC endpoint policy that restricts access to the instance's subnet
D
Use Parameter Store's built-in access control feature to whitelist the instance's IAM role ARN
Correct Answer
Configure the EC2 instance's IAM role with a policy that allows ssm:GetParameter for the specific parameter paths and grants kms:Decrypt permission for the KMS key used to encrypt the SecureStrings
Explanation
IAM policies attached to the EC2 instance's IAM role control access to Parameter Store. The policy should allow ssm:GetParameter only for the specific parameter paths, and the KMS key policy must allow the instance role to decrypt the SecureString values. Learn more: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-access.html
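A sketch of the instance role's policy follows; the parameter path, Region, account ID, and key ID are all placeholder values, not details from the question:

```python
# IAM policy for the EC2 instance role: read only the app's parameter
# subtree and decrypt with the specific CMK. All ARN components here
# (account, region, path, key ID) are illustrative placeholders.
instance_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ssm:GetParameter", "ssm:GetParametersByPath"],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/myapp/prod/*",
        },
        {
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        },
    ],
}
```

Scoping the SSM statement to the `/myapp/prod/` path prefix (rather than `*`) keeps other teams' parameters unreadable, and the KMS statement is what actually allows the SecureString values to be decrypted at read time.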
Q23
An operations engineer must configure a VPC peering connection between VPC-A (CIDR: 10.0.0.0/16) in Account A and VPC-B (CIDR: 10.1.0.0/16) in Account B. After creating the peering connection, instances in VPC-A cannot communicate with instances in VPC-B. What must the engineer configure? (Choose TWO.)
Add route table entries in both VPCs with the peer VPC's CIDR block pointing to the VPC peering connection
Update security groups in both VPCs to allow inbound traffic from the peer VPC's CIDR block or reference the peer VPC's security groups
C
Enable DNS resolution support on the VPC peering connection for cross-VPC DNS lookups
D
Create an internet gateway in both VPCs to route peering traffic
E
Configure a NAT gateway in each VPC to handle cross-VPC traffic
Correct Answers
Add route table entries in both VPCs with the peer VPC's CIDR block pointing to the VPC peering connection
Update security groups in both VPCs to allow inbound traffic from the peer VPC's CIDR block or reference the peer VPC's security groups
Explanation
VPC peering requires route table entries in both VPCs pointing to the peering connection for the peer VPC's CIDR. Additionally, security groups must allow traffic from the peer VPC's CIDR or security group. Learn more: https://docs.aws.amazon.com/vpc/latest/peering/working-with-vpc-peering.html
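To illustrate the routing half of the fix, the two missing routes can be expressed as the parameters for one EC2 CreateRoute call per VPC. All IDs below are hypothetical:

```python
# Hypothetical IDs; substitute the real route tables and peering connection.
PEERING_ID = "pcx-0123456789abcdef0"

# One CreateRoute call per VPC: each side points the *peer* VPC's CIDR
# at the peering connection.
create_route_calls = [
    {   # In VPC-A (10.0.0.0/16): send VPC-B traffic via the peering
        "RouteTableId": "rtb-0aaaaaaaaaaaaaaa1",
        "DestinationCidrBlock": "10.1.0.0/16",
        "VpcPeeringConnectionId": PEERING_ID,
    },
    {   # In VPC-B (10.1.0.0/16): send VPC-A traffic via the peering
        "RouteTableId": "rtb-0bbbbbbbbbbbbbbb2",
        "DestinationCidrBlock": "10.0.0.0/16",
        "VpcPeeringConnectionId": PEERING_ID,
    },
]
```

Note the symmetry: a route on only one side lets packets leave but gives replies no path back, so both tables must be updated.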
Q24
An operations engineer needs to configure AWS Network Firewall to inspect all traffic entering and leaving a VPC. The firewall must block traffic to known malicious domains and allow all other traffic. Which configuration achieves this?
Deploy AWS Network Firewall in dedicated subnets, create a stateful rule group with a domain list that blocks known malicious domains, and update route tables to direct traffic through the firewall endpoints
B
Configure security groups on all instances to deny outbound traffic to malicious IP addresses
C
Use Route 53 Resolver DNS Firewall to block DNS queries to malicious domains
D
Configure AWS WAF on the VPC to inspect all traffic and block malicious domain requests
Correct Answer
Deploy AWS Network Firewall in dedicated subnets, create a stateful rule group with a domain list that blocks known malicious domains, and update route tables to direct traffic through the firewall endpoints
Explanation
AWS Network Firewall uses rule groups to inspect and filter traffic. A domain list rule group can block traffic to specific domains. The firewall is deployed in dedicated subnets and traffic is routed through it via route table modifications. Learn more: https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html
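As a sketch of what the domain-list rule group looks like, the payload below follows the shape of Network Firewall's CreateRuleGroup RulesSourceList; the domains are placeholders:

```python
# Sketch of a stateful rule group body for Network Firewall's
# CreateRuleGroup; the domains are placeholders. GeneratedRulesType
# DENYLIST blocks the listed domains and passes everything else.
domain_block_rule_group = {
    "RulesSource": {
        "RulesSourceList": {
            "Targets": [".malicious-example.com", ".bad-example.net"],
            "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
            "GeneratedRulesType": "DENYLIST",
        }
    }
}
```

A leading dot (".malicious-example.com") matches the domain and its subdomains; matching on both TLS_SNI and HTTP_HOST covers HTTPS and plain HTTP traffic.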
Q25
An operations engineer must optimize data transfer costs for a workload that transfers 10 TB of data daily between EC2 instances in us-east-1 and eu-west-1. Which approach reduces cross-Region data transfer costs?
Implement data compression before cross-Region transfer and evaluate whether processing can be moved to the same Region as the data source to eliminate cross-Region transfer
B
Set up a Direct Connect connection between the two Regions for lower data transfer pricing
C
Use S3 Transfer Acceleration for cross-Region data transfers between EC2 instances
D
Deploy a Transit Gateway in each Region and configure inter-Region peering for reduced transfer costs
Correct Answer
Implement data compression before cross-Region transfer and evaluate whether processing can be moved to the same Region as the data source to eliminate cross-Region transfer
Explanation
Cross-Region data transfer is billed per GB, so compressing data before transfer directly reduces the billed volume. Better still, moving the processing into the same Region as the data source eliminates the cross-Region transfer, and its cost, entirely. Learn more: https://docs.aws.amazon.com/cur/latest/userguide/understanding-data-transfer.html
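A back-of-envelope estimate shows why compression matters at this volume. The $0.02/GB inter-Region rate and the 3:1 compression ratio below are illustrative assumptions, not quoted AWS pricing:

```python
# Rough monthly cost estimate; the per-GB rate and compression ratio
# are assumptions for illustration, not quoted AWS pricing.
def monthly_transfer_cost(tb_per_day, price_per_gb, compression_ratio=1.0):
    gb_per_day = tb_per_day * 1024 / compression_ratio
    return gb_per_day * price_per_gb * 30

baseline = monthly_transfer_cost(10, 0.02)           # uncompressed
compressed = monthly_transfer_cost(10, 0.02, 3.0)    # assumed 3:1 ratio
print(f"baseline ${baseline:,.0f}/mo, compressed ${compressed:,.0f}/mo")
```

Even under these rough assumptions, compression cuts the bill to a third, and co-locating the processing with the data drops it to zero.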
Q26
An operations engineer configures an Amazon CloudFront distribution with multiple behaviors. One behavior serves /api/* requests from an ALB origin and another serves /* from an S3 origin. API requests are returning 403 Forbidden errors. What is the MOST likely cause?
The cache behavior precedence is incorrect — the /* default behavior is matching API requests before the /api/* behavior, sending them to the S3 origin which returns 403
B
The ALB security group is blocking requests from CloudFront IP ranges
C
The CloudFront distribution does not have an SSL certificate configured for the API domain
D
The CloudFront Origin Access Control is configured on the ALB origin, which is not supported
Correct Answer
The cache behavior precedence is incorrect — the /* default behavior is matching API requests before the /api/* behavior, sending them to the S3 origin which returns 403
Explanation
CloudFront evaluates cache behaviors in the order they are listed in the distribution, not by path-pattern specificity; only the default behavior is guaranteed to be evaluated last. If a /* behavior is listed before /api/*, every API request matches /* first and is sent to the S3 origin, which returns 403 because the requested objects do not exist there or the bucket policy denies them. Moving the /api/* behavior above /* resolves the errors. Learn more: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html
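The ordering pitfall can be shown with a toy simulation of first-match routing; fnmatch stands in here for CloudFront's own wildcard matcher:

```python
import fnmatch

def match_behavior(path, behaviors):
    """Return the origin of the first behavior whose pattern matches,
    mimicking CloudFront's in-order evaluation of cache behaviors."""
    for pattern, origin in behaviors:
        if fnmatch.fnmatch(path, pattern):
            return origin
    return None

# Misordered: the broad /* pattern listed first shadows /api/*
bad_order = [("/*", "s3-origin"), ("/api/*", "alb-origin")]
good_order = [("/api/*", "alb-origin"), ("/*", "s3-origin")]

print(match_behavior("/api/users", bad_order))   # s3-origin -> 403 from S3
print(match_behavior("/api/users", good_order))  # alb-origin
```

With the broad pattern first, the more specific /api/* behavior is never reached, which is exactly the failure mode in the question.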
Q27
An operations engineer must configure a Network Load Balancer (NLB) to preserve the client's source IP address when forwarding traffic to EC2 targets. Which NLB configuration achieves this?
Use instance type targets in the NLB target group which preserves the client source IP by default, or enable Proxy Protocol v2 for IP type targets
B
Configure the NLB to use X-Forwarded-For headers like an Application Load Balancer
C
Enable cross-zone load balancing on the NLB to preserve client IP addresses across AZs
D
Configure a custom TCP health check that verifies client IP preservation on the target instances
Correct Answer
Use instance type targets in the NLB target group which preserves the client source IP by default, or enable Proxy Protocol v2 for IP type targets
Explanation
By default, an NLB with instance type targets preserves the client source IP. For IP type targets, client IP preservation must be enabled explicitly as a target group attribute; alternatively, Proxy Protocol v2 can be enabled so the NLB prepends a header carrying the original client IP for the target application to parse. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html
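For reference, the two relevant knobs are target group attributes, sketched below in the shape passed to elbv2 ModifyTargetGroupAttributes; treat the exact values as something to confirm against the ELB docs for your target type:

```python
# Target group attributes in the shape passed to elbv2
# ModifyTargetGroupAttributes; shown for an IP-type target group.
target_group_attributes = [
    # Explicitly preserve the client source IP for IP-type targets
    {"Key": "preserve_client_ip.enabled", "Value": "true"},
    # Alternative: have the NLB prepend a Proxy Protocol v2 header
    # and parse the client IP on the target instead
    {"Key": "proxy_protocol_v2.enabled", "Value": "false"},
]
```

The two mechanisms are alternatives: with preservation enabled the target sees the real source IP at the TCP layer, while Proxy Protocol v2 delivers it in-band and requires the target application to understand the header.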
Q28
An operations engineer needs to restrict outbound DNS queries from a VPC to only allow resolution of approved domains. All other DNS queries should be blocked. Which service provides this capability?
Configure Route 53 Resolver DNS Firewall with an allow list of approved domains and a default block rule for all other queries
B
Configure the VPC DHCP option set to use a custom DNS server that only resolves approved domains
C
Create network ACL rules that block outbound UDP/TCP port 53 traffic except to approved DNS servers
D
Configure AWS Network Firewall with DNS-based stateful rules to filter domain queries
Correct Answer
Configure Route 53 Resolver DNS Firewall with an allow list of approved domains and a default block rule for all other queries
Explanation
Amazon Route 53 Resolver DNS Firewall allows you to filter outbound DNS queries from a VPC. You can create domain lists of approved domains and configure rules to allow or block DNS resolution based on these lists. Learn more: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-dns-firewall.html
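As a sketch of the rule-group logic, two rules cover the requirement: an allow rule for the approved list evaluated first, then a catch-all block. The list names below are placeholders, and the exact API field names should be checked against the Route 53 Resolver DNS Firewall docs:

```python
# Sketch of two DNS Firewall rules in one rule group; the domain list
# IDs are placeholders. Lower priority values are evaluated first.
dns_firewall_rules = [
    {   # Allow the approved domains
        "Priority": 100,
        "Action": "ALLOW",
        "FirewallDomainListId": "approved-domains-list",
    },
    {   # Catch-all: a domain list containing just '*' blocks the rest
        "Priority": 200,
        "Action": "BLOCK",
        "BlockResponse": "NODATA",
        "FirewallDomainListId": "wildcard-all-list",
    },
]
```

The ordering is the whole trick: if the wildcard block rule had the lower priority, it would match every query before the allow list was consulted.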
Q29
An operations engineer needs to connect 5 VPCs and an on-premises data center in a hub-and-spoke topology. The on-premises connection uses two AWS Direct Connect links for redundancy. Which architecture provides centralized routing with high availability?
Deploy a Transit Gateway, attach all 5 VPCs, and configure two Direct Connect connections at different locations with transit virtual interfaces attached to the Transit Gateway for redundant on-premises connectivity
B
Create VPC peering between all 5 VPCs in a full mesh and configure two Site-to-Site VPN connections for the on-premises network
C
Deploy a virtual private gateway in each VPC and configure each with a Direct Connect private virtual interface
D
Use AWS Cloud WAN to manage the network topology and connect on-premises via a single Direct Connect gateway
Correct Answer
Deploy a Transit Gateway, attach all 5 VPCs, and configure two Direct Connect connections at different locations with transit virtual interfaces attached to the Transit Gateway for redundant on-premises connectivity
Explanation
AWS Transit Gateway acts as a hub connecting all VPCs. Two Direct Connect connections with two transit virtual interfaces provide redundancy. Each connection terminates at a different Direct Connect location for maximum resilience. Learn more: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html
Q30
An operations engineer is responsible for a workload running on EC2 Spot Instances. The team needs to handle Spot interruptions gracefully by draining connections and saving state before termination. Which approach enables this?
Configure an EventBridge rule to capture EC2 Spot Instance interruption warnings and trigger a Lambda function that initiates connection draining and state saving on the affected instance
B
Set the Spot Instance interruption behavior to hibernate instead of terminate to preserve instance state
C
Use a Spot Fleet with the maintain strategy to automatically replace interrupted instances without any application changes
D
Configure the Auto Scaling group to use capacity rebalancing which replaces Spot Instances before they are interrupted
Correct Answer
Configure an EventBridge rule to capture EC2 Spot Instance interruption warnings and trigger a Lambda function that initiates connection draining and state saving on the affected instance
Explanation
EC2 Spot Instances receive a two-minute warning before interruption via the instance metadata service. An application can poll the metadata endpoint or use EventBridge Spot interruption notices to trigger graceful shutdown procedures. Learn more: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-instance-termination-notices.html
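The event-driven path can be sketched as an EventBridge pattern plus a minimal handler; the draining/state-saving step is left as a placeholder since it is application-specific:

```python
# EventBridge event pattern matching Spot interruption warnings.
SPOT_INTERRUPTION_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Spot Instance Interruption Warning"],
}

def handler(event, context=None):
    """Hypothetical Lambda handler sketch: extract the instance ID from
    the interruption event so draining/state-saving can target it."""
    instance_id = event["detail"]["instance-id"]
    # ... initiate connection draining / state save for instance_id ...
    return {"draining": instance_id}

# Shape of the interruption event delivered by EventBridge (trimmed).
sample_event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Spot Instance Interruption Warning",
    "detail": {"instance-id": "i-0123456789abcdef0",
               "instance-action": "terminate"},
}
print(handler(sample_event))
```

Everything the handler kicks off must fit inside the two-minute warning window, which is why the cleanup logic is usually pre-staged on the instance and merely triggered here.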
Want More Practice?
These are just the free questions. Unlock the full AWS Certified CloudOps Engineer – Associate exam library with hundreds of additional questions, timed practice mode, and progress tracking.