Free Practice Questions AWS Certified Solutions Architect – Professional 60 Questions with Answers
FREE QUESTIONS

AWS Certified Solutions Architect – Professional
Practice Questions

60 free questions with correct answers and detailed explanations.

60 Free Questions
2 Free Exams
100% With Explanations

SAP-C02 Practice Set-01

30 questions
Q1
A multinational company has 200+ AWS accounts across multiple business units. Each business unit manages its own accounts independently. The central IT team needs to enforce consistent security baselines (CloudTrail enabled, S3 Block Public Access) across ALL accounts without manual setup. Which approach provides this at scale?
A Manually configure each account individually
B Deploy AWS Control Tower with mandatory guardrails to enforce security baselines across all accounts in the organization
C Use AWS Config in one account and share results
D Create IAM policies in each account
E Use AWS CloudFormation only
Correct Answer
Deploy AWS Control Tower with mandatory guardrails to enforce security baselines across all accounts in the organization
Explanation
AWS Control Tower with guardrails (preventive via SCPs and detective via Config rules) automatically enforces security baselines across all accounts in an AWS Organization. Landing Zone automates account provisioning with predefined security controls. Learn more: https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
Q2
A company connects 50 VPCs across 5 AWS Regions via a mesh of VPC peering connections. The network team reports that managing 1,225 peering connections is operationally unsustainable. Which architecture simplifies connectivity while supporting inter-Region traffic?
A Continue using VPC peering with automation
B Deploy AWS Transit Gateway in each Region with inter-Region Transit Gateway peering
C Use AWS Direct Connect for all VPC connectivity
D Create a single VPC for all workloads
E Use AWS Cloud WAN for global network management
Correct Answer
Deploy AWS Transit Gateway in each Region with inter-Region Transit Gateway peering
Explanation
AWS Transit Gateway provides hub-and-spoke connectivity within a Region. Transit Gateway peering connects TGWs across Regions. This reduces 1,225 peering connections to 50 TGW attachments + 10 inter-Region peering connections. Learn more: https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
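The connection counts in this scenario follow the full-mesh formula n(n−1)/2, and a quick calculation shows why the hub-and-spoke design scales better:

```python
def full_mesh_links(n: int) -> int:
    # Every pair of nodes needs its own connection: n * (n - 1) / 2
    return n * (n - 1) // 2

# Full mesh of 50 VPCs via VPC peering
print(full_mesh_links(50))  # 1225 peering connections

# Hub-and-spoke: each VPC attaches once to its regional TGW, and only
# the 5 regional TGWs are meshed with inter-Region peering
vpc_attachments = 50
tgw_peerings = full_mesh_links(5)
print(vpc_attachments + tgw_peerings)  # 60 managed connections
```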
Q3
A company uses AWS Organizations with 50 accounts. They need to centrally manage DNS for a hybrid environment where on-premises servers resolve AWS private hosted zones and VPCs resolve on-premises domains. Direct Connect is already configured. Which architecture provides bidirectional DNS resolution?
A Use public DNS for everything
B Configure Route 53 Resolver with inbound endpoints for on-premises queries and outbound endpoints for VPC-to-on-premises resolution in a shared services VPC
C Host DNS on EC2 instances
D Use CloudFront for DNS resolution
E Deploy to a single AZ
Correct Answer
Configure Route 53 Resolver with inbound endpoints for on-premises queries and outbound endpoints for VPC-to-on-premises resolution in a shared services VPC
Explanation
Route 53 Resolver inbound endpoints allow on-premises DNS servers to forward queries to Route 53. Outbound endpoints forward queries from VPCs to on-premises DNS servers. Both endpoint types live in a centralized shared services VPC, shared across accounts via RAM and reachable via Transit Gateway. Learn more: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html
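The two endpoints can be sketched as parameter sets for `route53resolver.create_resolver_endpoint`; the subnet and security group IDs below are hypothetical placeholders for resources in the shared services VPC:

```python
# Sketch of the two Resolver endpoints. In real use, pass each dict to
# boto3's route53resolver.create_resolver_endpoint(**params).
def resolver_endpoint(name: str, direction: str, subnet_ids: list) -> dict:
    assert direction in ("INBOUND", "OUTBOUND")
    return {
        "CreatorRequestId": f"{name}-request",
        "Name": name,
        "Direction": direction,                    # INBOUND: on-prem -> VPC
        "SecurityGroupIds": ["sg-dns-endpoints"],  # hypothetical; allow TCP/UDP 53
        "IpAddresses": [{"SubnetId": s} for s in subnet_ids],
    }

inbound = resolver_endpoint("onprem-to-aws", "INBOUND", ["subnet-a", "subnet-b"])
outbound = resolver_endpoint("aws-to-onprem", "OUTBOUND", ["subnet-a", "subnet-b"])
```

On-premises DNS servers then conditionally forward the private-zone domains to the inbound endpoint IPs, and Resolver rules route on-premises domains out through the outbound endpoint.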
Q4
A company has a multi-account AWS Organization. The security team requires that NO account can create public S3 buckets, even if a bucket policy accidentally allows public access. Which preventive control ensures this across all accounts?
A AWS Config rules to detect after creation
B Apply an SCP denying public S3 bucket creation and enable organization-level S3 Block Public Access
C Use Lambda to delete public buckets
D Rely on IAM policies in each account
E Use the root account
Correct Answer
Apply an SCP denying public S3 bucket creation and enable organization-level S3 Block Public Access
Explanation
An SCP that denies s3:PutBucketPublicAccessBlock and s3:PutBucketPolicy when public access is allowed, combined with enabling S3 Block Public Access at the organization level, prevents public bucket creation across all accounts. Learn more: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
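A minimal SCP sketch matching the explanation above: deny attempts to loosen Block Public Access settings. The statement ID is illustrative, and a real deployment would typically add further statements (e.g., for bucket policies) tailored to the organization:

```python
import json

# Hedged sketch of a preventive SCP; attach via organizations.create_policy
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLooseningPublicAccessBlock",
            "Effect": "Deny",
            "Action": [
                "s3:PutBucketPublicAccessBlock",
                "s3:PutAccountPublicAccessBlock",
            ],
            "Resource": "*",
        }
    ],
}
policy_document = json.dumps(scp)
```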
Q5
A company needs to establish a hybrid network connecting their on-premises data center to AWS. They require encrypted connectivity with 10 Gbps bandwidth and sub-5ms latency. Which TWO services should be combined? (Choose TWO.)
A AWS Direct Connect for dedicated 10 Gbps bandwidth with low latency
B Internet-based VPN for 10 Gbps
C Site-to-Site VPN over Direct Connect for encryption, or MACsec encryption on the Direct Connect connection
D VPC peering for hybrid connectivity
Correct Answers
AWS Direct Connect for dedicated 10 Gbps bandwidth with low latency
Site-to-Site VPN over Direct Connect for encryption, or MACsec encryption on the Direct Connect connection
Explanation
AWS Direct Connect provides dedicated 10 Gbps bandwidth with low latency. A Site-to-Site VPN over the Direct Connect public VIF (or MACsec encryption) adds encryption. Together they meet all requirements. Learn more: https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
Q6
A company runs workloads across 3 AWS Regions. Each Region has its own VPCs. The company needs all VPCs across all Regions to communicate with each other through a centrally managed global network with segment-level isolation. Which service provides this?
A VPC peering mesh
B AWS Cloud WAN for centrally managed global network with segment-level isolation
C Individual Transit Gateways without peering
D AWS Direct Connect Gateway only
E Disable monitoring
Correct Answer
AWS Cloud WAN for centrally managed global network with segment-level isolation
Explanation
AWS Cloud WAN provides a centralized dashboard to create a global network connecting VPCs across Regions with segment-level isolation (e.g., production vs development segments). It manages Transit Gateway peering automatically. Learn more: https://docs.aws.amazon.com/vpc/latest/cloudwan/what-is-cloudwan.html
Q7
A company needs to share an Aurora database cluster with a different AWS account in the same organization without creating database copies. Which AWS feature enables this cross-account resource sharing?
A Create database replicas in each account
B Use AWS Resource Access Manager to share the Aurora cluster across accounts in the organization
C Copy snapshots between accounts
D Set up cross-account VPC peering and direct connection
E Switch to on-premises
Correct Answer
Use AWS Resource Access Manager to share the Aurora cluster across accounts in the organization
Explanation
AWS Resource Access Manager (RAM) enables sharing Aurora DB clusters with other accounts in the organization. The consuming account creates a clone or connects to the shared cluster. Learn more: https://docs.aws.amazon.com/ram/latest/userguide/what-is.html
Q8
A company needs to build a serverless event-driven architecture that processes orders. When an order is placed, it must trigger inventory check, payment processing, and notification sending — all independently. Which TWO services create this fan-out pattern? (Choose TWO.)
A Amazon SNS for publishing the order event to multiple subscribers
B Direct Lambda invocations from the application
C Amazon SQS queues subscribed per service for reliable, independent processing
D A shared database for event passing
Correct Answers
Amazon SNS for publishing the order event to multiple subscribers
Amazon SQS queues subscribed per service for reliable, independent processing
Explanation
An SNS topic receives the order event, and an SQS queue for each service subscribes to the topic. Each queue triggers its own Lambda function independently, so a failure in one service doesn't affect the others. Learn more: https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
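The fan-out wiring can be sketched as follows; the ARNs and service names are hypothetical. Each queue needs an access policy letting the topic deliver to it before the subscription works:

```python
# Hedged sketch of SNS -> SQS fan-out with hypothetical ARNs.
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:orders"  # hypothetical

def queue_policy(queue_arn: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            # Only accept messages from the orders topic
            "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
        }],
    }

queues = [f"arn:aws:sqs:us-east-1:111122223333:{svc}"
          for svc in ("inventory", "payment", "notification")]
policies = {q: queue_policy(q) for q in queues}
# Then subscribe each queue:
#   sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=q)
```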
Q9
A company's application requires a database that provides single-digit millisecond latency at any scale, supports both key-value and document models, and offers built-in caching. Which database service meets these requirements?
A Amazon RDS for MySQL
B Amazon DynamoDB with DAX for microsecond caching
C Amazon ElastiCache standalone
D Amazon DocumentDB
Correct Answer
Amazon DynamoDB with DAX for microsecond caching
Explanation
DynamoDB provides single-digit millisecond latency, supports key-value and document data models, and integrates with DAX for microsecond caching. It scales seamlessly to any workload size. Learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
Q10
A company needs to implement a data warehouse that can query both stored data and data in their S3 data lake without moving data between them. Analysts use standard SQL. Which solution provides this federated query capability?
A Copy S3 data to Redshift with COPY
B Amazon Redshift with Redshift Spectrum for federated queries across Redshift cluster data and S3 data lake
C Use Athena for both
D Use EMR for all queries
Correct Answer
Amazon Redshift with Redshift Spectrum for federated queries across Redshift cluster data and S3 data lake
Explanation
Amazon Redshift with Redshift Spectrum queries data directly in S3 using the Redshift SQL engine. This enables joining Redshift tables with S3 data lake data without ETL or data movement. Learn more: https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html
Q11
A company needs to implement a solution that automatically scales their containerized microservices application. The application consists of 20 services with varying resource requirements. Some services need GPU access. The team has Kubernetes expertise. Which container orchestration solution fits?
A Deploy on EC2 with Docker Compose
B Amazon EKS with managed node groups (including GPU instances) and Karpenter for automatic node provisioning
C ECS on Fargate only
D Lambda for all services
Correct Answer
Amazon EKS with managed node groups (including GPU instances) and Karpenter for automatic node provisioning
Explanation
Amazon EKS provides managed Kubernetes that supports GPU instance types for ML workloads, auto-scaling, and handles 20+ services. Karpenter or Cluster Autoscaler manages node provisioning. Learn more: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
Q12
A company needs to design a solution where multiple applications can search and filter log data in near-real-time. Log data arrives from CloudWatch Logs across 50 accounts. Analysts need full-text search and dashboards. Which centralized logging architecture fits?
A Store all logs in S3 and use Athena
B Stream logs via CloudWatch subscription filters to Kinesis Firehose delivering to Amazon OpenSearch Service for full-text search and dashboards
C Use CloudWatch Logs Insights only
D Store logs in DynamoDB
Correct Answer
Stream logs via CloudWatch subscription filters to Kinesis Firehose delivering to Amazon OpenSearch Service for full-text search and dashboards
Explanation
CloudWatch Logs subscription filters stream logs to a centralized Kinesis Data Firehose, which delivers to Amazon OpenSearch Service. OpenSearch provides full-text search, filtering, and Dashboards (Kibana) for visualization. Learn more: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html
Q13
A company is designing a machine learning inference pipeline. The model receives 50,000 requests per second with sub-10ms latency requirements. The model is 2 GB. Which deployment approach provides the required performance?
A Lambda with the model in a container
B SageMaker real-time inference endpoints with GPU instances and auto-scaling for low-latency, high-throughput inference
C SageMaker Batch Transform
D EC2 instances with custom serving
Correct Answer
SageMaker real-time inference endpoints with GPU instances and auto-scaling for low-latency, high-throughput inference
Explanation
SageMaker real-time inference endpoints with auto-scaling on GPU instances keep the model in memory for sub-10ms latency. Multiple instances behind an endpoint handle 50K requests/sec. Learn more: https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html
Q14
A company runs an Auto Scaling group with a step scaling policy. During a sudden traffic spike, instances take 5 minutes to launch and become healthy. During this time, existing instances are overwhelmed. How should the architect reduce the scaling lag?
A Increase instance size permanently
B Configure warm pools for pre-initialized instances and enable predictive scaling to anticipate traffic patterns
C Disable Auto Scaling and overprovision
D Reduce health check intervals
E Use Lambda instead of EC2
Correct Answer
Configure warm pools for pre-initialized instances and enable predictive scaling to anticipate traffic patterns
Explanation
Warm pools maintain pre-initialized stopped instances that can start in seconds instead of minutes. Predictive scaling pre-provisions based on traffic patterns. Together they virtually eliminate scaling lag. Learn more: https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html
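The two settings can be sketched as parameter sets for `autoscaling.put_warm_pool` and `autoscaling.put_scaling_policy`; the ASG and policy names, target value, and buffer time are illustrative assumptions:

```python
# Hedged sketch of the warm pool and predictive scaling configuration.
warm_pool = {
    "AutoScalingGroupName": "web-asg",  # hypothetical
    "MinSize": 5,                       # keep 5 pre-initialized instances
    "PoolState": "Stopped",             # stopped instances resume in seconds
}

predictive_policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "predict-traffic",    # hypothetical
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [{
            "TargetValue": 60.0,        # target 60% average CPU
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        # launch capacity 5 minutes ahead of the forecasted need
        "SchedulingBufferTime": 300,
    },
}
# autoscaling.put_warm_pool(**warm_pool)
# autoscaling.put_scaling_policy(**predictive_policy)
```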
Q15
A company's existing monitoring relies on CloudWatch basic metrics (5-minute intervals). They need 1-second metrics for a latency-sensitive trading application. How should the architect enhance monitoring?
A Keep 5-minute metrics
B Implement high-resolution custom CloudWatch metrics with 1-second granularity and CloudWatch agent for detailed OS metrics
C Use third-party monitoring only
D Check metrics manually
Correct Answer
Implement high-resolution custom CloudWatch metrics with 1-second granularity and CloudWatch agent for detailed OS metrics
Explanation
CloudWatch high-resolution custom metrics (1-second granularity) combined with CloudWatch agent for OS-level metrics provide the detailed monitoring needed. High-resolution alarms can trigger within 10 seconds. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
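Publishing a high-resolution metric is a single parameter on `put_metric_data`; the namespace and metric name below are hypothetical:

```python
# Minimal sketch of a 1-second-resolution custom metric.
metric = {
    "Namespace": "Trading/Latency",         # hypothetical
    "MetricData": [{
        "MetricName": "OrderRoundTripMs",   # hypothetical
        "Value": 3.2,
        "Unit": "Milliseconds",
        "StorageResolution": 1,  # 1 = high resolution (1-second granularity)
    }],
}
# cloudwatch.put_metric_data(**metric)
```

Standard-resolution metrics use `StorageResolution: 60` (the default); only high-resolution metrics support sub-minute alarm periods.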
Q16
A company has a Lambda function that processes S3 events. The function occasionally receives duplicate events and processes them twice. The architect needs to ensure exactly-once processing without changing the event source. What should they implement?
A Accept duplicate processing
B Implement idempotent processing using DynamoDB conditional writes to track and skip duplicate event IDs
C Disable S3 event notifications
D Use SQS FIFO between S3 and Lambda
Correct Answer
Implement idempotent processing using DynamoDB conditional writes to track and skip duplicate event IDs
Explanation
Implementing idempotent processing using a DynamoDB table to track processed event IDs (S3 event notification ID) prevents duplicate processing. Conditional writes ensure only the first processing succeeds. Learn more: https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
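The dedupe step can be sketched as a conditional `put_item`, assuming a hypothetical "processed-events" table keyed on the event ID. The condition makes the first writer win; a duplicate raises `ConditionalCheckFailedException` and is skipped:

```python
# Hedged sketch of the idempotency guard (table name is hypothetical).
def dedupe_put(event_id: str) -> dict:
    return {
        "TableName": "processed-events",
        "Item": {"event_id": {"S": event_id}},
        # Succeeds only if no item with this key exists yet
        "ConditionExpression": "attribute_not_exists(event_id)",
    }

# In the Lambda handler (sketch of the real calls):
#   try:
#       dynamodb.put_item(**dedupe_put(event_id))
#       process(record)   # first delivery: do the work
#   except dynamodb.exceptions.ConditionalCheckFailedException:
#       pass              # duplicate delivery: already processed, skip
```

A TTL attribute on the tracking table keeps it from growing unbounded.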
Q17
A company is migrating 500 on-premises servers to AWS. They need to discover server dependencies, right-size recommendations, and a migration plan. Which AWS service provides discovery and planning capabilities?
A Manually inventory all servers
B Use AWS Application Discovery Service for automated server discovery and Migration Hub for centralized planning and tracking
C Deploy EC2 instances and guess configurations
D Use CloudFormation to replicate
Correct Answer
Use AWS Application Discovery Service for automated server discovery and Migration Hub for centralized planning and tracking
Explanation
AWS Application Discovery Service (using Discovery Agents or Agentless Discovery Connector) collects server configuration, performance, and dependency data. Migration Hub provides a centralized dashboard for tracking. Learn more: https://docs.aws.amazon.com/migrationhub/latest/ug/what-is-mhub.html
Q18
A company needs to migrate a 50 TB Oracle database to Aurora PostgreSQL. The migration must minimize downtime. The Oracle database has complex stored procedures and PL/SQL code. Which migration approach handles both schema conversion and continuous replication?
A Manual export/import with downtime
B Use AWS Schema Conversion Tool (SCT) for schema and code conversion, then AWS DMS with CDC for full load and continuous replication with minimal cutover downtime
C pg_dump and pg_restore
D Recreate the database from scratch
E Use Babelfish for Aurora PostgreSQL
Correct Answer
Use AWS Schema Conversion Tool (SCT) for schema and code conversion, then AWS DMS with CDC for full load and continuous replication with minimal cutover downtime
Explanation
AWS SCT converts Oracle schemas and PL/SQL to PostgreSQL. AWS DMS performs the full load migration and then continuous replication (CDC) until cutover. This minimizes downtime to the brief cutover window. Learn more: https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
Q19
A company needs to migrate 200 TB of data from an on-premises NAS to S3. The network connection is 1 Gbps (would take ~18 days). They need the data in AWS within 1 week. Which service provides the fastest migration?
A Use the 1 Gbps connection with multi-part upload
B Order 3 AWS Snowball Edge devices for parallel offline data transfer, completing within 1 week
C Use S3 Transfer Acceleration only
D Compress the data and use Direct Connect
Correct Answer
Order 3 AWS Snowball Edge devices for parallel offline data transfer, completing within 1 week
Explanation
AWS Snowball Edge devices hold 80 TB each, so ordering 3 allows the 200 TB to be loaded in parallel. The devices are then shipped to AWS for S3 ingestion; total time is typically 5 to 7 days including shipping. Learn more: https://docs.aws.amazon.com/snowball/latest/developer-guide/whatissnowball.html
Q20
A company is modernizing their monolithic Java application. They want to break it into microservices running on containers. The team has no Kubernetes experience and wants the simplest container management. Which AWS service fits?
A Deploy on EC2 with Docker manually
B Amazon ECS with Fargate for serverless container management without Kubernetes complexity
C EKS with self-managed nodes
D Lambda for all microservices
Correct Answer
Amazon ECS with Fargate for serverless container management without Kubernetes complexity
Explanation
Amazon ECS with AWS Fargate provides serverless container management without Kubernetes complexity. ECS handles orchestration and Fargate eliminates server management. AWS Copilot simplifies the developer experience. Learn more: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
Q21
A company's CloudWatch bill is $3,000/month for custom metrics. Analysis shows they publish 500 custom metrics, but only 50 are used in dashboards or alarms. How should the architect reduce costs?
A Keep all 500 metrics
B Audit and remove the 450 unused custom metrics to reduce CloudWatch custom metrics charges by 90%
C Disable all custom metrics
D Switch to third-party monitoring
Correct Answer
Audit and remove the 450 unused custom metrics to reduce CloudWatch custom metrics charges by 90%
Explanation
Removing the 450 unused custom metrics eliminates 90% of the custom metrics cost. CloudWatch charges per metric per month regardless of whether it's used. The team should audit and remove unused metrics. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
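The audit itself is a set difference between metrics being published and metrics actually referenced. In practice the inputs would come from `cloudwatch.list_metrics()` and `describe_alarms()` / `list_dashboards()`; here they are plain sets for illustration:

```python
# Sketch of the audit logic with synthetic metric names.
def unused_metrics(published: set, referenced: set) -> set:
    # Metrics that are paid for but never read by an alarm or dashboard
    return published - referenced

published = {f"metric-{i}" for i in range(500)}
referenced = {f"metric-{i}" for i in range(50)}
to_remove = unused_metrics(published, referenced)
print(len(to_remove))  # 450 candidates to stop publishing
```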
Q22
A company uses Security Hub to aggregate findings. They receive thousands of findings daily but most are low-severity informational findings that create noise. The security team wants to focus on critical and high-severity findings only. Which approach helps?
A Process all findings manually
B Configure Security Hub filters and custom insights to suppress low-severity findings and focus on critical/high-severity issues
C Disable Security Hub
D Ignore all findings
Correct Answer
Configure Security Hub filters and custom insights to suppress low-severity findings and focus on critical/high-severity issues
Explanation
Security Hub supports custom insights and filters. Creating a filter that suppresses informational and low-severity findings reduces noise. Automated actions via EventBridge can handle specific finding types. Learn more: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-insights.html
Q23
A company wants to reduce their AWS costs by at least 25%. Which TWO strategies provide the MOST impactful savings for a company with a mix of steady and variable EC2 workloads? (Choose TWO.)
A Compute Savings Plans for the steady-state baseline workload (up to 66% savings)
B Larger instance types
C Spot Instances for fault-tolerant variable workloads (up to 90% savings)
D Keep all On-Demand
Correct Answers
Compute Savings Plans for the steady-state baseline workload (up to 66% savings)
Spot Instances for fault-tolerant variable workloads (up to 90% savings)
Explanation
Savings Plans cover the steady baseline at up to 66% discount. Spot Instances handle the variable workload at up to 90% discount. Together they provide the largest savings for mixed workloads. Learn more: https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
Q24
A company has an ECS service running on EC2. They want to improve cost efficiency by allowing multiple services to share the same instances more efficiently. The current bin-packing is suboptimal. Which ECS feature optimizes task placement?
A Add more instances
B Configure ECS Capacity Providers with binpack placement strategy and managed scaling for optimal resource utilization
C Use spread placement
D Migrate to Fargate entirely
Correct Answer
Configure ECS Capacity Providers with binpack placement strategy and managed scaling for optimal resource utilization
Explanation
ECS Capacity Providers with managed scaling and the binpack placement strategy optimize task placement across instances. This maximizes resource utilization by placing tasks on the most efficiently packed instances. Learn more: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html
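A sketch of the relevant service settings; the service and capacity provider names are hypothetical, and the dict would be passed to `ecs.create_service()` (or `update_service()`):

```python
# Hedged sketch of binpack placement with a capacity provider strategy.
service = {
    "serviceName": "orders-api",  # hypothetical
    "capacityProviderStrategy": [
        {"capacityProvider": "ec2-managed", "weight": 1}  # hypothetical name
    ],
    # binpack on memory packs tasks onto the fewest instances possible,
    # letting managed scaling drain and terminate the rest
    "placementStrategy": [{"type": "binpack", "field": "memory"}],
}
```

Binpacking on `cpu` instead of `memory` is the other common choice; pick the dimension that constrains your tasks.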
Q25
A company needs to migrate 5 PB of data from their on-premises Hadoop cluster to S3. Network bandwidth is limited. Which service handles petabyte-scale offline data transfer?
A Transfer over the internet
B AWS Snowmobile for petabyte-scale offline data transfer (or multiple Snowball Edge devices)
C S3 Transfer Acceleration
D DataSync over Direct Connect
Correct Answer
AWS Snowmobile for petabyte-scale offline data transfer (or multiple Snowball Edge devices)
Explanation
AWS Snowmobile (up to 100 PB) or multiple Snowball Edge devices handle petabyte-scale offline transfer. For 5 PB, a Snowmobile is ideal. The data is loaded locally, then physically transported to AWS. Learn more: https://docs.aws.amazon.com/snowball/latest/developer-guide/whatissnowball.html
Q26
A company is migrating an on-premises Windows file server (10 TB, SMB protocol) to AWS. Users must access files through the same SMB interface. Which AWS storage service provides this?
A Amazon S3 with gateway
B Amazon FSx for Windows File Server for managed SMB file storage with AD integration
C Amazon EFS
D Amazon EBS
Correct Answer
Amazon FSx for Windows File Server for managed SMB file storage with AD integration
Explanation
Amazon FSx for Windows File Server provides fully managed Windows file storage with native SMB protocol support, Active Directory integration, and Windows NTFS features. Learn more: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
Q27
A company wants to containerize their Java application. The application uses Apache Tomcat and a MySQL database. They want to preserve the Tomcat configuration but run on managed containers. Which AWS service provides this with LEAST code changes?
A Rewrite in Node.js
B Containerize the Tomcat application in a Docker image and run on ECS Fargate with Aurora MySQL for the database
C Deploy on EC2 with manual Tomcat setup
D Use Lambda for the Java application
E Use Elastic Beanstalk Docker platform
Correct Answer
Containerize the Tomcat application in a Docker image and run on ECS Fargate with Aurora MySQL for the database
Explanation
Packaging the Tomcat application in a Docker image preserves the existing Tomcat configuration with minimal code changes, and running it on ECS with Fargate eliminates server management. The MySQL database migrates to Aurora MySQL. Learn more: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
Q28
A company is evaluating whether to refactor or replatform their legacy applications. They need a tool that analyzes their source code and provides recommendations for modernization paths. Which AWS service provides this code analysis?
A Manual code review only
B Use AWS Migration Hub Refactor Spaces for analyzing applications and managing the modernization process
C Rewrite all applications from scratch
D Keep all applications as monoliths
Correct Answer
Use AWS Migration Hub Refactor Spaces for analyzing applications and managing the modernization process
Explanation
AWS Microservice Extractor for .NET analyzes monolithic .NET applications and recommends microservice boundaries. AWS Migration Hub Refactor Spaces helps manage the refactoring process. Learn more: https://docs.aws.amazon.com/migrationhub-refactor-spaces/latest/userguide/what-is-mhub-refactor-spaces.html
Q29
A company is migrating their CI/CD pipeline from Jenkins to AWS. They want a managed CI/CD solution that supports building, testing, and deploying applications. Which combination of AWS services replaces Jenkins?
A Deploy Jenkins on EC2
B AWS CodePipeline (orchestration) + CodeBuild (build/test) + CodeDeploy (deployment) as a fully managed CI/CD replacement for Jenkins
C Use Lambda for CI/CD
D Manual deployments
Correct Answer
AWS CodePipeline (orchestration) + CodeBuild (build/test) + CodeDeploy (deployment) as a fully managed CI/CD replacement for Jenkins
Explanation
CodePipeline orchestrates the pipeline. CodeBuild replaces Jenkins build agents. CodeDeploy handles deployment. Together they provide a fully managed CI/CD solution. Learn more: https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html
Q30
A company is designing a solution for real-time inventory management. The system must handle 50,000 inventory updates per second. Each update must be reflected in the database within 10ms. Strong consistency is required for read-after-write. Which database fits?
A Amazon RDS
B Amazon DynamoDB with On-Demand capacity and strongly consistent reads for real-time, high-throughput inventory updates
C Amazon ElastiCache
D Amazon Aurora
Correct Answer
Amazon DynamoDB with On-Demand capacity and strongly consistent reads for real-time, high-throughput inventory updates
Explanation
DynamoDB in On-Demand mode handles 50K+ writes/sec with single-digit millisecond latency. Strongly consistent reads (ConsistentRead=true) ensure read-after-write consistency. No capacity planning needed. Learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
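The two settings can be sketched as parameter sets for `dynamodb.create_table()` and `dynamodb.get_item()`; the table name and key are hypothetical:

```python
# Hedged sketch of an On-Demand table and a strongly consistent read.
table = {
    "TableName": "inventory",  # hypothetical
    "AttributeDefinitions": [{"AttributeName": "sku", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "sku", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",  # On-Demand capacity mode
}

read = {
    "TableName": "inventory",
    "Key": {"sku": {"S": "ABC-123"}},
    "ConsistentRead": True,  # read-after-write consistency
}
# dynamodb.create_table(**table)
# dynamodb.get_item(**read)
```

Strongly consistent reads cost twice the read units of eventually consistent ones and are served only from the leader node, a trade-off worth noting on the exam.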

SAP-C02 Practice Set-02

30 questions
Q1
A company has 100 VPCs across 4 Regions connected via Transit Gateway. They need to implement centralized network monitoring that captures traffic patterns, top talkers, and anomalies across all VPCs. Which solution provides this visibility?
A Security groups analysis only
B Centralize VPC Flow Logs from all VPCs to S3, use Athena for traffic analysis, CloudWatch Contributor Insights for top talkers, and Detective for anomalies
C CloudTrail only
D Ping tests between VPCs
E Use AWS CloudFormation only
Correct Answer
Centralize VPC Flow Logs from all VPCs to S3, use Athena for traffic analysis, CloudWatch Contributor Insights for top talkers, and Detective for anomalies
Explanation
VPC Flow Logs from all VPCs are delivered to a centralized S3 bucket. Amazon Athena queries the logs for traffic patterns, CloudWatch Contributor Insights identifies top talkers, and Amazon Detective surfaces anomalies. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
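A "top talkers" analysis over the centralized logs might look like the query below, assuming a hypothetical Athena table named `vpc_flow_logs` with the standard flow log fields (`srcaddr`, `dstaddr`, `bytes`):

```python
# Hedged sketch of an Athena query over centralized flow logs.
TOP_TALKERS_SQL = """
SELECT srcaddr, dstaddr, SUM(bytes) AS total_bytes
FROM vpc_flow_logs
GROUP BY srcaddr, dstaddr
ORDER BY total_bytes DESC
LIMIT 20
"""
# athena.start_query_execution(
#     QueryString=TOP_TALKERS_SQL,
#     QueryExecutionContext={"Database": "network_logs"},   # hypothetical
#     ResultConfiguration={"OutputLocation": "s3://query-results/"},
# )
```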
Q2
A company needs to implement a shared VPC model where a central networking team owns the VPC and subnets, but application teams deploy resources into shared subnets in their own accounts. Which AWS feature enables this?
A VPC peering for each account
B VPC Sharing via Resource Access Manager allowing the central team to share subnets with application accounts
C Transit Gateway only
D Create separate VPCs per account
E Duplicate the VPC in each account
Correct Answer
VPC Sharing via Resource Access Manager allowing the central team to share subnets with application accounts
Explanation
VPC Sharing using Resource Access Manager allows the VPC owner to share subnets with other accounts. Participant accounts deploy resources into shared subnets but don't manage the networking infrastructure. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html
Q3
A company's security team needs to implement a solution that detects and responds to suspicious AWS API activity — such as an EC2 instance making API calls it shouldn't. Which service provides this ML-based threat detection?
A CloudWatch alarms on API calls
B Amazon GuardDuty for ML-based threat detection analyzing CloudTrail, VPC Flow Logs, and DNS queries
C AWS Config rules
D Manual log review
E Deploy to a single AZ
Correct Answer
Amazon GuardDuty for ML-based threat detection analyzing CloudTrail, VPC Flow Logs, and DNS queries
Explanation
Amazon GuardDuty uses ML to analyze CloudTrail, VPC Flow Logs, and DNS logs for threat detection. It identifies unauthorized API calls, compromised instances, malicious IP communications, and privilege escalation. Learn more: https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html
Q4
A company is designing a multi-account strategy. Which TWO organizational units (OUs) are considered best practices for the foundational structure? (Choose TWO.)
A Security OU for centralized security tooling (log archive, audit accounts)
B A single OU for everything
C Infrastructure/Shared Services OU for common networking and shared resources
D One OU per employee
Correct Answers
Security OU for centralized security tooling (log archive, audit accounts)
Infrastructure/Shared Services OU for common networking and shared resources
Explanation
A Security OU hosts centralized security tooling (log archive, security audit accounts). An Infrastructure/Shared Services OU hosts shared networking, DNS, and common services. Both are foundational. Learn more: https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html
Q5
A company uses AWS Config to monitor compliance across 50 accounts. The security team needs a single dashboard showing compliance status for all accounts. Which configuration provides this centralized view?
A Check each account individually
B Configure an AWS Config Aggregator to collect compliance data from all accounts into a centralized dashboard
C Use CloudTrail for compliance
D Email reports from each account
E Use the root account
Correct Answer
Configure an AWS Config Aggregator to collect compliance data from all accounts into a centralized dashboard
Explanation
AWS Config Aggregator collects compliance data from multiple accounts and Regions into a single aggregator account. The dashboard shows organization-wide compliance posture. Learn more: https://docs.aws.amazon.com/config/latest/developerguide/aggregate-data.html
Q6
A company needs to implement network segmentation within a VPC. Different application tiers (web, app, database) must be isolated. The database tier should only accept traffic from the app tier. Which approach provides this micro-segmentation?
A Use separate VPCs per tier
B Configure security groups with tiered rules: database SG allows traffic only from app tier SG, app SG allows only from web tier SG
C NACLs only
D A single security group for all tiers
E Disable monitoring
Correct Answer
Configure security groups with tiered rules: database SG allows traffic only from app tier SG, app SG allows only from web tier SG
Explanation
Security groups provide instance-level network segmentation. The database SG allows inbound only from the app SG. The app SG allows inbound only from the web SG. This creates tiered micro-segmentation. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
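The tiered pattern can be sketched as boto3-style ingress parameters in which each tier's security group references the SG of the tier above it. The SG IDs and ports are placeholders, and no API call is made:

```python
# Sketch of tiered security-group rules as parameters for
# ec2.authorize_security_group_ingress. SG IDs and ports are placeholders.
web_sg, app_sg, db_sg = "sg-web00000", "sg-app00000", "sg-db000000"

def ingress_from_sg(target_sg, source_sg, port):
    """Allow `port` on `target_sg` only from members of `source_sg`."""
    return {
        "GroupId": target_sg,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Referencing a source SG (not a CIDR) is what creates the tiering.
            "UserIdGroupPairs": [{"GroupId": source_sg}],
        }],
    }

rules = [
    ingress_from_sg(app_sg, web_sg, 8080),  # app tier accepts only web tier
    ingress_from_sg(db_sg, app_sg, 3306),   # database accepts only app tier
]
print([r["GroupId"] for r in rules])
```

Because the rules reference group IDs rather than IP ranges, instances can scale in and out of each tier without any rule changes.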
Q7
A company needs to implement a solution where specific AWS Regions are completely blocked for all accounts in their organization. No resources should be deployable in restricted Regions. Which control achieves this?
A IAM policies per user
Apply an SCP denying all actions in restricted Regions using aws:RequestedRegion conditions across the organization
C AWS Config rules
D CloudWatch alarms
E Switch to on-premises
Correct Answer
Apply an SCP denying all actions in restricted Regions using aws:RequestedRegion conditions across the organization
Explanation
An SCP with a Deny for all actions when aws:RequestedRegion is in the restricted Region list blocks all resource creation in those Regions across all member accounts. Learn more: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.html
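A minimal version of such an SCP, built here as a Python dict (the Region list and statement ID are illustrative placeholders):

```python
import json

# Hypothetical SCP denying all actions in a restricted-Region list.
# Region names and the Sid are placeholders, not from the question.
restrict_regions_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRestrictedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:RequestedRegion": ["ap-east-1", "me-south-1"]
            }
        },
    }],
}
print(json.dumps(restrict_regions_scp, indent=2))
```

A production policy typically also exempts global services (for example via `NotAction`), since some of their calls resolve to us-east-1 regardless of where the user works.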
Q8
A company has 100 accounts generating AWS Cost and Usage Reports (CUR). Each account's report is in its own S3 bucket. The finance team needs a consolidated view. How should the architect centralize this?
A Manually combine 100 reports
Enable an Organization-level CUR delivering consolidated cost data for all accounts to a centralized S3 bucket
C Use AWS Budgets for each account
D Query each bucket separately
Correct Answer
Enable an Organization-level CUR delivering consolidated cost data for all accounts to a centralized S3 bucket
Explanation
Enabling CUR at the Organization level delivers a single consolidated report to a centralized S3 bucket. This report includes itemized costs for all member accounts. Learn more: https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html
Q9
A company has on-premises DNS resolving internal domains. They need AWS VPCs to resolve these domains via Direct Connect. Which Route 53 Resolver configuration enables this?
A Use public DNS servers
Configure Route 53 Resolver outbound endpoints with forwarding rules to send internal domain queries to on-premises DNS via Direct Connect
C Host DNS on EC2 in the VPC
D Use CloudFront for DNS
Correct Answer
Configure Route 53 Resolver outbound endpoints with forwarding rules to send internal domain queries to on-premises DNS via Direct Connect
Explanation
Route 53 Resolver outbound endpoints forward DNS queries from VPCs to on-premises DNS servers via Direct Connect. Forwarding rules specify which domains (e.g., corp.internal) are forwarded. Learn more: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-outbound-queries.html
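A forwarding rule of this kind can be sketched as boto3-style parameters for `create_resolver_rule`. The domain, endpoint ID, and on-premises DNS IPs are placeholders, and no API call is made:

```python
# Sketch of parameters for route53resolver.create_resolver_rule (boto3).
# All identifiers below are illustrative placeholders.
forward_rule = {
    "CreatorRequestId": "corp-internal-fwd-001",
    "Name": "forward-corp-internal",
    "RuleType": "FORWARD",
    "DomainName": "corp.internal",                         # domain to forward
    "ResolverEndpointId": "rslvr-out-0123456789abcdef0",   # outbound endpoint
    "TargetIps": [
        {"Ip": "10.10.0.10", "Port": 53},  # on-prem DNS, reachable over DX
        {"Ip": "10.10.0.11", "Port": 53},
    ],
}
print(forward_rule["DomainName"], "->",
      [t["Ip"] for t in forward_rule["TargetIps"]])
```

Associating the rule with each VPC then sends only `corp.internal` queries on-premises; everything else continues to resolve through the VPC's default resolver.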
Q10
A company uses AWS SSO (IAM Identity Center) to manage access across 200 accounts. They integrate with their on-premises Active Directory. Users should authenticate once and access any permitted AWS account. Which configuration enables this SSO?
A Create IAM users in every account
Configure AWS IAM Identity Center with AD integration, permission sets, and group-based account assignments for single sign-on
C Share root credentials
D Use Cognito for employee SSO
Correct Answer
Configure AWS IAM Identity Center with AD integration, permission sets, and group-based account assignments for single sign-on
Explanation
AWS IAM Identity Center integrates with on-premises AD (via AD Connector or AWS Managed AD). Permission sets define access levels. Users authenticate once and assume roles in any account based on their group membership. Learn more: https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html
Q11
A company is designing a real-time fraud detection system. Transactions (10K/sec) must be analyzed against ML models within 100ms. Flagged transactions trigger alerts. Which architecture provides this latency?
A Batch processing with EMR
Kinesis Data Streams → Lambda calling SageMaker real-time endpoint for ML inference → SNS for alerts, all within 100ms
C SQS with scheduled processing
D S3 with Athena analysis
Correct Answer
Kinesis Data Streams → Lambda calling SageMaker real-time endpoint for ML inference → SNS for alerts, all within 100ms
Explanation
Kinesis Data Streams ingests transactions at high throughput. A Lambda consumer (or Kinesis Data Analytics) invokes the SageMaker real-time endpoint for each transaction, and SNS fans out alerts for flagged ones. Each stage adds only milliseconds of latency, so the pipeline can stay within the 100 ms budget. Learn more: https://docs.aws.amazon.com/streams/latest/dev/introduction.html
Q12
A company needs to design a multi-Region active-active web application. Users should be routed to the nearest Region. If a Region fails, all traffic must automatically failover to the surviving Region. Which routing configuration achieves this?
A Simple routing to one Region
Route 53 latency-based routing with health checks for automatic failover to the nearest healthy Region
C CloudFront with single origin
D Manual DNS updates during failure
E Weighted routing 50/50 without health checks
Correct Answer
Route 53 latency-based routing with health checks for automatic failover to the nearest healthy Region
Explanation
Route 53 latency-based routing directs users to the nearest healthy Region. Health checks detect Region failures. When a Region's health check fails, Route 53 stops routing traffic there, achieving automatic failover. Learn more: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-latency.html
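One latency-based record set per Region, attached to a health check, can be sketched as a fragment of a `change_resource_record_sets` request. Hostnames, health check IDs, and ALB DNS names below are placeholders:

```python
# Sketch of latency-based ResourceRecordSets (one per Region) for
# route53.change_resource_record_sets. All names/IDs are placeholders.
def latency_record(region, alb_dns, health_check_id):
    return {
        "Name": "app.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": f"app-{region}",   # must be unique per record
        "Region": region,                    # latency-based routing key
        "HealthCheckId": health_check_id,    # failing check removes the Region
        "ResourceRecords": [{"Value": alb_dns}],
    }

record_sets = [
    latency_record("us-east-1", "alb-use1.example.com", "hc-1111"),
    latency_record("eu-west-1", "alb-euw1.example.com", "hc-2222"),
]
print([r["SetIdentifier"] for r in record_sets])
```

When the us-east-1 health check fails, Route 53 answers all queries with the eu-west-1 record, which is the automatic failover behavior the question asks for.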
Q13
A company needs a database for their IoT application that stores 1 billion sensor readings per day. Each reading has a timestamp and device ID. Queries are always time-range based for a specific device. Which purpose-built database fits?
A Amazon RDS for MySQL
Amazon Timestream for purpose-built time-series storage optimized for IoT sensor data with time-range queries
C DynamoDB
D Amazon Redshift
Correct Answer
Amazon Timestream for purpose-built time-series storage optimized for IoT sensor data with time-range queries
Explanation
Amazon Timestream is optimized for time-series data with built-in time-based query functions, automatic data tiering (in-memory → magnetic), and support for billions of data points. It's purpose-built for IoT sensor data. Learn more: https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html
Q14
A company needs end-to-end encryption for messages between microservices within a VPC. No plaintext traffic should traverse the network, even within the VPC. Which approach provides this service-to-service encryption without modifying application code?
A Security groups only
AWS App Mesh with mutual TLS for transparent service-to-service encryption via Envoy sidecar proxies
C NACLs for encryption
D Disable network traffic
Correct Answer
AWS App Mesh with mutual TLS for transparent service-to-service encryption via Envoy sidecar proxies
Explanation
AWS App Mesh with mutual TLS (mTLS) using Envoy sidecar proxies encrypts all traffic between services transparently. No application code changes are needed — encryption is handled at the proxy level. Learn more: https://docs.aws.amazon.com/app-mesh/latest/userguide/mutual-tls.html
Q15
A company has an application behind an ALB. They need to implement A/B testing by routing 10% of traffic to a new version. Which ALB feature enables this without DNS changes?
A Route 53 weighted routing
ALB weighted target groups routing 10% to the new version's target group and 90% to the current version
C CloudFront with multiple origins
D Manual instance selection
Correct Answer
ALB weighted target groups routing 10% to the new version's target group and 90% to the current version
Explanation
ALB weighted target groups allow routing a percentage of traffic to different target groups. Setting 90% to the current version's target group and 10% to the new version enables A/B testing without DNS changes. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html
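The 90/10 split can be sketched as an ALB listener forward action with weighted target groups (the target group ARNs are placeholders, and no API call is made):

```python
# Sketch of an ALB listener forward action splitting traffic 90/10 between
# two target groups (shape used by elbv2 create_rule / modify_listener).
# The ARNs are illustrative placeholders.
current_tg = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
              "targetgroup/current/0123456789abcdef")
canary_tg = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
             "targetgroup/v2/fedcba9876543210")

forward_action = {
    "Type": "forward",
    "ForwardConfig": {
        "TargetGroups": [
            {"TargetGroupArn": current_tg, "Weight": 90},
            {"TargetGroupArn": canary_tg, "Weight": 10},
        ]
    },
}
total = sum(tg["Weight"]
            for tg in forward_action["ForwardConfig"]["TargetGroups"])
print(total)  # 100 — weights are relative shares of traffic
```

Shifting more traffic to the new version is then just a weight change on the listener, with no DNS update and no client-side TTL delay.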
Q16
A company's CloudWatch Logs costs are $5,000/month. Analysis shows 80% of logs are from debug-level messages that are only useful for troubleshooting. How should the architect reduce costs while retaining debugging capability?
A Delete all log groups
Reduce application logging level to INFO normally, with the ability to temporarily enable DEBUG; route debug logs to S3 for cost-effective retention
C Disable CloudWatch Logs entirely
D Keep all debug logs permanently
Correct Answer
Reduce application logging level to INFO normally, with the ability to temporarily enable DEBUG; route debug logs to S3 for cost-effective retention
Explanation
Setting the application logging level to INFO in normal operation suppresses debug messages and cuts log volume by roughly 80%. When troubleshooting is needed, temporarily lowering the level to DEBUG restores the detailed logs. Alternatively, CloudWatch Logs subscription filters can route debug logs to cheaper S3 storage. Learn more: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html
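A minimal illustration of the level switch with Python's standard `logging` module: at INFO, debug messages are filtered out before they ever reach a (billed) log destination, and flipping to DEBUG restores them.

```python
import logging

# Minimal sketch: logger name and messages are illustrative only.
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.propagate = False

emitted = []
handler = logging.Handler()
handler.emit = lambda record: emitted.append(record.getMessage())
logger.addHandler(handler)

logger.debug("verbose troubleshooting detail")  # suppressed at INFO
logger.info("request handled")                  # emitted

logger.setLevel(logging.DEBUG)                  # temporarily enable DEBUG
logger.debug("now visible for troubleshooting")

print(emitted)  # ['request handled', 'now visible for troubleshooting']
```

In practice the level would come from configuration (an environment variable or Parameter Store value) so it can be flipped without a redeploy.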
Q17
A company needs to implement a comprehensive tagging strategy for cost allocation. Which TWO AWS features help enforce and track tags? (Choose TWO.)
SCPs requiring mandatory tags on resource creation for preventive enforcement
B Manual tag audits
AWS Cost Allocation Tags activated in Billing for tag-based cost tracking and analysis
D CloudTrail for tag management
Correct Answers
SCPs requiring mandatory tags on resource creation for preventive enforcement
AWS Cost Allocation Tags activated in Billing for tag-based cost tracking and analysis
Explanation
SCPs with tag conditions prevent untagged resource creation (preventive). Cost allocation tags, once activated in the Billing console, let Cost Explorer and the Cost and Usage Report break down spending by tag values. Together they enforce and analyze cost allocation. Learn more: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html
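The preventive half of this pattern can be sketched as an SCP that denies EC2 launches missing a required tag. The tag key and Sid are illustrative placeholders:

```python
import json

# Hypothetical SCP denying ec2:RunInstances when the request omits a
# CostCenter tag. The tag key and Sid are placeholders.
require_tag_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRunInstancesWithoutCostCenter",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            # Null == "true" means the tag key is absent from the request.
            "Null": {"aws:RequestTag/CostCenter": "true"}
        },
    }],
}
print(json.dumps(require_tag_scp, indent=2))
```

The same `aws:RequestTag` condition pattern extends to other creation actions (volumes, buckets, and so on) as the tagging standard grows.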
Q18
A company's NAT Gateway processes 5 TB of data per month at $0.045/GB, costing $225/month for data processing alone. 90% of the traffic is to AWS public services (S3, DynamoDB, SQS). How should the architect reduce NAT costs?
A Accept the current costs
Create VPC endpoints (Gateway for S3/DynamoDB, Interface for SQS) to eliminate 90% of NAT Gateway traffic and data processing charges
C Switch to a NAT instance
D Move resources to public subnets
Correct Answer
Create VPC endpoints (Gateway for S3/DynamoDB, Interface for SQS) to eliminate 90% of NAT Gateway traffic and data processing charges
Explanation
VPC Gateway Endpoints for S3 and DynamoDB are free, and an Interface Endpoint for SQS carries only a small hourly and per-GB charge. Routing this traffic through endpoints instead of the NAT Gateway removes 90% of the NAT data processing charges. Learn more: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html
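The savings follow directly from the question's figures, which can be checked with a short back-of-envelope calculation (using 1 TB = 1,000 GB, as the question's $225/month implies):

```python
# Back-of-envelope NAT savings from the question's numbers.
monthly_gb = 5 * 1000            # 5 TB/month, 1 TB = 1,000 GB
rate_per_gb = 0.045              # NAT Gateway data-processing $/GB
current_cost = monthly_gb * rate_per_gb
endpoint_share = 0.90            # traffic divertible to VPC endpoints

savings = current_cost * endpoint_share
remaining = current_cost - savings
print(round(current_cost, 2), round(savings, 2), round(remaining, 2))
```

This leaves roughly $22.50/month flowing through the NAT Gateway; any Interface Endpoint charges for SQS would offset a small part of the savings.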
Q19
A company's existing monitoring shows that their Lambda functions have variable cold start times (1-8 seconds). For customer-facing APIs, the cold start latency is unacceptable. How should the architect eliminate cold starts for critical functions?
A Increase Lambda memory
Enable Provisioned Concurrency on critical customer-facing Lambda functions to eliminate cold starts
C Switch to EC2 for all functions
D Reduce the deployment package size only
Correct Answer
Enable Provisioned Concurrency on critical customer-facing Lambda functions to eliminate cold starts
Explanation
Provisioned Concurrency keeps a specified number of function execution environments warm and initialized. These environments respond immediately without cold start initialization, providing consistent low-latency responses. Learn more: https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html
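The configuration itself is a single call per function alias; a sketch of the boto3-style parameters (function name, alias, and the concurrency figure are placeholders, and no API call is made):

```python
# Sketch of parameters for lambda.put_provisioned_concurrency_config (boto3).
# Function name, alias, and the concurrency value are placeholders.
pc_params = {
    "FunctionName": "checkout-api",
    "Qualifier": "live",                    # alias or version, not $LATEST
    "ProvisionedConcurrentExecutions": 25,  # pre-initialized environments
}
print(pc_params)
```

Requests beyond the provisioned count spill over to on-demand capacity, where cold starts can still occur, so the figure should track peak concurrency for the critical APIs.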
Q20
A company is planning a large-scale migration of 500 servers from on-premises to AWS. They need to assess the total cost of ownership (TCO) and develop a business case. Which AWS tool provides this analysis?
A AWS Pricing Calculator only
AWS Migration Evaluator for data-driven TCO analysis and business case development
C Manual spreadsheet comparison
D AWS Cost Explorer for on-premises costs
Correct Answer
AWS Migration Evaluator for data-driven TCO analysis and business case development
Explanation
AWS Migration Evaluator (formerly TSO Logic) collects on-premises inventory data and models the cost of running workloads in AWS. It provides a detailed TCO comparison and right-sizing recommendations for the business case. Learn more: https://docs.aws.amazon.com/migrationhub/latest/ug/what-is-mhub.html
Q21
A company needs to migrate a Microsoft SQL Server database to AWS. The database uses features like SQL Server Agent jobs, SSIS packages, and linked servers. The company wants to minimize refactoring. Which target provides the BEST compatibility?
A Aurora PostgreSQL
Amazon RDS for SQL Server for maximum compatibility with SQL Server features
C Amazon DynamoDB
D Amazon Redshift
E Aurora MySQL
Correct Answer
Amazon RDS for SQL Server for maximum compatibility with SQL Server features
Explanation
Amazon RDS for SQL Server provides native SQL Server compatibility including SQL Server Agent, SSIS, and many enterprise features. This minimizes refactoring compared to migrating to Aurora or PostgreSQL. Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html
Q22
A company needs to migrate a large Hadoop cluster (200 nodes, 2 PB of HDFS data) to AWS. They want to modernize to use S3 as the primary data store while retaining Spark workloads. Which migration approach fits?
A Recreate HDFS on EC2
Migrate data to S3 and run Spark workloads on EMR using S3 as the storage layer, decoupling compute from storage
C Keep Hadoop on EC2 as-is
D Use Redshift for all workloads
Correct Answer
Migrate data to S3 and run Spark workloads on EMR using S3 as the storage layer, decoupling compute from storage
Explanation
Migrate HDFS data to S3 using AWS DataSync or S3 DistCp. Run Spark workloads on EMR with S3 as the storage layer (replacing HDFS). This decouples compute from storage, enabling independent scaling and lower costs. Learn more: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-file-systems.html
Q23
A company is modernizing their application from a monolith to microservices. The first phase extracts the authentication module into a separate service. How should the architect implement shared authentication across the monolith and the new microservice?
A Duplicate the authentication code in both services
Implement Amazon Cognito as a centralized authentication service shared between the monolith and microservices during gradual decomposition
C Keep authentication only in the monolith
D Use IAM for application authentication
Correct Answer
Implement Amazon Cognito as a centralized authentication service shared between the monolith and microservices during gradual decomposition
Explanation
Amazon Cognito provides centralized authentication. The monolith and microservice both validate Cognito JWT tokens. This enables gradual decomposition — each extracted service uses the same authentication provider. Learn more: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
Q24
A company is migrating to AWS using the 7Rs framework. For a legacy COBOL application that will be retired in 18 months, which TWO migration strategies are MOST appropriate? (Choose TWO.)
Retain on-premises until retirement in 18 months
B Refactor to serverless architecture
Rehost to EC2 if cloud benefits are needed before retirement
D Rebuild from scratch on AWS
Correct Answers
Retain on-premises until retirement in 18 months
Rehost to EC2 if cloud benefits are needed before retirement
Explanation
Retain keeps the app on-premises until retirement (18 months away). If immediate cloud benefits are needed, Rehost (lift-and-shift) to EC2 migrates quickly without modernization investment for a soon-to-retire app. Learn more: https://docs.aws.amazon.com/prescriptive-guidance/latest/large-migration-guide/migration-strategies.html
Q25
A company needs to migrate 80 TB of files from an on-premises NFS server to Amazon EFS. The migration should cause minimal disruption to existing NFS clients. Which service provides the migration with LEAST downtime?
A Manual copy using rsync
AWS DataSync for automated, incremental file migration from on-premises NFS to EFS with minimal disruption
C S3 Transfer Acceleration
D AWS Snowball for file storage
Correct Answer
AWS DataSync for automated, incremental file migration from on-premises NFS to EFS with minimal disruption
Explanation
AWS DataSync transfers files from on-premises NFS to EFS with minimal disruption. It handles incremental sync, ensuring only changed files are transferred. The final cutover requires brief downtime to switch NFS mount points. Learn more: https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html
Q26
A company uses AWS Trusted Advisor but only has Basic Support, which provides limited checks. The security team wants comprehensive Trusted Advisor checks including security, performance, and cost optimization. What must be changed?
A Nothing — Basic Support includes all checks
Upgrade to Business or Enterprise Support to unlock full Trusted Advisor checks
C Use AWS Config instead
D Enable CloudWatch only
Correct Answer
Upgrade to Business or Enterprise Support to unlock full Trusted Advisor checks
Explanation
The Business and Enterprise Support tiers unlock the full set of Trusted Advisor checks, including security, fault tolerance, performance, cost optimization, and service limits. Basic Support provides only the core checks. Learn more: https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor.html
Q27
A company runs EC2 instances with CloudWatch basic monitoring (5-minute intervals). They need 1-minute monitoring for production instances. Which configuration change provides this?
A Use third-party monitoring
Enable EC2 detailed monitoring for 1-minute CloudWatch metric intervals
C Install CloudWatch agent only
D Increase instance size for better monitoring
Correct Answer
Enable EC2 detailed monitoring for 1-minute CloudWatch metric intervals
Explanation
Enabling detailed monitoring on EC2 instances provides CloudWatch metrics at 1-minute intervals instead of 5 minutes. This is configured per instance and incurs additional charges. Learn more: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html
Q28
A company has an application that makes thousands of small S3 PUT requests per second. S3 costs are high due to the per-request pricing. How should the architect reduce S3 API costs?
A Accept the high request costs
Aggregate small objects into larger batches before writing to S3 to reduce the number of PUT requests and per-request costs
C Switch to EBS
D Use DynamoDB for small objects
Correct Answer
Aggregate small objects into larger batches before writing to S3 to reduce the number of PUT requests and per-request costs
Explanation
Batching small objects into larger ones (e.g., aggregating 1,000 small objects into 1 object) reduces the number of PUT requests by 1,000x. S3 charges per request, so fewer requests mean lower costs. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance.html
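The batching idea can be sketched in a few lines: aggregate records into newline-delimited JSON before uploading, so each batch becomes one PUT instead of one per record. The record shape and batch size are illustrative:

```python
import json

# Sketch: aggregate many small records into NDJSON batches before upload.
# Record contents and BATCH_SIZE are illustrative placeholders.
records = [{"id": i, "v": i * i} for i in range(2500)]
BATCH_SIZE = 1000  # tune to your object-size sweet spot

batches = [
    "\n".join(json.dumps(r) for r in records[i:i + BATCH_SIZE])
    for i in range(0, len(records), BATCH_SIZE)
]
# Each batch would be one s3.put_object(...) call instead of BATCH_SIZE calls.
print(len(records), "records ->", len(batches), "PUT requests")
```

In a streaming system, Kinesis Data Firehose provides the same aggregation as a managed service, buffering records by size or time before writing to S3.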
Q29
A company is migrating a complex enterprise application with 30 interdependent services. They need to understand service dependencies before planning the migration order. Which tool maps application dependencies?
A Manually document dependencies
AWS Application Discovery Service with Discovery Agents for automated dependency mapping and Migration Hub for visualization
C Interview application owners only
D Use CloudTrail for dependency mapping
Correct Answer
AWS Application Discovery Service with Discovery Agents for automated dependency mapping and Migration Hub for visualization
Explanation
AWS Application Discovery Service with Discovery Agents discovers server dependencies by monitoring network connections. Migration Hub visualizes the dependency map and helps plan the migration order. Learn more: https://docs.aws.amazon.com/migrationhub/latest/ug/what-is-mhub.html
Q30
A company needs to migrate a 100 TB data warehouse from Teradata to AWS. The migration should minimize downtime and support schema differences between Teradata and the target. Which combination of tools handles this heterogeneous migration?
A Manual data export and import
AWS Schema Conversion Tool (SCT) for schema conversion and DMS with SCT data extraction agents for large-scale data migration
C pg_dump/pg_restore
D Snowball for database migration
E Direct Redshift COPY from Teradata
Correct Answer
AWS Schema Conversion Tool (SCT) for schema conversion and DMS with SCT data extraction agents for large-scale data migration
Explanation
AWS SCT converts Teradata schemas and SQL to the target format (Redshift). AWS DMS migrates the data with continuous replication. SCT data extraction agents handle large-scale data migration. Learn more: https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html

