Free Practice Questions AWS Solutions Architect – Associate 60 Questions with Answers
FREE QUESTIONS

AWS Solutions Architect – Associate
Practice Questions

60 free questions with correct answers and detailed explanations.

60 Free Questions
2 Free Exams
100% With Explanations

SAA-C03 Practice Set-01

30 questions
Q1
A startup is building its first application on AWS. The CTO wants to follow IAM security best practices from day one. Which THREE actions should the team take immediately? (Choose THREE)
A Enable multi-factor authentication (MFA) on the root account and all IAM users with console access
B Create individual IAM users for each team member instead of sharing the root account credentials
C Apply the principle of least privilege by granting only the permissions each user needs to perform their job
D Share IAM access keys among team members for convenience
E Use the root account for daily development tasks
Correct Answers
Enable multi-factor authentication (MFA) on the root account and all IAM users with console access
Create individual IAM users for each team member instead of sharing the root account credentials
Apply the principle of least privilege by granting only the permissions each user needs to perform their job
Explanation
MFA (Option A) adds a second layer of authentication, protecting against password compromise. Individual IAM users (Option B) provide accountability and auditability. Least privilege (Option C) limits the blast radius of compromised credentials. Sharing access keys (Option D) eliminates accountability and makes rotation difficult. Using root for daily tasks (Option E) is the most dangerous IAM anti-pattern — root has unrestricted access. Learn more: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Q2
A company stores customer data in Amazon S3. The data must be encrypted at rest. What is the SIMPLEST way to enable encryption for all new objects in the bucket?
A Enable default encryption on the S3 bucket using SSE-S3 (Amazon S3-managed keys). All new objects are automatically encrypted without any changes to the application
B Encrypt each file manually before uploading
C Use a third-party encryption tool installed on EC2
D Enable HTTPS on the bucket
Correct Answer
Enable default encryption on the S3 bucket using SSE-S3 (Amazon S3-managed keys). All new objects are automatically encrypted without any changes to the application
Explanation
S3 default bucket encryption with SSE-S3 automatically encrypts all new objects stored in the bucket. No application changes or manual steps are required. SSE-S3 uses AES-256 encryption with keys managed entirely by AWS. Manual encryption (Option B) adds operational overhead. Third-party tools (Option C) add complexity. HTTPS (Option D) encrypts data in transit, not at rest. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html
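As a minimal sketch, the default-encryption rule described above is a small configuration document; the shape below follows S3's PutBucketEncryption API, and the bucket name in the comment is a hypothetical placeholder.

```python
import json

# Default-encryption rule for a bucket (shape matches S3's
# PutBucketEncryption API). With SSE-S3, AES-256 keys are fully
# managed by Amazon S3 and every new object is encrypted automatically.
sse_s3_config = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    ]
}

# With boto3 this would be passed as, for example:
#   s3.put_bucket_encryption(Bucket="example-bucket",
#                            ServerSideEncryptionConfiguration=sse_s3_config)
print(json.dumps(sse_s3_config, indent=2))
```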
Q3
A company runs EC2 instances that need to access an S3 bucket. A developer proposes embedding AWS access keys directly in the application code. Why is this approach problematic, and what is the recommended alternative?
A Access keys in code are fine as long as the code is in a private repository
B Embedding access keys in code is a security risk because keys can be accidentally committed to version control, leaked, or compromised. The recommended approach is to attach an IAM role to the EC2 instance via an instance profile, which provides automatic, temporary credentials
C Access keys in code are required because EC2 cannot use IAM roles
D The keys should be stored in a text file on the instance instead
Correct Answer
Embedding access keys in code is a security risk because keys can be accidentally committed to version control, leaked, or compromised. The recommended approach is to attach an IAM role to the EC2 instance via an instance profile, which provides automatic, temporary credentials
Explanation
IAM roles for EC2 provide temporary credentials that are automatically rotated by AWS. The credentials are delivered through the instance metadata service and the AWS SDK retrieves them automatically. Hardcoded keys (Options A and C) can be leaked via source control, logs, or error messages. Text files on instances (Option D) are still static credentials that must be manually managed. Learn more: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
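The role an instance profile carries needs a trust policy that lets the EC2 service assume it. A minimal sketch of that document follows; the policy shape is the standard IAM trust-policy format, and any role name you attach it to would be your own.

```python
import json

# Trust policy allowing the EC2 service to assume the role. Once the role
# is attached to an instance via an instance profile, the AWS SDK fetches
# temporary, auto-rotated credentials from the instance metadata service.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```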
Q4
A company wants to ensure that all data transferred between users' browsers and its web application is encrypted. The application runs on EC2 instances behind an Application Load Balancer. What should the architect configure?
A Enable S3 encryption on the backend storage
B Configure an HTTPS listener on the ALB using an SSL/TLS certificate from AWS Certificate Manager (ACM). ACM provides free public certificates that automatically renew
C Enable VPC encryption
D Use a VPN for every user connection
Correct Answer
Configure an HTTPS listener on the ALB using an SSL/TLS certificate from AWS Certificate Manager (ACM). ACM provides free public certificates that automatically renew
Explanation
An HTTPS listener on the ALB terminates TLS connections from clients. ACM provides free SSL/TLS certificates for ALB with automatic renewal — no manual certificate management needed. S3 encryption (Option A) is for data at rest, not in transit. VPC encryption (Option C) does not exist as a feature. VPN per user (Option D) is impractical for a public web application. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
Q5
A company wants to control network access to its EC2 instances. Which TWO AWS features provide network-level access control in a VPC? (Choose TWO)
A Security groups — stateful firewalls that control inbound and outbound traffic at the instance (ENI) level
B Network Access Control Lists (NACLs) — stateless firewalls that control traffic at the subnet level
C IAM policies that control which APIs users can call
D AWS CloudTrail for logging network traffic
Correct Answers
Security groups — stateful firewalls that control inbound and outbound traffic at the instance (ENI) level
Network Access Control Lists (NACLs) — stateless firewalls that control traffic at the subnet level
Explanation
Security groups (Option A) are stateful — return traffic is automatically allowed. They operate at the instance level. NACLs (Option B) are stateless — both inbound and outbound rules must explicitly allow traffic. They operate at the subnet level. IAM policies (Option C) control AWS API access, not network traffic. CloudTrail (Option D) logs API calls, not network packets — VPC Flow Logs does that. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html
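The stateful/stateless difference can be sketched as the rule shapes the EC2 API accepts; the shapes below follow AuthorizeSecurityGroupIngress and CreateNetworkAclEntry, and the CIDR choices are illustrative assumptions.

```python
# Stateful security-group rule: allow inbound HTTPS; return traffic is
# implicitly allowed, so no outbound rule is needed for the response
# (shape matches EC2's AuthorizeSecurityGroupIngress).
sg_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}

# Stateless NACL entries: BOTH directions must be allowed explicitly
# (shape matches EC2's CreateNetworkAclEntry; protocol "6" is TCP).
nacl_inbound = {
    "RuleNumber": 100, "Protocol": "6", "RuleAction": "allow",
    "Egress": False, "CidrBlock": "0.0.0.0/0",
    "PortRange": {"From": 443, "To": 443},
}
nacl_outbound = {
    "RuleNumber": 100, "Protocol": "6", "RuleAction": "allow",
    "Egress": True, "CidrBlock": "0.0.0.0/0",
    "PortRange": {"From": 1024, "To": 65535},  # ephemeral ports for replies
}
```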
Q6
A company creates an S3 bucket to host internal reports. The bucket should NOT be accessible from the public internet under any circumstances. Which S3 feature provides this protection with a single setting?
A S3 versioning
B S3 Block Public Access — when enabled at the bucket or account level, it overrides any bucket policy or ACL that would grant public access
C S3 lifecycle policies
D S3 Transfer Acceleration
Correct Answer
S3 Block Public Access — when enabled at the bucket or account level, it overrides any bucket policy or ACL that would grant public access
Explanation
S3 Block Public Access is a safeguard that prevents public access regardless of individual bucket policies or ACLs. It can be enabled at the account level (applies to all buckets) or per-bucket. Versioning (Option A) tracks object versions. Lifecycle (Option C) manages storage tiers. Transfer Acceleration (Option D) speeds up uploads. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
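Block Public Access is a single configuration object with four flags; the sketch below follows the shape S3's PutPublicAccessBlock API accepts (this is an illustration, not a deployment script).

```python
# The four Block Public Access flags (shape matches S3's
# PutPublicAccessBlock API). With all four enabled, public ACLs and
# public bucket policies are blocked or ignored regardless of what an
# individual policy or ACL would otherwise grant.
public_access_block = {
    "BlockPublicAcls": True,        # reject PUTs of public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # restrict access via public policies
}

# With boto3 this would be passed as, for example:
#   s3.put_public_access_block(
#       Bucket="example-bucket",
#       PublicAccessBlockConfiguration=public_access_block)
```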
Q7
A company has a VPC with public and private subnets. The architect places the web servers in public subnets and the database in private subnets. Which statement BEST describes why the database should be in a private subnet?
A Private subnets have faster network speeds than public subnets
B Resources in private subnets have no direct route to the internet gateway, meaning they cannot be reached from the public internet. This protects the database from direct external attacks
C Private subnets automatically encrypt all data
D Private subnets cost less than public subnets
E Private subnets have built-in DDoS protection
Correct Answer
Resources in private subnets have no direct route to the internet gateway, meaning they cannot be reached from the public internet. This protects the database from direct external attacks
Explanation
Private subnets do not have a route to the internet gateway in their route table, so resources within them cannot be directly accessed from the internet. This is the fundamental reason for placing databases in private subnets — reducing the attack surface. All other options (A, C, D, E) are incorrect — private subnets have the same speed, no automatic encryption, same cost, and no built-in DDoS beyond standard AWS protections. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
Q8
A developer needs to store a database password securely for an application running on Lambda. The password must be retrievable at runtime. Which service is designed specifically for storing and managing secrets?
A Amazon S3 with encryption
B AWS Secrets Manager, which stores, encrypts, and manages secrets like database credentials. It also supports automatic rotation of secrets on a schedule
C AWS CloudFormation parameters
D Lambda environment variables without encryption
Correct Answer
AWS Secrets Manager, which stores, encrypts, and manages secrets like database credentials. It also supports automatic rotation of secrets on a schedule
Explanation
Secrets Manager is purpose-built for storing sensitive credentials. It encrypts secrets with KMS, provides fine-grained IAM access control, supports automatic rotation, and integrates natively with RDS, Redshift, and DocumentDB. S3 (Option A) is for object storage. CloudFormation parameters (Option C) can be visible in the stack. Unencrypted environment variables (Option D) are visible in the Lambda console. Learn more: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
Q9
A company uses AWS CloudTrail. The security team asks: what does CloudTrail record? Which answer BEST describes CloudTrail's function?
A CloudTrail records VPC network traffic flow data (source IP, destination IP, ports)
B CloudTrail records AWS API call history across the account — who made the call, what action was performed, which resources were affected, and when it occurred
C CloudTrail monitors application performance metrics
D CloudTrail scans EC2 instances for software vulnerabilities
E CloudTrail encrypts data stored in S3
Correct Answer
CloudTrail records AWS API call history across the account — who made the call, what action was performed, which resources were affected, and when it occurred
Explanation
CloudTrail provides a complete audit trail of AWS API activity. Each event records: the identity of the caller, the time, the source IP, the API action, the request parameters, and the response. VPC Flow Logs (not CloudTrail) record network traffic (Option A). CloudWatch monitors performance (Option C). Inspector scans for vulnerabilities (Option D). KMS/S3 handle encryption (Option E). Learn more: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
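As an illustration of what an event carries, here is a heavily trimmed sketch of a CloudTrail record; the field names follow the CloudTrail record format, while the values (user, bucket, IP) are invented examples.

```python
# Trimmed sketch of a CloudTrail event record (real records carry more
# fields). Field names follow the CloudTrail record format; values are
# hypothetical (203.0.113.0/24 is a documentation address range).
event = {
    "eventTime": "2024-01-15T12:00:00Z",                       # when
    "eventSource": "s3.amazonaws.com",                         # which service
    "eventName": "PutObject",                                  # what action
    "userIdentity": {"type": "IAMUser", "userName": "alice"},  # who
    "sourceIPAddress": "203.0.113.10",                         # from where
    "requestParameters": {"bucketName": "example-bucket"},     # on what
}
```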
Q10
A company has a public-facing website. The security team wants to protect it from common web attacks like SQL injection and cross-site scripting (XSS). Which AWS service provides this protection?
A AWS Shield
B AWS WAF (Web Application Firewall), which inspects HTTP/HTTPS requests and can block requests matching rules for SQL injection, XSS, and other attack patterns
C Amazon GuardDuty
D AWS Config
Correct Answer
AWS WAF (Web Application Firewall), which inspects HTTP/HTTPS requests and can block requests matching rules for SQL injection, XSS, and other attack patterns
Explanation
AWS WAF operates at Layer 7 (application layer) and inspects web requests for malicious patterns. AWS-managed rule groups provide pre-built protection against OWASP Top 10 threats including SQL injection and XSS. Shield (Option A) protects against DDoS attacks at Layers 3/4. GuardDuty (Option C) detects threats from VPC Flow Logs, DNS logs, and CloudTrail. Config (Option D) evaluates resource configurations. Learn more: https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
Q11
A company is setting up a new VPC for a web application. The architect plans to have public subnets for web servers and private subnets for the database. Which TWO components are required for instances in the public subnet to receive traffic from the internet? (Choose TWO)
A An internet gateway attached to the VPC
B A route in the public subnet's route table pointing 0.0.0.0/0 to the internet gateway
C A NAT gateway in the public subnet
D A VPN connection to the internet
Correct Answers
An internet gateway attached to the VPC
A route in the public subnet's route table pointing 0.0.0.0/0 to the internet gateway
Explanation
An internet gateway (Option A) enables communication between the VPC and the internet. A route table entry (Option B) directing internet-bound traffic (0.0.0.0/0) to the internet gateway tells the subnet how to reach the internet. Both are required. NAT gateway (Option C) is for private subnets to initiate outbound internet connections. VPN (Option D) connects to other networks, not the public internet. Learn more: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
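The route-table half of this setup is a single entry; the sketch below follows the shape of EC2's CreateRoute API, and the gateway ID is a hypothetical placeholder.

```python
# Default route in the public subnet's route table (shape matches EC2's
# CreateRoute API; the gateway ID is a hypothetical placeholder).
public_route = {
    "DestinationCidrBlock": "0.0.0.0/0",   # all internet-bound traffic...
    "GatewayId": "igw-0123456789abcdef0",  # ...goes to the internet gateway
}

# A private subnet's route table simply omits this entry (or points
# 0.0.0.0/0 at a NAT gateway for outbound-only access instead).
```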
Q12
A company wants to manage access to AWS resources for its employees. What is an IAM policy?
A A firewall rule that controls network traffic
B A JSON document that defines permissions — specifying which AWS actions are allowed or denied on which resources for which principals
C A billing alert that notifies when costs exceed a threshold
D A backup schedule for AWS resources
Correct Answer
A JSON document that defines permissions — specifying which AWS actions are allowed or denied on which resources for which principals
Explanation
IAM policies are JSON documents with statements containing: Effect (Allow/Deny), Action (AWS API actions), Resource (ARNs of resources), and optionally Condition. Policies are attached to users, groups, or roles to grant or restrict permissions. Firewall rules (Option A) are security groups/NACLs. Billing alerts (Option C) are AWS Budgets. Backup schedules (Option D) are AWS Backup. Learn more: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
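Putting those elements together, a minimal identity-based policy looks like the sketch below; the bucket ARN is a hypothetical example.

```python
import json

# Minimal IAM policy document: Effect + Action + Resource. Note that
# object actions use the object ARN (bucket/*) while bucket actions
# like ListBucket use the bucket ARN itself.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```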
Q13
A company has an application that uses Amazon RDS MySQL. The database is in a private subnet. A developer wants to connect to the database from their local laptop for troubleshooting. The database has no public accessibility enabled. What is the MOST secure way to connect?
A Enable public accessibility on the RDS instance temporarily
B Use AWS Systems Manager Session Manager to connect to an EC2 instance in the same VPC, then connect to RDS from that instance. Session Manager requires no open inbound ports and uses IAM for authentication
C Open port 3306 on the database security group to the developer's home IP
D Move the RDS instance to a public subnet
E Share the database endpoint publicly via Route 53
Correct Answer
Use AWS Systems Manager Session Manager to connect to an EC2 instance in the same VPC, then connect to RDS from that instance. Session Manager requires no open inbound ports and uses IAM for authentication
Explanation
Session Manager provides secure shell access through the AWS console without opening inbound ports. From the EC2 instance (which is in the same VPC as RDS), the developer can connect to the database. Enabling public access (Option A) exposes the database. Opening port 3306 (Option C) creates a direct internet path. Moving to a public subnet (Option D) removes network isolation. Public DNS (Option E) does not grant network access but exposes the endpoint. Learn more: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
Q14
A company creates multiple AWS accounts for different teams (development, staging, production). They want to centrally manage these accounts. Which AWS service organizes multiple accounts under a single management structure?
A AWS IAM
B AWS Organizations, which enables centralized management of multiple AWS accounts with consolidated billing, service control policies (SCPs), and organizational units (OUs)
C Amazon WorkSpaces
D AWS CloudFormation
Correct Answer
AWS Organizations, which enables centralized management of multiple AWS accounts with consolidated billing, service control policies (SCPs), and organizational units (OUs)
Explanation
AWS Organizations groups multiple AWS accounts under a single management account. It provides: consolidated billing (single payment for all accounts), organizational units (OUs) for grouping accounts, and service control policies (SCPs) for centralized permission guardrails. IAM (Option A) manages identities within a single account. WorkSpaces (Option C) provides virtual desktops. CloudFormation (Option D) manages infrastructure as code. Learn more: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html
Q15
A company wants to monitor its AWS environment for security threats. Which TWO services provide automated threat detection? (Choose TWO)
A Amazon GuardDuty, which uses machine learning to analyze CloudTrail logs, VPC Flow Logs, and DNS logs to detect suspicious activity like compromised instances, unusual API calls, and reconnaissance
B Amazon Macie, which uses machine learning to automatically discover, classify, and protect sensitive data (PII, financial data) stored in S3 buckets
C AWS CloudFormation for threat modeling
D Amazon EC2 Auto Scaling for handling security events
Correct Answers
Amazon GuardDuty, which uses machine learning to analyze CloudTrail logs, VPC Flow Logs, and DNS logs to detect suspicious activity like compromised instances, unusual API calls, and reconnaissance
Amazon Macie, which uses machine learning to automatically discover, classify, and protect sensitive data (PII, financial data) stored in S3 buckets
Explanation
GuardDuty (Option A) detects threats like cryptocurrency mining, port scanning, and compromised credentials. Macie (Option B) discovers sensitive data exposure in S3. Both use ML for automated, continuous detection. CloudFormation (Option C) deploys infrastructure, not threat detection. Auto Scaling (Option D) adjusts capacity, not security. Learn more: https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html
Q16
A company has an S3 bucket containing customer invoices. The bucket policy allows a partner company's AWS account to download invoices. The company wants to ensure that the partner cannot upload or delete any objects. Which bucket policy permission should be granted to the partner?
A s3:* (full access)
B s3:GetObject only — this allows the partner to download (read) objects but not upload, modify, or delete them
C s3:PutObject only
D s3:DeleteObject and s3:GetObject
Correct Answer
s3:GetObject only — this allows the partner to download (read) objects but not upload, modify, or delete them
Explanation
s3:GetObject grants read-only access to objects in the bucket. The partner can download invoices but cannot perform any other operations. s3:* (Option A) grants all S3 permissions including delete. s3:PutObject (Option C) allows uploads, not downloads. Including DeleteObject (Option D) allows the partner to remove invoices. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html
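A read-only cross-account bucket policy for this scenario could be sketched as below; the account ID 111122223333 is the standard documentation placeholder, and the bucket name is hypothetical.

```python
import json

# Cross-account, read-only bucket policy: the partner account
# (placeholder ID 111122223333) may only call s3:GetObject on objects
# in the invoices bucket — no uploads, no deletes.
invoice_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PartnerReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-invoices/*",
        }
    ],
}
print(json.dumps(invoice_policy, indent=2))
```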
Q17
A company runs a web application and wants to protect it from DDoS (Distributed Denial of Service) attacks. What is the difference between AWS Shield Standard and AWS Shield Advanced?
A Shield Standard is free and automatically protects all AWS accounts against common Layer 3/4 DDoS attacks. Shield Advanced is a paid service ($3,000/month) that adds protection against larger and more sophisticated attacks, DDoS cost protection, and access to the AWS DDoS Response Team (DRT)
B Shield Standard protects against Layer 7 attacks and Shield Advanced protects against Layer 3/4
C Shield Standard and Shield Advanced provide identical protection
D Shield Advanced is free and Shield Standard requires a subscription
E Shield Standard is only available in the US regions
Correct Answer
Shield Standard is free and automatically protects all AWS accounts against common Layer 3/4 DDoS attacks. Shield Advanced is a paid service ($3,000/month) that adds protection against larger and more sophisticated attacks, DDoS cost protection, and access to the AWS DDoS Response Team (DRT)
Explanation
Shield Standard is automatically enabled at no cost and provides protection against SYN floods, UDP reflection, and other common DDoS attacks. Shield Advanced adds enhanced detection, DDoS cost protection (credits for scaling costs during an attack), AWS DDoS Response Team access, real-time attack visibility, and protection for EIP, ALB, CloudFront, and Route 53 resources. Option B reverses the layer coverage. Options C, D, and E are factually incorrect. Learn more: https://docs.aws.amazon.com/waf/latest/developerguide/shield-chapter.html
Q18
A company uses Amazon RDS for its database. The IT team wants to encrypt the database at rest. When should encryption be enabled for SIMPLEST implementation?
A Encryption can only be enabled by contacting AWS Support
B Enable encryption when creating the RDS instance — it is a checkbox during instance creation. Enabling encryption after creation requires a snapshot-copy-restore process
C Encryption is automatically enabled on all RDS instances by default and cannot be disabled
D Encryption can be toggled on and off at any time on existing instances
Correct Answer
Enable encryption when creating the RDS instance — it is a checkbox during instance creation. Enabling encryption after creation requires a snapshot-copy-restore process
Explanation
RDS encryption at rest is easiest to enable during instance creation — it is a single checkbox. Once created, you cannot simply enable encryption on an existing unencrypted instance. The workaround requires creating a snapshot, copying it with encryption, and restoring a new instance. Encryption is not automatically forced on every instance (Option C), although the RDS console now pre-selects encryption for new instances. It cannot be toggled on existing instances (Option D). Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
Q19
A company hosts a web application on a single EC2 instance. When the instance fails, the application is offline until a new instance is manually launched. Which AWS service automatically replaces failed instances to maintain application availability?
A Amazon CloudFront
B EC2 Auto Scaling group with a minimum capacity of 1 — if the instance fails, Auto Scaling automatically launches a replacement instance in a healthy Availability Zone
C AWS Lambda
D Amazon Route 53
Correct Answer
EC2 Auto Scaling group with a minimum capacity of 1 — if the instance fails, Auto Scaling automatically launches a replacement instance in a healthy Availability Zone
Explanation
An Auto Scaling group with min=1 and max=1 (or higher) automatically detects instance health check failures and launches a replacement. Across multiple AZs, this provides self-healing with no manual intervention. CloudFront (Option A) is a CDN. Lambda (Option C) is serverless compute but not a replacement for EC2 instances. Route 53 (Option D) is DNS and does not launch instances. Learn more: https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html
Q20
A company wants its web application to remain available even if one data center experiences a failure. Which AWS concept provides this physical separation of infrastructure?
A AWS Regions
B Availability Zones (AZs) — each AZ is one or more physically separate data centers with independent power, cooling, and networking within a Region. Deploying across multiple AZs provides fault tolerance against data center failures
C Edge locations
D AWS Local Zones
Correct Answer
Availability Zones (AZs) — each AZ is one or more physically separate data centers with independent power, cooling, and networking within a Region. Deploying across multiple AZs provides fault tolerance against data center failures
Explanation
Availability Zones are the primary mechanism for achieving high availability within a Region. Each AZ is isolated from other AZs' failures. Deploying across 2+ AZs ensures that a single data center failure does not bring down the application. Regions (Option A) are geographic areas containing multiple AZs. Edge locations (Option C) are for content delivery. Local Zones (Option D) extend Regions to specific locations. Learn more: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
Q21
A company runs a multi-tenant SaaS application. Each tenant's data must be isolated at the IAM policy level. When Tenant A's users access DynamoDB, they should only be able to read items where the partition key matches their tenant ID. Which IAM policy technique achieves this row-level isolation?
A Create a separate DynamoDB table for each tenant
B Use IAM policy conditions with dynamodb:LeadingKeys to restrict access to items whose partition key matches the tenant ID from the session tag or Cognito identity
C Use VPC security groups to restrict which tenants can access which DynamoDB tables
D Implement application-level filtering in the Lambda function to return only tenant-specific data
Correct Answer
Use IAM policy conditions with dynamodb:LeadingKeys to restrict access to items whose partition key matches the tenant ID from the session tag or Cognito identity
Explanation
The dynamodb:LeadingKeys IAM condition key restricts access to items whose partition key value matches a specified value. Combined with session tags or Cognito identity attributes, this provides true IAM-level row isolation. Separate tables (Option A) add operational overhead and don't scale well with thousands of tenants. Security groups (Option C) operate at the network level, not data level. Application filtering (Option D) is not enforced at the IAM level — a bug could leak data. Learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
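A sketch of such a tenant-isolation policy follows. The dynamodb:LeadingKeys condition key and the ${aws:PrincipalTag/...} variable are real IAM constructs; the table name and the TenantID tag name are hypothetical choices for this example.

```python
import json

# Tenant-isolation policy sketch: dynamodb:LeadingKeys restricts reads
# to items whose partition key equals the caller's TenantID session tag,
# so IAM (not application code) enforces the row-level boundary.
tenant_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:*:*:table/TenantData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${aws:PrincipalTag/TenantID}"]
                }
            },
        }
    ],
}
print(json.dumps(tenant_policy, indent=2))
```

With Cognito identity pools, the condition value would instead be the identity variable (for example ${cognito-identity.amazonaws.com:sub}) so each federated user is pinned to their own partition key.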
Q22
A company operates a containerized application on Amazon EKS. The security team requires that container images are scanned for vulnerabilities before being deployed to production, and that only images from the company's private ECR repository are allowed to run on the cluster. Which combination of services enforces this?
A Enable Amazon ECR image scanning on push, and use an OPA (Open Policy Agent) Gatekeeper admission controller on EKS to only allow images from the company's ECR repository
B Scan images manually before upload and trust that developers follow the process
C Enable AWS Shield on the EKS cluster to block vulnerable containers
D Use Amazon Inspector to scan running containers and terminate those with vulnerabilities
Correct Answer
Enable Amazon ECR image scanning on push, and use an OPA (Open Policy Agent) Gatekeeper admission controller on EKS to only allow images from the company's ECR repository
Explanation
ECR image scanning automatically scans images for CVEs when pushed to the repository. OPA Gatekeeper on EKS enforces admission policies that reject pods referencing images outside the approved ECR repository. This provides both vulnerability scanning and image source enforcement. Manual scanning (Option B) is not enforceable. Shield (Option C) is for DDoS protection. Inspector (Option D) scans resources after deployment; it does not prevent vulnerable images from being deployed. Learn more: https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html
Q23
A company is deploying an application that must comply with HIPAA requirements for handling Protected Health Information (PHI). The application uses EC2 instances, S3, and RDS. Which THREE measures are required for HIPAA compliance on AWS? (Choose THREE)
A Sign a Business Associate Addendum (BAA) with AWS for the services used
B Encrypt all PHI at rest using KMS and in transit using TLS
C Use only AWS GovCloud regions, as commercial regions do not support HIPAA workloads
D Implement audit logging using CloudTrail and enable access logging for S3 buckets containing PHI
E Enable AWS Shield Advanced on all resources handling PHI
Correct Answers
Sign a Business Associate Addendum (BAA) with AWS for the services used
Encrypt all PHI at rest using KMS and in transit using TLS
Implement audit logging using CloudTrail and enable access logging for S3 buckets containing PHI
Explanation
HIPAA compliance on AWS requires: (1) a BAA with AWS covering the specific services used (Option A), (2) encryption of PHI at rest and in transit (Option B), and (3) comprehensive audit logging (Option D). AWS commercial regions do support HIPAA workloads (Option C is incorrect) — GovCloud is required for specific government compliance, not HIPAA. Shield Advanced (Option E) provides DDoS protection but is not a HIPAA requirement. Learn more: https://docs.aws.amazon.com/whitepapers/latest/architecting-hipaa-security-and-compliance-on-aws/architecting-hipaa-security-and-compliance-on-aws.html
Q24
A company is implementing TLS termination for its application. The application runs on EC2 instances behind an Application Load Balancer. The security team requires end-to-end encryption — traffic must be encrypted from the client to the ALB AND from the ALB to the EC2 instances. How should this be configured?
A Configure HTTPS listener on the ALB with an ACM certificate. Configure the ALB target group to use HTTPS (port 443) with a self-signed certificate on the EC2 instances
B Configure HTTPS listener on the ALB. Use HTTP between the ALB and EC2 instances since traffic within a VPC is private
C Use a Network Load Balancer with TLS passthrough so the EC2 instances handle all TLS termination
D Configure HTTP listener on the ALB and rely on the VPC security groups for encryption
Correct Answer
Configure HTTPS listener on the ALB with an ACM certificate. Configure the ALB target group to use HTTPS (port 443) with a self-signed certificate on the EC2 instances
Explanation
End-to-end encryption requires HTTPS on both the ALB frontend (client→ALB) and backend (ALB→EC2). The ALB terminates the client's TLS connection and re-encrypts traffic to the target using the target group's HTTPS configuration. EC2 instances present their own certificate (can be self-signed or from ACM Private CA). Option B leaves backend traffic unencrypted. NLB passthrough (Option C) works but you lose ALB Layer 7 features. HTTP listener (Option D) provides no encryption. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
Q25
A company has an AWS Organization with the following OU structure: Root → Production OU → Finance OU. An SCP at the Root level denies ec2:TerminateInstances. An SCP at the Production OU level allows all EC2 actions. An IAM policy on a user in the Finance OU grants full administrator access. Can the user terminate EC2 instances?
A Yes, because the IAM administrator policy grants full access
B Yes, because the Production OU SCP allows all EC2 actions
C No, because the Root-level SCP Deny overrides all Allow statements from lower-level SCPs and IAM policies
D No, because the Finance OU inherits only the Production OU SCP, not the Root SCP
Correct Answer
No, because the Root-level SCP Deny overrides all Allow statements from lower-level SCPs and IAM policies
Explanation
SCPs are evaluated at every level of the OU hierarchy. An explicit Deny at ANY level in the SCP chain overrides any Allow at any other level. The Root-level Deny for ec2:TerminateInstances cannot be overridden by the Production OU's Allow or the IAM policy's Allow. SCPs restrict the maximum available permissions — they intersect with (not replace) IAM policies. The Finance OU inherits BOTH the Root SCP and the Production OU SCP. Learn more: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html#scp-effects-on-permissions
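The deny-overrides logic described above can be sketched as a small function. This is an illustrative, simplified model (real AWS evaluation also factors in the IAM policy and the default FullAWSAccess SCP), but it captures the two rules: an explicit Deny at any level wins, and every level in the chain must allow the action.

```python
# Simplified model of SCP evaluation (illustrative, not the real AWS engine).
# Rule 1: an explicit Deny at ANY level in the OU chain wins.
# Rule 2: every level must allow the action for it to be permitted.

def scp_permits(action, scp_chain):
    """scp_chain: list of SCP dicts, one per level (Root first)."""
    for scp in scp_chain:
        if action in scp.get("deny", set()):
            return False          # explicit Deny at any level overrides everything
        if action not in scp.get("allow", set()):
            return False          # every level must grant the action
    return True

# Scenario from the question: Root denies ec2:TerminateInstances,
# Production OU allows all EC2 actions (modeled here as both actions).
root_scp = {"allow": {"ec2:RunInstances", "ec2:TerminateInstances"},
            "deny": {"ec2:TerminateInstances"}}
prod_scp = {"allow": {"ec2:RunInstances", "ec2:TerminateInstances"}}

print(scp_permits("ec2:TerminateInstances", [root_scp, prod_scp]))  # False
print(scp_permits("ec2:RunInstances", [root_scp, prod_scp]))        # True
```

Note that even an IAM administrator policy would not change the first result: SCPs set the outer boundary that IAM permissions can never exceed.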
Q26
A company is building an application that needs to sign and verify documents using asymmetric cryptographic keys. The private key must NEVER leave AWS and must be protected by FIPS 140-2 Level 3 validated hardware. Which service should the architect use?
A AWS KMS with an RSA asymmetric customer managed key
B AWS CloudHSM with customer-managed RSA key pairs
C IAM with X.509 certificates
D AWS Certificate Manager for digital signing
E AWS KMS with a symmetric encryption key
Correct Answer
AWS CloudHSM with customer-managed RSA key pairs
Explanation
AWS CloudHSM provides FIPS 140-2 Level 3 validated hardware security modules. Keys generated in CloudHSM never leave the HSM in plaintext. CloudHSM supports asymmetric key operations including RSA signing and verification. AWS KMS (Option A) uses FIPS 140-2 Level 2 validated HSMs, which does not meet the Level 3 requirement. IAM X.509 certificates (Option C) are for legacy SOAP API signing. ACM (Option D) issues TLS certificates, not document-signing keys. Symmetric keys (Option E) cannot produce digital signatures. Learn more: https://docs.aws.amazon.com/cloudhsm/latest/userguide/introduction.html
Q27
A company is deploying a data lake that will store sensitive customer data. The compliance team requires: (1) automatic discovery and classification of sensitive data, (2) continuous monitoring for data exfiltration, and (3) centralized security findings. Which THREE AWS services should be deployed together? (Choose THREE)
A Amazon Macie for automated sensitive data discovery and classification in S3
B Amazon GuardDuty for monitoring S3 data access patterns and detecting unusual API calls indicating exfiltration
C AWS Security Hub for aggregating and prioritizing security findings from Macie and GuardDuty
D Amazon Inspector for scanning S3 bucket contents for vulnerabilities
E AWS Config for real-time data classification and threat detection
Correct Answers
Amazon Macie for automated sensitive data discovery and classification in S3
Amazon GuardDuty for monitoring S3 data access patterns and detecting unusual API calls indicating exfiltration
AWS Security Hub for aggregating and prioritizing security findings from Macie and GuardDuty
Explanation
Macie (Option A) automatically discovers and classifies sensitive data in S3 using ML. GuardDuty (Option B) monitors for unusual data access patterns indicating exfiltration. Security Hub (Option C) aggregates findings from both services into a centralized dashboard. Inspector (Option D) scans EC2 and container vulnerabilities, not S3 data content. Config (Option E) monitors resource configuration, not data classification or threats. Learn more: https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html
Q28
A company has an API that processes webhook callbacks from a third-party payment provider. The payment provider sends callbacks to the company's API Gateway endpoint. The architect needs to verify that incoming requests genuinely originate from the payment provider and have not been tampered with. Which approach provides request authenticity verification?
A Restrict API Gateway access using a resource policy that allows only the payment provider's IP ranges
B Configure the payment provider to sign request payloads with a shared secret. Use a Lambda authorizer to verify the HMAC signature on each request before routing to the backend
C Use API keys to authenticate the payment provider
D Enable AWS WAF on the API Gateway to filter non-legitimate requests
Correct Answer
Configure the payment provider to sign request payloads with a shared secret. Use a Lambda authorizer to verify the HMAC signature on each request before routing to the backend
Explanation
HMAC signature verification is the standard approach for webhook authenticity. The payment provider signs the payload with a shared secret, and the Lambda authorizer recalculates and verifies the signature. This proves both the origin (shared secret holder) and integrity (payload not tampered). IP restriction (Option A) can be spoofed and payment providers may change IPs. API keys (Option C) authenticate the caller but don't verify payload integrity. WAF (Option D) protects against attacks, not identity verification. Learn more: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
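The verification step the Lambda authorizer performs can be sketched with Python's standard library. The secret value and payload here are hypothetical (a real provider documents its own header name and signing scheme, and the secret would live in Secrets Manager, not source code):

```python
import hmac
import hashlib

SHARED_SECRET = b"example-shared-secret"  # hypothetical; store in Secrets Manager

def sign(payload: bytes) -> str:
    """What the payment provider computes and sends in its signature header."""
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, received_signature: str) -> bool:
    """What the authorizer recomputes. compare_digest resists timing attacks,
    which a plain == comparison would not."""
    expected = sign(payload)
    return hmac.compare_digest(expected, received_signature)

body = b'{"payment_id": "p-123", "amount": 4200}'
good_signature = sign(body)
print(verify(body, good_signature))                 # True: authentic, untampered
print(verify(b'{"amount": 9999}', good_signature))  # False: payload was altered
```

The second check is the point of the pattern: any change to the payload invalidates the signature, proving both origin and integrity.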
Q29
A company's network architecture includes a Transit Gateway connecting 20 VPCs. The security team needs to inspect all east-west traffic (traffic between VPCs) using a third-party firewall appliance. Where should the firewall be deployed?
A In each of the 20 VPCs as a local firewall instance
B In a dedicated inspection VPC attached to the Transit Gateway. Configure Transit Gateway route tables to route all inter-VPC traffic through the inspection VPC
C On the Transit Gateway itself as a built-in firewall feature
D As a CloudFront distribution in front of all VPCs
Correct Answer
In a dedicated inspection VPC attached to the Transit Gateway. Configure Transit Gateway route tables to route all inter-VPC traffic through the inspection VPC
Explanation
The centralized inspection VPC pattern routes all inter-VPC traffic through a dedicated VPC containing firewall appliances (or AWS Network Firewall). Transit Gateway route tables are configured so that traffic between spoke VPCs transits through the inspection VPC. This provides a single inspection point for all east-west traffic. Per-VPC firewalls (Option A) are expensive and hard to manage. Transit Gateway does not have built-in firewall (Option C). CloudFront (Option D) is for internet content delivery. Learn more: https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-appliance-scenario.html
Q30
A company operates a critical payment processing system on EC2 instances. The system uses an Amazon SQS queue to receive payment requests. If the SQS queue is accidentally deleted, all in-flight payment messages would be lost. How should the architect protect against this risk?
A Create a CloudWatch alarm that monitors the queue metrics and alerts when the queue is deleted
B Configure SQS queue resource policy to deny sqs:DeleteQueue from all principals except a break-glass administrator role. Additionally, configure a dead-letter queue as a backup for unprocessable messages
C Enable Multi-AZ deployment for the SQS queue
D Create a second SQS queue and manually copy messages between queues every hour
Correct Answer
Configure SQS queue resource policy to deny sqs:DeleteQueue from all principals except a break-glass administrator role. Additionally, configure a dead-letter queue as a backup for unprocessable messages
Explanation
A resource policy denying sqs:DeleteQueue prevents accidental deletion, even by administrators (except the break-glass role). CloudWatch (Option A) alerts only after deletion, when messages are already lost. SQS is inherently redundant across AZs, so Multi-AZ deployment (Option C) is not an SQS feature to enable. Manual copying (Option D) creates consistency issues and does not prevent deletion. The resource policy provides a preventive control. Learn more: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-api-permissions-reference.html
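A sketch of such a deny policy, written here as a Python dict for readability. The account ID, queue ARN, and role name are placeholders, and the exact statement should be validated against the SQS access-policy documentation before use:

```python
import json

# Hypothetical queue policy: deny sqs:DeleteQueue for every principal
# except a break-glass admin role. ARNs and account ID are placeholders.
deny_delete_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PreventQueueDeletion",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "sqs:DeleteQueue",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:payments-queue",
        "Condition": {
            "ArnNotEquals": {
                "aws:PrincipalArn":
                    "arn:aws:iam::111122223333:role/break-glass-admin"
            }
        }
    }]
}

print(json.dumps(deny_delete_policy, indent=2))
```

Because this is an explicit Deny, it overrides any Allow in the callers' IAM policies; only the role excluded by the condition can delete the queue.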

SAA-C03 Practice Set-02

30 questions
Q1
A company has an application that stores data in Amazon RDS MySQL. The database team wants to protect against accidental data deletion. If someone accidentally drops a table, they want to recover it quickly. Which RDS feature provides this capability?
A RDS read replicas
B RDS point-in-time recovery (PITR), which continuously backs up transaction logs. The database can be restored to any specific second within the backup retention period (up to 35 days)
C RDS Multi-AZ deployment
D RDS Enhanced Monitoring
E RDS Performance Insights
Correct Answer
RDS point-in-time recovery (PITR), which continuously backs up transaction logs. The database can be restored to any specific second within the backup retention period (up to 35 days)
Explanation
PITR enables restoring the database to any point in time within the retention window. If a table is dropped at 2:30 PM, you can restore to 2:29 PM, losing only the last minute of data. Read replicas (Option A) would have replicated the DROP command. Multi-AZ (Option C) provides failover for infrastructure failures, not data recovery. Monitoring tools (Options D, E) observe performance, not restore data. Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html
Q2
A company runs a web application on EC2 instances. Traffic varies throughout the day — high during business hours and low at night. The team wants the number of instances to automatically increase during peak hours and decrease during off-peak. Which service provides this?
A AWS CloudFormation
B EC2 Auto Scaling, which automatically adjusts the number of EC2 instances based on demand using scaling policies (e.g., target tracking on CPU utilization)
C Elastic Load Balancing
D Amazon CloudWatch
Correct Answer
EC2 Auto Scaling, which automatically adjusts the number of EC2 instances based on demand using scaling policies (e.g., target tracking on CPU utilization)
Explanation
EC2 Auto Scaling dynamically adds or removes instances based on scaling policies. Target tracking policies maintain a target metric (e.g., 60% average CPU), automatically scaling out when demand increases and scaling in when it decreases. CloudFormation (Option A) deploys infrastructure but does not dynamically scale. ELB (Option C) distributes traffic but does not add/remove instances. CloudWatch (Option D) monitors metrics that trigger scaling but does not scale itself. Learn more: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
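Target tracking can be approximated with a simple proportional formula: the new capacity is roughly ceil(current_instances × actual_metric / target_metric). This is a sketch of the idea only; the real Auto Scaling algorithm also applies cooldowns, instance warm-up, and min/max bounds.

```python
import math

def target_tracking_capacity(current_instances, actual_cpu, target_cpu):
    """Approximate desired capacity so average CPU returns to the target.
    Simplified: real EC2 Auto Scaling also honors cooldowns, warm-up,
    and the group's min/max size limits."""
    return math.ceil(current_instances * actual_cpu / target_cpu)

# 4 instances averaging 90% CPU against a 60% target -> scale out to 6
print(target_tracking_capacity(4, 90, 60))   # 6
# 6 instances averaging 20% CPU against a 60% target -> scale in to 2
print(target_tracking_capacity(6, 20, 60))   # 2
```

The intuition: spreading the same total load over more instances lowers the average per-instance CPU back toward the target, and the reverse during scale-in.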
Q3
A company needs to decouple its order processing system. Currently, the web tier directly calls the processing tier. If the processing tier is slow or unavailable, the web tier also becomes unresponsive. Which service decouples these components?
A Amazon CloudFront
B Amazon SQS (Simple Queue Service), which acts as a buffer between the web tier and processing tier. The web tier sends messages to the queue, and the processing tier reads from the queue at its own pace
C Amazon Route 53
D AWS Direct Connect
E Amazon Kinesis
Correct Answer
Amazon SQS (Simple Queue Service), which acts as a buffer between the web tier and processing tier. The web tier sends messages to the queue, and the processing tier reads from the queue at its own pace
Explanation
SQS provides asynchronous message queuing. The web tier places an order message in the queue and immediately returns a response to the user. The processing tier processes messages independently. If the processing tier is slow, messages simply wait in the queue. CloudFront (Option A) is a CDN. Route 53 (Option C) is DNS. Direct Connect (Option D) is network connectivity. Kinesis (Option E) is for streaming data, not job queuing. Learn more: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
Q4
A company has an application that uses an Application Load Balancer (ALB) with 4 EC2 instances. What happens to incoming requests if one of the 4 instances fails its health check?
A All traffic is stopped until the failed instance recovers
B The ALB stops sending traffic to the unhealthy instance and distributes requests only among the remaining 3 healthy instances
C The ALB continues sending traffic to all 4 instances including the unhealthy one
D The ALB terminates all 4 instances and launches new ones
Correct Answer
The ALB stops sending traffic to the unhealthy instance and distributes requests only among the remaining 3 healthy instances
Explanation
The ALB continuously monitors target health using configured health checks. When an instance fails its health check, the ALB marks it as unhealthy and stops routing new requests to it. Existing connections are allowed to drain. Traffic is distributed among healthy targets. The ALB does not stop all traffic (Option A), route to unhealthy targets (Option C), or terminate instances (Option D — that is Auto Scaling's responsibility). Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html
Q5
A company needs to choose a disaster recovery (DR) strategy for a critical application. The CTO wants to understand the trade-offs between different approaches. Which TWO statements are correct about DR strategies? (Choose TWO)
A Backup and Restore has the lowest cost but the highest RTO (recovery time) because infrastructure must be rebuilt from backups during recovery
B Multi-Site Active-Active has the lowest RTO (near-zero) but the highest cost because full production infrastructure runs simultaneously in two or more regions
C Pilot Light costs more than Multi-Site Active-Active
D Warm Standby has a higher RTO than Backup and Restore
Correct Answers
Backup and Restore has the lowest cost but the highest RTO (recovery time) because infrastructure must be rebuilt from backups during recovery
Multi-Site Active-Active has the lowest RTO (near-zero) but the highest cost because full production infrastructure runs simultaneously in two or more regions
Explanation
DR strategies form a spectrum of cost vs. RTO: Backup/Restore (cheapest, highest RTO: hours), Pilot Light (low cost, RTO: minutes to hours), Warm Standby (moderate cost, RTO: minutes), Multi-Site (highest cost, RTO: near-zero). Pilot Light (Option C) costs much less than Multi-Site. Warm Standby (Option D) has a lower RTO than Backup/Restore, not higher. Learn more: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
Q6
A company uses Amazon S3 to store important business documents. An employee accidentally deletes a critical file. How can the company recover the deleted file if S3 versioning was enabled on the bucket?
A The file is permanently lost and cannot be recovered
B With versioning enabled, deleting an object places a delete marker on it. The previous versions are preserved. The file can be recovered by deleting the delete marker or by retrieving a specific previous version ID
C Contact AWS Support to recover the file
D Restore from a CloudFormation template
Correct Answer
With versioning enabled, deleting an object places a delete marker on it. The previous versions are preserved. The file can be recovered by deleting the delete marker or by retrieving a specific previous version ID
Explanation
S3 versioning preserves all versions of every object. When an object is deleted, S3 places a delete marker (a special version) instead of permanently removing the object. Previous versions remain accessible. Removing the delete marker effectively restores the object. Files are not permanently lost (Option A). AWS Support (Option C) is not needed when versioning is enabled. CloudFormation (Option D) does not store file content. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html
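The delete-marker behavior can be modeled with a tiny in-memory version stack. This is a toy illustration only (real S3 assigns opaque version IDs and the delete marker is itself a version), but it shows why removing the marker restores the object:

```python
# Toy model of S3 versioning: each key maps to a stack of versions,
# newest last. A delete pushes a marker; removing it restores the object.

DELETE_MARKER = object()   # sentinel standing in for S3's delete marker

class VersionedBucket:
    def __init__(self):
        self.versions = {}                      # key -> list of versions

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        self.versions.setdefault(key, []).append(DELETE_MARKER)

    def get(self, key):
        stack = self.versions.get(key, [])
        if not stack or stack[-1] is DELETE_MARKER:
            return None                         # object appears deleted
        return stack[-1]

    def remove_delete_marker(self, key):
        stack = self.versions.get(key, [])
        if stack and stack[-1] is DELETE_MARKER:
            stack.pop()                         # older version is current again

bucket = VersionedBucket()
bucket.put("report.pdf", b"v1 contents")
bucket.delete("report.pdf")                     # accidental delete
print(bucket.get("report.pdf"))                 # None: looks deleted
bucket.remove_delete_marker("report.pdf")
print(bucket.get("report.pdf"))                 # b'v1 contents': restored
```

The underlying data was never removed; only the marker made it invisible, which is exactly why recovery is instant.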
Q7
A company has an RDS MySQL database in a single Availability Zone. The operations team is concerned about downtime during infrastructure failures. Which RDS deployment option provides automatic failover to a standby instance in a different AZ?
A RDS read replicas
B RDS Multi-AZ deployment, which maintains a synchronous standby replica in a different AZ. If the primary instance fails, RDS automatically fails over to the standby within 60-120 seconds
C RDS Performance Insights
D RDS Proxy
E RDS snapshot restore
Correct Answer
RDS Multi-AZ deployment, which maintains a synchronous standby replica in a different AZ. If the primary instance fails, RDS automatically fails over to the standby within 60-120 seconds
Explanation
Multi-AZ maintains a synchronous standby replica. During a failure (AZ outage, instance failure, storage issue), RDS performs an automatic DNS failover to the standby, typically completing within 60-120 seconds. Read replicas (Option A) are for read scaling with asynchronous replication and do not provide automatic failover to the replica. Performance Insights (Option C) monitors queries. Proxy (Option D) manages connections. Snapshot restore (Option E) creates a new instance. Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Q8
A company deploys an application using a single EC2 instance. The instance has a 500 GB EBS volume with important data. How should the architect protect this data from accidental loss?
A Use instance store volumes instead of EBS for better data protection
B Create regular EBS snapshots, which are stored in S3 with 99.999999999% durability. Snapshots capture the state of the volume and can be used to restore data or create new volumes
C Store a backup copy of the data on the same EBS volume in a different folder
D Rely on the EBS volume's built-in durability — no additional protection is needed
Correct Answer
Create regular EBS snapshots, which are stored in S3 with 99.999999999% durability. Snapshots capture the state of the volume and can be used to restore data or create new volumes
Explanation
EBS snapshots are incremental backups stored in S3 (highly durable). They capture a point-in-time copy of the volume and can be used to restore data. Instance store (Option A) is ephemeral; data is lost when the instance stops. Same-volume backup (Option C) does not protect against volume failure. Although EBS volumes are themselves durable and highly available (Option D), that alone does not protect against accidental deletion, corruption, or an AZ failure; snapshots do. Learn more: https://docs.aws.amazon.com/ebs/latest/userguide/EBSSnapshots.html
Q9
A company has a web application where users upload images. The application stores images on the local EBS volume of a single EC2 instance. The team wants to add more EC2 instances behind a load balancer for scalability. What is the problem with storing images on EBS, and what is the solution?
A EBS volumes are too slow for image storage
B EBS volumes are attached to a single instance. When users are load-balanced across multiple instances, an image uploaded to Instance A is not accessible from Instance B. The solution is to use Amazon S3 for shared image storage, which is accessible from all instances
C EBS volumes cannot store image files
D EBS volumes are automatically shared across all instances in an Auto Scaling group
Correct Answer
EBS volumes are attached to a single instance. When users are load-balanced across multiple instances, an image uploaded to Instance A is not accessible from Instance B. The solution is to use Amazon S3 for shared image storage, which is accessible from all instances
Explanation
EBS volumes attach to a single EC2 instance (except io2 Multi-Attach for specialized cases). In a multi-instance architecture, a shared storage layer is needed. S3 provides virtually unlimited, durable, and accessible-from-anywhere storage. Alternatively, Amazon EFS provides shared file system storage. EBS performance (Option A) and file type capability (Option C) are not issues. EBS is NOT automatically shared (Option D). Learn more: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
Q10
A company runs a website with unpredictable traffic. Some days it receives 100 visitors, other days 100,000. The company wants to avoid paying for idle servers during low-traffic periods. Which compute model scales to zero when not in use?
A EC2 Reserved Instances
B AWS Lambda, which runs code in response to events and charges only for the actual compute time consumed. There is no charge when the function is not executing
C EC2 On-Demand Instances with Auto Scaling
D Amazon Lightsail
Correct Answer
AWS Lambda, which runs code in response to events and charges only for the actual compute time consumed. There is no charge when the function is not executing
Explanation
Lambda is a serverless compute service that scales from zero to thousands of concurrent executions automatically. You pay only for the milliseconds your code executes — no charge when idle. Reserved Instances (Option A) require upfront commitment. Auto Scaling (Option C) can scale down but maintains minimum instances. Lightsail (Option D) charges for running instances regardless of traffic. Learn more: https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
Q11
A company uses an SQS queue to process customer orders. The consumer application processes a message and deletes it from the queue. If the application crashes during processing before deleting the message, what happens to that message?
A The message is permanently lost
B After the visibility timeout expires, the message becomes visible in the queue again and can be received by another consumer for reprocessing
C SQS automatically completes the processing
D The message is moved to a dead-letter queue immediately
E The message remains invisible forever
Correct Answer
After the visibility timeout expires, the message becomes visible in the queue again and can be received by another consumer for reprocessing
Explanation
When a consumer receives a message, SQS makes it invisible for the duration of the visibility timeout. If the consumer does not delete the message (e.g., crashes), the message reappears in the queue after the timeout expires, allowing another consumer to process it. Messages are not lost (Option A). SQS does not process messages (Option C). DLQ receives messages only after exceeding the maximum receive count (Option D). Messages do not stay invisible forever (Option E). Learn more: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
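The visibility-timeout lifecycle can be simulated with timestamps. This is a toy single-process model (real SQS is a distributed service with at-least-once delivery), but it demonstrates the invisible-then-reappear behavior:

```python
class ToyQueue:
    """Minimal simulation of the SQS visibility timeout (illustrative only)."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}            # msg_id -> (body, invisible_until)
        self._next_id = 0

    def send(self, body):
        self.messages[self._next_id] = (body, 0.0)   # immediately visible
        self._next_id += 1

    def receive(self, now):
        for msg_id, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # Receiving hides the message for the timeout window.
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None                   # nothing visible right now

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)   # successful processing removes it

q = ToyQueue(visibility_timeout=30)
q.send("order-1001")
print(q.receive(now=0))               # (0, 'order-1001'): consumer A gets it
# Consumer A crashes without calling delete()...
print(q.receive(now=10))              # None: message still invisible
print(q.receive(now=31))              # (0, 'order-1001'): visible again
```

Calling `delete()` after successful processing is what ends the cycle; skipping it (as a crash would) lets another consumer retry the message.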
Q12
A company builds a web application and wants to distribute incoming traffic across multiple EC2 instances. Which AWS service performs this function?
A Amazon Route 53
B Elastic Load Balancing (ELB), which automatically distributes incoming application traffic across multiple targets (EC2 instances, containers, IPs) in one or more Availability Zones
C Amazon CloudFront
D AWS Direct Connect
Correct Answer
Elastic Load Balancing (ELB), which automatically distributes incoming application traffic across multiple targets (EC2 instances, containers, IPs) in one or more Availability Zones
Explanation
Elastic Load Balancing distributes traffic across targets for better availability and fault tolerance. It offers three types: Application Load Balancer (HTTP/HTTPS), Network Load Balancer (TCP/UDP), and Gateway Load Balancer (third-party appliances). Route 53 (Option A) is DNS, which resolves domain names but does not actively distribute requests like a load balancer. CloudFront (Option C) is a CDN. Direct Connect (Option D) is a dedicated network connection. Learn more: https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html
Q13
A company needs to ensure high availability for its three-tier web application (web tier, application tier, database tier). Which TWO design principles are essential? (Choose TWO)
A Deploy each tier across at least two Availability Zones so that if one AZ fails, the application continues operating in the other AZ
B Use Elastic Load Balancing in front of the web and application tiers to distribute traffic and perform health checks that route traffic only to healthy instances
C Deploy all three tiers in a single AZ for lower latency between tiers
D Use a single large EC2 instance per tier for simplicity
Correct Answers
Deploy each tier across at least two Availability Zones so that if one AZ fails, the application continues operating in the other AZ
Use Elastic Load Balancing in front of the web and application tiers to distribute traffic and perform health checks that route traffic only to healthy instances
Explanation
Multi-AZ deployment (Option A) protects against AZ-level failures. Load balancing (Option B) distributes traffic and automatically removes unhealthy targets. Together, these ensure that a single failure does not bring down the application. Single AZ (Option C) creates a single point of failure. Single instances per tier (Option D) have no redundancy. Learn more: https://docs.aws.amazon.com/whitepapers/latest/aws-overview-deployment-options/multi-az-deployments.html
Q14
A company has an application that stores user session data. If the EC2 instance hosting the application is terminated, all session data is lost. Where should session data be stored to survive instance failures?
A In the EC2 instance's local memory
B In an external store like Amazon ElastiCache for Redis or Amazon DynamoDB, which persists data independently of any EC2 instance's lifecycle
C In the EC2 instance's instance store volume
D In an environment variable on the instance
Correct Answer
In an external store like Amazon ElastiCache for Redis or Amazon DynamoDB, which persists data independently of any EC2 instance's lifecycle
Explanation
External session stores (ElastiCache Redis, DynamoDB) persist data independently of EC2 instances. Any replacement instance can retrieve session data, providing continuity for users. Local memory (Option A) is lost when the instance terminates. Instance store (Option C) is ephemeral — data is lost on stop/terminate. Environment variables (Option D) are instance-specific and non-persistent. Learn more: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/elasticache-use-cases.html
Q15
A company has an application that reads the same database records repeatedly. 80% of database queries are identical reads that return the same data. The database is under heavy load. What is the MOST effective way to reduce database load?
A Upgrade to a larger database instance
B Add Amazon ElastiCache (Redis or Memcached) as an in-memory caching layer. The application first checks the cache for data before querying the database. This reduces database load by serving repeated reads from sub-millisecond in-memory cache
C Add more database read replicas only
D Increase the database connection limit
Correct Answer
Add Amazon ElastiCache (Redis or Memcached) as an in-memory caching layer. The application first checks the cache for data before querying the database. This reduces database load by serving repeated reads from sub-millisecond in-memory cache
Explanation
Caching frequently accessed data in ElastiCache provides sub-millisecond read latency and offloads 80% of read traffic from the database. This is the most effective solution for read-heavy, repetitive query patterns. Larger instance (Option A) provides some relief but is more expensive. Read replicas (Option C) help but add replication lag and cost. Connection limits (Option D) do not reduce query load. Learn more: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/elasticache-use-cases.html
Q16
A company hosts its website's static files (images, CSS, JavaScript) on a web server in us-east-1. Users in Asia and Europe experience slow load times. Which AWS service delivers static content faster to global users?
A AWS Direct Connect
B Amazon CloudFront, a content delivery network (CDN) that caches content at edge locations around the world. Users receive content from the nearest edge location, reducing latency significantly
C AWS Global Accelerator
D Amazon Route 53
Correct Answer
Amazon CloudFront, a content delivery network (CDN) that caches content at edge locations around the world. Users receive content from the nearest edge location, reducing latency significantly
Explanation
CloudFront caches static content at 400+ edge locations worldwide. When a user in Asia requests an image, CloudFront serves it from a nearby edge location instead of routing the request to us-east-1. This reduces latency from hundreds of milliseconds to single-digit milliseconds. Direct Connect (Option A) is a private network connection. Global Accelerator (Option C) optimizes routing but does not cache content. Route 53 (Option D) provides DNS, not content delivery. Learn more: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Q17
A company stores data in an Amazon S3 bucket. Data analysts need to run SQL queries against CSV files stored in S3 without loading the data into a separate database. Which service enables this?
A Amazon RDS
B Amazon Athena, which runs standard SQL queries directly against data in S3. It is serverless — there is no infrastructure to manage, and you pay only for the data scanned per query
C Amazon DynamoDB
D Amazon Redshift (provisioned cluster)
E AWS Glue
Correct Answer
Amazon Athena, which runs standard SQL queries directly against data in S3. It is serverless — there is no infrastructure to manage, and you pay only for the data scanned per query
Explanation
Athena is a serverless query service that analyzes data directly in S3 using standard SQL. It supports CSV, JSON, Parquet, ORC, and other formats. No data loading or infrastructure is required. RDS (Option A) requires data import. DynamoDB (Option C) is a NoSQL database. Redshift provisioned (Option D) requires cluster management and data loading. Glue (Option E) is for ETL, not direct querying. Learn more: https://docs.aws.amazon.com/athena/latest/ug/what-is.html
Q18
A company's application stores user profile images. Images are uploaded once and read millions of times. Storage must be highly durable and cost-effective. Which storage service is BEST suited?
A Amazon EBS (Elastic Block Store)
B Amazon S3 Standard, which provides 99.999999999% (11 nines) durability, scales to unlimited objects, and is optimized for frequently accessed data
C Amazon EC2 instance store
D Amazon EFS (Elastic File System)
Correct Answer
Amazon S3 Standard, which provides 99.999999999% (11 nines) durability, scales to unlimited objects, and is optimized for frequently accessed data
Explanation
S3 Standard is designed for frequently accessed data with the highest durability (11 nines). It scales infinitely with pay-per-GB pricing. EBS (Option A) is block storage for EC2 and cannot be directly accessed by web users. Instance store (Option C) is ephemeral. EFS (Option D) is a file system for EC2 instances — more expensive than S3 for this use case. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html
Q19
A company has a DynamoDB table for an e-commerce application. The table uses product_id as the partition key. During a flash sale, a single popular product receives 90% of all read and write traffic. Other items receive minimal traffic. What is this problem called, and how should it be addressed?
A This is normal DynamoDB behavior and requires no changes
B This is a hot partition problem — one partition key value receives disproportionate traffic. Solutions include: caching hot items with DAX, adding a random suffix to the partition key (write sharding), or using a composite key to distribute traffic more evenly
C Increase the provisioned RCU and WCU to handle the traffic
D Switch to Amazon RDS for better performance
Correct Answer
This is a hot partition problem — one partition key value receives disproportionate traffic. Solutions include: caching hot items with DAX, adding a random suffix to the partition key (write sharding), or using a composite key to distribute traffic more evenly
Explanation
A hot partition occurs when one partition key value (the popular product) receives far more traffic than DynamoDB can serve from a single partition. DAX caches hot reads. Write sharding (product_id#random_suffix) distributes writes across multiple partitions. Increasing capacity (Option C) helps overall but does not fix the per-partition limit. RDS (Option D) may have the same issue as a row-level bottleneck. Learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html
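Write sharding from the explanation can be sketched in a few lines. The shard count and key format (`product_id#suffix`) are illustrative choices; the trade-off is that reads must fan out across all shards and merge the results:

```python
import random

SHARD_COUNT = 10   # illustrative: more shards = more even spread, wider read fan-out

def sharded_partition_key(product_id):
    """Append a random suffix so writes for one hot product land on
    different DynamoDB partitions instead of a single hot one."""
    return f"{product_id}#{random.randrange(SHARD_COUNT)}"

def all_shard_keys(product_id):
    """Reads must query every shard key and merge the results."""
    return [f"{product_id}#{n}" for n in range(SHARD_COUNT)]

# 1000 writes for the flash-sale product spread over at most 10 keys
keys = {sharded_partition_key("prod-777") for _ in range(1000)}
print(len(keys) <= SHARD_COUNT)       # True
print(all_shard_keys("prod-777")[0])  # prod-777#0
```

Each suffixed key hashes to its own partition, so the per-partition throughput limit applies to one tenth of the traffic instead of all of it.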
Q20
A company needs a database that can handle millions of read and write requests per second with single-digit millisecond latency. The data model is key-value (product ID → product details). Which AWS database is purpose-built for this?
A Amazon RDS MySQL
B Amazon DynamoDB, a fully managed NoSQL database that delivers consistent single-digit millisecond performance at any scale with automatic scaling
C Amazon Aurora
D Amazon Neptune
Correct Answer
Amazon DynamoDB, a fully managed NoSQL database that delivers consistent single-digit millisecond performance at any scale with automatic scaling
Explanation
DynamoDB is designed for high-scale, low-latency key-value workloads. It automatically distributes data across partitions and scales to millions of requests per second. RDS MySQL (Option A) and Aurora (Option C) are relational databases with row-locking overhead at this scale. Neptune (Option D) is a graph database. Learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
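To illustrate the key-value access pattern DynamoDB is built for, here is a hedged sketch of the parameters a single-key lookup takes; the table and attribute names are hypothetical.

```python
def build_get_item_request(table: str, product_id: str) -> dict:
    """Parameters for a DynamoDB GetItem call — a single-key lookup,
    the access pattern DynamoDB is optimized for."""
    return {
        "TableName": table,
        "Key": {"product_id": {"S": product_id}},
        # Eventually consistent reads cost half as much read capacity
        "ConsistentRead": False,
    }
```

With boto3 this would be invoked as `boto3.client("dynamodb").get_item(**build_get_item_request("products", "p-1001"))`.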
Q21
A company has a time-series database workload on RDS PostgreSQL that stores sensor readings. The table has grown to 2 billion rows and query performance has degraded significantly. Most queries filter by sensor_id and time_range. Which database strategy would MOST improve performance?
A Add more read replicas to distribute the query load
B Migrate to Amazon Timestream, which is purpose-built for time-series data with built-in time-series functions, automatic data tiering, and optimized storage
C Increase the RDS instance size to the maximum available
D Create a composite index on (sensor_id, timestamp) in the PostgreSQL table
E Migrate to DynamoDB with sensor_id as partition key and timestamp as sort key
Correct Answer
Migrate to Amazon Timestream, which is purpose-built for time-series data with built-in time-series functions, automatic data tiering, and optimized storage
Explanation
Amazon Timestream is purpose-built for time-series workloads with optimized storage (memory store for recent data, magnetic store for historical), built-in time-series functions, and automatic data lifecycle management. At 2 billion rows, Timestream's columnar storage and time-series query engine significantly outperform general-purpose RDBMS. More replicas (Option A) spread load but do not fix single-query performance. Indexing (Option D) helps but has limits at this scale. DynamoDB (Option E) works but lacks native time-series functions. Learn more: https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html
Q22
A company is building a real-time recommendation engine that needs to: (1) ingest user clickstream events at 100,000 events per second, (2) enrich events with user profile data, and (3) serve personalized recommendations with sub-50ms latency. Which THREE services should be used for each requirement? (Choose THREE)
A Amazon Kinesis Data Streams for ingesting 100,000 events/second in real-time
B AWS Lambda consuming Kinesis for real-time enrichment with user profile data from DynamoDB
C Amazon ElastiCache for Redis to cache and serve recommendations with sub-50ms latency
D Amazon S3 for storing recommendation results
E Amazon Redshift for serving real-time recommendations
Correct Answers
Amazon Kinesis Data Streams for ingesting 100,000 events/second in real-time
AWS Lambda consuming Kinesis for real-time enrichment with user profile data from DynamoDB
Amazon ElastiCache for Redis to cache and serve recommendations with sub-50ms latency
Explanation
Kinesis Data Streams (Option A) handles 100K events/sec across multiple shards. Lambda with Kinesis integration (Option B) processes and enriches each event with user data from DynamoDB. ElastiCache Redis (Option C) caches pre-computed recommendations for sub-50ms serving. S3 (Option D) cannot serve sub-50ms reads for individual users. Redshift (Option E) is for batch analytics, not sub-50ms real-time serving. Learn more: https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
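The Lambda-on-Kinesis enrichment step can be sketched as below, assuming the standard event-source-mapping event shape (base64-encoded payload under `Records[i].kinesis.data`). The in-memory profile dict stands in for the DynamoDB lookup, and the Redis write is left out.

```python
import base64
import json

# Assumption: profiles would come from DynamoDB; a dict stands in here.
PROFILES = {"u1": {"segment": "premium"}}

def handler(event, context):
    """Sketch of a Lambda consumer for a Kinesis event-source mapping."""
    enriched = []
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        payload["profile"] = PROFILES.get(payload.get("user_id"), {})
        enriched.append(payload)
    # Next step (not shown): write pre-computed recommendations to Redis.
    return enriched
```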
Q23
A company is processing satellite imagery files that are each 5 GB. The application on EC2 needs to download each file from S3, process it, and upload results back to S3. Current download speed is 200 MB/s but the team wants to maximize throughput. EC2 instances use 25 Gbps networking. What should the architect recommend?
A Enable S3 Transfer Acceleration on the bucket
B Use S3 multipart download (byte-range fetches) with multiple parallel connections to aggregate throughput up to the EC2 network limit
C Move the data to EBS volumes attached to the EC2 instances
D Use AWS DataSync to transfer files from S3 to the EC2 instance's local storage
Correct Answer
Use S3 multipart download (byte-range fetches) with multiple parallel connections to aggregate throughput up to the EC2 network limit
Explanation
S3 supports byte-range fetches (multipart download) where the application requests different byte ranges in parallel, aggregating throughput significantly. With 25 Gbps networking (~3.1 GB/s theoretical), parallel requests can approach the network limit. Transfer Acceleration (Option A) speeds internet-based uploads to S3, not EC2-to-S3 within AWS. EBS (Option C) adds provisioning complexity and EBS throughput limits. DataSync (Option D) is for scheduled data transfers, not high-performance processing pipelines. Learn more: https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance.html
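The byte-range approach can be sketched as a helper that splits an object into HTTP `Range` headers; the 5 GB object size comes from the question, the 128 MB part size is an illustrative assumption. Each range would then be fetched on its own thread, e.g. via `s3.get_object(Bucket=bucket, Key=key, Range=r)`.

```python
def byte_ranges(object_size: int, part_size: int) -> list[str]:
    """HTTP Range headers covering the whole object in part_size chunks.
    Range ends are inclusive, hence the -1."""
    return [
        f"bytes={start}-{min(start + part_size, object_size) - 1}"
        for start in range(0, object_size, part_size)
    ]
```

A 5 GB object with 128 MB parts yields 40 ranges, which a thread pool can fetch concurrently to approach the instance's network limit.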
Q24
A company needs to build an ETL pipeline that transforms 10 TB of raw JSON data in S3 into Parquet format, partitioned by date and customer_id, for downstream Athena queries. The transformation includes complex business logic with conditional mappings and data validation. Which service provides the MOST flexible and cost-effective serverless ETL?
A Amazon Athena CTAS (Create Table As Select) queries
B AWS Glue ETL jobs with Apache Spark using Glue dynamic frames for schema management and built-in partition writing
C Amazon EMR with a persistent Spark cluster
D AWS Lambda functions processing files individually
Correct Answer
AWS Glue ETL jobs with Apache Spark using Glue dynamic frames for schema management and built-in partition writing
Explanation
AWS Glue provides serverless Apache Spark-based ETL with automatic scaling, built-in schema inference (dynamic frames), and native S3 partitioned output. It handles complex transformations on 10 TB data cost-effectively (pay per DPU-hour). Athena CTAS (Option A) works for simple transformations but has limited business logic support. EMR (Option C) requires cluster management. Lambda (Option D) has a 15-minute timeout and memory limits unsuitable for 10 TB processing. Learn more: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-data-catalog-hive.html
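The partitioned output layout that Glue writes (and that Athena's partition pruning relies on) can be illustrated with a small grouping helper. This is plain Python, not Glue API code, and the record field names are assumptions.

```python
from collections import defaultdict

def plan_partitions(records: list[dict], prefix: str) -> dict[str, list[dict]]:
    """Group records by Hive-style partition path
    (date=.../customer_id=.../), the layout Athena prunes on."""
    out = defaultdict(list)
    for r in records:
        path = f"{prefix}/date={r['date']}/customer_id={r['customer_id']}/"
        out[path].append(r)
    return dict(out)
```

In a real Glue job the Spark/dynamic-frame writer produces this layout itself when given `date` and `customer_id` as partition keys.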
Q25
A company's machine learning inference API needs to serve predictions with 100ms latency. The ML model is 15 GB and takes 30 seconds to load into memory. The API receives variable traffic: 500 requests/second during peaks and near-zero during off-hours. How should the architect deploy the model for OPTIMAL performance and cost?
A Deploy the model on Lambda with 10 GB memory and container image packaging
B Deploy on SageMaker real-time endpoints with auto-scaling and a minimum instance count of 1 during off-hours
C Deploy on a single large EC2 instance running 24/7
D Use SageMaker Serverless Inference for all traffic
Correct Answer
Deploy on SageMaker real-time endpoints with auto-scaling and a minimum instance count of 1 during off-hours
Explanation
SageMaker real-time endpoints with auto-scaling handle variable traffic efficiently. A minimum of 1 instance avoids cold starts (30-second model loading). The endpoint scales up during peaks and down during off-hours. Lambda (Option A) cannot hold 15 GB models (10 GB memory max and 10 GB storage max). Single EC2 (Option C) wastes money during off-hours with no scaling. SageMaker Serverless (Option D) has cold start issues for 15 GB models and a 6 GB memory limit. Learn more: https://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-auto-scaling.html
Q26
A company needs to deploy a graph database for a social networking application that must traverse relationship queries (e.g., 'friends of friends who like product X') across a dataset of 500 million nodes. Query latency must be under 100ms. Which database service is BEST suited?
A Amazon DynamoDB with graph-like data modeling using adjacency lists
B Amazon Neptune with the Gremlin or SPARQL query language
C Amazon RDS PostgreSQL with recursive CTEs for graph traversal
D Amazon OpenSearch Service with nested document relationships
Correct Answer
Amazon Neptune with the Gremlin or SPARQL query language
Explanation
Amazon Neptune is a purpose-built graph database optimized for traversing relationships with sub-100ms latency at scale. It supports Gremlin (property graph) and SPARQL (RDF) query languages designed for graph traversal. DynamoDB adjacency lists (Option A) work for simple graphs but not efficient multi-hop traversals at 500M nodes. RDS recursive CTEs (Option C) perform poorly for deep traversals. OpenSearch (Option D) is for full-text search, not graph traversal. Learn more: https://docs.aws.amazon.com/neptune/latest/userguide/intro.html
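A hedged sketch of what the friends-of-friends traversal might look like in Gremlin, built as a string here for illustration; the vertex and edge labels (`person`, `knows`, `likes`) and property keys are assumptions, since the question defines no schema.

```python
def fof_likes_query(user_id: str, product_id: str) -> str:
    """Gremlin traversal: friends-of-friends of a user who like a product.
    Labels and property keys are assumed, not from a real schema."""
    return (
        f"g.V().has('person','id','{user_id}')"
        ".out('knows').out('knows').dedup()"
        f".where(out('likes').has('product','id','{product_id}'))"
    )
```

Each `.out('knows')` hop is a constant-time edge traversal in a graph store, which is why this stays fast where a relational self-join degrades.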
Q27
A company has a DynamoDB table with provisioned capacity of 10,000 WCUs and 50,000 RCUs. CloudWatch shows average utilization of 20% for writes and 15% for reads, with daily peaks reaching 80% for 2 hours. Which optimization would reduce costs the MOST while ensuring performance during peaks?
A Switch to DynamoDB on-demand capacity mode
B Enable DynamoDB auto scaling with target utilization set to 70%. This scales capacity down during low usage and up during peaks, matching provisioned capacity to actual demand
C Reduce provisioned capacity to match average utilization (2,000 WCU, 7,500 RCU)
D Add a DAX cluster to reduce the read capacity requirement
Correct Answer
Enable DynamoDB auto scaling with target utilization set to 70%. This scales capacity down during low usage and up during peaks, matching provisioned capacity to actual demand
Explanation
Auto scaling with 70% target adjusts capacity dynamically. During the 22 hours of low usage, capacity scales down significantly. During the 2-hour peak, it scales up. This matches provisioned capacity to demand, reducing costs by approximately 60-70% compared to fixed over-provisioning. On-demand (Option A) is more expensive for predictable high-volume workloads. Reducing to average (Option C) would cause throttling during peaks. DAX (Option D) helps reads but does not address write over-provisioning. Learn more: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
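DynamoDB auto scaling is configured through the Application Auto Scaling API. Below is a sketch of the target-tracking policy parameters for read capacity (the write policy is analogous with the `Write...` dimension and metric); the table name is hypothetical, and the 70% target mirrors the answer.

```python
def scaling_policy_params(table: str, target_pct: float = 70.0) -> dict:
    """Parameters for Application Auto Scaling's put_scaling_policy call,
    targeting 70% read-capacity utilization on a DynamoDB table."""
    return {
        "PolicyName": f"{table}-read-target-tracking",
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_pct,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    }
```

With boto3 this would be passed to `boto3.client("application-autoscaling").put_scaling_policy(...)`, after registering the table dimension as a scalable target with min/max capacity bounds.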
Q28
A company runs 50 RDS PostgreSQL instances across development (30), staging (10), and production (10) environments. All instances are db.r6g.xlarge. Development instances are only used during business hours. Staging is used for testing 3 days per week. Production runs 24/7. How should the architect optimize the total RDS spend?
A Purchase Reserved Instances for all 50 instances
B Purchase Reserved Instances for the 10 production instances. Use AWS Instance Scheduler to stop development instances outside business hours and staging instances on non-testing days. Keep dev and staging as On-Demand
C Switch all 50 instances to Aurora Serverless v2
D Consolidate all databases into a single large RDS instance
Correct Answer
Purchase Reserved Instances for the 10 production instances. Use AWS Instance Scheduler to stop development instances outside business hours and staging instances on non-testing days. Keep dev and staging as On-Demand
Explanation
Production instances run 24/7 with predictable usage — ideal for Reserved Instances (up to 72% savings). Development instances only need to run during business hours (~40% of the time), so scheduling saves ~60%. Staging runs 3/7 days, so scheduling saves ~57%. On-Demand for dev/staging with scheduling is cheaper than unused Reserved capacity. RIs for all (Option A) wastes money on stopped instances. Aurora Serverless (Option C) requires migration effort and may not be cheaper for all workloads. Single instance (Option D) is an anti-pattern. Learn more: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithReservedDBInstances.html
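The scheduling savings can be sanity-checked with quick arithmetic; the 12-hour-weekday business schedule (60 hours/week) is an assumption, and staging's 3 full days give 72 hours/week.

```python
def scheduled_savings_pct(hours_on_per_week: float) -> float:
    """Percent of On-Demand cost avoided by stopping an instance
    outside its schedule (a week has 168 hours)."""
    return round(100 * (1 - hours_on_per_week / 168), 1)
```

Dev at 60 hours/week saves roughly 64%, staging at 72 hours/week roughly 57%, consistent with the figures in the explanation.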
Q29
A company's monthly AWS bill has the following breakdown: EC2 On-Demand ($40,000), EBS storage ($15,000), Data Transfer ($12,000), RDS ($8,000), S3 ($5,000). Which THREE actions would provide the GREATEST total savings? (Choose THREE)
A Purchase Compute Savings Plans to cover the steady-state EC2 usage, saving up to 66% on the $40,000 EC2 spend
B Right-size EBS volumes by analyzing CloudWatch metrics — switch over-provisioned io2 volumes to gp3 where possible
C Deploy CloudFront to reduce data transfer costs by caching content at edge locations at lower per-GB pricing
D Move all S3 data to S3 Glacier regardless of access patterns
E Increase the number of RDS read replicas to reduce costs
Correct Answers
Purchase Compute Savings Plans to cover the steady-state EC2 usage, saving up to 66% on the $40,000 EC2 spend
Right-size EBS volumes by analyzing CloudWatch metrics — switch over-provisioned io2 volumes to gp3 where possible
Deploy CloudFront to reduce data transfer costs by caching content at edge locations at lower per-GB pricing
Explanation
Savings Plans (Option A) can save ~$26,400/month (66% of $40K). Right-sizing EBS (Option B) — switching io2 to gp3 — provides up to 80% savings on many volumes. CloudFront (Option C) cuts data transfer costs by serving cached content from edge locations, and its per-GB pricing is lower than EC2 data transfer out. Glacier for all S3 (Option D) would break application access patterns. More read replicas (Option E) increase costs, not reduce them. Learn more: https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
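The claimed savings can be sanity-checked with back-of-envelope arithmetic. Only the 66% Savings Plan rate comes from the explanation; the EBS right-sizing and CloudFront percentages below are illustrative assumptions, not quoted AWS prices.

```python
BILL = {"EC2": 40_000, "EBS": 15_000, "Transfer": 12_000, "RDS": 8_000, "S3": 5_000}

def estimated_savings(sp_pct=66, ebs_pct=20, cdn_pct=30) -> dict:
    """Back-of-envelope monthly savings in dollars. Only the 66% Savings
    Plan rate is from the answer; the other percentages are assumed."""
    return {
        "savings_plan": BILL["EC2"] * sp_pct / 100,
        "ebs_rightsizing": BILL["EBS"] * ebs_pct / 100,
        "cloudfront": BILL["Transfer"] * cdn_pct / 100,
    }
```

Under these assumptions the three actions together recover roughly $33,000 of the $80,000 monthly bill, which is why they dominate the smaller RDS and S3 line items.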
Q30
A company has a multi-account AWS Organization. Different teams provision resources without cost awareness. Monthly spending has grown from $50,000 to $200,000 in 6 months. Which comprehensive approach establishes financial governance?
A Send the monthly AWS bill to all team leads via email
B Implement AWS Budgets with per-account and per-project budgets, enforce tagging via SCPs, use AWS Cost Anomaly Detection for unusual spending alerts, and schedule monthly Cost Explorer reviews with team leads
C Restrict all teams to the AWS Free Tier only
D Disable all non-essential AWS services using SCPs
Correct Answer
Implement AWS Budgets with per-account and per-project budgets, enforce tagging via SCPs, use AWS Cost Anomaly Detection for unusual spending alerts, and schedule monthly Cost Explorer reviews with team leads
Explanation
Comprehensive cloud financial management requires: budgets with alerts (proactive awareness), enforced tagging (cost attribution), anomaly detection (catching unexpected spikes), and regular reviews (accountability). Email (Option A) is passive and easily ignored. Free Tier (Option C) is impractical for production workloads. Disabling services (Option D) blocks legitimate work. This aligns with the Well-Architected Cost Optimization pillar's guidance on Cloud Financial Management. Learn more: https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-managing-costs.html

Want More Practice?

These are just the free questions. Unlock the full AWS Solutions Architect – Associate exam library with hundreds of additional questions, timed practice mode, and progress tracking.
