Free Practice Questions PCA - Professional Cloud Architect 30 Questions with Answers
FREE QUESTIONS

PCA - Professional Cloud Architect
Practice Questions

30 free questions with correct answers and detailed explanations.

30 Free Questions
2 Free Exams
100% With Explanations

PCA Practice Set-01

15 questions
Q1
Company Overview - KnightMotives is a car manufacturer specializing in autonomous, self-driving vehicles, including Battery Electric Vehicles (BEVs), hybrids, and traditional internal combustion engine (ICE) vehicles. While KnightMotives has made strides with the in-vehicle experience in their BEV fleet, the hybrid and ICE vehicles have yet to implement these new systems and are viewed poorly by critics and drivers. The lack of modern in-vehicle technology in hybrid and ICE vehicles has resulted in declining sales and customer satisfaction. KnightMotives wants to modernize the consumer experience across all vehicles within five years. Artificial Intelligence offers a unique opportunity to revolutionize the in-vehicle experience, as well as the shopping/buying and service/maintenance experience. Investment in this new technology will require a shift in financial priorities on a global scale. KnightMotives also wants to improve their online ordering system, which is unreliable. Systems for customers to build their vehicle online for acquisition through a dealer are not delivering the data or reliability that dealers need, causing a strain in the relationship between KnightMotives and dealers. Service technicians and sales staff need better tooling to enhance dealer successes, including built-to-order vehicles.

Solution Concept - KnightMotives wants to shift from manufacturing cars to creating a complete and compelling "automotive experience." The strategy prioritizes delivering a consistent experience across all models, developing AI-powered features, generating new revenue from data monetization, adopting a digital focus to differentiate their brand from competitors, and developing better tools for mechanics and salespeople.

Existing Technical Environment - KnightMotives's IT is largely on-premises with some applications on major cloud platforms. Their supply chain runs on an outdated mainframe, and Enterprise Resource Planning (ERP) is also outdated, making new promotions and dealer discounts difficult to implement. Dealers have no budget for new equipment. There is fragmentation across vehicles with multiple code bases, and significant technical debt from supporting backwards compatibility. Network connectivity to manufacturing plants and vehicle connectivity in rural areas are challenges.

Business Requirements - Key business requirements include fostering a personalized relationship with the driver and delivering a cohesive experience across all models. Creating a better build-to-order model will reduce time on the lot and provide transparency for both dealers and customers. Additionally, KnightMotives seeks to monetize corporate data to finance new technology investments, as their current AI infrastructure is obsolete and corporate data remains siloed. Security is a paramount concern due to past data breaches. Adherence to European Union (EU) data protection regulations, especially for emerging autonomous platforms, is critical. KnightMotives plans to make significant investments in fully autonomous driving capabilities, with initial implementation targeting regions with favorable regulatory environments. Prioritizing employee upskilling, attracting top-tier talent, and fostering better communication between business and technical teams are also critical objectives.
Technical Requirements -
• Modernizing the in-vehicle experience includes developing a consistent user experience (UX) that seamlessly integrates AI-powered features across all models, updating in-vehicle hardware and software in legacy models to support new UX features and AI capabilities, and ensuring reliable network connectivity, especially in rural areas, to support real-time AI features and data transmission.
• Network upgrades are necessary to support increased data traffic and improve connectivity between plants and headquarters.
• IT infrastructure modernization requires adopting a hybrid cloud strategy to leverage the benefits of both on-premises and cloud infrastructure, and gradually modernizing or replacing legacy systems to improve efficiency and agility.
• Autonomous vehicle development and testing requires investing in cutting-edge AI and machine learning technologies, building a robust simulation environment, and ensuring compliance with evolving regulations related to autonomous vehicles.
• Data monetization and insights requires implementing a robust data management platform, strict data security and privacy measures, and a scalable AI/ML infrastructure.
• Increased focus on security and risk management involves implementing a comprehensive security framework to protect against cyber threats and data breaches, developing an incident response plan, and providing security awareness training to employees.
• Providing a delightful experience for dealers and customers requires improving the online build-to-order system; developing modern dealer tools to streamline dealer operations, including sales, service, and inventory management; and implementing a comprehensive Customer Relationship Management (CRM) system to track customer interactions, personalize experiences, and improve customer satisfaction.

Executive Statement - KnightMotives is committed to enhancing safety and saving lives by leveraging an extensive body of data — encompassing driving, road conditions, behavioral studies, and crash safety statistics — to create compelling digital experiences for drivers. Our AI consistently outperforms national safety statistics, ensuring the unique and coveted KnightMotives experience is aligned across all our vehicle models. Michael Knight, KnightMotives CEO

For this question, refer to the KnightMotives Automotive case study. You are responsible for designing the network infrastructure architecture for KnightMotives's new environment on Google Cloud. You need to design the new VPC topology. You want to ensure guaranteed bandwidth and low latency between the plants and Google Cloud resources. What should you do?
A Create a Standard Tier VPC, and ensure a subnet is available in the region closest to a plant. Establish a Cloud Interconnect between each subnet and the local plant.
B Create a Standard Tier VPC, and ensure a subnet is available in the region closest to a plant. Establish Direct Peering between Google's Edge Network and the local plant.
C Create a Premium Tier VPC, and ensure a subnet is available in the region closest to a plant. Establish a Cloud Interconnect between each subnet and the local plant.
D Create a Premium Tier VPC, and ensure a subnet is available in the region closest to a plant. Establish Direct Peering between Google's Edge Network and the local plant.
Correct Answer
Create a Premium Tier VPC, and ensure a subnet is available in the region closest to a plant. Establish a Cloud Interconnect between each subnet and the local plant.
Explanation
Cloud Interconnect provides dedicated, private connectivity with guaranteed bandwidth and low latency between on-premises networks (plants) and Google Cloud VPCs, making it the right choice for mission-critical, latency-sensitive workloads. Premium Tier networking ensures traffic traverses Google's global backbone for optimal performance. Standard Tier routes over the public internet and cannot guarantee latency. Direct Peering is not a managed service and lacks SLAs suitable for production workloads. See: https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview
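As a rough sketch of the gcloud steps involved (network, router, region, and interconnect names are illustrative placeholders, not from the case study):

# Make Premium Tier the project default so egress rides Google's backbone
gcloud compute project-info update --default-network-tier=PREMIUM

# Cloud Router plus a Dedicated Interconnect VLAN attachment in the region
# closest to a plant (assumes the physical interconnect is already provisioned)
gcloud compute routers create plant-router \
    --network=knight-vpc --region=us-central1 --asn=65001
gcloud compute interconnects attachments dedicated create plant-attachment \
    --interconnect=knight-interconnect --router=plant-router --region=us-central1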
Q2
Your company is rapidly deploying containerized microservices on Google Kubernetes Engine (GKE) using a robust CI/CD pipeline. Security is a top priority, and you need to implement a comprehensive and efficient strategy to prevent container image vulnerabilities from reaching your GKE production environment. What should you do?
A Review the security reports generated by Artifact Analysis for each container image before deployment to GKE.
B Incorporate vulnerability scanning before building container images, and use Google-maintained base images for your container deployments.
C Enable Artifact Analysis for the container images, and stop deployment if critical vulnerabilities are found.
D Use a custom security policy within your container image that restricts access to specific network ports and resources.
E Enable Shielded GKE Nodes on the production cluster to automatically block the execution of container images with known vulnerabilities.
Correct Answers
Incorporate vulnerability scanning before building container images, and use Google-maintained base images for your container deployments.
Enable Artifact Analysis for the container images, and stop deployment if critical vulnerabilities are found.
Explanation
Artifact Analysis scans container images for known vulnerabilities (CVEs) and should be enabled to automatically block deployments with critical findings, enforcing a security gate in the CI/CD pipeline. Using Google-maintained base images reduces the attack surface because Google continuously patches them. Incorporating vulnerability scanning before building (shift-left security) catches issues early. Shielded GKE Nodes protect node integrity but do not block image-level vulnerabilities. Custom security policies restrict runtime access but do not prevent deploying vulnerable images. See: https://cloud.google.com/artifact-analysis/docs/container-scanning-overview
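As a hedged illustration of such a CI/CD gate using On-Demand Scanning (the image URI is a placeholder; verify the exact gcloud syntax against current documentation):

# Scan the freshly built image and capture the scan resource name
SCAN=$(gcloud artifacts docker images scan \
    us-docker.pkg.dev/my-project/my-repo/app:latest \
    --format='value(response.scan)')

# Block the pipeline if any CRITICAL vulnerability was found
if gcloud artifacts docker images list-vulnerabilities "$SCAN" \
    --format='value(vulnerability.effectiveSeverity)' | grep -q CRITICAL; then
  echo "Critical vulnerabilities found - blocking deployment"
  exit 1
fi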
Q3
Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?
A Configure a new load balancer for the new version of the API
B Reconfigure old clients to use a new endpoint for the new API
C Have the old API forward traffic to the new API based on the path
D Use separate backend pools for each API path behind the load balancer
Correct Answer
Use separate backend pools for each API path behind the load balancer
Explanation
Using separate backend pools (backend services) for each API version behind a single HTTP(S) Load Balancer allows URL-path-based routing while preserving the same SSL certificate and DNS records. This is the canonical approach for API versioning with Cloud Load Balancing. Creating a new load balancer would require new DNS/SSL entries. Reconfiguring old clients defeats the purpose of keeping the same endpoint. Having the old API proxy to the new one creates tight coupling and latency. See: https://cloud.google.com/load-balancing/docs/https/setting-up-url-map
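A minimal sketch of the path-based routing with gcloud (URL map and backend service names are hypothetical):

# Send /v2/* to the new API's backend service; all other paths stay on v1
gcloud compute url-maps add-path-matcher api-url-map \
    --path-matcher-name=api-v2-matcher \
    --default-service=api-v1-backend \
    --path-rules='/v2/*=api-v2-backend' \
    --new-hosts='*'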
Q4
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day. Your business analysts have experience only with using a SQL interface. How should you store the data to optimize it for ease of analysis?
A Load data into Google BigQuery
B Insert data into Google Cloud SQL
C Put flat files into Google Cloud Storage
D Stream data into Google Cloud Datastore
Correct Answer
Load data into Google BigQuery
Explanation
BigQuery is Google's fully managed, serverless, petabyte-scale data warehouse that supports standard SQL, making it ideal for multi-petabyte datasets that must be continuously available and queried by analysts with SQL experience. It scales automatically without downtime and offers high availability by default. Cloud SQL cannot scale to multiple petabytes. Flat files in Cloud Storage offer no SQL interface without additional query tooling, and Datastore is a NoSQL store that does not provide the standard SQL interface the analysts need. See: https://cloud.google.com/bigquery/docs/introduction
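For example, loading files from Cloud Storage and querying with standard SQL might look like this (dataset, table, and bucket names are placeholders):

# Load CSV files from Cloud Storage into a BigQuery table
bq load --source_format=CSV --autodetect analytics.events 'gs://my-bucket/events/*.csv'

# Analysts query the data with familiar SQL
bq query --use_legacy_sql=false \
    'SELECT event_type, COUNT(*) AS n FROM analytics.events GROUP BY event_type'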
Q5
The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud. Which three practices should you recommend?
A Port the application code to run on Google App Engine
B Integrate Cloud Dataflow into the application to capture real-time metrics
C Instrument the application with a monitoring tool like Stackdriver Debugger
D Select an automation framework to reliably provision the cloud infrastructure
E Deploy a continuous integration tool with automated testing in a staging environment
F Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
Correct Answers
Instrument the application with a monitoring tool like Stackdriver Debugger
Select an automation framework to reliably provision the cloud infrastructure
Deploy a continuous integration tool with automated testing in a staging environment
Explanation
When migrating a J2EE application, instrumenting it with a monitoring tool gives the visibility needed to operate it in the cloud, an automation framework makes infrastructure provisioning repeatable and reliable, and a continuous integration tool with automated testing in a staging environment catches regressions before they reach production. Porting the code to App Engine or swapping MySQL for a NoSQL database are large rearchitecting efforts rather than migration best practices, and Cloud Dataflow is a data-processing service, not an application-metrics tool. See: https://cloud.google.com/architecture/migration-to-gcp-getting-started
Q6
A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed. What is the most likely cause of this problem?
A The session variable is local to just a single instance
B The session variable is being overwritten in Cloud Datastore
C The URL of the API needs to be modified to prevent caching
D The HTTP Expires header needs to be set to -1 to stop caching
Correct Answer
The session variable is local to just a single instance
Explanation
App Engine instances are stateless and are added or removed automatically as traffic changes. If the application keeps the session variable in an instance's local memory, each instance holds its own copy, so under peak load a user's requests may be served by different instances with inconsistent session state, which is why users see articles they have already viewed. Storing session state in a shared backend such as Memcache or Datastore resolves the problem. See: https://cloud.google.com/appengine/docs/standard/python/memcache
Q7
An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs. What should you do?
A Direct them to download and install the Google StackDriver logging agent
B Send them a list of online resources about logging best practices
C Help them define their requirements and assess viable logging tools
D Help them upgrade their current tool to take advantage of any new features
Correct Answer
Help them define their requirements and assess viable logging tools
Explanation
The right first step in selecting any tool is to help the team define its requirements and then assess candidate solutions against them. Directing them straight to the Stackdriver logging agent or upgrading the current tool presupposes a solution before the needs are understood, and a list of best-practice articles does not address their specific problem. Once the requirements are clear, a managed service such as Cloud Logging (formerly Stackdriver Logging) with Error Reporting may well turn out to be the fit. See: https://cloud.google.com/logging/docs/overview
Q8
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform. Improvement to the QA/ Test processes accomplished an 80% reduction. Which additional two approaches can you take to further reduce the rollbacks?
A Introduce a green-blue deployment model
B Replace the QA environment with canary releases
C Fragment the monolithic platform into microservices
D Reduce the platform's dependency on relational database systems
E Replace the platform's relational database systems with a NoSQL database
Correct Answers
Introduce a green-blue deployment model
Fragment the monolithic platform into microservices
Explanation
A green-blue deployment model lets you deploy each release alongside the current one, verify it, and then switch traffic, so a bad release can be reverted instantly instead of rolled back under pressure. Fragmenting the monolith into microservices makes each deployment smaller and independently testable, reducing the blast radius of any single erroneous release. Replacing the QA environment with canary releases removes a safety net rather than adding one, and changing the database technology does not address deployment errors. See: https://cloud.google.com/architecture/application-deployment-and-testing-strategies
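On Google Cloud, the green-blue switch can be as simple as repointing the load balancer's default backend service (names here are illustrative):

# Deploy "green" alongside "blue", verify it, then flip traffic atomically
gcloud compute url-maps set-default-service web-url-map --default-service=green-backend

# Instant rollback if errors appear
gcloud compute url-maps set-default-service web-url-map --default-service=blue-backend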
Q9
To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?
A Use the --no-auto-delete flag on all persistent disks and stop the VM
B Use the --auto-delete flag on all persistent disks and terminate the VM
C Apply VM CPU utilization label and include it in the BigQuery billing export
D Use Google BigQuery billing export and labels to associate cost to groups
E Store all state into local SSD, snapshot the persistent disks, and terminate the VM
F Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM
Correct Answers
Use the --no-auto-delete flag on all persistent disks and stop the VM
Use Google BigQuery billing export and labels to associate cost to groups
Explanation
Stopping (rather than terminating) the VM, with --no-auto-delete set on its persistent disks, preserves state across the day's start/stop events while avoiding charges for vCPUs and memory whenever the VM is stopped. Exporting billing data to BigQuery and labeling resources lets the finance department attribute costs to teams and groups with simple SQL queries, providing the required cost visibility. See: https://cloud.google.com/billing/docs/how-to/export-data-bigquery
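A sketch of both steps (VM, disk, zone, and billing table names are placeholders; the billing export schema stores labels as key/value pairs):

# Keep the data disk even if the dev VM is ever deleted, then stop it overnight
gcloud compute instances set-disk-auto-delete dev-vm --zone=us-central1-a \
    --disk=dev-data --no-auto-delete
gcloud compute instances stop dev-vm --zone=us-central1-a

# Finance attributes spend per team from the BigQuery billing export
bq query --use_legacy_sql=false '
  SELECT (SELECT value FROM UNNEST(labels) WHERE key = "team") AS team,
         ROUND(SUM(cost), 2) AS total_cost
  FROM billing.gcp_billing_export_v1_XXXXXX
  GROUP BY team ORDER BY total_cost DESC'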
Q10
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations. Which database type should you use?
A Flat file
B NoSQL
C Relational
D Blobstore
Correct Answer
NoSQL
Explanation
The sensor payload is a sensor ID plus several discrete items of information, a flexible schema that arrives at a sustained rate of roughly 1,000 writes per second (one report per room per second across 1,000 rooms). A NoSQL database handles schemaless, high-throughput writes and scales horizontally, and its output can still be joined with the account-owner and office-location data at analysis time. A relational database imposes a rigid schema and scales less readily for this write pattern, while flat files and a blobstore are impractical for per-record queries. See: https://cloud.google.com/datastore/docs/concepts/overview
Q11
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do?
A Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
C Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
Correct Answer
Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
Explanation
The instances respond correctly to curl, so the application itself is healthy; it is the load balancer that cannot reach them. Health check probes originate from Google's dedicated source ranges (130.211.0.0/22 and 35.191.0.0/16), and if no firewall rule allows that traffic, every instance is marked unhealthy and autohealing keeps recreating them, producing the one-minute terminate/relaunch loop observed. Adding a firewall rule permitting the health check ranges resolves it; public IP addresses are not required for load-balanced backends. See: https://cloud.google.com/load-balancing/docs/health-checks
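A sketch of the missing rule (network name and backend port are assumptions):

# Allow Google Cloud health check probes to reach the backends on port 80
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default --direction=INGRESS --action=ALLOW \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80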
Q12
You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to BigQuery. What should you do to fix the script?
A Install the latest BigQuery API client library for Python
B Run your script on a new virtual machine with the BigQuery access scope enabled
C Create a new service account with BigQuery access and execute your script with that user
D Install the bq component for gcloud with the command gcloud components install bq.
Correct Answer
Create a new service account with BigQuery access and execute your script with that user
Explanation
Compute Engine VMs use a service account to authenticate to Google Cloud APIs like BigQuery. If no appropriate service account scope or IAM role is granted, API calls fail with authentication errors. The fix is to ensure the VM's service account has the BigQuery Data Viewer or BigQuery User IAM role, and that the VM was created with the correct API access scopes. Using Application Default Credentials (ADC) on the VM automatically uses the service account. See: https://cloud.google.com/bigquery/docs/authentication
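A sketch of that setup (project, service account, and VM names are placeholders):

# Create a service account and grant it BigQuery access
gcloud iam service-accounts create bq-script
gcloud projects add-iam-policy-binding my-project \
    --member='serviceAccount:bq-script@my-project.iam.gserviceaccount.com' \
    --role='roles/bigquery.user'

# Run the VM as that service account with the cloud-platform scope
gcloud compute instances create script-vm \
    --service-account=bq-script@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform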
Q13
Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords. What authentication strategy should they use?
A Use G Suite Password Sync to replicate passwords into Google
B Federate authentication via SAML 2.0 to the existing Identity Provider
C Provision users in Google using the Google Cloud Directory Sync tool
D Ask users to set their Google password to match their corporate password
Correct Answer
Federate authentication via SAML 2.0 to the existing Identity Provider
Explanation
Federating authentication via SAML 2.0 keeps the existing Identity Provider as the source of truth: users sign in with the credentials they already have (minimal disruption), and passwords never leave the corporate directory (satisfying the strict password-storage requirements). Password sync and asking users to mirror their corporate passwords both replicate or expose credentials, and Cloud Directory Sync provisions identities but does not by itself perform authentication. See: https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction
Q14
Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in. Which technology should they use for this?
A Google Cloud Dataproc
B Google Cloud Dataflow
C Google Container Engine with Bigtable
D Google Compute Engine with Google BigQuery
Correct Answer
Google Cloud Dataflow
Explanation
Cloud Dataflow, the managed Apache Beam service, handles both batch and stream processing with a single unified programming model, which fits a team running hourly jobs alongside live processing and starting without existing code. It auto-scales, is fully managed, and integrates with Pub/Sub and BigQuery. Dataproc suits teams with existing Spark/Hadoop code and requires cluster management. Container Engine with Bigtable and Compute Engine with BigQuery would both require building and operating the processing layer yourself. See: https://cloud.google.com/dataflow/docs/overview
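For instance, a streaming pipeline can even be launched from a Google-provided template before any custom code is written (job, subscription, and table values are placeholders):

# Stream messages from Pub/Sub into BigQuery using a provided template
gcloud dataflow jobs run clicks-to-bq \
    --gcs-location=gs://dataflow-templates/latest/PubSub_Subscription_to_BigQuery \
    --region=us-central1 \
    --parameters=inputSubscription=projects/my-project/subscriptions/clicks,outputTableSpec=my-project:analytics.clicks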
Q15
Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take?
A Work with your ISP to diagnose the problem
B Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
C Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
D Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem
Correct Answer
Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
Explanation
Because the slowdown appeared immediately after the update, the first priority is to restore service by rolling back to the last known good release; App Engine retains previous versions, so traffic can be shifted back quickly. The regression can then be reproduced and diagnosed with Stackdriver Trace and Logging in a development/test/staging environment without impacting users. Pushing the faulty release to production again exposes users to the problem a second time, and the correlation with the update makes an ISP issue unlikely. See: https://cloud.google.com/appengine/docs/standard/splitting-traffic
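The rollback itself is a single traffic-splitting command, since App Engine keeps prior versions deployed (service and version IDs are hypothetical):

# List deployed versions, then shift 100% of traffic back to the good one
gcloud app versions list --service=default
gcloud app services set-traffic default --splits=v42=1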

PCA Practice Set-02

15 questions
Q1
A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?
A In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
B Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
C In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
D In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
E In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service
Correct Answer
In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
Explanation
When a Compute Engine persistent disk is running out of space, you can resize the disk online without downtime using the Google Cloud Console or gcloud CLI. After resizing the disk, you must also resize the filesystem (e.g., resize2fs for ext4) to make the new space available to the OS. This is the fastest remediation with no data loss or VM restart required. See: https://cloud.google.com/compute/docs/disks/resize-persistent-disk
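The whole remediation, assuming the data disk is attached as /dev/sdb (disk name and zone are placeholders):

# Grow the persistent disk online; no VM restart needed
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a

# Then grow the ext4 filesystem to use the new space (run inside the VM)
sudo resize2fs /dev/sdb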
Q2
Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used. How should you design your architecture?
A Create a tokenizer service and store only tokenized data
B Create separate projects that only process credit card data
C Create separate subnetworks and isolate the components that process credit card data
D Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data
E Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
Correct Answer
Create a tokenizer service and store only tokenized data
Explanation
For PCI DSS compliance with minimal scope, tokenization of card data is the best approach. Card data is replaced with a token before entering your systems, and the actual card numbers are stored in a separate, highly secured vault (e.g., Cloud HSM or a PCI-compliant vault). Only the vaulting system is in scope for PCI compliance, while your analytics systems work with tokens. See: https://cloud.google.com/solutions/pci-dss-compliance-in-gcp
Q3
You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?
A Google Cloud SQL
B Google Cloud Bigtable
C Google Cloud Storage
D Google Cloud Datastore
Correct Answer
Google Cloud Bigtable
Explanation
Cloud Bigtable is optimized for high-throughput, low-latency writes and reads of large volumes of time-series and click-stream data, comfortably absorbing the sustained 6,000 clicks per minute and bursts of 8,500 clicks per second described here, and it scales horizontally for future analysis workloads. Cloud Storage is object storage, not a queryable store for high-throughput structured data. Cloud SQL cannot handle this write rate, and Datastore is a general-purpose document store less suited to heavy time-series ingestion. See: https://cloud.google.com/bigtable/docs/overview
Q4
You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend. What should you do?
A Write a lifecycle management rule in XML and push it to the bucket with gsutil
B Write a lifecycle management rule in JSON and push it to the bucket with gsutil
C Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days
D Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron
Correct Answer
Write a lifecycle management rule in JSON and push it to the bucket with gsutil
Explanation
Cloud Storage Object Lifecycle Management allows you to configure rules to automatically delete or transition objects after a specified number of days. Setting a lifecycle rule to delete objects older than 90 days is the most cost-effective and automated approach, requiring no custom code or manual intervention. See: https://cloud.google.com/storage/docs/lifecycle
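A sketch of such a rule (the bucket name is a placeholder):

# lifecycle.json: delete objects once they are older than 90 days
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 90}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://backups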
Q5
Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change. Which product should you use?
A Google Cloud Dataflow
B Google Cloud Dataproc
C Google Compute Engine
D Google Kubernetes Engine
Correct Answer
Google Cloud Dataproc
Explanation
Cloud Dataproc is Google's managed Hadoop and Spark service, making it the ideal solution for running existing Spark and Hadoop jobs in the cloud with minimal changes. It spins up clusters in 90 seconds, integrates with Cloud Storage (replacing HDFS), and offers preemptible VM support to reduce costs. It scales on-demand to handle the upcoming workload spike. See: https://cloud.google.com/dataproc/docs/concepts/overview
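A sketch of the lift-and-shift (cluster name, sizing, and jar path are placeholders):

# Create a cluster with cheap secondary (preemptible) workers for burst capacity
gcloud dataproc clusters create spark-burst --region=us-central1 \
    --num-workers=2 --num-secondary-workers=8

# Submit an existing Spark job unchanged
gcloud dataproc jobs submit spark --cluster=spark-burst --region=us-central1 \
    --class=com.example.ClickAnalysis --jars=gs://my-jars/analysis.jar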
Q6
The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk. What should they change to get better performance from this system?
A Increase the virtual machine's memory to 64 GB
B Create a new virtual machine running PostgreSQL
C Dynamically resize the SSD persistent disk to 500 GB
D Migrate their performance metrics warehouse to BigQuery
E Modify all of their batch jobs to use bulk inserts into the database
Correct Answer
Dynamically resize the SSD persistent disk to 500 GB
Explanation
Persistent disk performance scales with provisioned size: a larger SSD persistent disk is allotted proportionally more IOPS and throughput. Dynamically resizing the 80 GB SSD disk to 500 GB raises those limits without downtime, directly improving the database's I/O performance. Adding memory, switching to PostgreSQL, moving to BigQuery, or rewriting batch jobs are either guesses or larger projects that do not address the disk I/O bottleneck. See: https://cloud.google.com/compute/docs/disks/performance
Q7
You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading. Where should you store the data?
A Google BigQuery
B Google Cloud SQL
C Google Cloud Bigtable
D Google Cloud Storage
Correct Answer
Google Cloud Bigtable
Explanation
Cloud Bigtable is the optimal storage backend for real-time weather charting with 50,000 sensors sending 10 readings per second (500,000 writes/sec). It is designed for exactly this time-series, high-throughput, low-latency use case, and with a time-series data model (row key: sensor ID + timestamp) queries remain fast. Cloud SQL cannot sustain this write rate, BigQuery is built for batch analytics rather than low-latency real-time serving, and Cloud Storage is object storage, not a queryable time-series store. See: https://cloud.google.com/bigtable/docs/schema-design-time-series
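A row-key sketch using the cbt tool (project/instance configuration, table, and values are illustrative):

# Row key = sensor ID + timestamp keeps each sensor's readings contiguous
# and avoids hotspotting on timestamp alone
cbt -instance=weather createtable readings
cbt -instance=weather createfamily readings metrics
cbt -instance=weather set readings 'sensor042#20250101T120000' metrics:reading=21.5
cbt -instance=weather read readings prefix='sensor042#'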
Q8
Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter, your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional user load. What should you do?
A Capture existing users' input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones
B Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones
C Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones
D Capture existing users' input, and replay captured user load until resource utilization crosses 80%. Also, derive estimated number of users based on existing users' usage of the app, and deploy enough resources to handle 200% of expected load
Correct Answer
Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones
Explanation
The upcoming load will come largely from unauthenticated users whose behavior cannot be derived from the current select customer group, so synthetic random input is the right way to model it. Replaying that load until autoscaling triggers verifies the scaling logic, and injecting "chaos" by terminating random resources in both zones verifies that the system tolerates failures while under load, exactly the conditions under which the 99.99% SLA must hold. Replaying captured input from existing users does not represent the new traffic profile, and exposing real users to an untested system risks the SLA. See: https://cloud.google.com/architecture/scalable-and-resilient-apps
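A minimal chaos step might terminate a random instance in a managed instance group while the synthetic load is replaying (group and zone names are placeholders):

# Pick a random instance from the MIG and delete it under load
VICTIM=$(gcloud compute instance-groups managed list-instances web-mig \
    --zone=us-central1-a --format='value(instance.basename())' | shuf -n1)
gcloud compute instance-groups managed delete-instances web-mig \
    --zone=us-central1-a --instances="$VICTIM"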
Q9
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long. You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality. Which two actions should you take?
A Remove Python after running pip
B Remove dependencies from requirements.txt
C Use a slimmed-down base image like Alpine Linux
D Use larger machine types for your Google Container Engine node pools
E Copy the source after the package dependencies (Python and pip) are installed
Correct Answers
Use a slimmed-down base image like Alpine Linux
Copy the source after the package dependencies (Python and pip) are installed
Explanation
Docker caches image layers in order, so install the package dependencies (Python and pip packages) first and copy the application source afterwards; routine code changes then reuse the cached dependency layers instead of reinstalling everything on every build. A slimmed-down base image such as Alpine Linux also shrinks the image, speeding up builds, pushes, and node pulls. Removing Python or the dependencies would break the application, and larger node machine types do not meaningfully shorten image builds or pulls. See: https://cloud.google.com/architecture/best-practices-for-building-containers
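Since the original Dockerfile is not reproduced here, this is a hedged sketch of the optimized ordering with illustrative file names:

# Small base image; dependency layers are cached unless requirements.txt changes
FROM python:3-alpine
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Source is copied last, so code edits only rebuild this final layer
COPY . /app
CMD ["python", "/app/main.py"]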
Q10
Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the future. What should you do?
A Deploy fewer changes to production
B Deploy smaller changes to production
C Increase the load on your test and staging environments
D Deploy changes to a small subset of users before rolling out to production
Correct Answer
Increase the load on your test and staging environments
Explanation
Performance bugs that appear in production but not in staging/testing are often due to differences in data volume, traffic patterns, or environment configuration. Implementing load testing that mirrors production traffic patterns and using production-like data in staging catches these issues. Continuous profiling in production (e.g., Cloud Profiler) also helps identify regressions before they impact users. See: https://cloud.google.com/profiler/docs/about-profiler
Q11
A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services. You want to know which service takes the longest in those cases. What should you do?
A Set timeouts on your application so that you can fail requests faster
B Send custom metrics for each of your requests to Stackdriver Monitoring
C Use Stackdriver Monitoring to look for insights that show when your API latencies are high
D Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
Correct Answer
Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
Explanation
Cloud Trace (distributed tracing) is designed to track requests as they traverse multiple microservices, recording latency at each hop. It identifies which specific service or inter-service call is contributing the most to slow requests. Cloud Monitoring shows metrics, and Cloud Logging shows logs, but only Cloud Trace provides end-to-end request latency breakdown across services. See: https://cloud.google.com/trace/docs/overview
Q12
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future. What should you do?
A Use a different database
B Choose larger instances for your database
C Create snapshots of your database more regularly
D Implement routinely scheduled failovers of your databases
Correct Answer
Implement routinely scheduled failovers of your databases
Explanation
A replica that has never been promoted is a replica you cannot trust: promotion procedures, permissions, and application reconnection logic all need to be exercised before a real emergency. Routinely scheduled failovers act as disaster-recovery drills that verify promotion actually works and keep the team practiced, so a real crash results in a smooth failover instead of a stuck replica. Larger instances, more frequent snapshots, or a different database engine do not fix an untested failover path. See: https://cloud.google.com/sql/docs/mysql/high-availability
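The drill itself can be a single command run on a schedule (instance name is a placeholder; requires a Cloud SQL instance configured for HA):

# Manually trigger a failover to the standby as a routine exercise
gcloud sql instances failover prod-db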
Q13
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?
A Grant the security team access to the logs in each Project
B Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C Configure Stackdriver Monitoring for all Projects with the default retention policies
D Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
Correct Answer
Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
Explanation
Cloud Monitoring retains metric data only for a limited period, well short of five years, which cannot satisfy the legal retention requirement. Configuring Monitoring across all projects and exporting the metrics to Google Cloud Storage provides durable, low-cost, long-term retention, and colder storage classes such as Coldline minimize the cost of data that is rarely accessed. Relying on default retention policies simply discards the data too early, and granting the security team log access does not address metrics retention at all. See: https://cloud.google.com/monitoring/api/metrics_gcp
Q14
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4 TB, and large updates are frequent. Replication requires private address space communication. Which networking approach should you use?
A Google Cloud Dedicated Interconnect
B Google Cloud VPN connected to the data center network
C A NAT and TLS translation gateway installed on-premises
D A Google Compute Engine instance with a VPN server installed connected to the data center network
Correct Answer
Google Cloud Dedicated Interconnect
Explanation
Replicating a 4 TB database with frequent large updates demands high, consistent bandwidth, and the requirement for private address space communication rules out anything that traverses the public internet. Dedicated Interconnect provides a private, RFC 1918-addressable link at 10 Gbps or more with an SLA, the only option here that delivers both the capacity and the private connectivity required. Cloud VPN and a self-managed VPN server on a Compute Engine instance are bandwidth-limited and ride the public internet, and a NAT/TLS gateway does not provide private address space connectivity. See: https://cloud.google.com/network-connectivity/docs/interconnect/concepts/dedicated-overview
Q15
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process. What should you do?
A Create custom Google Stackdriver alerts and send them to the auditor
B Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view
D Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
Correct Answer
Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
Explanation
Cloud Audit Logs record every IAM policy change as an Admin Activity log entry. Creating a sink that exports these logs to BigQuery builds a permanent, queryable archive, and dataset ACLs and authorized views let you share exactly the 12 months of IAM changes the auditor needs and nothing more. SQL makes the analysis fast and repeatable, which is precisely what streamlining the audit calls for. Alerts, Cloud SQL transfers, and raw Cloud Storage bucket access all make the auditor's analysis slower or broader than necessary. See: https://cloud.google.com/logging/docs/audit
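A sketch of the sink and an auditor query (project and dataset names are placeholders; the exported table and field names follow Cloud Logging's BigQuery schema and should be verified):

# Export Admin Activity audit logs to a BigQuery dataset
gcloud logging sinks create iam-audit-sink \
    bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
    --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'

# Review a year of IAM policy changes with one query
bq query --use_legacy_sql=false '
  SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail
  FROM `audit_logs.cloudaudit_googleapis_com_activity_*`
  WHERE protopayload_auditlog.methodName = "SetIamPolicy"'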

Want More Practice?

These are just the free questions. Unlock the full PCA - Professional Cloud Architect exam library with hundreds of additional questions, timed practice mode, and progress tracking.
