Free Practice Questions: Terraform Authoring & Operations Professional (30 Questions with Answers)

30 free questions with correct answers and detailed explanations.
TA-PRO Practice Set-01

15 questions
Q1
A platform team maintains a shared VPC module used by 12 product teams. The module is stored in a private Git repository. A product team reports that after the platform team released a new version of the module, their terraform plan started failing due to a breaking change in the module's input variables. The product team's module block does not specify a version. What is the most appropriate remediation going forward?
A Use source = "git::https://github.com/org/vpc-module.git" without any ref so Terraform always fetches the latest stable commit
B Pin the module source with a ref tag such as ?ref=v1.2.0 and enforce semantic versioning with a CHANGELOG so consumers can opt in to breaking changes
C Copy the module code directly into each product team's repository to prevent unintended upgrades
D Use terraform get -update=false in CI pipelines to prevent module refreshes
Correct Answer
Pin the module source with a ref tag such as ?ref=v1.2.0 and enforce semantic versioning with a CHANGELOG so consumers can opt in to breaking changes
Explanation
B is correct because pinning a Git module source to a specific tag (e.g., ?ref=v1.2.0) gives consuming teams full control over when they absorb breaking changes. Coupling this with semantic versioning and a changelog is the professional standard for shared module governance. Teams can upgrade deliberately rather than being surprised by upstream changes. A is wrong because omitting a ref causes Terraform to pull the default branch HEAD on every terraform get, which is exactly the pattern that caused the original problem — no version isolation. C is wrong because copying module code into each repo creates unmaintainable duplication, destroys single-source-of-truth benefits, and makes security patches extremely difficult to propagate. D is wrong because terraform get -update=false only prevents re-downloading already cached modules. It does not version-pin anything and provides no reliable protection in fresh CI environments where no cache exists.
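A minimal sketch of the pinned module call (repository URL, tag, and inputs are illustrative):

```
module "vpc" {
  # Upgrades become an explicit, reviewable diff: bump the tag, read the CHANGELOG.
  source = "git::https://github.com/org/vpc-module.git?ref=v1.2.0"

  cidr_block = "10.0.0.0/16"
}
```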
Q2
A module accepts a variable environment that is used to construct resource names and tags. A downstream team passes environment = "PROD" (uppercase), but all naming conventions require lowercase. The module author wants to enforce lowercase without requiring callers to remember the casing rule. What is the best approach inside the module?
A Add a validation block to the variable that throws an error if the value contains uppercase letters
B Use lower(var.environment) at every reference site within the module
C Use lower(var.environment) in a locals block and reference local.environment throughout the module
D Set default = "prod" and rely on callers to not override it incorrectly
Correct Answer
Use lower(var.environment) in a locals block and reference local.environment throughout the module
Explanation
C is correct because defining local.environment = lower(var.environment) in a single locals block normalizes the value once and propagates it consistently everywhere in the module. This is the DRY (Don't Repeat Yourself) principle applied to Terraform — one transformation, referenced many times. A is wrong because a validation block would reject the input and force the caller to fix it, adding friction and shifting responsibility to every consumer. The module should be defensive and handle reasonable variations gracefully. B is wrong because calling lower() at every reference site is error-prone — a developer can easily forget one occurrence — and violates DRY. It makes future refactoring harder. D is wrong because a default value only applies when the caller omits the variable entirely. It does nothing to normalize a value that is explicitly passed with incorrect casing.
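A sketch of the normalize-once pattern inside the module (resource and names are illustrative):

```
variable "environment" {
  type        = string
  description = "Deployment environment; casing is normalized inside the module."
}

locals {
  # One transformation, referenced everywhere else as local.environment.
  environment = lower(var.environment)
}

resource "aws_s3_bucket" "assets" {
  bucket = "myapp-${local.environment}-assets"

  tags = {
    Environment = local.environment
  }
}
```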
Q3
A module outputs a complex object representing an AWS RDS instance, including its endpoint, port, and ARN. A consuming root module needs to pass only the endpoint to an application module. The consuming module does this: endpoint = module.rds.instance.endpoint. After a refactor, the RDS module changes the output name from instance to rds_instance. What happens during terraform plan in the consuming root module if it is not updated?
A Terraform silently uses a cached value of the old output and plan succeeds
B Terraform produces an error: the output instance is not defined in the child module
C Terraform automatically resolves the renamed output using its internal state graph
D Terraform detects the rename and prompts the operator to map the old name to the new one
Correct Answer
Terraform produces an error: the output instance is not defined in the child module
Explanation
B is correct because Terraform evaluates module output references at plan time by inspecting the child module's declared outputs. If instance no longer exists in the module, Terraform raises a configuration error stating the output is undefined. There is no automatic resolution. A is wrong because Terraform does not use cached output values to paper over missing output declarations. The configuration must be syntactically and semantically valid before a plan can succeed. C is wrong because Terraform has no built-in rename tracking for module outputs. It treats output names as opaque identifiers. Rename detection is the responsibility of the module author and consumer through versioning and changelogs. D is wrong because Terraform does not have an interactive output-mapping prompt. Changes to output names are breaking changes that must be handled manually in consuming configurations.
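As a sketch, the renamed output contract on the module side might look like this (attribute names are illustrative); every consumer must then move from module.rds.instance.endpoint to module.rds.rds_instance.endpoint:

```
output "rds_instance" {
  description = "Connection details for the managed RDS instance."
  value = {
    endpoint = aws_db_instance.this.endpoint
    port     = aws_db_instance.this.port
    arn      = aws_db_instance.this.arn
  }
}
```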
Q4
A team uses an S3 remote backend with DynamoDB for state locking. A developer runs terraform apply from their local machine and the process is killed mid-apply due to a network failure. When the team tries to run terraform plan afterward, they receive the error: "Error acquiring the state lock." What is the safest recovery procedure?
A Delete the DynamoDB lock table item manually and immediately run terraform apply again
B Run terraform force-unlock <LOCK_ID> after confirming no other process holds the lock, then run terraform plan
C Delete the .terraform directory and re-run terraform init to clear the lock
D Run terraform state pull to retrieve state and then manually clear the lock by overwriting the S3 object
Correct Answer
Run terraform force-unlock <LOCK_ID> after confirming no other process holds the lock, then run terraform plan
Explanation
B is correct because terraform force-unlock is the purpose-built command for releasing a stale lock. The critical qualifier is confirming no other apply is actually in progress — force-unlocking an active apply would allow concurrent state writes, which can corrupt state. After confirming the lock is stale, force-unlock is safe and leaves state intact. A is wrong because directly deleting the DynamoDB item bypasses Terraform's lock management abstractions and could be risky if the lock ID is reused or if another process is legitimately holding it. The correct tool is terraform force-unlock, not manual DynamoDB intervention. C is wrong because deleting .terraform only removes locally cached provider plugins and module code. It has no effect on remote backend lock state. D is wrong because terraform state pull only reads state — it does not interact with the locking mechanism at all. Overwriting the S3 object manually risks corrupting the state file if it was partially written.
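For context, a typical S3 backend with DynamoDB locking looks like the sketch below (bucket and table names are illustrative); the LOCK_ID passed to terraform force-unlock is printed in the "Error acquiring the state lock" message itself:

```
terraform {
  backend "s3" {
    bucket         = "my-tf-state"
    key            = "app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # lock entries live in this table
    encrypt        = true
  }
}
```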
Q5
An organization runs three environments (dev, staging, prod) using separate Terraform workspaces in a single configuration directory, all backed by the same S3 bucket. A security audit finds this architecture unacceptable. What is the primary security risk the auditor is most likely citing?
A Workspaces do not support remote backends, so state is stored locally
B A single IAM role or credential set with access to the backend has read and write access to all environment state files, including production secrets stored in state
C Terraform workspaces cannot isolate provider configurations, so all environments share the same AWS region
D The terraform.workspace interpolation is deprecated and may expose environment names in plan output
Correct Answer
A single IAM role or credential set with access to the backend has read and write access to all environment state files, including production secrets stored in state
Explanation
B is correct because the core security concern with shared-backend workspaces is blast radius. If a single set of credentials (CI service account, developer laptop) can access the backend, it can read and write state for all environments including production. Since Terraform state may contain sensitive values (passwords, private keys, tokens), a compromise of any one credential set exposes all environments. A is wrong because workspaces fully support remote backends including S3. This is one of their primary use cases. C is wrong because provider configurations are entirely independent of workspace boundaries. Different provider blocks with aliases or workspace-conditional configuration are both valid approaches. D is wrong because terraform.workspace is a supported, non-deprecated built-in value. Plan output does show the workspace name, but that is not a meaningful security risk.
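One common remediation is a dedicated backend and credential set per environment, sketched below for prod with illustrative names; parallel files exist for dev and staging, each reachable only by its own IAM role:

```
# prod/backend.tf
terraform {
  backend "s3" {
    bucket         = "tf-state-prod" # prod CI role is the only principal with access
    key            = "app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-locks-prod"
    encrypt        = true
  }
}
```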
Q6
A Terraform configuration manages 200 resources across a monolithic state file. Operations are becoming slow and risky — a failed apply on one resource can block unrelated resources. A team proposes splitting the state. What is the recommended pattern for cross-state references after the split?
A Hardcode ARNs and resource IDs discovered manually from the AWS console into the consuming configuration
B Use terraform_remote_state data source to read outputs from the upstream state, referencing only explicitly declared outputs
C Use data blocks pointing directly to the AWS resources to discover their attributes at plan time
D Merge all resources back into a single configuration and use -target to scope applies
Correct Answer
Use terraform_remote_state data source to read outputs from the upstream state, referencing only explicitly declared outputs
Explanation
B is correct because terraform_remote_state is the canonical Terraform mechanism for sharing values between independent state files. It reads the outputs block of a remote state, creating a clear, version-controlled contract between configurations. Only explicitly declared outputs are accessible, which enforces encapsulation. A is wrong because hardcoding resource IDs creates unmaintainable configurations that break silently when infrastructure changes and introduces human error. It defeats the purpose of infrastructure-as-code. C is wrong because data source lookups depend on naming conventions and cloud API queries, which can be fragile. They also create implicit dependencies on resource names rather than explicit output contracts, and they do not reflect Terraform-managed attributes like computed values. D is wrong because -target is explicitly documented as a last-resort escape hatch, not an architectural pattern. It suppresses dependency graph analysis and can leave state inconsistent. It does not solve the scalability problem.
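A sketch of the consuming side (bucket, key, and output names are illustrative):

```
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0abc1234"
  instance_type = "t3.micro"
  # Only outputs declared by the network configuration are reachable here.
  subnet_id     = data.terraform_remote_state.network.outputs.private_subnet_id
}
```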
Q7
A CI pipeline runs terraform plan -out=tfplan on pull request creation and stores the plan file as a pipeline artifact. On merge to main, a separate pipeline stage runs terraform apply tfplan. A senior engineer raises a concern about this pattern. What is the most significant risk?
A The plan file is human-readable and exposes resource configurations to pipeline log viewers
B Infrastructure or external state may change between plan creation and apply, causing the apply to act on stale assumptions, potentially in a destructive or incorrect way
C terraform apply with a saved plan file ignores the state lock, enabling concurrent applies
D The plan file contains provider credentials that could be extracted from the artifact store
Correct Answer
Infrastructure or external state may change between plan creation and apply, causing the apply to act on stale assumptions, potentially in a destructive or incorrect way
Explanation
B is correct because a saved plan file is a snapshot of intent computed at a specific moment. If cloud resources, remote state outputs, or data sources change between plan creation (PR time) and apply (merge time), Terraform applies the old plan without re-evaluating current reality. This can create, modify, or destroy resources based on outdated information — a subtle but serious risk in active environments. A is wrong because plan files are binary, not human-readable text. They can be inspected with terraform show tfplan but are not exposed as plain text in logs unless explicitly printed. C is wrong because terraform apply with a plan file still acquires the state lock normally. Plan files do not bypass locking mechanisms. D is wrong because plan files do not embed provider credentials. Credentials are resolved at runtime from environment variables, instance profiles, or credential files — they are not serialized into the plan artifact.
Q8
A developer runs terraform destroy on a staging environment and realizes afterward that the configuration included a shared DNS zone used by other teams. The zone has been deleted. Which workflow control, had it been in place, would most directly have prevented this accidental deletion?
A prevent_destroy = true in the lifecycle block of the DNS zone resource
B A depends_on reference from all other resources to the DNS zone
C A separate workspace for the DNS zone configuration
D Using terraform plan -destroy and reviewing output before applying
Correct Answer
prevent_destroy = true in the lifecycle block of the DNS zone resource
Explanation
A is correct because prevent_destroy = true is a lifecycle meta-argument that causes Terraform to raise an error and refuse to proceed whenever a plan would destroy the marked resource, regardless of how the destroy is triggered (targeted destroy, full destroy, or replacement). It is a hard guardrail baked into the configuration, though note that it only protects a resource whose block is still present: removing the resource block removes the guardrail along with it. B is wrong because depends_on controls ordering of resource creation and destruction — it does not prevent destruction. Adding dependencies would change destroy order but would not stop the zone from being deleted. C is wrong because using a separate workspace for the DNS zone would help isolate it from accidental terraform destroy runs in the staging workspace, and is a valid architectural control, but the question asks which control would "most directly" prevent deletion. prevent_destroy is the explicit, precise guardrail. A separate workspace is an organizational control, not a technical lock on the resource itself. D is wrong because terraform plan -destroy shows what would be destroyed but does not prevent it. A developer under time pressure could review and proceed anyway. It is an informational step, not a guardrail.
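A sketch of the guardrail on the shared zone (zone name is illustrative):

```
resource "aws_route53_zone" "shared" {
  name = "example.com"

  lifecycle {
    # Any plan that would destroy this resource fails with an error.
    prevent_destroy = true
  }
}
```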
Q9
A Terraform configuration creates an AWS IAM role and an EC2 instance. The EC2 instance uses an instance profile backed by the IAM role. During a fresh terraform apply, the apply fails intermittently with an error from the AWS API stating the instance profile does not exist. The resource block for the EC2 instance already references aws_iam_instance_profile.this.arn. Why does this still fail intermittently, and what is the correct fix?
A Terraform is applying resources in alphabetical order; rename the IAM resources so they sort before aws_instance
B The implicit dependency on the ARN is sufficient for ordering but AWS IAM has eventual consistency delays; add depends_on referencing the IAM role and instance profile, or add a time_sleep resource
C Add create_before_destroy = true to the EC2 instance's lifecycle block to allow retries
D Set skip_credentials_validation = true in the AWS provider to bypass IAM validation
Correct Answer
The implicit dependency on the ARN is sufficient for ordering but AWS IAM has eventual consistency delays; add depends_on referencing the IAM role and instance profile, or add a time_sleep resource
Explanation
B is correct because Terraform's dependency graph ensures the IAM role and instance profile are created before the EC2 instance, but AWS IAM is an eventually consistent global service. Even after the API returns a success response for the instance profile creation, the resource may not yet be visible to the EC2 service in all regions. The practical fixes are adding explicit depends_on (which sometimes helps by adding slight serialization delay) or adding a time_sleep resource after the IAM resource to wait for propagation. A is wrong because Terraform does not apply resources in alphabetical order. It uses a dependency graph. Renaming resources changes nothing about execution order. C is wrong because create_before_destroy is a replacement strategy for managing resource lifecycle during updates — it controls the order of create/destroy during resource replacement, not initial creation sequencing or API consistency issues. D is wrong because skip_credentials_validation skips validation of the provider credentials themselves, not IAM resource propagation. It does not address eventual consistency and introduces a security risk by skipping credential checks.
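A sketch of the time_sleep mitigation using the hashicorp/time provider (names, AMI, and duration are illustrative; the assume-role policy document is assumed to exist elsewhere):

```
resource "aws_iam_role" "app" {
  name               = "app-role"
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

resource "aws_iam_instance_profile" "this" {
  name = "app-profile"
  role = aws_iam_role.app.name
}

# Wait out IAM's eventual consistency before EC2 references the profile.
resource "time_sleep" "iam_propagation" {
  depends_on      = [aws_iam_instance_profile.this]
  create_duration = "15s"
}

resource "aws_instance" "app" {
  ami                  = "ami-0abc1234"
  instance_type        = "t3.micro"
  iam_instance_profile = aws_iam_instance_profile.this.name
  depends_on           = [time_sleep.iam_propagation]
}
```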
Q10
A team wants to replace an AWS RDS instance with a new one using a different instance class, but the application must not experience more than 60 seconds of downtime. The default Terraform behavior for this change would destroy the old instance before creating the new one. Which lifecycle configuration achieves the goal?
A Set ignore_changes = [instance_class] to prevent Terraform from modifying the instance
B Set create_before_destroy = true in the lifecycle block of the RDS instance resource
C Use -replace=aws_db_instance.main flag during apply to force an in-place replacement
D Set prevent_destroy = true and manually provision the new instance outside Terraform
Correct Answer
Set create_before_destroy = true in the lifecycle block of the RDS instance resource
Explanation
B is correct because create_before_destroy = true instructs Terraform to provision the new RDS instance first, wait for it to be in an available state, and then destroy the old one. This is exactly the blue/green replacement pattern that minimizes downtime — the application can be pointed at the new endpoint before the old instance disappears. A is wrong because ignore_changes tells Terraform to pretend the specified attribute has not changed. This would suppress the replacement entirely, leaving the instance class unchanged — the opposite of the desired outcome. C is wrong because -replace forces Terraform to include a resource in the plan as a replacement, but without create_before_destroy, the default destroy-then-create sequence applies, which maximizes downtime. D is wrong because prevent_destroy blocks the operation entirely. Manually provisioning outside Terraform defeats infrastructure-as-code principles and leaves state inconsistent.
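A sketch of the lifecycle setting on the RDS resource (other required arguments omitted for brevity):

```
resource "aws_db_instance" "main" {
  instance_class = "db.m5.2xlarge" # the change that triggers replacement
  # ... engine, storage, credentials, etc.

  lifecycle {
    # Provision the replacement first, then destroy the old instance.
    create_before_destroy = true
  }
}
```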
Q11
A team uses count to create three S3 buckets from a list variable. Later, the requirement changes to remove the middle bucket. After updating the list and running terraform plan, the engineer is surprised to see that Terraform plans to destroy and recreate two buckets instead of just deleting one. Why?
A Terraform detected configuration drift and is reconciling all three buckets with the new desired state
B count uses index-based addressing; removing an element from the middle shifts the indices of subsequent elements, causing Terraform to see them as different resources requiring replacement
C Terraform cannot delete individual resources from a count-managed set; it must destroy all and recreate the desired number
D The S3 bucket resource does not support count; this behavior indicates a provider bug
Correct Answer
count uses index-based addressing; removing an element from the middle shifts the indices of subsequent elements, causing Terraform to see them as different resources requiring replacement
Explanation
B is correct because count addresses resources as resource_type.name[0], resource_type.name[1], etc. When you remove the middle element from a list, the element that was at index 2 shifts to index 1. Terraform sees resource_type.name[1] as changing from the old value to the new value (triggering a modify or replace), and resource_type.name[2] as no longer existing (triggering destroy). This index-shift problem is the canonical reason to prefer for_each with a set or map for resources where the collection may change. A is wrong because drift detection identifies differences between state and real infrastructure. This is a configuration change scenario — the desired state itself has changed, not the real infrastructure drifting from state. C is wrong because Terraform can absolutely delete individual resources from a count-managed set. The problem is not that deletion is impossible but that index shifting causes unexpected modifications. D is wrong because S3 buckets fully support count. The described behavior is expected and documented, not a provider bug.
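A sketch of the for_each alternative, keyed by name rather than by position (bucket names are illustrative):

```
variable "bucket_names" {
  type    = set(string)
  default = ["alpha", "beta", "gamma"]
}

resource "aws_s3_bucket" "this" {
  # Addresses are aws_s3_bucket.this["alpha"], ["beta"], ["gamma"];
  # removing "beta" destroys only that bucket, leaving the others untouched.
  for_each = var.bucket_names
  bucket   = "myapp-${each.key}"
}
```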
Q12
A module needs to create an AWS Security Group with a variable number of ingress rules passed in as a list of objects. Each object has port, protocol, and cidr attributes. A dynamic block is used inside the aws_security_group resource. Which snippet correctly implements this?
A
```
dynamic "ingress" {
  for_each = var.ingress_rules
  content {
    from_port   = ingress.value.port
    to_port     = ingress.value.port
    protocol    = ingress.value.protocol
    cidr_blocks = [ingress.value.cidr]
  }
}
```
B
```
dynamic "ingress" {
  for_each = var.ingress_rules
  content {
    from_port   = each.value.port
    to_port     = each.value.port
    protocol    = each.value.protocol
    cidr_blocks = [each.value.cidr]
  }
}
```
C
```
for_each ingress in var.ingress_rules {
  ingress {
    from_port = ingress.port
  }
}
```
D
```
dynamic "ingress" {
  iterator = rule
  for_each = var.ingress_rules
  content {
    from_port   = rule.port
    to_port     = rule.port
    protocol    = rule.protocol
    cidr_blocks = [rule.cidr]
  }
}
```
Correct Answer
```
dynamic "ingress" {
  for_each = var.ingress_rules
  content {
    from_port   = ingress.value.port
    to_port     = ingress.value.port
    protocol    = ingress.value.protocol
    cidr_blocks = [ingress.value.cidr]
  }
}
```
Explanation
A is correct because inside a dynamic block, the iteration variable is the label of the dynamic block itself (in this case, ingress). The current element is referenced as ingress.value, and the key as ingress.key. This is the standard dynamic block syntax. B is wrong because each.value and each.key are valid inside resource or module blocks that use for_each at the top level — not inside dynamic blocks. Inside a dynamic block, the iterator is named after the block label, not each. C is wrong because there is no for_each loop syntax of this form in Terraform HCL. This resembles a programming language for-loop construct that does not exist in Terraform configuration language. D is wrong but close. When an iterator is specified, the content block should reference rule.value.port, not rule.port. The iterator argument renames the iteration object, but .value is still required to access the element's attributes.
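For completeness, a corrected sketch of the option D iterator form; with iterator = rule the element is still accessed through .value:

```
dynamic "ingress" {
  for_each = var.ingress_rules
  iterator = rule
  content {
    from_port   = rule.value.port # not rule.port
    to_port     = rule.value.port
    protocol    = rule.value.protocol
    cidr_blocks = [rule.value.cidr]
  }
}
```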
Q13
A configuration uses for_each = toset(var.regions) to deploy identical infrastructure in multiple AWS regions. The provider block for each region is defined using the alias argument. What additional configuration is required in the resource block for Terraform to use the correct regional provider?
A Set region = each.value inside the resource block
B Reference provider = aws.<alias> inside the resource block, where the alias corresponds to the target region
C Use a depends_on pointing to the provider block for ordering
D Declare count = length(var.regions) in addition to for_each to trigger multi-provider behavior
Correct Answer
Reference provider = aws.<alias> inside the resource block, where the alias corresponds to the target region
Explanation
B is correct because when multiple provider aliases are defined, each resource must explicitly declare which provider instance to use via the provider meta-argument (e.g., provider = aws.us_east_1). Without this, Terraform uses the default (unaliased) provider. For dynamic multi-region deployments, this typically requires using for_each on a module and passing the provider through providers = { aws = aws.<alias> } at the module call level. A is wrong because region is an attribute of the provider block, not a meta-argument supported on resource blocks. Setting it inside a resource block would cause a configuration error. C is wrong because depends_on controls execution ordering, not provider association. A resource depending on a provider block is meaningless — Terraform inherently initializes providers before creating resources. D is wrong because count and for_each are mutually exclusive meta-arguments and cannot be used together on the same resource. Using both results in a configuration error.
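A sketch of aliased providers wired into a module call (alias, region, and module path are illustrative):

```
provider "aws" {
  region = "us-east-1" # default (unaliased) provider
}

provider "aws" {
  alias  = "eu_west_1"
  region = "eu-west-1"
}

module "regional_stack" {
  source = "./modules/stack"

  # The module's aws provider requirement is satisfied by the aliased instance.
  providers = {
    aws = aws.eu_west_1
  }
}
```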
Q14
An engineer uses terraform.workspace to conditionally set the instance type in a configuration: instance_type = terraform.workspace == "prod" ? "m5.2xlarge" : "t3.micro". A new team member runs terraform apply from the default workspace, not realizing they should be in prod. What is the architectural risk this pattern introduces?
A The default workspace does not support conditional expressions, so the apply will fail
B The workspace name becomes an implicit, invisible control plane — a human error in workspace selection silently deploys undersized or incorrectly configured infrastructure with no validation failure
C Terraform will detect that prod resources exist in a different workspace and refuse the apply
D terraform.workspace is only available in HCP Terraform and does not work with local or S3 backends
Correct Answer
The workspace name becomes an implicit, invisible control plane — a human error in workspace selection silently deploys undersized or incorrectly configured infrastructure with no validation failure
Explanation
B is correct because the workspace-as-configuration-branch pattern moves critical infrastructure decisions (instance sizing, replica counts, security settings) into an implicit runtime variable — the active workspace. There is no compile-time or plan-time enforcement. A developer in the wrong workspace gets a valid, successful apply that creates wrong infrastructure. This is a significant operational risk, especially for security-sensitive settings. A is wrong because conditional expressions are fully supported in all workspaces including default. The apply would succeed without error, which is precisely the problem. C is wrong because Terraform workspaces maintain completely isolated state files. Terraform has no mechanism to detect or cross-reference resources in other workspaces. D is wrong because terraform.workspace is a built-in Terraform value available in all configurations regardless of backend type, including local, S3, Azure Storage, and HCP Terraform.
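One common mitigation is making the environment an explicit, validated input rather than inferring it from the active workspace; a sketch with illustrative values:

```
variable "environment" {
  type = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0abc1234"
  instance_type = var.environment == "prod" ? "m5.2xlarge" : "t3.micro"
}
```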
Q15
A company uses HCP Terraform (formerly Terraform Cloud) with separate workspaces per environment (dev, staging, prod), each connected to different variable sets containing environment-specific credentials. A developer needs to promote a configuration change from staging to prod. What is the recommended workflow?
A Run terraform workspace select prod locally and apply the same plan file from staging
B Merge the code change to the branch connected to the prod workspace in version control, triggering HCP Terraform's VCS-driven run
C Use terraform state mv to copy staging resources into the prod workspace state
D Export the staging plan with terraform show -json and import it into the prod workspace
Correct Answer
Merge the code change to the branch connected to the prod workspace in version control, triggering HCP Terraform's VCS-driven run
Explanation
B is correct because HCP Terraform's VCS integration is designed exactly for this workflow. Each workspace is connected to a branch or tag in version control. Promoting to prod means merging code through a standard pull request process, which triggers a speculative plan (and optionally requires approval) in the prod workspace using the prod variable set and credentials. This provides auditability, approval gates, and credential isolation. A is wrong because saved plan files are tied to the workspace and state they were generated against. Applying a staging plan file against prod state would use staging state references, which reference wrong resource IDs or may not exist in prod. HCP Terraform also does not support terraform workspace select in the traditional sense. C is wrong because terraform state mv moves state entries within or between state files. It does not promote configuration changes and would corrupt the prod state by injecting staging resource state. D is wrong because a JSON plan output is a human-readable representation for analysis. It cannot be imported or applied in another workspace. Plans are not portable artifacts between states.

TA-PRO Practice Set-02

15 questions
Q1
An engineer needs to bring an existing AWS VPC (already created manually) under Terraform management. The VPC ID is vpc-0abc1234. After writing the corresponding aws_vpc resource block in the configuration and running terraform import aws_vpc.main vpc-0abc1234, what must the engineer do before running terraform apply?
A Nothing — after import, terraform apply will confirm the resource is managed with no changes
B Run terraform refresh to synchronize the imported state with the configuration
C Inspect the state with terraform state show aws_vpc.main, compare all attributes to the configuration, and update the configuration to match the real resource's settings to avoid an unintended in-place modification or replacement
D Delete the manually created VPC from AWS before running terraform apply to avoid duplicate resource errors
Correct Answer
Inspect the state with terraform state show aws_vpc.main, compare all attributes to the configuration, and update the configuration to match the real resource's settings to avoid an unintended in-place modification or replacement
Explanation
C is correct because terraform import only writes the resource's current attributes into state — it does not generate or validate configuration. If the configuration's resource block does not match all the imported resource's attributes (e.g., different cidr_block, missing tags, different enable_dns_support), the subsequent terraform plan will show a diff and may attempt to modify or even replace the resource. The engineer must inspect the imported state and reconcile the configuration with it. A is wrong because this is a common misconception. Import does not automatically make the configuration match the real resource. A mismatch between config and imported state will produce planned changes on the next apply. B is wrong because terraform refresh updates state to match real-world infrastructure — it does not help reconcile the configuration file with state. It also does not apply configuration changes. D is wrong because deleting the existing VPC before applying would be catastrophic — dependent resources (subnets, route tables, instances) would lose their VPC. The entire purpose of import is to manage existing resources without recreating them.
Q2
Terraform 1.5 introduced native import blocks as an alternative to the terraform import CLI command. Which of the following is a key advantage of the import block approach over the CLI command?
A Import blocks allow importing resources that the provider does not support, using the AWS CLI as a fallback
B Import blocks are executed during terraform plan, enabling the import to be reviewed, version-controlled, and applied as part of a normal workflow including CI/CD pipelines
C Import blocks automatically generate the correct HCL configuration for the resource, eliminating the need to write resource blocks manually
D Import blocks bypass state locking, making them faster than the CLI import command for large infrastructure sets
Correct Answer
Import blocks are executed during terraform plan, enabling the import to be reviewed, version-controlled, and applied as part of a normal workflow including CI/CD pipelines
Explanation
B is correct because the import block approach integrates with the standard terraform plan / terraform apply workflow. The import action appears in the plan output, can be reviewed before execution, is tracked in version control alongside the configuration change, and can be gated behind CI/CD approvals. The CLI terraform import is an imperative, stateful mutation that runs immediately with no plan-review step. A is wrong because import blocks have the same provider support requirements as CLI imports. Importability is determined by the provider's resource implementation, not the import mechanism. C is wrong as a standalone claim. Terraform 1.5 also introduced terraform plan -generate-config-out=generated.tf which can generate configuration for import blocks, but the import block itself does not auto-generate configuration. This is a subtle but important distinction. D is wrong because import blocks do not bypass state locking. They acquire the state lock during apply just like any other operation. Bypassing state locking would be a correctness defect, not a feature.
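A minimal sketch of the declarative import pattern (Terraform 1.5+); the VPC ID is hypothetical:

```
import {
  to = aws_vpc.main
  id = "vpc-0abc1234567890def"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}
```

Pairing this with terraform plan -generate-config-out=generated.tf can draft the resource block itself, though the generated HCL should still be reviewed before merging.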
Q3
A security team discovers that someone manually added an ingress rule directly to an AWS Security Group that is managed by Terraform. terraform plan shows no changes. Why, and what must be done to detect the drift?
A Terraform automatically detects all out-of-band changes in real time using AWS CloudWatch Events
B The security group's ingress rules are tracked in state; if plan shows no changes, the state already reflects the manual addition because a terraform refresh was run automatically by a previous operation
C Run terraform plan -refresh=true (the default) or terraform refresh to force Terraform to re-query the real infrastructure and compare with state; some pipelines run Terraform with -refresh=false for speed, which suppresses drift detection
D Drift detection requires HCP Terraform's drift detection feature; open-source Terraform does not detect manual changes
Correct Answer
Run terraform plan -refresh=true (the default) or terraform refresh to force Terraform to re-query the real infrastructure and compare with state; by default Terraform may be run with -refresh=false in some pipelines, suppressing drift detection
Explanation
C is correct because by default terraform plan does refresh state (queries current infrastructure via provider APIs) before computing the diff. However, in some CI pipelines -refresh=false is used for speed, which skips drift detection. If the plan shows no changes unexpectedly, confirming that refresh is enabled is the right diagnostic step. Running terraform plan with the default -refresh=true or explicitly running terraform refresh will re-query the security group and show the manual ingress rule as a planned deletion (since it is not in the configuration). A is wrong because Terraform has no real-time change detection mechanism. It is a declarative tool that computes desired-vs-actual state only when explicitly invoked. B is wrong because it assumes a refresh already ran and reconciled state with the manual change; refresh is not guaranteed to run automatically in every pipeline configuration, and had a refreshed plan actually run, the extra ingress rule would appear as a planned deletion rather than as no changes. The question is designed to test knowledge of the refresh mechanism. D is wrong because drift detection via terraform plan (which includes a refresh phase) is a core OSS Terraform capability. HCP Terraform's drift detection feature provides scheduled continuous drift detection without requiring manual plan runs, which is an enhancement, not the only option.
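In command form, the diagnostic above is roughly the following; note that since Terraform v0.15.4, terraform apply -refresh-only is the recommended successor to the standalone terraform refresh command:

```
terraform plan                   # refresh is on by default (-refresh=true)
terraform apply -refresh-only    # reconcile state with real infrastructure only
```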
Q4
After running terraform plan and seeing that Terraform wants to delete a tag from an EC2 instance that was added manually by the operations team, an engineer decides to keep the manual tag and not manage it through Terraform. What is the cleanest configuration change to achieve this?
A Add the tag key to the resource's tags block with an empty string value to suppress the diff
B Add the tag key to ignore_changes in the resource's lifecycle block
C Run terraform state rm aws_instance.web and re-import the resource to capture the new tag
D Use terraform apply -target=aws_instance.web to apply only this resource with the tag change reversed
Correct Answer
Add the tag key to ignore_changes in the resource's lifecycle block
Explanation
B is correct because ignore_changes in the lifecycle block tells Terraform to ignore differences in specific attributes when computing a plan. Adding the tag key to ignore_changes = [tags["ops_team_tag"]] or ignore_changes = [tags] will cause Terraform to stop managing that attribute's drift, leaving manually added tags intact. This is the canonical solution for allowing out-of-band changes to coexist with Terraform management. A is wrong because setting a tag to an empty string would cause Terraform to try to create the tag with an empty value, which may fail or create unintended state. It does not suppress the diff for manually added tags. C is wrong because removing from state and re-importing would add the tag to state — but on the next plan, Terraform would still try to delete it because the configuration doesn't include it. The problem is unsolved. D is wrong because -target scopes the apply to specific resources but does not change what Terraform does to those resources. Without ignore_changes, the tag deletion would still be applied, just scoped to this resource.
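A sketch of the lifecycle configuration described above; the AMI, instance type, and tag key are illustrative:

```
resource "aws_instance" "web" {
  ami           = "ami-0abc1234"
  instance_type = "t3.micro"

  lifecycle {
    # Ignore drift on one specific tag key...
    ignore_changes = [tags["ops_team_tag"]]
    # ...or on all tags: ignore_changes = [tags]
  }
}
```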
Q5
A Terraform configuration specifies version = "~> 4.0" for the AWS provider in the required_providers block. The engineering team upgrades to Terraform 1.6 and the AWS provider releases version 5.0. Which versions will Terraform install?
A 5.0, because ~> means "approximately greater than" and allows major version increments
B The latest 4.x release (e.g., 4.67.0), because ~> 4.0 allows only patch and minor version increments within the 4.x series
C 4.0.0 exactly, because the constraint pins to that specific version
D No version will be installed; ~> is not a valid version constraint operator in Terraform
Correct Answer
The latest 4.x release (e.g., 4.67.0), because ~> 4.0 allows only patch and minor version increments within the 4.x series
Explanation
B is correct because the pessimistic constraint operator ~> in Terraform locks the leftmost non-patch version component. ~> 4.0 means ">= 4.0, < 5.0" — it allows any 4.x version but rejects 5.0. This is a deliberate safety mechanism to prevent breaking changes from major version increments while still allowing bug fixes and new features within the major version. A is wrong because ~> explicitly excludes major version increments. This is the primary purpose of the operator — to allow safe upgrades without absorbing breaking changes. C is wrong because ~> 4.0 is not a pinned constraint. A pinned constraint would be = 4.0.0 or simply "4.0.0". The pessimistic operator allows a range. D is wrong because ~> is a fully valid and commonly used version constraint operator documented in both the Terraform language specification and provider documentation.
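In context, the constraint from the question; the comment restates the equivalent range:

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # equivalent to >= 4.0, < 5.0
    }
  }
}
```

Note that the three-part form ~> 4.0.0 is stricter, allowing only patch releases within 4.0.x.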
Q6
A team's Terraform configuration uses a community provider not available in the public Terraform Registry. They need to use this provider in a restricted environment with no internet access. What must be configured to make terraform init succeed?
A Set TF_PLUGIN_CACHE_DIR to a network share and populate it with the provider binary
B Configure a network_mirror or filesystem_mirror in the CLI configuration file (~/.terraformrc or terraform.rc) pointing to an internal mirror hosting the provider binary
C Copy the provider binary into the .terraform/providers directory before running terraform init
D Use terraform providers lock to generate a lockfile that allows offline use without downloading providers
Correct Answer
Configure a network_mirror or filesystem_mirror in the CLI configuration file (~/.terraformrc or terraform.rc) pointing to an internal mirror hosting the provider binary
Explanation
B is correct because Terraform's provider installation behavior is controlled via the CLI configuration file. A filesystem_mirror points to a local directory containing provider binaries in the expected directory structure, and a network_mirror points to an internal HTTP server implementing the Terraform provider mirror protocol. Both allow terraform init to resolve providers without reaching the public registry. A is wrong because TF_PLUGIN_CACHE_DIR is a provider caching mechanism — it avoids re-downloading already-downloaded providers across projects. However, populating it manually does not help if Terraform still tries to contact the registry first to resolve version metadata. Without a mirror configuration, init will still attempt registry contact. C is wrong because manually placing binaries in .terraform/providers will be overwritten or ignored by terraform init unless the provider is also declared in a lockfile with matching checksums. This is fragile and not a supported pattern. D is wrong because terraform providers lock generates or updates the dependency lockfile (.terraform.lock.hcl) with checksums. It does not enable offline operation — it still requires downloading providers to compute checksums unless the -platform flag is used with specific platform targets.
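A sketch of the CLI configuration described above; the hostname and mirror path are hypothetical:

```
# ~/.terraformrc (or terraform.rc on Windows)
provider_installation {
  filesystem_mirror {
    path    = "/opt/terraform/provider-mirror"
    include = ["example.com/*/*"]
  }
  direct {
    # Prevent Terraform from trying the public registry for mirrored providers
    exclude = ["example.com/*/*"]
  }
}
```

On a machine that does have internet access, terraform providers mirror &lt;dir&gt; can populate the mirror directory in the layout Terraform expects.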
Q7
A Terraform configuration has both azurerm and aws providers. During terraform init, the error appears: Provider registry.terraform.io/hashicorp/aws: there is no package for registry.terraform.io/hashicorp/aws 5.12.0 cached in the plugin cache directory. The .terraform.lock.hcl file references version = "5.12.0" but the local cache only has 5.11.0. What should the engineer do?
A Delete the .terraform.lock.hcl file and re-run terraform init to allow Terraform to select any compatible version
B Run terraform init -upgrade to allow Terraform to download the version specified in the lockfile (or a newer compatible version) and update the cache
C Manually rename the 5.11.0 binary to 5.12.0 in the cache directory to satisfy the lockfile requirement
D Add version = "~> 5.11" to the required_providers block to downgrade the constraint to match the cached version
Correct Answer
Run terraform init -upgrade to allow Terraform to download the version specified in the lockfile (or a newer compatible version) and update the cache
Explanation
B is correct because the lockfile explicitly requires version 5.12.0 (with its specific checksums). terraform init -upgrade allows Terraform to download the locked version or resolve newer compatible versions. Without -upgrade, terraform init will fail if the exact locked version is not in cache. The -upgrade flag is the correct mechanism to resolve lockfile-vs-cache mismatches. A is wrong because deleting the lockfile removes version pinning for the entire team. The lockfile ensures all team members and CI systems use identical provider versions. Deleting it is a drastic action that introduces version inconsistency risk. C is wrong because provider binaries are identified not just by version number in the filename but by cryptographic checksums in the lockfile. Renaming a binary does not change its checksum, so Terraform would reject it during integrity verification. D is wrong because changing required_providers constraints does not retroactively change what is already in the lockfile. Changing the constraint and running terraform init -upgrade would be required to actually resolve the version. Also, ~> 5.11 means >= 5.11, < 6.0, so it would still permit 5.12.0 rather than forcing a match with the cached 5.11.0.
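For reference, a lockfile entry has roughly this shape (the version is from the scenario; the constraint is illustrative and the hashes are elided):

```
# .terraform.lock.hcl, maintained by terraform init and committed to VCS
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.12.0"
  constraints = "~> 5.0"
  hashes = [
    # checksums against which downloaded binaries are verified
  ]
}
```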
Q8
A Terraform module creates an AWS RDS instance with a master password. The engineer declares: variable "db_password" { type = string } and passes it in via a .tfvars file checked into version control. A security engineer flags this as a critical vulnerability. What is the correct remediation?
A Rename the variable to db_password_secret to indicate it should not be logged
B Add sensitive = true to the variable declaration, which prevents the value from being stored in state
C Mark the variable sensitive = true, source the value from environment variable TF_VAR_db_password or a secrets manager integration (e.g., Vault provider), and ensure the .tfvars file is removed from version control and added to .gitignore
D Use nonsensitive(var.db_password) when passing the value to the resource to prevent Terraform from redacting it in plan output
Correct Answer
Mark the variable sensitive = true, source the value from environment variable TF_VAR_db_password or a secrets manager integration (e.g., Vault provider), and ensure the .tfvars file is removed from version control and added to .gitignore
Explanation
C is correct because it addresses all three failure modes: marking sensitive = true redacts the value from plan/apply console output; using an environment variable or secrets manager prevents the secret from ever entering version control; removing the .tfvars file from git history and adding it to .gitignore remediates the existing exposure. Security requires all three layers. A is wrong because variable naming is cosmetic and has zero effect on security. Terraform does not infer sensitivity from variable names. B is wrong because sensitive = true does NOT prevent the value from being stored in state — it only suppresses the value in console output. Terraform state always contains sensitive values in plaintext (unless using encryption at rest). State must be protected separately. D is wrong because nonsensitive() is the opposite of what is needed. It explicitly tells Terraform to stop treating a value as sensitive, causing it to appear in plan output. This is sometimes used for debugging but is a security regression, not an improvement.
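A minimal sketch of the remediation; the variable name comes from the question, everything else is illustrative:

```
variable "db_password" {
  type      = string
  sensitive = true # redacts console output; the state file still holds the value
}

resource "aws_db_instance" "main" {
  # ...
  password = var.db_password
}
```

The value is then supplied out-of-band (e.g., export TF_VAR_db_password=... in the CI environment) and *.tfvars is added to .gitignore. Note that removing the file from the current commit does not purge it from git history; a history rewrite may also be required to complete the remediation.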
Q9
A team uses the HashiCorp Vault provider to retrieve a database password during Terraform runs. The secret is read using a vault_generic_secret data source and passed to an aws_db_instance resource. A colleague points out a remaining security concern even with this approach. What is it?
A The Vault provider is not compatible with AWS resources and requires a separate provider configuration
B Secrets retrieved via data sources are written into the Terraform state file in plaintext, meaning state must be encrypted at rest and access tightly controlled
C Using a data source to read Vault secrets is deprecated; the Vault provider now requires using the vault_kv_secret_v2 resource instead of data sources
D Terraform does not support dynamic secret rotation when using Vault; the secret will be permanently cached after the first apply
Correct Answer
Secrets retrieved via data sources are written into the Terraform state file in plaintext, meaning state must be encrypted at rest and access tightly controlled
Explanation
B is correct because Terraform state captures the output of all data sources, including secrets retrieved from Vault. Even if the secret is marked sensitive in the provider, its value is stored in the state file as plaintext JSON. This means the state file itself becomes a secret that must be protected with encryption at rest (S3 SSE, Vault backend, etc.) and strict IAM access controls. This is a frequently misunderstood residual risk. A is wrong because the Vault provider is fully compatible with any other provider, including AWS. Providers operate independently and can coexist in any configuration. C is wrong because Vault data sources remain supported and are not deprecated. Both vault_generic_secret and vault_kv_secret_v2 data sources are valid depending on the KV engine version. D is wrong because while Terraform does not natively rotate secrets between runs, Vault's dynamic secrets can generate short-lived credentials on each terraform apply. The characterization that secrets are "permanently cached" is incorrect.
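A sketch of the pattern under discussion; the Vault path and resource details are hypothetical:

```
data "vault_generic_secret" "db" {
  path = "secret/prod/database"
}

resource "aws_db_instance" "main" {
  # ...
  # Redacted in console output, but stored in plaintext in the state file
  password = data.vault_generic_secret.db.data["password"]
}
```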
Q10
A developer proposes storing the Terraform state file locally (no remote backend) for a production AWS environment to avoid complexity. What are the two most critical risks with this approach?
A Local state files are compressed differently than remote state, causing plan failures; and local runs cannot use provider aliases
B State will not be shareable with team members, causing conflict and inconsistency; and there is no state locking, enabling concurrent applies that can corrupt state
C Local backends do not support terraform plan, only terraform apply; and local state cannot be encrypted
D Terraform CLI does not allow local state for AWS resources; an S3 backend is mandatory per AWS provider requirements
Correct Answer
State will not be shareable with team members, causing conflict and inconsistency; and there is no state locking, enabling concurrent applies that can corrupt state
Explanation
B is correct and identifies the two canonical risks of local state in a team/production context. First, the state file on one developer's machine is not visible to others, leading to configuration drift, duplicate resource creation, or conflicting changes. Second, without a remote backend there is no locking mechanism — two people running apply simultaneously write to the same state file, causing corruption. Both risks are existential for a production environment. A is wrong because local state files use the same JSON format as remote state files. Compression differences do not exist. Provider aliases work identically regardless of backend type. C is wrong because terraform plan works fully with local backends — it is the default in open-source Terraform. While local state is not encrypted at rest by default, this is a separate concern from the two primary risks identified in B. D is wrong because Terraform does not enforce backend requirements based on the provider being used. Using a local backend with AWS resources is allowed (though inadvisable). There is no such requirement in the AWS provider.
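For contrast with the local-state approach, a minimal remote backend that addresses both risks; the bucket, key, and table names are hypothetical:

```
terraform {
  backend "s3" {
    bucket         = "acme-prod-tfstate"
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true               # encryption at rest for the state object
    dynamodb_table = "tf-state-locks"   # provides state locking
  }
}
```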
Q11
A CI/CD pipeline for Terraform uses the following sequence: terraform init → terraform validate → terraform plan → terraform apply. After a policy review, the security team requires that infrastructure changes be approved by a human before apply runs in production. Which approach integrates approval into this workflow without abandoning the saved-plan approach?
A Add terraform plan -refresh=false before apply to skip state changes that might require approval
B Generate a plan with terraform plan -out=tfplan, upload the plan file as a pipeline artifact, require manual approval in the pipeline gate, then run terraform apply tfplan in a subsequent stage that only runs after approval
C Replace terraform apply with terraform apply -auto-approve and rely on code review of the plan output in the PR as the approval mechanism
D Run terraform apply with the -lock-timeout=300s flag to give approvers five minutes to review before the lock expires
Correct Answer
Generate a plan with terraform plan -out=tfplan, upload the plan file as a pipeline artifact, require manual approval in the pipeline gate, then run terraform apply tfplan in a subsequent stage that only runs after approval
Explanation
B is correct because this is the standard pattern for human-in-the-loop Terraform pipelines. The plan is generated and saved, a pipeline approval gate (available in GitHub Actions, GitLab CI, Azure DevOps, etc.) pauses execution, a human reviews the plan output, and apply runs against the exact same plan that was approved. This ensures what was reviewed is exactly what gets applied. A is wrong because -refresh=false skips drift detection, which reduces safety rather than adding approval control. It does not introduce a human approval step. C is wrong because -auto-approve removes all interactive prompts and bypasses the apply confirmation. Relying on PR code review of plan output is not equivalent to a formal approval gate — the plan can differ from the PR if infrastructure has changed. D is wrong because -lock-timeout controls how long Terraform waits to acquire a state lock when it is held by another process. It has nothing to do with human approval workflows.
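The gated pipeline described above reduces to the following command sequence, with the approval gate provided by the CI system between the two stages:

```
# Stage 1: plan
terraform init
terraform plan -out=tfplan     # save the exact plan as a pipeline artifact

# (manual approval gate between stages)

# Stage 2: apply, runs only after approval
terraform apply tfplan         # applies exactly the plan that was reviewed
```

Because apply consumes the saved plan file, it will refuse to run if the state has changed since the plan was created, which is exactly the guarantee the approval workflow needs.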
Q12
A terraform apply fails with the error: Error: Invalid provider configuration. Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. The configuration has a valid provider "aws" block. What is the most likely cause?
A The AWS provider version is incompatible with the current Terraform version
B The configuration is using a module that references the AWS provider but the root module is not passing the provider to the module via the providers argument, and the module has no default provider configured
C The required_providers block is missing from the root module's terraform block
D The provider "aws" block is inside a module file rather than the root main.tf file
Correct Answer
The configuration is using a module that references the AWS provider but the root module is not passing the provider to the module via the providers argument, and the module has no default provider configured
Explanation
B is correct because this specific error message is commonly produced when a module uses a provider that requires explicit configuration (e.g., an aliased provider or a provider that needs credentials) but the root module has not passed that provider instance to the module using the providers map in the module call block. The module cannot inherit an implicitly configured provider in all cases — particularly with aliases or when providers are defined in the calling module. A is wrong because a version incompatibility between Terraform and the provider would produce a different error, typically mentioning protocol version or API compatibility, not "requires explicit configuration." C is wrong because a missing required_providers block causes a warning or a different validation error during init, not the specific message described. Terraform can still use providers declared via provider blocks without required_providers, though the latter is best practice. D is wrong because provider blocks can appear in any .tf file within the root module directory. Terraform merges all .tf files in a directory. File naming (main.tf vs other names) has no functional significance.
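A sketch of the explicit provider hand-off described above; the alias, region, and module path are illustrative:

```
provider "aws" {
  alias  = "prod"
  region = "us-east-1"
}

module "network" {
  source = "./modules/network"

  # Explicitly hand the configured provider instance to the child module
  providers = {
    aws = aws.prod
  }
}
```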
Q13
A root module calls a child module twice with different configurations to create resources in two different AWS accounts using assumed roles:

```
module "account_a" {
  source    = "./modules/networking"
  providers = { aws = aws.account_a }
}

module "account_b" {
  source    = "./modules/networking"
  providers = { aws = aws.account_b }
}
```
The child module's required_providers block does not specify a configuration_aliases entry for the AWS provider. What is the result?
A Terraform will raise an error because child modules cannot accept provider configurations from parent modules
B The child module will use the default AWS provider, ignoring the providers map in the module call
C Terraform may raise a warning or error indicating that the child module does not declare that it accepts an alternate provider configuration; configuration_aliases must be declared in the child module's required_providers for this pattern to work correctly
D Both module instances will use aws.account_a because it is declared first in the root module
Correct Answer
Terraform may raise a warning or error indicating that the child module does not declare that it accepts an alternate provider configuration; configuration_aliases must be declared in the child module's required_providers for this pattern to work correctly
Explanation
C is correct because when a root module passes a non-default (aliased) provider to a child module, the child module must declare in its own required_providers block that it accepts alternate provider configurations using configuration_aliases. Without this declaration, Terraform may produce an error or warning because the child module has no way to signal that it is designed to accept injected provider configurations. This is a subtle but important requirement for multi-account or multi-region module patterns. A is wrong because child modules absolutely can accept provider configurations from parent modules — this is a supported and common pattern. The limitation is the requirement for configuration_aliases declaration. B is wrong because when providers is explicitly specified in a module call, Terraform uses the mapping — it does not silently fall back to the default provider. The issue is validation, not silent fallback. D is wrong because Terraform does not use declaration order to assign providers. Each module instance uses the provider explicitly mapped in its providers argument.
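One common shape of the fix described above, sketched with a hypothetical alias name this:

```
# modules/networking/versions.tf: the child declares that it expects
# the caller to inject an AWS provider configuration for alias "this"
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.this]
    }
  }
}
```

The root module call then maps a concrete configuration onto that alias, e.g. providers = { aws.this = aws.account_a }, and resources inside the child reference provider = aws.this.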
Q14
A team needs to refactor a Terraform configuration by moving a resource from one module to a different module without destroying and recreating it. The resource is module.old_module.aws_s3_bucket.data. It should become module.new_module.aws_s3_bucket.data. Which approach correctly achieves this?
A Remove the resource from the old module, add it to the new module, and run terraform apply — Terraform will detect the move via resource matching
B Use terraform state mv module.old_module.aws_s3_bucket.data module.new_module.aws_s3_bucket.data to update the state address, then update the configuration
C Use a moved block in the Terraform configuration declaring from = module.old_module.aws_s3_bucket.data and to = module.new_module.aws_s3_bucket.data, then update the configuration and run terraform apply
D Both B and C are correct approaches; B is imperative and C is declarative
Correct Answer
Both B and C are correct approaches; B is imperative and C is declarative
Explanation
D is correct because both approaches are valid and supported. terraform state mv (option B) is the imperative CLI approach — it directly manipulates state and must be done before or in conjunction with the configuration change. The moved block (option C, introduced in Terraform 1.1) is the declarative approach — it lives in the configuration, is version-controlled, can be applied as part of a normal plan/apply cycle, and communicates the refactor intent to all team members. Both achieve the same result: updating the state address without destroying and recreating the resource. The moved block is generally preferred for team environments due to its auditability. A is wrong because Terraform does not automatically match resources across module boundaries. Without a moved block or state mv, Terraform will plan to destroy the resource in the old location and create a new one in the new location — exactly the outcome to avoid. B alone is incomplete as the single correct answer because C is also fully correct, making D the most accurate choice. C alone is incomplete as the single correct answer for the same reason.
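The declarative variant can be sketched directly from the addresses in the question:

```
# Committed alongside the refactor; safe to remove once every
# environment that uses this configuration has applied it
moved {
  from = module.old_module.aws_s3_bucket.data
  to   = module.new_module.aws_s3_bucket.data
}
```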
Q15
A Terraform configuration provisions an AWS Lambda function and an API Gateway. The API Gateway depends on the Lambda function's invocation URI. During destroy, Terraform attempts to destroy the Lambda function before the API Gateway, causing the API Gateway destroy to fail with a dependency error. The Lambda resource block has no explicit depends_on. Why is this happening, and what is the correct fix?
A Terraform always destroys resources in creation order; reverse the resource declarations in the configuration file to fix destroy order
B The API Gateway resource references the Lambda's invoke_arn attribute, creating an implicit dependency that Terraform should honor during destroy; the issue is likely that the reference is via an interpolated local or data source that breaks the dependency chain — audit the dependency graph with terraform graph and add an explicit depends_on on the API Gateway resource referencing the Lambda
C Add create_before_destroy = true to the Lambda function's lifecycle block to ensure it is created last and destroyed first
D Use terraform destroy -target=aws_api_gateway_rest_api.main first, then run terraform destroy for the remaining resources
Correct Answer
The API Gateway resource references the Lambda's invoke_arn attribute, creating an implicit dependency that Terraform should honor during destroy; the issue is likely that the reference is via an interpolated local or data source that breaks the dependency chain — audit the dependency graph with terraform graph and add an explicit depends_on on the API Gateway resource referencing the Lambda
Explanation
B is correct because Terraform builds its dependency graph from explicit attribute references between resources. If the API Gateway references aws_lambda_function.main.invoke_arn directly, Terraform knows the API Gateway depends on the Lambda and should destroy the API Gateway first. If the dependency chain is broken — for example, the ARN is passed through a local value that is computed independently, or via a variable — Terraform may lose track of the implicit dependency and produce an incorrect destroy order. Running terraform graph | dot -Tsvg > graph.svg visualizes the dependency graph to confirm. Adding depends_on = [aws_lambda_function.main] to the API Gateway resource restores the explicit dependency and corrects destroy ordering. A is wrong because Terraform does not determine resource order from file declaration order. It uses the dependency graph exclusively. Reordering declarations in a file has zero effect on plan or destroy order. C is wrong because create_before_destroy controls replacement behavior — during a destroy-and-recreate cycle, it ensures the new resource is created before the old one is deleted. It does not control the order in which independent resources are destroyed during a terraform destroy. D is wrong because using -target in this way is a manual workaround that does not fix the root cause. It also carries the documented risk of leaving state partially applied and should not be used as a routine operational pattern.
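A sketch of the two remediation patterns named above; the resource types and names are illustrative:

```
# Preferred: a direct attribute reference creates the implicit dependency
# that drives both create and destroy ordering
resource "aws_api_gateway_integration" "lambda" {
  # ...
  uri = aws_lambda_function.main.invoke_arn
}

# Fallback: if the reference chain is indirect and the graph loses the
# edge, restore it explicitly
resource "aws_api_gateway_rest_api" "main" {
  # ...
  depends_on = [aws_lambda_function.main]
}
```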

Want More Practice?

These are just the free questions. Unlock the full Terraform Authoring & Operations Professional exam library with hundreds of additional questions, timed practice mode, and progress tracking.
