Terraform Certification Knowledge Cards (Master) Flashcards

1
Q

What is Infrastructure as Code?

A

You write and execute the code to define, deploy, update, and destroy your infrastructure

2
Q

What are the benefits of IaC?

A

Automation
Reusability of the code
Versioning

3
Q

How does using IaC make it easy to provision infrastructure?

A

IaC makes it easy to provision and apply infrastructure configurations, saving time. It standardizes workflows across different infrastructure providers (e.g., VMware, AWS, Azure, GCP, etc.) by using a common syntax across all of them.

4
Q

What is idempotent in terms of IaC?

A

The idempotent characteristic provided by IaC tools ensures that, even if the same code is applied multiple times, the result remains the same.

5
Q

What are Day 0 and Day 1 activities?

A

IaC can be applied throughout the lifecycle, both on the initial build, as well as throughout the life of the infrastructure. Commonly, these are referred to as Day 0 and Day 1 activities.
“Day 0” code provisions and configures your initial infrastructure.
“Day 1” refers to OS and application configurations you apply after you’ve initially built your infrastructure.

6
Q

What are the use cases of Terraform?

A
Heroku App Setup
Multi-Tier Applications
Self-Service Clusters
Software Demos
Disposable Environments
Software Defined Networking
Resource Schedulers
Multi-Cloud Deployment
7
Q

What are the advantages of Terraform?

A

Platform Agnostic
State Management
Operator Confidence

8
Q

Where do you describe all the components of your entire datacenter so that Terraform can provision them?

A

In configuration files, which end with *.tf

9
Q

How can Terraform build infrastructure so efficiently?

A

Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

10
Q

What is multi-cloud deployment?

A

Provisioning your infrastructure into multiple cloud providers to increase fault-tolerance of your applications.

11
Q

How is multi-cloud deployment useful?

A

By using only a single region or cloud provider, fault tolerance is limited by the availability of that provider.
Having a multi-cloud deployment allows for more graceful recovery of the loss of a region or entire provider.

12
Q

What is cloud-agnostic in terms of provisioning tools?

A

A cloud-agnostic provisioning tool allows a single configuration to be used to manage multiple providers, and can even handle cross-cloud dependencies.

13
Q

Is Terraform cloud-agnostic?

A

Yes

14
Q

What is the use of terraform being cloud-agnostic?

A

It simplifies management and orchestration, helping operators build large-scale multi-cloud infrastructures.

15
Q

Where is Terraform State Stored When running locally?

A

By default, when you run Terraform in the folder /some/folder, Terraform creates the file /some/folder/terraform.tfstate.

16
Q

What is the purpose of the Terraform State?

A

Mapping to the Real World
Terraform requires some sort of database to map the Terraform configuration to the real world, because the same functionality is not available in every cloud provider; Terraform needs its own mechanism to remain cloud-agnostic.

Metadata
Terraform must also track metadata such as resource dependencies and a pointer to the provider configuration that was most recently used with the resource, in situations where multiple aliased providers are present.

Performance
When running a terraform plan, Terraform must know the current state of resources in order to effectively determine the changes that it needs to make to reach your desired configuration.
For larger infrastructures, querying every resource is too slow. Many cloud providers do not provide APIs to query multiple resources at once, and the round trip time for each resource is hundreds of milliseconds. So, Terraform stores a cache of the attribute values for all resources in the state. This is the most optional feature of Terraform state and is done only as a performance improvement.

Syncing
When two people work on the same configuration and make changes to the infrastructure, it is very important for everyone to be working with the same state so that operations will be applied to the same remote objects.

17
Q

What is the name of the terraform state file?

A

terraform.tfstate

18
Q

How do you manually install terraform?

A

step 1: Download the zip file and unzip it

step 2: mv ~/Downloads/terraform /usr/local/bin/terraform

19
Q

Where do you put terraform configurations so that you can configure some behaviors of Terraform itself?

A
The special terraform configuration block type is used to configure some behaviors of Terraform itself, such as requiring a minimum Terraform version to apply your configuration.
terraform {
  # ...
}
20
Q

Only constants are allowed inside the terraform block. Is this correct?

A

Yes
Within a terraform block, only constant values can be used; arguments may not refer to named objects such as resources, input variables, etc, and may not use any of the Terraform language built-in functions.

21
Q

What are the Providers?

A

A provider is a plugin that Terraform uses to translate the API interactions with the service. A provider is responsible for understanding API interactions and exposing resources. Because Terraform can interact with any API, you can represent almost any infrastructure type as a resource in Terraform.
https://www.terraform.io/docs/configuration/providers.html

22
Q

How do you configure a Provider?

A

provider "google" {
  project = "acme-app"
  region  = "us-central1"
}
The name given in the block header (“google” in this example) is the name of the provider to configure. Terraform associates each resource type with a provider by taking the first word of the resource type name (separated by underscores), and so the “google” provider is assumed to be the provider for the resource type name google_compute_instance.
The body of the block (between { and }) contains configuration arguments for the provider itself. Most arguments in this section are specified by the provider itself; in this example both project and region are specific to the google provider.

23
Q

What are the meta-arguments that are defined by Terraform itself and available for all provider blocks?

A

version: Constraining the allowed provider versions
alias: using the same provider with different configurations for different resources
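For example, a minimal sketch using the aws provider with pre-0.13 syntax, where version is set inside the provider block (the region and constraint shown are illustrative):
provider "aws" {
  alias   = "west"
  version = "~> 2.7"
  region  = "us-west-2"
}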

24
Q

What is provider initialization and why do we need it?

A

Each time a new provider is added to configuration – either explicitly via a provider block or by adding a resource from that provider – Terraform must initialize the provider before it can be used.
Initialization downloads and installs the provider’s plugin so that it can later be executed.

25
Q

How do you initialize any Provider?

A

Provider initialization is one of the actions of terraform init. Running this command will download and initialize any providers that are not already initialized.

26
Q

When you run terraform init command, all the providers are installed in the current working directory. Is this true?

A

True.
Providers downloaded by terraform init are only installed for the current working directory; other working directories can have their own installed provider versions.
Note that terraform init cannot automatically download providers that are not distributed by HashiCorp. See Third-party Plugins below for installation instructions.

27
Q

How do you constrain the provider version?

A
To constrain the provider version as suggested, add a required_providers block inside a terraform block:
terraform {
  required_providers {
    aws = "~> 1.0"
  }
}
28
Q

How do you upgrade to the latest acceptable version of the provider?

A

terraform init -upgrade
It upgrades to the latest acceptable version of each provider.
This command also upgrades to the latest versions of all Terraform modules.

29
Q

In how many ways can you configure provider versions?

A
1. With required_providers blocks under terraform block
terraform {
  required_providers {
    aws = "~> 1.0"
  }
}
2. Provider version constraints can also be specified using a version argument within a provider block
provider "aws" {
  version = "1.0"
}
30
Q

How do you configure Multiple Provider Instances?

A
alias
You can optionally define multiple configurations for the same provider, and select which one to use on a per-resource or per-module basis.
31
Q

Why do we need Multiple Provider instances?

A

Some of the example scenarios:

a. multiple regions for a cloud platform
b. targeting multiple Docker hosts
c. multiple Consul hosts, etc.

32
Q

How do we define multiple Provider configurations?

A
To include multiple configurations for a given provider, include multiple provider blocks with the same provider name, but set the alias meta-argument to an alias name to use for each additional configuration.
# The default provider configuration
provider "aws" {
  region = "us-east-1"
}
# Additional provider configuration for west coast region
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}
33
Q

How do you select alternate providers?

A

By default, resources use a default provider configuration inferred from the first word of the resource type name. For example, a resource of type aws_instance uses the default (un-aliased) aws provider configuration unless otherwise stated.
resource "aws_instance" "foo" {
  provider = aws.west

  # ...
}
34
Q

What is the location of the user plugins directory?

A

Windows %APPDATA%\terraform.d\plugins

All other systems ~/.terraform.d/plugins

35
Q

Third-party plugins should be manually installed. Is that true?

A

True

36
Q

The command terraform init cannot automatically install third-party plugins. True or false?

A

True
Install third-party providers by placing their plugin executables in the user plugins directory. The user plugins directory is in one of the following locations, depending on the host operating system
Once a plugin is installed, terraform init can initialize it normally. You must run this command from the directory where the configuration files are located.

37
Q

What is the naming scheme for provider plugins?

A

terraform-provider-<NAME>_vX.Y.Z (for example, terraform-provider-example_v0.1.0)

38
Q

What is the CLI configuration File?

A

The CLI configuration file configures per-user settings for CLI behaviors, which apply across all Terraform working directories.
It is named either .terraformrc or terraform.rc
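A minimal sketch of what such a file might contain (plugin_cache_dir and disable_checkpoint are real CLI settings; the path shown is only an illustration):
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true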

39
Q

Where is the location of the CLI configuration File?

A

On Windows, the file must be named terraform.rc and placed in the relevant user's %APPDATA% directory.
On all other systems, the file must be named .terraformrc (note the leading period) and placed directly in the home directory of the relevant user.
The location of the Terraform CLI configuration file can also be specified using the TF_CLI_CONFIG_FILE environment variable.

40
Q

What is Provider Plugin Cache?

A

By default, terraform init downloads plugins into a subdirectory of the working directory so that each working directory is self-contained. As a consequence, if you have multiple configurations that use the same provider then a separate copy of its plugin will be downloaded for each configuration.
Given that provider plugins can be quite large (on the order of hundreds of megabytes), this default behavior can be inconvenient for those with slow or metered Internet connections.
Therefore Terraform optionally allows the use of a local directory as a shared plugin cache, which then allows each distinct plugin binary to be downloaded only once.

41
Q

How do you enable Provider Plugin Cache?

A

To enable the plugin cache, use the plugin_cache_dir setting in the CLI configuration file.
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
Alternatively, the TF_PLUGIN_CACHE_DIR environment variable can be used to enable caching or to override an existing cache directory within a particular shell session:
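For example, in a shell session (the path shown is illustrative):
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"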

42
Q

When you are using the plugin cache, the cache directory grows over time with different plugin versions. Whose responsibility is it to clean it up?

A

User
Terraform will never itself delete a plugin from the plugin cache once it’s been placed there. Over time, as plugins are upgraded, the cache directory may grow to contain several unused versions which must be manually deleted.

43
Q

Why do we need to initialize the directory?

A
When you create a new configuration — or check out an existing configuration from version control — you need to initialize the directory
// Example
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}
resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
}
Initializing a configuration directory downloads and installs providers used in the configuration, which in this case is the aws provider. Subsequent commands will use local settings and data during initialization.
44
Q

What is the command to initialize the directory?

A

terraform init

45
Q

If different teams are working on the same configuration, how do you make the files have consistent formatting?

A

terraform fmt

This command automatically updates configurations in the current directory for easy readability and consistency.

46
Q

If different teams are working on the same configuration, how do you make sure the files are syntactically valid and internally consistent?

A

terraform validate
This command will check and report errors within modules, attribute names, and value types.
Validate your configuration. If your configuration is valid, Terraform will return a success message.

47
Q

What is the command to create infrastructure?

A

terraform apply

48
Q

What is the command to show the execution plan and not apply?

A

terraform plan

49
Q

How do you inspect the current state of the infrastructure applied?

A

terraform show
When you applied your configuration, Terraform wrote data into a file called terraform.tfstate. This file now contains the IDs and properties of the resources Terraform created so that it can manage or destroy those resources going forward.

50
Q

If your state file is too big and you want to list the resources from your state, what is the command?

A

terraform state list

https://learn.hashicorp.com/terraform/getting-started/build#manually-managing-state

51
Q

What is plug-in based architecture?

A

Defining additional features as plugins to your core platform or core application. This provides extensibility, flexibility, and isolation.

52
Q

What are Provisioners?

A

If you need to do some initial setup on your instances, then provisioners let you upload files, run shell scripts, or install and trigger other software like configuration management tools, etc.

53
Q

How do you define provisioners?

A

resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = "echo hello > hello.txt"
  }
}
Define a provisioner block within the resource block. Multiple provisioner blocks can be added to define multiple provisioning steps; Terraform supports multiple provisioners.
https://learn.hashicorp.com/terraform/getting-started/provision

54
Q

What are the types of provisioners?

A

local-exec

remote-exec

55
Q

What is a local-exec provisioner and when do we use it?

A

The local-exec provisioner executes a command locally on the machine running Terraform.
We use it when we need to do something on the local machine without needing to connect to the remote resource.

56
Q

What is a remote-exec provisioner and when do we use it?

A

Another useful provisioner is remote-exec which invokes a script on a remote resource after it is created.
This can be used to run a configuration management tool, bootstrap into a cluster, etc.

57
Q

Do provisioners run only when a resource is created or destroyed?

A

Provisioners are only run when a resource is created or destroyed. Provisioners that run while destroying are called destroy provisioners.
They are not a replacement for configuration management and changing the software of an already-running server, and are instead just meant as a way to bootstrap a server.

58
Q

What do we need to use a remote-exec?

A
In order to use a remote-exec provisioner, you must choose an ssh or winrm connection in the form of a connection block within the provisioner.
Here is an example
provider "aws" {
  profile = "default"
  region  = "us-west-2"
}
resource "aws_key_pair" "example" {
  key_name   = "examplekey"
  public_key = file("~/.ssh/terraform.pub")
}
resource "aws_instance" "example" {
  key_name      = aws_key_pair.example.key_name
  ami           = "ami-04590e7389a6e577c"
  instance_type = "t2.micro"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/terraform")
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo amazon-linux-extras enable nginx1.12",
      "sudo yum -y install nginx",
      "sudo systemctl start nginx"
    ]
  }
}
59
Q

When might terraform mark resources as tainted?

A

If a resource successfully creates but fails during provisioning, Terraform will error and mark the resource as “tainted”.
A resource that is tainted has been physically created, but can’t be considered safe to use since provisioning failed.

60
Q

You applied the infrastructure with terraform apply and you have some tainted resources. You now run an execution plan; what happens to those tainted resources?

A

When you generate your next execution plan, Terraform will not attempt to restart provisioning on the same resource because it isn’t guaranteed to be safe.
Instead, Terraform will remove any tainted resources and create new resources, attempting to provision them again after creation.
https://learn.hashicorp.com/terraform/getting-started/provision

61
Q

Terraform also does not automatically roll back and destroy the resource during the apply when the failure happens. Why?

A

Terraform also does not automatically roll back and destroy the resource during the apply when the failure happens, because that would go against the execution plan: the execution plan would’ve said a resource will be created, but does not say it will ever be deleted. If you create an execution plan with a tainted resource, however, the plan will clearly state that the resource will be destroyed because it is tainted.
https://learn.hashicorp.com/terraform/getting-started/provision

62
Q

How do you manually taint a resource?

A

terraform taint <resource address> (for example, terraform taint aws_instance.example)

63
Q

Does the taint command modify the infrastructure?

A

terraform taint <resource address>
This command will not modify infrastructure, but does modify the state file in order to mark a resource as tainted. Once a resource is marked as tainted, the next plan will show that the resource will be destroyed and recreated and the next apply will implement this change.

64
Q

By default, provisioners that fail will also cause the Terraform apply itself to fail. Is this true?

A

True

65
Q

By default, provisioners that fail will also cause the Terraform apply itself to fail. How do you change this?

A

The on_failure setting can be used to change this.
The allowed values are:
continue: Ignore the error and continue with creation or destruction.
fail: Raise an error and stop applying (the default behavior). If this is a creation provisioner, taint the resource.

// Example
resource "aws_instance" "web" {
  # ...
  provisioner "local-exec" {
    command  = "echo The server's IP address is ${self.private_ip}"
    on_failure = "continue"
  }
}
66
Q

How do you define destroy provisioner and give an example?

A

You can define a destroy provisioner with the when parameter:
provisioner "remote-exec" {
  when = "destroy"

  # ...
}

67
Q

How do you apply constraints for the provider versions?

A
The required_providers setting is a map specifying a version constraint for each provider required by your configuration.
terraform {
  required_providers {
    aws = ">= 2.7.0"
  }
}
68
Q

What should you use to set both a lower and upper bound on versions for each provider?

A
~>
terraform {
  required_providers {
    aws = "~> 2.7.0"
  }
}
69
Q

How do you try experimental features?

A
In releases where experimental features are available, you can enable them on a per-module basis by setting the experiments argument inside a terraform block:
terraform {
  experiments = [example]
}
70
Q

When does Terraform not recommend using provisioners?

A

Passing data into virtual machines and other compute resources
https://www.terraform.io/docs/provisioners/#passing-data-into-virtual-machines-and-other-compute-resources
Running configuration management software
https://www.terraform.io/docs/provisioners/#running-configuration-management-software

71
Q

Expressions in provisioner blocks cannot refer to their parent resource by name. Is this true?

A

True
The self object represents the provisioner’s parent resource, and has all of that resource’s attributes.
For example, use self.public_ip to reference an aws_instance’s public_ip attribute.
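A minimal sketch, assuming an aws_instance with a local-exec provisioner (the resource name and command are illustrative):
resource "aws_instance" "web" {
  # ...
  provisioner "local-exec" {
    # self refers to this aws_instance, so self.public_ip is its public IP
    command = "echo ${self.public_ip} >> public_ips.txt"
  }
}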

72
Q

What does this symbol version = “~> 1.0” mean when defining versions?

A

Any version greater than or equal to 1.0 and less than 2.0.

73
Q

Terraform supports both cloud and on-premises infrastructure platforms. Is this true?

A

True

74
Q

Terraform assumes an empty default configuration for any provider that is not explicitly configured. A provider block can be empty. Is this true?

A

True

75
Q

How do you configure the required version of Terraform CLI can be used with your configuration?

A

The required_version setting can be used to constrain which versions of the Terraform CLI can be used with your configuration. If the running version of Terraform doesn’t match the constraints specified, Terraform will produce an error and exit without taking any further actions.
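For example (the constraint shown is only an illustration):
terraform {
  required_version = ">= 0.12"
}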

76
Q

Terraform CLI versions and provider versions are independent of each other. Is this true?

A

True

77
Q

You are configuring the aws provider and it is always recommended to hard-code AWS credentials in *.tf files. Is this true?

A

False
HashiCorp recommends that you never hard-code credentials into *.tf configuration files. We are explicitly defining the default AWS config profile here to illustrate how Terraform should access sensitive credentials.
If you leave out your AWS credentials, Terraform will automatically search for saved API credentials (for example, in ~/.aws/credentials) or IAM instance profile credentials. This is cleaner when .tf files are checked into source control or if there is more than one admin user
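A minimal sketch of a provider block that relies on a shared credentials file rather than hard-coded keys (the profile name is an assumption):
provider "aws" {
  # Credentials are read from ~/.aws/credentials; nothing sensitive lives in *.tf
  profile = "default"
  region  = "us-east-1"
}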

78
Q

You are provisioning the infrastructure with the command terraform apply and you noticed one of the resources failed. How do you remove that resource without affecting the whole infrastructure?

A

You can taint the resource, and the next apply will destroy and recreate it:
terraform taint <resource address>

79
Q

What is command fmt?

A

The terraform fmt command is used to rewrite Terraform configuration files to a canonical format and style. This command applies a subset of the Terraform language style conventions, along with other minor adjustments for readability.

80
Q

What is the recommended approach after upgrading terraform?

A

The canonical format may change in minor ways between Terraform versions, so after upgrading Terraform we recommend proactively running terraform fmt on your modules along with any other changes you are making to adopt the new version.

81
Q

By default, fmt scans the current directory for configuration files. Is this true?

A

True
By default, fmt scans the current directory for configuration files. If the dir argument is provided then it will scan that given directory instead. If dir is a single dash (-) then fmt will read from standard input (STDIN).

82
Q

You are formatting the configuration files and what is the flag you should use to see the differences?

A

terraform fmt -diff

83
Q

You are formatting the configuration files and what is the flag you should use to process the subdirectories as well?

A

terraform fmt -recursive

84
Q

You are formatting configuration files in a lot of directories and you don’t want to see the list of file changes. What is the flag that you should use?

A

terraform fmt -list=false

85
Q

What is the command taint?

A

The terraform taint command manually marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
This command will not modify infrastructure, but does modify the state file in order to mark a resource as tainted. Once a resource is marked as tainted, the next plan will show that the resource will be destroyed and recreated and the next apply will implement this change.

86
Q

When you are tainting a resource, Terraform reads the default state file terraform.tfstate. What flag should you use to read the state from a different path?

A

terraform taint -state=path

87
Q

Give an example of tainting a single resource?

A
terraform taint aws_security_group.allow_all
The resource aws_security_group.allow_all in the module root has been marked as tainted.
88
Q

What is the command import?

A

The terraform import command is used to import existing resources into Terraform.
Terraform is able to import existing infrastructure. This allows you to take resources you've created by some other means and bring them under Terraform management.
This is a great way to slowly transition infrastructure to Terraform, or to be able to be confident that you can use Terraform in the future if it potentially doesn’t support every feature you need today.

89
Q

What is the command import usage?

A

terraform import [options] ADDRESS ID
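For example (the resource address and instance ID are hypothetical):
terraform import aws_instance.example i-abcd1234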

90
Q

What is the default workspace name?

A

default

91
Q

What are workspaces?

A

Each Terraform configuration has an associated backend that defines how operations are executed and where persistent data such as the Terraform state are stored.
The persistent data stored in the backend belongs to a workspace. Initially the backend has only one workspace, called “default”, and thus there is only one Terraform state associated with that configuration.
Certain backends support multiple named workspaces, allowing multiple states to be associated with a single configuration.

92
Q

What is the command to list the workspaces?

A

terraform workspace list

93
Q

What is the command to create a new workspace?

A

terraform workspace new

94
Q

What is the command to show the current workspace?

A

terraform workspace show

95
Q

What is the command to switch the workspace?

A

terraform workspace select

96
Q

What is the command to delete the workspace?

A

terraform workspace delete

97
Q

Can you delete the default workspace?

A

No. You can never delete the default workspace.

98
Q

You are working on the different workspaces and you want to use a different number of instances based on the workspace. How do you achieve that?

A

resource "aws_instance" "example" {
  count = "${terraform.workspace == "default" ? 5 : 1}"

  # ... other arguments
}
99
Q

You are working on the different workspaces and you want to use tags based on the workspace. How do you achieve that?

A

resource "aws_instance" "example" {
  tags = {
    Name = "web - ${terraform.workspace}"
  }

  # ... other arguments
}
100
Q

You want to create a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure. How do you achieve that?

A

Workspaces
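A sketch of the workflow (the workspace name is illustrative):
terraform workspace new test-changes   # create and switch to a new workspace
terraform apply                        # provisions a distinct copy of the infrastructure
terraform workspace select default     # switch back when done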

101
Q

What is the state command used for?

A

The terraform state command is used for advanced state management. As your Terraform usage becomes more advanced, there are some cases where you may need to modify the Terraform state. Rather than modify the state directly, the terraform state commands can be used in many cases instead.
https://www.terraform.io/docs/commands/state/index.html

102
Q

You are working on terraform files and you want to list all the resources. What is the command you should use?

A

terraform state list

103
Q

How do you list the resources for the given name?

A

terraform state list <address> (for example, terraform state list aws_instance.bar lists only the resources matching that address)

104
Q

What is the command that shows the attributes of a single resource in the state file?

A

terraform state show <resource address>

105
Q

How do you do debugging with terraform?

A

Terraform has detailed logs which can be enabled by setting the TF_LOG environment variable to any value.
This will cause detailed logs to appear on stderr.
You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or ERROR to change the verbosity of the logs. TRACE is the most verbose and it is the default if TF_LOG is set to something other than a log level name.
To persist logged output you can set TF_LOG_PATH in order to force the log to always be appended to a specific file when logging is enabled.
Note that even when TF_LOG_PATH is set, TF_LOG must be set in order for any logging to be enabled.
https://www.terraform.io/docs/internals/debugging.html
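For example, in a shell session (the log path is illustrative):
export TF_LOG=TRACE
export TF_LOG_PATH=./terraform.log
terraform apply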

106
Q

If terraform crashes where should you see the logs?

A

crash.log
If Terraform ever crashes (a “panic” in the Go runtime), it saves a log file with the debug logs from the session as well as the panic message and backtrace to crash.log.
https://www.terraform.io/docs/internals/debugging.html

107
Q

What is the first thing you should do when the terraform crashes?

A

panic message
The most interesting part of a crash log is the panic message itself and the backtrace immediately following. So the first thing to do is to search the file for panic
https://www.terraform.io/docs/internals/debugging.html

108
Q

You are building infrastructure for different environments, for example test and dev. How do you maintain separate states?

A

There are two primary methods to separate state between environments:
directories
workspaces

109
Q

What is the difference between directory-separated and workspace-separated environments?

A

Directory-separated environments rely on duplicate Terraform code, which may be useful if your deployments need to differ, for example to test infrastructure changes in development. But they can run the risk of creating drift between the environments over time.
Workspace-separated environments use the same Terraform code but have different state files, which is useful if you want your environments to stay as similar to each other as possible, for example if you are providing development infrastructure to a team that wants to simulate running in production.

110
Q

What is the command to pull the remote state?

A

terraform state pull
This command will download the state from its current location and output the raw format to stdout.
https://www.terraform.io/docs/commands/state/pull.html

111
Q

What command is used to manually upload a local state file to remote state?

A

terraform state push
The terraform state push command is used to manually upload a local state file to remote state. This command also works with local state.
https://www.terraform.io/docs/commands/state/push.html

112
Q

The command terraform taint modifies the state file and doesn’t modify the infrastructure. Is this true?

A

True
This command will not modify infrastructure, but does modify the state file in order to mark a resource as tainted. Once a resource is marked as tainted, the next plan will show that the resource will be destroyed and recreated and the next apply will implement this change.

113
Q

Your team has decided to use terraform in your company and you have existing infrastructure. How do you migrate your existing resources to terraform and start using it?

A

You should use terraform import, describe the imported resources in your Terraform files, and then follow the Terraform workflow (init, plan, apply).

114
Q

When you are working with the workspaces how do you access the current workspace in the configuration files?

A

${terraform.workspace}

115
Q

When you are using workspaces where does the Terraform save the state file for the local state?

A

terraform.tfstate.d

For local state, Terraform stores the workspace states in a directory called terraform.tfstate.d.

116
Q

When you are using workspaces where does the Terraform save the state file for the remote state?

A

For remote state, the workspaces are stored directly in the configured backend.

117
Q

How do you remove items from the Terraform state?

A

terraform state rm 'packet_device.worker'
The terraform state rm command is used to remove items from the Terraform state. This command can remove single resources, single instances of a resource, entire modules, and more.
https://www.terraform.io/docs/commands/state/rm.html

118
Q

How do you move the state from one source to another?

A

terraform state mv 'module.app' 'module.parent.module.app'
The terraform state mv command is used to move items in a Terraform state. This command can move single resources, single instances of a resource, entire modules, and more. This command can also move items to a completely different state file, enabling efficient refactoring.
https://www.terraform.io/docs/commands/state/mv.html

119
Q

How do you rename a resource in the terraform state file?

A

terraform state mv 'packet_device.worker' 'packet_device.helper'
The above example renames the packet_device resource named worker to helper.

120
Q

Where do you find and explore terraform Modules?

A
The Terraform Registry makes it simple to find and use modules.
The search query will look at module name, provider, and description to match your search terms. On the results page, filters can be used to further refine search results.
121
Q

How do you make sure that modules have stability and compatibility?

A

By default, only verified modules are shown in search results.
Verified modules are reviewed by HashiCorp to ensure stability and compatibility.
By using the filters, you can view unverified modules as well.

122
Q

How do you download any modules?

A
You need to add any module in the configuration file like below
module "consul" {
  source = "hashicorp/consul/aws"
  version = "0.1.0"
}
terraform init command will download and cache any modules referenced by a configuration.
123
Q

What is the syntax for referencing a registry module?

A
<NAMESPACE>/<NAME>/<PROVIDER>
// for example
module "consul" {
  source = "hashicorp/consul/aws"
  version = "0.1.0"
}
124
Q

What is the syntax for referencing a private registry module?

A
<HOSTNAME>/<NAMESPACE>/<NAME>/<PROVIDER>
// for example
module "vpc" {
  source = "app.terraform.io/example_corp/vpc/aws"
  version = "0.9.3"
}
125
Q

Terraform recommends that all modules follow semantic versioning. Is this true?

A

True

126
Q

What is a Terraform Module?

A

A Terraform module is a set of Terraform configuration files in a single directory. Even a simple configuration consisting of a single directory with one or more .tf files is a module.

127
Q

Why do we use modules?

A
  • Organize configuration
  • Encapsulate configuration
  • Re-use configuration
  • Provide consistency and ensure best practices
    https://learn.hashicorp.com/terraform/modules/modules-overview
128
Q

How do you call modules in your configuration?

A
Your configuration can use module blocks to call modules in other directories. 
When Terraform encounters a module block, it loads and processes that module's configuration files.
129
Q

In how many ways can you load modules?

A

Local and remote modules
Modules can either be loaded from the local filesystem, or a remote source.
Terraform supports a variety of remote sources, including the Terraform Registry, most version control systems, HTTP URLs, and Terraform Cloud or Terraform Enterprise private module registries.

130
Q

What are the best practices for using Modules?

A
  1. Start writing your configuration with modules in mind. Even for modestly complex Terraform configurations managed by a single person, you’ll find the benefits of using modules outweigh the time it takes to use them properly.
  2. Use local modules to organize and encapsulate your code. Even if you aren't using or publishing remote modules, organizing your configuration in terms of modules from the beginning will significantly reduce the burden of maintaining and updating your configuration as your infrastructure grows in complexity.
  3. Use the public Terraform Registry to find useful modules. This way you can more quickly and confidently implement your configuration by relying on the work of others to implement common infrastructure scenarios.
  4. Publish and share modules with your team. Most infrastructure is managed by a team of people, and modules are important way that teams can work together to create and maintain infrastructure. As mentioned earlier, you can publish modules either publicly or privately. We will see how to do this in a future guide in this series.
    https://learn.hashicorp.com/terraform/modules/modules-overview#module-best-practices
131
Q

What are the different source types for calling modules?

A
Local paths
Terraform Registry
GitHub
Generic Git, Mercurial repositories
Bitbucket
HTTP URLs
S3 buckets
GCS buckets
132
Q

What are the arguments you need for using modules in your configuration?

A
source and version
// example
module "consul" {
  source = "hashicorp/consul/aws"
  version = "0.1.0"
}
133
Q

How do you set input variables for the modules?

A
The configuration that calls a module is responsible for setting its input values, which are passed as arguments in the module block. Aside from source and version, most of the arguments to a module block will set variable values.
On the Terraform registry page for the AWS VPC module, you will see an Inputs tab that describes all of the input variables that module supports.
134
Q

How do you access output variables from the modules?

A

You can access them by referring to:

module.<MODULE NAME>.<OUTPUT NAME>

135
Q

Where do you put output variables in the configuration?

A

Module outputs are usually either passed to other parts of your configuration, or defined as outputs in your root module. You will see both uses in this guide.
Inside your configuration's directory, outputs.tf will need to contain:
output "vpc_public_subnets" {
  description = "IDs of the VPC's public subnets"
  value       = module.vpc.public_subnets
}

output "ec2_instance_public_ips" {
  description = "Public IP addresses of EC2 instances"
  value       = module.ec2_instances.public_ip
}

136
Q

How do you pass input variables in the configuration?

A
You can define variables.tf in the root folder
variable "vpc_name" {
  description = "Name of VPC"
  type        = string
  default     = "example-vpc"
}

Then you can access these variables in the configuration like this:
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.21.0"

  name = var.vpc_name
  cidr = var.vpc_cidr

  azs             = var.vpc_azs
  private_subnets = var.vpc_private_subnets
  public_subnets  = var.vpc_public_subnets

  enable_nat_gateway = var.vpc_enable_nat_gateway

  tags = var.vpc_tags
}

137
Q

What is the child module?

A

A module that is called by another configuration is sometimes referred to as a “child module” of that configuration.

138
Q

When you use local modules, you don't have to run the command init or get every time there is a change in the local module. Why?

A

When installing a local module, Terraform will instead refer directly to the source directory.
Because of this, Terraform will automatically notice changes to local modules without having to re-run terraform init or terraform get.

139
Q

When you use remote modules what should you do if there is a change in the module?

A

When installing a remote module, Terraform will download it into the .terraform directory in your configuration’s root directory.
You should initialize with terraform init

140
Q

A simple configuration consisting of a single directory with one or more .tf files is a module. Is this true?

A

True
141
Q

When using a new module for the first time, you must run either terraform init or terraform get to install the module. Is this true?

A

True
142
Q

When installing modules, where does Terraform save them?

A
.terraform/modules
// Example
.terraform/modules
├── ec2_instances
│   └── terraform-aws-modules-terraform-aws-ec2-instance-ed6dcd9
├── modules.json
└── vpc
    └── terraform-aws-modules-terraform-aws-vpc-2417f60
143
Q

What is the required argument for the module?

A
source
All modules require a source argument, which is a meta-argument defined by Terraform CLI. Its value is either the path to a local directory of the module's configuration files, or a remote module source that Terraform should download and use. This value must be a literal string with no template sequences; arbitrary expressions are not allowed. For more information on possible values for this argument, see Module Sources.
144
Q

What are the other optional meta-arguments, along with source, when defining modules?

A
version - (Optional) A version constraint string that specifies which versions of the referenced module are acceptable. The newest version matching the constraint will be used. version is supported only for modules retrieved from module registries.
providers - (Optional) A map whose keys are provider configuration names that are expected by child module and whose values are corresponding provider names in the calling module. This allows provider configurations to be passed explicitly to child modules. If not specified, the child module inherits all of the default (un-aliased) provider configurations from the calling module.
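A minimal sketch combining both meta-arguments (the module source, version, and provider alias are assumptions):
module "consul" {
  source  = "hashicorp/consul/aws"
  version = "0.1.0"
  providers = {
    aws = aws.west
  }
}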
145
Q

What is the Core Terraform workflow?

A

The core Terraform workflow has three steps:

  1. Write - Author infrastructure as code.
  2. Plan - Preview changes before applying.
  3. Apply - Provision reproducible infrastructure.
146
Q

What is the command init?

A

The terraform init command is used to initialize a working directory containing Terraform configuration files.
This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control.
It is safe to run this command multiple times.

147
Q

You recently joined a team and you cloned a terraform configuration files from the version control system. What is the first command you should use?

A

terraform init
This command performs several different initialization steps in order to prepare a working directory for use.
This command is always safe to run multiple times, to bring the working directory up to date with changes in the configuration.
Though subsequent runs may give errors, this command will never delete your existing configuration or state.
If no arguments are given, the configuration in the current working directory is initialized. It is recommended to run Terraform with the current working directory set to the root directory of the configuration, and omit the DIR argument.
https://www.terraform.io/docs/commands/init.html

148
Q

What is the flag you should use to upgrade modules and plugins as part of their respective installation steps?

A

upgrade

terraform init -upgrade

149
Q

When you are doing initialization with terraform init, you want to skip backend initialization. What should you do?

A

terraform init -backend=false

150
Q

When you are doing initialization with terraform init, you want to skip child module installation. What should you do?

A

terraform init -get=false

151
Q

When you are doing initialization, where are all the plugins stored?

A

On most operating systems : ~/.terraform.d/plugins

on Windows : %APPDATA%\terraform.d\plugins

152
Q

When you are doing initialization with terraform init, you want to skip plugin installation. What should you do?

A

terraform init -get-plugins=false

153
Q

What does the command terraform validate do?

A

The terraform validate command validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc.
Validate runs checks that verify whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state.
It is thus primarily useful for general verification of reusable modules, including correctness of attribute names and value types.
https://www.terraform.io/docs/commands/validate.html

154
Q

What does the command plan do?

A

The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.

155
Q

What does the command apply do?

A

The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.
https://www.terraform.io/docs/commands/apply.html

156
Q

You are applying the infrastructure with the command apply and you don't want interactive approval. Which flag should you use?

A

terraform apply -auto-approve

https://www.terraform.io/docs/commands/apply.html

157
Q

What does the command destroy do?

A

The terraform destroy command is used to destroy the Terraform-managed infrastructure.

158
Q

How do you preview the behavior of the command terraform destroy?

A

terraform plan -destroy

159
Q

What are implicit and explicit dependencies?

A

Implicit dependency:
By studying the resource attributes used in interpolation expressions, Terraform can automatically infer when one resource depends on another.
Terraform uses this dependency information to determine the correct order in which to create the different resources.
Implicit dependencies via interpolation expressions are the primary way to inform Terraform about these relationships, and should be used whenever possible.
Explicit dependency:
Sometimes there are dependencies between resources that are not visible to Terraform. The depends_on argument is accepted by any resource and accepts a list of resources to create explicit dependencies for.

160
Q

Give an example of implicit dependency?

A
In the example below, the reference to aws_instance.example.id creates an implicit dependency on the aws_instance named example.
provider "aws" {
  profile    = "default"
  region     = "us-east-1"
}
resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"
}
resource "aws_eip" "ip" {
    vpc = true
    instance = aws_instance.example.id
}
161
Q

Give an example of explicit dependency?

A
In the example below, an application we will run on our EC2 instance expects to use a specific Amazon S3 bucket, but that dependency is configured inside the application code and thus not visible to Terraform. In that case, we can use depends_on to explicitly declare the dependency
resource "aws_s3_bucket" "example" {
  bucket = "some_bucket"
  acl    = "private"
}
resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"

depends_on = [aws_s3_bucket.example]
}

162
Q

How do you save the execution plan?

A

terraform plan -out=tfplan
you can use that file with apply
terraform apply tfplan

163
Q

You have started writing terraform configuration and you are using some sample configuration as a basis. How do you copy the example configuration into your working directory?

A

terraform init -from-module=MODULE-SOURCE

https://www.terraform.io/docs/commands/init.html#copy-a-source-module

164
Q

What is the flag you should use with the terraform plan to get detailed on the exit codes?

A

terraform plan -detailed-exitcode
Return a detailed exit code when the command exits. When provided, this argument changes the exit codes and their meanings to provide more granular information about what the resulting plan contains:
* 0 = Succeeded with empty diff (no changes)
* 1 = Error
* 2 = Succeeded with non-empty diff (changes present)

165
Q

How do you target only specific resources when you run a terraform plan?

A

-target=resource - A Resource Address to target. This flag can be used multiple times. See below for more information.

166
Q

How do you update the state prior to checking differences when you run a terraform plan?

A

terraform plan -refresh=true

167
Q

The behavior of any terraform destroy command can be previewed at any time with an equivalent terraform plan -destroy command. Is this true?

A

True
168
Q

You have the following file and created two resources, docker_image and docker_container, with the command terraform apply. You go to the terminal and delete the container with the command docker rm. You come back to your configuration and run the command again. Does Terraform recreate the resource?

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
    image = docker_image.nginx.latest
    name = "nginxtutorial"
    ports {
        internal = 80
        external = 8080
    }
    upload {
        source = "${abspath(path.root)}/files/index.html"
        file = "/usr/share/nginx/html/index.html"
    }
}
A

Yes. Terraform creates the resource again, since the configuration declares two resources and Terraform always converges on the desired state.

169
Q

You created a VM instance on AWS cloud provider with the terraform configuration and you log in AWS console and removed the instance. What does the next apply do?

A

It creates the instance again

170
Q
You have the following file and created two resources, docker_image and docker_container, with the command terraform apply. You go to the terminal and delete the container with the command docker rm. You come back to your configuration and run terraform plan again. What is the output of the plan command?
resource "docker_image" "nginx" {
    name = "nginx:latest"
    keep_locally = false
}
resource "docker_container" "nginx" {
    image = docker_image.nginx.latest
    name = "nginxtutorial"
    ports {
        internal = 80
        external = 8080
    }
    upload {
        source = "${abspath(path.root)}/files/index.html"
        file = "/usr/share/nginx/html/index.html"
    }
}
A
Terraform will perform the following actions:
# docker_container.nginx will be created
Plan: 1 to add, 0 to change, 0 to destroy.
171
Q

What are Backends?

A

A “backend” in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.
By default, Terraform uses the “local” backend, which is the normal behavior of Terraform

172
Q

What is local Backend?

A
The local backend stores state on the local filesystem, locks that state using system APIs, and performs operations locally.
// Example
terraform {
  backend "local" {
    path = "relative/path/to/terraform.tfstate"
  }
}
173
Q

What is the default path for the local backend?

A

The path defaults to "terraform.tfstate" relative to the root module.

174
Q

What is State Locking?

A

If supported by your backend, Terraform will lock your state for all operations that could write state. This prevents others from acquiring the lock and potentially corrupting your state.
State locking happens automatically on all operations that could write state. You won’t see any message that it is happening. If state locking fails, Terraform will not continue.

175
Q

Does Terraform continue if state locking fails?

A

No.

If state locking fails, Terraform will not continue.

176
Q

Can you disable state locking?

A

Yes.

You can disable state locking for most commands with the -lock=false flag, but it is not recommended.
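For example:
terraform apply -lock=false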

177
Q

What are the types of Backend?

A

Standard: State management, functionality covered in State Storage & Locking
Enhanced: Everything in standard plus remote operations.

178
Q

What are remote Backends?

A

Remote backends allow Terraform to use a shared storage space for state data, so any member of your team can use Terraform to manage the same infrastructure.

179
Q

What is the benefit of using remote backend?

A

Remote state storage makes collaboration easier and keeps state and secret information off your local disk.
Remote state is loaded only in memory when it is used.
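A minimal sketch of a remote backend configuration, assuming an S3 bucket and a DynamoDB table that already exist (the bucket, key, and table names are hypothetical):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # hypothetical bucket holding the shared state
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"         # hypothetical table used for state locking
    encrypt        = true
  }
}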

180
Q

If you want to switch from using remote backend to local backend. What should you do?

A

If you want to move back to local state, you can remove the backend configuration block from your configuration and run terraform init again.
Terraform will once again ask if you want to migrate your state back to local.

181
Q

What does the command refresh do?

A

The terraform refresh command is used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure.
This can be used to detect any drift from the last-known state, and to update the state file.

182
Q

Does the command refresh modify the infrastructure?

A

The command refresh does not modify infrastructure, but does modify the state file.
If the state is changed, this may cause changes to occur during the next plan or apply.

183
Q

How do you backup the state to the remote backend?

A
  1. When configuring a backend for the first time (moving from no defined backend to explicitly configuring one), Terraform will give you the option to migrate your state to the new backend. This lets you adopt backends without losing any existing state.
  2. To be extra careful, we always recommend manually backing up your state as well. You can do this by simply copying your terraform.tfstate file to another location.
184
Q

What is a partial configuration in terms of configuring Backends?

A

You do not need to specify every required argument in the backend configuration. Omitting certain arguments may be desirable to avoid storing secrets, such as access keys, within the main configuration. When some or all of the arguments are omitted, we call this a partial configuration.

185
Q

What are the ways to provide remaining arguments when using partial configuration?

A

Interactively: Terraform will interactively ask you for the required values, unless interactive input is disabled. Terraform will not prompt for optional values.
File: A configuration file may be specified via the init command line. To specify a file, use the -backend-config=PATH option when running terraform init. If the file contains secrets it may be kept in a secure data store, such as Vault, in which case it must be downloaded to the local disk before running Terraform.
Command-line key/value pairs: Key/value pairs can be specified via the init command line. Note that many shells retain command-line flags in a history file, so this isn’t recommended for secrets. To specify a single key/value pair, use the -backend-config="KEY=VALUE" option when running terraform init.
https://www.terraform.io/docs/backends/config.html
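A minimal sketch of a partial configuration using the file mechanism, assuming a Consul backend (the values match the example in card 187 and are illustrative). The backend block only names the type; the remaining arguments live in a separate file passed at init time:
# main.tf
terraform {
  backend "consul" {}
}

# backend.hcl (kept out of version control)
address = "demo.consul.io"
path    = "example_app/terraform_state"
scheme  = "https"

# initialize with
terraform init -backend-config=backend.hcl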

186
Q

What is the basic requirement when using partial configuration?

A

At a minimum, an empty backend block specifying the backend type must be present in the configuration; the remaining arguments are then supplied through one of the other mechanisms (interactively, a file, or command-line key/value pairs).

187
Q

Give an example of passing partial configuration with Command-line Key/Value pairs?

A

terraform init \
  -backend-config="address=demo.consul.io" \
  -backend-config="path=example_app/terraform_state" \
  -backend-config="scheme=https"
188
Q

How to unconfigure a backend?

A

If you no longer want to use any backend, you can simply remove the configuration from the file. Terraform will detect this like any other change and prompt you to reinitialize.
As part of the reinitialization, Terraform will ask if you’d like to migrate your state back down to normal local state. Once this is complete then Terraform is back to behaving as it does by default.

189
Q

How do you encrypt sensitive data in the state?

A

Terraform Cloud always encrypts state at rest and protects it with TLS in transit. Terraform Cloud also knows the identity of the user requesting state and maintains a history of state changes. This can be used to control access and track activity. Terraform Enterprise also supports detailed audit logging.
The S3 backend supports encryption at rest when the encrypt option is enabled. IAM policies and logging can be used to identify any invalid access. Requests for the state go over a TLS connection.

190
Q

Backends are completely optional. Is this true?

A

Backends are completely optional. You can successfully use Terraform without ever having to learn or use backends. However, they do solve pain points that afflict teams at a certain scale. If you’re an individual, you can likely get away with never using backends.

191
Q

What are the benefits of Backends?

A

Working in a team: Backends can store their state remotely and protect that state with locks to prevent corruption. Some backends such as Terraform Cloud even automatically store a history of all state revisions.
Keeping sensitive information off disk: State is retrieved from backends on demand and only stored in memory. If you’re using a backend such as Amazon S3, the only location the state ever is persisted is in S3.
Remote operations: For larger infrastructures or certain changes, terraform apply can take a long, long time. Some backends support remote operations which enable the operation to execute remotely. You can then turn off your computer and your operation will still complete. Paired with remote state storage and locking above, this also helps in team environments.

192
Q

Why should you be very careful with the Force unlocking the state?

A

Terraform has a force-unlock command to manually unlock the state if unlocking failed.
Be very careful with this command. If you unlock the state when someone else is holding the lock it could cause multiple writers. Force unlock should only be used to unlock your own lock in the situation where automatic unlocking failed.
To protect you, the force-unlock command requires a unique lock ID. Terraform will output this lock ID if unlocking fails. This lock ID acts as a nonce, ensuring that locks and unlocks target the correct lock.

193
Q

You should only use the force-unlock command when automatic unlocking fails. Is this true?

A

Yes. Force unlock should only be used to unlock your own lock in the situation where automatic unlocking failed.

194
Q

How do you define a variable?

A

variable "region" {
  default = "us-east-1"
}
This defines the region variable within your Terraform configuration.

195
Q

How do you access the variable in the configuration?

A
// accessing a variable
provider "aws" {
  region = var.region
}
196
Q

How many ways you can assign variables in the configuration?

A
Command-line flags
terraform apply -var 'region=us-east-1'
From a file
To persist variable values, create a file and assign variables within this file. Create a file named terraform.tfvars with the following contents:
region = "us-east-1"
Terraform automatically loads terraform.tfvars (and *.auto.tfvars) files. Files with other names must be passed explicitly:
terraform apply \
  -var-file="secret.tfvars" \
  -var-file="production.tfvars"
From environment variables
Terraform will read environment variables in the form of TF_VAR_name to find the value for a variable. For example, the TF_VAR_region variable can be set in the shell to set the region variable in Terraform.
UI input
If you execute terraform apply with any variable unspecified, Terraform will ask you to input the values interactively. These values are not saved, but this provides a convenient workflow when getting started with Terraform. UI input is not recommended for everyday use of Terraform.
197
Q

Does environment variables support List and map types?

A

No
Environment variables can only populate string-type variables. List and map type variables must be populated via one of the other mechanisms.
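For example, a string variable can come from the shell while a list must use another mechanism such as a -var flag (the variable names are illustrative):
# string variable via environment
export TF_VAR_region=us-east-1
# list variable via -var instead
terraform apply -var='availability_zone_names=["us-west-1a","us-west-1b"]'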

198
Q

How do you provision infrastructure in a staging environment or a production environment using the same Terraform configuration?

A
You can use different variable files with the same configuration.
// Example
// For development
terraform apply -var-file="dev.tfvars"
// For test
terraform apply -var-file="test.tfvars"
199
Q

How do you assign default values to variables?

A

If no value is assigned to a variable via any of these methods and the variable has a default key in its declaration, that value will be used for the variable.
variable "region" {
  default = "us-east-1"
}

200
Q

What are the data types for the variables?

A
string
number
bool
list()
set()
map()
object({ <ATTR NAME> = <TYPE>, ... })
tuple([<TYPE>, ...])
201
Q

Give an example of data type List variables?

A
Lists are defined either explicitly or implicitly.
variable "availability_zone_names" {
  type    = list(string)
  default = ["us-west-1a"]
}
202
Q

Give an example of data type Map variables?

A
variable "region" {}
variable "amis" {
  type = map(string)
}
amis = {
  "us-east-1" = "ami-abc123"
  "us-west-2" = "ami-def456"
}
// accessing
resource "aws_instance" "example" {
  ami           = var.amis[var.region]
  instance_type = "t2.micro"
}
203
Q

What is the Variable Definition Precedence?

A

The above mechanisms for setting variables can be used together in any combination. If the same variable is assigned multiple values, Terraform uses the last value it finds, overriding any previous values. Note that the same variable cannot be assigned multiple values within a single source.
Terraform loads variables in the following order, with later sources taking precedence over earlier ones:
* Environment variables
* The terraform.tfvars file, if present.
* The terraform.tfvars.json file, if present.
* Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames.
* Any -var and -var-file options on the command line, in the order they are provided. (This includes variables set by a Terraform Cloud workspace.)

204
Q

What are the output variables?

A

Output variables are a way to organize data to be easily queried and shown back to the Terraform user.
Outputs are a way to tell Terraform what data is important. This data is outputted when apply is called, and can be queried using the terraform output command.

205
Q

How do you define an output variable?

A

output "ip" {
  value = aws_eip.ip.public_ip
}
Multiple output blocks can be defined to specify multiple output variables.

206
Q

How do you view outputs and queries them?

A

You will see the output when you run the following command
terraform apply
You can query the output with the following command
terraform output ip

207
Q

What are the dynamic blocks?

A

Some resource types include repeatable nested blocks in their arguments, and these nested blocks do not accept expressions directly.
You can dynamically construct repeatable nested blocks (such as setting) using a special dynamic block type, which is supported inside resource, data, provider, and provisioner blocks.
A dynamic block acts much like a for expression, but produces nested blocks instead of a complex typed value. It iterates over a given complex value, and generates a nested block for each element of that complex value.
https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks

resource "aws_elastic_beanstalk_environment" "tfenvtest" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftest.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.11.4 running Go 1.12.6"

  dynamic "setting" {
    for_each = var.settings
    content {
      namespace = setting.value["namespace"]
      name = setting.value["name"]
      value = setting.value["value"]
    }
  }
}
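The example above iterates over a settings variable; a minimal sketch of what that variable might look like (the namespaces, names, and values are illustrative):
variable "settings" {
  type = list(map(string))
  default = [
    {
      namespace = "aws:autoscaling:asg"
      name      = "MinSize"
      value     = "1"
    },
    {
      namespace = "aws:autoscaling:asg"
      name      = "MaxSize"
      value     = "2"
    },
  ]
}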
208
Q

What are the best practices for dynamic blocks?

A

Overuse of dynamic blocks can make configuration hard to read and maintain, so we recommend using them only when you need to hide details in order to build a clean user interface for a re-usable module.
Always write nested blocks out literally where possible.

209
Q

What are the Built-in Functions?

A

The Terraform language includes a number of built-in functions that you can call from within expressions to transform and combine values.
max(5, 12, 9)

210
Q

Does Terraform language support user-defined functions?

A

No
The Terraform language does not support user-defined functions, and so only the functions built in to the language are available for use.

211
Q

What is the built-in function to change string to a number?

A

parseint parses the given string as a representation of an integer in the specified base and returns the resulting number. The base must be between 2 and 62 inclusive.
> parseint("100", 10)
100
More Number Functions here
https://www.terraform.io/docs/configuration/functions/abs.html

212
Q

What is the built-in function that evaluates a given expression and returns a boolean indicating whether the expression produced a result without any errors?

A

can
condition = can(formatdate("", var.timestamp))
https://www.terraform.io/docs/configuration/functions/can.html

213
Q

What is the built-in function that evaluates all of its argument expressions in turn and returns the result of the first one that does not produce any errors?

A
try
locals {
  example = try(
    [tostring(var.example)],
    tolist(var.example),
  )
}
214
Q

What is Resource Address?

A
A Resource Address is a string that references a specific resource in a larger infrastructure. An address is made up of two parts:
[module path][resource spec]
215
Q

What is the Module path?

A
A module path addresses a module within the tree of modules. It takes the form:
module.A.module.B.module.C...
Multiple modules in a path indicate nesting. If a module path is specified without a resource spec, the address applies to every resource within the module. If the module path is omitted, this addresses the root module.
216
Q

What is the Resource spec?

A

A resource spec addresses a specific resource in the config. It takes the form:
resource_type.resource_name[resource index]
* resource_type - Type of the resource being addressed.
* resource_name - User-defined name of the resource.
* [resource index] - an optional index into a resource with multiple instances, surrounded by square brace characters ([ and ]).
// Examples
resource "aws_instance" "web" {
  # ...
  count = 4
}
aws_instance.web[3] // Refers to only the last instance
aws_instance.web // Refers to all four "web" instances
resource "aws_instance" "web" {
  # ...
  for_each = {
    "terraform": "value1",
    "resource": "value2",
    "indexing": "value3",
    "example": "value4",
  }
}
aws_instance.web["example"] // Refers to only the "example" instance in the config.

217
Q

What are complex types and what are the collection types Terraform supports?

A
A complex type is a type that groups multiple values into a single value.
There are two categories of complex types: 
collection types (for grouping similar values)
* list(...): a sequence of values identified by consecutive whole numbers starting with zero.
* map(...): a collection of values where each is identified by a string label.
* set(...): a collection of unique values that do not have any secondary identifiers or ordering.
structural types (for grouping potentially dissimilar values).
* object(...): a collection of named attributes that each have their own type.
* tuple(...): a sequence of elements identified by consecutive whole numbers starting with zero, where each element has its own type.
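A short sketch of declaring structural-type variables (the variable names, attributes, and defaults are illustrative):
variable "instance" {
  type = object({
    name  = string
    size  = string
    count = number
  })
  default = {
    name  = "web"
    size  = "t2.micro"
    count = 2
  }
}

variable "endpoint" {
  type    = tuple([string, number, bool])
  default = ["api.example.com", 443, true]
}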
218
Q

What are the named values available and how do we refer to?

A

Terraform makes several kinds of named values available. Each of these names is an expression that references the associated value; you can use them as standalone expressions, or combine them with other expressions to compute new values.

  • <RESOURCE TYPE>.<NAME> is an object representing a managed resource of the given type and name. The attributes of the resource can be accessed using dot or square bracket notation.
  • var.<NAME> is the value of the input variable of the given name.
  • local.<NAME> is the value of the local value of the given name.
  • module.<MODULE NAME>.<OUTPUT NAME> is the value of the specified output value from a child module called by the current module.
  • data.<DATA TYPE>.<NAME> is an object representing a data resource of the given data source type and name. If the resource has the count argument set, the value is a list of objects representing its instances. If the resource has the for_each argument set, the value is a map of objects representing its instances.
  • path.module is the filesystem path of the module where the expression is placed.
  • path.root is the filesystem path of the root module of the configuration.
  • path.cwd is the filesystem path of the current working directory. In normal use of Terraform this is the same as path.root, but some advanced uses of Terraform run it from a directory other than the root module directory, causing these paths to be different.
  • terraform.workspace is the name of the currently selected workspace.
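A small sketch combining several of these named values; the resource, variable, local, module output, and script path are all illustrative, assuming they are declared elsewhere in the configuration:
resource "aws_instance" "app" {
  ami           = var.ami_id                                   # input variable
  instance_type = local.instance_type                          # local value
  user_data     = file("${path.module}/scripts/bootstrap.sh")  # path of this module
  tags = {
    Workspace = terraform.workspace        # currently selected workspace
    Subnet    = module.network.subnet_id   # output from a child module
  }
}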
219
Q

What is the built-in function that reads the contents of a file at the given path and returns them as a base64-encoded string?

A

filebase64(path)

https://www.terraform.io/docs/configuration/functions/filebase64.html

220
Q

What is the built-in function that converts a timestamp into a different time format?

A

formatdate(spec, timestamp)

https://www.terraform.io/docs/configuration/functions/formatdate.html
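For example, with an illustrative timestamp:
> formatdate("DD MMM YYYY hh:mm ZZZ", "2018-01-02T23:12:01Z")
02 Jan 2018 23:12 UTC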

221
Q

What is the built-in function that encodes a given value to a string using JSON syntax?

A

jsonencode({"hello"="world"})

https://www.terraform.io/docs/configuration/functions/jsonencode.html

222
Q

What is the built-in function that calculates a full host IP address for a given host number within a given IP network address prefix?

A

cidrhost(prefix, hostnum)

> cidrhost("10.12.127.0/20", 16)
10.12.112.16

https://www.terraform.io/docs/configuration/functions/cidrhost.html

223
Q

What is Sentinel?

A

Sentinel is an embedded policy-as-code framework integrated with the HashiCorp Enterprise products. It enables fine-grained, logic-based policy decisions, and can be extended to use information from external sources.

224
Q

What is the benefit of Sentinel?

A

Codifying policy removes the need for ticketing queues, without sacrificing enforcement.
One of the other benefits of Sentinel is that it also has a full testing framework.
Avoiding a ticketing workflow allows organizations to provide more self-service capabilities and end-to-end automation, minimizing the friction for developers and operators.
https://www.hashicorp.com/blog/why-policy-as-code/

225
Q

What is the Private Module Registry?

A

Terraform Cloud’s private module registry helps you share Terraform modules across your organization. It includes support for module versioning, a searchable and filterable list of available modules, and a configuration designer to help you build new workspaces faster.

226
Q

What is the difference between public and private module registries when defined source?

A
The public registry uses a three-part <NAMESPACE>/<NAME>/<PROVIDER> format
private modules use a four-part <HOSTNAME>/<ORGANIZATION>/<NAME>/<PROVIDER> format
// example
module "vpc" {
  source  = "app.terraform.io/example_corp/vpc/aws"
  version = "1.0.4"
}
227
Q

Where is the Terraform Module Registry available at?

A

https://registry.terraform.io/

228
Q

What is a workspace?

A

A workspace contains everything Terraform needs to manage a given collection of infrastructure, and separate workspaces function like completely separate working directories.

229
Q

You are configuring a remote backend in Terraform Cloud. You didn’t create an organization before you run terraform init. Does it work?

A

No. The organization defined in the backend stanza must already exist; Terraform will not create it for you, so terraform init fails until the organization has been created in Terraform Cloud.

230
Q

You are configuring a remote backend in the terraform cloud. You didn’t create a workspace before you do terraform init. Does it work?

A

Yes, it works. Terraform Cloud will create the workspace if necessary. If you opt to use a workspace that already exists, the workspace must not have any existing states.

231
Q

Terraform workspaces behave differently when you are working with the CLI versus Terraform Cloud. Is this correct?

A

If you are familiar with running Terraform using the CLI, you may have used Terraform workspaces. Terraform Cloud workspaces behave differently than Terraform CLI workspaces. Terraform CLI workspaces allow multiple state files to exist within a single directory, enabling you to use one configuration for multiple environments. Terraform Cloud workspaces contain everything needed to manage a given set of infrastructure, and function like separate working directories.
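A quick sketch of the CLI workspace commands referred to here (the workspace names are illustrative):
terraform workspace list              # show available workspaces
terraform workspace new dev           # create and switch to a "dev" workspace
terraform workspace select default    # switch back to the default workspace
terraform workspace show              # print the current workspace name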

232
Q

How do you authenticate the CLI with the terraform cloud?

A

Newer Versions:
1. terraform login
2. it will open the terraform cloud and generate the token
3. paste that token back in the CLI
https://learn.hashicorp.com/terraform/tfc/tfc_login
Older versions:
keep the following token in the CLI configuration file
credentials "app.terraform.io" {
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
}
https://www.terraform.io/docs/commands/cli-config.html#credentials

233
Q

You are building infrastructure on your local machine and you changed your backend to a remote backend with Terraform Cloud. What should you do to migrate the state to the remote backend?

A

terraform init
Once you have authenticated the remote backend, you’re ready to migrate your local state file to Terraform Cloud. To begin the migration, reinitialize. This causes Terraform to recognize your changed backend configuration.
During reinitialization, Terraform presents a prompt saying that it will copy the state file to the new backend. Enter “yes” and Terraform will migrate the state from your local machine to Terraform Cloud.
https://learn.hashicorp.com/terraform/tfc/tfc_migration#migrate-the-state-file

234
Q

How do you configure remote backend with the terraform cloud?

A
You need to configure it in the terraform block
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "<ORGANIZATION NAME>"
    workspaces {
      name = "state-migration"
    }
  }
}
235
Q

What is Run Triggers?

A

Terraform Cloud’s run triggers allow you to link workspaces so that a successful apply in a source workspace will queue a run in the workspace linked to it with a run trigger.
For example, adding new subnets to your network configuration could trigger an update to your application configuration to rebalance servers across the new subnets.

236
Q

What is the benefit of Run Triggers?

A

When managing complex infrastructure with Terraform Cloud, organizing your configuration into different workspaces helps you to better manage and design your infrastructure.
Configuring run triggers between workspaces allows you to set up infrastructure pipelines as part of your overall deployment strategy.

237
Q

What are the available permissions that terraform clouds can have?

A

Terraform Cloud teams can have read, plan, write, or admin permissions on individual workspaces.

238
Q

Who can grant permissions on the workspaces?

A

Organization owners grant permissions by grouping users into teams and giving those teams privileges based on their need for access to individual workspaces.

239
Q

Which plan do you need to manage teams on Terraform Cloud?

A

The Terraform Cloud Team plan (a paid upgrade). The free plan only includes the default owners team, so creating and managing additional teams requires the Team plan or higher.

240
Q

How can you add users to an organization?

A

You can add users to an organization by inviting them using their email address.
Even if your team member has not signed up for Terraform Cloud yet, they can still accept the invitation and create a new account.

241
Q

The Terraform Cloud Team plan charges you on a per-user basis. Is this true?

A

Yes. The Terraform Cloud Team plan is charged on a per-user basis so adding new users to your organization incurs cost.

242
Q

How can you see all the versions of the installed plugins along with Terraform versions? (Hint there are 3 ways to attack this problem with slightly different results)

A

Method 1: terraform providers (lists providers)
This shows the providers Terraform is using along with their versions and the source (like hashicorp/aws).

Example output:
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] 4.67.0
└── provider[registry.terraform.io/hashicorp/random] 3.5.1

Method 2: To check your Terraform CLI version, run:
terraform version
Example output:
Terraform v1.6.3
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.67.0
+ provider registry.terraform.io/hashicorp/random v3.5.1
This also shows the main Terraform version and the versions of installed providers used in your working directory.

Method 3:
If you want to explicitly check what’s in your .terraform directory (where plugins are downloaded), you can manually inspect:
ls .terraform/providers/

But for the exam, focus on using:
terraform version
terraform providers

243
Q

Why might you need to specify the provider argument in Terraform?

A

You may need to specify the provider argument in a resource block when:

You use multiple providers of the same type (e.g., multiple AWS accounts or regions).
You alias a provider for different configurations.

🔧 Example: Multiple AWS Providers
Let’s say you’re managing resources in two AWS regions. You’d create two provider blocks with different aliases:
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

Now, you can use the default provider for one resource:

resource "aws_instance" "east_server" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

And the aliased provider for another:

resource "aws_instance" "west_server" {
  provider      = aws.west
  ami           = "ami-87654321"
  instance_type = "t2.micro"
}

244
Q

Write a main.tf that includes an aliased AWS provider. This is especially useful when:
You want to use multiple AWS accounts or regions.
You want to keep provider configurations organized.

A

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.3.0"
}

# AWS provider with alias
provider "aws" {
  alias   = "west"
  region  = "us-west-2"
  profile = "default"
}

# Google Cloud provider
provider "google" {
  project     = "my-gcp-project-id"
  region      = "us-central1"
  credentials = file("gcp-service-account.json")
}

245
Q

Describe the root module with alternate provider configurations:

A

provider "aws" {
  alias  = "usw1"
  region = "us-west-1"
}

provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

module "tunnel" {
  source = "./tunnel"
  providers = {
    aws.src = aws.usw1
    aws.dst = aws.usw2
  }
}

246
Q

To reduce the risk of unexpected infrastructure changes when provider APIs or functionality change, it’s essential to implement some Terraform best practices.

A

Pin provider versions: lock to a specific version (or version range) in your terraform block to avoid auto-updating to incompatible versions.
Example:
required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 5.32.0"
  }
}

247
Q

You added a kubernetes_deployment resource for workloads on your EKS cluster and Terraform fails with the error “Failed to instantiate provider”. What is the likely cause and how do you fix it?

A

The error “Failed to instantiate provider” after adding a kubernetes_deployment resource in Terraform usually points to missing or misconfigured provider setup for the EKS-Kubernetes provider.

🚨 Likely Reasons for the Error
You added a kubernetes_deployment resource, but you did not configure the kubernetes provider in your Terraform code.

You didn’t download the provider using the init command for the kubernetes provider.

Terraform doesn’t know how to talk to your EKS cluster unless you explicitly configure the Kubernetes provider with connection details (usually derived from the EKS cluster or kubeconfig).

✅ How to Fix It
You need to:

Configure the kubernetes provider

Pass in the correct host, token, and cluster_ca_certificate (or use kubeconfig)

Option A: Configure the Kubernetes provider from the EKS cluster:
provider "kubernetes" {
  host                   = aws_eks_cluster.my_cluster.endpoint
  token                  = data.aws_eks_cluster_auth.my_cluster.token
  cluster_ca_certificate = base64decode(aws_eks_cluster.my_cluster.certificate_authority[0].data)
}
You’ll also need to add:
data "aws_eks_cluster" "my_cluster" {
  name = aws_eks_cluster.my_cluster.name
}

data "aws_eks_cluster_auth" "my_cluster" {
  name = aws_eks_cluster.my_cluster.name
}

Make sure your EKS cluster resource exists:
resource "aws_eks_cluster" "my_cluster" {
  # your cluster config here
}
Option B: Use a kubeconfig file (not ideal in automation):
provider "kubernetes" {
  config_path = "~/.kube/config"
}

248
Q

Where do we set the version of the provider that is required by a terraform configuration?

A

✅ In the required_providers block inside the terraform block

✅ Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.20"
    }
  }

  required_version = ">= 1.3.0"
}
💡 Why This Is Correct
The required_providers block declares the source and version of each provider your configuration depends on.

This ensures Terraform uses a specific version (or range), preventing breaking changes from newer versions.

249
Q

The key feature that allows Terraform to use a declarative approach is:

A

✅ The use of a Desired State Configuration model with its declarative language (HashiCorp Configuration Language - HCL)

🔍 Explanation
Terraform is declarative, meaning you describe what infrastructure you want, not how to create it step by step.

For example:

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"
}
You simply declare what you want, and Terraform figures out:

What’s already deployed (terraform state)

What needs to change (terraform plan)

How to make your current state match the desired one (terraform apply)

250
Q

What is the first command to use after downloading new code?

A

terraform init

251
Q

To run a terraform apply without a confirmation prompt (i.e., automatically approve the changes), you can use ???

A

terraform apply -auto-approve

What it does:
Skips the interactive prompt that normally asks:
“Do you want to perform these actions?”

Immediately applies the execution plan.

Use cases:
Useful for automation in CI/CD pipelines.

Helpful in scripting or testing environments

252
Q

what is immutable infrastructure?

A

Immutable Infrastructure is an approach to managing infrastructure where components (like servers, VMs, containers) are never modified after they’re deployed. If you need to change something—whether it’s a configuration update, a package change, or a bug fix—you create a new version of the infrastructure component and replace the old one.

🔍 Definition:
Immutable infrastructure is infrastructure that is replaced instead of changed.

Packer – Builds new golden images for VMs.

Docker – Containers are inherently immutable.

Kubernetes – Deploys new Pods rather than changing existing ones.

AWS AMIs / Launch Templates – New AMI per deploy instead of patching instances.

💡 Real-World Example:
Let’s say you need to update a web app running on EC2:

Mutable approach: SSH into the instance and update the software.

Immutable approach: Build a new AMI with the updated software, deploy new EC2 instances with that AMI, and terminate the old ones.

🆚 Immutable vs Mutable Infrastructure
* Changes: mutable infrastructure is changed on the live system; immutable infrastructure is changed by redeploying new systems.
* Drift: possible with mutable infrastructure due to ad hoc updates; with immutable infrastructure there is no drift and environments stay consistent.
* Rollback: mutable may require manual fixes; immutable is easy, just deploy the previous version.
* Complexity: mutable can become messy over time; immutable is cleaner and easier to manage.

253
Q

Where can you use version constraints in terraform?

A

In Terraform, version constraints can be used in three key areas:

🔑 1. Terraform CLI Version
You can restrict the required version of Terraform itself using the required_version block:
terraform {
  required_version = ">= 1.3.0, < 1.6.0"
}
✅ Use this to ensure compatibility with team members or CI/CD pipelines.

🔑 2. Provider Version Constraints
Specify which versions of a provider (like AWS, GCP, etc.) your configuration supports:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
✅ Keeps your code stable and avoids breaking changes from newer provider versions.

🔑 3. Module Version Constraints
When calling Terraform modules, you can set version constraints (for modules from the Registry):
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"
}
✅ Ensures consistent behavior of reused code from public or private modules.

254
Q

What is a version constraint and what is its syntax? How many conditions can it contain?

A

Version constraint syntax
A version constraint is a string literal containing one or more conditions separated by commas.

Each condition consists of an operator and a version number.

Version numbers are a series of numbers separated by periods, for example 1.2.0. It is optional, but you can include a suffix to indicate a beta release. Refer to Specify a pre-release version for additional information.

Use the following syntax to specify version constraints:

version = "<operator> <version>"

In the following example, Terraform installs versions 1.2.0 and newer, as well as versions older than 2.0.0:

version = ">= 1.2.0, < 2.0.0"

255
Q

Why will a terraform apply fail if the variable does not match the validation rule? Provide an example.

A

A terraform apply will fail if a variable’s value does not match the validation rule because Terraform enforces input variable validation before applying infrastructure changes. This ensures that only expected, safe, and consistent values are used, preventing misconfiguration or runtime errors.

🔍 Why it fails:
Terraform checks variable validation rules during plan and apply stages. If a variable’s value violates a custom validation rule, it aborts with an error and does not proceed with deployment.

✅ Example:
variables.tf
variable "environment" {
  type        = string
  description = "The deployment environment"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be one of: dev, staging, or prod."
  }
}
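Assuming the variable above, an apply with a value outside the allowed list fails before any changes are made (the value is illustrative):
terraform apply -var="environment=qa"
# fails with: Error: Invalid value for variable
# Environment must be one of: dev, staging, or prod.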

256
Q

The purpose of an output block in Terraform is to display or expose information about your infrastructure after you run terraform apply. It’s especially useful for:

A

🎯 Key Purposes:
Sharing information between modules.

Debugging or inspection of resources after apply.

Providing key values like IPs, IDs, or URLs for users or other tools.

📦 Example:
Terraform Code (main.tf)
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

output "instance_public_ip" {
  description = "The public IP of the web server"
  value       = aws_instance.web.public_ip
}

module "network" {
  source = "./network"
}

output "subnet_id" {
  value = module.network.subnet_id
}

257
Q

Walk through what happens to output variable values at each stage of the Terraform workflow.

A

  1. terraform init - no effect on output values.
  2. terraform plan - Terraform attempts to evaluate output values based on the current state and configuration. If the output depends on resources that don’t exist yet, you’ll see something like:
~ output "instance_ip" {
    value = (known after apply)
  }
Outputs are not finalized yet because the resources aren’t created.
  3. terraform apply - Terraform creates or modifies resources, then evaluates output values from the actual state.
Outputs are:
Printed to the terminal
Stored in the state file (terraform.tfstate)
Available for other modules (if using terraform output or remote state)
Example:
Outputs:
instance_ip = "3.22.120.98"
  4. terraform output - you can run this command anytime after apply to retrieve output values from the current state.
Useful in CI/CD scripts or just to recall a value:
terraform output instance_ip
You can also export as JSON for scripting:
terraform output -json

258
Q

Inspect the below code and choose expected behavior when you run a terraform plan or apply:

variable "is_this_correct" { }

A

When you run terraform plan or terraform apply:
Terraform will prompt for a value for the variable is_this_correct unless:

You provide it via CLI (-var flag),
You define it in a terraform.tfvars file,
Or you set it in the environment (TF_VAR_is_this_correct).

If no value is provided:
var.is_this_correct
Enter a value:
If you just press Enter without typing anything, Terraform will throw an error because the variable is required but has no default.

259
Q

In Terraform, each variable label (name) must be unique within a module. Here’s why:

A

🎯 Why must variable names be unique?
Avoid Ambiguity
Terraform uses variable names (labels) to reference values in expressions like var.variable_name. If two variables had the same name, Terraform wouldn’t know which one to use.

Summary:
Each variable label must be unique within the same module.

This prevents conflicts, ensures clear references, and maintains valid HCL syntax.

260
Q

A variable block is given below. Inspect it and check if it’s valid?

variable "providers" {
  type = string
}

A

No, it causes an error. providers is a reserved keyword in Terraform (it is used to pass provider configurations, e.g. for AWS or GCP, into modules), so it cannot be used as a variable name. Even aside from the error, reusing it would confuse other readers of your code.

261
Q

Output block has a few optional arguments that let you customize how the output behaves or is displayed.

A

✅ Optional Arguments for the output Block
* description (string): A helpful description of what the output value is.
* sensitive (bool): If true, hides the value from CLI output (useful for secrets).
* depends_on (list): Explicitly declares dependencies to ensure the output is not calculated before certain resources are created.
* precondition (block, Terraform 1.5+): Allows runtime validation of output values.

🧱 Example:
output "instance_ip" {
  description = "The public IP of the web server"
  value       = aws_instance.web.public_ip
  sensitive   = false
  depends_on  = [aws_instance.web]
}

262
Q

Which meta-argument is used when Terraform cannot infer dependencies between different parts of your infrastructure?

A

depends_on

263
Q

Which among the following best explains the need of the dependency concept in terraform?

A

It allows resources to be created and destroyed in the correct order.

264
Q

What is the primary benefit of using a remote backend for Terraform state management?
a) It stores the state file locally.
b) It enables team collaboration and prevents state corruption.
c) It eliminates the need for a state file.
d) It automatically applies changes without user intervention.

A

Answer: b) It enables team collaboration and prevents state corruption.

265
Q

Which of the following is a commonly used service for implementing a Terraform remote backend?
a) Local file system
b) Microsoft Word
c) Amazon S3
d) Web browser

A

Answer: c) Amazon S3

266
Q

Which attribute is typically required when configuring an S3 bucket as a Terraform remote backend?
a) access_key
b) bucket
c) region
d) Both b and c

A

Answer: d) Both b and c
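A minimal sketch of such a backend block showing the bucket, key, and region arguments (the bucket and key names are hypothetical):
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}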

267
Q

What does state locking prevent in Terraform?
a) Unauthorized access to the state file.
b) Concurrent modifications to the state file.
c) Accidental deletion of resources.
d) Changes to resource attributes.

A

Answer: b) Concurrent modifications to the state file.

268
Q

In a Terraform remote state configuration, what is the purpose of the key attribute?
a) Specifies the encryption key for the state file.
b) Defines the name of the state file in the remote storage location.
c) Sets the access key for the remote backend.
d) Determines the region where the state file is stored.

A

Answer: b) Defines the name of the state file in the remote storage location

269
Q

What is the primary benefit of using a remote backend for Terraform state management?
a) It stores the state file locally.
b) It enables team collaboration and prevents state corruption.
c) It eliminates the need for a state file.
d) It automatically applies changes without user intervention.

A

Answer: b) It enables team collaboration and prevents state corruption.

270
Q

Which of the following is a commonly used service for implementing a Terraform remote backend?
a) Local file system
b) Microsoft Word
c) Amazon S3
d) Web browser

A

Answer: c) Amazon S3

271
Q

Which Terraform command is used to initialize the configured backend?
a) terraform apply
b) terraform init
c) terraform plan
d) terraform destroy

A

Answer: b) terraform init

272
Q

True or False: A Terraform remote backend is mandatory for all Terraform configurations.
a) True
b) False

A

Answer: b) False

273
Q

Which of the following is NOT a benefit of using a remote backend?
a) State sharing among team members
b) State locking to prevent conflicts
c) Increased execution speed of Terraform commands
d) Centralized state storage

A

Answer: c) Increased execution speed of Terraform commands

274
Q

What happens if state locking is not enabled in a remote backend and two users attempt to apply changes simultaneously?
a) Terraform will automatically merge the changes.
b) The first user’s changes will be applied, and the second user’s changes will be lost.
c) Terraform will prevent the second user from applying changes until the first user’s apply is complete.
d) The state file may become corrupted.

A

Answer: d) The state file may become corrupted.

275
Q

Which of the following can be used to store sensitive data, such as access keys, when configuring a remote backend?
a) Directly in the Terraform configuration file
b) Environment variables
c) Hardcoded in the key attribute
d) In plain text

A

Answer: b) Environment variables

276
Q

What is the purpose of the terraform state pull command, and when would you use it?

A

Answer:

The terraform state pull command is used to download the current Terraform state from the configured remote backend to the local machine.

Purpose: It retrieves the state data, which contains information about the resources managed by Terraform.

When to use it:

To inspect the current state: If you need to examine the state data to understand the current configuration of your infrastructure.

To debug issues: When troubleshooting, having a local copy of the state can be helpful.

To perform advanced state manipulations: In rare cases, you might need to manipulate the state data directly (e.g., with terraform state rm), and terraform state pull lets you get a local copy to work with.

To create a backup: You can use terraform state pull to create a local backup of your remote state.
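For example, to take a local backup of the remote state (the file name is illustrative):
terraform state pull > backup.tfstate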

277
Q

What purpose does the terraform.tfstate file serve?

A

The terraform.tfstate file serves as the source of truth for Terraform about the resources it manages. It stores the state data, which contains information about these resources and their current configuration.

278
Q

what is the purpose of the terraform state show and the difference between that and terraform state pull?

A

The primary difference between terraform state show and terraform state pull is what information they display and how they’re used:

terraform state show: This command is used to display the attributes of a specific resource within the Terraform state. You use it to inspect the details of a particular resource that Terraform is managing.
terraform state pull: This command downloads the entire Terraform state file from the remote backend to your local machine. You would use this to get a local copy of the complete state data.
In essence, terraform state show gives you focused information about a single resource, while terraform state pull gives you the entire state file.
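For example (the resource address is illustrative):
terraform state show aws_instance.example   # attributes of one resource
terraform state pull                         # the full state document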

279
Q

How can you disable state locking?

A

State locking is typically managed by the remote backend configuration (you can also skip locking for a single command with the -lock=false flag, though this is not recommended). Here’s how it relates to common backends:

Backends that Support Locking: Backends like Amazon S3 with DynamoDB, Azure Blob Storage, and Google Cloud Storage with Cloud Storage locking do provide state locking.

Disabling Locking:

Generally, it’s not recommended to disable state locking. It’s a crucial mechanism for preventing state corruption when multiple users or processes are applying changes.

If you absolutely must, the method depends on the specific backend:

For Amazon S3: State locking is enabled by using a DynamoDB table. To disable it, you would remove the DynamoDB table configuration from your Terraform backend configuration.

For Azure Blob Storage: State locking uses a blob lease in the storage account. To disable it, you would remove the storage account locking configuration.

For Google Cloud Storage: State locking uses a separate resource. To disable it, you would remove that resource configuration.

Important Considerations:

Data Corruption Risk: Disabling state locking can lead to state corruption if multiple users attempt to modify the state concurrently.

Alternatives: Instead of disabling locking, consider using Terraform Cloud or Terraform Enterprise, which provide more robust collaboration and state management features.

280
Q

How does Terraform protect itself where concurrent operations are run against the same configuration?

A

Answer:

Terraform uses state locking to protect against concurrent operations.

When a Terraform operation that modifies the state (like apply) begins, Terraform acquires a lock on the state file.

This lock prevents other Terraform operations from modifying the state until the first operation is complete.

This prevents state corruption, which can occur if multiple processes try to write to the state file at the same time.

State locking is typically implemented by the remote backend (e.g., using DynamoDB with Amazon S3, a blob in Azure Blob Storage, or a separate resource in Google Cloud Storage).

281
Q

You wanted to play with Terraform to check what it has to offer. After a while you remembered that you didn’t specify any configuration for the backend. What default behaviour is expected here of Terraform?

A

Terraform falls back to the default "local" backend: state is stored in a terraform.tfstate file in the working directory, locked using system APIs, and all operations are performed locally.

282
Q

What are the steps required to remove a resource from the management of terraform?

A

Run terraform state rm <resource address> (for example, terraform state rm aws_instance.example) to remove the resource from the state so Terraform no longer manages it, then delete the corresponding resource block from the configuration so it is not re-created on the next apply.

283
Q

explain what terraform apply does

A

Terraform apply is the command in Terraform that executes the actions proposed in a Terraform plan to create, update, or delete infrastructure resources in your target environment (e.g., AWS, Azure, GCP, etc.).

Here’s a breakdown of what terraform apply does:

  1. Reads the Terraform Configuration:
    It reads the .tf files in your current working directory (or the directory you specify). These files define the desired state of your infrastructure resources.
  2. Optionally Creates a New Execution Plan (if no plan file is provided):
    If you run terraform apply without specifying a pre-generated plan file (using the -out option with terraform plan), Terraform will automatically generate a new execution plan.
    This plan compares the desired state defined in your configuration with the current state of your infrastructure as recorded in the Terraform state file.
    The plan outlines the specific actions Terraform needs to take to reach the desired state (e.g., create, update, or destroy resources).
  3. Shows the Execution Plan:
    Terraform will then display the generated execution plan in your terminal. This plan details each resource that will be created, updated, or destroyed, along with the changes that will be made to their properties.
    This is a crucial step for reviewing the changes before they are applied to your actual infrastructure.
  4. Prompts for Confirmation (by default):
    Unless you use the -auto-approve flag, Terraform will prompt you to confirm whether you want to proceed with the changes outlined in the plan.
    You need to type yes and press Enter to approve the application of the plan. Any other input will cancel the apply operation.
  5. Executes the Planned Actions:
    Upon confirmation, Terraform communicates with the relevant cloud providers or services using their APIs to perform the actions described in the execution plan.
    It creates, updates, or deletes resources according to the plan.
    Terraform respects resource dependencies defined in your configuration, ensuring that resources are created or modified in the correct order.
  6. Updates the Terraform State File:

Once the actions are completed, Terraform updates the state file (terraform.tfstate).
The state file stores metadata about the managed resources and their current configuration in the real-world infrastructure.
This state file is essential for Terraform to track the resources it manages and to determine the necessary changes in future plan and apply operations.
In summary, terraform apply is the command that makes your infrastructure as code a reality. It takes the blueprint defined in your configuration and the proposed changes in your plan and enacts those changes in your environment, while also keeping track of the current state of your infrastructure.

284
Q

Two teams are working on a single terraform project that concerns the provisioning of various services on the AWS cloud platform. Your team just handed over the remaining last bits of the project that needs to be cross-checked. While going through the configuration files you want to look if they consist of any syntax discrepancies and fix some indentation issues that are hampering the readability of files. Select from the following that can solve the issue at hand:

A

The best tool to solve the issues of syntax discrepancies and indentation problems in Terraform configuration files is:

terraform fmt

Here’s why:

terraform fmt specifically addresses formatting and syntax: This command automatically reformats your Terraform configuration files to adhere to the canonical style. It handles:

Indentation: Ensures consistent and correct indentation throughout your .tf files.
Spacing: Standardizes spacing around operators, arguments, and blocks.
Line Breaks: Applies consistent rules for line breaks to improve readability.
Minor Syntax Corrections: While not a full-fledged linter, terraform fmt can sometimes correct very basic syntax issues related to formatting.
Focus on Readability: The primary goal of terraform fmt is to make Terraform code more readable and consistent across the project, which directly addresses your concern about indentation hampering readability.

Idempotent: Running terraform fmt multiple times on the same file will result in the same output. This makes it safe to run and integrate into your workflow.

Why the other options are less suitable for the specific issues you described:

terraform plan: This command generates an execution plan, showing the changes Terraform will make to your infrastructure. While it can indirectly highlight some syntax errors that prevent the plan from being created, it doesn’t specifically focus on formatting or fix indentation issues.

terraform apply: This command applies the changes defined in your Terraform configuration to your AWS environment. It will fail if there are significant syntax errors, but it doesn’t help you identify or fix formatting problems beforehand.

terraform init: This command initializes your Terraform working directory, downloading provider plugins and setting up the state backend. It’s a necessary first step for any Terraform project but doesn’t deal with syntax or formatting.

terraform validate: This command checks the syntax of your Terraform configuration files and verifies that the configuration is syntactically valid. It will catch many errors that would prevent terraform plan or terraform apply from working. However, it primarily focuses on correctness and doesn’t automatically reformat the code for better readability.

Therefore, the most direct and effective solution for checking syntax discrepancies and fixing indentation issues to improve readability is to use terraform fmt. You would typically run this command in the root directory of your Terraform project.

285
Q

Is it necessary to initialize the terraform configuration directory prior to execution of terraform validate.

A

Yes, it is generally necessary to initialize the Terraform configuration directory using terraform init before running terraform validate.

Here’s why:

Provider Plugins: Terraform relies on provider plugins (e.g., aws, azure, google) to understand and interact with specific infrastructure platforms. terraform init is the command that downloads and installs these necessary provider plugins based on the required_providers block in your configuration. Without the relevant provider plugins, terraform validate might not be able to correctly parse and validate resource configurations that depend on those providers.

Module Downloads: If your Terraform configuration uses modules (either from the Terraform Registry, a Git repository, or a local path), terraform init is responsible for downloading and setting up these modules. terraform validate needs access to the module code to properly validate the entire configuration.

Backend Configuration: While terraform validate primarily focuses on syntax and configuration structure, terraform init also sets up the backend where Terraform state will be stored. Although terraform validate itself doesn’t directly interact with the state, the initialization process ensures a consistent and properly configured environment for the overall Terraform project lifecycle.

However, there might be some very basic scenarios where terraform validate could run without a full initialization, but these are exceptions and not recommended practice:

Configurations with no external dependencies: If your Terraform configuration consists only of very basic, provider-agnostic syntax or local modules that are already present, terraform validate might be able to parse and check the syntax without requiring downloaded plugins or modules.
In practice, it’s always best to run terraform init as the first step in your Terraform workflow for any project, including before terraform validate. This ensures that all necessary dependencies are in place and that the validation process can accurately assess your configuration.

Why it’s important to initialize before validating:

Avoid False Negatives: Running terraform validate without initialization might lead to the tool not recognizing provider-specific syntax or resource types, resulting in false negatives (i.e., the configuration appears valid when it actually isn’t once the provider is loaded).
Ensure Comprehensive Validation: Initialization ensures that the validation process considers the specific requirements and syntax of the providers you are using.
Standard Workflow: Following the standard terraform init, terraform plan, terraform apply workflow helps maintain consistency and avoids potential issues down the line.
In conclusion, while there might be very limited edge cases where terraform validate might execute without a prior terraform init, it is the standard and recommended practice to always initialize your Terraform configuration directory before running any other Terraform commands, including terraform validate, to ensure a reliable and accurate validation process.
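
As a minimal sketch of that workflow (the provider block below is an illustrative assumption, not something taken from the question):

Terraform

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

Bash

terraform init      # downloads the hashicorp/aws plugin declared above
terraform validate  # can now fully parse and check aws_* resource blocks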

286
Q

The terraform providers mirror command in Terraform is used to?

A

The terraform providers mirror command in Terraform is used to download the provider plugins required by your current Terraform configuration and copy them into a directory on your local filesystem.

Purpose:

The primary reason for using terraform providers mirror is to create a local mirror of Terraform provider plugins. This is particularly useful in environments where:

Internet access is limited or unreliable: If your Terraform execution environment (e.g., an air-gapped network) cannot directly access the Terraform Registry to download providers during terraform init, you can pre-download the necessary plugins using terraform providers mirror on a machine with internet access and then transfer the mirror directory to the isolated environment.
Network performance is a concern: Downloading providers from a local mirror can be faster than repeatedly fetching them from the public registry.
Security and compliance requirements restrict external access: Organizations with strict security policies might prefer to host their own internal mirror of approved provider plugins.
How it Works:

Analyzes Configuration: When you run terraform providers mirror <target-dir>, Terraform analyzes your current .tf configuration files to determine which providers and versions are required.
Downloads Providers: It then connects to the Terraform Registry (or any configured custom registries) and downloads the .zip files for the specified providers and versions for your current operating system and architecture.
Creates Mirror Structure: Terraform creates a specific directory structure within the <target-dir> that mirrors the structure expected for filesystem-based provider plugin mirrors. This structure typically includes subdirectories for the registry hostname (registry.terraform.io), the provider namespace (hashicorp), and the provider name (aws, azurerm, etc.), along with the .zip files for each version.
Generates Index Files (Optional for Network Mirrors): For setting up a network mirror (accessible via HTTP), the command also generates .json index files that serve as responses to the Terraform CLI when it queries the mirror for available providers and versions.
Example:

Let’s say your Terraform configuration in the current directory requires the hashicorp/aws provider with version ~> 3.0 and the hashicorp/azurerm provider with version >= 2.5.

You want to create a local mirror of these providers in a directory named ./local-provider-mirror. You would run the following command:

Bash

terraform providers mirror ./local-provider-mirror
Terraform would then:

Identify that you need hashicorp/aws and hashicorp/azurerm.

Download the latest compatible versions of these providers (based on the version constraints) as .zip files.

Create a directory structure like this within ./local-provider-mirror:

./local-provider-mirror/
└── registry.terraform.io/
    └── hashicorp/
        ├── aws/
        │   └── 3.x.x/ # Actual version number will vary
        │       └── terraform-provider-aws_3.x.x_linux_amd64.zip
        └── azurerm/
            └── 2.5.x/ # Actual version number will vary
                └── terraform-provider-azurerm_2.5.x_linux_amd64.zip
(The exact directory structure and filenames will depend on your operating system and the downloaded versions.)

Using the Local Mirror:

After creating the local mirror, you need to configure Terraform to use it instead of the default registries. This is typically done in the Terraform CLI configuration file (.terraformrc on Linux/macOS, terraform.rc in %APPDATA% on Windows) using the provider_installation block with a filesystem_mirror or network_mirror configuration.

Example .terraformrc configuration to use the local filesystem mirror:

Terraform

provider_installation {
  filesystem_mirror {
    path    = "/path/to/your/local-provider-mirror" # Replace with the actual path
    include = ["registry.terraform.io/*/*"]
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
With this configuration, when you run terraform init, Terraform will look for the required providers in the specified local mirror directory first. If found, it will use those instead of trying to download them from the internet.

In summary, terraform providers mirror is a command-line tool for pre-downloading Terraform provider plugins to create a local mirror, enabling Terraform to function in environments with limited or no internet access or to improve performance and security.

287
Q

The terraform graph command is used to generate a visual representation in which format? Explain.

A

The terraform graph command generates a visual representation of your Terraform configuration or execution plan in the DOT format.

Explanation of the DOT Format:

DOT Language: DOT is a plain text graph description language. It’s a simple way to define the structure of a graph, including its nodes (representing resources or modules) and edges (representing dependencies between them).
Machine-Readable: The DOT format is designed to be machine-readable. This means that while it’s somewhat human-readable, its primary purpose is to be processed by graph visualization software.
GraphViz: The most common software used to render DOT files into visual diagrams (like PNG, SVG, PDF, etc.) is GraphViz. GraphViz is a collection of open-source tools for graph drawing. The dot command-line utility within GraphViz takes a .dot file as input and generates the specified output format.
How terraform graph Uses DOT:

Dependency Analysis: When you run terraform graph, Terraform analyzes your configuration files or execution plan to understand the dependencies between the resources you’ve defined. For example, an EC2 instance might depend on a VPC and a subnet.

DOT Output: The terraform graph command then outputs this dependency information in the DOT language. This output is a text-based description of the graph, defining nodes for each resource and directed edges to show which resources depend on others.

Visualization with GraphViz: To see the actual visual representation, you typically pipe the output of terraform graph to the dot command from GraphViz:

Bash

terraform graph | dot -Tpng -o graph.png
terraform graph: Generates the DOT format output.
| dot: Pipes the DOT output to the dot command.
-Tpng: Specifies that the output should be in PNG image format (you can choose other formats like SVG, PDF, etc.).
-o graph.png: Specifies the name of the output file.
Example of DOT Output (Simplified):

Code snippet

digraph {
  "aws_vpc.main" -> "aws_subnet.public";
  "aws_subnet.public" -> "aws_instance.web";
}
In this simplified example:

digraph indicates a directed graph.
“aws_vpc.main” and “aws_subnet.public” are nodes representing Terraform resources.
“aws_vpc.main” -> “aws_subnet.public” indicates that aws_subnet.public depends on aws_vpc.main.
“aws_subnet.public” -> “aws_instance.web” indicates that aws_instance.web depends on aws_subnet.public.
In summary, the terraform graph command outputs the dependency graph of your Terraform configuration or plan in the DOT format, a plain text graph description language that can be processed by tools like GraphViz to generate visual diagrams. This visualization is extremely helpful for understanding the relationships between your infrastructure resources, identifying potential issues like circular dependencies, and planning changes.

288
Q

You are working in collaboration with your team on some project involving automation using terraform. One of the team members modified a resource on the real cloud infrastructure on the azure platform. You want those resources to be reflected in your local machine. What would you go with:

A

The command you would use to reflect the changes made to the real cloud infrastructure (on Azure) by your team member into your local Terraform state file is:

terraform refresh

Explanation:

terraform refresh is specifically designed to update the Terraform state file with the current state of the real-world infrastructure. It queries the provider (in this case, the Azure provider) to get the latest attributes and status of the resources that Terraform is managing.

Why other options are not suitable:

terraform init: This command initializes your Terraform working directory by downloading provider plugins and setting up the backend. It doesn’t directly query the cloud provider for the current state of resources. You would typically run terraform init only once or when you change your backend configuration or add new providers.
terraform plan: This command generates an execution plan by comparing your Terraform configuration with the current state stored in your state file. It doesn’t actively fetch the latest state from the cloud provider. Running terraform plan after terraform refresh would show you the differences between your configuration and the updated real-world state.
terraform apply: This command applies the changes defined in your Terraform configuration to the cloud infrastructure. It’s used to create, update, or delete resources based on your configuration and the plan. It’s not used to import the current state of existing, manually modified resources.
terraform import: This command is used to bring existing, unmanaged infrastructure resources into Terraform’s management. While it does interact with the cloud provider, it’s for resources that Terraform wasn’t previously aware of. In your scenario, the resource is already managed by Terraform (as another team member modified it), so you just need to update the local state.
How to use terraform refresh:

Simply navigate to your Terraform project directory in your local terminal and run:

Bash

terraform refresh
Terraform will then connect to Azure, query the resources defined in your configuration, and update your terraform.tfstate file to reflect their current status and attributes.

After running terraform refresh:

It’s a good practice to then run terraform plan to see the differences between your local configuration and the refreshed state. This will show you what changes Terraform might propose to bring your configuration back in sync with the actual infrastructure (if there are any discrepancies). You can then decide whether to apply those changes using terraform apply.
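
Putting that workflow together (a minimal sketch):

Bash

terraform refresh   # update terraform.tfstate with the current state of the Azure resources
terraform plan      # review any drift between your configuration and the refreshed state
terraform apply     # optionally reconcile the two

On recent Terraform versions (0.15.4 and later), terraform plan -refresh-only and terraform apply -refresh-only provide the same state update with an explicit review step.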

289
Q

Question: What are Terraform lifecycle rules?

A

Answer: Terraform lifecycle rules are a set of directives within a resource block that allow you to customize how Terraform manages the creation, updating, and deletion of that resource. They provide more control over the resource’s lifecycle.

290
Q

Question: What is the purpose of the create_before_destroy lifecycle rule?

A

Answer: The create_before_destroy rule ensures that when a resource needs to be replaced (due to a configuration change), Terraform creates the new replacement resource before destroying the old one. This can help minimize downtime.

291
Q

Question: How does prevent_destroy work, and when would you use it?

A

Answer: The prevent_destroy rule is a boolean flag. When set to true, it prevents Terraform from destroying the associated resource. This is useful for protecting critical resources from accidental deletion.

292
Q

Question: What does the ignore_changes lifecycle rule do?

A

Answer: The ignore_changes rule allows you to specify a list of resource attributes that Terraform should ignore when planning updates. This is helpful when attributes are modified by external processes, and you don’t want Terraform to revert those changes.

293
Q

Question: Can you provide an example of using create_before_destroy?

A

Answer:

resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

In this example, if the ami changes, Terraform will create a new instance with the new AMI before terminating the old one.

294
Q

Question: When might you use prevent_destroy?

A

Answer: You might use prevent_destroy for:

Critical databases

Essential network components

Any resource whose accidental deletion would cause significant disruption.
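
A minimal sketch protecting a storage bucket (the resource and bucket name are illustrative assumptions):

Terraform

resource "aws_s3_bucket" "critical_data" {
  bucket = "my-critical-data-bucket" # illustrative bucket name

  lifecycle {
    prevent_destroy = true # any plan that would destroy this bucket fails with an error
  }
}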

295
Q

Question: Give an example of a scenario where ignore_changes is useful.

A

Answer: If an external monitoring system automatically updates tags on your EC2 instances, you could use ignore_changes to prevent Terraform from constantly trying to revert those tag changes.

resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags = {
    Name        = "MyInstance"
    Environment = "Production"
  }

  lifecycle {
    ignore_changes = [tags]
  }
}

296
Q

Question: What happens if you try to destroy a resource with prevent_destroy = true?

A

Answer: Terraform will generate an error and halt the destroy operation, preventing the resource from being deleted.

297
Q

Question: How do lifecycle rules relate to Terraform’s state?

A

Answer: Lifecycle rules affect how Terraform manages resources within its state. For example, ignore_changes influences which attribute changes Terraform detects and stores in the state file, thus affecting subsequent plans and applies.

298
Q

Question: Are lifecycle rules applied during terraform plan or terraform apply?

A

Answer: Lifecycle rules influence both terraform plan and terraform apply.

terraform plan: The plan will reflect the behavior defined by the lifecycle rules (e.g., showing a create-before-destroy action).

terraform apply: The apply command executes the actions as dictated by the plan, which in turn are governed by the lifecycle rules.

299
Q

Which argument of the lifecycle meta-argument supports a list as a value ?

A

Based on the information, the answer is ignore_changes.

Let's clarify why ignore_changes accepts a list, while the others do not:

ignore_changes: This lifecycle argument is designed to tell Terraform to disregard changes to specific attributes of a resource. Because a resource can have multiple attributes, and you might want to ignore changes to more than one of them, it makes sense that ignore_changes accepts a list of attribute names.

For example, you might have a resource with attributes like name, tags, and security_groups. If you only want Terraform to ignore changes to the tags and security_groups, you’d use:

resource "aws_instance" "example" {
  # ... other configuration ...

  lifecycle {
    ignore_changes = [tags, security_groups]
  }
}

create_before_destroy: This argument is a boolean value (true or false). It’s a simple on/off switch to control whether Terraform creates a new resource before destroying the old one. It doesn’t need a list of values.

prevent_destroy: This argument is also a boolean (true or false). It’s a single flag that either prevents Terraform from destroying the resource or allows it. Again, a list of values wouldn’t be relevant here.

In summary, ignore_changes needs a list to specify which attributes should be excluded from Terraform’s change detection, while create_before_destroy and prevent_destroy are single-condition toggles and thus only require a boolean value.

300
Q

Which of the following statements is FALSE about Immutable infrastructure?

A

With immutable infrastructure, components are never modified in place once they are deployed; any change (a new application version, a configuration change, a patch) is rolled out by provisioning a replacement resource and retiring the old one. This reduces configuration drift and makes deployments more consistent, predictable, and easier to roll back.

Out of the given options, the FALSE statement is therefore whichever one claims that immutable infrastructure is updated or patched in place, or that it increases configuration drift; those behaviors describe mutable infrastructure.

301
Q

What are the available arguments for the lifecycle meta-argument?

A


Available arguments for the lifecycle meta-argument:

create_before_destroy

prevent_destroy

ignore_changes

302
Q

What is a common use case for utilizing Terraform workspaces in a real-world scenario?

A

Answer: A common use case is managing separate environments for an application lifecycle, such as development, staging, and production. Each workspace would have its own configuration and state, allowing you to make changes and test them in isolation before deploying to production. This helps prevent unintended impacts between different environments.
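
A common pattern is to key configuration off the built-in terraform.workspace value; a minimal sketch (the resource, AMI, and sizing values are illustrative assumptions):

Terraform

resource "aws_instance" "app" {
  ami           = "ami-123456" # illustrative AMI
  instance_type = terraform.workspace == "production" ? "m5.large" : "t3.micro"

  tags = {
    Environment = terraform.workspace # "default", "staging", "production", etc.
  }
}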

303
Q

Where does Terraform store the state files for different workspaces by default?

A

Answer: By default, when using local state storage, Terraform creates a subdirectory named terraform.tfstate.d within your project directory. Inside this directory, each workspace has its own subdirectory containing its respective terraform.tfstate file.

304
Q

How do you create a new Terraform workspace named “staging”? How do you switch to it?

A

Answer:
To create a new workspace:
Bash

terraform workspace new staging
To switch to the “staging” workspace:
Bash

terraform workspace select staging

305
Q

Can you import multiple resources with a single terraform import command? If not, how would you handle importing several existing resources?

A

Answer: No, you cannot import multiple resources with a single terraform import command. You need to run the terraform import command once for each individual resource you want to bring under Terraform management. You would iterate through each existing resource, identify its Terraform resource address and unique ID, and execute the terraform import command for each one.

306
Q

What happens if you try to import a resource into a Terraform state where a resource with the same address already exists?

A

Answer: Terraform will typically return an error indicating that a resource with the specified address already exists in the state. You cannot have two resources with the same address within the same Terraform state.

307
Q

After successfully importing a resource, what is the typical next step you should take in your Terraform configuration? Why is this important?

A

Answer: The typical next step is to define the configuration arguments within the corresponding resource block in your .tf file to match the actual configuration of the imported resource. This is crucial because Terraform needs to know the desired state of the resource to manage it effectively. Without these arguments, Terraform will only know that the resource exists in the state file but won’t be able to detect or apply any configuration changes.

308
Q

Provide an example of a terraform import command to bring an existing AWS S3 bucket named my-existing-bucket under Terraform management, assuming you have a resource block defined as resource “aws_s3_bucket” “my_bucket” {}.

A

Answer:
Bash

terraform import aws_s3_bucket.my_bucket my-existing-bucket

309
Q

Explain the concept of a Terraform workspace. What problem do workspaces help solve?

A

Answer: A Terraform workspace is an isolated environment within a single Terraform configuration. Each workspace has its own state file, allowing you to manage multiple distinct deployments of the same infrastructure (e.g., development, staging, production) without them interfering with each other. Workspaces help solve the problem of managing multiple environments with potentially different configurations and resource instances using the same Terraform code.

310
Q

What is the primary purpose of the terraform import command?

A

Answer: The primary purpose of the terraform import command is to bring existing infrastructure resources (that were not initially created by Terraform) under Terraform’s management. This allows you to manage and modify these resources using Terraform configurations without having to recreate them.

311
Q

What is the usage pattern of the terraform import command?

A

The primary usage pattern of the terraform import command involves the following steps:

Identify Existing Infrastructure: Determine the specific infrastructure resources that you want to bring under Terraform management. This could be resources created manually through a cloud provider’s console, CLI, or other Infrastructure as Code tools.

Define the Resource in Terraform Configuration: In your Terraform .tf files, you need to create a resource block that corresponds to the existing infrastructure. This block defines the type of resource (e.g., aws_instance, azurerm_virtual_machine) and gives it a local name within your Terraform configuration (the ADDRESS). You don’t need to define all the attributes of the resource at this stage, but the resource block must exist.

Terraform

resource "aws_instance" "my_imported_instance" {
  # Configuration attributes will be discovered during import
}
Obtain the Resource ID: You need the unique identifier of the existing infrastructure resource as recognized by the provider. This ID is specific to the resource type and provider (e.g., an EC2 instance ID like i-abcdef1234567890, an Azure Virtual Machine resource ID). Consult the documentation for the specific resource provider to understand how to find this ID.

Execute the terraform import Command: Run the terraform import command in your terminal, providing the Terraform resource address and the resource ID:

Bash

terraform import <resource_type>.<resource_name> <resource_id>
For example:

Bash

terraform import aws_instance.my_imported_instance i-abcdef1234567890
Verify the Terraform State: After the import command is successful, Terraform will have added the resource to your state file. You can inspect the state file (usually terraform.tfstate) to see the imported resource and its basic attributes.

Update Terraform Configuration: The terraform import command only brings the resource into the state; it does not automatically generate the full Terraform configuration for it. You need to manually update the resource block in your .tf file to accurately reflect the current configuration of the imported resource. This involves adding the relevant arguments and their values based on the actual state of the infrastructure.

Terraform

resource "aws_instance" "my_imported_instance" {
  ami           = "ami-xxxxxxxxxxxxxxxxx"
  instance_type = "t2.micro"
  subnet_id     = "subnet-xxxxxxxxxxxxxxxxx"
  # ... other configuration attributes
}
Plan and Apply: Once your Terraform configuration accurately describes the imported resource, you can run terraform plan to see if Terraform detects any differences between your configuration and the actual state. If there are differences, you can adjust your configuration accordingly and then run terraform apply to reconcile the state. From this point forward, Terraform will manage the imported resource.

312
Q

When you run the terraform import command, how does it modify the Terraform state file(s)?

A

Adds a New Resource to the State: The primary action of terraform import is to add a new resource entry to your Terraform state. This entry records that a specific real-world infrastructure object (identified by the ID you provide) is now associated with a particular resource address in your Terraform configuration (the ADDRESS you specify).

Records Basic Attributes: Terraform queries the provider for the current attributes and properties of the existing infrastructure resource using the provided ID. It then stores a snapshot of these attributes within the newly created resource entry in the state file. This allows Terraform to be aware of the current configuration of the imported resource.

Links Configuration to Real Infrastructure: The import operation establishes a link between the resource block you’ve defined in your .tf configuration file (using the specified ADDRESS) and the actual infrastructure resource in your cloud provider or service (identified by the ID).

Does Not Modify the Configuration Files: Importantly, terraform import only modifies the state file. It does not automatically generate or update your .tf configuration files to reflect the imported resource’s attributes. You must manually update your configuration to match the actual state of the imported resource after a successful import.

Workspace Specific: The changes made by terraform import are specific to the currently selected Terraform workspace. If you have multiple workspaces (e.g., default, staging, production), importing a resource while in the staging workspace will only affect the state file associated with that specific workspace. The state files of other workspaces will remain unchanged.

In summary, terraform import bridges the gap between existing infrastructure and your Terraform management by:

Creating a new entry in the state file for the imported resource.
Storing the current attributes of the real-world resource in the state.
Associating the state with a specific resource address defined in your configuration.
It is crucial to remember that terraform import is only half the battle. The next critical step is to manually update your Terraform configuration files to accurately describe the imported resource so that Terraform can effectively manage it going forward.


313
Q

When using the local backend for Terraform state storage, where does Terraform store the state files for different workspaces?

A

When using the local backend, Terraform stores the workspace state files in a subdirectory named terraform.tfstate.d inside your current working directory.

Within this terraform.tfstate.d directory, each workspace will have its own subdirectory named after the workspace itself (e.g., default, staging, production). The actual state file for that workspace is then named terraform.tfstate within its respective subdirectory.

Example:

If you have three workspaces: default, staging, and production, and you are using the local backend, your state file structure would look something like this:

.
├── .terraform/
├── main.tf
├── terraform.tfstate.d/
│   ├── default/
│   │   └── terraform.tfstate
│   ├── production/
│   │   └── terraform.tfstate
│   └── staging/
│       └── terraform.tfstate
└── variables.tf
Important Note: When using a remote backend (like AWS S3, Azure Blob Storage, or Terraform Cloud), the state files for different workspaces are typically stored in separate paths or prefixes within the remote storage location, as configured in your backend settings. The terraform.tfstate.d directory is specific to the local backend.

314
Q

When we start off and create a configuration in Terraform, what is the workspace that is created, to begin with?

A

When you start off and create a configuration in Terraform, the default workspace is automatically created for you.

You don’t need to explicitly create it. This workspace is active by default when you initialize your Terraform project (terraform init) and start defining your infrastructure.

Think of it as the initial, built-in environment where Terraform begins managing your state. If you don’t create any other workspaces, all your resources and state will be associated with this default workspace.

You can verify the currently active workspace by running the command:

Bash

terraform workspace show
This will output default if you haven’t created or switched to any other workspaces.

315
Q
What is the usage pattern of the terraform import command?
A

The usage pattern of the terraform import command is:

terraform import [options] ADDRESS ID
Where:

terraform import: This is the main command to import existing infrastructure.
[options]: These are optional flags that can modify the import behavior. Some common options include -config, -var, -var-file, etc. You can see the full list with terraform import -help.
ADDRESS: This is the Terraform resource address that you want to associate with the existing infrastructure. This address must correspond to a resource block defined in your Terraform configuration files (even if the block is currently empty). The format is typically <resource_type>.<resource_name>. For example, aws_instance.server.
ID: This is the unique identifier of the existing infrastructure object in the provider's system. This ID is specific to the cloud provider or service you are working with. For example, an AWS instance ID (i-xxxxxxxxxxxxxxxxx), an Azure virtual machine ID, or a Google Cloud Compute Engine instance name.

316
Q

Select the invalid subcommand for terraform workspace among the following:
new: Create a new workspace.
select: Select an existing workspace.
rename: Rename a workspace.
list: List existing workspaces.
delete: Delete an existing workspace.
show: Show the name of the current workspace.

A

rename

The terraform workspace command does not have a rename subcommand. The valid subcommands typically include:

new: Create a new workspace.
select: Select an existing workspace.
list: List existing workspaces.
delete: Delete an existing workspace.
show: Show the name of the current workspace.

317
Q

What does Terraform import do?

A

terraform import does not update the configuration files at all; it only updates the state file with the details of the infrastructure being imported.

This statement is correct. terraform import only modifies the Terraform state file to record that an existing infrastructure object is now managed by the specified resource address in your configuration. It does not automatically generate or modify your .tf configuration files. You will need to manually write the corresponding resource block in your configuration to fully manage the imported resource.

318
Q

You intend to import two resources to your terraform configuration. You executed only the terraform import command until now and it worked. Will the terraform apply work if executed now?

A

It will throw an error, because we haven't updated the resources with the correct argument values yet.

This is the correct answer. While terraform import successfully adds the resources to your state file, your Terraform configuration files (.tf files) still lack the resource blocks with the necessary arguments (attributes) that define the imported infrastructure.

When you run terraform apply, Terraform compares the desired state defined in your configuration with the current state recorded in the state file. Since your configuration doesn’t yet describe the imported resources with their attributes, Terraform will likely detect a drift or an incomplete configuration and will either:

Show a plan with changes that attempt to create new resources (because it doesn’t see a corresponding configuration for the imported state).
Throw an error indicating that the resource configuration is incomplete or that it cannot reconcile the state with the missing configuration.
You need to manually add the resource blocks to your .tf files and define the arguments that match the attributes of the imported infrastructure. Only after your configuration accurately reflects the imported resources will terraform apply be able to manage them correctly.

319
Q

For local state, Terraform stores the workspace states in a directory called:

A

terraform.tfstate.d

This is correct. When using the default local backend, Terraform stores the state files for different workspaces within a subdirectory named terraform.tfstate.d inside your Terraform project directory. Each workspace will have its own state file within this directory.

320
Q

When we start off and create a configuration in Terraform, what is the workspace that is created, to begin with?

A

default

This is correct. When you initialize a Terraform project (terraform init) without explicitly creating or selecting a workspace, Terraform automatically creates and selects the default workspace. This is the initial and active workspace until you create or switch to another one.

321
Q

Explain the potential issues that can arise when using count with a list of variable length, and how for_each addresses these issues.

A

Explanation: This question aims to test the understanding of how Terraform tracks resources in state. When using count with a list, Terraform identifies each resource by its numerical index. If the list changes (e.g., an item is added or removed), the indices of subsequent items shift. This can lead to Terraform destroying and recreating resources unnecessarily, as Terraform sees a change in the index, and thus a “new” resource.

for_each, on the other hand, uses keys from a map or set to identify resources. This means that if an item is added or removed, only that specific resource is affected, and other resources remain unchanged. This makes for_each more robust for managing resources based on dynamic data.
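
A minimal sketch of the difference (the variable, resource type, and naming are illustrative assumptions):

Terraform

variable "users" {
  type    = list(string)
  default = ["alice", "bob", "carol"]
}

# count tracks each instance by index: removing "alice" shifts every later index,
# so Terraform would destroy and recreate the users that follow her.
resource "aws_iam_user" "by_count" {
  count = length(var.users)
  name  = "count-${var.users[count.index]}"
}

# for_each tracks each instance by key: removing "alice" only destroys
# aws_iam_user.by_key["alice"]; the other users are untouched.
resource "aws_iam_user" "by_key" {
  for_each = toset(var.users)
  name     = "key-${each.value}"
}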

322
Q

Describe a scenario where you need to create multiple instances of a resource with varying configurations, and explain how you would implement this using either count or for_each, justifying your choice.

A

Explanation: This question tests the ability to apply the concepts to a practical situation.

Example using for_each:

Scenario: You need to create multiple virtual machines with different sizes and operating systems. You have a map variable where the keys are VM names, and the values are objects containing the size and OS.

Implementation: You would use for_each because you have a map with unique keys (VM names) and varying configurations (sizes and OS). for_each allows you to iterate over the map and create a VM resource for each entry, using the key to identify the VM and the value to configure it.

Example using count:

Scenario: You need to create a specific number of identical subnets within a VPC.

Implementation: You could use count if the subnets are largely identical. You’d provide a number, and then use count.index to create slight variations (like the CIDR block) if needed.

323
Q

What is the primary purpose of the count meta-argument in Terraform?

A

The primary purpose of the count meta-argument in Terraform is to create multiple instances of a resource.

324
Q

What data type does the count meta-argument accept?

A

The count meta-argument accepts a numeric value.

325
Q

How can you reference individual resources created with the count meta-argument?

A

Within the resource block, count.index gives the zero-based index of the instance being created. Elsewhere in the configuration, you reference a specific instance with index syntax (for example, aws_instance.example[0]) or all instances with the splat syntax aws_instance.example[*].
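
For instance (a minimal sketch with assumed resource names):

Terraform

resource "aws_instance" "example" {
  count         = 2
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags = {
    Name = "web-${count.index}" # web-0, web-1
  }
}

output "first_instance_id" {
  value = aws_instance.example[0].id # reference a single instance by index
}

output "all_instance_ids" {
  value = aws_instance.example[*].id # reference all instances with splat syntax
}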

326
Q

What is the primary purpose of the for_each meta-argument in Terraform?

A

The primary purpose of the for_each meta-argument in Terraform is to create multiple instances of a resource, iterating over a map or a set.

327
Q

What data types does the for_each meta-argument accept?

A

The for_each meta-argument accepts a map or a set of strings.

328
Q

What is the key difference between count and for_each in terms of how they handle resource dependencies and changes?

A

count identifies resources by their numerical index, which can cause unnecessary recreation of resources if the list order changes. for_each uses keys from a map or set, making it more robust to changes, as only the specific resource associated with the changed key is affected.

329
Q

Can you use both count and for_each within the same resource block? Why or why not?

A

No, you cannot use both count and for_each within the same resource block. They are mutually exclusive meta-arguments that serve different purposes for creating multiple instances of a resource. Terraform needs to know definitively how the multiple instances should be created, either by a numerical count or by iterating over a collection of key-value pairs.

330
Q

When would you typically use count over for_each? Provide an example.

A

You would typically use count over for_each when you need to create a specific number of essentially identical resources, and the order of (or changes to) the underlying collection is not critical. Example: creating a fixed number of identical subnets within a VPC, using count.index only for minor variations such as the CIDR block.

331
Q

When would you typically use for_each over count? Provide an example.

A

You would typically use for_each over count when you need to create multiple resources with varying configurations, and you want Terraform to manage each resource’s lifecycle independently based on a unique key.

Example: Creating multiple virtual machines with different sizes and operating systems, where each VM is defined by a unique name and its configuration.

332
Q

When would you typically use for_each over count? Provide an example.

A

You would typically use for_each over count when you need to create multiple resources with varying configurations, and you want Terraform to manage each resource’s lifecycle independently based on a unique key.

Example: Creating multiple virtual machines with different sizes and operating systems, where each VM is defined by a unique name and its configuration.

Example using count:

resource "aws_instance" "example_count" {
  count         = 3
  ami           = "ami-0c55b956cb0f91f71" # Example AMI - change as needed
  instance_type = "t2.micro"

  tags = {
    Name = "Count-Instance-${count.index + 1}"
  }
}

Example using for_each:

variable "vm_configs" {
  type = map(object({
    ami           = string
    instance_type = string
    tags          = map(string)
  }))
  default = {
    vm1 = {
      ami           = "ami-0c55b956cb0f91f71" # Example AMI - change as needed
      instance_type = "t2.micro"
      tags = {
        Name = "VM-1"
      }
    }
    vm2 = {
      ami           = "ami-02357210d73c7f1e5" # Example AMI - change as needed
      instance_type = "t3.small"
      tags = {
        Name = "VM-2"
      }
    }
  }
}

resource "aws_instance" "example_for_each" {
  for_each      = var.vm_configs
  ami           = each.value.ami
  instance_type = each.value.instance_type
  tags          = each.value.tags
}

333
Q

To invoke a provisioner when resources are destroyed, you can use the when = destroy argument within the provisioner block.

A

Here's how it works:

Key points:

when = destroy: This tells Terraform to execute the provisioner when the associated resource is being destroyed.
Placement: The provisioner block with when = destroy is placed within the resource block, just like a regular provisioner.
Use Case: This is useful for cleanup tasks, such as deleting temporary files, removing database entries, or notifying other systems.
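
A minimal sketch (the resource, AMI, and command are illustrative assumptions):

Terraform

resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    when    = destroy
    command = "echo 'Instance ${self.id} is being destroyed' >> destroy.log"
  }
}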

334
Q

What can be used to determine the length of a given list, map, or string?
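
A

The length() built-in function returns the number of elements in a list or map, or the number of characters in a string. For example:

length(["a", "b", "c"])  # returns 3
length({ a = 1, b = 2 }) # returns 2
length("hello")          # returns 5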

335
Q

In Terraform, you can concatenate a list of strings with a delimiter using the ?? function.

A

In Terraform, you can concatenate a list of strings with a delimiter using the join function.

Syntax:

join(delimiter, list)

delimiter: The string to use between the elements.

list: The list of strings to join.

Example:

variable "string_list" {
  type    = list(string)
  default = ["apple", "banana", "cherry"]
}

output "concatenated_string" {
  value = join(", ", var.string_list) # Joins the list with ", " as the delimiter
}

In this example, the output will be:

"apple, banana, cherry"

336
Q

Types of Provisioners

A

Terraform has a few different types of provisioners:

file: Copies files or templates to the resource.

local-exec: Executes a command on the machine running Terraform.

remote-exec: Executes a command on the remote resource (e.g., a server).
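
A minimal sketch combining the three (the resource, file paths, commands, and connection details are illustrative assumptions):

Terraform

resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  # file and remote-exec need a connection block to reach the instance
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  # Copy a script to the new instance
  provisioner "file" {
    source      = "scripts/setup.sh"
    destination = "/tmp/setup.sh"
  }

  # Run commands on the remote instance
  provisioner "remote-exec" {
    inline = ["chmod +x /tmp/setup.sh", "sudo /tmp/setup.sh"]
  }

  # Run a command on the machine running Terraform (no connection block needed)
  provisioner "local-exec" {
    command = "echo 'Created instance ${self.public_ip}' >> instances.txt"
  }
}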

337
Q

Which keyword is used for provisioner utilization within terraform script?

A

provisioner

338
Q

Which of the following provisioners does not need a connection block defined?

A

local-exec

339
Q

Why not file or templatefile for dynamically generated files?

A

You’re correct that the file and templatefile functions are not designed to read files that your Terraform configuration generates dynamically during the plan or apply steps. Here’s why:

Timing: Terraform’s execution has distinct phases:

plan: Terraform analyzes your configuration and determines the changes needed to reach the desired state.

apply: Terraform executes the actions defined in the plan to create, modify, or destroy resources.

Function Evaluation: The file and templatefile functions are evaluated during the plan phase. At this point, Terraform needs to know the file’s contents to determine how it will be used.

Dynamic File Generation: When a file is generated dynamically as part of the apply step (e.g., by a provisioner or as a result of a resource’s action), its content is not available during the plan phase. Terraform cannot predict what the file will contain.

In essence, Terraform needs to know the file’s content before it executes the steps that create the file. file and templatefile operate too early in the process.
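
For files that already exist on disk before the run, templatefile works as expected; a minimal sketch (the template path and variables are illustrative assumptions):

Terraform

# templates/init.tftpl must exist at plan time; its rendered content is known before apply
output "rendered_init" {
  value = templatefile("${path.module}/templates/init.tftpl", {
    port = 8080
  })
}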

340
Q

Which built-in function returns the closest whole number that is greater than or equal to the given value?

A

The function you're looking for is ceil().
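
For example:

ceil(5)   # returns 5
ceil(5.1) # returns 6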

341
Q

Terraform console

A

This command provides an interactive command-line console for evaluating and experimenting with expressions. This is useful for testing interpolations before using them in configurations, and for interacting with any values currently saved in state.
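
For example, from an initialized project directory:

Bash

$ terraform console
> upper("terraform")
"TERRAFORM"
> length(["a", "b", "c"])
3
> exit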

342
Q

What is a root module? is it necessary?

A

A root module is the main working directory of a Terraform configuration: the directory containing the top-level .tf files where you run Terraform commands. Every Terraform configuration has at least one module, which is the root module, so it is not a question of necessity; it is a fundamental aspect of how Terraform configurations are structured.

343
Q

types of terraform modules based on the credibility tier.

A

Official
Verified
Community

344
Q

Which module can call other modules and connect them together by passing output values from one as input values to another.
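
A

The root module (or, more generally, any calling module) can call child modules and connect them together by passing the output values of one module in as input values of another. A minimal sketch (the module paths and variable names are illustrative assumptions):

Terraform

module "network" {
  source = "./modules/network"
}

module "servers" {
  source    = "./modules/servers"
  subnet_id = module.network.subnet_id # output of one module fed into another
}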

345
Q

What Terraform commands can download modules?

A

terraform get
terraform init

(Note: terraform pull is not a Terraform command; only terraform get and terraform init download the modules referenced in a configuration.)