Thursday, May 1, 2025

Terraform Debugging

Class 16: Terraform Debugging, May 1st (DevOps)
Topics: Terraform state lock, depends_on, and parallelism

Debugging helps you understand issues (such as misconfigurations or dependency errors) and fix them efficiently.
To capture trace logs, set the TF_LOG environment variable; with TF_LOG_PATH also set, the output is stored in a log file.
Log levels: TRACE, DEBUG, INFO, WARN, ERROR
export TF_LOG=TRACE 
export TF_LOG_PATH="log.txt"
terraform apply
To unset:
unset TF_LOG
unset TF_LOG_PATH
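After a run, the log file can be filtered by level; here is a minimal shell sketch (the two sample log lines below are made up for illustration):

```shell
# Create a tiny fake TF_LOG-style file (contents are illustrative only),
# then count the ERROR-level lines in it.
printf '%s\n' \
  '2025-05-01T10:00:00.000Z [TRACE] provider: starting' \
  '2025-05-01T10:00:01.000Z [ERROR] apply failed' > log.txt
grep -c '\[ERROR\]' log.txt
```

The same grep works on the real log file produced via TF_LOG_PATH.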
Step 1: The log file is created in the current directory.
[root@ip-10-0-1-221 ~]# sudo yum install -y yum-utils shadow-utils
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Package yum-utils-1.1.31-46.amzn2.0.1.noarch already installed and latest version
Package 2:shadow-utils-4.1.5.1-24.amzn2.0.3.x86_64 already installed and latest version
Nothing to do
[root@ip-10-0-1-221 ~]# sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
[root@ip-10-0-1-221 ~]# sudo yum -y install terraform
[root@ip-10-0-1-221 ~]# mkdir ccit
[root@ip-10-0-1-221 ~]# vi cloudinfra.tf
[root@ip-10-0-1-221 ~]# cat cloudinfra.tf
provider "aws" {
  region     = "eu-west-1"
  access_key = "AKIATFBMO7H4MQLOWPFY"
  secret_key = "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"
}
resource "aws_s3_bucket" "bucket" {
  bucket = "ccitbucketerraform"
}
[root@ip-10-0-1-221 ~]# terraform init    (all provider plugins are downloaded into the .terraform directory)
[root@ip-10-0-1-221 ~]# terraform plan
[root@ip-10-0-1-221 ~]# terraform apply -auto-approve
Plan: 1 to add, 0 to change, 0 to destroy.
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 1s [id=ccitbucketerraform]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

                         Script for Instance
provider "aws" {
  region     = "eu-west-1"
  access_key = "AKIATFBMO7H4MQLOWPFY"
  secret_key = "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"
}
resource "aws_instance" "ccitinst" {
  ami           = "ami-04e7764922e1e3a57"
  instance_type = "t2.micro"
  subnet_id     = "subnet-0477e85088645156b"
}
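Hardcoding access keys in .tf files is unsafe (they end up in version control); one alternative is to pass them as input variables. A sketch, where the variable names are illustrative:

```hcl
# Illustrative variable names; values are supplied via terraform.tfvars
# or TF_VAR_* environment variables, never committed to the repository.
variable "aws_access_key" {
  type      = string
  sensitive = true
}
variable "aws_secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "eu-west-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}
```

Later in these notes the same problem is solved with a shared-credentials profile, which is simpler still.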

Debugging with TF_LOG=TRACE
[root@ip-10-0-1-221 ~]# export TF_LOG=TRACE
[root@ip-10-0-1-221 ~]# terraform apply -auto-approve
[root@ip-10-0-1-221 ~]# export TF_LOG_PATH="logs.txt"
[root@ip-10-0-1-221 ~]# terraform apply -auto-approve

[root@ip-10-0-1-221 ~]# ls -lrt
total 1924
drwxr-xr-x 2 root root       6 May 29 17:06 ccit
-rw-r--r-- 1 root root     194 May 29 17:31 cloudinfra.tf
-rw-r--r-- 1 root root    2664 May 29 17:50 terraform.tfstate.backup
-rw-r--r-- 1 root root    2663 May 29 17:50 terraform.tfstate
-rw-r--r-- 1 root root 1957198 May 29 17:50 logs.txt

[root@ip-10-0-1-221 ~]# unset TF_LOG
[root@ip-10-0-1-221 ~]# unset TF_LOG_PATH

                                       Alias and Providers
-> In Terraform, providers are responsible for managing and interacting with external services (like AWS, Azure, GCP, etc.).
-> Aliases allow you to define multiple configurations of the same provider within a single Terraform configuration.
What is a provider?
A provider is a plugin that enables Terraform to interact with an external service,
e.g. "aws", "azurerm", "google", etc.
What is the use of an alias?
The alias argument allows multiple configurations of the same provider. This is useful when:
1. You deploy resources to multiple AWS regions,
2. You deploy to multiple AWS accounts,
3. You deploy to different service providers.

CLI
To communicate with AWS services, the CLI needs an access key and secret key,
created for an IAM user in the AWS console.
    
[root@ip-10-0-1-221 ccit]# aws configure
AWS Access Key ID [None]: AKIATFBMO7H4MQLOWPFY
AWS Secret Access Key [None]: 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
Default region name [None]: eu-west-1
Default output format [None]:
The access keys are stored in the file below:
[root@ip-10-0-1-221 ccit]# vim ~/.aws/credentials

[ccitapr]
aws_access_key_id = AKIATFBMO7H4CHQFIBIT
aws_secret_access_key = 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
[default]
aws_access_key_id = AKIATFBMO7H4MQLOWPFY
aws_secret_access_key = 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

[root@ip-10-0-1-221 ccit]# cat cloudinfra.tf
provider "aws" {
  region  = "eu-west-1"
  profile = "default"
}

provider "aws" {
  alias   = "prod"
  region  = "eu-west-2"
  profile = "ccitapr"
}

resource "aws_s3_bucket" "bucket" {
  provider = aws
  bucket   = "ccitbucketerrawest1"
}

resource "aws_s3_bucket" "euwestbucket2" {
  provider = aws.prod
  bucket   = "ccitbucketerrawest2"
}
[root@ip-10-0-1-221 ccit]# terraform apply -auto-approve
Plan: 2 to add, 0 to change, 0 to destroy.
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.euwestbucket2: Creating...
aws_s3_bucket.bucket: Creation complete after 1s [id=ccitbucketerrawest1]
aws_s3_bucket.euwestbucket2: Creation complete after 1s [id=ccitbucketerrawest2]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

[root@ip-10-0-1-221 ccit]# terraform destroy -auto-approve


                               Terraform state file Lock 
Practical: open two sessions on the same server and project; run terraform apply in one session and terraform destroy in the other.

[root@ip-10-0-1-221 ccit]# terraform apply

[root@ip-10-0-1-221 ccit]# terraform destroy -auto-approve
│ Error: Error acquiring the state lock
│ Error message: resource temporarily unavailable
│ Lock Info:
│   ID:        aba92f57-ee31-0a44-9387-0e2f004f6dad
│   Path:      terraform.tfstate
│   Operation: OperationTypeApply
│   Who:       root@ip-10-0-1-221.eu-west-1.compute.internal
│   Version:   1.12.1
│   Created:   2025-05-29 20:32:11.65976967 +0000 UTC
│   Info:
│ Terraform acquires a state lock to protect the state from being written
│ by multiple users at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the "-lock=false"
│ flag, but this is not recommended.

Once the apply is confirmed with yes and finishes, the lock is released and the destroy can run in the other session. (A stale lock can also be cleared with terraform force-unlock <LOCK_ID>, but use that carefully.)

                                             S3 and DynamoDB Configuration
  • Add the backend block to the existing code and run terraform init -upgrade.
  • DynamoDB -> Create table -> Partition key: LockID -> Create.
  • After the next apply, the state file is stored in the S3 bucket.
  • If dev-1 runs destroy while dev-2 runs apply, the state file is locked.
  • You can check the LockID in the table's items.
  • Once dev-1's destroy completes, the state file is unlocked and dev-2 can work.
DynamoDB is a NoSQL database; we need to create one table.
DynamoDB > Tables > Create table; add LockID as the partition key.
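The lock table created via the console above can also be sketched in Terraform itself; the table name matches the backend block used later, while the billing mode is an assumption:

```hcl
resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraformstate"    # must match dynamodb_table in the backend block
  billing_mode = "PAY_PER_REQUEST"   # assumption; on-demand avoids capacity sizing
  hash_key     = "LockID"            # Terraform's S3 backend expects this exact key

  attribute {
    name = "LockID"
    type = "S"
  }
}
```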


Step 2: Create an S3 bucket ccitbuckets3bucket and enable versioning.
[root@ip-10-0-1-188 ccit]# aws configure
AWS Access Key ID [None]: AKIATFBMO7H4CHQFIBIT
AWS Secret Access Key [None]: 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
Default region name [None]: eu-west-1
Default output format [None]:
[root@ip-10-0-1-188 ccit]# aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************IBIT shared-credentials-file
secret_key     ****************pRsC shared-credentials-file
    region                eu-west-1      config-file    ~/.aws/config
[root@ip-10-0-1-188 ccit]# vi cloudinfra.tf
[root@ip-10-0-1-188 ccit]# cat cloudinfra.tf
provider "aws" {
  region     = "eu-west-1"
  profile = "default"
}
terraform {
  backend "s3" {
    bucket         = "ccitbuckets3bucket"
    key            = "ccit/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraformstate"
  }
}

resource "aws_s3_bucket" "bucket" {
  bucket = "ccitbucketerraform"
}
The S3 backend-related plugins are downloaded:
[root@ip-10-0-1-188 ccit]# terraform init -upgrade
Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Using previously-installed hashicorp/aws v5.99.1

Terraform has been successfully initialized!

[root@ip-10-0-1-188 ccit]# terraform apply -auto-approve
Plan: 1 to add, 0 to change, 0 to destroy.
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 1s [id=ccitbucketerraform]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
S3 bucket created successfully


One state file created successfully

Step3:
[root@ip-10-0-1-188 ccit]# terraform destroy -auto-approve
Destroy complete! Resources: 1 destroyed

See the screenshot below: a backup version is created during the destroy, and the current versioned state file is much smaller.

Step4:
The DynamoDB table has been created, and a lock record for the Terraform state file is added.

After the destroy, run apply again and wait a while before answering yes:
[root@ip-10-0-1-188 ccit]# terraform apply

Go to the DynamoDB table again; the screenshot below shows the Terraform state file in a locked state.
Once you confirm the apply:
Enter a value: yes
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
[root@ip-10-0-1-188 ccit]#
The lock is released, and the record is cleared automatically.

Step 5: Terraform communicates with the DynamoDB table and the versioned S3 bucket even when no Terraform state file is present locally.
[root@ip-10-0-1-188 ccit]# ls -lrt
total 12
-rw-r--r-- 1 root root 2664 May 31 03:54 terraform.tfstate.backup
-rw-r--r-- 1 root root  181 May 31 03:54 terraform.tfstate
-rw-r--r-- 1 root root  381 May 31 04:34 cloudinfra.tf
[root@ip-10-0-1-188 ccit]# rm -rf t*
[root@ip-10-0-1-188 ccit]# terraform apply -auto-approve
[root@ip-10-0-1-188 ccit]# ls -lrt
total 4
-rw-r--r-- 1 root root 381 May 31 04:34 cloudinfra.tf

                                                                       Parallelism
  • Terraform executes resource creation, updates, and deletion in parallel whenever possible to speed up deployment.
  • However, it also respects dependencies between resources, ensuring that dependent resources are processed in the correct order.
  • By default, Terraform processes up to 10 resources at a time (the default parallelism limit).
  • terraform apply -auto-approve -parallelism=1
Note: the -parallelism flag applies to both apply and destroy.
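The effect of a concurrency limit can be illustrated with a plain-shell analogy (not Terraform itself), where xargs -P plays the role of -parallelism:

```shell
# Run 4 fake "resources", at most 2 at a time (like -parallelism=2);
# with -P 1 they would run strictly one after another.
seq 4 | xargs -P 2 -I{} sh -c 'sleep 0.1; echo "resource {} done"' | wc -l
# prints 4
```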

Dependency
The depends_on argument in Terraform is used to explicitly define dependencies between resources. This ensures that one resource is created, updated, or destroyed only after another resource has been properly handled.
We use the "depends_on" keyword to implement an explicit dependency.
Why use depends_on?
Terraform automatically determines dependencies between resources based on references.
However, in some cases dependencies are not directly referenced, and Terraform may try to create the resources in parallel:
A resource does not have a direct reference but must still wait for another resource.
There are ordering requirements, like provisioning network resources before instances.
A resource needs to be updated only after another has changed.
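For contrast, an implicit dependency needs no depends_on at all: simply referencing another resource's attribute orders the operations. A sketch, assuming the aws_instance.instance resource defined in the later script (the tag name is illustrative):

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "ccitbucketdependent"

  tags = {
    # Referencing the instance id makes Terraform create the instance first,
    # without an explicit depends_on.
    FirstInstance = aws_instance.instance[0].id
  }
}
```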

Step 1:
Previously the state was stored in S3 with DynamoDB locking; terraform init -migrate-state moves the tfstate back to local storage.

[root@ip-10-0-1-188 ccit]# terraform init -migrate-state
Successfully unset the backend "s3". Terraform will now operate locally.
[root@ip-10-0-1-188 ccit]# terraform apply -auto-approve
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

[root@ip-10-0-1-188 ccit]# cat cloudinfra.tf
provider "aws" {
  region     = "eu-west-1"
  profile = "default"
}

resource "aws_instance" "instone" {
  ami           = "ami-04e7764922e1e3a57"
  instance_type = "t2.micro"
  subnet_id     = "subnet-0477e85088645156b"
  count         = 2

  tags = {
    Name = "ccit${count.index}"
  }
}
See below: the two instances initialize at the same time.


[root@ip-10-0-1-188 ccit]# ls -lrt
total 16
-rw-r--r-- 1 root root  256 May 31 05:58 cloudinfra.tf
-rw-r--r-- 1 root root 9355 May 31 05:59 terraform.tfstate
Step2:
[root@ip-10-0-1-188 ccit]# terraform destroy -auto-approve

Step 3: With -parallelism=1, the instances are created one after another:
[root@ip-10-0-1-188 ccit]# terraform apply -parallelism=1


Step 4: If I add an S3 bucket to the same script, the bucket and the instances would be created in parallel.
We can add a dependency so the instances are created first and the S3 bucket waits for instance creation to complete.

 [root@ip-10-0-1-188 ccit]# cat cloudinfra.tf
provider "aws" {
  region     = "eu-west-1"
  profile = "default"
}

resource "aws_instance" "instance" {
  ami           = "ami-04e7764922e1e3a57"
  instance_type = "t2.micro"
  subnet_id     = "subnet-0477e85088645156b"
  count         = 2

  tags = {
    Name = "ccit${count.index}"
  }
}
resource "aws_s3_bucket" "bucket" {
  bucket     = "ccitbucketdependent"
  depends_on = [aws_instance.instance]
}
With the above script, the instances are created first and then the S3 bucket; compare the timestamps below.
Instances and S3 bucket
[root@ip-10-0-1-188 ccit]# terraform apply -auto-approve

The S3 bucket was created about 44 seconds after the two instances above.


--Thanks

