🏗️ Infrastructure as Code with Terraform

Multi-Cloud Infrastructure Automation - AWS, Azure & GCP Provisioning

🎯 What is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure through machine-readable definition files, rather than manual configuration or interactive tools. Terraform is the industry-leading IaC tool that enables multi-cloud infrastructure deployment with a single, declarative language.

This project demonstrates provisioning identical infrastructure across AWS, Azure, and Google Cloud Platform using Terraform. It includes VPC/VNet setup, compute instances, databases, load balancers, and storage resources - all defined as code and version-controlled.

  • 3 Cloud Providers
  • 50+ Resources Managed
  • 100% Infrastructure as Code
  • 5min Deployment Time
  • 0 Manual Steps
  • Reproducibility

✨ Why Terraform?

🌐
Multi-Cloud Support
Single tool for AWS, Azure, GCP, and 3,000+ providers. Avoid vendor lock-in.
📝
Declarative Language
Define desired state, not steps. Terraform figures out how to achieve it.
🔄
State Management
Tracks real infrastructure state. Detects drift and enables safe updates.
👁️
Plan Before Apply
Preview changes before execution. No surprises or accidental deletions.
📦
Modular & Reusable
Create modules for common patterns. Share and reuse across projects.
🔐
Version Control
Infrastructure changes tracked in Git. Audit trail and rollback capability.

🚀 Key Features

🌐
VPC/Network Setup
Automated creation of Virtual Private Clouds (VPC/VNet) with public and private subnets, route tables, internet gateways, and security groups. Consistent networking across all clouds.
💻
Compute Instances
Provision EC2 (AWS), Virtual Machines (Azure), and Compute Engine (GCP) instances with auto-scaling groups, load balancers, and health checks.
🗄️
Managed Databases
Deploy RDS (AWS), Azure Database, and Cloud SQL (GCP) with automated backups, replication, and high availability configurations.
📦
Object Storage
Configure S3 (AWS), Blob Storage (Azure), and Cloud Storage (GCP) buckets with versioning, lifecycle policies, and encryption.
⚖️
Load Balancing
Set up Application Load Balancers (AWS), Load Balancers (Azure), and Cloud Load Balancing (GCP) with SSL termination and health checks.
🔐
IAM & Security
Define IAM roles, policies, security groups, firewalls, and access controls. Implement least-privilege access across all resources.
🔔
Monitoring & Alerts
Configure CloudWatch (AWS), Monitor (Azure), and Cloud Monitoring (GCP) with custom metrics, dashboards, and alerting rules.
🔄
CI/CD Integration
Terraform workflows integrated with GitHub Actions, GitLab CI, and Jenkins for automated infrastructure deployment pipelines.
🌍
Multi-Region
Deploy resources across multiple regions for disaster recovery and low-latency access. Automated DNS routing and failover.

☁️ Multi-Cloud Infrastructure

Same infrastructure patterns deployed across AWS, Azure, and Google Cloud Platform. Each provider-specific implementation uses native services while maintaining consistent architecture.
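
One way to keep a single configuration while targeting several regions (or accounts) is provider aliasing: each aliased provider can be passed to a separate instantiation of the same module. A minimal sketch, assuming the `./modules/vpc` module from this project; the region names and CIDRs are illustrative:

```hcl
# Two aliased AWS providers targeting different regions
provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

provider "aws" {
  alias  = "dr"
  region = "us-west-2"
}

# The same VPC module deployed once per region for disaster recovery
module "vpc_primary" {
  source    = "./modules/vpc"
  providers = { aws = aws.primary }
  vpc_cidr  = "10.0.0.0/16"
}

module "vpc_dr" {
  source    = "./modules/vpc"
  providers = { aws = aws.dr }
  vpc_cidr  = "10.10.0.0/16"
}
```

The same pattern extends across clouds: because each module receives its provider explicitly, the calling configuration stays declarative and a single `terraform plan` covers every region.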

🟠
Amazon Web Services

Resources Provisioned:

  • VPC with public/private subnets
  • EC2 instances with Auto Scaling
  • Application Load Balancer (ALB)
  • RDS PostgreSQL database
  • S3 buckets for storage
  • Route53 DNS management
  • CloudWatch monitoring & logs
  • IAM roles & policies
🔵
Microsoft Azure

Resources Provisioned:

  • Virtual Network (VNet) with subnets
  • Virtual Machines with Scale Sets
  • Azure Load Balancer
  • Azure Database for PostgreSQL
  • Blob Storage accounts
  • Azure DNS zones
  • Azure Monitor & Log Analytics
  • Azure RBAC & Managed Identities
🔴
Google Cloud Platform

Resources Provisioned:

  • VPC network with custom subnets
  • Compute Engine with Managed Instance Groups
  • Cloud Load Balancing
  • Cloud SQL PostgreSQL
  • Cloud Storage buckets
  • Cloud DNS managed zones
  • Cloud Monitoring & Logging
  • IAM roles & Service Accounts

💻 Terraform Code Examples

Production-ready Terraform configurations for multi-cloud infrastructure deployment.

main.tf - AWS Infrastructure
# AWS Provider Configuration
terraform {
  required_version = ">= 1.0"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  
  backend "s3" {
    bucket = "terraform-state-bucket"
    key    = "production/terraform.tfstate"
    region = "us-east-1"
    encrypt = true
  }
}

provider "aws" {
  region = var.aws_region
  
  default_tags {
    tags = {
      Environment = var.environment
      ManagedBy   = "Terraform"
      Project     = "Multi-Cloud-Infrastructure"
    }
  }
}

# VPC Configuration
module "vpc" {
  source = "./modules/vpc"
  
  vpc_cidr           = "10.0.0.0/16"
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
  public_subnets     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnets    = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
  
  enable_nat_gateway   = true
  enable_dns_hostnames = true
  
  tags = {
    Name = "${var.project_name}-vpc"
  }
}

# Application Load Balancer
module "alb" {
  source = "./modules/alb"
  
  vpc_id             = module.vpc.vpc_id
  public_subnet_ids  = module.vpc.public_subnet_ids
  security_group_ids = [module.security_groups.alb_sg_id]
  
  certificate_arn = var.ssl_certificate_arn
  
  tags = {
    Name = "${var.project_name}-alb"
  }
}

# Auto Scaling Group
module "asg" {
  source = "./modules/asg"
  
  vpc_id            = module.vpc.vpc_id
  private_subnet_ids = module.vpc.private_subnet_ids
  target_group_arns  = [module.alb.target_group_arn]
  
  instance_type     = "t3.medium"
  min_size          = 2
  max_size          = 10
  desired_capacity  = 3
  
  ami_id            = data.aws_ami.amazon_linux_2.id
  user_data         = file("${path.module}/scripts/user-data.sh")
  
  tags = {
    Name = "${var.project_name}-asg"
  }
}

# RDS Database
module "rds" {
  source = "./modules/rds"
  
  vpc_id              = module.vpc.vpc_id
  private_subnet_ids  = module.vpc.private_subnet_ids
  security_group_ids  = [module.security_groups.rds_sg_id]
  
  engine               = "postgres"
  engine_version       = "15.3"
  instance_class       = "db.t3.medium"
  allocated_storage    = 100
  
  database_name        = var.database_name
  master_username      = var.database_username
  master_password      = var.database_password
  
  multi_az             = true
  backup_retention     = 7
  
  tags = {
    Name = "${var.project_name}-rds"
  }
}

# S3 Bucket
module "s3" {
  source = "./modules/s3"
  
  bucket_name = "${var.project_name}-storage-${var.environment}"
  
  versioning_enabled = true
  encryption_enabled = true
  
  lifecycle_rules = [
    {
      id      = "transition-to-glacier"
      enabled = true
      
      transition = [
        {
          days          = 90
          storage_class = "GLACIER"
        }
      ]
    }
  ]
  
  tags = {
    Name = "${var.project_name}-s3"
  }
}
variables.tf - Input Variables
variable "aws_region" {
  description = "AWS region for resource deployment"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Environment name (dev, staging, production)"
  type        = string
  default     = "production"
}

variable "project_name" {
  description = "Project name for resource naming"
  type        = string
  default     = "multi-cloud-app"
}

variable "database_name" {
  description = "RDS database name"
  type        = string
  sensitive   = true
}

variable "database_username" {
  description = "RDS master username"
  type        = string
  sensitive   = true
}

variable "database_password" {
  description = "RDS master password"
  type        = string
  sensitive   = true
}

variable "ssl_certificate_arn" {
  description = "ARN of SSL certificate for ALB"
  type        = string
}
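
Input variables can also carry validation rules (Terraform 0.13+). A sketch of how the environment variable above could be hardened so that a typo fails at plan time rather than producing misnamed resources:

```hcl
variable "environment" {
  description = "Environment name (dev, staging, production)"
  type        = string
  default     = "production"

  # Reject any value outside the known set before planning proceeds
  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "environment must be one of: dev, staging, production."
  }
}
```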
outputs.tf - Output Values
output "vpc_id" {
  description = "VPC ID"
  value       = module.vpc.vpc_id
}

output "alb_dns_name" {
  description = "Application Load Balancer DNS name"
  value       = module.alb.dns_name
}

output "rds_endpoint" {
  description = "RDS database endpoint"
  value       = module.rds.endpoint
  sensitive   = true
}

output "s3_bucket_name" {
  description = "S3 bucket name"
  value       = module.s3.bucket_name
}

output "asg_name" {
  description = "Auto Scaling Group name"
  value       = module.asg.name
}

output "cloudwatch_dashboard_url" {
  description = "CloudWatch dashboard URL"
  value       = "https://console.aws.amazon.com/cloudwatch/home?region=${var.aws_region}#dashboards:name=${var.project_name}"
}
azure.tf - Azure Infrastructure
# Azure Provider Configuration
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# Resource Group
resource "azurerm_resource_group" "main" {
  name     = "${var.project_name}-rg"
  location = var.azure_region
  
  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

# Virtual Network
resource "azurerm_virtual_network" "main" {
  name                = "${var.project_name}-vnet"
  address_space       = ["10.1.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

# Subnets
resource "azurerm_subnet" "public" {
  name                 = "public-subnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.1.1.0/24"]
}

resource "azurerm_subnet" "private" {
  name                 = "private-subnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.1.11.0/24"]
}

# Virtual Machine Scale Set
resource "azurerm_linux_virtual_machine_scale_set" "main" {
  name                = "${var.project_name}-vmss"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  sku                 = "Standard_D2s_v3"
  instances           = 3
  
  admin_username = "azureuser"
  
  admin_ssh_key {
    username   = "azureuser"
    # file() does not expand "~"; use pathexpand() for home-relative paths
    public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
  }
  
  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts"
    version   = "latest"
  }
  
  os_disk {
    storage_account_type = "Standard_LRS"
    caching              = "ReadWrite"
  }
  
  network_interface {
    name    = "vmss-nic"
    primary = true
    
    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.private.id
      
      load_balancer_backend_address_pool_ids = [
        azurerm_lb_backend_address_pool.main.id
      ]
    }
  }
}

# Azure Database for PostgreSQL
resource "azurerm_postgresql_flexible_server" "main" {
  name                   = "${var.project_name}-psql"
  resource_group_name    = azurerm_resource_group.main.name
  location               = azurerm_resource_group.main.location
  
  version                = "15"
  administrator_login    = var.database_username
  administrator_password = var.database_password
  
  sku_name   = "GP_Standard_D2s_v3"
  storage_mb = 102400
  
  backup_retention_days = 7
  geo_redundant_backup_enabled = true
}

# Storage Account
resource "azurerm_storage_account" "main" {
  name                     = "${replace(var.project_name, "-", "")}storage"
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  
  blob_properties {
    versioning_enabled = true
  }
}
gcp.tf - Google Cloud Infrastructure
# GCP Provider Configuration
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
}

# VPC Network
resource "google_compute_network" "main" {
  name                    = "${var.project_name}-vpc"
  auto_create_subnetworks = false
}

# Subnets
resource "google_compute_subnetwork" "public" {
  name          = "public-subnet"
  ip_cidr_range = "10.2.1.0/24"
  region        = var.gcp_region
  network       = google_compute_network.main.id
}

resource "google_compute_subnetwork" "private" {
  name          = "private-subnet"
  ip_cidr_range = "10.2.11.0/24"
  region        = var.gcp_region
  network       = google_compute_network.main.id
}

# Instance Template
resource "google_compute_instance_template" "main" {
  name         = "${var.project_name}-template"
  machine_type = "e2-medium"
  
  disk {
    source_image = "debian-cloud/debian-11"
    auto_delete  = true
    boot         = true
  }
  
  network_interface {
    subnetwork = google_compute_subnetwork.private.id
    # No access_config block: instances on the private subnet get no
    # external IP (use Cloud NAT for outbound internet access)
  }
  
  metadata_startup_script = file("${path.module}/scripts/startup.sh")
}

# Managed Instance Group
resource "google_compute_instance_group_manager" "main" {
  name               = "${var.project_name}-mig"
  base_instance_name = "${var.project_name}-instance"
  zone               = "${var.gcp_region}-a"
  target_size        = 3
  
  version {
    instance_template = google_compute_instance_template.main.id
  }
  
  named_port {
    name = "http"
    port = 80
  }
  
  auto_healing_policies {
    health_check      = google_compute_health_check.main.id
    initial_delay_sec = 300
  }
}

# Cloud SQL PostgreSQL
resource "google_sql_database_instance" "main" {
  name             = "${var.project_name}-db"
  database_version = "POSTGRES_15"
  region           = var.gcp_region
  
  settings {
    tier = "db-custom-2-7680"
    
    backup_configuration {
      enabled    = true
      start_time = "03:00"
    }
    
    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.main.id
    }
  }
  
  deletion_protection = true
}

# Cloud Storage Bucket
resource "google_storage_bucket" "main" {
  name          = "${var.project_name}-storage-${var.environment}"
  location      = var.gcp_region
  force_destroy = false
  
  versioning {
    enabled = true
  }
  
  lifecycle_rule {
    condition {
      age = 90
    }
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
  }
  
  encryption {
    default_kms_key_name = google_kms_crypto_key.bucket_key.id
  }
}

🔄 Terraform Workflow

Standard Terraform workflow for infrastructure deployment and management.

1
terraform init
Initialize Terraform working directory. Downloads provider plugins and sets up backend for state management. Must be run before any other commands.
2
terraform fmt
Format Terraform configuration files to canonical style. Ensures consistent formatting across team and version control.
3
terraform validate
Validate configuration syntax and internal consistency. Catches errors before planning or applying changes.
4
terraform plan
Preview changes Terraform will make to infrastructure. Shows what will be created, modified, or destroyed. No changes are applied.
5
terraform apply
Apply the planned changes to infrastructure. Prompts for confirmation before executing. Updates state file with real infrastructure state.
6
terraform destroy
Destroy all resources managed by Terraform. Useful for tearing down temporary environments or cleaning up resources.

🔧 Advanced Commands

  • terraform state list - List all resources in state (inventory of managed resources)
  • terraform state show - Show detailed resource state (inspect specific resource attributes)
  • terraform import - Import existing infrastructure (bring manually created resources under Terraform)
  • terraform apply -replace=ADDR - Mark a resource for recreation; replaces the deprecated terraform taint (force replacement of a specific resource)
  • terraform apply -refresh-only - Update state from real infrastructure; replaces the deprecated terraform refresh (sync state with actual resources)
  • terraform workspace - Manage multiple environments (separate dev, staging, production states)
  • terraform output - Display output values (get DNS names, IPs, resource IDs)
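
Since Terraform 1.5, the CLI-driven import workflow has a declarative alternative: import blocks, which make adoptions plannable and reviewable in a pull request. A sketch, assuming an existing S3 bucket; the bucket name and resource address are illustrative:

```hcl
# Declarative import (Terraform 1.5+): terraform plan previews the adoption,
# terraform apply records the existing bucket in state
import {
  to = aws_s3_bucket.legacy
  id = "legacy-bucket-name"
}

# The resource definition Terraform will manage going forward
resource "aws_s3_bucket" "legacy" {
  bucket = "legacy-bucket-name"
}
```

Once applied, the import block can be deleted; the resource is tracked in state like any other.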

✨ Best Practices

📦
Use Modules
Create reusable modules for common patterns. Share across projects and teams.
🔐
Remote State
Store state in S3/Azure Blob/GCS with locking. Never commit state to version control.
🏷️
Consistent Tagging
Tag all resources with Environment, Owner, Project. Enables cost tracking and compliance.
🔒
Sensitive Data
Use variables with sensitive = true. Store secrets in vault services, not in code.
📝
Version Control
Commit all .tf files to Git. Use pull requests for infrastructure changes review.
🔄
CI/CD Pipeline
Automate terraform plan on PRs. Require approval before terraform apply runs.
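
The remote-state practice above can be sketched as an S3 backend with DynamoDB locking; the bucket and table names are illustrative and must exist before terraform init:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"           # versioned, encrypted bucket
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"        # table with a LockID string hash key
  }
}
```

With the lock table in place, two concurrent applies cannot write state at the same time: the second run blocks (or fails fast) until the first releases the lock.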