Modularity with Terraform Modules

To keep your configurations clean and easy to read, you should avoid code duplication. In a programming language such as Python or Go, you do this by putting the code you want to reuse in a function. For example:

package main

import "fmt"

// avg returns the integer average of its arguments.
func avg(numbers ...int) int {
	var sum int
	for _, number := range numbers {
		sum += number
	}
	return sum / len(numbers)
}

func main() {
	var average int

	average = avg(1, 2, 3, 4, 5)
	fmt.Println("average is", average) // using the avg function instead of duplicating the average logic

	average = avg(1, 2, 3, 4, 5, 9, 10, 18)
	fmt.Println("average is", average) // using the avg function instead of duplicating the average logic

	average = avg(1, 2, 3, 4, 5, 9, 10, 18, 87, 90, 23)
	fmt.Println("average is", average)
}

We have the same concept in Terraform as well; Terraform just calls them modules instead of functions.

A Terraform module is any number of Terraform configuration files (.tf files) in a folder. Modules, just like functions, can accept inputs using Terraform variables and can return values using Terraform outputs. Hence, a typical module structure contains these three files:

  • main.tf (the function body in a programming context)
  • variables.tf (the function inputs in a programming context)
  • outputs.tf (the function returns/outputs in a programming context)

It's important to note that modules on their own do not create any resources; they only take effect when the root module (the module with the terraform and provider blocks) uses them, just as a function does nothing until the main function calls it.

Here's an example of creating multiple S3 buckets with versioning and encryption enabled. We use a module to create each bucket so that we don't have to repeat ourselves. The module receives one input (via the variables.tf file) named bucket_name and returns one value (via the outputs.tf file) named bucket_arn, which is the Amazon Resource Name (ARN) of the bucket.

Here's the directory structure of our configurations:

infra
└───modules
│   └───s3_version_encryption
│       │   main.tf
│       │   variables.tf
│       │   outputs.tf
│   main.tf
│   variables.tf
│   outputs.tf
│   versions.tf

modules/s3_version_encryption/main.tf

resource "aws_s3_bucket" "my_bucket" {
  bucket = var.bucket_name
}

resource "aws_s3_bucket_versioning" "versioning" {
  bucket = aws_s3_bucket.my_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "encryption" {
  bucket = aws_s3_bucket.my_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

modules/s3_version_encryption/variables.tf

variable "bucket_name" {
  type = string
}

modules/s3_version_encryption/outputs.tf

output "bucket_arn" {
  value = aws_s3_bucket.my_bucket.arn
}

main.tf

module "s3_1" {
  source      = "./modules/s3_version_encryption"
  bucket_name = "bucket_1_7873847384"
}

module "s3_2" {
  source      = "./modules/s3_version_encryption"
  bucket_name = "bucket_2_90328327788"
}

outputs.tf

# note how we are addressing the output from a module
output "bucket_1_arn" {
  value = module.s3_1.bucket_arn
}

output "bucket_2_arn" {
  value = module.s3_2.bucket_arn
}

versions.tf

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}
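As a side note, the required_providers block also accepts an optional version constraint. Pinning one protects you from behavior changes in future provider releases; the exact constraint below is illustrative, so pin whatever version you have actually tested against:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Illustrative constraint: any 5.x release of the AWS provider.
      # Replace with the version your configuration is tested against.
      version = "~> 5.0"
    }
  }
}
```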

variables.tf

# nothing here

After adding a module, you must run terraform init before planning or applying so that Terraform can download and resolve the modules. Here, our modules live locally on our system, but Terraform still needs to map each module you use to its source code, whether that's on your system or in a remote repository. So, don't forget to run terraform init; otherwise, you'll run into errors.

And as easy as that, you now have a reusable module that creates an S3 bucket with versioning and encryption enabled. All you have to do is pass a unique name for your S3 bucket. Our configuration is DRY, clean, and easy to read.
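You can take the DRY idea one step further: since Terraform 0.13, module blocks also accept for_each, so the two nearly identical module calls above can be collapsed into one. A sketch (the local value name bucket_names is my own; the bucket names match the ones used earlier):

```hcl
locals {
  # Hypothetical helper local holding the bucket names from the
  # module blocks shown earlier.
  bucket_names = toset(["bucket_1_7873847384", "bucket_2_90328327788"])
}

module "s3" {
  source   = "./modules/s3_version_encryption"
  for_each = local.bucket_names

  bucket_name = each.value
}
```

With for_each, each instance's outputs are addressed per key, e.g. module.s3["bucket_1_7873847384"].bucket_arn.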

You will find the above configuration code in the repo.