terraform init. It first scans the Terraform configuration for the required plugins and then downloads them from HashiCorp's servers. If there is no internet connection, the plugins can be downloaded manually from releases.hashicorp.com and unzipped where the Terraform binary expects to find them, under .terraform/plugins/<os>_<architecture>.
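As a sketch of the offline route (the provider name and VERSION placeholder below are illustrative; check releases.hashicorp.com for real release numbers):

```shell
# Hypothetical offline install of a provider plugin (Terraform 0.11-era layout).
# The zip itself must be fetched on a machine with internet access, e.g.:
#   curl -LO https://releases.hashicorp.com/terraform-provider-aws/VERSION/terraform-provider-aws_VERSION_linux_amd64.zip
#   unzip terraform-provider-aws_VERSION_linux_amd64.zip -d .terraform/plugins/linux_amd64

# Create the directory Terraform searches for plugins in:
PLUGIN_DIR=".terraform/plugins/linux_amd64"
mkdir -p "$PLUGIN_DIR"
ls -d "$PLUGIN_DIR"
```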
export EC2_REGION=us-east-1
export AWS_REGION=us-east-1
export AWS_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxx
ansible-playbook -vv --extra-vars "instance_name=ansible-rds-poc admin_username=foo admin_password=Foobar!9 environment_name=dev vpcid=vpc-xxxxx" create_rds.yml
rds_instance. Then, you should create a test.tf file with the following content:
provider "aws" {
  access_key = "xxxxxxxxxxxxxxxxxxxxxx"
  secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxx"
  region     = "us-east-1"
}
module "rds-instance" {
  source         = "./rds_instance"
  instance_name  = "tf-rds-poc"
  vpcid          = "vpc-xxxxx"
  admin_username = "foo"
  admin_password = "Foobar!9"
}
.tfvars file is the right choice, as these files can be encrypted and are usually listed in .gitignore. The directory structure should look like this:
./test.tf
./rds_instance/main.tf
./rds_instance/variables.tf
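For example, the hard-coded credentials could instead live in a terraform.tfvars file (the variable names below are illustrative and must match variable declarations in your configuration):

```hcl
# terraform.tfvars -- keep this file out of version control (.gitignore)
access_key     = "xxxxxxxxxxxxxxxxxxxxxx"
secret_key     = "xxxxxxxxxxxxxxxxxxxxxxxxx"
admin_password = "Foobar!9"
```

Terraform loads terraform.tfvars automatically; other variable files can be passed explicitly with -var-file.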
terraform init and then terraform apply to create the resources. You can also run terraform plan first to preview the actions Terraform will take.
source parameter to something like git::ssh://git@github.com/gabocic/gitests.git//rds_instance, the code for the Terraform module will be pulled from my public GitHub repository automatically. If the files change upstream, you can update the local copy by running terraform get -update.
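A minimal sketch of the module block with a remote source, keeping the other arguments from the earlier example unchanged:

```hcl
module "rds-instance" {
  # Pulled from the public GitHub repository at init/get time
  source         = "git::ssh://git@github.com/gabocic/gitests.git//rds_instance"
  instance_name  = "tf-rds-poc"
  vpcid          = "vpc-xxxxx"
  admin_username = "foo"
  admin_password = "Foobar!9"
}
```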
terraform destroy, Terraform reverses the steps it took to create your resources and removes them, taking dependencies into account.
[rds_instances]
enviro1 ansible_connection=local instance_name=ansible-poc-1 environment_name=enviro1
enviro2 ansible_connection=local instance_name=ansible-poc-2 environment_name=enviro2
enviro3 ansible_connection=local instance_name=ansible-poc-3 environment_name=enviro3

[rds_instances:vars]
admin_username=foo
admin_password=Foobar
vpcid=vpc-xxxxx
hosts: rds_instances within the Ansible play and making sure the inventory file is in the default path or passed explicitly, we can provision three copies of the environment using different instance_name values. Of course, passing the variable values to ansible-playbook explicitly is no longer required.

To achieve something equivalent with Terraform, you can use workspaces. A full explanation of how they work is beyond the scope of this blog post, but they are essentially a way to keep different deployments independent by maintaining separate state data. You can create a workspace by running terraform workspace new <workspace_name>, and switch between workspaces with terraform workspace select <workspace_name>. Once the workspaces are created, you just need to check which one your configuration is running under and adjust the input variables accordingly. Two modifications are needed for this: first, create a variables.tf file at the same level as test.tf and declare a map holding all the workspace names:
variable "workspaces" {
  type = "map"
  default = {
    enviro1 = "enviro1"
    enviro2 = "enviro2"
    enviro3 = "enviro3"
    dev     = "dev"
  }
}
test.tf, we need to add some logic to detect which workspace the configuration is being launched from:
...
  region = "us-east-1"
}

locals {
  environment = "${lookup(var.workspaces, terraform.workspace, "dev")}"
}
module "rds-instance" {
  ...
  instance_name = "${local.environment}"
}

Once the changes are implemented, the instance_name variable will match the workspace name at runtime. Using this as a key, you can access different maps holding values for each environment.
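As an illustration of that last point (the instance_classes variable and the instance_class module argument below are hypothetical, not part of the original module):

```hcl
# Hypothetical per-environment settings, keyed by the workspace-derived name
variable "instance_classes" {
  type = "map"
  default = {
    enviro1 = "db.t2.micro"
    enviro2 = "db.t2.small"
    enviro3 = "db.t2.medium"
    dev     = "db.t2.micro"
  }
}

module "rds-instance" {
  source         = "./rds_instance"
  instance_name  = "${local.environment}"
  instance_class = "${lookup(var.instance_classes, local.environment)}"
  vpcid          = "vpc-xxxxx"
  admin_username = "foo"
  admin_password = "Foobar!9"
}
```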
First, as its own documentation points out, Terraform was not meant to replace configuration managers, even though some functionality overlaps. Terraform describes itself as a "tool for building, changing, and versioning infrastructure safely and efficiently", and that is where its sweet spot lies: infrastructure as code. Configuration managers like Ansible are arguably more flexible, in that you can both provision and configure resources with them, whereas Terraform cannot properly configure a resource on its own. You can, however, integrate it with configuration managers through provisioners to set up resources after they are created. Specifically for provisioning RDS environments, Terraform is simpler to set up, handles dependencies and parallelism automatically, and lets you destroy the resources it created without writing any additional code.
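As a sketch of that provisioner integration (the resource, AMI, and playbook name below are hypothetical; RDS instances themselves are configured through resource arguments rather than provisioners):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-xxxxx"
  instance_type = "t2.micro"

  # local-exec runs on the machine executing terraform apply,
  # after the resource has been created
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.private_ip},' configure.yml"
  }
}
```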