Channel: Terraform - HashiCorp Discuss

Terraform and AWS Tag Policy


@riyadali15 wrote:

Hi guys, I have a question. In my AWS account I have Tag Policies assigned to a user so that, for example, an EC2 instance can't be created without the mandatory tags defined in the policy (Department: HR, Finance; Customer: XXXX; etc.).

If I write a Terraform script to create a simple EC2 instance, do I need to add the tag syntax to the script regardless, and if I don't, will it stop me?

tags = {
  Department = "test"
}
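
For illustration, a minimal instance with the mandatory tags spelled out (the AMI ID and tag values are placeholders; whether AWS actually rejects a non-compliant request depends on how enforcement is configured in the tag policy):

resource "aws_instance" "example" {
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t3.micro"

  tags = {
    Department = "HR"   # must be one of the values allowed by the policy
    Customer   = "XXXX" # placeholder
  }
}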

Posts: 1

Participants: 1

Read full topic


Multiple Plan+Apply stages


@skolodyazhnyy wrote:

I’m trying to provision my environment using Terraform; it creates everything from scratch, so in case of disaster I can simply run terraform apply and create all the AWS and K8s resources I need.

Unfortunately, I keep hitting the same wall. Some resources need to be created before others; furthermore, some resources need to be created before planning changes in other resources. Terraform’s PLAN everything > APPLY everything model does not work very well for me. What seems to be missing (or I don’t know how to set it up) is multiple plan+apply > plan+apply runs, so I can create basic infrastructure, deploy the next tier, and then the next tier.

For example, I have an aws_eks_cluster resource to provision the EKS cluster where my application will run, and a kubernetes_deployment resource for the application itself.
Logically, I can’t plan kubernetes_deployment before aws_eks_cluster is created, because Terraform can’t connect to the EKS cluster to check what exists and what doesn’t. It somehow works when nothing exists, because the Kubernetes resources don’t exist on the first run; the EKS cluster is provisioned and then the resources are created. But it quickly fails if Terraform decides to re-create the EKS cluster: during planning Terraform sees the Kubernetes resources and thinks “OK, everything is up to date”, then during apply it destroys the cluster and does not re-create the Kubernetes resources (they looked fine during planning).

Another example is for_each and count resources: Terraform can’t plan these without knowing the for_each and count values, which makes sense. You have to run terraform apply with -target to create the dependent resources first. That makes sense too, but it shows the same issue: you need to create some resources before planning others.

Another example: planning and creating rabbitmq_user resources after provisioning the kubernetes_deployment for the RabbitMQ server. I hit the same problem again and again.

I tried to use Terraform Cloud, but it simply does not work, because there is no way to use the -target argument.

Then I tried to run Terraform with -target, but it’s very complicated: you need to keep track of “first-tier resources” so you can provision them first (like the EKS cluster, or resources used in for_each or count expressions).

I also tried to separate everything into independent Terraform folders (terraform apply A, terraform apply B), but that created even more mess, because there is no easy way to pass parameters between two Terraform folders, and my application’s tf code ends up spread among multiple folders: create the SSL certificate in one folder, create the SSL verification DNS records in another.

Modules don’t work either, because within one run Terraform still does PLAN everything + APPLY everything.

Maybe there is a way to handle this, and I would really appreciate it if somebody could point it out. But it seems to me Terraform lacks some sort of staging system that would PLAN stage 1 > APPLY stage 1 > PLAN stage 2 > APPLY stage 2, all within a single Terraform run, so I can pass variables around and provision modules partially in each stage.

The way I imagine it with multiple stages (ideally running in Terraform Cloud) would be something like:

$ terraform apply
Plan stage 1: VPC, EKS etc
Apply stage 1: VPC, EKS etc
Plan stage 2: k8s workloads
Apply stage 2: k8s workloads

This way, if the EKS cluster is re-created in stage 1, Terraform will see the correct state during stage 2 planning, so the stage 2 apply will create the missing resources. Maybe it could be achieved with some sort of plan_depends_on parameter.
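
For what it’s worth, outputs can be passed between separate root configurations through the terraform_remote_state data source, which softens the multi-folder approach a little. A minimal sketch, assuming the cluster configuration uses an S3 backend and exports a cluster_endpoint output (all names are placeholders):

data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"               # placeholder bucket
    key    = "cluster/terraform.tfstate" # placeholder key
    region = "us-east-1"
  }
}

provider "kubernetes" {
  host = data.terraform_remote_state.cluster.outputs.cluster_endpoint
}

It still suffers the plan-time staleness described above when both configurations are applied in one pipeline, though.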

Posts: 1

Participants: 1

Read full topic

How to output multiple public IPs?


@joaooamaral wrote:

Hello all,

I’m kinda new to Terraform, and I have a project where we need to use count to create multiple resources based on the value of an input variable. In the end I’d like to output all of the provisioned public IPs, but I haven’t been able to, since we are using count.

I’ve learned that public IP addresses aren’t allocated until they’re attached to a device, and I found an example of how to fetch the public IP of a newly allocated VM. But I can’t get it to work when using count.

Here is the relevant code that I have so far:

data "azurerm_public_ip" "FEs-PIP" {
  name                = azurerm_public_ip.pip-coaching.*.name
  resource_group_name = azurerm_virtual_machine.vm-coaching.*.resource_group_name
}

resource "azurerm_public_ip" "pip-coaching" {
  count                   = var.coaching-persons * 3
  name                    = "Pip-Coaching-${count.index}"
  location                = var.location
  resource_group_name     = azurerm_resource_group.rg-coaching.name
  allocation_method       = "Dynamic"
  idle_timeout_in_minutes = 30
}

resource "azurerm_network_interface" "nic-coaching" {
  count               = var.coaching-persons * 3
  name                = "Nic-${count.index}-Coaching"
  location            = var.location
  resource_group_name = azurerm_resource_group.rg-coaching.name

  ip_configuration {
    name                          = "Nic-${count.index}-Coaching-ipconfig"
    subnet_id                     = azurerm_subnet.snet-coaching.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.pip-coaching[count.index].id
  }
}


resource "azurerm_virtual_machine" "vm-coaching" {
  count                            = var.coaching-persons * 3
  name                             = "VM-${count.index}-Coaching"
  network_interface_ids            = [azurerm_network_interface.nic-coaching[count.index].id]
...}


output "FEs-IPs" {
  description = "IPs of all FEs provisoned."
  value       = azurerm_virtual_machine.vm-coaching.*.public_ip_address
}

The ultimate goal is to display a list of all provisioned public IPs after running terraform apply.
I can’t get it to work like this. If you could point me in the right direction, I’d be thankful.
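
One possible direction, assuming the data source is given a count and per-index references as above and waits on the VMs (e.g. via depends_on): output the data source’s ip_address list instead of an attribute of the VM, since azurerm_virtual_machine has no public_ip_address attribute. A sketch:

output "FEs-IPs" {
  description = "IPs of all FEs provisioned."
  # Reads the dynamically allocated addresses once the VMs are attached.
  value       = data.azurerm_public_ip.FEs-PIP.*.ip_address
}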

Posts: 1

Participants: 1

Read full topic

Configuring a static IP for an EKS cluster


@JesterOrNot wrote:

I’m kind of new to AWS and Terraform. How can I configure a static IP that would be resolvable for use with Cloud DNS to host the app?
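
One common pattern, sketched here with placeholder names, is to put a Network Load Balancer in front of the cluster and give its subnet mappings pre-allocated Elastic IPs; those addresses are static and can be targeted by DNS records:

resource "aws_eip" "ingress" {
  count = 2
  vpc   = true
}

resource "aws_lb" "ingress" {
  name               = "eks-ingress" # placeholder
  load_balancer_type = "network"

  # var.public_subnet_ids is a hypothetical list of the cluster's public subnets.
  subnet_mapping {
    subnet_id     = var.public_subnet_ids[0]
    allocation_id = aws_eip.ingress[0].id
  }

  subnet_mapping {
    subnet_id     = var.public_subnet_ids[1]
    allocation_id = aws_eip.ingress[1].id
  }
}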

Posts: 1

Participants: 1

Read full topic

Confused about workspace variables


@robinbryce wrote:

Hi,

I don’t understand how values set in the Terraform Cloud workspace UI should be referenced in Terraform files.

I have FOO="123" set in the workspace.

I expected to be able to do something like

variable "FOO" {
  type = string
}

in the root module and any child modules that consume it.

But I get: The argument "FOO" is required, but no definition was found.

If I provide a default value, then that default value is used; the value set in the workspace doesn’t get picked up.

I have attempted to follow the advice here

But I can’t put it together at all.
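
For reference, Terraform Cloud separates “Terraform Variables” from “Environment Variables” in the workspace UI, and a value entered as an environment variable only reaches a Terraform variable when it carries the TF_VAR_ prefix. A sketch of the pairing:

# Workspace UI, "Environment Variables" section (not "Terraform Variables"):
#   TF_VAR_FOO = "123"

# Configuration:
variable "FOO" {
  type = string
}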

Posts: 1

Participants: 1

Read full topic

Using module provider option, can't see the created resources after successful terraform apply


@nickbawt wrote:

Using Terraform v0.12.23, provider.aws v2.59.0

Very new to Terraform, so apologies in advance for any wrong terminology or methodology.

I’m seeing a strange quirk when using the providers option within a module block. The setup is very basic, so I’m not sure where I’m going wrong. My intent is to create AWS resources in multiple regions, so I want to use the default provider plus aliased providers.

My main.tf:
provider "aws" {
  region     = "ap-southeast-2"
  access_key = var.access_key
  secret_key = var.secret_key
}

provider "aws" {
  alias  = "Singapore"
  region = "ap-southeast-1"
}

When I call my module to create a VPC without the providers option, the VPC resource is created successfully through terraform apply -> yes. I can then log in to my AWS portal with root access and see the new resource in the correct region (ap-southeast-2 in this example).

Module without provider option:
module "VPC_create" {
  source = "./modules/VPC_create"
}

Now, when I include the provider information in the module block, the resources are still created in the correct region (ap-southeast-1 in this example) according to Terraform (vpc_ids are generated), but I can’t see the resource in the AWS portal using an account with root access.

module "VPC_create" {
  source = "./modules/VPC_create"
  providers = {
    aws = aws.Singapore
  }
}

Is this something I’m doing wrong with Terraform, or perhaps an IAM issue with AWS? It’s quite frustrating, as calling the providers option within the module would suit my use case. Is there a better way of creating AWS resources across multiple regions? Thanks for any assistance.
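
One way to confirm which region the module actually targeted is to output an attribute that embeds the region, such as the VPC’s ARN. This assumes the module exposes a vpc_arn output, which is hypothetical here:

output "vpc_singapore_arn" {
  # An EC2 ARN embeds the region (e.g. arn:aws:ec2:ap-southeast-1:...),
  # confirming where the VPC landed regardless of the console's selected region.
  value = module.VPC_create.vpc_arn
}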

Posts: 1

Participants: 1

Read full topic

Error: Unsupported block type (in code defining cloudwatch metric alarm)

How to read a CSV file from Blob storage to take as input


@seetumbd wrote:

I’m building a Terraform template that takes input from a CSV file using
csvdecode(file("${path.module}/file.csv")), and it works fine if the file is local or in any other accessible location.
Now my CSV file is stored in Blob Storage, and I want to read the file from there and take the input from it.
Is there any way I can do that?
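
One possible workaround, assuming the blob is reachable over HTTPS with a SAS token (the account, container, and token below are placeholders), is the http data source:

data "http" "input_csv" {
  # var.sas_token is a hypothetical SAS query string granting read access.
  url = "https://${var.storage_account}.blob.core.windows.net/${var.container}/file.csv${var.sas_token}"
}

locals {
  rows = csvdecode(data.http.input_csv.body)
}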

Thanks

Posts: 1

Participants: 1

Read full topic


Suppress terraform init -force-copy doesn't work in a bash script


@bkalai321 wrote:

I need to suppress the prompt Do you want to migrate all workspaces to “local”? so I can run this in a script.
The first command is terraform workspace new example_1.
The second command is terraform init -force-copy -backend-config=path=
That still produces Do you want to migrate all workspaces to “local”?
How do I get past this so it can be run in a script?

Terraform v0.12.24

  • provider.aws v2.58.0
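
A possible non-interactive sequence, with a placeholder state path (whether the migration prompt disappears depends on the backend change being made):

#!/usr/bin/env bash
set -euo pipefail

terraform workspace new example_1

# -input=false makes init fail instead of prompting; TF_IN_AUTOMATION
# adjusts output for scripts. The path value is a placeholder.
TF_IN_AUTOMATION=1 terraform init \
  -input=false \
  -force-copy \
  -backend-config="path=/path/to/terraform.tfstate"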

Posts: 1

Participants: 1

Read full topic

Specifying a date in the future


@bingerk wrote:

Hi rock stars, I have a question/situation: I am trying to set up an Azure Automation Schedule to run at 8AM the following day; however, I cannot find any examples of, or a way to, do this across successive deployments. Meaning, I can set it up like this:

start_time = "2020-05-02T13:00:00Z"

But if we run a future deployment it will fail, since that timestamp is in the past. I tried using the following code:

start_time = "${split("T", timestamp())[0]}T13:00:00Z"

However, that will only work if you execute the deployment sometime between midnight and 7:55AM, since it pins the schedule to today’s date.

I’ve tried using timeadd, but that only accepts time units, not days (and again, I need this to run the next morning at 8AM regardless of what time the deployment runs).
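
For what it’s worth, timeadd does accept hour units, so "24h" can step one day ahead. A sketch of deriving tomorrow at 13:00Z (8AM Central):

locals {
  # Step one day ahead, then pin the clock time to 13:00Z.
  tomorrow   = timeadd(timestamp(), "24h")
  start_time = "${formatdate("YYYY-MM-DD", local.tomorrow)}T13:00:00Z"
}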

Below is the entire block I’m working with for reference.

resource "azurerm_automation_schedule" "resumedw" {
  name                    = "${azurerm_automation_runbook.resumedw.name}-Mon-Fri-8AM"
  resource_group_name     = azurerm_resource_group.shared.name
  automation_account_name = azurerm_automation_account.shared.name
  frequency               = "Week"
  interval                = 1
  start_time              = "2020-05-02T13:00:00Z"
  timezone                = "Central Standard Time"
  description             = "Resume Data Warehouses at 8AM Monday-Friday"
  week_days               = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

  lifecycle {
    ignore_changes = [
      start_time
    ]
  }
}

I certainly appreciate any assistance!

Posts: 2

Participants: 2

Read full topic

How to mark a null resource with a local provisioner as "done"


@albpal wrote:

Hi,

We deployed an environment on Azure with Terraform, and we have since refactored the Terraform templates to use modules. Now we are trying to build the new tfstate around those modules. We can import the built-in Azure resources without problems (with terraform import), but we don’t know how to “mark as done” the null_resources with a local-exec provisioner configured, which shouldn’t be executed anymore.

E.g., we want to “mark as done” tasks like the following one:

resource "null_resource" "databricks_ad_sync_token" {
  depends_on = [
    data.local_file.management_token,
    data.local_file.application_token,
    var.module_depends_on
  ]

  triggers = {
    trigger = timestamp()
  }

  provisioner "local-exec" {
    command = <<EOT
curl -s --request POST 'https://${var.region.name}.azuredatabricks.net/api/2.0/token/create' \
  --header 'Content-Type: application/json' \
  --header 'X-Databricks-Azure-Workspace-Resource-Id: /subscriptions/${data.azurerm_subscription.current.subscription_id}/resourceGroups/${var.resource_group_name}/providers/Microsoft.Databricks/workspaces/${var.databricks_ws_name}' \
  --header 'X-Databricks-Azure-SP-Management-Token: ${jsondecode(data.local_file.management_token.content).access_token}' \
  --header 'Authorization: Bearer ${jsondecode(data.local_file.application_token.content).access_token}' \
  --data-raw '{"comment": "Additional token for synchronization with AD"}' \
  --output '${path.cwd}/.temp/databricks_ad_sync_token_data.txt'
EOT
  }
}

So it’s imported into the new tfstate and doesn’t show up as modified when doing terraform plan/apply.

Does anyone know how to proceed? Any help is welcome.
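
One hedged approach, since null_resource does not support terraform import: move the existing entries between state files with terraform state mv instead of importing (the paths and addresses below are placeholders). Note that with trigger = timestamp() the resource would still re-run on every apply regardless:

terraform state mv \
  -state=old/terraform.tfstate \
  -state-out=new/terraform.tfstate \
  null_resource.databricks_ad_sync_token \
  module.databricks.null_resource.databricks_ad_sync_token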

Thanks in advance!

Best regards,
Albert.

Posts: 1

Participants: 1

Read full topic

Export SQL database as a Backup using Terraform


@asbchakri wrote:

I’m trying to export a SQL database as a backup using the Export functionality. I have the gcloud version:

gcloud beta sql export bak [INSTANCE_NAME] gs://[BUCKET_NAME]/sqldumpfile.gz \
  --database=[DATABASE_NAME]

I’m looking for the equivalent command using Terraform. Can anyone please help?
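
As far as I know the google provider has no first-class export resource; one hedged workaround is shelling out to gcloud from a provisioner (the variables stand in for the bracketed placeholders above):

resource "null_resource" "sql_export" {
  provisioner "local-exec" {
    command = <<EOT
gcloud beta sql export bak ${var.instance_name} \
  gs://${var.bucket_name}/sqldumpfile.gz \
  --database=${var.database_name}
EOT
  }
}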

Posts: 1

Participants: 1

Read full topic

Increase storage size in RDS Read-replica


@boobalana wrote:

Hi,
I am using Terraform to create an RDS instance and a read replica. I create the RDS instance with the aws_db_instance resource and allocated_storage of 100GB, so it creates the DB instance with 100GB and the read replica with the same 100GB.

Now I want to increase the RDS size. If I update allocated_storage to 150GB, it will update the DB instance, but will it increase the read replica’s storage as well? If not, how do I handle this?

Some advice would be helpful.
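
For reference, a read replica is its own aws_db_instance, and Terraform only changes what each resource block declares; so if the replica is managed in the same configuration, its allocated_storage has to be raised separately. A sketch with placeholder values:

resource "aws_db_instance" "primary" {
  # ... engine, instance_class, credentials ...
  allocated_storage = 150
}

resource "aws_db_instance" "replica" {
  replicate_source_db = aws_db_instance.primary.id
  instance_class      = aws_db_instance.primary.instance_class

  # Not grown automatically by changing the source; bump it here too.
  allocated_storage = 150
}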

Thanks

Posts: 1

Participants: 1

Read full topic

Terraform AWS Aurora PostgreSQL Serverless "currently unavailable"


@haloflightleader wrote:

Could someone please help me instantiate a Serverless Aurora PostgreSQL through Terraform?

I am using the aws_rds_cluster and aws_rds_cluster_instance resources to build a serverless Aurora PostgreSQL cluster in us-west-2 with the latest versions. However, I keep getting the error message below.

My research indicated this is likely because of engine_version, but I don’t know for sure. I have been playing with this all day without any luck.

Error Message:
“InvalidParameterValue: The engine mode serverless you requested is currently unavailable.”
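
For comparison, a hedged sketch of a Serverless (v1) cluster: serverless clusters take no aws_rds_cluster_instance at all, and only specific engine versions were supported for serverless PostgreSQL (10.7 at the time), which is a common source of this error. Identifier and credentials are placeholders:

resource "aws_rds_cluster" "postgres" {
  cluster_identifier = "serverless-postgres" # placeholder
  engine             = "aurora-postgresql"
  engine_mode        = "serverless"
  engine_version     = "10.7"
  master_username    = var.db_username
  master_password    = var.db_password

  scaling_configuration {
    min_capacity = 2
    max_capacity = 8
  }
}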

Terraform version:
Terraform v0.12.24

  • provider.aws v2.60.0
  • provider.external v1.2.0
  • provider.http v1.2.0
  • provider.local v1.4.0
  • provider.null v2.1.2
  • provider.random v2.2.1

Posts: 2

Participants: 1

Read full topic

Learn Terraform - Azure - Resource Dependencies sample code issue


@pvandervelde wrote:

The sample at the bottom of the Learn Terraform - Azure - Resource Dependencies page seems to be out of sync with the current version of Terraform (0.12.24).
Line 61:

# Create network interface
resource "azurerm_network_interface" "nic" {
  name                      = "myNIC"
  location                  = "westus2"
  resource_group_name       = azurerm_resource_group.rg.name
  network_security_group_id = azurerm_network_security_group.nsg.id

  ip_configuration {
    name                          = "myNICConfg"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = azurerm_public_ip.publicip.id
  }
}

The network_security_group_id argument doesn’t seem to be part of the azurerm_network_interface resource anymore.

I haven’t quite figured out where to link the NSG, but if I do I’ll update this post.
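
For what it’s worth, in azurerm 2.x the NSG link moved to a separate association resource; a minimal sketch against the sample’s names:

resource "azurerm_network_interface_security_group_association" "nic_nsg" {
  network_interface_id      = azurerm_network_interface.nic.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}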

Posts: 2

Participants: 1

Read full topic


Enable aws config

Storage : FSX Scratch 2 deployment

How to get instance IDs launched with an auto-scaling group #aws #terraform


@chanez18 wrote:

Hi, I’m using AWS. I’ve created an auto-scaling group using Terraform, and I want to know how to get the IDs of the instances it creates.
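
One possible way, assuming the instances can be located through the tag the ASG applies automatically (aws_autoscaling_group.example is a placeholder for the existing ASG resource, and the list is only as fresh as the last refresh):

data "aws_instances" "asg_members" {
  instance_tags = {
    # ASGs tag their instances with the group name under this key.
    "aws:autoscaling:groupName" = aws_autoscaling_group.example.name
  }
}

output "asg_instance_ids" {
  value = data.aws_instances.asg_members.ids
}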

Posts: 1

Participants: 1

Read full topic

Conditional environment variable creation inside a kubernetes_deployment


@tontondematt wrote:

Hello there. I am looking to deploy my app on my EKS cluster using kubernetes_deployment, and it works like a charm for one specific app.

Now I am looking into generalizing my deployment to use it for multiple apps with minimal differences. One difference is that for app A I need to set git credentials as env vars, while app B doesn’t need them.
I am creating these env vars from a secret, like so.
My question is: can these be conditional? I.e., for app A create the env vars; for app B, don’t. (See the sketch after the snippet below.)

      env {
          name = "GIT_USERNAME"
          value_from {
            secret_key_ref {
              key  = "username"
              name = "my-secret"
            }
          }
      }

      env {
          name = "GIT_PASSWORD"
          value_from {
            secret_key_ref {
              key  = "password"
              name = "my-secret"
            }
          }
      }
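
One hedged option is a dynamic block gated on a per-app flag (var.needs_git_credentials is an assumption here), mapping env var names to secret keys so the blocks are emitted only for apps that need them:

dynamic "env" {
  for_each = var.needs_git_credentials ? {
    GIT_USERNAME = "username"
    GIT_PASSWORD = "password"
  } : {}

  content {
    name = env.key
    value_from {
      secret_key_ref {
        key  = env.value
        name = "my-secret"
      }
    }
  }
}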

Posts: 1

Participants: 1

Read full topic

Terraform destroy


@deasunk wrote:

terraform destroy

Error: Apply not allowed for workspaces with a VCS connection

Why am I getting this error on the CLI when I have the environment variable CONFIRM_DESTROY=1 set in the Terraform Cloud workspace?

Also, when I run terraform taint the resource is removed from state, but when I run terraform init it appears again?

Posts: 1

Participants: 1

Read full topic


