Channel: Terraform - HashiCorp Discuss

Starting k8s cluster && creating deployments in same apply


Hi all, I’m attempting to use Terraform to both create a GKE cluster and start up Kubernetes deployments/statefulsets/etc. I’d like to do the entire thing in one apply, without any non-Terraform steps (e.g. pointing my kubeconfig at the newly created cluster). Has anyone done this before and has some advice for me? The cluster creation works as expected, but then my deployment/pod initializations fail with “no route to host”. This makes sense to me, since I’m not telling Terraform to reference the new cluster, but I’m unclear on how to correct it. Any help would be greatly appreciated.
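For later readers, the usual pattern is to configure the kubernetes provider directly from the cluster resource's attributes, so no kubeconfig step is needed. A sketch — resource and cluster names are hypothetical:

```hcl
data "google_client_config" "default" {}

resource "google_container_cluster" "gke" {
  name     = "my-cluster" # hypothetical
  location = "us-central1"
  # ...
}

# The provider reads its connection details straight from the cluster
# resource, so it always points at the newly created cluster.
provider "kubernetes" {
  host  = "https://${google_container_cluster.gke.endpoint}"
  token = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(
    google_container_cluster.gke.master_auth[0].cluster_ca_certificate
  )
}
```

One caveat: configuring a provider from another resource's attributes works for create-then-deploy in a single apply, but it can behave oddly when the cluster itself is being replaced, so some people still prefer two separate configurations.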

1 post - 1 participant



Terraform Cloud Backend configuration?


In order to avoid hard-coding backend configurations, in Terraform it’s possible to use -backend-config="KEY=VALUE". Is there an equivalent way to avoid this in Terraform Cloud?
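For what it's worth, the same partial-configuration mechanism also applies to the remote backend: leave the block (mostly) empty in code and supply the rest at init time. A sketch — the filename and values are hypothetical:

```hcl
# main.tf
terraform {
  backend "remote" {}
}

# backend.hcl (passed in via: terraform init -backend-config=backend.hcl)
# hostname     = "app.terraform.io"
# organization = "my-org"
# workspaces { name = "my-workspace" }
```

Which arguments can be moved into the file may vary by Terraform version, so treat this as a starting point rather than a guarantee.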

1 post - 1 participant


No stored state was found for the given workspace in the given backend


I have a few regions that each have remote state stored in them, and a few regions that lack that remote state. I’m trying to figure out a way to ignore the regions that lack that remote state, but I either get:

Error: Invalid count argument               
                                                                         
  on bla.tf line 59, in resource "aws_security_group" "SG":          
  59:   count       = length(lookup(data.terraform_remote_state.bla[0].outputs, "bla_bla", [])) > 0 ? 1 : 0
                                                                           
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the                                                                                                                                                                                                                                                                                                     
resources that the count depends on.                                                                                                        

or

Error: Unable to find remote state

  on bla.tf line 8, in data "terraform_remote_state" "bla_bla":
   8: data "terraform_remote_state" "bla_bla" {

No stored state was found for the given workspace in the given backend.

Is there a reason why defaults doesn’t allow me to skip remote states that don’t exist?
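For reference, `defaults` is declared on the data source itself, but it only fills in outputs that are missing from a state that was found; as the second error shows, it does not cover the case where no state exists in the backend at all. A sketch of the syntax — config values are hypothetical:

```hcl
data "terraform_remote_state" "bla" {
  backend = "s3"
  config = {
    bucket = "my-state-bucket" # hypothetical
    key    = "bla/terraform.tfstate"
    region = "us-east-1"
  }

  # Used only when the state exists but lacks the "bla_bla" output.
  defaults = {
    bla_bla = []
  }
}
```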

1 post - 1 participant


Codependent values across different modules?


I’m trying to bring together three Terraform modules° that comprise a webapp. Separately the modules deploy OK, and my goal now is to arrange things so there’s one terraform apply to run.

A problem arises when module A generates values at apply time that module B needs to know about, and vice versa. Were it strictly a linear dependency I imagine I could pass values on via outputs, but in this case, AFAICT, the generated values need to traverse back and forth across modules.

I’ll try to explain my conundrum with an example. As you may know, typically you need CORS config when BE and FE live on separate origins. The BE needs to be configured with the origin of the FE, and vice versa.

For reasons outside of my control°° I’m using the random_id module to append a random id to certain names (similar to Kubernetes). For example:

# Backend
resource "random_id" "instance_suffix" {
  byte_length = 2
}
resource "azurerm_app_service" "backend" {
  name = "mybackend-${random_id.instance_suffix.hex}"
  ...
}
# Frontend
...
resource "azurerm_app_service" "frontend" {
  name = "myfrontend-${random_id.instance_suffix.hex}"
  ...
}

becomes e.g.
BE name: “mybackend-ab23.azurewebsites.net”
FE name: “myfrontend-ed78.azurewebsites.net”

Ultimately the FE should have the correct BE origin, something like:

# Frontend
resource "azurerm_app_service" "frontend" {
  ...
  cors = [ "https://mybackend-ab23.azurewebsites.net" ]
 ...
}

but I’m not sure how I should write that into my FE module. Is there a way to use a local?
A data value? Not sure.
It doesn’t seem very declarative/idiomatic to configure CORS in some kind of “final pass” once both values are known, but if that’s necessary I’m happy to try.

Been reading https://www.terraform.io/docs/configuration/expressions.html#values-not-yet-known but not sure it applies.

Hoping there is a best practice on this.
Appreciate your insight! :slight_smile:


° The three modules are: a static FE, BE, and DB. Running on Azure.
°° Azure uses a global shared namespace for certain public facing things
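One way to break the apparent cycle, as a sketch (module layout and variable names are hypothetical): generate the random_ids in the root module and pass the finished names into both modules, so neither module needs the other's outputs.

```hcl
# Root module
resource "random_id" "backend_suffix" {
  byte_length = 2
}

resource "random_id" "frontend_suffix" {
  byte_length = 2
}

locals {
  backend_name  = "mybackend-${random_id.backend_suffix.hex}"
  frontend_name = "myfrontend-${random_id.frontend_suffix.hex}"
}

module "backend" {
  source       = "./modules/backend"
  name         = local.backend_name
  cors_origins = ["https://${local.frontend_name}.azurewebsites.net"]
}

module "frontend" {
  source         = "./modules/frontend"
  name           = local.frontend_name
  backend_origin = "https://${local.backend_name}.azurewebsites.net"
}
```

Each name depends only on its own random_id, so there is no real cycle; it only looks like one when the suffixes live inside the modules.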

2 posts - 2 participants


MongoDB on GCP without MongoDB Atlas


Hi, I’m new to Terraform and struggling to set up a MongoDB replica set on GCP.
Without Terraform, the easiest way would probably be to use e.g. the Bitnami MongoDB replica set from the GCE Marketplace. As far as I know, it is currently not possible (and, paradigm-wise, not intended) to create a VM from a Marketplace template using Terraform.
I know the straightforward way would be to use MongoDB Atlas with its dedicated provider, but I don’t want to do that for now…

I tried to create my own dockerized MongoDB replica set on GCE, but ran into quite a few challenges… One learning I want to share: use the “Equivalent REST” button on GCP — it often gives hints on how to rebuild something with Terraform, especially metadata and things like gce-container-declaration.

Does anyone have experience configuring MongoDB replica sets with Terraform on GCP, or any tips on the easiest way to do it? I can’t find much information on this and am struggling…

Thanks

Tobi

1 post - 1 participant



Null_resource Powershell


So, currently the Azure provider does not include the ability to create cost alerts for resources (the feature is in preview). The Azure PowerShell module does provide cmdlets to create and update them, though.

I am trying to fill the capability gap with a PowerShell script that creates the budget and notifications we require, but I was wondering if anyone has best-practice advice for things like this? I am guessing the budget and notification will not be in state, as they are essentially created outside of Terraform, which means my script will probably fail trying to create them on each run. Should the logic to handle this live in the script itself, or is it better placed in a trigger before script execution, based off a variable in the module we call?

1 post - 1 participant


Using for/for_each to declare multiple resources


I am trying to create a module that creates an EC2 instance with a variable number of attached volumes. The volumes need to be declared as separate resources and attached to the instance via the volume_attachment resource, because this is for a fileserver whose volumes must persist across a terraform apply with a new AMI id. That is: I deploy the server and write data to the volumes; later I create a new AMI and run terraform apply; Terraform destroys the attachment resources, retains the volumes (they haven’t changed), destroys the instance, launches a new instance, and creates new attachment resources that attach the old volumes to the new instance.

I am in need of help getting the syntax and correct method for declaring the various resources and their links using for_each.

Example:
I have a .tf file which will contain three invocations of the same module, ec2_instance. The module takes a “vols” parameter containing the volume definitions, and each invocation passes a different number of volumes:

Module 1 might use:

vols = {
    data01 = {
        device_name = "xvde"
        size        = "100"
        type        = "gp2"
    }
    data02 = {
        device_name = "xvdh"
        size        = "200"
        type        = "gp2"
    }
}

and Module 2 might use:

vols = {
    output01 = {
        device_name = "xvde"
        size        = "800"
        type        = "gp2"
    }
}

Currently, this is my file which declares 1 instance, 2 volumes and 2 volume_attachments. It will create a single instance with 2 volumes attached (and will be the basis of the inside of my new module):

resource "aws_instance" "storage" {
    ami                     = data.aws_ami.storage.id
    ...
    user_data               = data.template_file.userdata_executescript.rendered
    root_block_device {
        volume_type           = "gp2"
        volume_size           = 80
        delete_on_termination = true
    }

    tags = merge(
        var.common_tags,
        {
        "Name"   = format("%s_storage", var.instance_name)
        "Role"   = format("%s_storage", var.stack_name)
        "Backup" = "true"
        },
    )
}

resource "aws_ebs_volume" "volume_input" {
    availability_zone = local.availability_zone
    size              = 100
    type              = "gp2"
    encrypted         = true

    tags = merge(
        var.common_tags,
        {
        "Name"   = format("%s_storage-%s", var.stack_name, "data01")
        "Backup" = "true"
        },
    )
}

resource "aws_ebs_volume" "volume_input__backup" {
    availability_zone = local.availability_zone
    size              = 200
    type              = "gp2"
    encrypted         = true
    tags = merge(
        var.common_tags,
        {
        "Name"   = format("%s_storage-%s", var.stack_name, "data02")
        "Backup" = "true"
        },
    )
}

resource "aws_volume_attachment" "volume_input" {
    device_name = "xvde"
    volume_id   = aws_ebs_volume.volume_input.id
    instance_id = aws_instance.storage.id
}

resource "aws_volume_attachment" "volume_input__backup" {
    device_name = "xvdh"
    volume_id   = aws_ebs_volume.volume_input__backup.id
    instance_id = aws_instance.storage.id
}

As there will be a differing number of volumes and volume attachments each time the module is invoked, I thought I’d do this using a for_each to declare the volume and volume_attachment resources (at the top of each resource I’ve put the common values, then the ‘each’ values beneath where they differ per resource):

resource "aws_ebs_volume" "volume" {
    for_each = var.vols

    availability_zone = local.availability_zone
    encrypted         = true

    size = each.value.size
    type = each.value.type

    tags = merge(
        var.common_tags,
        {
        "Name"   = format("%s_storage-%s", var.stack_name, each.key)
        "Backup" = "true"
        },
    )
}

resource "aws_volume_attachment" "volume" {
    for_each = var.vols

    instance_id = aws_instance.storage.id

    device_name = each.value.device_name
    volume_id   = aws_ebs_volume.volume["${lookup(var.vols[each.key], each.key, "")}"].id
}

What I need help with is understanding how to specify the volume_id in the aws_volume_attachment resource. I’ve made an attempt at it so you can see the type of value I am trying to lookup/get.

Once I’ve got this set of declarations working, I also need help understanding how to output the volume details into the userdata. I’m using EC2Launch v2 for Windows, which takes yaml input. Essentially, I need to create a userdata file which contains something like this, but with a different number of devices each time:

version: 1.0
tasks:
  - task: initializeVolume
    inputs:
      initialize: devices
      devices:
        - device: xvde
          name: data01
          letter: D
          partition: gpt
        - device: xvdh
          name: data02
          letter: E
          partition: gpt

I’m new to the for/for_each concepts in Terraform and can’t seem to figure out the correct syntax for repeating elements. Help is appreciated.
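For what it's worth, a sketch of the direction this usually takes (variable names follow the question; treat it as untested): key the attachment's volume_id by each.key, and render the userdata with yamlencode over the same map.

```hcl
resource "aws_volume_attachment" "volume" {
  for_each = var.vols

  instance_id = aws_instance.storage.id
  device_name = each.value.device_name
  # Instances of aws_ebs_volume.volume are keyed by the same map keys,
  # so each.key lines the two resources up directly.
  volume_id = aws_ebs_volume.volume[each.key].id
}

locals {
  # Drive letters are omitted here; a real module would likely carry
  # the letter as another attribute of each vols entry.
  userdata = yamlencode({
    version = "1.0"
    tasks = [{
      task = "initializeVolume"
      inputs = {
        initialize = "devices"
        devices = [
          for name, vol in var.vols : {
            device    = vol.device_name
            name      = name
            partition = "gpt"
          }
        ]
      }
    }]
  })
}
```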

3 posts - 1 participant



Pausing apply runs when applying modules


Hi,

Is there a way to pause after a module is applied and get the output data before proceeding to the next module?

I would like to pause on the first run, since the module creates keys that I need for another tool. If the keys already exist, it could skip the pause on the second run.

Kevin

1 post - 1 participant


Dynamic data block using feature flag?


Hello all!

I’d like to add a data block to find an aws security group using a feature flag/var, if the variable is false, then the security group does not get added. Currently I have something like this:

data "aws_security_group" "sggroup" {
  count = var.enable_sggroup ? 1 : 0
  tags = {
    Usage = "sggroup"
  }
}

Then, based on the above, I would use the concat function in the aws_lb resource block to add (or not add) the security group in question:

resource "aws_lb" "loadbalancer" {
  name               = "lb-${var.name_prefix}"
  internal           = false
  load_balancer_type = "application"
  security_groups    = concat(["${var.lb_security_group}", "${data.aws_security_group.sggroup2.id}"], "${data.aws_security_groups.sggroups.ids}", ["${data.aws_security_group.sggroup.id}"])

Would anyone be able to let me know if I am on the right track?

Getting this error when running a plan

Because data.aws_security_group.sggroup has "count" set, its attributes must
be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    data.aws_security_group.sggroup[count.index]
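You are close: with count set, the data source is a list, so it must be indexed. With count = 0 a full splat yields an empty list, which concat then simply drops. A sketch based on the snippet above:

```hcl
resource "aws_lb" "loadbalancer" {
  name               = "lb-${var.name_prefix}"
  internal           = false
  load_balancer_type = "application"

  # sggroup[*].id is [] when enable_sggroup is false, so the optional
  # group just disappears from the concatenated list.
  security_groups = concat(
    [var.lb_security_group, data.aws_security_group.sggroup2.id],
    data.aws_security_groups.sggroups.ids,
    data.aws_security_group.sggroup[*].id,
  )
}
```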

1 post - 1 participant


Transposing a Map of Maps


Hi,

I’ve got a map of Azure AD users (data.azuread_user) which looks like a consecutive bunch of these:

"XXXXX-XXXXX-XXXXX-XXXXX" = {
    "account_enabled" = true
    "display_name" = "Poppa Smurf"
    "id" = "XXXXX-XXXXX-XXXXX-XXXXX"
    "immutable_id" = ""
    "mail" = "poppa.smurf@company.com"
    "mail_nickname" = "poppa.smurf"
    "object_id" = "XXXXX-XXXXX-XXXXX-XXXXX"
    "onpremises_sam_account_name" = ""
    "onpremises_user_principal_name" = ""
    "usage_location" = "US"
    "user_principal_name" = "poppa.smurf@company.com"
  }    

Due to the way I generated it, via a for_each off the list of members on the all users group in Azure AD, the key is the UUID.

How can I transpose or otherwise munge the map so that the mail_nickname subvalue becomes the top-level key instead? I’ve looked at transpose and “for blah in blah” expressions but can’t work out how to put either of those together to do the thing.

End result will hopefully be:

"poppa.smurf" = {
    "account_enabled" = true
    "display_name" = "Poppa Smurf"
    "id" = "XXXXX-XXXXX-XXXXX-XXXXX"
    "immutable_id" = ""
    "mail" = "poppa.smurf@company.com"
    "mail_nickname" = "poppa.smurf"
    "object_id" = "XXXXX-XXXXX-XXXXX-XXXXX"
    "onpremises_sam_account_name" = ""
    "onpremises_user_principal_name" = ""
    "usage_location" = "US"
    "user_principal_name" = "poppa.smurf@company.com"
  }  
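A for expression can re-key the map directly; transpose itself only works on maps of lists of strings, so it does not apply here. A sketch, assuming the original map lives in local.users:

```hcl
locals {
  # Re-key by mail_nickname. This assumes nicknames are unique;
  # otherwise Terraform reports a duplicate-key error.
  users_by_nickname = {
    for id, user in local.users : user.mail_nickname => user
  }
}
```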

Cheers,

1 post - 1 participant


Post apply status on PR merge to GitHub


The VCS integration is pretty nice, both with auto-apply and manual apply on push.

Is there a way to post the apply result back to a GitHub PR once it’s merged? It would be nice to see the result, especially for the case where the PR was merged but the apply failed, rather than having to also go check in TFC.

1 post - 1 participant


Use a local provider instead of one from the providers repository


Hello

I’m trying to write a feature for the vsphere provider. I have a local build and want to use the version with my changes instead of the original.
How can I do this? terraform init installs the original provider.
Should I put my build in the ~/.terraform.d/plugins directory?

1 post - 1 participant


JSON plan output data missing?


Hi All,

I am experimenting with using the JSON representation of terraform plans and wanted to confirm something.

I have noticed that if there is a nested structure such as this in the plan output:
ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}

The JSON does not include the inner keys such as delete_on_termination etc.; the only reference is in the after_unknown section of the JSON:

"ebs_block_device": true,

If all of the internal values of an object are unknown, are they simply omitted from the JSON?
I note this comment in https://www.terraform.io/docs/internals/json-format.html#change-representation:

“The after value will be incomplete if there are values within it that won’t be known until after apply”

but that page doesn’t mention the unknown section of the JSON specifically.

Any guidance much appreciated.

1 post - 1 participant


Install error: SHA sum does not match


I have installed the HashiCorp Terraform extension from the VS Code marketplace, but when I enable the extension in the workspace, this error is shown:
Install error: SHA sum for terraform-ls_0.5.1_windows_amd64.zip does not match

I have also tried installing the extension from a .vsix file, with the same result.


Kenneth ML

1 post - 1 participant



Terraform refresh


Hello.
I have lost my Terraform state files. I’ve tried terraform refresh to get a new one, but no luck.
Is it possible to somehow tell Terraform to scan my current infrastructure and recreate the state files?

1 post - 1 participant


How to use an output value from one folder in another folder


Hey,
I have a two-folder structure, “alb” and “cloudfront”. I need to use the output value of the “alb” dns_name as an input in the “cloudfront” main.tf file, because I have to update the DNS value in the CloudFront distribution.
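The usual wiring for this, as a sketch (backend details and resource names are hypothetical): expose dns_name as an output in the alb configuration, then read it in cloudfront via the terraform_remote_state data source.

```hcl
# alb/outputs.tf
output "dns_name" {
  value = aws_lb.this.dns_name # hypothetical resource name
}

# cloudfront/main.tf
data "terraform_remote_state" "alb" {
  backend = "s3" # hypothetical backend
  config = {
    bucket = "my-state-bucket"
    key    = "alb/terraform.tfstate"
    region = "us-east-1"
  }
}

# Then reference it as:
#   data.terraform_remote_state.alb.outputs.dns_name
```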

2 posts - 2 participants


Terraform for_each example


Hi,

I have the following code

data "aws_nat_gateway" "shoot_nat_gateway_z0" {
  vpc_id = element(tolist(data.aws_vpcs.shoot_vpc_id.ids), 0)
  tags = {
    "Name" = "test-natgw-z0"
  }
}

data "aws_nat_gateway" "shoot_nat_gateway_z1" {
  vpc_id = element(tolist(data.aws_vpcs.shoot_vpc_id.ids), 0)
  tags = {
    "Name" = "test-natgw-z1"
  }
}

data "aws_nat_gateway" "shoot_nat_gateway_z2" {
  vpc_id = element(tolist(data.aws_vpcs.shoot_vpc_id.ids), 0)
  tags = {
    "Name" = "test-natgw-z2"
  }
}

How can I simplify this with a for_each loop, assuming I count the length of the cidr block list?

output "subnet_cidr_block" {
  value = [for s in data.aws_subnet.shoot_vpc_subnet : s.cidr_block]
}

How can I generate the values test-natgw-z0, test-natgw-z1, test-natgw-z2, … on the fly in Terraform?
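A sketch that collapses the three blocks into one for_each; here the zone suffixes are listed explicitly, though they could equally be derived from the subnet count with range:

```hcl
data "aws_nat_gateway" "shoot" {
  for_each = toset(["z0", "z1", "z2"])
  # Or, driven by the subnet count:
  #   for_each = toset([for i in range(length(local.subnet_cidrs)) : "z${i}"])

  vpc_id = element(tolist(data.aws_vpcs.shoot_vpc_id.ids), 0)
  tags = {
    "Name" = "test-natgw-${each.key}"
  }
}

# Individual gateways are then data.aws_nat_gateway.shoot["z0"], etc.
```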

1 post - 1 participant


Prefix is being ignored?


Hey there,

I’m a bit new to Terraform/Terraform Cloud so please bear with me.

In my code I have a backend block defined as such:

terraform {
  backend "remote" {
    organization = "myorg"

    workspaces {
      prefix = "myapp-ecs-"
    }
  }
}

In the documentation I read that while working with workspaces using the CLI, I can use short names that do not include the prefix, but this does not seem to be working.

When I create a new workspace called “dev-us-east-1”, the expected behavior when I do “terraform apply” should be to run in the Terraform Cloud workspace called “myapp-ecs-dev-us-east-1”, but this is not at all what is happening.

Instead, when I create the new workspace “dev-us-east-1”, a workspace named “myorg-dev-us-east-1” is automatically created in Terraform Cloud, and all my runs go to it. It seems my prefix is being completely ignored.

So what am I missing here? :slight_smile:

1 post - 1 participant


Looping over google_secret_manager_secret to create secrets fails


Not sure whether this belongs here or with the google provider.

If I have a series of locals defined:

locals {
  exports = {
    "test-secret1" = { name = "test-secret", object = "secret data" }
    "test-secret2" = { name = "test-secret", object = "secret data" }
  }
}

Given those, creating a single resource works:

resource "google_secret_manager_secret" "test-secret" {
  secret_id = local.exports.test-secret.name

  replication {
    automatic = true
  }

}

However if I loop using ‘for_each’:

resource "google_secret_manager_secret" "test-secrets" {
  for_each = local.exports
  secret_id = each.value.name

  replication {
    automatic = false
  }
}

this does not work and produces the following error:

2020-07-16T17:36:56.014+0100 [DEBUG] plugin.terraform-provider-google_v3.30.0_x5: 
2020/07/16 17:36:56 [DEBUG] Retry Transport: Returning after 1 attempts
2020/07/16 17:36:56 [DEBUG] google_secret_manager_secret.test-secrets["test-secret1"]: 
apply errored, but we're indicating that via the Error pointer rather than returning it: Error 
creating Secret: googleapi: Error 400: Secret must be provided.
2020/07/16 17:36:56 [ERROR] <root>: eval: *terraform.EvalApplyPost, err: Error creating 
Secret: googleapi: Error 400: Secret must be provided.
2020/07/16 17:36:56 [ERROR] <root>: eval: *terraform.EvalSequence, err: Error creating 
Secret: googleapi: Error 400: Secret must be provided.

Error: Error creating Secret: googleapi: Error 400: Secret must be provided.

  on test.tf line 1, in resource "google_secret_manager_secret" "test-secrets":
   1: resource "google_secret_manager_secret" "test-secrets" {



Error: Error creating Secret: googleapi: Error 400: Secret must be provided.

  on test.tf line 1, in resource "google_secret_manager_secret" "test-secrets":
   1: resource "google_secret_manager_secret" "test-secrets" {

Does anyone have any idea whether this is a bug in Terraform or in the Google provider (I have tried versions 3.8 and 3.30), or whether it is something I am doing wrong?

1 post - 1 participant

