Channel: Terraform - HashiCorp Discuss

Terraform image installing v 0.11, but wanted v 0.12


@VarunBhaskara wrote:

I am running Terraform on an EC2 instance using the Terraform Docker image, with the configuration below.

image:
name: hashicorp/terraform:light

entrypoint:
- '/usr/bin/env'
- 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

It installs Terraform v 0.11, but I want to install v 0.12 using the Terraform image. Is there any way I can specify Terraform v 0.12 when using the Terraform image?
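
(For anyone hitting the same thing, a minimal sketch: pin the image to an explicit version tag instead of the floating light tag. The exact tag below is an assumption; check Docker Hub for the 0.12.x tags that actually exist.)

image:
  name: hashicorp/terraform:0.12.24

(The light tag floats with whatever release it was last pointed at, so pinning a specific tag is the usual way to control the Terraform version the image provides.)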

Posts: 1

Participants: 1

Read full topic


Ignoring API error for a specific resource


@j-martin wrote:

Hi,

We are experiencing an issue on Google’s side where one of their APIs consistently fails (today: GKE pools in us-central-1c). This prevents us from running plans altogether, even on totally unrelated resources.

We were wondering if there was a way to completely ignore the problematic resource.

Ignoring changes in the lifecycle does not prevent the API call from happening.

      lifecycle {
        ignore_changes = all
      }

We could delete the resource and remove it from the state, but it seems quite drastic for a temporary issue.
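
(For reference, a sketch of that state-surgery route using standard CLI commands; the resource address and import ID below are hypothetical placeholders. The resource can be re-imported once the API recovers, so nothing in the cloud is actually destroyed:)

# drop the problematic resource from state so plans stop calling its API
terraform state rm google_container_node_pool.example

# later, when the API is healthy again, bring it back under management
terraform import google_container_node_pool.example <project>/<location>/<cluster>/<pool>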

Posts: 1

Participants: 1

Read full topic

For_each vs. count.index: How to do simple arithmetic?


@dsantanu wrote:

Hi there,
What’s the equivalent of a calculation with count.index (e.g. count.index + 1) when using for_each?
I have an aws_network_acl_rule resource where I calculate the rule_number based on the index (e.g. 400 + 10 * (count.index + 1)), and that doesn’t seem possible with for_each. So basically, I need to rewrite this using for_each:

resource "aws_network_acl_rule" "ig_ssh" {
  count          = length(local.v_cidrs)
  egress         = false
  protocol       = "tcp"
  to_port        = 22
  from_port      = 22
  rule_number    = 400 + 10 * (count.index+1)
  rule_action    = "allow"
  cidr_block     = local.v_cidrs[count.index]
  network_acl_id = aws_network_acl.nacls.id
}

What’s the recommended way of doing this?

-S
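
(For reference, one way to keep the arithmetic is to build the index into the for_each map itself; a sketch, assuming local.v_cidrs contains no duplicate entries:)

resource "aws_network_acl_rule" "ig_ssh" {
  # map each CIDR to its position, so each.value carries the old count.index
  for_each       = { for idx, cidr in local.v_cidrs : cidr => idx }
  egress         = false
  protocol       = "tcp"
  to_port        = 22
  from_port      = 22
  rule_number    = 400 + 10 * (each.value + 1)
  rule_action    = "allow"
  cidr_block     = each.key
  network_acl_id = aws_network_acl.nacls.id
}

(Keying on the CIDR rather than the index also means inserting a new CIDR mid-list no longer renumbers every rule after it.)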

Posts: 1

Participants: 1

Read full topic

How can I enable / disable a field in a .tf file


@saloneerege wrote:

I am working on writing a custom provider. For a particular resource I have a field, host_instance_type, defined in the .tf file. The specific value of host_instance_type determines whether another field on the same resource, storage_capacity, must be set. How can I write a condition on storage_capacity whose requirement is determined by the value of host_instance_type?
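
(A hedged sketch of one common pattern in the helper/schema SDK, shown with the v1-style signature: validate the pairing in CustomizeDiff, since a single field’s ValidateFunc cannot see other fields. The trigger value "m5.xlarge" is purely hypothetical.)

// set on the *schema.Resource definition; needs the fmt import
CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
    // hypothetical rule: this instance type requires an explicit capacity
    if d.Get("host_instance_type").(string) == "m5.xlarge" {
        if _, ok := d.GetOk("storage_capacity"); !ok {
            return fmt.Errorf("storage_capacity must be set when host_instance_type is m5.xlarge")
        }
    }
    return nil
},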

Posts: 2

Participants: 2

Read full topic

AWS Athena : Create table/view with sql DDL


@simonB2020 wrote:

I am trying to create Athena Views by executing SQL code.

resource "aws_athena_database" "metadb" {
  name   = "mydb" 
  bucket = aws_s3_bucket.meta_target_bucket.id
}
resource "null_resource" "views" {
  for_each = {
    for filename in fileset("${var.sql_files_dir}/", "**/*.sql") :
    replace(replace(filename, "/", "_"), ".sql", "") => "${var.sql_files_dir}/${filename}"
  }

  provisioner "local-exec" {
    command = <<-EOF
aws athena start-query-execution --query-string file://${each.value} --output json --query-execution-context Database=${aws_athena_database.metadb.id} --result-configuration OutputLocation=s3://${aws_s3_bucket.meta_target_bucket.id}
    EOF
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
aws athena start-query-execution --query-string 'DROP VIEW IF EXISTS ${each.key}' --output json --query-execution-context Database=${aws_athena_database.metadb.id} --result-configuration OutputLocation=s3://${aws_s3_bucket.meta_target_bucket.id}
     EOF
  }
}

The creation section works well, passing the SQL code in from a file.

However, in the destroy section I have to pass in the 'DROP' SQL as a string to be executed, not from a file, as it is dynamic. This is where my problem lies. While the CLI output shows the command being executed, which appears valid:

aws athena start-query-execution --query-string 'DROP VIEW IF EXISTS Query6' --output json --query-execution-context Database=mydb --result-configuration OutputLocation=s3://mybucket

I get the following:

Error: Error running command 'aws athena start-query-execution --query-string ‘DROP VIEW IF EXISTS Query6’ --output json --query-execution-context Database=mydb --result-configuration OutputLocation=s3://mybucket ': exit status 255. Output: usage: aws [options] [ …] [parameters] To see help text, you can run:

What confuses me is that if I copy that command and paste it into the CLI outside of Terraform, it executes perfectly. Any ideas as to why it will not execute when run by the provisioner?
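
(One thing worth checking, offered as an assumption about the cause: the failing error above shows curly quotes (‘ ’) around the DROP statement, and the aws CLI parses those as stray arguments, which produces exactly this usage error. Writing the command on one line with escaped double quotes sidesteps any pasted-quote or heredoc-whitespace issue:)

  provisioner "local-exec" {
    when    = destroy
    command = "aws athena start-query-execution --query-string \"DROP VIEW IF EXISTS ${each.key}\" --output json --query-execution-context Database=${aws_athena_database.metadb.id} --result-configuration OutputLocation=s3://${aws_s3_bucket.meta_target_bucket.id}"
  }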

Posts: 1

Participants: 1

Read full topic

Passing arguments into a rancher2_role_template


@MarcoPalomo wrote:

Hello,
I am trying to pass these variables into this template to create security rules in a rancher2 RKE cluster. The rules have to be overridden like this:

   rules {
     api_groups = ["kibana.k8s.elastic.co"]
     resources  = ["*"]
     verbs      = ["create", "delete", "get", "list", "patch", "update", "watch"]
   }

So my main.tf looks like this:

provider "rancher2"{ 
  api_url = "https://k8s.cloud/v3" //api-endpoint
  access_key = "token-8jqsr" 
  secret_key = "vs5qwjjk2rx2kk65p482zbttz5bssxj8fthx5gfphc82vwnd4jwrn9"
}

resource "rancher2_role_template" "bu-crd-right_b" { 

  rules {  
    elastic = "${var.elastic}"
  } 
 }

And my variables.tfvars looks like this:

variable "elastic" {

    api_groups = ["common.k8s.elastic.co"]
    resources = ["*"]
    verbs = ["create","delete","get","list","patch","update","watch"]
    }

This is the error:


Error: Variable declaration in .tfvars file

  on test2/variable.var line 1:
   1: variable "elastic" {

A .tfvars file is used to assign values to variables that have already been
declared in .tf files, not to declare new variables. To declare variable
"elastic", place this block in one of your .tf files, such as variables.tf.

To set a value for this variable in test2/variable.var, use the definition
syntax instead:
    elastic = <value>
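
(For reference, a minimal sketch of the split the error message describes, typing the variable as an object so the rule fields can be passed through to the rancher2_role_template rules block:)

# variables.tf -- declare the variable and its type
variable "elastic" {
  type = object({
    api_groups = list(string)
    resources  = list(string)
    verbs      = list(string)
  })
}

# variables.tfvars -- assign a value only
elastic = {
  api_groups = ["common.k8s.elastic.co"]
  resources  = ["*"]
  verbs      = ["create", "delete", "get", "list", "patch", "update", "watch"]
}

# main.tf -- expand the fields inside the rules block
resource "rancher2_role_template" "bu-crd-right_b" {
  rules {
    api_groups = var.elastic.api_groups
    resources  = var.elastic.resources
    verbs      = var.elastic.verbs
  }
}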

Posts: 1

Participants: 1

Read full topic

Go-plugin interface design


@bcatubig wrote:

Hi! I’m looking to use the go-plugin framework with gRPC and I was wondering if there were any best practices for interface design.

I’d like to have a common interface with a handful of methods that all plugins could call, but each plugin could have different arguments to those methods.

I normally would get around this by instantiating a struct first, but it doesn’t seem like I can do this with the go-plugin framework.

Would I be forced to use the empty interface in go for all method signatures?

Thanks!
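
(Not go-plugin’s own API, just a hedged sketch of the usual workaround: keep one shared interface whose methods take a single serializable request type, so each plugin reads the fields it needs without resorting to interface{} in every signature:)

// shared between the host and all plugins; gRPC users would model this
// as a proto message rather than a plain Go struct
type Request struct {
    Method string            // which operation the host is asking for
    Args   map[string]string // plugin-specific arguments, interpreted per plugin
}

// the one interface every plugin implements
type Backend interface {
    Invoke(req Request) (string, error)
}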

Posts: 1

Participants: 1

Read full topic

Best practices of Terraform staging testing


@xike41 wrote:

I am looking for advice on Terraform testing, before firing up terraform apply in a prod environment. What we have now are code review and terraform plan checks; these are sometimes not enough, as terraform apply can still fail in prod for many reasons (permissions, cross-account config, etc.).

Is there any good practice for testing the code in a staging environment? Given that our networking infra is huge and complex, duplicating the entire prod environment to staging just for testing is probably not an option for us yet.

Posts: 1

Participants: 1

Read full topic


Using json as input to create multiple servers


@tejz1386 wrote:

I am trying to use JSON as input in order to create multiple servers, with all values supplied in the JSON file:

{
  "server_name": "abctest01",
  "zone": "us-west2-a",
  "data_disk_03": "10",
  "data_disk_04": "20",
  "disk_number": "3",
  "os_disk": "10",
  "instance_type": "windows-2016",
  "location": "us-west2",
  "machine_type": "n1-standard-1",
  "ip_address": "10.10.10.5",
  "backup": "10.10.10.105",
  "data_disk_02": "10",
  "data_disk_01": "20",
  "ha_enabled": "no"
}
I am using another module and jsondecode to try to get these values as variables which I can then use.
I have tried the following ways, and each still throws an error stating that the value does not have attributes.
locals {
  json_data = jsondecode(file(var.json_input_file_name))
}

server_name = local.json_data.server_name
server_name = jsondecode(file(var.json_input_file_name)).server_name
server_name = jsondecode(file(var.json_input_file_name)).server_name[0]

error:
4: server_name = jsondecode(file(var.json_input_file_name)).server_name[0]
|----------------
| var.json_input_file_name is "./development/gcp_server_input_test_disks.json"

This value does not have any attributes.

Any help to make it work is highly appreciated.
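
(A hedged guess given that error: "This value does not have any attributes" is what Terraform reports when the decoded value is a JSON array, a tuple, rather than an object. If the file wraps the server objects in an array, index into the list before reading attributes:)

locals {
  # assuming the file actually holds a JSON array of server objects
  servers = jsondecode(file(var.json_input_file_name))
}

# pick an element first, then read its attribute
server_name = local.servers[0].server_name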

Posts: 1

Participants: 1

Read full topic

Another request for help :)


@skydion wrote:

Hello,

I’m trying to write a custom provider and ran into the following error:

Error: roles: must be a map

I have the following field in the schema:

"roles" : &schema.Schema {
    Type          : schema.TypeMap,
        Computed      : true,
        Elem : &schema.Resource {
          Schema : datasourceRoleSchema(),
    },
 },

and the following code to set it:

tmp := flattenRoles(value.([]Roles))
if len(tmp) > 0 {
    err = d.Set(fieldName, tmp)
}

flattenRoles looks like

func flattenRoles(roles []Roles) []map[string]interface{} {
  flattened := make([]map[string]interface{}, len(roles))

  for i, v := range roles {
    m := make(map[string]interface{})
    m["role"] = flattenRole(v.Role)

    flattened[i] = m
  }

  return flattened
}

func flattenRole(role *Role) map[string]interface{} {
  m := make(map[string]interface{})

  m["id"]           = role.ID
  m["label"]        = role.Label
  m["identifier"]   = role.Identifier

  return m
}

JSON input looks like:

"roles": [
                {
                    "role": {
                        "created_at": "2020-03-10T14:34:27.000+02:00",
                        "id": 3,
                        "identifier": "locations_manager",
                        "label": "Cloud Locations Manager",
                        "permissions": [
                            {
                                "permission": {
                                    "created_at": "2020-03-10T14:34:19.000+02:00",
                                    "id": 137,
                                    "identifier": "cdn_locations",
                                    "updated_at": "2020-03-10T14:34:19.000+02:00"
                                }
                            },
                            {
                                "permission": {
                                    "created_at": "2020-03-10T14:34:21.000+02:00",
                                    "id": 450,
                                    "identifier": "location_groups",
                                    "updated_at": "2020-03-10T14:34:21.000+02:00"
                                }
                            }
                        ],
                        "system": false,
                        "updated_at": "2020-03-10T14:34:27.000+02:00",
                        "users_count": 1
                    }
                }
            ],
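
(A hedged observation: flattenRoles returns a []map[string]interface{}, i.e. a list, while the schema declares schema.TypeMap, which expects a single flat map; that mismatch is a common source of the "must be a map" error. A sketch of the list-shaped declaration, matching the list-of-objects layout shown in the JSON; the nested "role" map would need a corresponding nested schema inside datasourceRoleSchema():)

"roles": &schema.Schema{
    Type:     schema.TypeList, // a list of nested objects, not one map
    Computed: true,
    Elem: &schema.Resource{
        Schema: datasourceRoleSchema(),
    },
},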

Posts: 1

Participants: 1

Read full topic

How to use conditional SchemaVersion?


@cbmdfc wrote:

Hello,

I’m writing a custom Terraform provider, and one of the use cases is that an ID on the backend server looks like X in version 1.0 but like Y in version 1.1.

I thought about using a SchemaVersion and upgrading it if you’re using backend 1.1… but then it wouldn’t work for those users connecting to 1.0 (upgrading is required, but downgrading is impossible).

Is there any way to define a SchemaVersion that changes only if the migration was successfully applied?
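
(For context, a sketch of the unconditional shape in the helper/schema SDK; whether the Upgrade func can consult the backend version at runtime is exactly the open question here. The v0 schema func name is hypothetical:)

&schema.Resource{
    SchemaVersion: 1,
    StateUpgraders: []schema.StateUpgrader{
        {
            Version: 0,
            Type:    resourceExampleV0().CoreConfigSchema().ImpliedType(),
            Upgrade: func(rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
                // rewrite the ID from the 1.0 form to the 1.1 form here
                return rawState, nil
            },
        },
    },
}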

Posts: 3

Participants: 3

Read full topic

SignInWithApple cognito_identity_provider configuration


@mwawrusch wrote:

I want to configure an identity provider for Apple ID sign-in. However, I can’t find any description of what the provider_details object should look like, in particular the correct key names. Below is where I am now.


resource "aws_cognito_identity_provider" "appleid_provider" {
  user_pool_id  = "${aws_cognito_user_pool.roji_user_pool.id}"
  provider_name = "Apple" # Check if correct
  provider_type = "SignInWithApple"

  provider_details = {
    # ??? (Apple Services Id, Team Id, Key Id)
    private_key = "${file("./private_key.p8")}"
  }

  attribute_mapping = {
    email    = "email"
    name     = "name"
    username = "sub"
  }
}
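
(A hedged fill-in for the ??? part, based on the key names AWS documents for SignInWithApple; worth verifying against the current Cognito docs, and every value below is a placeholder:)

  provider_details = {
    client_id        = "com.example.myapp.service" # the Apple Services ID
    team_id          = "APPLETEAMID"
    key_id           = "APPLEKEYID"
    private_key      = file("./private_key.p8")
    authorize_scopes = "email name"
  }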

Posts: 1

Participants: 1

Read full topic

Aws_glue_script - how to output to S3?


@scorpian62 wrote:

The bare-minimum description of this resource leaves much to be desired. I would like to use aws_glue_script to take the data from the Glue catalog database and put it out as a CSV file in an S3 bucket.

The DataSink4 configuration needs to look like this (from a Python script at https://github.com/progress/DataDirect-Code-Samples/blob/master/AutonomousRESTGlueSample/IngestREST.py):

## Write Dynamic Frames to S3 in CSV format. You can write to any rds/redshift by using the connection that you have defined previously in Glue
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dynamic_dframe, connection_type = "s3", connection_options = {"path": "s3://glueuserdata"}, format = "csv", transformation_ctx = "datasink4")

I don’t see any way to do dynamic frames in Terraform. Any ideas? Thanks

Posts: 1

Participants: 1

Read full topic

Aws_config_delivery_channel Region provided in sns arn: us-east-1, does not match the expected region: us-west-2


@kylecompassion wrote:

I’m trying to enable AWS Config in the four US regions. I got that working, but am now trying to update the delivery channel to send to an SNS topic in a different account and in the us-east-2 region. I’m getting the error below from all aws_config_delivery_channel resources that aren’t in us-east-1.
I did some searching but didn’t find any topics about this on Google yet, and the aws_config_delivery_channel page on terraform.io doesn’t mention anything about cross-region topics, but I am starting to think that AWS Config can’t handle sending to topics in another region. Does anyone know how to work around this error?

Error: Creating Delivery Channel failed: InvalidSNSTopicARNException: The sns topic arn 'arn:aws:sns:us-east-1:##########:MultiAccount_Config_Topic' is not valid. Region provided in sns arn: us-east-1, does not match the expected region: us-west-2.

EDIT: I tried manually editing the delivery channel via the AWS console and that threw an error saying "The specified SNS topic ARN is invalid.", which reinforces my belief that the AWS Config delivery channel can’t send to SNS topics in a region different from the source AWS Config resource.

Posts: 1

Participants: 1

Read full topic

How to test Terraform Enterprise with Self Signed Certificates


Iterating through a list but already using for_each (0.12)


@Computer15776 wrote:

Hello,
I’m currently trying to use TF 0.12 to create AWS Organizations accounts. Right now I have a map of accounts with the applicable info. Here is an example, where "Services" is the account name:

accountMap = {
…
  Services = {
    OU = ["Development", "Production"]
  },
…
}

OU refers to the org units the account should be part of. I’m already using for_each to loop through this map of account names, but I’m stuck on how to use the OUs as a suffix so the org account name becomes "Services-Development" and "Services-Production". I have tried something like the following:

resource "aws_organizations_account" "main" {
  for_each = var.ouMap

  name = "${each.key}-${var.accountMap["${each.value[*]}"]}"
  ...
}

However, "name" requires a string, and I get an error since I am providing a list of the OUs. How can I turn the list into one string at a time within the same for_each iteration, so that each OU yields its own account?

I’m open to other suggestions on best practice to map AWS Org accounts to multiple OUs as I’m still rather new to Terraform.
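
(One common pattern, sketched under the assumption that accountMap looks as shown above: flatten the account-to-OU map into one object per account/OU pair, then feed that to for_each:)

locals {
  account_ou_pairs = flatten([
    for account, attrs in var.accountMap : [
      for ou in attrs.OU : {
        name = "${account}-${ou}" # e.g. "Services-Development"
      }
    ]
  ])
}

resource "aws_organizations_account" "main" {
  # build a map keyed by the combined name so each pair gets its own instance
  for_each = { for pair in local.account_ou_pairs : pair.name => pair }

  name = each.value.name
  # email, parent_id, etc. omitted
}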

Posts: 1

Participants: 1

Read full topic

Orchestration to create a 1st resource, then a 2nd, and change settings of the 1st based on the 2nd resource in the same Terraform code


@mridul0709 wrote:

I need help with the two points below on an Azure deployment using Terraform. Could you please help me?

  1. How can I get the DNS servers as an output of a Terraform template deployment (azurerm_template_deployment) of Azure AD Domain Services? Please tell me what I should put in the outputs section of the attached JSON template. template.txt (2.3 KB)
  2. I need to update the DNS server names (which should come from Azure AD DS) in the vnet. For that I first create a vnet (and subnet), then create the Azure AD DS in the vnet, and after the Azure AD DS completes, update the custom DNS server settings of that same vnet. How can I achieve that? terraform.txt (1.9 KB)

I am attaching the section of Terraform code that I am using. Please let me know if I missed something.
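
(A partial sketch for point 1, with heavy caveats: it assumes the ARM template is given an output, here named dnsServers, whose value reads the domain controllers' IP addresses from the Microsoft.AAD/domainServices resource; the exact reference() property path needs checking against that resource's schema. azurerm_template_deployment then exposes template outputs as strings keyed by name:)

# outputs of azurerm_template_deployment are strings keyed by output name;
# "aadds" and "dnsServers" are hypothetical names for this sketch
output "aadds_dns_servers" {
  value = azurerm_template_deployment.aadds.outputs["dnsServers"]
}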

Posts: 2

Participants: 2

Read full topic

GCP Instance - Best way to upload an SSL cert?


@jenki99 wrote:

Hi everyone

I manage GCP infrastructure with Terraform, and one area I can’t quite figure out is the inclusion of a specific SSL certificate that I need to upload to a machine.

As a temporary workaround I just include the cert & key in the startup script in plain text, but this isn’t scalable or secure; plus, it’s shown in plain text within the GCP console custom metadata for that host.

What is the best way to do this? I had thought about adding some SCP and pulling the cert from another box in a more secure way, but that seems like a clunky approach.

I have also been looking at whether there is a Vault use case for this, to pull the file once the machine is running, but when I look at the SSL/PKI-related features of Vault they seem aimed at more complex use cases.

Thanks in advance for any advice!
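
(One hedged option, assuming a dedicated private bucket and an instance service account with read access to it: keep the cert out of instance metadata entirely and have the startup script pull it at boot. Bucket and object names below are hypothetical:)

resource "google_storage_bucket_object" "ssl_cert" {
  name   = "certs/example.crt"              # hypothetical object path
  bucket = google_storage_bucket.certs.name # hypothetical private bucket
  source = "./example.crt"
}

# the startup script then only needs one non-secret line, e.g.:
#   gsutil cp gs://<bucket>/certs/example.crt /etc/ssl/certs/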

Posts: 1

Participants: 1

Read full topic

Can I reference a file for a template variable?


@jenki99 wrote:

Hi Everyone,

I use a template to render a startup script for a Google Cloud instance, similar to this:

metadata_startup_script = templatefile("./bootstrap.sh", { var = "whatever" })

Is it possible for the value of a variable here to come from a file? I want to reference a file rather than having this particular config directly in the script itself.

I have tried to use ${file…} but that doesn’t seem to work:

Error: Invalid character

17:   metadata_startup_script = templatefile("./bootstrap.sh", { var = ${file(var.myfile)} })

This character is not used within the language.

I use ${file…} in other parts of my code without issue, but it doesn’t seem to be accepted here. Is this something that is supported?

Thanks!
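
(For reference: inside an HCL expression, file() is called directly; the ${ … } interpolation markers only belong inside quoted strings and templates, which is why the parser rejects that character there. A minimal sketch:)

metadata_startup_script = templatefile("./bootstrap.sh", {
  var = file(var.myfile) # no ${ } wrapper needed in expression context
})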

Posts: 1

Participants: 1

Read full topic

How to unlock tf state with same workspace name after permanently deleted from S3


@thazinmk wrote:

I would like to create a new workspace with the same name as one that was manually removed from the S3 bucket before.

I tried using force-unlock, and it seems that Terraform still expects the old tf state (which is now empty in S3).

Error message when running terraform force-unlock:

Failed to unlock state: failed to retrieve lock info: unexpected end of JSON input
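
(A hedged sketch, assuming an S3 backend that uses a DynamoDB table for locking: "unexpected end of JSON input" suggests a corrupt or half-written lock/digest item, which can be deleted directly. Table name and LockID below are hypothetical; scan the table first to find the real item:)

# inspect the items for the workspace's state path
aws dynamodb scan --table-name my-terraform-locks

# delete the stale item (digest items use a LockID of the form <bucket>/<key>-md5)
aws dynamodb delete-item \
  --table-name my-terraform-locks \
  --key '{"LockID": {"S": "my-bucket/env:/my-workspace/terraform.tfstate-md5"}}'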

Posts: 1

Participants: 1

Read full topic
