Channel: Terraform - HashiCorp Discuss

For syntax problem


@arcotek-ltd wrote:

Hi,

I have two complex objects that I’ve “merged” based on a unique key, which gives me the desired result. However, I now need to create attributes, and I’m having a hard time getting the syntax right when incorporating both the new attribute and the if clause in local.vnet_data.subnet_data.

locals:

locals {
  search      = "\\d{0,3}" #Finds nnn in 12/24 (Last octet)
  
  subnet_data = flatten([
    for instance_id, settings in var.network_data : [
      for netnum, subnet in settings : {
        instance_id          = "${instance_id}.${netnum + 1}"
        subnet_object        = subnet.subnets
      }
    ]
  ])
  
  vnet_data   = flatten([
    for vnet_instance_id, vnet in module.virtual_network.object : {
      instance_id     = vnet_instance_id
      virtual_network = vnet
      subnet_data     = flatten([
        for netnum, subnet in local.subnet_data : 
          subnet.subnet_object if subnet.instance_id == vnet_instance_id
          
          #Problematic line:
          net_prefix = cidrsubnet(vnet.address_block, subnet.newbits, netnum)
      ])
    }
  ])
}

var.network_data comes from a .tfvars file and contains the data used to build subnets in the azurerm_subnet resource shown below.
module.virtual_network.object just creates the vnet, with no subnets.

Subnet resource (not working):

resource "azurerm_subnet" "pool" {
  for_each             = {
    for ns, vnet in local.vnet_data : 
    "${vnet.instance_id}.${ns}" => vnet
  }
  name                  = format("%s%s%s%03d%s%03d", "nsb_", each.value.purpose, "_", each.value.octet_3, "_", each.value.octet_4)
  resource_group_name   = each.value.virtual_network.resource_group_name
  virtual_network_name  = each.value.virtual_network.name
  address_prefix        = each.value.net_prefix
}

In the azurerm_subnet.pool resource, referencing each.value.net_prefix for the address_prefix argument doesn’t work, because I don’t have the right syntax for defining the net_prefix attribute in local.vnet_data.

How do I incorporate the if “filter” and the net_prefix attribute in the same local block?
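
For what it’s worth, the usual shape is to put the object constructor first and the if clause after it, so each element of subnet_data becomes an object carrying net_prefix. A minimal sketch based on the snippets above (subnet.newbits is carried over from the problematic line and may actually live under subnet_object):

locals {
  vnet_data = flatten([
    for vnet_instance_id, vnet in module.virtual_network.object : {
      instance_id     = vnet_instance_id
      virtual_network = vnet
      subnet_data = [
        for netnum, subnet in local.subnet_data : {
          subnet_object = subnet.subnet_object
          net_prefix    = cidrsubnet(vnet.address_block, subnet.newbits, netnum)
        }
        if subnet.instance_id == vnet_instance_id
      ]
    }
  ])
}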

I have tried using the matchkeys() function instead, but as I need to create attribute values inside the “for subnet” loop, I can’t see how it would work.

I’ve also tried writing the code like:

for vnet in module.virtual_network.object : [
  for subnet in local.subnet_data : {...}
]

But I can’t work out how to use the if clause there; I just get syntax errors.

Any help greatly appreciated.
TIA.

Posts: 1

Participants: 1


"Create Mode" Update In Place Azure Database


@KehindeOwens wrote:

Hello, simple question I hope:

I created my database in Azure manually and imported it into my Terraform configuration with terraform import. With that said, when I run terraform plan it wants to update the database in place. The value it wants to update is create_mode; I assume this value is left blank when the database is created manually. So the question: will updating the create_mode value in place to its default erase the data in my database? I understand it isn’t recreating the DB, but because I’m not certain what all happens during an in-place update, I wanted to verify. Thank you.
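
If the diff turns out to be purely cosmetic, one common pattern is to tell Terraform to ignore the attribute; a sketch only, assuming an azurerm_sql_database resource (the post doesn’t name the exact resource type):

resource "azurerm_sql_database" "example" {
  # ... existing arguments ...

  lifecycle {
    # Suppress the in-place update on the imported attribute.
    ignore_changes = [create_mode]
  }
}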

Posts: 1

Participants: 1


Terraform Console examples


@Jim420 wrote:

Hi, I think it would be really helpful if someone could create a simple tutorial on terraform console. I understand it is very powerful and helpful.
I have not been able to find enough documentation.
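
In the meantime, a minimal example of the kind of session it supports: run terraform console in a working directory and evaluate expressions interactively, for instance built-in functions against literal values.

$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 2)
"10.0.2.0/24"
> element(["a", "b", "c"], 1)
"b"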

Thanks !

Posts: 1

Participants: 1


How do we code the keypair (PEM) download in AWS Terraform?


@kimdav111 wrote:

How do I code Terraform so that each VM can have a unique keypair (PEM) downloaded? The “background” below shows a single key pair in the provider block, so I am assuming there will be only one key pair for this deployment, which will include multiple VMs with unique server names. I have read that it is best practice to provide a unique key pair for each EC2 instance; how can I do this? Is there a way to add this kind of parameter in the EC2 deployment process? I did not see such a parameter. Please provide an understandable example; if you use a counter, please include it in the example (I am a novice with count).

Create keypair in the “provider block” below:

 provider "aws" {                      ### This is referred to "AWS 
 provider block"
   region     = "us-west-2"
   access_key = "my-access-key"
   secret_key = "my-secret-key"

}

Where is the key pair located after the deployment? If the provider block does not include the access_key/secret_key, how will the user access the EC2 instance once deployed? What is the default behavior if no key pair is stated in the provider block? (I am seeing example Terraform code without any key pair.)

Is there another alternative for Windows users, who are often not experienced with SSH? Is it possible for Terraform on AWS to create a standard user and password for an RDP session? Please provide an example.

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

The aws_key_pair resource below can be used for providing a key. Are the key_name/public_key values equivalent to the access_key/secret_key? Is there any way to provide a user name and password for both Windows VMs and Linux VMs using Terraform on AWS?

Must the “aws_key_pair” be created during creation of the EC2 instance only, or can we apply this resource after creation?

How do we associate this “aws_key_pair” resource with a specific EC2 instance in Terraform? Please provide an example.
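
A minimal sketch of the association (the AMI ID and key file path are placeholders): the key pair is a standalone EC2 resource rather than part of the provider block, and an instance references it by name.

resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = file("~/.ssh/id_rsa.pub") # placeholder path
}

resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder
  instance_type = "t2.micro"
  key_name      = aws_key_pair.deployer.key_name
}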

The issue with the resource below is that it would require a lot of work to gather and input the RSA key material when you have many servers. This is why I was interested in finding a way to provide a user name and password for initial logon (Terraform on Azure provides this option); the system owners can then make the user name and password complex. Note: enterprise-level policies are in place for both Linux and Windows to enforce complex passwords for local admin accounts.

What I am looking for is a way to use Terraform to create an initial user name and password for each created EC2 instance, so that system owners can then reset those credentials.

 resource "aws_key_pair" "deployer" {
   key_name   = "deployer-key"
   public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"

}

Posts: 1

Participants: 1


Alternative way for variables in S3 backend


@venu1428 wrote:

Variables are not supported in the S3 backend, and I need an alternative way to do this; can anyone make a suggestion? Looking online, some say Terragrunt, some say Python, some say workspaces/environments. We have built a dev environment for clients: from the app they enter details such as the EC2 count, AMI, and type, and all of that works, but the backend state file is the issue. Because the backend does not support variables, I need to change the bucket name and key path every time. Can someone please explain the structure and share sample code to resolve this? Thanks in advance. #23208
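
One commonly suggested approach is partial backend configuration: leave the backend block empty and supply the varying settings at init time. A sketch (bucket and key values are examples):

terraform {
  backend "s3" {}
}

terraform init \
  -backend-config="bucket=my-state-bucket" \
  -backend-config="key=clients/client-a/terraform.tfstate" \
  -backend-config="region=ap-south-1"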

Posts: 1

Participants: 1


Object value interpolation in .tf.json not working as documented


@syskill wrote:

I am working with programmatically-generated Terraform configuration files using the JSON syntax. I am trying to update the generator to emit configuration for Terraform v0.12, but can’t figure out how to interpolate a complex expression.

The documentation says,

When a JSON string is encountered in a location where arbitrary expressions are expected, its value is first parsed as a string template and then it is evaluated to produce the final result.

If the given template consists only of a single interpolation sequence, the result of its expression is taken directly, without first converting it to a string. This allows non-string expressions to be used within the JSON syntax…

But when I try to run this configuration:

  "locals": {
    "transition": {
      "days": 366,
      "storage_class": "STANDARD_IA"
    },
    "expiration": {
      "days": 732
    },
    "lifecycle_rule": {
      "enabled": true,
      "transition": [
        "${local.transition}"
      ],
      "expiration": [
        "${local.expiration}"
      ]
    }
  },
  "resource": {
    "aws_s3_bucket": {
      "alb_logs": {
        "bucket": "my-alb-logs",
        "lifecycle_rule": [
          "${local.lifecycle_rule}"
        ]
      },
      "elb_logs": {
        "bucket": "my-elb-logs",
        "lifecycle_rule": [
          "${local.lifecycle_rule}"
        ]
      }
    }
  }

Terraform errors out:

Error: Incorrect JSON value type

  on terraform.tf.json line 118, in resource.aws_s3_bucket.alb_logs.lifecycle_rule:
 118:           "${local.lifecycle_rule}"

Either a JSON object or JSON array of objects is required here, to define
arguments and child blocks.


Error: Missing required argument

  on terraform.tf.json line 118, in resource.aws_s3_bucket.alb_logs.lifecycle_rule:
 118:           "${local.lifecycle_rule}"

The argument "enabled" is required, but no definition was found.


Error: Incorrect JSON value type

  on terraform.tf.json line 118, in resource.aws_s3_bucket.alb_logs.lifecycle_rule:
 118:           "${local.lifecycle_rule}"

Either a JSON object or JSON array of objects is required here, to define
arguments and child blocks.


Error: Incorrect JSON value type

  on terraform.tf.json line 124, in resource.aws_s3_bucket.elb_logs.lifecycle_rule:
 124:           "${local.lifecycle_rule}"

Either a JSON object or JSON array of objects is required here, to define
arguments and child blocks.


Error: Missing required argument

  on terraform.tf.json line 124, in resource.aws_s3_bucket.elb_logs.lifecycle_rule:
 124:           "${local.lifecycle_rule}"

The argument "enabled" is required, but no definition was found.


Error: Incorrect JSON value type

  on terraform.tf.json line 124, in resource.aws_s3_bucket.elb_logs.lifecycle_rule:
 124:           "${local.lifecycle_rule}"

Either a JSON object or JSON array of objects is required here, to define
arguments and child blocks.

So the doc says that the result of the evaluation of the template should be an object, since it consists only of a single interpolation sequence. But judging by the error messages, it seems like Terraform is evaluating the template as a string.

Is this a bug in Terraform’s interpreter, or have I missed something?
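
For contrast, the single-interpolation rule does behave as documented in argument positions; the errors above are specific to nested block types like lifecycle_rule, where (as the error text says) the JSON syntax insists on a literal object or array of objects. A minimal sketch of the argument case (names are hypothetical):

  "locals": {
    "ports": [80, 443],
    "first_port": "${local.ports[0]}"
  }

Here first_port evaluates to the number 80 rather than a string, because the template consists of a single interpolation sequence.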

Posts: 1

Participants: 1


Error fetching network: path ... resolves to multiple networks


@m4ahmed-wgt wrote:

I am trying to create a virtual machine in vCenter. My script works in the DEV cluster but errors out when trying to create the VM in the PROD cluster.

Getting this error:

Error: error fetching network: path ‘production|production-ap|vlan31’ resolves to multiple networks

This network name actually exists on two distributed switches in the PROD cluster. How can I specify a particular distributed virtual switch to use when creating the VM? I have been searching without success for several days.

Thanks!

Posts: 1

Participants: 1



Composing variable value from other vars


@poopsmith-asf wrote:

I want to do something like this. What is the correct syntax?

variable "location" {
  description = "Location of the project in Azure"
  default     = "cc"
}

variable "env" {
  description = "infrastructure environment"
  default     = "z"
}

variable "prefix" {
  description = "Prefix for objects"
  default        = "${var.env}-${var.location}"
}
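
Variable defaults can’t reference other variables, so the prefix variable above will fail to parse; the usual pattern is a local value instead. A minimal sketch:

locals {
  prefix = "${var.env}-${var.location}"
}

Elsewhere in the configuration, local.prefix is then used wherever var.prefix would have been.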

Posts: 1

Participants: 1


Multiple environment with terraform workspace


@rayanebel wrote:

Hello Terraform users,

I’m new to Terraform and I would like to use it to deploy and control infrastructure resources (AWS) for my customer, for each of their environments. I saw that Terraform workspaces let me switch between environments and keep a separate tfstate file for each. Now I would like to know whether a resource can be excluded from deployment for a specific environment. An example: imagine I have three environments (dev, staging and prod) and I want to deploy an S3 bucket for each environment, a DocumentDB instance for prod, and one shared DocumentDB instance for dev and staging. How can I do this using workspaces with Terraform? How can I exclude some resources for a specific workspace?

I saw that we can use conditionals with Terraform, but if I’m not mistaken that works via a count property, and in my example DocumentDB has no count property, so I can’t use a map per environment with 1 if we want the resource and 0 if we don’t.
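
For what it’s worth, count is a meta-argument that Terraform provides on every resource type rather than something each provider adds, so the usual workspace conditional applies here too. A sketch, assuming the aws_docdb_cluster resource:

resource "aws_docdb_cluster" "prod" {
  # Created only in the prod workspace.
  count = terraform.workspace == "prod" ? 1 : 0

  cluster_identifier = "prod-docdb" # placeholder
  # ... other arguments ...
}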

If someone can help me structure my project, that would be really helpful, because I have never used Terraform for a real project.

If you need more context about my customer’s environment I can give you more details, or we can also discuss on Slack or somewhere else.

Thank you for your help,

Best regards,

Rayane.

Posts: 2

Participants: 2


Need complete suggested work flow for my use case


@venu1428 wrote:

We built a web app, and from there we pass variables to Terraform like below:

terraform apply -input=false -auto-approve -var ami="%ami%" -var region="%region%" -var icount="%count%" -var type="%instance_type%"

My main.tf is like below:

terraform {
  backend "s3" {
    bucket  = "terraform-007"
    key     = "key"
    region  = "ap-south-1"
    profile = "venu"
  }
}

provider "aws" {
  profile = "${var.aws_profile}"
  region  = "${var.aws_region}"
}

resource "aws_instance" "VM" {
  count         = var.icount
  ami           = var.ami
  instance_type = var.type

  tags = {
    Environment = "${var.env_indicator}"
  }
}

vars.tf:

variable "aws_profile" {
  default     = "default"
  description = "AWS profile name, as set in ~/.aws/credentials"
}

variable "aws_region" {
  type        = "string"
  default     = "ap-south-1"
  description = "AWS region in which to create resources"
}

variable "env_indicator" {
  type        = "string"
  default     = "dev"
  description = "What environment are we in?"
}

variable "icount" {
  default = 1
}

variable "ami" {
  default = "ami-54d2a63b"
}

variable "bucket" {
  default = "terraform-002"
}

variable "type" {
  default = "t2.micro"
}

output.tf:

output "ec2_public_ip" {
  value = ["${aws_instance.VM.*.public_ip}"]
}

output "ec2_private_ip" {
  value = ["${aws_instance.VM.*.private_ip}"]
}

The problem here is that the backend block does not support variables, and I need to pass those values from the app as well.

I need some alternative approach to resolve this; one option is sketched below.
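
One commonly suggested alternative is a small backend configuration file per environment, selected at init time; the file name and values below are illustrative:

# backend-dev.hcl — settings the backend block would otherwise hard-code
bucket  = "terraform-007"
key     = "dev/terraform.tfstate"
region  = "ap-south-1"
profile = "venu"

terraform init -backend-config=backend-dev.hcl

A parameterized Jenkins job could then pick the file, or pass individual -backend-config="key=value" flags, based on its parameters.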

I am using a Jenkins parameterized job to pass the values.

Can someone explain the best workflow for me?

I mean file structure, CI/CD integration, and all.

Thanks in advance.

Posts: 1

Participants: 1


Azure multiple workspace handling dependencies


@rwarnke wrote:

I’m in the process of creating a PoC for managing complex Azure IaaS with Terraform Cloud only.

I have decided on GitLab as my VCS integration and intend to use a single repository for each workspace. Workspaces will be scoped by service type or team. Currently I have a single workspace for all network resources, and multiple workspaces for the applications and core services.

I am wondering how one would handle critical changes in configurations that have dependencies from other workspaces. For example, changing the address space would fail because it is used by resources in another configuration/workspace (the example application). Is there any way in Terraform Cloud to make this visible and create a deployment/change plan? Has anyone found a solution for issues like this?
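
Not a full answer, but a common way to at least make cross-workspace dependencies explicit is the terraform_remote_state data source against the remote backend; organization and workspace names below are placeholders:

data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "my-org"
    workspaces = {
      name = "network"
    }
  }
}

# Consume an output exported by the network workspace, e.g.:
# data.terraform_remote_state.network.outputs.address_space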

Posts: 1

Participants: 1


Handling one environment per developer


@pontus-mp wrote:

Hi, we are happy Terraform users, but are looking into improving things even more. We have a fairly basic setup with all environments in a single workspace (production.tf, stage.tf, etc). This approach works fairly well, but doesn’t handle developer-specific infrastructure.

A quick and dirty solution would be to create a module that sets up everything needed for a developer, and then add a module block for each developer.

module "developer-alice" {
  source = "./modules/developer"
  name = "alice"
}

module "developer-bob" {
  source = "./modules/developer"
  name = "bob"
}
# Etc...

I don’t like this approach because that would end up in source control and changes in branches would affect everyone.

A fancier approach would be to use Terraform workspaces. Using some tricks it’s possible to size things appropriately according to environment (e.g. the sketch below), but things get hairy for the edge cases. Certain resources may be globally unique and should be shared between workspaces, access rules will differ between workspaces, developers might only need non-server resources like storage buckets, some security groups might need to reference multiple workspaces, etc.
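
The “sizing trick” alluded to, as a minimal sketch (values are illustrative):

locals {
  # Full-size only in production; minimal everywhere else, including
  # per-developer workspaces.
  instance_count = terraform.workspace == "production" ? 4 : 1
  instance_type  = terraform.workspace == "production" ? "m5.large" : "t3.micro"
}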

This could possibly be handled by using terraform_remote_state, but it seems things would get very hairy and fragile quickly. Migrating the existing setup to workspaces scares me as well. :smiley:

Are there any additional approaches we haven’t thought of yet?

Posts: 1

Participants: 1


Default provider for module


@cyrus-mc wrote:

Is there a way to say “use the default provider” for a module that defines an aliased provider?

For example, I have a module that can create Route53 entries; in our setup DNS lives in a separate account, so I have to override the provider for that specific resource:

resource "route53_record" "this" {
  provider = aws.route53
}

And therefore within the module I define:

provider "aws" {
  alias = "route53"
}

When calling this module, the user has to call it as such:

module "route53" {
  providers = {
    aws.route53 = aws.some-alias
  }
}

However, there are certain use cases where the aws.route53 provider should just be set to the default aws provider (or, based on the inputs to the module, the DNS entries won’t be created and so the secondary provider won’t actually be used). In those situations it would be beneficial for the user not to have to supply the provider when calling the module, and just have the module fall back to the default provider.
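
For reference, until that is possible, the caller can at least map the alias to its own default provider explicitly; a minimal sketch (the source path is a placeholder):

module "route53" {
  source = "./modules/route53" # placeholder

  providers = {
    # Pass the caller's default aws provider under the module's aliased name.
    aws.route53 = aws
  }
}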

Posts: 2

Participants: 2


Vsphere timeout parameter by resource is not available


@masert2 wrote:

Hello,

As referenced at https://www.terraform.io/docs/configuration-0-11/resources.html,

I have tried to use the timeouts meta-parameter on resources like the VM and snapshot, and each time terraform validate fails with: “timeouts” is not a valid argument.

Tested on vSphere 6.0 with:
$ terraform version
Terraform v0.11.14

  • provider.null v2.1.2
  • provider.vsphere v1.12.0

Is there a way to introduce this delay in vsphere provider usage?
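
For what it’s worth, not every resource implements the generic timeouts block (hence the validate error); the vsphere provider instead exposes operation-specific timeout arguments on some resources. A hedged sketch with argument names taken from the vsphere_virtual_machine documentation — verify them against your provider version:

resource "vsphere_virtual_machine" "vm" {
  # ...

  # Minutes to wait for guest shutdown.
  shutdown_wait_timeout = 10

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    # Minutes to wait for the clone operation to complete.
    timeout = 60
  }
}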

Regards

Posts: 1

Participants: 1



Terraform module rewrites


@mikek wrote:

Back when our project was using Terraform 11.x, our module design was very linear.

For instance, every time we assigned a new IAM role, we would declare a new module to do so, and we had a resource defined that would provision 1 IAM role.

The end result is that we now have N modules for N role assignments, e.g.

module "iam_user1" {
   // stuff to do first role assignment
}

module "iam_user2" {
   // stuff to do first role assignment
}

module "iam_user3" {
   // stuff to do first role assignment
} 

// and so and so forth...

Now that we have switched over to Terraform 0.12, we want to delete the old module declarations and create a single module whose resource uses the for_each meta-argument to create however many IAM roles are needed.

module "iam_all_users" {
  // pass in a map(map(string)) and have for_each do its thing
}

I’ve basically implemented the second variant (using for_each), but a new problem has arisen:

The problem:

When running terraform plan, it shows that the new module employing for_each is going to create however many resources, and that’s great: I’ve simply moved the IAM declarations from a bunch of modules into one.

The problem is that once I delete the old modules from the Terraform source code, the plan also wants to destroy all of the old modules’ resources, which, if I’m not mistaken, essentially ends up being a destroy followed by a re-create.

Basically, my concern is the destroy/recreate behavior when rewriting modules in this way. I’d like to do this for many types of resources (not just IAM but also things like instances, disks, and so on), and I can see complications if everything ends up destroyed and then re-created.

Is there some way to rewrite modules in a way similar to how I’ve described above but not have to re-create everything, or is the nature of this kind of re-write intended to produce this kind of consequence?
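
For what it’s worth, the usual way to avoid the destroy/re-create in a refactor like this is terraform state mv, which moves the existing objects to their new addresses so the plan sees no change; the addresses below are hypothetical:

terraform state mv \
  'module.iam_user1.aws_iam_role.this' \
  'module.iam_all_users.aws_iam_role.this["user1"]'

Repeated once per old module (scriptable with a small loop), the follow-up plan should then show no creates or destroys for the moved roles.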

Posts: 2

Participants: 2


Adding a secondary IP for a single node in a module


@staticprop wrote:

Hello, I have created a module that allows me to create systems with a single interface in AWS. I want to use this module to build a cluster with a single shared IP. Is there a way to add a secondary IP to one of the servers during the provisioning process?

Here’s what the module looks like.

resource "aws_network_interface" "node" {
  count             = length(var.node_ips)
  subnet_id         = var.subnet_id
  private_ips       = [var.node_ips[count.index]]
  security_groups   = [var.security_group_id]
  source_dest_check = "true"
  description       = element(var.node_host, count.index)

  tags = merge(
    local.common_tags,
    {
      "Name" = element(var.node_host, count.index)
    },
  )
}

resource "aws_instance" "node_" {
  count         = length(var.node_ips)
  instance_type = var.instance_type
  ami           = var.ami
  key_name      = var.key_id

  network_interface {
    device_index         = 0
    network_interface_id = element(aws_network_interface.node.*.id, count.index)
  }

  tags = merge(
    local.common_tags,
    {
      "Name" = element(var.node_host, count.index)
    },
  )
}

module "nodes" {
  source                      = "./modules/my_module"
  instance_type               = "t2.small"
  volume_size                 = "16"
  ami                         = "${module.amis.latest_ubuntu_id}"
  key_pair_id                 = "${module.vpc.key_id}"
  subnet_id                   = "${module.subnet_id}"
  security_group_allow_all_id = "${module.vpc.security_group_id}"
  availability_zone           = "${module.vpc.az}"

  node_host = ["system1", "system2", "system3", "system4", "system5", "system6"]

  node_ips = {
    "0" = "x.x.x.1"
    "1" = "x.x.x.2"
    "2" = "x.x.x.3"
    "3" = "x.x.x.4"
    "4" = "x.x.x.5"
    "5" = "x.x.x.6"
  }
}

Ordinarily I would just add the following to the aws_network_interface resource to get the secondary IP:

private_ips = [var.node_ips[count.index], var.node_vip[count.index]]

If I applied this to the module here, it would try to add the same IP to each node’s interface and fail. Is there any trick to make this work? (See the sketch after the layout below.)

When everything is up, I want it to look like the following:

system1, eth0: x.x.x.1 and x.x.x.10 (vip)
system2, eth0: x.x.x.2
system3, eth0: x.x.x.3
system4, eth0: x.x.x.4
system5, eth0: x.x.x.5
system6, eth0: x.x.x.6
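
A hedged sketch of one trick: make the secondary IP conditional on the node index, so only the first interface carries the VIP. This assumes the VIP is passed as a single string variable (var.node_vip here is hypothetical):

resource "aws_network_interface" "node" {
  count     = length(var.node_ips)
  subnet_id = var.subnet_id

  # Only node 0 gets the shared VIP as a secondary address.
  private_ips = count.index == 0 ? [var.node_ips[count.index], var.node_vip] : [var.node_ips[count.index]]

  # ... remaining arguments as in the module above ...
}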

Posts: 1

Participants: 1


Collate two maps, resetting the key at the end of the inner loop


@arcotek-ltd wrote:

Hi all,

I have two list-of-map variables, vnet and subnet, that I iterate over in a nested for loop. When the inner loop gets to the end, I want its counter to reset to 0; however, it continues to count up.

Here is my TF code:

variable "vnet" {
  default = [
    {"instance_id" = "1.1"},
    {"instance_id" = "1.2"},
    {"instance_id" = "2.1"},
  ]
}

variable "subnet" {
  default = [
    {"instance_id" = "1.1.1"},
    {"instance_id" = "1.1.2"},
    {"instance_id" = "1.1.3"},
    {"instance_id" = "1.1.4"},
    {"instance_id" = "1.1.5"},
    {"instance_id" = "1.2.6"},
    {"instance_id" = "2.1.7"},
    {"instance_id" = "2.1.8"},
  ]
}

locals {
  test = [
    for v_c, v in var.vnet : [
      for s_c, s in var.subnet : {
        v = v.instance_id
        s = s.instance_id
        
        vf = split(".", v.instance_id)[1]
        filter = "${split(".", s.instance_id)[0]}.${split(".", s.instance_id)[1]}"
        
        label = "foo_${split(".", v.instance_id)[1]}_${s_c + 1}"
      }
      if "${split(".", s.instance_id)[0]}.${split(".", s.instance_id)[1]}" == v.instance_id   
    ]
  ]
}

output "testing" {
  value = local.test
}

And this is the resulting output:

testing = [
  [
    {
      "filter" = "1.1"
      "label" = "foo_1_1"
      "s" = "1.1.1"
      "v" = "1.1"
      "vf" = "1"
    },
    {
      "filter" = "1.1"
      "label" = "foo_1_2"
      "s" = "1.1.2"
      "v" = "1.1"
      "vf" = "1"
    },
    {
      "filter" = "1.1"
      "label" = "foo_1_3"
      "s" = "1.1.3"
      "v" = "1.1"
      "vf" = "1"
    },
    {
      "filter" = "1.1"
      "label" = "foo_1_4"
      "s" = "1.1.4"
      "v" = "1.1"
      "vf" = "1"
    },
    {
      "filter" = "1.1"
      "label" = "foo_1_5"
      "s" = "1.1.5"
      "v" = "1.1"
      "vf" = "1"
    },
  ],
  [
    {
      "filter" = "1.2"
      "label" = "foo_2_6"
      "s" = "1.2.6"
      "v" = "1.2"
      "vf" = "2"
    },
  ],
  [
    {
      "filter" = "2.1"
      "label" = "foo_1_7"
      "s" = "2.1.7"
      "v" = "2.1"
      "vf" = "1"
    },
    {
      "filter" = "2.1"
      "label" = "foo_1_8"
      "s" = "2.1.8"
      "v" = "2.1"
      "vf" = "1"
    },
  ],
]

When vf changes, I want s_c to reset.

To hopefully illustrate my point, I’ve written a similar thing in PowerShell:

$vnet = @(
  @{instance_id = "1.1"}
  @{instance_id = "1.2"}
  @{instance_id = "2.1"}
)

$subnet = @(
  @{"instance_id" = "1.1.1"}
  @{"instance_id" = "1.1.2"}
  @{"instance_id" = "1.1.3"}
  @{"instance_id" = "1.1.4"}
  @{"instance_id" = "1.1.5"}
  @{"instance_id" = "1.2.6"}
  @{"instance_id" = "2.1.7"}
  @{"instance_id" = "2.1.8"}
)

foreach($v in $vnet)
{
  $c = 1
  foreach($s in $subnet | Where-Object{$_.instance_id.substring(0,3) -eq $v.instance_id})
  {
    $vnet_instance = $s.instance_id.split(".")[1]
    
    Write-Output("foo_{0}_{1}" -f $vnet_instance, $c)
    $c++
  }
  "==========="
}

With the following output:

foo_1_1
foo_1_2
foo_1_3
foo_1_4
foo_1_5
===========
foo_2_1
===========
foo_1_1
foo_1_2
===========

As demonstrated, the counter ($c) resets when the end of the inner, filtered collection ($subnet) is reached.

I am sure this is programming 101, but I can’t work out how to do it in HCL.
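
One way to get a resetting counter in HCL is to filter first and then index the filtered list: the index variable of a for expression always counts from zero over whatever collection it is given. A minimal sketch based on the code above:

locals {
  test = [
    for v in var.vnet : [
      for s_c, s in [
        for s in var.subnet : s
        if "${split(".", s.instance_id)[0]}.${split(".", s.instance_id)[1]}" == v.instance_id
      ] : {
        label = "foo_${split(".", v.instance_id)[1]}_${s_c + 1}"
      }
    ]
  ]
}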

Any help or pointers would be gratefully received.

TIA.

Posts: 3

Participants: 2


Terraform 0.12.13 released!


@pkolyvas wrote:

0.12.13 (October 31, 2019)

UPGRADE NOTES:

  • Remote backend local-only operations: Previously the remote backend was not correctly handling variables marked as “HCL” in the remote workspace when running local-only operations like terraform import, instead interpreting them as literal strings as described in #23228.

    That behavior is now corrected in this release, but in the unlikely event that an existing remote workspace contains a variable marked as “HCL” whose value is not valid HCL syntax these local-only commands will now fail with a syntax error where previously the value would not have been parsed at all and so an operation not relying on that value may have succeeded in spite of the problem. If you see an error like “Invalid expression for var.example” on local-only commands after upgrading, ensure that the remotely-stored value for the given variable uses correct HCL value syntax.

    This does not affect true remote operations like terraform plan and terraform apply, because the processing of variables for those always happens in the remote system.

BUG FIXES:

  • config: Fix regression where self wasn’t properly evaluated when using for_each (#23215)
  • config: dotfiles are no longer excluded when copying existing modules; previously, any dotfile/dir was excluded in this copy, but this change makes the local copy behavior match go-getter behavior (#22946)
  • core: Ensure create_before_destroy ordering is enforced with dependencies between modules (#22937)
  • core: Fix some destroy-time cycles due to unnecessary edges in the graph, and remove unused resource nodes (#22976)
  • backend/remote: Correctly handle remotely-stored variables that are marked as “HCL” when running local-only operations like terraform import. Previously they would produce a type mismatch error, due to misinterpreting them as literal strings. (#23229)

Posts: 1

Participants: 1


Use a map as a block


@jbonnier wrote:

Hi there,

I’m trying to find a way to use a variable as a block.

I’m working on an OpsWorks module for my organization and I’m stuck on the custom_cookbooks_source block of aws_opsworks_stack.

My idea goes like this.

resource "aws_opsworks_stack" "stack" {
  ...
  custom_cookbooks_source {
      >>>> dump my map variable somehow
   }
}

The reason I want to do this is that I would like to pass a map variable to my module and use it as key = val pairs in the block.

Since I don’t know whether the source will be git or something else, user/pass or an SSH key, I don’t know how to deal with this.
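
A hedged sketch of one way: a dynamic block iterating over a one-element list expands the map into the block, with lookup() covering keys that may be absent (argument names follow the custom_cookbooks_source documentation):

resource "aws_opsworks_stack" "stack" {
  # ...

  dynamic "custom_cookbooks_source" {
    for_each = [var.custom_cookbooks_source]

    content {
      type     = custom_cookbooks_source.value.type
      url      = custom_cookbooks_source.value.url
      username = lookup(custom_cookbooks_source.value, "username", null)
      password = lookup(custom_cookbooks_source.value, "password", null)
      ssh_key  = lookup(custom_cookbooks_source.value, "ssh_key", null)
    }
  }
}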

Any idea?

Cheers!

Posts: 1

Participants: 1

