Channel: Terraform - HashiCorp Discuss

Reference of a data source in a data source: Invalid for_each argument


Hello,

I’ve found many great answers on this forum, thank you!

Here, I’m trying to create resources based on data retrieved from Vault.
The first data source retrieves a list of secrets from Vault with a scope name and the second one retrieves the content (key/value) of each secret.

data "vault_kv_secrets_list_v2" "scope_secrets_list" {
  mount = "kv2"
  name  = local.vault_scope_path
}

data "vault_kv_secret_v2" "scope_secrets" {
  for_each = nonsensitive(toset(data.vault_kv_secrets_list_v2.scope_secrets_list.names))
  mount    = "kv2"
  name     = "${local.vault_scope_path}/${each.key}"
}

When doing so in a child module, I get the following error:

Error: Invalid for_each argument

on ../../modules/vault_databricks_secret_scope/main.tf line 26, in data "vault_kv_secret_v2" "scope_secrets":

26:   for_each = nonsensitive(toset(data.vault_kv_secrets_list_v2.scope_secrets_list.names))

The "for_each" set includes values derived from resource attributes that
cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.

If I include the data sources in the root module, the planning phase manages to retrieve the data sources properly without any error and builds the resources with the result.
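For reference, a minimal sketch of that root-module arrangement (the module block and the secret_names input are illustrative, not the actual module interface): keeping the list lookup in the root and passing the resulting set into the child module means the for_each keys are already known while planning.

# Root module (sketch): the list is resolved here, at plan time.
data "vault_kv_secrets_list_v2" "scope_secrets_list" {
  mount = "kv2"
  name  = local.vault_scope_path
}

module "vault_databricks_secret_scope" {
  source = "../../modules/vault_databricks_secret_scope"

  # Hypothetical input variable declared by the child module.
  secret_names = nonsensitive(toset(data.vault_kv_secrets_list_v2.scope_secrets_list.names))
}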

Would you know what’s wrong here please?
Thank you!

1 post - 1 participant

Read full topic


Request for optional switch in modules


When creating a module we can use count or for_each to make it optional, e.g.:

module "my_module" {
  ...
  for_each = var.enable_my_module ? {y="Y"} : {}
  ...
}

or

module "my_module" {
  ...
  count = var.enable_my_module ? 1 : 0
  ...
}

But the issue with that is that we end up with spurious indexes, like my_module[0] or my_module["y"], in the resulting addresses.
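For illustration, with the count form above, anything consuming the module has to index into it even though at most one instance can ever exist (the output name here is hypothetical):

# Hypothetical output in the calling configuration.
output "my_module_id" {
  # With count, the single instance lives at index 0 and may not exist at all:
  value = var.enable_my_module ? module.my_module[0].id : null

  # Equivalent, using one() on a splat of the zero-or-one instances:
  # value = one(module.my_module[*].id)
}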

Unless I’ve missed it, it’d be great to have a simple “deploy_if” switch to put in modules instead of the above artificial constructs, e.g.:

module "my_module" {
  ...
  deploy_if = var.enable_my_module
  ...
}

It sort of fits with “depends_on” in modules, which is likewise a control on how the module is used rather than on what it deploys.

1 post - 1 participant

Read full topic

Conditionally pass complex object


I’m using the terraform-aws-autoscaling module and am trying to configure a simple boolean toggle for whether the var.instance_refresh value is set (used in the module here).

In my implementation, I have the following:

  instance_refresh = local.refresh ? local.default_instance_refresh : null

Where

  default_instance_refresh = {
    strategy = "Rolling"
    preferences = {
      min_healthy_percentage = 100
      max_healthy_percentage = 110
    }
  }

If I set local.refresh = false, I get an error, because that causes null to be passed to the public module, which blindly runs length(var.instance_refresh).

Alternatively, replacing my null with {} gives a type mismatch with my default_instance_refresh.

What would be the recommended approach to have a boolean value toggle whether instance_refresh is used?

1 post - 1 participant

Read full topic

Cross-account IAM role stopped working with Terraform


Hello,

Could someone help me understand an issue I encountered when I used Terraform to recreate an IAM role that was originally created manually?

I have two AWS accounts: A (111111111111) and B (222222222222).

Account A has a role named Role_1 with a policy that allows assuming roles located in Account B. Here is the policy from Account A:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::222222222222:role/Role_2"
  }
}

At the same time, Account B has a role called Role_2 with a trust policy as shown below, which allows Account A to assume this role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/Role_1"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

All this functionality worked fine until I attempted to create Role_1 with the same policy using Terraform. Below is the Terraform code that achieves what I described above:

resource "aws_iam_policy" "assume_sts_policy" {
  name        = "sts-policy"
  policy      = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : "sts:AssumeRole",
        "Resource" : "arn:aws:iam::111111111111:role/Role_1"
      }
    ]
  })
}

resource "aws_iam_role" "sts_role" {
  name                 = "Role_1"
  assume_role_policy   = data.aws_iam_policy_document.main_role_trusted_entities.json
  managed_policy_arns  = [
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    aws_iam_policy.assume_sts_policy.arn
  ]
}

After creation, I started to receive an error:

An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::111111111111:role/Role_1 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::222222222222:role/Role_2

I spent a lot of time until I found a way to fix it. In Account B, I modified the trust policy to look like the one below, and then it finally worked:

{
  "Effect": "Allow",
  "Condition": {
    "StringEquals": {
      "aws:PrincipalArn": "arn:aws:iam::111111111111:role/Role_1"
    }
  },
  "Principal": {
    "AWS": "arn:aws:iam::111111111111:root"
  },
  "Action": "sts:AssumeRole"
}

So I had to use :root instead of the role name, plus a condition that checks aws:PrincipalArn.

1 post - 1 participant

Read full topic

Is cross-stack referencing possible?


I am using CDK for Terraform (cdktf) and Python to build AWS resources.

According to the requirements of my work, the structure of my project includes

main.py,
vpc.py,
s3.py,
ec2.py,

and so on, where different parts are defined in different stacks in separate files and are finally implemented together in main.py.

I’m using S3 as the backend.

In this case, how can I reference resource IDs across files (stacks)? For example, how can I reference the VPC and subnet IDs defined in vpc.py from ec2.py?

I would be very grateful for a reply!

1 post - 1 participant

Read full topic

Terraform Cloudflare


Hello,

I’m using Cloudflare’s Terraform provider to handle DNS Records for all my domains.
With more than 60 domains and a large number of resources (DNS records) to manage, I started to get rate-limit errors from Cloudflare.
Also, applying Terraform changes takes too long.

I’m thinking about splitting my Terraform repository into multiple ones. Can I do this without having to recreate all the resources from scratch?
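For reference, one way existing records can be adopted by a new, smaller root module without recreating them is Terraform's declarative import (a sketch only, assuming Terraform 1.5 or later; the record name, attributes, and ID are placeholders, so check the Cloudflare provider docs for the exact import ID format):

# In the new repository: declare the resource as usual, then adopt the existing record.
import {
  to = cloudflare_record.www_example_com
  id = "<zone_id>/<record_id>"   # placeholder; see the provider's import documentation
}

resource "cloudflare_record" "www_example_com" {
  # Attribute names depend on the provider version in use.
  zone_id = var.zone_id
  name    = "www"
  type    = "CNAME"
  value   = "example.com"
}

The old repository would then drop those records from its state (for example with terraform state rm) rather than destroying them.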

Here’s a summary of my current structure:

Root
  • domain1
    • main.tf
    • vars.tf
    • terraform.tf
  • domain2
    • main.tf
    • vars.tf
    • terraform.tf
  • main.tf
  • vars.tf

Thanks

1 post - 1 participant

Read full topic

Create resources by iterating through the values of a map, and override a single value in the map using a tfvars file


I have implemented the same structure as this post:
Override a single value in a map.

I am able to create a number of resources of each type. However, I would also like the ability to override the instance_count values either from the command line or, preferably, in a .tfvars file, without having to specify all the other attributes as well. Any help would be appreciated!

#main.tf
locals {
  # A list of objects with one object per instance.
  instances = flatten([
    for image_key, image in var.images : [
      for index in range(image.instance_count) : {
        flavor            = image.flavor
        instance_index    = index
        image_key         = image_key
        instance_name     = image.instance_name
      }
    ]
  ])
  
}

resource "openstack_compute_instance_v2" "instance" {
  for_each = {
    # Generate a unique string identifier for each instance
    for inst in local.instances : format("%s-%02d", inst.image_key, inst.instance_index + 1) => inst
  }

  image_name        = each.value.image_name
  flavor_id         = each.value.flavor
  name              = each.key
  security_groups   = var.security_groups
  availability_zone = each.value.availability_zones
  key_pair          = "foptst"
  network {
    name = var.network_name
  }
}

#variables.tf
variable "images" {
    type = map(object({
        instance_name = string
        flavor = string
        instance_count = string 
        
    }))
    default = {
        "web" = {
            instance_name = "web"
            flavor = "1"
            "instance_count" = 4
        }
        "worker" = {
            instance_name = "wrk"
            flavor = "2"
            "instance_count" = 3
        } 
        "db" = {
            instance_name  = "sql"
            flavor = "3"
            "instance_count" = 2
        }
    }
}

#terraform.tfvars
images = {
  db = {
    instance_name  = "sql"
    flavor         = "1"
    instance_count = 0
  },
  web = {
    instance_name  = "web"
    flavor         = "2"
    instance_count = 0
  },
  worker = {
    instance_name  = "wrk"
    flavor         = "3"
    instance_count = 1
  }
}
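For reference, one pattern that allows overriding just the counts (a sketch; the variable name instance_count_overrides is made up for illustration) is to keep var.images as the full definition and merge a small override map on top of it:

# variables.tf (sketch)
variable "instance_count_overrides" {
  type    = map(number)
  default = {}
}

# main.tf (sketch)
locals {
  effective_images = {
    for key, image in var.images :
    key => merge(image, {
      instance_count = lookup(var.instance_count_overrides, key, image.instance_count)
    })
  }
}

# terraform.tfvars (sketch): only the counts that differ need to be listed.
# instance_count_overrides = {
#   db  = 0
#   web = 0
# }

The instances local would then iterate over local.effective_images instead of var.images, and the same override map can also be supplied on the command line with -var.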

1 post - 1 participant

Read full topic

Configure backend remote s3 bucket access


I’m using a GitOps-style deployment for some AWS SSM resources deployed with Terraform from a cloud-based VCS tool.

I want to store the state file in a remote S3 bucket in a different account from the one I’m deploying to, but I’m not sure how to allow the access or how to define this in Terraform.

I’m connecting to the target account via an OIDC connection defined in the pipeline, which then assumes a role within that account.

How can I configure Terraform to put the state file in a different AWS account?
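For reference, the S3 backend can authenticate to the state bucket separately from the provider used for deployment, typically by assuming a role in the account that holds the bucket. A minimal sketch (bucket, key, region, and role ARN are placeholders; recent Terraform versions use the nested assume_role block, older ones a top-level role_arn argument):

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # bucket in the state account
    key    = "ssm/terraform.tfstate"
    region = "eu-west-1"

    # Role in the state account that the pipeline's OIDC identity is allowed to assume.
    assume_role {
      role_arn = "arn:aws:iam::STATE_ACCOUNT_ID:role/terraform-state-access"
    }
  }
}

The role in the state account also needs a policy granting s3:ListBucket, s3:GetObject, and s3:PutObject on the bucket (plus DynamoDB permissions if state locking is used), and a trust policy allowing the deployment role to assume it.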

1 post - 1 participant

Read full topic


Passing modules in a list as an input


Hi,

I’m trying to pass a list of modules into another module. For some unknown reason, when passing 2 modules it works properly, but for more than 2 it throws an error:
all list elements must have the same type.

module "D" {
  modules = ["${module.A}","${module.B}","${module.C}"]
}

Please advise on how to overcome this issue.

Thanks,
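For reference, a sketch of one explanation, assuming module "D" declares its modules input with a list(...) type constraint: each module call produces an object whose type is determined by its outputs, so three modules with different output sets cannot be converted to a single list element type. Loosening the constraint avoids the conversion:

# Inside module D (sketch; assumes the current declaration uses list(...)).
variable "modules" {
  # "any" accepts a tuple whose elements have different object types.
  type = any
}

# In the calling configuration, the legacy "${...}" wrapping is also unnecessary:
module "D" {
  source  = "./modules/D"   # hypothetical path
  modules = [module.A, module.B, module.C]
}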

1 post - 1 participant

Read full topic

Reference Amazon-created IPv6 CIDR block with a smaller prefix


I want to create a VPC with an Amazon-provided IPv6 CIDR block, then carve out a smaller /64 CIDR block (the Amazon-provided VPC block is a /56) and assign it to a subnet inside the VPC. Below is my code; it throws an error:

│ Error: "2406:da1a:415:1800::/56/64" is not a valid CIDR block: invalid CIDR address: 2406:da1a:415:1800::/56/64
│
│   with aws_subnet.dbvpczsrdstestvpcanil_cmcsubnetexternal_4F398891,
│   on cdk.tf.json line 415, in resource.aws_subnet.dbvpczsrdstestvpcanil_cmcsubnetexternal_4F398891:
│   415: "ipv6_cidr_block": "${aws_vpc.dbvpczsrdstestvpcanil_cmcvpc_10707888.ipv6_cidr_block}/64",

Can someone look at the code and tell me how I can achieve the above requirement?

test_vpc = Vpc(
    self,
    "test-vpc",
    cidr_block="10.10.0.0/16",
    enable_dns_hostnames=True,
    tags={**DEFAULT_TAGS, "Name": f"{stack_id}-test"},
    assign_generated_ipv6_cidr_block=True,
)
Subnet(
    self,
    "subnet-external",
    tags={**DEFAULT_TAGS_CMC, "Name": f"{stack_id}-external"},
    cidr_block="10.10.0.0/16",
    availability_zone="ap-south-1",
    vpc_id=vpc.id,
    map_public_ip_on_launch=False,
    depends_on=[test_vpc],
    ipv6_cidr_block=test_vpc.ipv6_cidr_block.split("/")[0] + "/64",
)
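For comparison, in plain HCL the usual way to carve a /64 out of the VPC's /56 is the cidrsubnet() function rather than string manipulation; CDKTF exposes the same function as Fn.cidrsubnet. A sketch (resource names are illustrative):

resource "aws_vpc" "test_vpc" {
  cidr_block                       = "10.10.0.0/16"
  assign_generated_ipv6_cidr_block = true
}

resource "aws_subnet" "external" {
  vpc_id     = aws_vpc.test_vpc.id
  cidr_block = "10.10.0.0/24"

  # /56 plus 8 new bits = /64; the third argument picks which /64 within the /56.
  ipv6_cidr_block = cidrsubnet(aws_vpc.test_vpc.ipv6_cidr_block, 8, 0)
}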

1 post - 1 participant

Read full topic

Dynamic Module Source (variable for reference)


Did anyone find a solution to this?
We are using CI/CD, so worst case I could make this part of a wrapper script, but ideally I would be able to change the source URL of my module to something dynamic, like:

module "ec2" {
  source = "git::git@gitlab.com:terraform-modules/aws/ec2.git?ref=${var.environment_type}"

2 posts - 2 participants

Read full topic

Ignore error from data source


How do you make output shut up and carry on if there’s nothing to return?
I’m getting:

│ Error: reading EC2 Network Interface: empty result
│ 
│   with data.aws_network_interface.netiface,
│   on main.tf line 55, in data "aws_network_interface" "netiface":
│   55: data "aws_network_interface" "netiface" {

For

data "aws_network_interface" "netiface" {
  filter {
    name   = "tag:aws:ecs:serviceName"
    values = ["my-service"]
  }
}
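For reference, a sketch of one workaround (not from the thread): the plural data source returns an empty list of IDs instead of failing when nothing matches the filter.

data "aws_network_interfaces" "netifaces" {
  filter {
    name   = "tag:aws:ecs:serviceName"
    values = ["my-service"]
  }
}

# ids is simply an empty list when no interface exists yet.
output "eni_ids" {
  value = data.aws_network_interfaces.netifaces.ids
}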

Thanks.

3 posts - 2 participants

Read full topic

Nested loop with foreach


Hi all,

I have been following you for a long time and you have helped me many times… thank you very much!

So… I have a little problem and I can’t find the solution.

I need to deploy multiple VMs on vSphere, each with multiple disks on multiple datastores.
I have built this module with the tfvars file below; everything is OK, but I cannot get the datastore ID from data.tf… :frowning:

When I run terraform plan, the result looks OK, but the datastore ID is not resolved by Terraform; it only takes the literal value from the tfvars (so the disk will not be deployed in the intended datastore). See the datastore_id values in the plan output below.

terraform plan --var site=US --var-file=prod.tfvars -out=prod.plan

# module.vm.vsphere_virtual_machine.vm["server1"] will be created
+ resource "vsphere_virtual_machine" "vm" {
    + annotation = "VM1 PRod"
    + boot_retry_delay = 10000
    + change_version = (known after apply)
    + cpu_hot_add_enabled = true
    + cpu_limit = -1
    + cpu_share_count = (known after apply)
    + cpu_share_level = "normal"
    + datastore_id = "datastore-1179"
    + default_ip_address = (known after apply)
    + ept_rvi_mode = "automatic"
    + extra_config_reboot_required = true
    + firmware = "efi"
    + force_power_off = true
    + guest_id = "sles15_64Guest"
    + guest_ip_addresses = (known after apply)
    + hardware_version = (known after apply)
    + host_system_id = "host-1175"
    + hv_mode = "hvAuto"
    + id = (known after apply)
    + ide_controller_count = 2
    + imported = (known after apply)
    + latency_sensitivity = "normal"
    + memory = 2048
    + memory_hot_add_enabled = true
    + memory_limit = -1
    + memory_share_count = (known after apply)
    + memory_share_level = "normal"
    + migrate_wait_timeout = 30
    + moid = (known after apply)
    + name = "server1"
    + num_cores_per_socket = 1
    + num_cpus = 8
    + power_state = (known after apply)
    + poweron_timeout = 300
    + reboot_required = (known after apply)
    + resource_pool_id = "resgroup-1177"
    + run_tools_scripts_after_power_on = true
    + run_tools_scripts_after_resume = true
    + run_tools_scripts_before_guest_shutdown = true
    + run_tools_scripts_before_guest_standby = true
    + sata_controller_count = 0
    + scsi_bus_sharing = "noSharing"
    + scsi_controller_count = 4
    + scsi_type = "pvscsi"
    + shutdown_wait_timeout = 3
    + storage_policy_id = (known after apply)
    + swap_placement_policy = "inherit"
    + sync_time_with_host = true
    + tools_upgrade_policy = "manual"
    + uuid = (known after apply)
    + vapp_transport = (known after apply)
    + vmware_tools_status = (known after apply)
    + vmx_path = (known after apply)
    + wait_for_guest_ip_timeout = 0
    + wait_for_guest_net_routable = true
    + wait_for_guest_net_timeout = 0

    + clone {
        + template_uuid = "423deb75-fdf1-7553-b080-bd617bb1281d"
        + timeout = 30

        + customize {
            + dns_server_list = [
                + "10.237.216.7",
                + "10.237.208.5",
                + "10.237.111.8",
              ]
            + ipv4_gateway = "10.237.113.1"
            + timeout = 10

            + linux_options {
                + domain = "adgr.net"
                + host_name = "server1"
                + hw_clock_utc = true
              }

            + network_interface {
                + ipv4_address = "10.237.113.186"
                + ipv4_netmask = 24
              }
          }
      }

    + disk {
        + attach = false
        + controller_type = "scsi"
        + datastore_id = ""
        + device_address = (known after apply)
        + disk_mode = "persistent"
        + disk_sharing = "sharingNone"
        + eagerly_scrub = false
        + io_limit = -1
        + io_reservation = 0
        + io_share_count = 0
        + io_share_level = "normal"
        + keep_on_remove = false
        + key = 0
        + label = "disk0"
        + path = (known after apply)
        + size = 80
        + storage_policy_id = (known after apply)
        + thin_provisioned = true
        + unit_number = 0
        + uuid = (known after apply)
        + write_through = false
      }

    + disk {
        + attach = false
        + controller_type = "scsi"
        + datastore_id = "datastore1"
        + device_address = (known after apply)
        + disk_mode = "persistent"
        + disk_sharing = "sharingNone"
        + eagerly_scrub = false
        + io_limit = -1
        + io_reservation = 0
        + io_share_count = 0
        + io_share_level = "normal"
        + keep_on_remove = false
        + key = 0
        + label = "0"
        + path = (known after apply)
        + size = 20
        + storage_policy_id = (known after apply)
        + thin_provisioned = true
        + unit_number = 10
        + uuid = (known after apply)
        + write_through = false
      }

    + disk {
        + attach = false
        + controller_type = "scsi"
        + datastore_id = "LocalStorage103"
        + device_address = (known after apply)
        + disk_mode = "persistent"
        + disk_sharing = "sharingNone"
        + eagerly_scrub = false
        + io_limit = -1
        + io_reservation = 0
        + io_share_count = 0
        + io_share_level = "normal"
        + keep_on_remove = false
        + key = 0
        + label = "1"
        + path = (known after apply)
        + size = 200
        + storage_policy_id = (known after apply)
        + thin_provisioned = true
        + unit_number = 30
        + uuid = (known after apply)
        + write_through = false
      }

    + network_interface {
        + adapter_type = "vmxnet3"
        + bandwidth_limit = -1
        + bandwidth_reservation = 0
        + bandwidth_share_count = (known after apply)
        + bandwidth_share_level = "normal"
        + device_address = (known after apply)
        + key = (known after apply)
        + mac_address = (known after apply)
        + network_id = "network-12"
      }
  }

Plan: 2 to add, 0 to change, 0 to destroy.

Below is my configuration.

prod.tfvars

vsphere_server         = "192.168.10.100"
vsphere_unverified_ssl = true
datacenter             = "Lab.local"
compute_cluster        = "Lab.local"
change                 = "CH0000999"

virtual_machines = {
  server1 = {
    datacenter          = "Lab.local"
    compute_cluster     = "Lab.local"
    rspool              = "max"
    network_interface   = "VM Network"
    vmnotes             = "VM1 PRod"
    category            = "dbserver"
    system_cores        = 8
    system_memory       = 2048
    system_disk         = 80
    paging              = 10
    datadisk            = 30
    system_ipv4_address = "10.237.113.186"
    system_ipv4_netmask = "24"
    system_ipv4_gateway = "10.237.113.1"
    datastore           = "LocalStorage103"
    datastore_a         = "LocalStorage103"
    datastore_b         = "datastore1"
    folder              = "/"
    system_name         = "server1"
    system_domain       = "ad1.local"
    dns_server_list     = ["192.168.1.200", "192.168.1.201", "192.168.1.202"]
    disks = [
      {
        vdisklabel  = "server1-disk16"
        vdisksize   = "20"
        vdisknumber = "10"
        vdatastore  = "datastore1"
      },
      {
        vdisklabel  = "server2-disk30"
        vdisksize   = "20"
        vdisknumber = "30"
        vdatastore  = "datastore2"
      },
    ]
  }
}

variables.tf

variable "vsphere_user" {}
variable "vsphere_password" {}
variable "vsphere_server" {}
variable "vsphere_unverified_ssl" {}
variable "compute_cluster" {}
variable "datacenter" {}
variable "site" {}
variable "change" {}
variable "virtual_machines" {}

data.tf

locals {
  vds      = var.site == "US" ? "vds_001" : "vds_002"
  hostesx  = var.site == "US" ? "192.168.10.103" : "192.168.10.104"
  vmgroup  = var.site == "US" ? "RheLicenseVM" : "RheLicenseVM"
  template = var.site == "US" ? "MicroOS-Temp" : "RHEL8_ZH_template"

  virtm = { for_each = var.virtual_machines }

  flat_sandboxes = {
    for sandbox in var.virtual_machines :
    sandbox.system_name => sandbox
  }

  network_subnets = flatten([
    for network_key, network in var.virtual_machines : [
      for subnet in network.disks : {
        network_key = network_key
        purpose     = subnet.vdatastore
        namedisk    = subnet.vdisklabel
        sizedisk    = subnet.vdisksize
        numberdisk  = subnet.vdisknumber
      }
    ]
  ])
}

data "vsphere_datacenter" "datacenter" {
  name = var.datacenter
}

data "vsphere_resource_pool" "pool" {
  for_each = var.virtual_machines

  name          = each.value.rspool
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_virtual_machine" "template" {
  name          = "MicroOS-Temp"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_datastore" "datastore" {
  for_each = var.virtual_machines

  name          = each.value.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_datastore" "datastore2" {
  for_each = {
    #for ns in local.network_subnets : ns.purpose => ns
    for index, ns in local.network_subnets : index => ns
  }

  name          = each.value.purpose
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_datastore" "datastore_a" {
  for_each = var.virtual_machines

  name          = each.value.datastore_a
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_datastore" "datastore_b" {
  for_each = var.virtual_machines

  name          = each.value.datastore_b
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_network" "network_interface" {
  for_each = var.virtual_machines

  name          = each.value.network_interface
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_host" "host" {
  name          = local.hostesx
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

instance.tf

resource "vsphere_virtual_machine" "vm" {
  for_each = var.virtual_machines

  name             = each.key
  resource_pool_id = data.vsphere_host.host.resource_pool_id

  guest_id  = data.vsphere_virtual_machine.template.guest_id
  scsi_type = data.vsphere_virtual_machine.template.scsi_type

  num_cpus = each.value.system_cores
  memory   = each.value.system_memory

  host_system_id = data.vsphere_host.host.id

  annotation          = each.value.vmnotes
  firmware            = "efi"
  sync_time_with_host = true

  cpu_hot_add_enabled    = true
  memory_hot_add_enabled = true

  datastore_id = data.vsphere_datastore.datastore[each.key].id
  folder       = each.value.folder

  wait_for_guest_ip_timeout  = 0
  wait_for_guest_net_timeout = 0

  scsi_controller_count = 4

  # Network
  network_interface {
    network_id   = data.vsphere_network.network_interface[each.key].id
    adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
  }

  disk {
    label            = "disk0"
    size             = each.value.system_disk
    eagerly_scrub    = false
    thin_provisioned = true
    #unit_number = 0
  }

  dynamic "disk" {
    for_each = each.value.disks
    content {
      label        = disk.key
      size         = disk.value.vdisksize
      unit_number  = disk.value.vdisknumber
      datastore_id = disk.value.vdatastore
    }
  }

  # Cloning from template
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      linux_options {
        host_name = each.value.system_name
        domain    = each.value.system_domain
      }

      network_interface {
        ipv4_address = each.value.system_ipv4_address
        ipv4_netmask = each.value.system_ipv4_netmask
      }

      ipv4_gateway    = each.value.system_ipv4_gateway
      dns_server_list = each.value.dns_server_list
    }
  }
}

resource "local_file" "change" {
  content  = var.change
  filename = "${path.module}/change"
}

providers.tf

terraform {
  required_providers {
    vsphere = {
      source = "hashicorp/vsphere"
    }
  }
  required_version = ">= 1.1.0"
}

provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = var.vsphere_unverified_ssl
}

variable.tf

variable "vsphere_user" {
  type        = string
  description = "User that connects to the vCenter."
  default     = "admin"
  sensitive   = true
}

variable "vsphere_password" {
  type        = string
  description = "Password of the user that connects to the vCenter."
  default     = "admin"
  sensitive   = true
}

variable "vsphere_server" {
  type        = string
  description = "vCenter URL."
  default     = "https://localhost"
}

variable "change" {
  type        = string
  description = "change"
  default     = ""
}

variable "vsphere_unverified_ssl" {
  type        = bool
  description = "Disable verification of vCenter server HTTPS certificate."
  default     = false
}

variable "compute_cluster" {
  type        = string
  description = "Cluster where to provision the servers of the group."
  default     = ""
}

variable "resource_pool" {
  type        = string
  description = "Cluster where to provision the servers of the group."
  default     = ""
}

variable "datacenter" {
  type        = string
  description = "Datacenter where to provision the servers of the group."
  default     = ""
}

variable "vds" {
  type        = string
  description = "select os option"
  default     = "vds"
}

variable "site" {}

variable "virtual_machines" {
  type = map(object({
    system_cores        = number
    system_memory       = number
    system_disk         = number
    datadisk            = number
    paging              = number
    system_ipv4_address = string
    system_ipv4_netmask = string
    system_ipv4_gateway = string
    rspool              = string
    category            = string
    datastore           = string
    datastore_a         = string
    datastore_b         = string
    folder              = string
    compute_cluster     = string
    network_interface   = string
    vmnotes             = string
    system_name         = string
    system_domain       = string
    dns_server_list     = list(string)
    disks = list(object({
      vdisklabel  = string
      vdisksize   = number
      vdisknumber = number
      vdatastore  = string
    }))
  }))
}

I hope you can help me, because I don’t understand where my problem is… :frowning:

Thx a lot :slight_smile:

Omar

1 post - 1 participant

Read full topic

What is the best way to upgrade a TF code repo built on version 0.12.31 to the latest version?


We have a very old Terraform code repository built on version 0.12.31 and need to upgrade it to the latest version. What best practices should we follow? From searching, I found that such a big upgrade should not be done directly; rather, we should first upgrade to an intermediate version, e.g. 0.13.x. I'm looking for the proper steps to carry out the upgrade.

1 post - 1 participant

Read full topic

Duplicate key error on core


Hi,
I am getting this error; can you please help me resolve it?

Planning failed. Terraform encountered an error while generating this plan.


│ Error: Duplicate object key

│ on .terraform\modules\enterprise_scale\modules\archetypes\locals.policy_assignments.tf line 52, in locals:
│ 50: custom_policy_assignments_map_from_json = try(length(local.custom_policy_assignments_dataset_from_json) > 0, false) ? {
│ 51: for key, value in local.custom_policy_assignments_dataset_from_json :
│ 52: value.name => value
│ 53: if value.type == local.resource_types.policy_assignment
│ 54: } : null
│ ├────────────────
│ │ value.name is “Deny-NIC-NSG”

│ Two different items produced the key “Deny-NIC-NSG” in this ‘for’ expression. If duplicates are expected, use the ellipsis (…) after the value expression to enable grouping by key.

This is my code

custom_policy_assignments_map_from_json = try(length(local.custom_policy_assignments_dataset_from_json) > 0, false) ? {
  for key, value in local.custom_policy_assignments_dataset_from_json :
  value.name => value
  if value.type == local.resource_types.policy_assignment
} : null
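For reference, the grouping form that the error message refers to looks like the sketch below; note that with the ellipsis, each map value becomes a list of all items that shared that name, which the downstream code must then expect.

custom_policy_assignments_map_from_json = {
  for key, value in local.custom_policy_assignments_dataset_from_json :
  value.name => value...
  if value.type == local.resource_types.policy_assignment
}

Since the failing code lives inside the enterprise_scale module itself, the duplicate is presumably coming from the custom archetype JSON supplied to it, i.e. two definitions named "Deny-NIC-NSG".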

2 posts - 2 participants

Read full topic


Strange problem setting up a storage account


I’m using terraform cloud to create a simple storage account (SA) in our subscription.

# Create storage account for network watcher logging
resource "azurerm_storage_account" "sa-net-wat-op" {
  name                     = "nameremoved"
  resource_group_name      = azurerm_resource_group.ops-netwatch-rg.name
  location                 = var.region_ne
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

I have several Workspaces set up and use a Cloud Agent running on an Azure container, as I have a locked-down environment.
There is a storage module in the registry, but it does not have any variables set up, and my .tf file is not referencing it at all.

The SA gets created, but the Terraform side does not complete; it times out and says the creation failed.

And this is what I see on the Container Agent (screenshot not included in this digest).

I can create the same SA in the subscription manually. And as I say, the SA gets set up anyway, but any subsequent runs will fail, as TFC keeps hanging when checking it as part of its run.
I can create other resources without any problems in this Workspace, so it is not an Agent issue or a permission issue.
I can use other Workspaces to create this SA in the same subscription without any problems, so again not RBAC permissions.
I can create this SA in other subscriptions as well.

I'm leaning towards the Workspace being the issue, but I can't see what might be happening to cause this, and my initial Google searches have not proved that fruitful.

2 posts - 2 participants

Read full topic

Purchase Reserved Capacity and Purchase Reserved Instance


Does Terraform provide support for purchasing Reserved Capacity and Reserved Instances in Azure?

1 post - 1 participant

Read full topic

Unexpected attribute: An attribute named "max_node_count" is not expected here


Hi there,

Even though I define the maximum node count through a variable, I get the error in the title, and when I remove that argument, it works perfectly.

main.tf file

# Define variables for cluster configuration (unchanged)

# Define the main GKE cluster (unchanged)
resource "google_container_cluster" "gke_cluster" {
  name  = var.cluster_name
  location = var.location
  initial_node_count = var.initial_node_count
}

# Define a separate node pool with desired machine type and maximum count
resource "google_container_node_pool" "high_mem_pool" {
  name      = "high-mem-pool"
  cluster   = google_container_cluster.gke_cluster.name
  node_count = var.initial_node_count  # Initial node count (can be adjusted)
  max_node_count = var.maximum_node_count  # Use the defined variable

  # Set machine type with more memory for memory-intensive workloads
  node_config {
    machine_type = "e2-standard-8"
  }
}

variables.tf

# Define variables for cluster configuration (variable block)
variable "cluster_name" {
 default = "gke-terraform"
}

variable "location" {
 default = "us-central1"
}

variable "initial_node_count" {
 default = 1
}

variable "maximum_node_count" {
  default = 3
}

providers.tf

provider "google" {
  project     = "project-id" #change project id to suit your project
  region      = "us-central1"
}

I don’t know why I get the error; I looked at the documentation and other resources extensively but couldn’t find an answer.
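For comparison, in the hashicorp/google provider the node-count limits are normally expressed inside a nested autoscaling block on the node pool rather than as a top-level argument, which would explain the "unexpected attribute" error. A sketch only, not a drop-in replacement (variable names as above):

resource "google_container_node_pool" "high_mem_pool" {
  name     = "high-mem-pool"
  cluster  = google_container_cluster.gke_cluster.name
  location = var.location

  autoscaling {
    min_node_count = var.initial_node_count
    max_node_count = var.maximum_node_count
  }

  node_config {
    machine_type = "e2-standard-8"
  }
}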

1 post - 1 participant

Read full topic

Issue with importing 'aws_route53_health_check' module


Hi Terraform Gurus!

Although the ids are correct, the plan shows that two resources are going to be created instead of imported.

Any idea what could be wrong?

This is my current file structure

/app.tf
/module/r53/r53.tf

app.tf

locals {
  healthchecks = [
    {
      distribution = "dist1"
      id           = "XXXXf534-4f69-43cf-XXXXXXXX"
    },
    {
      distribution = "dist2"
      id           = "XXXX213-52a9-435f-XXXXXXX"
    },
  ]
}

import {
  for_each = { for i, obj in local.healthchecks : obj.distribution => obj }
  to       = module.r53.aws_route53_health_check.this[each.value.distribution]
  id       = each.value.id
}

r53.tf

locals {
  healthchecks = [
    {
      distribution = "dist1"
      id           = "XXXXf534-4f69-43cf-XXXXXXXX"
    },
    {
      distribution = "dist2"
      id           = "XXXX213-52a9-435f-XXXXXXX"
    },
  ]
}

resource "aws_route53_health_check" "this" {
  for_each          = { for i, obj in local.healthchecks : obj.distribution => obj }
  reference_name    = each.value.distribution
  fqdn              = each.value.distribution
  port              = 443
  type              = "HTTPS"
  resource_path     = "/status/health"
  failure_threshold = 1
  request_interval  = 30
}

1 post - 1 participant

Read full topic

Using EIPs for ALB


Trying to create a load balancer with EIPs attached to it.

resource "aws_alb" "alb1" {
  name="lb"
  ... (truncated) ...

  subnet_mapping {
    count = length( data.terraform_remote_state.vpc.outputs.subnets_public )
    subnet_id = data.terraform_remote_state.vpc.outputs.subnets_public[ count.index ]
    allocation_id = element( aws_eip.eip.*.id, count.index ) 
}

So this doesn't work; I get an error:

The "count" object can only be used in "module", "resource", and "data" blocks, and only when the "count" argument is set.

Is there a way to do this dynamically, as I'm multi-region and some regions have fewer or more subnets than others?
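For reference, a sketch of one way to do this dynamically (not from the thread): per-subnet nested blocks are generated with a dynamic block, since count is only valid on resource, module, and data blocks. Note that static EIPs in subnet_mapping are a network load balancer feature.

resource "aws_eip" "eip" {
  count  = length(data.terraform_remote_state.vpc.outputs.subnets_public)
  domain = "vpc"
}

resource "aws_alb" "alb1" {
  name               = "lb"
  load_balancer_type = "network"   # EIP subnet mappings apply to NLBs

  dynamic "subnet_mapping" {
    # Iterating over the list directly: .key is the index, .value is the subnet ID.
    for_each = data.terraform_remote_state.vpc.outputs.subnets_public
    content {
      subnet_id     = subnet_mapping.value
      allocation_id = aws_eip.eip[subnet_mapping.key].id
    }
  }
}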

2 posts - 2 participants

Read full topic
