Channel: Terraform - HashiCorp Discuss
Viewing all 11401 articles

Custom Provider Error - ConflictsWith for TypeSet/TypeList



@vidyasagarsvn wrote:

Hi,
I am writing a custom provider and I came across a roadblock pertaining to the behavior of ‘ConflictsWith’ within a schema.TypeSet/TypeList. Please see the snippet of the schema below.

 "acl": {
    Type:     schema.TypeSet,
    Required: true,
    MinItems: 1,
    Elem: &schema.Resource{
        Schema: map[string]*schema.Schema{
            "user_name": {
                Type:          schema.TypeString,
                Optional:      true,
                ConflictsWith: []string{"acl.0.group_name"},
            },
            "group_name": {
                Type:          schema.TypeString,
                Optional:      true,
                ConflictsWith: []string{"acl.0.user_name"},
            },
            "permission": {
                Type:     schema.TypeString,
                Required: true,
            },
        },
    },
    Set: hashFunc,
}

I want to make sure only one of ‘user_name’ and ‘group_name’ is specified in each acl block. However the following error comes up when I specify ‘user_name’ and ‘group_name’ in separate ‘acl’ blocks.

Error: “acl.1.user_name”: conflicts with acl.0.group_name

How do I make sure the conflict is evaluated within a single element of the set/list instead of across the whole list? Please let me know if I missed something.

Posts: 1

Participants: 1


Lifecycle block throws error


@mohammadasim wrote:

Hi,
I recently upgraded my Terraform version to 0.12.21, following the official upgrade guide, and the syntax changes were implemented correctly. In my code, I use the official AWS modules to provision the infrastructure.
The AMI ids are generated dynamically. Before the upgrade, whenever I wanted to add another server to our infrastructure, terraform plan would report that it wanted to delete the existing servers due to the change in AMI id. To address this issue, I would modify the module to ignore AMI changes, as shown below, and that fixed the problem.

     resource "aws_instance" "this_t2" {
      count = "${var.instance_count * local.is_t_instance_type}"

      ami                    = "${var.ami}"
      instance_type          = "${var.instance_type}"
      user_data              = "${var.user_data}"
      subnet_id              = "${element(distinct(compact(concat(list(var.subnet_id), var.subnet_ids))),count.index)}"
      key_name               = "${var.key_name}"
      monitoring             = "${var.monitoring}"
      vpc_security_group_ids = ["${var.vpc_security_group_ids}"]
      iam_instance_profile   = "${var.iam_instance_profile}"

      associate_public_ip_address = "${var.associate_public_ip_address}"
      private_ip                  = "${var.private_ip}"
      ipv6_address_count          = "${var.ipv6_address_count}"
      ipv6_addresses              = "${var.ipv6_addresses}"

      ebs_optimized          = "${var.ebs_optimized}"
      volume_tags            = "${var.volume_tags}"
      root_block_device      = "${var.root_block_device}"
      ebs_block_device       = "${var.ebs_block_device}"
      ephemeral_block_device = "${var.ephemeral_block_device}"

      source_dest_check                    = "${var.source_dest_check}"
      disable_api_termination              = "${var.disable_api_termination}"
      instance_initiated_shutdown_behavior = "${var.instance_initiated_shutdown_behavior}"
      placement_group                      = "${var.placement_group}"
      tenancy                              = "${var.tenancy}"

      credit_specification {
        cpu_credits = "${var.cpu_credits}"
      }

      tags = "${merge(map("Name", (var.instance_count > 1) || (var.use_num_suffix == "true") ? format("%s-%d", var.name, count.index+1) : var.name), var.tags)}"

      lifecycle {
        # Due to several known issues in Terraform AWS provider related to arguments of aws_instance:
        # (eg, https://github.com/terraform-providers/terraform-provider-aws/issues/2036)
        # we have to ignore changes in the following arguments
        ignore_changes = ["private_ip", "root_block_device", "ebs_block_device", "ami"]
      }
    }

However, after the upgrade, when I run terraform plan the output shows that all the servers in the environment will be deleted and recreated. I have tried to make similar changes as before by adding a lifecycle block, but I get an error when I run terraform plan.
resource "aws_instance" "this" {
  count = var.instance_count

  ami              = var.ami
  instance_type    = var.instance_type
  user_data        = var.user_data
  user_data_base64 = var.user_data_base64
  subnet_id = length(var.network_interface) > 0 ? null : element(
    distinct(compact(concat([var.subnet_id], var.subnet_ids))),
    count.index,
  )
  key_name               = var.key_name
  monitoring             = var.monitoring
  get_password_data      = var.get_password_data
  vpc_security_group_ids = var.vpc_security_group_ids
  iam_instance_profile   = var.iam_instance_profile

  associate_public_ip_address = var.associate_public_ip_address
  private_ip                  = length(var.private_ips) > 0 ? element(var.private_ips, count.index) : var.private_ip
  ipv6_address_count          = var.ipv6_address_count
  ipv6_addresses              = var.ipv6_addresses

  ebs_optimized = var.ebs_optimized
  
  dynamic "root_block_device" {
    for_each = var.root_block_device
    content {
      delete_on_termination = lookup(root_block_device.value, "delete_on_termination", null)
      encrypted             = lookup(root_block_device.value, "encrypted", null)
      iops                  = lookup(root_block_device.value, "iops", null)
      kms_key_id            = lookup(root_block_device.value, "kms_key_id", null)
      volume_size           = lookup(root_block_device.value, "volume_size", null)
      volume_type           = lookup(root_block_device.value, "volume_type", null)
    }
  }

  dynamic "ebs_block_device" {
    for_each = var.ebs_block_device
    content {
      delete_on_termination = lookup(ebs_block_device.value, "delete_on_termination", null)
      device_name           = ebs_block_device.value.device_name
      encrypted             = lookup(ebs_block_device.value, "encrypted", null)
      iops                  = lookup(ebs_block_device.value, "iops", null)
      kms_key_id            = lookup(ebs_block_device.value, "kms_key_id", null)
      snapshot_id           = lookup(ebs_block_device.value, "snapshot_id", null)
      volume_size           = lookup(ebs_block_device.value, "volume_size", null)
      volume_type           = lookup(ebs_block_device.value, "volume_type", null)
    }
  }

  dynamic "ephemeral_block_device" {
    for_each = var.ephemeral_block_device
    content {
      device_name  = ephemeral_block_device.value.device_name
      no_device    = lookup(ephemeral_block_device.value, "no_device", null)
      virtual_name = lookup(ephemeral_block_device.value, "virtual_name", null)
    }
  }

  dynamic "network_interface" {
    for_each = var.network_interface
    content {
      device_index          = network_interface.value.device_index
      network_interface_id  = lookup(network_interface.value, "network_interface_id", null)
      delete_on_termination = lookup(network_interface.value, "delete_on_termination", false)
    }
  }

  source_dest_check                    = length(var.network_interface) > 0 ? null : var.source_dest_check
  disable_api_termination              = var.disable_api_termination
  instance_initiated_shutdown_behavior = var.instance_initiated_shutdown_behavior
  placement_group                      = var.placement_group
  tenancy                              = var.tenancy

  lifecyle {
    ignore_changes = all
  }

  tags = merge(
    {
      "Name" = var.instance_count > 1 || var.use_num_suffix ? format("%s-%d", var.name, count.index + 1) : var.name
    },
    var.tags,
  )

  volume_tags = merge(
    {
      "Name" = var.instance_count > 1 || var.use_num_suffix ? format("%s-%d", var.name, count.index + 1) : var.name
    },
    var.volume_tags,
  )

  credit_specification {
    cpu_credits = local.is_t_instance_type ? var.cpu_credits : null
  }
}

Any help will be highly appreciated. I have checked terraform resources here
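For reference, this is the 0.12 shape of the lifecycle block from the docs; note the block name spelling, which differs from `lifecyle` in the snippet above, and that 0.12 expects bare attribute references rather than quoted strings:

```hcl
lifecycle {
  # 0.12 syntax: attribute references, not strings
  ignore_changes = [ami, private_ip, root_block_device, ebs_block_device]
}
```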

Posts: 3

Participants: 2


AWS Cloudfront Distribution - Non S3 Bucket


@jon-guidance wrote:

I’m using aws_cloudfront_distribution and I want to use a custom origin, not an S3 bucket. I can’t find a way to turn off the use of S3. I keep getting this error message during the apply process: aws_cloudfront_distribution.cdn_distribution: error creating CloudFront Distribution: InvalidArgument: The parameter Origin DomainName does not refer to a valid S3 bucket

Here are my parameters:

origin.#: "0" => "1"
origin.4021250195.custom_header.#: "0" => "0"
origin.4021250195.custom_origin_config.#: "0" => "0"
origin.4021250195.domain_name: "" => "origin.clientname"
origin.4021250195.origin_id: "" => "custom-origin.clientname"
origin.4021250195.origin_path: "" => ""
origin.4021250195.s3_origin_config.#: "0" => "0"

Any suggestions? Thanks.
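For what it's worth, when the origin block contains neither an s3_origin_config nor a custom_origin_config sub-block, CloudFront validates the domain name as an S3 bucket. A minimal sketch of a custom origin, reusing the domain and ID from the plan output above (port and protocol values are illustrative):

```hcl
origin {
  domain_name = "origin.clientname"
  origin_id   = "custom-origin.clientname"

  # Present => CloudFront treats this as a custom origin, not S3
  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "https-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}
```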

Posts: 1

Participants: 1


Retry Resource Creation

Building string array to pass via AWS user_data


@nyue wrote:

Hi,

I am trying to pass the list of private_ip for a collection of compute nodes to the head node in a computational cluster but not getting much joy so far.

resource "aws_instance" "head_node" {
  ami             = "ami-0a269ca7cc3e3beff"
  instance_type   = "t3.large"
  security_groups = [aws_security_group.head_node_sg.name]
  key_name        = "testssh"
  user_data       = templatefile("${path.module}/head_node_setup.sh", {
    efs_hostname = aws_efs_mount_target.cluster_efs_mt["ca-central-1a"].dns_name
    _cnodes_ip = []
    for ip in aws_instance.compute_nodes.private_ip:
      _cnodes_ip.append(ip)
    cnodes_ip    = tostring(_cnodes_ip)
  })
}

I am aiming for a space-separated string I can tokenize in my script, but I could handle another representation, I guess.
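A loop is not needed here: assuming compute_nodes is a count-based resource (names as in the snippet above), the splat expression plus the join() built-in produce the space-separated string directly:

```hcl
user_data = templatefile("${path.module}/head_node_setup.sh", {
  efs_hostname = aws_efs_mount_target.cluster_efs_mt["ca-central-1a"].dns_name
  # Splat collects every instance's private_ip; join makes one string
  cnodes_ip    = join(" ", aws_instance.compute_nodes[*].private_ip)
})
```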

Cheers

Posts: 3

Participants: 2


Obtaining tf version and provider versions internally to tf


@ryan-dyer wrote:

We would like to tag our resources with the tf version and provider version on the individual resources. Is there any way to do this?

Posts: 1

Participants: 1


Terraform cross account - retrieve SNS topic id from target account to source account


@sbjedfx wrote:

Hello,
I am trying to achieve a multi-account architecture provisioning through terraform.
Example
Account - A (user John)
Account - B (having a role name admin which has the policy to SNS service)

Cross account has been setup and John is able to switch to Account-B through crossaccount role.

Terraform Provider

provider "aws" {
  region  = "us-east-2"
  profile = "John"

  assume_role {
    role_arn     = "arn:aws:iam::<Account-B-id>:role/cross-account-role"
    session_name = "Terraform"
  }
}

Now I need to create an SNS topic in Account-B, and I need the topic ARN to update a resource in Account-A (user: John).

Could you please suggest a terraformic :wink: way to create the resource in the target account and use the value of that resource to perform further actions in the source account.
Thanks
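One common pattern for this (a sketch; the topic name and parameter path are illustrative) is to keep an aliased provider configuration for the target account alongside the default one, create the topic under the alias, and then reference its arn attribute from a resource managed by the default (Account-A) provider:

```hcl
provider "aws" {
  alias   = "account_b"
  region  = "us-east-2"
  profile = "John"

  assume_role {
    role_arn = "arn:aws:iam::<Account-B-id>:role/cross-account-role"
  }
}

# Created in Account-B via the aliased provider
resource "aws_sns_topic" "alerts" {
  provider = aws.account_b
  name     = "alerts"
}

# Created in Account-A via the default provider, consuming the ARN
resource "aws_ssm_parameter" "alerts_topic_arn" {
  name  = "/shared/alerts-topic-arn"
  type  = "String"
  value = aws_sns_topic.alerts.arn
}
```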

Posts: 1

Participants: 1



How do I import an Azure AD Service Principal Password into Terraform?


@aarroyoc wrote:

We’re using Terraform to build our cloud infrastructure. Previously we had a few service principals created outside Terraform that are being used right now in production and can’t be changed. Now we want to move the creation of those service principals into Terraform, but we’re unable to import the existing ones while keeping a structure that creates new ones using random_string.

resource "azuread_service_principal_password" "service-images" {
  for_each             = toset(var.profiles)
  service_principal_id = azuread_service_principal.service-images[each.value].id
  end_date             = "2222-01-01T23:00:00Z"
  value                = random_string.images_password[each.value].result
}
resource "random_string" "images_password" {
  for_each = toset(var.profiles)
  length   = 32
  special  = true
}

When we create a new service principal (by adding an element to the var.profiles list) it works fine, but for an already-used service principal, we’re worried that Terraform will overwrite the previous value and take down production.

Also, Terraform seems to have an import interface for azuread_service_principal_password:

terraform import azuread_service_principal_password.test 00000000-0000-0000-0000-000000000000/11111111-1111-1111-1111-111111111111

where the first part is the ServicePrincipalObjectId and the second part is the ServicePrincipalPasswordKeyId. However, I can’t find that latter value in the Azure Portal (where is it?).

How would you proceed?

Posts: 1

Participants: 1


Provider implementation of API actions


@giorgos-nikolopoulos wrote:

Hello. I am developing the citrixadc provider (https://github.com/citrix/terraform-provider-citrixadc).
In this provider we use the HTTP REST API (NITRO) implemented on Citrix ADC to implement the Terraform resources.

We have a request to implement some functionality in the API that does not match the typical resource life cycle.

For example, we need to implement the reboot action or the sync-configuration-files action.
There is no state on the target node for these actions, so a create/update/delete life cycle is irrelevant.

So far these cases are handled with the null provider and a local-exec provisioner to issue the API calls.

What would be the best practice to incorporate this functionality in the provider itself?

Posts: 1

Participants: 1


API not found. ex: https://registry.terraform.io/v1/providers/-/aws-vpc/versions


@AngeloDamasio wrote:

Good Morning,

A few months ago I started studying Terraform: I installed it, ran tests, got a configuration ready, and used terraforming to turn the current infra into code, then updated that code to the most current version of Terraform. Everything went as expected, without errors.

However, now that I’m back working with Terraform to create the new company infrastructure, terraform simply claims that the aws provider is not available for installation. Note that I’m using the most current version, and the same AMI I used when I ran Terraform the first time.

Interestingly, even when I run terraform --version with TF_LOG=trace enabled, problems are reported; it seems files are missing, as shown below:
root@ip-172-31-1-4:/home/ubuntu/bitbucket/infra-test/terraform-cm# terraform --version
2020/02/21 14:08:40 [INFO] Terraform version: 0.12.21
2020/02/21 14:08:40 [INFO] Go runtime version: go1.12.13
2020/02/21 14:08:40 [INFO] CLI args: []string{"/usr/local/bin/terraform", "--version"}
2020/02/21 14:08:40 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2020/02/21 14:08:40 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/02/21 14:08:40 [INFO] CLI command args: []string{"version", "--version"}
Terraform v0.12.21
2020/02/21 14:08:40 [DEBUG] checking for provider in "."
2020/02/21 14:08:40 [DEBUG] checking for provider in "/usr/local/bin"
2020/02/21 14:08:40 [DEBUG] checking for provider in ".terraform/plugins/linux_amd64"
2020/02/21 14:08:40 [DEBUG] found provider "terraform-provider-aws_v2.49.0_x4"
2020/02/21 14:08:40 [DEBUG] found valid plugin: "aws", "2.49.0", "/home/ubuntu/bitbucket/infra-test/terraform-cm/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.49.0_x4"
2020/02/21 14:08:40 [INFO] Failed to read plugin lock file .terraform/plugins/linux_amd64/lock.json: open .terraform/plugins/linux_amd64/lock.json: no such file or directory

And the error reported when terraform tries to get the provider after giving terraform init:

2020/02/21 14:01:30 [DEBUG] plugin requirements: "aws"="~> 2.0"
2020/02/21 14:01:30 [DEBUG] plugin requirements: "aws-vpc"=""
2020/02/21 14:01:30 [DEBUG] Service discovery for registry.terraform.io at https://registry.terraform.io/.well-known/terraform.json
2020/02/21 14:01:30 [TRACE] HTTP client GET request to https://registry.terraform.io/.well-known/terraform.json
2020/02/21 14:01:30 [DEBUG] fetching provider versions from "https://registry.terraform.io/v1/providers/-/aws-vpc/versions"
2020/02/21 14:01:30 [TRACE] HTTP client GET request to https://registry.terraform.io/v1/providers/-/aws-vpc/versions
2020/02/21 14:01:30 [DEBUG] provider &{registry.terraform.io - aws-vpc linux amd64} not found

Thank you very much in advance, and if any information is missing, let me know.

Posts: 2

Participants: 2


Dynamics with child blocks


@rohrerb wrote:

We have been waiting for the following change > https://github.com/terraform-providers/terraform-provider-azurerm/pull/5440

Now that it is here I am a bit confused on how to implement a dynamic in a child block.

We have a list of IPs we would like to add using dynamic blocks. Terraform itself doesn’t give us an error, but the plan shows no changes whatsoever.

resource "azurerm_function_app" "app" {
  for_each = { for o in local.functions : o.group_key => o }

  name                      = format("%s%s%s", upper(var.full_env_code), "-", each.value.group_key)
  location                  = data.azurerm_resource_group.rg.location
  resource_group_name       = data.azurerm_resource_group.rg.name
  app_service_plan_id       = azurerm_app_service_plan.service_plan[each.value.key].id
  storage_connection_string = module.storage.primary_connection_string
  version                   = "~2"

  site_config {
     dynamic "ip_restriction" {
        for_each = var.ip_restrictions

        content {
          ip_address = ip_restriction.key
        }
      } 
  }

}

If i swap out

  site_config {
     dynamic "ip_restriction" {
        for_each = var.ip_restrictions

        content {
          ip_address = ip_restriction.key
        }
      } 
  }

and hardcode an IP

  site_config {
     ip_restriction {
         ip_address = "10.0.0.1"
      }
  }

We are able to see the change in a plan

site_config {
            always_on                 = false
            ftps_state                = "AllAllowed"
            http2_enabled             = false
          ~ ip_restriction            = [
              + {
                  + ip_address = "10.0.0.1"
                  + subnet_id  = null
                },
            ]

How can we use a dynamic on a child block?

Posts: 2

Participants: 2


Joining Virtual Machine to Active Directory

List current NSG rule

TFC remote backend still creates local resources


@edoboker wrote:

Hi all,

I’ve been trying to use TFC in order to store the state remotely and get rid of local modules, files and runs on my team’s laptops. I’ve configured the TFC backend (after creating an account, organisation and workspace, of course):

terraform {
  backend "remote" {
    organization = "my_org"

    workspaces {
      name = "my_workspace"
    }
  }
}

I’ve followed the TFC instructions, generated credentials for TFC and created a local file at $HOME/.terraformrc :

credentials "app.terraform.io" {
  token = "#############################################"
}

My Terraform file is the simplest Azure environment:

provider "azurerm" {
  version = ">=1.38.0"
  alias           = "alias"
  subscription_id = "#########################"
}

resource "azurerm_resource_group" "test_resource_group" {
  location = "westeurope"
  name     = "test_resource_group"
}

I expected that when I ran terraform plan or terraform apply, it would hand the run off to TFC, which would execute those commands. To my surprise, I saw the azurerm plugin being downloaded to my laptop and the Terraform state created locally.

What am I doing wrong here?

Posts: 1

Participants: 1



Specifying ignore_changes for a block correctly


@mikek wrote:

Given a block in a resource, e.g.

version {
    instance_template = "foo"
    name              = "bar"
}

Is it possible to use ignore_changes in a partial way, meaning to specify that only the name attribute be ignored, for instance?

Also, if the block were dynamic - how would ignore_changes be specified in that case?

Posts: 2

Participants: 2


How to first update dependency and then delete an old object


@michcio1234 wrote:

I have a setup where resource A depends on resource B. I want to replace resource B with another resource C.

# I have:
A ---depends on---> B 
# I want to change it to:  
A ---depends on---> C 

Current behaviour
Terraform tries to execute operations in the following order:

  1. Create C
  2. Destroy B
  3. Update A

But it fails, because B can’t be destroyed while A still depends on it.

Expected behaviour

  1. Create C
  2. Update A
  3. Destroy B

How can I achieve it?

lifecycle { create_before_destroy = true } does not help me, because I am not updating B but replacing it with a whole new resource C.

Some more context
I am using Terraform 0.12.20.
The actual resources in question are:

  • A - AWS Application Load Balancer
  • B - Security Group created with a module
  • C - Security Group resource created directly in TF

Posts: 3

Participants: 2


Not able to setup terraform runtime service for BPD

Add resources to sourced module (AWS)


@dragosandronache wrote:

Hi all!
I have a Terraform modules related question to you.

We are currently using the verified VPC registry module and would like to customize it. This registry module has predefined subnets and routes for various components: ElastiCache, Redshift, DB, etc.
We would like to pass new arguments when calling it, with other subnets and routes (e.g. Elasticsearch, AWX, etc.), so they become part of our reusable configurations. We wrap the registry module in a private git repo, and the wrapped module is used in our configuration.
Can you please let me know how we can add customizations to registry modules?

Thank you very much for your time and answer!

Best regards,
Dragos

Posts: 5

Participants: 2


Terraform as a self service - Options


@sathishkpsvdms wrote:

We use the open-source version of Terraform for our infrastructure. We want to offer self-service for certain things (like creating VMs, LBs, networks, etc.) via Terraform. I have reviewed the ServiceNow option via Terraform Enterprise.
We are not experts in Terraform.

What are other options available without terraform Enterprise?
Kindly suggest.

Posts: 1

Participants: 1

