Channel: Terraform - HashiCorp Discuss

Migrate AzureRM template to Terraform


Hi,
is there a way to migrate an Azure (ARM) template to Terraform format?
I think many of us who are starting out with Terraform would be happy to have the ability
to migrate existing templates…

Thanks
Miki

1 post - 1 participant

Read full topic


Global 3rd party plugin directory?


According to the documentation, you can install third-party plugins to ~/.terraform.d/plugins.
Is there any directory bound to the system instead of to the user? Something like /etc/terraform.d/plugins which could be used by all users?
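One partial workaround, sketched here as an assumption rather than a confirmed system-wide plugin directory: the CLI configuration file (the path given by the TF_CLI_CONFIG_FILE environment variable, or ~/.terraformrc by default) supports a shared provider cache. The /etc path below is hypothetical.

# Hypothetical CLI configuration placed at a path exported via
# TF_CLI_CONFIG_FILE for all users; plugin_cache_dir only caches provider
# plugins, it is not a full replacement for ~/.terraform.d/plugins.
plugin_cache_dir = "/etc/terraform.d/plugin-cache"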

2 posts - 2 participants

Read full topic

How to support multiple domains in acm module


I’m currently stuck on how to validate multiple domain names in my ACM module, i.e. I want to validate Route 53 for foo.dev and bar.com.

My module looks like:

variable "domain_names" {
  description = "A domain name for which the certificate should be issued"
  type        = map(list(string))
}

variable "validation_method" {
  description = "Validation method DNS/EMAIL/NONE"
  type        = string
}


data "aws_route53_zone" "selected" {
  for_each     = var.validation_method == "DNS" ? var.domain_names : {}
  name         = each.key
  private_zone = false
}

resource "aws_acm_certificate" "certificate" {
  for_each                  = var.domain_names
  domain_name               = each.key
  subject_alternative_names = [join(",", each.value)]
  validation_method         = var.validation_method

  tags = {
    Name      = each.key
    owner     = "xxx"
    terraform = "true"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "validation" {
  for_each   = var.validation_method == "DNS" ? var.domain_names : {}
  name       = aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_name
  type       = aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_type
  zone_id    = data.aws_route53_zone.selected[each.key].zone_id
  ttl        = "300"
  records    = [aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_value]
  depends_on = [aws_acm_certificate.certificate]
}

resource "aws_acm_certificate_validation" "certificate_validation" {
  for_each                = var.validation_method == "DNS" ? var.domain_names : {}
  certificate_arn         = aws_acm_certificate.certificate[each.key].arn
  validation_record_fqdns = [aws_route53_record.validation[each.key].fqdn, ]
}


module "acm_private" {
  source = "../projects/tf_module_acm/"
  domain_names = {
    "foo.dev" = ["*.foo.dev", "bar.com"]
  }
}

1 post - 1 participant

Read full topic

Detach some ebs volume from EC2 Instance (aws)


Hi,
I need to detach a volume from an EC2 instance. In the documentation I can only find aws_volume_attachment. Does something like aws_volume_detachment exist?
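As far as I know there is no dedicated detachment resource; detaching is usually expressed by removing the aws_volume_attachment resource (or the specific instance of it) from the configuration, so that the next apply destroys the attachment and detaches the volume. A minimal sketch with hypothetical resource names:

resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.app.id

  # Removing this resource from the configuration (optionally with
  # force_detach = true first, if the volume is in use) is what triggers
  # the detach on the next apply.
}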

2 posts - 2 participants

Read full topic

Terraform backend to on-premise S3 storage


(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant

Read full topic

Azure Stack Custom Script Example


Hi,

I have been trying the custom script extension example for a Terraform deployment to Azure Stack and have not been able to get it working. It seems my Azure Stack does not support the extension repository of {publisher = Microsoft.Azure.Extensions, type = CustomScript}. How do I add support for the publisher and type that are used in the example?

Below is the link to the example I am referring to:

1 post - 1 participant

Read full topic

How to retrieve the null_resource returned value?


Hi All,

I have the below null_resource to retrieve the prometheus cluster IP:

 resource "null_resource" "get_prometheus_ip" {
  provisioner "local-exec" {
    command = "kubectl get svc prometheus-server -n monitoring | awk -F' ' '{print $3}' | tail -1"
  }
}

I want to use its returned result in another place:

resource "helm_release" "prometheus-adapter" {
  name = "prometheus-adapter"
  chart = "${path.module}/helm/charts/stable/prometheus-adapter/"
  namespace = "default"

  // prometheus URL
  set {
    name = "prometheus.url"
    value = "http://${returnedValueHere}"
  }
}

Is this doable, and is there also a better way to do this?

Thanks :slight_smile:
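A null_resource provisioner does not expose the command's output as an attribute, so one possible approach, offered only as an untested sketch, is the "external" data source, whose program must print a JSON object to stdout:

data "external" "prometheus_ip" {
  # The shell pipeline is an assumption; it wraps the cluster IP in the
  # single JSON object that the external data source requires.
  program = [
    "bash", "-c",
    "printf '{\"ip\":\"%s\"}' \"$(kubectl get svc prometheus-server -n monitoring -o jsonpath='{.spec.clusterIP}')\""
  ]
}

resource "helm_release" "prometheus-adapter" {
  name      = "prometheus-adapter"
  chart     = "${path.module}/helm/charts/stable/prometheus-adapter/"
  namespace = "default"

  // prometheus URL, taken from the data source result
  set {
    name  = "prometheus.url"
    value = "http://${data.external.prometheus_ip.result.ip}"
  }
}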

1 post - 1 participant

Read full topic

How to get a shared VPC for a project - GCP


Shared VPC details.
I am unable to extract/import the shared VPC details for a project and use those details in the networking field via Terraform.
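One possible starting point, sketched here with hypothetical project and network names, is the google_compute_network data source pointed at the shared VPC host project:

data "google_compute_network" "shared_vpc" {
  name    = "shared-vpc-network" # hypothetical shared VPC network name
  project = "host-project-id"    # hypothetical host project ID
}

# data.google_compute_network.shared_vpc.self_link can then be referenced
# from networking arguments of other resources.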

1 post - 1 participant

Read full topic


AWS Opsworks Puppet Enterprise


Hi,

We are planning to launch OpsWorks Puppet Enterprise and are looking at the options available to implement it. As most of our stacks are built using Terraform, I thought of checking here whether there is an existing or new feature available to launch OpsWorks for Puppet Enterprise using Terraform.

Any suggestions and help would be appreciated.

Thanks.

1 post - 1 participant

Read full topic

Map interpolation


I have a map(list(string)):

domain_names = {
  "foo.com"    = ["*.foo.com", "bar.com"]
  "sand.co.uk" = ["*.sand.co.uk"]
}

How do I create a map like this, just removing the "*." prefix from each element?

domain_names = {
  "foo.com"    = ["foo.com", "bar.com"]
  "sand.co.uk" = ["sand.co.uk"]
}

I want to do this inside the locals file. I tried:

all_domains = zipmap(keys(var.domain_names), [for each in values(var.domain_name): trimprefix(each, "*.")])

which fails with: Invalid value for "str" parameter: string required.

I also tried:

resource "aws_acm_certificate" "certificate" {

  for_each                  = var.domain_names
  domain_name               = each.value 
  subject_alternative_names = each.value # I want to get all my values here like ["*.foo.com","bar.com"]
  validation_method         = var.validation_method

  tags = {
    Name      = each.value
    owner     = "amp"
    terraform = "true"
  }

  lifecycle {
    create_before_destroy = true
  }
}
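A possible shape for the locals expression, offered as an untested sketch: trimprefix() operates on strings, so it has to be applied to each element inside the inner list rather than to the list itself:

locals {
  all_domains = {
    for name, sans in var.domain_names :
    name => [for d in sans : trimprefix(d, "*.")]
  }
}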

2 posts - 2 participants

Read full topic

Error: only lowercase alphanumeric characters and hyphens allowed in parameter group "name"


I am creating an RDS module and facing an error that I cannot explain; Terraform is behaving unusually here.

rds.tf

resource "aws_db_parameter_group" "parameter_group" {
  name        = "${var.parameter_group_name} parameters"
  family      = "${var.family}"
  description = "Database parameter group for ${var.parameter_group_name}"

  parameter {
    name  = "${var.parameter_name}"
    value = "${var.parameter_value}"
  }
}

Error: only lowercase alphanumeric characters and hyphens allowed in parameter group "name"

  on …\RDS\rds.tf line 7, in resource "aws_db_parameter_group" "parameter_group":
   7: resource "aws_db_parameter_group" "parameter_group" {

I tried many names but it shows me the same error. Can you please help me resolve this issue and explain why Terraform is behaving like this here?
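A hedged guess at the cause rather than a confirmed fix: the interpolated name "${var.parameter_group_name} parameters" contains a space, and the provider only allows lowercase alphanumeric characters and hyphens in that argument, so something like the following might avoid the error:

  # hyphen instead of a space; var.parameter_group_name itself must also
  # contain only lowercase alphanumerics and hyphens
  name = "${var.parameter_group_name}-parameters"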

1 post - 1 participant

Read full topic

How to create multiple vms

Terraform unable to refresh state in-memory


Hello, I have recently migrated the backend to Terraform Enterprise. After queuing a plan it tries to initiate but is unable to refresh its state in memory; it just waits on [DEBUG] Using modified User-Agent: Terraform/0.12.1 TFE/vxxxx. I would appreciate any suggestions.

1 post - 1 participant

Read full topic

Passing data from powershell scripts to Terraform


I have a PowerShell script like this:

# Collecting user input for client_id, client_secret, subscription_id, and tenant_id
$client_id = Read-Host -Prompt 'Input your Azure client_id'
$client_secret = Read-Host -Prompt 'Input your Azure client_secret'
$subscription_id = Read-Host -Prompt 'Input your Azure subscription_id'
$tenant_id = Read-Host -Prompt 'Input your Azure tenant_id'

I want to pass the variables $client_id, $client_secret, and $subscription_id to Terraform using the "external" data source.

How can I pass this data to Terraform?
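A rough, unverified sketch of that approach: the external data source runs a program that must print a single JSON object to stdout, so the script name below is hypothetical and the script would have to end by emitting the collected values as JSON (for example via ConvertTo-Json):

data "external" "azure_credentials" {
  # collect_credentials.ps1 is a hypothetical script that prompts for the
  # values and prints them as a single JSON object.
  program = ["powershell.exe", "-File", "${path.module}/collect_credentials.ps1"]
}

provider "azurerm" {
  features {} # required by azurerm provider 2.x

  client_id       = data.external.azure_credentials.result.client_id
  client_secret   = data.external.azure_credentials.result.client_secret
  subscription_id = data.external.azure_credentials.result.subscription_id
  tenant_id       = data.external.azure_credentials.result.tenant_id
}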

1 post - 1 participant

Read full topic

State locking and central server


Hi,

I am new to Terraform. When you want to use Terraform in a team, all the docs state that you need a remote backend with support for locking.
But when you use a central server for running Terraform (team members have their own login), Terraform will take care of the locking and prevent state corruption. Am I right?
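For reference, a minimal sketch of the kind of remote backend with locking that the docs refer to, assuming an S3 bucket and DynamoDB table that already exist (all names here are hypothetical):

terraform {
  backend "s3" {
    bucket         = "example-terraform-state" # hypothetical state bucket
    key            = "prod/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"         # hypothetical lock table
  }
}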

Cheers

1 post - 1 participant

Read full topic


Using the expansion operator with resource splat expression


Is it possible to use the expansion operator when referring to a resource using the splat expression (implying that 0 or more instances are created)? My initial attempts to get this to work were unsuccessful.

It would seem possible since References to Named Values mentions the type is a list:

If the resource has the count argument set, the value of this expression is a list of objects representing its instances.

Initial use-case:

variable create {
    type = bool
}

resource aws_s3_bucket bucket {
  count = var.create ? 1 : 0
  ...
}

I would expect to be able to use the expansion operator with this, but it doesn’t seem to work. This is a contrived but valid example:

output bucket_ids {
  value = list(aws_s3_bucket.bucket[*].id...)
}

provides the error:

Error: Invalid expanding argument value

  on ./outputs.tf line 6, in output "bucket_ids":
   6:   value = list(aws_s3_bucket.bucket[*].id...)

The expanding argument (indicated by ...) must be of a tuple, list, or set
type.

While there may be better approaches to this case, I think the use of expansion with splat expressions is valid.
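For comparison, an untested workaround sketch: the splat expression already evaluates to a tuple of ids, so wrapping it in tolist() (or using the value directly) avoids the expansion operator altogether:

output "bucket_ids" {
  value = tolist(aws_s3_bucket.bucket[*].id)
}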

2 posts - 2 participants

Read full topic

Loop over files before uploading to S3


I have multiple files under some root directory, let’s call it module/data/.
I need to upload this directory to the corresponding S3 bucket. All this works as expected with:

resource "aws_s3_bucket_object" "k8s-state" {
      for_each = fileset("${path.module}/data", "**/*")
      bucket = aws_s3_bucket.kops.bucket
      key    = each.value
      source = "${path.module}/data/${each.value}"
      etag   = filemd5("${path.module}/data/${each.value}")
    }

The only thing left is that I need to loop over all files recursively and replace markers (for example !S3!) with values from the module's variables.
Similar to this, but across all files in the directories/subdirectories:

replace(file("${path.module}/launchconfigs/aws_launch_configuration_masters_user_data"), "#S3", aws_s3_bucket.kops.bucket)

So the question in one sentence: how do I loop over the files and replace parts of them with variables from Terraform?
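One possible sketch, untested: switch each object from source to the inline content argument, so replace() can be applied to each file as it is read (file() assumes the files are UTF-8 text, and !S3! is just the marker from the question):

resource "aws_s3_bucket_object" "k8s-state" {
  for_each = fileset("${path.module}/data", "**/*")
  bucket   = aws_s3_bucket.kops.bucket
  key      = each.value

  # Read each file and substitute the marker before uploading.
  content = replace(
    file("${path.module}/data/${each.value}"),
    "!S3!",
    aws_s3_bucket.kops.bucket
  )
}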

1 post - 1 participant

Read full topic

AWS MSK rolling version upgrade is not supported


AWS MSK now supports version upgrades:

https://aws.amazon.com/about-aws/whats-new/2020/05/amazon-msk-supports-apache-kafka-version-upgrades/

When an MSK cluster's version is upgraded via the AWS console, everything works fine: the broker nodes remain the same and the version is upgraded to the latest one.

But when the same process is carried out using Terraform, it destroys the existing cluster and replaces it with a new one. It therefore looks like Terraform is not yet aware of this new AWS MSK capability.

2 posts - 1 participant

Read full topic

Dynamically changing the resource name


I am using Terraform 0.12 and I am creating multiple instances on the VMware cloud, but I would like to differentiate between my instances via the resource name.

Something like what we do in Packer, name = "vm-name{timestamp}", to make the names different from each other.
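A rough sketch of one way to get a similar effect with built-in functions, with the caveat that timestamp() changes on every run, so a name built from it would force the resource to be replaced on each apply:

locals {
  # Hypothetical name built from the current time, e.g. "vm-name-20200520143000".
  vm_name = "vm-name-${formatdate("YYYYMMDDhhmmss", timestamp())}"
}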

1 post - 1 participant

Read full topic

How to destroy a specific element from a resource


Current Terraform Version

terraform -v
v0.12.19

Use-cases

I have created many VMs in one resource and I want to delete only one VM, addressed by its key.

terraform state list
 cloudplatform_compute_instance.vms["AmbariMaas"]

I am using a customized provider.

resource "cloudplatform_compute_instance" "vms" {
    for_each = {
      for server in local.all_servers : "${server.name}" => server
    }
    name = each.value.name
    flavor_ref = data.cloudplatform_compute_flavor.flavor[each.value.flavor].id
    port = cloudplatform_compute_port.fixed_ip[each.value.port].id
    image_ref = data.cloudplatform_compute_image.test.id
    key_name = var.ssh_key_name
    availability_zone = var.az
    description = ""
    tags = each.value.tags
    server_group = cloudplatform_compute_server_group.cluster[each.value.group].id
}

resource "cloudplatform_compute_volume" "volumes" {
    for_each = {
      for volume in local.all_volumes : "${volume.suffix}.${volume.server}" => volume
    }
    name = format("%s-%s", each.value.server, each.value.suffix )
    size = each.value.size
    volume_type = "tiefighter"
    availability_zone = var.az
}

resource "cloudplatform_compute_volume_attachement" "mount" {
    for_each = {
      for volume in local.all_volumes : "${volume.suffix}.${volume.server}" => volume
    }
    volume_id = cloudplatform_compute_volume.volumes[each.key].id
    server_id = cloudplatform_compute_instance.vms[each.value.server].id
}

in terraform.tfvars

servers_definitions = {
    AmbariMaas = {
      names =  ["AmbariMaas"]
      tags = [ "MaasDev", "hdf-masternode-01" ]
      group = "maas_master_group"
      flavor = "Large-mem16 4vCPU-16GB"
      volumes =  {
        volume1 = {
          "suffix" =  "hadoop"
          "size" = "450"
          "moun_dir" = "/DATA/hadoop"
        }
        volume2 = {
          "suffix" =  "log"
          "size" = "50"
          "moun_dir" = "/DATA/log"
        }
      }
    }
}

in vars.tf

locals {
  all_volumes = flatten([
    for server in var.servers_definitions : [
      for name in server.names : [
        for volume in server.volumes : {
          size = volume.size
          suffix  = volume.suffix
          server = name
        }
      ]
    ]
  ])
  all_servers = flatten([
    for server in var.servers_definitions : [
      for name in server.names : [
        {
          name =  name
          port =  format("%sPort", name)
          tags =  server.tags
          flavor =  server.flavor
          group = server.group
          volumes = server.volumes
        }
      ]
    ]
  ])

}

Attempted Solutions

I have tried this command
terraform plan -destroy -target 'cloudplatform_compute_instance.vms["AmbariMaas"]'
But it plans to destroy the VM as well as all of its resource dependencies (volumes and mounts), including those of other servers.
When I try to destroy a specific element of the mount resource it works, because that element has no dependencies.
Do you recommend another way to define my overall server architecture?

2 posts - 2 participants

Read full topic


