Channel: Terraform - HashiCorp Discuss

Creating GCP Compute group from a module count


I have created a module which abstracts the building of GCP compute resources, and its basic form looks like:

resource "google_compute_instance" "instance" {
count        = var.vm_count
name         = "${var.region}-${var.vm_name_prefix}${var.name}${count.index}"
machine_type = var.machine_type
zone         = var.zone
}

Then in main.tf I am calling it using:

module "dmz" {
  source           = "./modules/gce"
  name             = "dmz"
  vm_count         = 2
  region           = var.region
  vm_name_prefix   = var.vm_name_prefix
}

That all works fine but then I wish to create a group:

resource "google_compute_instance_group" "default" {
  name        = "${var.deployment_label}-dmz"
  description = "DMZ Instance Group"
  instances   = module.dmz.*
  named_port {
    name = "http"
    port = "80"
  }
}

The problem is that when I test the plan it fails with:

  on compute.tf line 60, in resource "google_compute_instance_group" "default":
  60:   instances   = module.dmz.*.id

`This object does not have an attribute named "id".`

How should I reference the two compute resources that have been created in this example, please?
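
For reference, one way this is commonly wired up is to have the module export the instance self-links and feed those to the group. A minimal sketch, assuming an output name of my own choosing:

# In modules/gce/outputs.tf — the output name "instance_self_links" is illustrative:
output "instance_self_links" {
  value = google_compute_instance.instance[*].self_link
}

# In compute.tf — the instance group expects instance self-link URLs:
resource "google_compute_instance_group" "default" {
  name        = "${var.deployment_label}-dmz"
  description = "DMZ Instance Group"
  instances   = module.dmz.instance_self_links

  named_port {
    name = "http"
    port = "80"
  }
}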

1 post - 1 participant

Read full topic


Terraform Update statefile for null_resource


I have a null_resource that formats disks attached to an aws_instance. I changed the null_resource trigger to be based on the instance ID rather than the IP, and Terraform now wants to re-run the null_resource. How can I update the state file with that configuration without re-running the resource?
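
For context, the planned re-run comes from the trigger itself: any change to the triggers map forces the null_resource to be replaced, which re-runs its provisioners. A hedged sketch of the kind of change described (resource and attribute names are illustrative, not the poster's exact code):

resource "null_resource" "format_disks" {
  # Previously something like: triggers = { host = aws_instance.example.private_ip }
  # Changing any value in the triggers map marks the null_resource for replacement.
  triggers = {
    instance_id = aws_instance.example.id
  }

  provisioner "remote-exec" {
    inline = ["sudo mkfs -t ext4 /dev/xvdf"] # illustrative formatting command
  }
}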

1 post - 1 participant

Read full topic

How can I use an environment variable in an output?


Now I am using Terraform Cloud.
I can set environment variables in the workspace variables UI.
For example, I set the AWS_REGION environment variable.
I want to use this in an output like:

output "aws-region" {
  value     = "${AWS_REGION}"
}

so that other modules can use this.
Is this possible?
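
For what it's worth, environment variables are not directly visible to the Terraform language, but the AWS provider exposes the region it resolved (including one set via AWS_REGION) through a data source. A minimal sketch:

# The provider's effective region (e.g. from the AWS_REGION environment variable)
# is available through the aws_region data source.
data "aws_region" "current" {}

output "aws-region" {
  value = data.aws_region.current.name
}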

1 post - 1 participant

Read full topic

Cdktf and custom source modules


Please forgive me, as I have just started to peel back cdktf and its details. I am currently working with cdktf using Python as the extended language. In my initial time I have figured out a great deal; however, one aspect eludes me: module sources. See below:

module "mysql" {
    source = "git::ssh://git@gitlab.<private_repo>/<username>/project//modules/common/azure/database/mysql"

....
}

Right now I see ways of providing Terraform providers and Terraform modules; however, I have not been able to provide any custom modules. Is this a feature that's available?

1 post - 1 participant

Read full topic

End of stream delimiters showing up in output


Problem
I am receiving outputs which contain the <<EOT and EOT delimiters, for an unknown reason. It only appears in the outputs for which I use the random_password resource.

Example Input

resource "random_password" "http-cs-teamserver-password" {
  count            = 2
  length           = 15
  special          = true
  override_special = "@%)-_+[}:"
}

output "dns-cs-teamserver-passwords" {
  value = join("\n", random_password.dns-cs-teamserver-password[*].result)
}

Example Output

dns-cs-teamserver-passwords = <<EOT
-@GoESLjwrgXE4F
_PfWDI[zJ:yS5Sb
EOT

Is there a way to fix this minor issue? I can upload an image of the stdout output if needed.
Thank you for your time!
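
For reference, the <<EOT / EOT markers are only how the CLI renders a multi-line string value; they are not stored in the value itself. A minimal sketch of one way to avoid the heredoc rendering is to output the passwords as a list instead of a joined string (the sensitive flag is an addition of mine, not in the original):

output "dns-cs-teamserver-passwords" {
  value     = random_password.dns-cs-teamserver-password[*].result
  sensitive = true # recommended for password material
}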

1 post - 1 participant

Read full topic

Create cluster with Shared Network in GKE


I'm trying to create a cluster in GKE project-1 with the shared network of project-2.

Roles given to Service account:
project-1: Kubernetes Engine Cluster Admin, Compute Network Admin
project-2: Kubernetes Engine Service Agent, Compute Network User

Service Account is created under project-1.
API & Services are enabled in both Projects.

But I am getting this error persistently.
Error: googleapi: Error 403: Kubernetes Engine Service Agent is missing required permissions on this project. See Troubleshooting | Kubernetes Engine Documentation | Google Cloud for more info: required "container.hostServiceAgent.use" permission(s) for "projects/project-2"., forbidden

data "google_compute_network" "shared_vpc" {
    name = "network-name-in-project-2"
    project = "project-2"
}

 
data "google_compute_subnetwork" "shared_subnet" {
    name = "subnet-name-in-project-2"
    project = "project-2"
    region = "us-east1"
}

 # cluster creation under project 1
 # project 1 specified in Provider 
resource "google_container_cluster" "mowx_cluster" {
    name = var.cluster_name
    location = "us-east1"
    initial_node_count = 1
 
    master_auth {
        username = ""
        password = ""
 
        client_certificate_config {
            issue_client_certificate = false
        }
    }
 
    remove_default_node_pool = true
    cluster_autoscaling {
        enabled = false
    }
 
    # cluster_ipv4_cidr = var.cluster_pod_cidr
    ip_allocation_policy {
        cluster_secondary_range_name = "pods"
        services_secondary_range_name = "svc"
    }
 
    network = data.google_compute_network.shared_vpc.id
    subnetwork = data.google_compute_subnetwork.shared_subnet.id
}
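
For context, that 403 usually means the GKE service agent of the service project has not been granted the Host Service Agent User role on the host project. A hedged sketch of the missing binding (the project number placeholder is mine):

resource "google_project_iam_member" "gke_host_service_agent" {
  project = "project-2" # the host (shared VPC) project
  role    = "roles/container.hostServiceAgentUser"
  member  = "serviceAccount:service-PROJECT1_NUMBER@container-engine-robot.iam.gserviceaccount.com"
}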

1 post - 1 participant

Read full topic

How to set unit value in azurerm_monitor_metric_alert [Azure]


Hi, everyone

I would like to automate alert creation with Terraform on Azure. In my case, I want to create an alert on Used Capacity in a Storage Account. For example, if capacity is greater than 40 GiB, it will send an email. However, I do not know how to set the unit value (GiB).

An example on Azure (portal screenshot not included here):

An example with Terraform:

resource "azurerm_monitor_metric_alert" "terraform" {
  name                = "example-metricalert"
  resource_group_name = azurerm_resource_group.terraform.name
  scopes              = [azurerm_storage_account.terraform.id]
  description         = "Action will be triggered when Transactions count is greater than 40."
  severity            = "3"
  window_size        = "PT1D"
  criteria {
    metric_namespace = "Microsoft.Storage/storageAccounts"
    metric_name      = "UsedCapacity"
    aggregation      = "Average"
    operator         = "GreaterThan"
    threshold        = "40"
  }
}

Is it possible to set the unit value?

Thanks in advance,

Rodrigo
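
For reference, azurerm_monitor_metric_alert has no separate unit argument; the UsedCapacity metric is reported in bytes, so the GiB "unit" is expressed through the threshold value itself. A minimal sketch of the criteria block from the configuration above, adjusted under that assumption:

criteria {
  metric_namespace = "Microsoft.Storage/storageAccounts"
  metric_name      = "UsedCapacity"
  aggregation      = "Average"
  operator         = "GreaterThan"
  threshold        = 40 * 1024 * 1024 * 1024 # 40 GiB expressed in bytes
}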

1 post - 1 participant

Read full topic

Setting up Terraform


Hello all,

I am currently looking into setting up Terraform to be used with Datadog. Currently the setup consists of multiple GCP environments. Does Terraform need to be installed on each host across all the GCP environments to get the best results when setting up Datadog?
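
For context, Terraform itself only needs to run wherever you execute it (a workstation or CI runner); the Datadog provider talks to the Datadog API rather than to individual hosts, and installing the Datadog agent on each GCP host is a separate concern. A minimal illustrative sketch (the monitor name and query are placeholders):

provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}

resource "datadog_monitor" "gcp_cpu" {
  name    = "High CPU on GCP instances"
  type    = "metric alert"
  query   = "avg(last_5m):avg:gcp.gce.instance.cpu.utilization{*} > 0.9"
  message = "CPU utilization is above 90%."
}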

1 post - 1 participant

Read full topic


Using Terraform in a DevOps Pipeline


Hi,
I created an IaC pipeline with Terraform and Azure DevOps. In the DevOps pipeline I added a storage account to store the tfstate file in blob storage. When I run the pipeline, everything works and all services are created, but I can't find the tfstate file in the storage account. Here is the code of the YAML file:
jobs:
- deployment: deployDev
  continueOnError: false
  environment: 'dev'
  strategy:
    runOnce:
      deploy:
        steps:
          - checkout: self
          - task: TerraformInstaller@0
            displayName: 'install'
            inputs:
              terraformVersion: '0.12.3'
          - task: TerraformTaskV1@0
            displayName: 'init'
            inputs:
              provider: 'azurerm'
              command: 'init'
              backendServiceArm: ' (XXX)'
              backendAzureRmResourceGroupName: 'terraform-rg'
              backendAzureRmStorageAccountName: 'editerraformaccount'
              backendAzureRmContainerName: 'editerraformcontainer'
              backendAzureRmKey: 'terraformDev.tfstate'
              workingDirectory: $(System.DefaultWorkingDirectory)/Terraform/Dev
          - task: TerraformTaskV1@0
            displayName: 'plan'
            inputs:
              provider: 'azurerm'
              command: 'plan'
              #commandOptions: '-input=false'
              environmentServiceNameAzureRM: ' (XXX)'
              workingDirectory: $(System.DefaultWorkingDirectory)/Terraform/Dev
          - task: TerraformTaskV1@0
            displayName: 'apply'
            inputs:
              provider: 'azurerm'
              command: 'apply'
              #commandOptions: '-input=false -auto-approve'
              environmentServiceNameAzureRM: ' (XXX)'
              workingDirectory: $(System.DefaultWorkingDirectory)/Terraform/Dev
Any ideas?
regards
Andreas
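
A hedged guess at the cause: if the Terraform configuration itself contains no azurerm backend block, terraform init keeps the state local on the build agent even though the pipeline task supplies backend settings, so nothing ever lands in the storage account. A minimal declaration (the concrete values then come from the task inputs):

terraform {
  backend "azurerm" {}
}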

1 post - 1 participant

Read full topic

Convert .tfvars to .tfvars.json and vice versa


Hi Team,
I want to convert a .tfvars file to .tfvars.json and vice versa.
Is there any library available from HashiCorp to achieve this, or any support for implementing this conversion using Python or Go?

Thanks,
Priya
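
For reference, .tfvars is HCL and .tfvars.json is plain JSON carrying the same variable values, so any HCL parser (HashiCorp's hcl Go library, or community ports such as python-hcl2 — worth verifying for your use case) can drive the conversion. A small illustration of the equivalence:

# example.tfvars
region   = "us-east-1"
vm_count = 2
tags = {
  env = "dev"
}

# example.tfvars.json (the same values in JSON syntax)
# {
#   "region": "us-east-1",
#   "vm_count": 2,
#   "tags": { "env": "dev" }
# }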

1 post - 1 participant

Read full topic

Terraform v0.14.3: Using count in modules


I am using Terraform v0.14.3. I am using count in modules to create multiple Azure resources (network interface card, VM) of the same type. Below is the parent module, calling the child modules NIC and VM:

module "NIC" {
  source = "./NIC"
  count  = 2

  nic_name      =  "vm-nic-${count.index + 1}" 
  nic_location  = "eastus2"
  rg_name       = "abc-test-rg"
  ipconfig_name = "vm-nic-ipconfig-${count.index + 1}" 
  subnet_id     = "/subscriptions/***********/resourceGroups/abc-test-rg/providers/Microsoft.Network/virtualNetworks/abc-test-vnet/subnets/abc-test-vnet"
  
}
output "nic_id" {
  value = module.NIC[*].nic_id
}
module "VM" {
  source = "./VM"
  count = 2

  vm_name        = "test-vm"
  rg_name        = "abc-test-rg"
  location       = "eastus2"
  admin_password = var.admin_password
  nic_id         = [module.NIC[*].nic_id]
  
}

I am getting the below error during terraform plan:

Error: Incorrect attribute value type

  on VM\main.tf line 8, in resource "azurerm_linux_virtual_machine" "vm":
   8:   network_interface_ids           = var.nic_id
    |----------------
    | var.nic_id is tuple with 1 element

Inappropriate value for attribute "network_interface_ids": element 0: string
required.

How do I loop around the two NIC ids generated and pass them to the two VMs in the VM module? Thanks in advance!
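
For what it's worth, one hedged way to get the 1:1 pairing is to index the NIC module by count.index inside the VM module block, so each VM instance receives only its own NIC id:

module "VM" {
  source = "./VM"
  count  = 2

  vm_name        = "test-vm-${count.index + 1}"
  rg_name        = "abc-test-rg"
  location       = "eastus2"
  admin_password = var.admin_password
  nic_id         = [module.NIC[count.index].nic_id] # a single-element list per VM
}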

3 posts - 2 participants

Read full topic

azurerm_hdinsight_kafka: An argument named "min_tls_version" is not expected here


Hi

Can you please check the error below and suggest a solution?

#############################
on Modules/HDInsight/main.tf line 88, in resource "azurerm_hdinsight_kafka_cluster" "hdi_kafka_cluster":
88:   min_tls_version = "1.2"

An argument named "min_tls_version" is not expected here.

##[error]Bash exited with code '1'.
##[error]Bash wrote one or more lines to the standard error stream.
##[error]
Error: Unsupported argument
#############################

resource "azurerm_hdinsight_kafka_cluster" "hdi_kafka_cluster" {
  for_each = { for v in local.hd_insight_cluster : v.name => v } # create a temporary map (of maps) for for_each statement

  name                = each.value.name
  resource_group_name = each.value.resource_group
  location            = each.value.location
  cluster_version     = each.value.cluster_version
  tier                = each.value.tier
  min_tls_version     = "1.2"

  component_version {
    kafka = each.value.component_version
  }

  gateway {
    enabled  = each.value.gateway.enabled
    username = each.value.gateway.username
    password = var.cluster_kv_ksc_map["Standard"].secrets["hdi-gw-password"].value
  }

  storage_account_gen2 {
    is_default                   = true
    filesystem_id                = local.sa_dl_g2_fs_ids[each.value.storage_account_gen2.sa_data_lake_gen2_fs_name]
    storage_resource_id          = local.storage_account_ids[each.value.storage_account_gen2.storage_account_name]
    managed_identity_resource_id = local.user_msi_ids[each.value.storage_account_gen2.user_msi_name]
  }

  roles {
    head_node {
      vm_size            = each.value.head_node.vm_size
      username           = each.value.head_node.username
      ssh_keys           = [var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value]
      virtual_network_id = local.vnet_ids[each.value.head_node.vnet_name]
      subnet_id          = var.subnet_ids[each.value.head_node.snet_name]
    }

    worker_node {
      vm_size                  = each.value.worker_node.vm_size
      username                 = each.value.worker_node.username
      ssh_keys                 = [var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value]
      virtual_network_id       = local.vnet_ids[each.value.worker_node.vnet_name]
      subnet_id                = var.subnet_ids[each.value.worker_node.snet_name]
      target_instance_count    = each.value.worker_node.target_instance_count
      number_of_disks_per_node = each.value.worker_node.number_of_disks_per_node
    }

    zookeeper_node {
      vm_size            = each.value.zookeeper_node.vm_size
      username           = each.value.zookeeper_node.username
      ssh_keys           = [var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value]
      virtual_network_id = local.vnet_ids[each.value.zookeeper_node.vnet_name]
      subnet_id          = var.subnet_ids[each.value.zookeeper_node.snet_name]
    }
  }

  monitor {
    log_analytics_workspace_id = var.log_analytics.workspace_id
    primary_key                = var.log_analytics.primary_shared_key
  }

  # prevent deletion of resource
  lifecycle {
    prevent_destroy = true

    ignore_changes = [
      cluster_version,
      component_version[0].kafka,
    ]
  }

  # min_tls_version = each.value.min_tls_version

  # TAGs
  tags = var.tags

  depends_on = [
    module.dlf2_msi.user_msi,
    module.sa_data_lake_gen2_fs.sa_data_lake_gen2_fs,
  ]
}
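
"Unsupported argument" errors like this usually mean the provider version in use predates the argument. A hedged sketch of pinning a newer azurerm release to check (the version constraint shown is illustrative):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.40"
    }
  }
}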

1 post - 1 participant

Read full topic

Change the type of an existing attribute


(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant

Read full topic

Best Practice and workflow with Terraform and AWS Lambda functions


Hi,

Happy new year to everyone :slight_smile: I have a question regarding the workflow people follow when managing AWS Lambda functions via Terraform.

Our base infrastructure workflow is managed by Terraform. We have exactly 2 repos for Terraform: one contains reusable modules and the other contains root modules for different stages. Using these we create the base infra layer, i.e. networking, EC2, EKS, ALB, etc. The applications themselves are managed by Helm from other application repos. So, for example, I manage EKS versions and configuration via the Terraform repo, but the configuration of the applications is managed by Helm via the application repo. There is a clear separation of concerns.

But when managing AWS Lambda functions, I am confused about how to properly separate these concerns. We create the Lambda function and supporting resources (IAM roles, event source mapping and layers) with Terraform. And since the underlying API also mandates a ZIP file as part of Lambda function creation, we create a dummy zip file and pass it to the "aws_lambda_function" resource. The idea is to manage the individual function code updates via a different application repo which contains the code for the function. Terraform shows no diffs if I update the function code outside Terraform (we don't specify hashes).
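
For context, the pattern described in the previous paragraph might look roughly like this (a minimal sketch with illustrative names): Terraform owns the function shell and its wiring, while code deployments happen outside Terraform, with code-related attributes ignored so plans stay clean.

resource "aws_lambda_function" "worker" {
  function_name = "worker"
  role          = aws_iam_role.worker.arn
  handler       = "index.handler"
  runtime       = "python3.8"
  filename      = "dummy.zip" # placeholder artifact; real code is deployed outside Terraform

  lifecycle {
    # Ignore code-related attributes so out-of-band deployments don't show diffs.
    ignore_changes = [filename, source_code_hash, publish]
  }
}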

But now, following best practice, I want to publish versions and aliases. I want to keep the function aliases the same as the git release tags so that there is a 1:1 mapping between them. But if I create a new alias for each new release, then I also have to update the ARN of the event source mapping (which is managed by Terraform). And if I keep one alias and change the underlying versions, then I lose the 1:1 mapping between git release tags and function aliases.

So in this scenario I am not sure how to separate the concerns of the infrastructure layer and the application layer, and I would like to understand how the larger community is solving this problem when adopting AWS Lambda. What workflow do you follow to manage both these layers efficiently? How do you manage updates to function code? Do you have Terraform code as part of your application repo? Or do you have a complicated CI which calls terraform apply from within the application repo?

Any pointer in the right direction is appreciated.

Note: We don’t use API Gateway or SAM because our use case is to run functions upon SQS, S3 and SNS updates. We don’t serve our websites via Lambda function and API gateway.

Thank You

Best Regards,
Vishwanath

2 posts - 2 participants

Read full topic

AMI with 2 volumes causes issue when you have EC2 with EBS volumes


An EC2 Ubuntu 16 AMI was created; it had code for the EBS volumes like:

root_block_device = {
  volume_type           = "gp2"
  volume_size           = "${var.volume1_size}"
  delete_on_termination = true
}

ebs_block_device = {
  device_name = "/dev/sda2"
  volume_type = "gp2"
  volume_size = "${var.volume2_size}"
  encrypted   = "${var.ebs_encryption}"
}

When I create an EC2 instance with the AMI created from the above, I see that the AMI has two volumes. I want to use this AMI but attach additional EBS volumes as necessary. So when I create the EC2 instance I used the same code as above so I can set the size of my volumes appropriately, and everything works fine: it creates an EC2 instance with 3 volumes. But any time afterwards, if I use Terraform to make any change, be it increasing a volume size or changing something as simple as a security group or role, Terraform says it has to destroy the instance and recreate it.

Do we know why this happens? Here is my output: the first time, when I ran terraform apply and created the EC2 instance, and the second time, when I changed the size of the attached EBS volume from 40 to 60. Instead of just changing the volume size, it says it has to destroy and recreate the EC2 instance itself.
First-time output, creating the EC2 instance with 2 volumes from the AMI (root and ebs_block_device) and one volume added (aws_volume_attachment):

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_ebs_volume.volsdh will be created
  + resource "aws_ebs_volume" "volsdh" {
      + arn               = (known after apply)
      + availability_zone = "us-east-1a"
      + encrypted         = (known after apply)
      + id                = (known after apply)
      + iops              = (known after apply)
      + kms_key_id        = (known after apply)
      + size              = 40
      + snapshot_id       = (known after apply)
      + tags              = {
          + "Environment"   = "aws_dev"
          + "LOB"           = "temp"
          + "Name"          = "sampleec2"
          + "Project"       = "testec2"
          + "System Number" = ""
        }
      + type              = (known after apply)
    }

  # aws_instance.ec2 will be created
  + resource "aws_instance" "ec2" {
      + ami                          = "ami-00d1d98dfde2c3742"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + ebs_optimized                = false
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.medium"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + network_interface_id         = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + security_groups              = (known after apply)
      + subnet_id                    = (known after apply)
      + tags                         = {
          + "Environment"   = "lab"
          + "LOB"           = "oi"
          + "Name"          = "testec2EC2-2fromtestec2AMI-2"
          + "Project"       = "Core"
          + "System Number" = " "
          + "snapsvc"       = "false"
        }
      + tenancy                      = (known after apply)
      + user_data                    = "25e32189148f1c938282b516141f109deb9888c4"
      + volume_tags                  = {
          + "Environment"   = "lab"
          + "LOB"           = "oi"
          + "Name"          = "sampleec2"
          + "Project"       = "Core"
          + "System Number" = " "
          + "snapsvc"       = "false"
        }
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = true
          + device_name           = "/dev/sda2"
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 50
          + volume_type           = "gp2"
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + network_interface {
          + delete_on_termination = false
          + device_index          = 0
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = true
          + iops                  = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 40
          + volume_type           = "gp2"
        }
    }

  # aws_network_interface.ec2_nic will be created
  + resource "aws_network_interface" "ec2_nic" {
      + description       = "ENI for testec2EC2-2fromtestec2AMI-2"
      + id                = (known after apply)
      + private_dns_name  = (known after apply)
      + private_ip        = (known after apply)
      + private_ips       = (known after apply)
      + private_ips_count = (known after apply)
      + security_groups   = [
          + "sg-03c5dcd4492399a51",
          + "sg-078ad65ef12c9e7af",
          + "sg-0a8159ff0109ae900",
        ]
      + source_dest_check = true
      + subnet_id         = "subnet-00d12b5903a6cb3f5"
      + tags              = {
          + "Environment"   = "lab"
          + "LOB"           = "oi"
          + "Name"          = "ENI for testec2EC2-2fromtestec2AMI-2"
          + "Project"       = "Core"
          + "System Number" = " "
          + "snapsvc"       = "false"
        }

      + attachment {
          + attachment_id = (known after apply)
          + device_index  = (known after apply)
          + instance      = (known after apply)
        }
    }

  # aws_volume_attachment.ebs_att will be created
  + resource "aws_volume_attachment" "ebs_att" {
      + device_name  = "/dev/sdh"
      + id           = (known after apply)
      + instance_id  = (known after apply)
      + skip_destroy = true
      + volume_id    = (known after apply)
    }

Plan: 4 to add, 0 to change, 0 to destroy.

Second-time output, after changing the size of the attached EBS volume from 40 to 60:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # aws_ebs_volume.volsdh will be updated in-place
  ~ resource "aws_ebs_volume" "volsdh" {
        arn               = "arn:aws:ec2:us-east-1:163326074592:volume/vol-03a97d0d5336537e5"
        availability_zone = "us-east-1a"
        encrypted         = true
        id                = "vol-03a97d0d5336537e5"
        iops              = 120
        kms_key_id        = "arn:aws:kms:us-east-1:163326074592:key/fbedf51e-b1b0-46a9-be2b-1c7d57c35620"
      ~ size              = 40 -> 60
        tags              = {
            "Environment"   = "aws_dev"
            "LOB"           = "EBIA"
            "Name"          = "Volume for testec2EC2EC2-2fromtestec2EC2AMI-2"
            "Project"       = "testec2EC2"
            "System Number" = "Z150"
        }
        type              = "gp2"
    }

  # aws_instance.ec2 must be replaced
-/+ resource "aws_instance" "ec2" {
        ami                          = "ami-----------------------"
      ~ arn                          = "-----------------" -> (known after apply)
      ~ associate_public_ip_address  = false -> (known after apply)
      ~ availability_zone            = "us-east-1a" -> (known after apply)
      ~ cpu_core_count               = 2 -> (known after apply)
      ~ cpu_threads_per_core         = 1 -> (known after apply)
      - disable_api_termination      = false -> null
        ebs_optimized                = false
        get_password_data            = false
      + host_id                      = (known after apply)
      ~ id                           = "i-0a2e8c70311030ff8" -> (known after apply)
      ~ instance_state               = "running" -> (known after apply)
        instance_type                = "t2.medium"
      ~ ipv6_address_count           = 0 -> (known after apply)
      ~ ipv6_addresses               = -> (known after apply)
      + key_name                     = (known after apply)
      - monitoring                   = false -> null
      + network_interface_id         = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      ~ primary_network_interface_id = "eni-0ee7d7b894a99e65b" -> (known after apply)
      ~ private_dns                  = "ip-10-181-114-126.ec2.internal" -> (known after apply)
      ~ private_ip                   = "10.181.114.126" -> (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      ~ security_groups              = -> (known after apply)
      - source_dest_check            = true -> null
      ~ subnet_id                    = "subnet-00d12b5903a6cb3f5" -> (known after apply)
        tags                         = {
            "Environment"   = "lab"
            "LOB"           = "oi"
            "Name"          = "testec2EC2EC2-2fromtestec2EC2AMI-2"
            "Project"       = "Core"
            "System Number" = " "
            "snapsvc"       = "false"
        }
      ~ tenancy                      = "default" -> (known after apply)
        user_data                    = "25e32189148f1c938282b516141f109deb9888c4"
        volume_tags                  = {
            "Environment"   = "lab"
            "LOB"           = "oi"
            "Name"          = "Volume for testec2EC2EC2-2fromtestec2EC2AMI-2"
            "Project"       = "Core"
            "System Number" = " "
            "snapsvc"       = "false"
        }
      ~ vpc_security_group_ids       = [
          - "sg-03c5dcd4492399a51",
          - "sg-078ad65ef12c9e7af",
          - "sg-0a8159ff0109ae900",
        ] -> (known after apply)

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - ebs_block_device { # forces replacement
          - delete_on_termination = false -> null
          - device_name           = "/dev/sdh" -> null
          - encrypted             = true -> null
          - iops                  = 120 -> null
          - volume_id             = "vol-03a97d0d5336537e5" -> null
          - volume_size           = 40 -> null
          - volume_type           = "gp2" -> null
        }
      + ebs_block_device { # forces replacement
          + delete_on_termination = true
          + device_name           = "/dev/sda2"
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 50
          + volume_type           = "gp2"
        }
      - ebs_block_device { # forces replacement
          - delete_on_termination = true -> null
          - device_name           = "/dev/sda2" -> null
          - encrypted             = true -> null
          - iops                  = 150 -> null
          - snapshot_id           = "snap-0dc4e2b5e7ea6a033" -> null
          - volume_id             = "vol-024e723d29b24463b" -> null
          - volume_size           = 50 -> null
          - volume_type           = "gp2" -> null
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

        network_interface {
            delete_on_termination = false
            device_index          = 0
            network_interface_id  = "eni-0ee7d7b894a99e65b"
        }

      ~ root_block_device {
            delete_on_termination = true
          ~ iops                  = 120 -> (known after apply)
          ~ volume_id             = "vol-0cb73def749d6f171" -> (known after apply)
            volume_size           = 40
            volume_type           = "gp2"
        }
    }

  # aws_volume_attachment.ebs_att must be replaced
-/+ resource "aws_volume_attachment" "ebs_att" {
        device_name  = "/dev/sdh"
      ~ id           = "vai-3950028857" -> (known after apply)
      ~ instance_id  = "i-0a2e8c70311030ff8" -> (known after apply) # forces replacement
        skip_destroy = true
        volume_id    = "vol-03a97d0d5336537e5"
    }

Plan: 2 to add, 1 to change, 2 to destroy.

Please help, thanks
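
For reference, the plan above marks the inline ebs_block_device blocks with "# forces replacement": any drift between the inline block-device configuration and what the AMI-built instance actually has forces the aws_instance to be recreated. A hedged sketch of the usual workaround, keeping only the root volume inline and managing every extra disk as a separate volume plus attachment (names and sizes are illustrative):

resource "aws_instance" "ec2" {
  ami           = "ami-00d1d98dfde2c3742"
  instance_type = "t2.medium"

  root_block_device {
    volume_type = "gp2"
    volume_size = 40
  }
  # No inline ebs_block_device blocks here.
}

resource "aws_ebs_volume" "volsdh" {
  availability_zone = aws_instance.ec2.availability_zone
  size              = 60
  type              = "gp2"
}

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdh"
  instance_id = aws_instance.ec2.id
  volume_id   = aws_ebs_volume.volsdh.id
}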

2 posts - 2 participants

Read full topic


Interval and Interval_unit with aws_dlm_lifecycle_policy


Hi everyone,

I am implementing some Data Lifecycle Manager policies within my AWS estate via Terraform. Some of the snapshot policies have the requirement to take an EC2 snapshot every week, for example, and retain 3 snapshots (via a 7/3 tag). However, I can see in the latest Terraform docs (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dlm_lifecycle_policy) that Terraform currently only allows the snapshot interval values and interval unit below:

  • interval (Required) How often this lifecycle policy should be evaluated. 1 , 2 , 3 , 4 , 6 , 8 , 12 or 24 are valid values.
  • interval_unit - (Optional) The unit for how often the lifecycle policy should be evaluated. HOURS is currently the only allowed value and also the default value.

Therefore it is seemingly impossible to implement a snapshot policy through Terraform with my above requirement. Does anyone know a way around this (I know I can create it in the AWS console directly), or whether Terraform plans to incorporate additional values for the interval and interval_unit parameters in the future?

Thanks.
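
For reference, a minimal sketch of what the schedule currently allows (interval is limited to hours, so a true weekly cadence is not expressible yet; the names and the IAM role reference are illustrative):

resource "aws_dlm_lifecycle_policy" "example" {
  description        = "Snapshot policy - closest approximation today"
  execution_role_arn = aws_iam_role.dlm_lifecycle_role.arn # assumed to exist
  state              = "ENABLED"

  policy_details {
    resource_types = ["VOLUME"]

    target_tags = {
      Snapshot = "7/3"
    }

    schedule {
      name = "daily, keep 3"

      create_rule {
        interval      = 24      # hours; 1, 2, 3, 4, 6, 8, 12 or 24
        interval_unit = "HOURS" # currently the only allowed unit
        times         = ["03:00"]
      }

      retain_rule {
        count = 3
      }

      copy_tags = false
    }
  }
}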

1 post - 1 participant

Read full topic

Referencing outputs from a for_each module


I have a module whose resources are created using for_each, and its output is as below:

output "nic_ids" {
    value = [for x in azurerm_network_interface.nic : x.id]
}
nic_ids = [
  "/subscriptions/*****/resourceGroups/test-rg/providers/Microsoft.Network/networkInterfaces/test-nic-1",
  "/subscriptions/*****/resourceGroups/test-rg/providers/Microsoft.Network/networkInterfaces/test-nic-2",
]

My aim is to pass above NIC ids to the VM module and have 1:1 mapping between NIC id and VM ( test-nic-1 should only be attached to vm-1 , test-nic-2 to vm-2 etc.)

module "vm" {
  source  = "*****/vm/azurerm"
  version = "0.1.0"
  
   vms = var.vms
   nic_ids = module.nic[each.value.id].nic_ids 
} 

I am getting below error:

Error: each.value cannot be used in this context

  on main.tf line 58, in module "vm":
  58:    nic_ids = module.nic[each.value.id].nic_ids 

A reference to "each.value" has been used in a context in which it is
unavailable, such as when the configuration no longer contains the value in
its "for_each" expression. Remove this reference to each.value in your
configuration to work around this error.

Can you please suggest?
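
For what it's worth, each.value is only meaningful inside a block that has its own for_each. A hedged sketch of one way to get the 1:1 mapping — the output and variable names below are illustrative, not the actual module interface:

# In the NIC module: expose the ids as a map keyed the same way as the for_each.
output "nic_ids_by_key" {
  value = { for k, nic in azurerm_network_interface.nic : k => nic.id }
}

# In the root module: give the VM module its own for_each over the same keys,
# so each instance of it can look up exactly one NIC id.
module "vm" {
  source   = "*****/vm/azurerm"
  version  = "0.1.0"
  for_each = var.vms

  vms     = { (each.key) = each.value }
  nic_ids = [module.nic.nic_ids_by_key[each.key]]
}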

1 post - 1 participant

Read full topic


Endpoint Services Allowed Principals


Looks like this code will only pull in the current ID for allowed principals for the endpoint service. How can you add additional ARNs to the whitelist for an endpoint service?

In the console I would go to the endpoint service and add an ARN to the whitelist. How does this translate to terraform code?
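
A hedged sketch of how this usually translates: each extra ARN on the endpoint service's allow list can be its own aws_vpc_endpoint_service_allowed_principal resource (the service reference and ARN below are placeholders):

resource "aws_vpc_endpoint_service_allowed_principal" "partner" {
  vpc_endpoint_service_id = aws_vpc_endpoint_service.example.id
  principal_arn           = "arn:aws:iam::123456789012:root"
}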

2 posts - 2 participants

Read full topic

Migrate State file from one AWS account to another account


Hi All,
I am trying to migrate the state file from the S3 bucket of one account to the S3 bucket of another account. The current S3 bucket just uses server-side encryption, so I downloaded the file and uploaded it into the S3 bucket of the other account, but when running terraform init it failed with the below error.

Error refreshing state: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
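
For context, SignatureDoesNotMatch usually points at the credentials or region used to sign the S3 request rather than at the state file contents, so it is worth checking that the backend configuration and the AWS credentials in use both belong to the new account. A hedged sketch of the target backend (bucket, key, region and profile are placeholders):

terraform {
  backend "s3" {
    bucket  = "new-account-state-bucket"
    key     = "env/terraform.tfstate"
    region  = "us-east-1"
    profile = "new-account"
  }
}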

1 post - 1 participant

Read full topic
