Channel: Terraform - HashiCorp Discuss
Viewing all 11357 articles

Error: EBS Volume in unexpected state after deletion: available

Hello,

Terraform v0.12.20

  • provider.aws v2.61.1

I received an error while deleting an EBS volume:

Error: EBS Volume in unexpected state after deletion: available. I know it’s an AWS glitch, but Terraform is not able to recover from this error.

After this error, the state file does not show the resource as tainted; it appears with a regular status:

"module": "module.firefly_EC2",
"mode": "managed",
"type": "aws_ebs_volume",
"name": "default",
"each": "map",
"provider": "provider.aws",
"instances": [
  {
    "index_key": "opstest-Firefly-Firefly-B1./dev/sdg",
    "schema_version": 0,
    "attributes": {

When I try to refresh the state, it gives another error:
Error: Invalid function argument

 ../../../modules/Cluster/EC2/main.tf line 60, in resource "aws_volume_attachment" "default":
  60:     volume_id   = lookup(aws_ebs_volume.default, each.key).id
    |----------------
    | aws_ebs_volume.default is object with 3 attributes

Invalid value for "inputMap" parameter: the given object has no attribute
"opstest-Firefly-Firefly-B1./dev/sdg".

The state file is empty except for this one corrupted EBS volume.

Is there a workaround for this?
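One possible workaround (a sketch, untested against this exact setup) is to drop only the corrupted instance from state and then either let Terraform re-create it or re-import the surviving volume. The resource address is taken from the state snippet above; the `vol-…` ID is a placeholder:

```shell
# Remove only the corrupted instance from state (single quotes protect the index key):
terraform state rm 'module.firefly_EC2.aws_ebs_volume.default["opstest-Firefly-Firefly-B1./dev/sdg"]'

# If the volume still exists in AWS, import it back by its real volume ID (placeholder here):
terraform import 'module.firefly_EC2.aws_ebs_volume.default["opstest-Firefly-Firefly-B1./dev/sdg"]' vol-0123456789abcdef0
```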

1 post - 1 participant



The tags argument for Azure resources is inconsistent

azuread tags use list(string)

azurerm tags use map(string)

I’d like to use map(string) for my plan but how do I convert from one to the other?
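For anyone with the same question: a Terraform 0.12 `for` expression can convert between the two shapes. A minimal sketch, assuming a `var.tags` of type `map(string)` and an arbitrary `key:value` join format for the list side:

```hcl
variable "tags" {
  type    = map(string)
  default = { env = "prod", team = "platform" }
}

locals {
  # map(string) -> list(string), e.g. ["env:prod", "team:platform"]
  tag_list = [for k, v in var.tags : "${k}:${v}"]

  # list(string) -> map(string), splitting each entry on the first ":"
  tag_map = { for t in local.tag_list : split(":", t)[0] => split(":", t)[1] }
}
```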

1 post - 1 participant


Iam policy, multiple resources, and for_each

I’m using AWS.
I have one parent account.
I have many children accounts.

I have a policy in the parent, which allows IAM users to assume children accounts:

data "aws_iam_policy_document" "assume" {
  statement {
    sid    = "AssumeIntoChildren"
    effect = "Allow"

    actions = [
      "sts:AssumeRole"
    ]

    resources = [
      "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:role/assume-into-me"
    ]
  }
}

I can get a list of all accounts:

data "aws_organizations_organization" "all_accounts" {}

Is it possible to use for_each to loop over data.aws_organizations_organization.all_accounts.accounts[*].id when defining the resources in my iam policy?

If not, I’m stuck modifying this policy every time I add an account.
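If it helps others: a `for_each` is not actually needed here, since `resources` accepts a list and a Terraform 0.12 `for` expression can build it from the data source (a sketch, untested):

```hcl
data "aws_iam_policy_document" "assume" {
  statement {
    sid     = "AssumeIntoChildren"
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    # One ARN per member account in the organization
    resources = [
      for account in data.aws_organizations_organization.all_accounts.accounts :
      "arn:aws:iam::${account.id}:role/assume-into-me"
    ]
  }
}
```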

1 post - 1 participant


How do Terraform variables work in a modularized stack?

Hi Team,

Can someone please explain? I have been using this, but I have some gaps in fully understanding it.

Let’s say I have the following layout:

terraform

  • modules:

    • ec2:
      • main.tf
      • variables.tf
  • main.tf

  • variable.tf

I have main.tf, variables.tf, and a modules directory under the root.
Under modules, I have an ec2 folder with main.tf and variables.tf.

The root main.tf calls modules/ec2, which has variables passed to it; those variables are declared in the module’s variables.tf. The same variables are also defined in variables.tf in the root folder. How does this work?

Can someone please help me understand this?
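As a sketch of the wiring (file names as in the layout above; the variable name instance_type is just an example):

```hcl
# modules/ec2/variables.tf -- the module's own declaration
variable "instance_type" {
  type = string
}

# variables.tf (root) -- a separate, independent declaration
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

# main.tf (root) -- the root value flows into the module only through this argument
module "ec2" {
  source        = "./modules/ec2"
  instance_type = var.instance_type
}
```

The two variable blocks never share values automatically; even though they have the same name, the module only sees what the module block passes in.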

Thanks.
KS

2 posts - 2 participants


Terraform show -json

I have plan files generated with different Terraform 0.12 versions. For my processing, I just want to see the JSON output of these files. I am using Terraform 0.12.17 to run “terraform show -json planfile” on those plan files one by one, but I am getting this error:

Error: Invalid plan file

Failed to read plan from plan file: plan file was created by Terraform 0.12.6,
but this is 0.12.17; plan files cannot be transferred between different
Terraform versions.

I understand that Terraform requires the same version and plugins to be available when using terraform commands on existing plan files. But this is just show, for converting to JSON. Is there a way for “terraform show -json” to be version agnostic, at least for patch versions within the same minor version, i.e. Terraform v0.12?
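There is no built-in flag for this; one workaround is to keep one binary per Terraform version and dispatch on the version named in the error message (a sketch; the binary paths are hypothetical):

```shell
# Side-by-side binaries such as ~/bin/terraform-0.12.6, ~/bin/terraform-0.12.17, ...
~/bin/terraform-0.12.6 show -json planfile > planfile.json
```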

2 posts - 2 participants


Ssh_exchange_identification: read: Connection reset by peer

I have written Terraform code that creates an AWS EC2 instance and installs the httpd web server on it. SSH keys are created through Terraform code. When I run ‘terraform apply’, the EC2 instance is created with httpd installed, and I’m able to access the contents of index.html using the public IP of the created instance. The problem I face is that SSH to the created EC2 instance doesn’t work and throws the error: ssh_exchange_identification: read: Connection reset by peer
Interestingly, if I comment out the ‘provisioner’ and ‘connection’ sections of the code, I am able to SSH into the created EC2 instance. But I want to be able to install software as well as SSH into instances. Please help.

Below is the code.

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}

variable "sgports" {
  type        = list(number)
  description = "Enter ports to be allowed in Security Group"
}

variable "kname" {
  description = "Enter Key name"
}

resource "tls_private_key" "keys" {
  algorithm = "RSA"
}

resource "aws_key_pair" "ec2key" {
  key_name   = var.kname
  public_key = tls_private_key.keys.public_key_openssh
}

resource "aws_security_group" "sgiac" {
  name = "sgiacdynamic"

  dynamic "ingress" {
    for_each = var.sgports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "ec2" {
  instance_type               = "t2.micro"
  ami                         = "ami-09d95fab7fff3776c"
  associate_public_ip_address = "true"
  key_name                    = aws_key_pair.ec2key.key_name
  vpc_security_group_ids      = [aws_security_group.sgiac.id]

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd",
      "sudo systemctl start httpd",
      "sudo chmod 777 -R /var/",
      "echo $HOSTNAME >> /var/www/html/index.html"
    ]

    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = tls_private_key.keys.private_key_pem
      host        = self.public_ip
    }
  }
}

output "sg" {
  value = aws_security_group.sgiac.id
}

output "ec2" {
  value = aws_instance.ec2.public_ip
}

output "ec2ENI" {
  value = aws_instance.ec2.primary_network_interface_id
}

output "key" {
  value = tls_private_key.keys.private_key_pem
}

output "public" {
  value = tls_private_key.keys.public_key_openssh
}

1 post - 1 participant


Creating FSx with multi-AZ deployment using terraform

I am able to create an FSx file system in a single AZ, but I am unable to create a multi-AZ deployment.

Can anyone help with this?
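For reference, multi-AZ FSx for Windows is selected via the deployment_type argument (a sketch; this needs a provider version recent enough to support the argument, and the subnet and directory IDs below are placeholders):

```hcl
resource "aws_fsx_windows_file_system" "example" {
  deployment_type     = "MULTI_AZ_1"
  storage_capacity    = 300
  throughput_capacity = 16
  subnet_ids          = [aws_subnet.a.id, aws_subnet.b.id]
  preferred_subnet_id = aws_subnet.a.id
  active_directory_id = aws_directory_service_directory.example.id
}
```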

Thanks in advance

1 post - 1 participant


Specifying kubernetes version for unmanaged EKS worker nodes in terraform

We are creating an EKS cluster with worker nodes using the Terraform resources below. There is an option to specify the Kubernetes version for the control plane in the aws_eks_cluster resource; however, on specifying this, the worker nodes do not get the same version.
For example, if version is set to 1.15 in the aws_eks_cluster resource, the worker nodes that join the cluster still seem to have 1.13. We are aware that the worker node version can be specified with the aws_eks_node_group resource, but we cannot go with that approach since certain things do not suit our case.
Can we specify the version for the worker nodes that join the cluster using the resources below in any way?

resource "aws_eks_cluster" "eks_cluster" {
  name     = "${var.eks_cluster_name}"
  role_arn = "${var.iam_role_master}"
  version  = "1.15"
  vpc_config {
    security_group_ids      = ["${var.sg-eks-master}"]
    subnet_ids              = ["${var.subnet_private1}", "${var.subnet_private2}", "${var.subnet_public1}", "${var.subnet_public2}"]
    endpoint_private_access = true
    endpoint_public_access  = true
    public_access_cidrs     = ["${var.accessingip}"]
  }
}
locals {
iam-eks-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.eks_cluster.endpoint}' --b64-cluster-ca '${aws_eks_cluster.eks_cluster.certificate_authority.0.data}' '${var.eks_cluster_name}'
USERDATA
}
resource "aws_launch_configuration" "lc_eks" {
  iam_instance_profile        = "${var.instance_profile_node}"
  image_id                    = "${var.image_id}"
  instance_type               = "${var.instance_type}"
  name_prefix                 = "lc-${var.client}"
  security_groups             = ["${var.sg-eks-node}"]
  user_data_base64            = "${base64encode(local.iam-eks-node-userdata)}"
  lifecycle {
    create_before_destroy = true
  }
}
resource "aws_autoscaling_group" "asg_eks" {
  desired_capacity     = "${var.min_node_count}"
  launch_configuration = "${aws_launch_configuration.lc_eks.id}"
  max_size             = "${var.max_node_count}"
  min_size             = "${var.min_node_count}"
  name                 = "asg-${var.client}"
  vpc_zone_identifier  = ["${var.subnet_private1}", "${var.subnet_private2}","${var.subnet_public1}","${var.subnet_public2}"]
  tag {
    key                 = "Name"
    value               = "${var.client}-terraform-tf-eks"
    propagate_at_launch = true
  }
  tag {
    key                 = "kubernetes.io/cluster/${var.eks_cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}
data "external" "aws_iam_authenticator" {
  program = ["/bin/bash", "-c", "aws-iam-authenticator token -i ${var.eks_cluster_name} | jq -c -r .status"]
}

provider "kubernetes" {
  host                      = "${aws_eks_cluster.eks_cluster.endpoint}"
  cluster_ca_certificate    = "${base64decode(aws_eks_cluster.eks_cluster.certificate_authority.0.data)}"
  token                     = "${data.external.aws_iam_authenticator.result.token}"
  load_config_file          = false
  version = "1.11.3"
}
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name = "aws-auth"
    namespace = "kube-system"
  }
  data {
    mapRoles = <<EOF
- rolearn: "${var.iam_role_node}"
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
EOF
  }
  depends_on = [
    "aws_eks_cluster.eks_cluster"  ]
}
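Since unmanaged worker nodes get their kubelet version from the AMI, one approach (a sketch) is to resolve image_id from the version-specific EKS-optimized AMI SSM parameter that AWS publishes, instead of passing a fixed var.image_id:

```hcl
data "aws_ssm_parameter" "eks_worker_ami" {
  name = "/aws/service/eks/optimized-ami/1.15/amazon-linux-2/recommended/image_id"
}

resource "aws_launch_configuration" "lc_eks" {
  image_id = "${data.aws_ssm_parameter.eks_worker_ami.value}"
  # ... other arguments as in the original launch configuration ...
}
```

Replacing "1.15" in the parameter path keeps the worker AMI in step with the control plane version.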

1 post - 1 participant



How do I separate infrastructure per customer?

My use case is a little different than what Terraform is designed for. I want to set up a service similar to Runcloud, Cloudways etc. So potentially thousands of customers each having one or more VPS servers. I want to use Terraform for setting up and tearing down the server for each customer on any one of maybe 4 major cloud providers. Just like how Runcloud and Cloudways do it. So I basically want to use Terraform as the abstraction layer between 4 or 5 different major cloud providers instead of having to connect to each providers API separately and then having to write my own abstraction layer.

How would I do something like this using Terraform? Terraform is designed for infrastructure described in a few state files. I am looking to do tiny micro infrastructures (mostly just one or 2 VPS servers per customer) described in potentially thousands of state files, presumably one for each customer. I don’t think terraform workspaces is the right separation method because the separation is not strong enough.

My current thinking is to create a separate remote state bucket folder for each customer. That customer folder will probably just contain the customer’s state file, which will typically describe one or more fairly generic VPS servers. Will this work, even though Terraform is not really designed with this sort of use case in mind?
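One common pattern for this is partial backend configuration: the backend block omits the key, and each customer's key is supplied at init time (the bucket name below is hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket = "my-customer-states"
    region = "us-east-1"
    # "key" is intentionally omitted; supply it per customer:
    #   terraform init -backend-config="key=customers/CUSTOMER_ID/terraform.tfstate"
  }
}
```

A CI job (Jenkins, in this case) can then run init with a customer-specific key before every plan/apply.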

The other part of it I haven’t bothered to mention is that I will also combine it with Ansible for setting up apps inside the server once Terraform sets up the server and installs SSH keys and then provides Ansible with the server IP address. That should not be a problem.

Right now, the plan is to run all the command line stuff using Jenkins. So the logical workflow will be:

My User CP website > My Backend > Jenkins > Terraform + Ansible > (Google Cloud or AWS or DigitalOcean or Linode) VPS server(s)

1 post - 1 participant


Terraform state file behavior

We’re not sure if this is a Terraform 0.12 thing or just something I’m doing wrong.
With some new code entirely in Terraform 0.12, the state file is updated on every apply, regardless of any changes. We have quite different code in 0.11 (both modularized), but we don’t see this behavior there. Is this expected?

--- a/federation_iam/CloudOpsState/968600917556-terraform.tfstate
+++ b/federation_iam/CloudOpsState/968600917556-terraform.tfstate
@@ -1,7 +1,7 @@
 {
   "version": 4,
   "terraform_version": "0.12.26",
-  "serial": 79,
+  "serial": 80,
   "lineage": "af4fd9cd-e1ff-9b2a-4989-d64d09608e59",
   "outputs": {},
   "resources": [
@@ -17,7 +17,7 @@
           "attributes": {
             "account_id": "XXXXXXXXXXXXX",
             "arn": "arn:aws:sts::XXXXXXXXXXXX:assumed-role/some-role/org-script",
-            "id": "2020-06-10 15:21:42.969888429 +0000 UTC",
+            "id": "2020-06-10 15:34:49.663208607 +0000 UTC",
             "user_id": "XXXXXXXXXXXXXXX:org-script"
           }
         }

3 posts - 2 participants


Vault provider - illegal token_reviewer_jwt data

Hello,

I am using the Vault provider for Terraform to enable the Kubernetes auth method in Vault.

resource "vault_auth_backend" "kubernetes" {
  type = "kubernetes"
}

resource "vault_kubernetes_auth_backend_config" "kubernetes" {
  backend            = vault_auth_backend.kubernetes.path
  kubernetes_host    = var.kubernetes_host
  kubernetes_ca_cert = var.kubernetes_ca_cert
  token_reviewer_jwt = var.token_reviewer_jwt
}

I followed the documentation at https://www.vaultproject.io/docs/platform/k8s/helm/examples/kubernetes-auth to obtain the token_reviewer_jwt and kubernetes_ca_cert.

However, when running terraform apply, the token_reviewer_jwt errors out with:

 Error: error updating Kubernetes auth backend config "auth/kubernetes/config": Error making API request.
 URL: PUT [MASKED]/v1/auth/kubernetes/config
 Code: 500. Errors:
 * 1 error occurred:
 	* illegal base64 data at input byte 342
   on dev/main.tf line 33, in resource "vault_kubernetes_auth_backend_config" "kubernetes":
   33: resource "vault_kubernetes_auth_backend_config" "kubernetes" {

My token_reviewer_jwt variable is the output of cat /var/run/secrets/kubernetes.io/serviceaccount/token.

I’ve also tried base64-encoding the token_reviewer_jwt, but then terraform apply errors with:

 Error: error updating Kubernetes auth backend config "auth/kubernetes/config": Error making API request.
 URL: PUT [MASKED]/v1/auth/kubernetes/config
 Code: 500. Errors:
 * 1 error occurred:
 	* not a compact JWS
   on dev/main.tf line 33, in resource "vault_kubernetes_auth_backend_config" "kubernetes":
   33: resource "vault_kubernetes_auth_backend_config" "kubernetes" {
 ERROR: Job failed: command terminated with exit code 1

1 post - 1 participant


Can't set static IP for Ubuntu 18.04 with Terraform vSphere provider 1.17.3

Hello everyone,
I am using the Terraform vSphere provider v1.17.3, but I can’t set a static IP for an Ubuntu 18.04 VM.
Here are the VMware customization logs:

DEBUG: Opening /var/lock/vmware/gosc in O_CREAT|O_EXCL|O_WRONLY mode
INFO: Opening file name /tmp/.vmware-imgcust-dSMsUDU/cust.cfg.
DEBUG: Processing line: '[NETWORK]'
DEBUG: FOUND CATEGORY = NETWORK
DEBUG: Processing line: 'NETWORKING = yes'
DEBUG: ADDED KEY-VAL :: 'NETWORK|NETWORKING' = 'yes'
DEBUG: Processing line: 'BOOTPROTO = dhcp'
DEBUG: ADDED KEY-VAL :: 'NETWORK|BOOTPROTO' = 'dhcp'
DEBUG: Processing line: 'HOSTNAME = terraform0002'
DEBUG: ADDED KEY-VAL :: 'NETWORK|HOSTNAME' = 'terraform0002'
DEBUG: Processing line: 'DOMAINNAME = noc.test'
DEBUG: ADDED KEY-VAL :: 'NETWORK|DOMAINNAME' = 'noc.test'
DEBUG: Processing line: ''
DEBUG: Empty line. Ignored.
DEBUG: Processing line: '[NIC-CONFIG]'
DEBUG: FOUND CATEGORY = NIC-CONFIG
DEBUG: Processing line: 'NICS = NIC1'
DEBUG: ADDED KEY-VAL :: 'NIC-CONFIG|NICS' = 'NIC1'
DEBUG: Processing line: ''
DEBUG: Empty line. Ignored.
DEBUG: Processing line: '[NIC1]'
DEBUG: FOUND CATEGORY = NIC1
DEBUG: Processing line: 'MACADDR = 00:50:56:a6:53:fc'
DEBUG: ADDED KEY-VAL :: 'NIC1|MACADDR' = '00:50:56:a6:53:fc'
DEBUG: Processing line: 'PRIMARY = yes'
DEBUG: ADDED KEY-VAL :: 'NIC1|PRIMARY' = 'yes'
DEBUG: Processing line: 'ONBOOT = yes'
DEBUG: ADDED KEY-VAL :: 'NIC1|ONBOOT' = 'yes'
DEBUG: Processing line: 'IPv4_MODE = BACKWARDS_COMPATIBLE'
DEBUG: ADDED KEY-VAL :: 'NIC1|IPv4_MODE' = 'BACKWARDS_COMPATIBLE'
DEBUG: Processing line: 'BOOTPROTO = static'
DEBUG: ADDED KEY-VAL :: 'NIC1|BOOTPROTO' = 'static'
DEBUG: Processing line: 'IPADDR = 10.20.20.221'
DEBUG: ADDED KEY-VAL :: 'NIC1|IPADDR' = '10.20.20.221'
DEBUG: Processing line: 'NETMASK = 255.255.252.0'
DEBUG: ADDED KEY-VAL :: 'NIC1|NETMASK' = '255.255.252.0'
DEBUG: Processing line: 'GATEWAY = 10.20.20.1'
DEBUG: ADDED KEY-VAL :: 'NIC1|GATEWAY' = '10.20.20.1'
DEBUG: Processing line: ''
DEBUG: Empty line. Ignored.
DEBUG: Processing line: ''
DEBUG: Empty line. Ignored.
DEBUG: Processing line: '[DNS]'
DEBUG: FOUND CATEGORY = DNS
DEBUG: Processing line: 'DNSFROMDHCP=yes'
DEBUG: ADDED KEY-VAL :: 'DNS|DNSFROMDHCP' = 'yes'
DEBUG: Processing line: ''
DEBUG: Empty line. Ignored.
DEBUG: Processing line: ''
DEBUG: Empty line. Ignored.
DEBUG: Processing line: ''
DEBUG: Empty line. Ignored.
DEBUG: Processing line: '[DATETIME]'
DEBUG: FOUND CATEGORY = DATETIME
DEBUG: Processing line: 'UTC = yes'
DEBUG: ADDED KEY-VAL :: 'DATETIME|UTC' = 'yes'
DEBUG: Reading issue file ...
DEBUG: Command: 'cat /etc/issue'
DEBUG: Result: Ubuntu 18.04.1 LTS \n \l
INFO: Customizing NICS. { NIC1 }
INFO: Customizing NIC NIC1
DEBUG: Get interface name for MAC 00:50:56:a6:53:fc, via [ip addr show]
DEBUG: Command: 'whereis ip'
DEBUG: Result: ip: /bin/ip /sbin/ip /usr/share/man/man8/ip.8.gz /usr/share/man/man7/ip.7.gz
DEBUG: opening file /etc/hostname.
DEBUG: Match found   : Line = terraform0002
DEBUG: Actual String : terraform0002
INFO: OLD HOST NAME = terraform0002
INFO: Marker file exists or is undefined, pre-customization is not needed
INFO: Customizing Network settings ...
INFO: Erasing DHCP leases
DEBUG: Command: 'pkill dhclient'
DEBUG: Result:
DEBUG: Exit Code: 256
DEBUG: Command: 'rm -f /var/lib/dhcp/*'
DEBUG: Result:
DEBUG: Exit Code: 0
DEBUG: Check if command [hostnamectl] is available
INFO: Check if hostnamectl is available
DEBUG: Command: 'hostnamectl status 2>/tmp/guest.customization.stderr'
DEBUG: Result:
DEBUG: Exit Code: 256
DEBUG: Stderr: Failed to create bus connection: No such file or directory
DEBUG: Exit Code: 0
DEBUG: Command: '/bin/ip addr show'
DEBUG: Result: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:50:56:a6:53:fc brd ff:ff:ff:ff:ff:ff
DEBUG: Exit Code: 0
INFO: NIC suffix = eth0
INFO: Query config for ^(NIC1\|IPv6ADDR\|)
INFO: Query config for ^(NIC1\|IPv6NETMASK\|)
INFO: Configuring gateway from the primary NIC 'NIC1'.
INFO: Query config for ^NIC1(\|IPv6GATEWAY\|)
INFO: Query config for ^(DNS\|SUFFIX\|)
INFO: Query config for ^(DNS\|NAMESERVER\|)
DEBUG: opening file for writing (/etc/netplan/99-netcfg-vmware.yaml).
DEBUG: Command: 'chmod 644 /etc/netplan/99-netcfg-vmware.yaml'
DEBUG: Result:
DEBUG: Exit Code: 0
INFO: Apply Netplan Settings
DEBUG: Command: '/usr/sbin/netplan apply 2>&1'
DEBUG: Result:
DEBUG: Exit Code: 0
INFO: Customizing Hosts file ...
DEBUG: Old hostname=[terraform0002]
DEBUG: Old FQDN=[terraform0002.noc.test]
DEBUG: New hostname=[terraform0002]
DEBUG: Building FQDN. HostnameFQDN: terraform0002, Domainname: noc.test
DEBUG: New FQDN=[terraform0002.noc.test]
DEBUG: opening file /etc/hosts.
DEBUG: Line (inp): 127.0.0.1    localhost
DEBUG: Line (inp):
DEBUG: Line (inp): # The following lines are desirable for IPv6 capable hosts
DEBUG: Line (inp): ::1     localhost ip6-localhost ip6-loopback
DEBUG: Line (inp): ff02::1 ip6-allnodes
DEBUG: Line (inp): ff02::2 ip6-allrouters
DEBUG: Line (inp):
DEBUG: Line (inp):
DEBUG: Line (inp): 10.240.120.221       terraform0002.noc.test terraform0002

1 post - 1 participant


Constructing resource names within Terraform

I have a set of CNAME records to be maintained via the Cloudflare provider.

The set contains a subdomain and associated endpoint.

If I use count I end up with names like cname_record[0], cname_record[1], etc.

All looking good so far.

As long as we only ever add new CNAMEs to the end of the list, all is good. As soon as we delete one all hell breaks loose as the records after that are deleted and recreated and then we have DNS propagation issues, etc.

What I want to do is have:

resource "cloudflare_record" "cname_record_${var.cnames[count.index].subdomain}" {

But Terraform does not allow variable resource names.

One solution we have had to build is an external rendering system that takes a template and builds the .tf file (it has a BIG header saying it is a generated file, which is not ideal), so that each resource is uniquely named and not dependent on its position in the list. So if it is the api subdomain, the resource name is cname_record_api, regardless of whether it is the first or last record in the list.

I’ve tried to get my head around the template feature offered by Terraform, but it seems to be useful only for resource attribute values, not for the resource names themselves.

Am I missing a way to do this?
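In Terraform 0.12.6 and later, for_each on resources addresses exactly this: the instance key becomes the subdomain rather than a list position, so deleting one record does not shift the others. A sketch, assuming var.cnames is a list of objects with subdomain and endpoint attributes, plus a var.zone_id:

```hcl
resource "cloudflare_record" "cname_record" {
  for_each = { for r in var.cnames : r.subdomain => r }

  zone_id = var.zone_id
  name    = each.value.subdomain
  value   = each.value.endpoint
  type    = "CNAME"
}
```

Each instance is then addressed as cloudflare_record.cname_record["api"] instead of cname_record[0], which is stable under deletions.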

2 posts - 2 participants


Terraform azure data "azurerm_image" regex cannot get both 2 and 3 digits

I have the Terraform code below to look up the latest image, but it cannot match images that have both 2 and 3 digits after “b” in “name_regex”:

data "azurerm_image" "prod_image" {
  name_regex          = "^test-${var.os}-b\d+"
  resource_group_name = "${data.azurerm_resource_group.linux_rg.name}"
  sort_descending     = true
}

We have CentOS and Ubuntu images with build numbers like those below. The code above matches only 2 digits after “b” and returns the highest such number (e.g. “99” is the highest 2-digit build). How do I match both 2- and 3-digit builds and get the highest number?

Images names in storage are like below:

test-ubuntu-b1
.
.
test-ubuntu-b45
test-ubuntu-b46 – need to get this latest image

For centos we have like below 3 digits:

test-centos-b1
.
.
test-centos-b120
test-centos-b121 – need to get this latest image

How do I make the regex match both 2 and 3 digits?

Terraform version 0.11.14
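Two notes that may help: \d+ already matches any number of digits, so the regex is probably not the limiting factor, and as far as I can tell the data source sorts names lexicographically, which is why "b99" beats "b121". A bounded, anchored pattern at least keeps the match strict (a sketch):

```hcl
data "azurerm_image" "prod_image" {
  # \\d{1,3}$ matches 1- to 3-digit build numbers; note that sorting is by
  # name, so "b99" still sorts above "b121" unless build numbers are
  # zero-padded at publish time (e.g. test-centos-b046, test-centos-b121).
  name_regex          = "^test-${var.os}-b\\d{1,3}$"
  resource_group_name = "${data.azurerm_resource_group.linux_rg.name}"
  sort_descending     = true
}
```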

1 post - 1 participant


Instance replacement creates new instance before destroying old one

I have an issue that I cannot explain and I am pretty sure that things used to work.

I have an aws_instance resource. When updating the AMI, it gets replaced. But instead of destroying the previous instance and then creating the new one, it creates the new instance first and leaves the destroy for the end.

This causes issues with EBS volume attachments that did not happen a couple of minor versions ago. When the instance that is meant to be destroyed is destroyed first, the attachment (skip_destroy = true) goes with it, and the volume can be re-attached to the new instance afterwards.

Funnily enough, I have a second, similar type of instance in the same stack that behaves as I expect and gets destroyed before being created.

In all of this there is no create_before_destroy lifecycle directive involved.

Any tips?

3 posts - 2 participants



Combine information from two resources

I need to pass information from my web module to my app_gateway module to associate each NIC to a backend pool, and to use the VM names for overriding host names.

I have been able to do the latter by just doing the following:

output "vm_list" {
	value = azurerm_virtual_machine.main
}

This outputs the list of 3 VMs, which I can then pass as a variable into the other module. Is there a way I can combine the NIC information with the VM information to create a single mapped output?

main.tf

resource "azurerm_network_interface" "main" {
  count               = var.vm_count
  name                = "${format("${local.resource_prefix}%03d", count.index)}-nic"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "ipconfig"
    subnet_id                     = var.azurerm_subnet_id
    private_ip_address_allocation = "Dynamic"
  }

  tags = local.common_tags
}

resource "azurerm_virtual_machine" "main" {
  count                 = var.vm_count
  name                  = format("${local.resource_prefix}%03s", count.index)
  location              = azurerm_resource_group.main.location
  resource_group_name   = azurerm_resource_group.main.name
  network_interface_ids = [element(azurerm_network_interface.main.*.id, count.index)]
  vm_size               = var.azurerm_vm_size
  availability_set_id   = azurerm_availability_set.main.id
}

I want to group VM name, VM id, NIC id, and ipaddress together as an output. Is this possible?
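Since both resources use the same count, a for expression over the index can merge them into one map (a sketch, keyed by VM name):

```hcl
output "vm_list" {
  value = {
    for i, vm in azurerm_virtual_machine.main : vm.name => {
      vm_id      = vm.id
      nic_id     = azurerm_network_interface.main[i].id
      ip_address = azurerm_network_interface.main[i].private_ip_address
    }
  }
}
```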

3 posts - 2 participants


Need help creating multiple AZ resources

I need to create resources based on the names and numbers below, using ‘name’ for the name of each resource and ‘num’ for the virtual network’s IP scope, as one of the octets in the IP address.

This does not work:

variable "names" {
    default     = [
        {    
            name    = "name1"
            num     = 1
        },
        {
            name    = "name2"
            num     = 2
        },
    ]
}

resource "azurerm_resource_group" "rg-customers" {
  for_each  = var.names
  name      = "TEST-rg-${var.names[each.key].name}"
  location  = "East US 2"
}
resource "azurerm_virtual_network" "vnet-customers" {
  for_each            = (var.names)
  name                 = "vnet-${each.value.name}"
  location             = azurerm_resource_group.rg-customers[each.key].location
  resource_group_name = azurerm_resource_group.rg-customers[each.key].name
  address_space       = ["10.${each.value.num}.0.0/16"]
  dns_servers         = ["10.${each.value.num}.0.4/16", "10.${each.value.num}.0.5/16"]
}
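for_each only accepts a map or a set of strings, so the list of objects likely needs converting first; a sketch of one way to do that, keyed by name:

```hcl
resource "azurerm_resource_group" "rg-customers" {
  for_each = { for n in var.names : n.name => n }
  name     = "TEST-rg-${each.value.name}"
  location = "East US 2"
}

resource "azurerm_virtual_network" "vnet-customers" {
  for_each            = { for n in var.names : n.name => n }
  name                = "vnet-${each.value.name}"
  location            = azurerm_resource_group.rg-customers[each.key].location
  resource_group_name = azurerm_resource_group.rg-customers[each.key].name
  address_space       = ["10.${each.value.num}.0.0/16"]
  # DNS server entries are plain addresses, without a CIDR suffix
  dns_servers         = ["10.${each.value.num}.0.4", "10.${each.value.num}.0.5"]
}
```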

4 posts - 2 participants


Azure_application_gateway rewrite rule with block condition... then

Loading list variable on OpenStack secgroups

Hi guys,

I’m working on a Terraform definition for OpenStack and came across an issue for which I cannot find a solution so far.

I’ve declared a map variable whose objects include a list attribute called secgroups:

variable "instances" {
  description = "instances to be deployed"
      type        = map(object({
        ufqdn     = string
        fqdn      = string
        flavor    = string
        image     = string
        disk2     = number
        zone      = string
        ip        = string
        secgroups  = list(string)
      }))
      default = {
      "os1000" = {
        ufqdn     = "os1000.domain.com"
        fqdn      = "os1000.domain.com"
        flavor    = "1000" 
        image     = "43369c0b-...."
        disk2     = 1
        zone      = "BLAH"
        ip        = "192.168.0.100"
        secgroups  = [
                                "data.terraform_remote_state.network.outputs.secgroup_prod", 
                                "data.terraform_remote_state.network.outputs.secgroup_default", 
                                "data.terraform_remote_state.network.outputs.secgroup_global_www"
                                ]
      },
      ...

The secgroup IDs are obtained from a remote state file; the remote state data source and port definitions are shown below:

data "terraform_remote_state" "network" {
  backend = "local"
  config = {
     path  = "../../network/terraform.tfstate"
  }
}
...

resource "openstack_networking_port_v2" "port_instance" {
  for_each           = var.instances
  name               = "port-${each.value.ufqdn}"
  network_id         = data.terraform_remote_state.network.outputs.network_id
  security_group_ids =  each.value.secgroups 
...
}

Whenever I try to apply the definitions, I get the error below:

Error: Error updating OpenStack Neutron Port: Bad request with: [PUT 
https://openstack.000.com/v2.0/ports/0e28b16d-49ba-4994-8bbb-da1c797952e2], error 
message: {"NeutronError": {"message": "Invalid input for operation: 
'data.terraform_remote_state.network.outputs.secgroup_default' is not an integer or 
uuid.", "type": "InvalidInput", "detail": ""}}

on main.tf line 20, in resource "openstack_networking_port_v2" "port_instance":
20: resource "openstack_networking_port_v2" "port_instance" {

When I check the secgroups associated with the port on Openstack, it turns out that only the first secgroup is applied.

The apply command works fine when I set the secgroups directly in the code (rather than via the variable):

resource "openstack_networking_port_v2" "port_instance" {
....
security_group_ids = [ 
                    "${data.terraform_remote_state.network.outputs.secgroup_prod}", 
                    "${data.terraform_remote_state.network.outputs.secgroup_default}", 
                    "${data.terraform_remote_state.network.outputs.secgroup_global_www}"
                   ]
...

I’ve tried different approaches, but no luck so far. Any ideas about what I’m doing wrong?
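The quoted entries in the default are likely the problem: variable defaults must be constant values, so "data.terraform_remote_state.network.outputs.secgroup_prod" is sent to OpenStack as a literal string rather than being evaluated (exactly what the Neutron error says). One way around it (a sketch) is to keep symbolic names in the variable and resolve them in a local:

```hcl
locals {
  # Map symbolic names to real IDs from the remote state
  secgroup_ids = {
    prod       = data.terraform_remote_state.network.outputs.secgroup_prod
    default    = data.terraform_remote_state.network.outputs.secgroup_default
    global_www = data.terraform_remote_state.network.outputs.secgroup_global_www
  }
}

resource "openstack_networking_port_v2" "port_instance" {
  for_each   = var.instances
  name       = "port-${each.value.ufqdn}"
  network_id = data.terraform_remote_state.network.outputs.network_id

  # instances then list symbolic names, e.g. secgroups = ["prod", "default"]
  security_group_ids = [for sg in each.value.secgroups : local.secgroup_ids[sg]]
}
```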

Thanks in advance.

1 post - 1 participant


SSL Certificates key_vault_secret_id not recognised

Hi. I’m trying to install a certificate into an Application Gateway.
Following the documentation, I have used key_vault_secret_id in the ssl_certificate block:
ssl_certificate {
  name                = var.pfx_certificate_name
  key_vault_secret_id = "https://[redacted]"
  password            = data.azurerm_key_vault_secret.cert-password.value
}
but I am getting error messages around this config:

The argument "data" is required, but no definition was found.
An argument named "key_vault_secret_id" is not expected here.

This is confusing, as the documentation states that the ssl_certificate block makes data optional if key_vault_secret_id is set. What am I doing wrong?
I am using the following versions:

Terraform v0.12.26

  • provider.azuread v0.8.0
  • provider.azurerm v1.44.0
  • provider.null v2.1.2
  • provider.random v2.2.1
  • provider.template v2.1.2

1 post - 1 participant

