Channel: Terraform - HashiCorp Discuss
Viewing all 11435 articles

Quick easy one using multiple variables to form a name


@RussellMaycock wrote:

Hi,
Just a quick one; hopefully an easy question to answer.

Before 0.12 I could write
name = "{${var.storageaccountname}${var.env}}"
and it would give me the name as two variables e.g. storageaccrussproduction

How do I do this in 0.12? I’ve tried

name = var.storageaccountnamevar.env

and a few other arrangements, but can’t get it right.

Thanks
Russ
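For later readers: string interpolation still works inside quoted strings in 0.12; only bare variable references dropped the `${…}` wrapper. A minimal sketch using the variable names from the question:

```hcl
variable "storageaccountname" {
  default = "storageaccruss"
}

variable "env" {
  default = "production"
}

locals {
  # 0.12 still supports interpolation inside quoted strings:
  name_interpolated = "${var.storageaccountname}${var.env}"

  # format() is an equivalent alternative:
  name_formatted = format("%s%s", var.storageaccountname, var.env)
}
```

Both locals evaluate to storageaccrussproduction.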

Posts: 2

Participants: 2

Read full topic


Terraform apply scopes and namespacing - an alternative to boolean resource counts for flagging


@SudoBasher wrote:

Hey folks, first-time poster here on the forum. I’m trying to figure out how to apply some sort of broad-stroke build flags to my Terraform builds. Right now I have two separate folders of Terraform: one sets up a provisioning environment for building with Packer (it creates some IAM pieces that I can then attach to Packer as it builds AMIs), and the other is a much larger environment build of the VPC, nodes, DBs, etc. After I build the provisioning environment, I need to copy that iam.tf file into the environment build folder so that the IAM roles stay intact when I build the rest of the environment. But copying the file seems weird; cleaner code would reuse the same file and enable runtime flags.

So I started out using basic boolean vars to trigger whether certain portions of my environment would build (via the resource count attribute), but I’d like to use tags, or another namespace-type approach, for selectively provisioning certain elements of the environment build, so that I don’t need the count attribute and all the [index] syntax. That syntax is just unnecessary when lots of things are single items. Basically, I want to set a runtime var to true; if it’s true, certain portions of the environment build are created, and otherwise they aren’t, all without the resource count attribute.

I took a look through all the docs but couldn’t find anything to support this. Maybe I’m blind, so I’m posting here to see if someone has a strategy I haven’t learned yet.

Thank you very much!
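A common 0.12-era pattern for this (a hedged sketch; the resource and variable names below are hypothetical, not from the post) keeps the conditional count but hides the [0] indexing behind a local, so the rest of the configuration reads as if the item were singular:

```hcl
variable "create_iam" {
  description = "When true, build the Packer provisioning IAM pieces"
  type        = bool
  default     = false
}

resource "aws_iam_role" "packer" {
  count = var.create_iam ? 1 : 0
  name  = "packer-build-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

locals {
  # Downstream code references local.packer_role_arn and never
  # touches the [index] syntax directly.
  packer_role_arn = var.create_iam ? aws_iam_role.packer[0].arn : null
}
```

This doesn't remove count entirely, but it confines the index syntax to a single line per flagged resource.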

Posts: 2

Participants: 2

Read full topic

Terraform Cloud: Check if an API-driven run is complete


@jeffreznik wrote:

I’m using the Terraform Cloud API to create configuration versions and upload the tar.gz files to the upload URL as laid out in https://www.terraform.io/docs/cloud/run/api.html. I have auto-apply enabled so the run should complete on its own, after which I need to grab an output value.

What is the easiest (or “correct”) way to know when I can GET /workspaces/:workspace_id/current-state-version?include=outputs to obtain the output value?

I assume I need to poll some endpoint to determine the status of the run, and I already have a configuration-id from the earlier call to POST /workspaces/:workspace_id/configuration-versions, but there doesn’t seem to be an API that will give me the status of a run from that ID.

Do I need to poll GET /workspaces/:workspace_id/runs and filter the list by configuration-version? Unfortunately, this requires needing to deal with pagination once the runs exceed whatever the max page size is.

There must be a better way?

Posts: 1

Participants: 1

Read full topic

https://www.terraform.io/docs/providers/azurerm/r/hdinsight_hadoop_cluster.html


@mridul0709 wrote:

We would like to find the module options under "azurerm_hdinsight_hadoop_cluster" with which we can enable security in the cluster.

This security feature can be enabled with the "Enterprise security package" option in the Azure portal, as can also be seen via the link below.

I have gone through the official Terraform documentation page (https://www.terraform.io/docs/providers/azurerm/r/hdinsight_hadoop_cluster.html) and I cannot find the options.

Also, please note we were able to achieve this using the Azure CLI, but we are exploring a way to implement it using Terraform.

az hdinsight create --esp -t hadoop -g RG-US-hadoop -n xxxx -p "xxxx" --version 4.0 --storage-account xxxx --subnet "/subscriptions/xxxx/resourceGroups/RG-US-hadoop/providers/Microsoft.Network/virtualNetworks/aadds-vnet/subnets/aadds-subnet" --domain "/subscriptions/xxxx/resourceGroups/RG-US-hadoop/providers/Microsoft.AAD/domainServices/xxxx.xxx" --assign-identity "/subscriptions/xxxx/resourceGroups/RG-US-hadoop/providers/Microsoft.ManagedIdentity/userAssignedIdentities/hdinsights_mi" --cluster-admin-account xxxx@xxx.xxx --cluster-users-group-dns xxxx

Posts: 1

Participants: 1

Read full topic

Terraform output not being shown


@pkaramol wrote:

I am declaring the following output in a TF module in the output.tf file:

output "jenkins_username" {
  value       = "${local.jenkins_username}"
  description = "Jenkins admin username"
  #sensitive   = true
}


output "jenkins_password" {
  value       = "${local.jenkins_password}"
  description = "Jenkins admin password"
  #sensitive   = true
}

The corresponding locals have been declared in main.tf as follows:

locals {
  jenkins_username = "${var.jenkins_username == "" ? random_string.j_username.result : var.jenkins_username}"
  jenkins_password = "${var.jenkins_password == "" ? random_string.j_password.result : var.jenkins_password}"
}

However, after the apply has finished, I see no relevant output, and what is more, it is not displayed even when I call the explicit output command:

$ terraform output jenkins_password

The output variable requested could not be found in the state
file. If you recently added this to your configuration, be
sure to run `terraform apply`, since the state won't be updated
with new output variables until that command is run.

Using Terraform 0.11.14
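A likely cause, assuming these outputs live in a child module rather than the root module: in Terraform 0.11, a child module's outputs are not exposed by terraform output at the root; they must be re-exported from the root module. A sketch, assuming the module is instantiated under a hypothetical name "jenkins":

```hcl
# Root module: re-export the child module's outputs so
# `terraform output jenkins_password` can find them in state.
output "jenkins_username" {
  value = "${module.jenkins.jenkins_username}"
}

output "jenkins_password" {
  value = "${module.jenkins.jenkins_password}"
}
```

After adding these, another terraform apply is needed before the values appear in state.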

Posts: 1

Participants: 1

Read full topic

How can I set a tag for the routing table in AWS


@Ka-Di wrote:

How can I set a tag for the routing table that was created during the creation of the VPC? I would like to set a Name tag for the default routing table, and I would prefer not to create a new routing table, if possible.

thx
Ka-Di
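The AWS provider has an aws_default_route_table resource that adopts the VPC's automatically created main route table instead of creating a new one, so it can be tagged in place. A minimal sketch (the aws_vpc.main resource name is an assumption):

```hcl
resource "aws_default_route_table" "main" {
  # Adopts the route table AWS created with the VPC; no new table is made.
  default_route_table_id = aws_vpc.main.default_route_table_id

  tags = {
    Name = "main-default-route-table"
  }
}
```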

Posts: 2

Participants: 2

Read full topic

What is the tentative release date of Terraform 0.12.20?

Error: rpc error: code = Unavailable desc = transport is closing


@npatilCTP wrote:

Hello,
We started seeing this error in our dev env builds this morning.
The same branch of TF code works fine in another int env (different state file).

Error: rpc error: code = Canceled desc = context canceled
Error: rpc error: code = Unavailable desc = transport is closing
Error: rpc error: code = Unavailable desc = transport is closing

There is no crash.log created in this case.

Tried setting TF_LOG=DEBUG, but the output was not of much help.

Terraform version: 0.12.17
AWS provider version: 2.45

Any tips on how to troubleshoot further, or clues as to why this is happening?

Posts: 1

Participants: 1

Read full topic


Submitted RPM package for Fedora x86_64


@adamzerella wrote:

Probably a bit irrelevant, but I wanted to notify somebody on the Terraform team that I have submitted version 0.12.19 of the x86_64 Linux binary to the Fedora project.

One could argue this is a bit moot, since one just needs to download the binary and setup is quite easy, but as an added convenience to some, and as a learning exercise, I figured it can’t hurt. I intend to maintain the RPM package and keep it up to date if people are interested.

Submission: https://bugzilla.redhat.com/show_bug.cgi?id=1794230
Spec repo: https://github.com/adamzerella/terraform-rpm

Hopefully soon I can just do dnf install terraform :slight_smile:

Posts: 1

Participants: 1

Read full topic

For-Loop fails if just one object is given


@tiwood wrote:

Hi :wave:t4:

I’m currently trying the following:

  1. Requesting allocated subnets for a network using the HTTP data source (REST call)
  2. Using jsondecode to convert the response body
  3. Convert the response to a list to use it in the resource

This is my code:

locals {
 assigned_subnets_list = jsondecode(data.http.assigned_subnets.body) == null ? [] : [ for subnet in jsondecode(data.http.assigned_subnets.body) : subnet.CIDR ]
}

data "http" "assigned_subnets" {
  url = "http://127.0.0.1/api/v1/subnets/?WorkloadName=FOOBAR&WorkloadEnvironment=PROD&select=cidr"
}

Depending on the API response, this either works or fails:

Working response (array/multiple subnets):

[
  {
    "CIDR": "172.28.0.0/27"
  },
  {
    "CIDR": "172.28.0.32/27"
  }
]

Not working if just one CIDR is returned:

{
  "CIDR": "172.30.1.128/27"
}

This is the error I’m getting if the API returns 1 object:

Error: Unsupported attribute

  on main.tf line 71, in locals:
  71:    assigned_subnets_list = jsondecode(data.http.assigned_subnets.body) == null ? [] : [ for subnet in jsondecode(data.http.assigned_subnets.body) : subnet.CIDR ]

This value does not have any attributes.

Any ideas how I can fix this?

Thanks in advance!
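One way to handle this (a sketch; it relies on try(), which requires Terraform 0.12.20 or later) is to normalize the decoded value into a list before iterating, wrapping a single JSON object into a one-element list:

```hcl
locals {
  decoded = jsondecode(data.http.assigned_subnets.body)

  # If the API returned a JSON array, pass it through unchanged;
  # if it returned a single object, tolist() fails and try() falls
  # back to wrapping the object in a one-element list.
  subnets = try(tolist(local.decoded), [local.decoded])

  assigned_subnets_list = [for subnet in local.subnets : subnet.CIDR]
}
```

If the response body can be null, keep a null guard as in the original code before the try().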

Posts: 1

Participants: 1

Read full topic

Help on "Warning: External references from destroy provisioners are deprecated"


@Edwin-Pau wrote:

Hello, I ran into this warning message telling me the use of variables inside a destroy provisioner is deprecated. Is there a way around this? Basically I want a destroy-time provisioner to run which will winrm into my domain controller to run a script that will unjoin a workstation from the domain group.

I have the following provisioner block inside a resource "google_compute_instance" "workstation":

[provisioner block attached as an image in the original post]
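The usual workaround (a hedged sketch; the resource, variable, and script names below are placeholders, not from the post) is to copy everything the destroy provisioner needs into the triggers of a null_resource and reference it via self, which is still allowed at destroy time:

```hcl
resource "null_resource" "domain_unjoin" {
  # Copy the values the destroy provisioner needs into triggers;
  # self.triggers.* references are permitted in destroy provisioners.
  triggers = {
    workstation_name = google_compute_instance.workstation.name
    dc_host          = var.domain_controller_host
  }

  provisioner "local-exec" {
    when    = destroy
    command = "./unjoin-workstation.sh ${self.triggers.dc_host} ${self.triggers.workstation_name}"
  }
}
```

The same pattern applies to a winrm-based remote-exec connection: put the connection details in triggers and reference them through self.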

Posts: 1

Participants: 1

Read full topic

Reuse the same resources in multiple environments (bastion host ec2, SG)


@sulemanb wrote:

Hi All,

I have two environments (dev, qa), each with its own set of (similar) resources. However, in my dev environment I have set up a single bastion host (EC2 in a public subnet), which I want my QA resources to reuse as well. I don’t want to set up a separate bastion host for QA; rather, I want to share it across both environments (dev, QA).

Both environments and all their resources are in the same AWS VPC.

I have a single Terraform stack, and I maintain environment-specific "terraform.tfvars" files to provision the resources for each environment through a GitLab CI/CD pipeline (dev pipeline, qa pipeline).

I have a bool variable "deploy_bastion", whose value I set to "true" in the dev environment (terraform.tfvars) and "false" in QA, respectively.

I do terraform apply in my dev pipeline, where I provision all the resources, including the bastion host. Then I do terraform apply for my QA environment, but there I don’t want the bastion host resources to be recreated, and I also don’t want a new bastion host SG.

I am handling the setup of the bastion host through the count parameter.

here is my code for "bastion-host-autoscaling.tf"

########################################
resource "aws_launch_configuration" "bastion-host" {

  ##
  count           = var.deploy_bastion ? 1 : 0

  name_prefix     = var.bastion_host_launch_configuration_name
  image_id        = var.amis[var.aws_region]
  instance_type   = var.bastion_host_instance_type
  key_name        = aws_key_pair.public_key.key_name
  security_groups = [aws_security_group.bastion-host[count.index].id]
}

resource "aws_autoscaling_group" "bastion-host" {

  ##
  count      = var.deploy_bastion ? 1 : 0
  
  name                      = var.bastion_host_autoscaling_group_name
  vpc_zone_identifier       = [var.x_eks_public_subnet_1, var.x_eks_public_subnet_2]
  launch_configuration      = aws_launch_configuration.bastion-host[count.index].name
  min_size                  = var.deploy_bastion ? 1 : 0
  max_size                  = var.deploy_bastion ? 2 : 0
  health_check_grace_period = 300
  health_check_type         = "EC2"
  force_delete              = true

  tag {
    key                 = "Name"
    value               = var.bastion_host_autoscaling_group_tag_name
    propagate_at_launch = true
  }
}

########################################

and here is my code for "bastion-host-autoscalingpolicy.tf"

########################################

# scale up alarm

resource "aws_autoscaling_policy" "bastion-host-cpu-policy" {

  ##
  count      = var.deploy_bastion ? 1 : 0

  name                   = "bastion-host-xxx-cpu-policy"
  autoscaling_group_name = aws_autoscaling_group.bastion-host[count.index].name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "bastion-host-cpu-alarm" {

  ##
  count               = var.deploy_bastion ? 1 : 0

  alarm_name          = "bastion-host-x-cpu-alarm"
  alarm_description   = "bastion-host-x-cpu-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "30"

  dimensions = {
    "AutoScalingGroupName" = aws_autoscaling_group.bastion-host[count.index].name
  }

  actions_enabled = true
  alarm_actions   = [aws_autoscaling_policy.bastion-host-cpu-policy[count.index].arn]
}

# scale down alarm
resource "aws_autoscaling_policy" "bastion-host-cpu-policy-scaledown" {

  ##
  count      = var.deploy_bastion ? 1 : 0

  name                   = "bastion-host-x-cpu-policy-scaledown"
  autoscaling_group_name = aws_autoscaling_group.bastion-host[count.index].name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "-1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "bastion-host-cpu-alarm-scaledown" {
  ##
  count      = var.deploy_bastion ? 1 : 0

  alarm_name          = "bastion-host-x-cpu-alarm-scaledown"
  alarm_description   = "bastion-host-x-cpu-alarm-scaledown"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "5"

  dimensions = {
    "AutoScalingGroupName" = aws_autoscaling_group.bastion-host[count.index].name
  }

  actions_enabled = true
  alarm_actions   = [aws_autoscaling_policy.bastion-host-cpu-policy-scaledown[count.index].arn]
}

########################################

I think the above code is fine; however, the problem comes when I apply SecurityGroups.tf (see below):

########################################

resource "aws_security_group" "bastion-host" {
  ##
  count = var.deploy_bastion ? 1 : 0

  vpc_id      = var.x_eks_dev_vpc
  name        = var.bastion_host_security_group_name
  description = var.bastion_host_security_group_description

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "all internet"
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.xxx, var.xxx]
    description = "xxx-proxy"
  }

  tags = {
    Name = var.bastion_host_security_group_tag_name
  }
}

resource "aws_security_group" "server-host" {

  ##
  #count = length(aws_security_group.bastion-host)

  vpc_id      = var.x_eks_dev_vpc
  name        = var.server_host_security_group_name
  description = var.server_host_security_group_description

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion-host.id] # allowing access from our bastion-host-x-instance
    self            = true
  }

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    #cidr_blocks    = [var.xxx, var.xxx] # load balance tbd
    cidr_blocks     = [var.k8s_workernode_subnet_range_1, var.k8s_workernode_subnet_range_2, var.k8s_workernode_subnet_range_3]
    security_groups = [aws_security_group.elb-server-host.id]
    self            = true
  }

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    cidr_blocks     = [var.k8s_workernode_subnet_range_1, var.k8s_workernode_subnet_range_2, var.k8s_workernode_subnet_range_3]
    security_groups = [aws_security_group.elb-server-host.id]
    self            = true
  }

  ingress {
    from_port       = 9142
    to_port         = 9142
    protocol        = "tcp"
    cidr_blocks     = [var.k8s_workernode_subnet_range_1, var.k8s_workernode_subnet_range_2, var.k8s_workernode_subnet_range_3]
    security_groups = [aws_security_group.elb-server-host.id]
    self            = true
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    self        = true
  }

  tags = {
    Name = var.server_host_security_group_tag_name
  }
}

########################################

I am trying to stop the creation of the SG the second time around, for the QA environment; see the count inside "aws_security_group" "bastion-host" above.

However, the problem then comes in the resource "aws_security_group" "server-host", which I do want to provision separately for the QA environment, but while reusing the SG of the bastion host that was created for the dev environment.

The "#count = length(aws_security_group.bastion-host)" in "server-host" above makes no sense, because I do want to create it separately for the QA environment, whereas the bastion-host count is 0 in that case. It also makes no sense to me to put "security_groups = [aws_security_group.bastion-host[count.index].id]" in the code above.

What I am trying to do, somehow, is to use the SG id of the "dev" bastion host while setting the SG of the QA "server-host".

How I can reference the dev bastion SG id here (in the QA run) is my problem:
security_groups = [aws_security_group.bastion-host.id] # DEV?

Or, looking at the entire code above, let me know whether I am approaching the problem the right way, and if so, how I can fix this last riddle.

Thanks.
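One option (a sketch, under the assumption that both environments share the same VPC and the bastion SG keeps a stable name): in the QA run, look up the dev-created security group with a data source instead of referencing the count = 0 resource:

```hcl
# QA run: read the SG that the dev pipeline created, by name.
data "aws_security_group" "dev_bastion" {
  vpc_id = var.x_eks_dev_vpc

  filter {
    name   = "group-name"
    values = [var.bastion_host_security_group_name]
  }
}

# In "server-host", reference the looked-up id instead of the resource:
#   security_groups = [data.aws_security_group.dev_bastion.id]
```

The data source fails if the dev pipeline has not run yet, so the pipelines stay ordered dev-then-qa; an alternative with similar tradeoffs is reading the dev state via terraform_remote_state.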

Posts: 3

Participants: 2

Read full topic

AWS EKS Fargate


@bizza wrote:

Hello,
I like the introduction of the aws_eks_fargate_profile resource, but you cannot create a Fargate-ready EKS cluster: the kube-system pods are missing. In other words, there is no Terraform resource that executes this command:

eksctl create cluster --name my-cluster --version 1.14 --fargate

Thank you
Daniele

Posts: 1

Participants: 1

Read full topic

Redis in Terraform


@jmccauley-mandmdirec wrote:

Hi all,

Trying to build a Redis instance in Google. Having some issues with the authorized network argument.

So my argument looks like this:

authorized_network = "${element(data.terraform_remote_state.network.network.self_link, 0)}"

But when running a plan / apply, I get the following output:

google_redis_instance.redis: At column 3, line 1: element: argument 1 should be type list, got type string in:

${element(data.terraform_remote_state.network.network.self_link ,0)}

It sounds like it might need an array somewhere, but I have no idea where. I have tried lots of different things.

Cheers.
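If the remote-state output already holds a single self_link string (which the error message suggests), element() is unnecessary; referencing the value directly should work. A sketch in the 0.11-style syntax used in the post:

```hcl
# element() expects a list; a plain string output is referenced directly:
authorized_network = "${data.terraform_remote_state.network.network.self_link}"
```

element() is only needed when the output is a list of networks and one entry must be picked.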

Posts: 1

Participants: 1

Read full topic

Iterate over diff of variable


@mhumeSF wrote:

I’d like to perform an action based on values newly added to a list variable. So given a list of [1,2], when 3 is added, take action on 3. But I’d like to ignore changes when values are removed.

Is this possible?

Posts: 2

Participants: 2

Read full topic


Get private IPs from ENIs of NLB?


@80kk wrote:

Hi,
This is my issue, as opened on GitHub:

What I want to do is to create target group attachment. I have tried this:

data "aws_network_interface" "sftp-nlb" {
  for_each = var.private_subnet_ids

  filter {
    name   = "description"
    values = ["ELB ${aws_lb.sftp-nlb.arn_suffix}"]
  }

  filter {
    name   = "private_subnet_ids"
    values = [each.value]
  }
}

resource "aws_alb_target_group_attachment" "tg_attachment" {
  vpc_id           = var.vpc_id
  target_group_arn = aws_lb_target_group.sftp-nlb-target-group.arn
  target_id        = formatlist("%s/32", [for eni in data.aws_network_interface.sftp-nlb : eni.private_ip])  
  port             = 22
}

but that gives me:

Error: Incorrect attribute value type

 on modules/sftp/main.tf line 135, in resource "aws_alb_target_group_attachment" "tg_attachment":
135:   target_id        = formatlist("%s/32", [for eni in data.aws_network_interface.sftp-nlb : eni.private_ip])

Inappropriate value for attribute “target_id”: string required.
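Since aws_lb_target_group_attachment takes a single target_id string, one sketch (assuming the data source's for_each is kept; note the second filter would likely need the EC2 filter name subnet-id rather than private_subnet_ids) is to create one attachment per ENI:

```hcl
resource "aws_lb_target_group_attachment" "tg_attachment" {
  # One attachment per discovered ENI instead of one attachment
  # with a list-valued target_id.
  for_each = data.aws_network_interface.sftp-nlb

  target_group_arn = aws_lb_target_group.sftp-nlb-target-group.arn
  target_id        = each.value.private_ip
  port             = 22
}
```

With an ip-type target group, target_id is the plain IP address, so the formatlist("%s/32", …) suffix is not needed.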

Posts: 2

Participants: 2

Read full topic

For_each to iterate over distinct network_interface_ids


@aghori1 wrote:

Hello,

I have recently upgraded to TF 0.12.19 and replaced count with for_each, as I wanted to build multiple VMs with unique configuration items. Here is part of the code that I modified to use the loop:

resource "azurerm_virtual_machine" "ubuntu" {
  for_each                      = var.u_name
  name                          = each.key
  location                      = var.vm_location
  resource_group_name           = var.rg_vm
  network_interface_ids         = values(azurerm_network_interface.unic).*.id
  delete_os_disk_on_termination = var.os_disk_attr["os_disk_delete"]
  vm_size                       = each.value
  # ... (rest of the resource omitted in the original post)
}

u_name as defined in the variables.tf file:

variable "u_name" {
  description = "Defines the VM sizes"
  type        = map(string)
  default = {
    dmp-nifi-prod-u-vm = "Standard_D2s_v3"
    dmp-es-prod-u-vm   = "Standard_D2s_v3"
    dmp-kfk-prod-u-vm  = "Standard_B2s"
    dmp-utl-prod-u-vm  = "Standard_B2s"
  }
}

With network_interface_ids = values(azurerm_network_interface.unic).*.id, TF tries to add 4 NICs per VM. I want it to add one NIC per VM. I have referenced the terraform page that describes the use of for_each and values, but I have been unable to make it work.

I will need to use it with the local-exec provisioner as well to pick private IP addresses one at a time to run an Ansible playbook against. This is what I have in the provisioner file:

resource "null_resource" "Ansible4Ubuntu" {
  for_each = var.u_name
  depends_on = [
    azurerm_virtual_machine.ubuntu,
    azurerm_network_interface.unic,
  ]
  #triggers = {
  #  network_interface_ids = values(azurerm_network_interface.unic).*.id
  #  network_interface_ids = join(",", azurerm_network_interface.unic.*.id)
  #}
  provisioner "local-exec" {
    command = "sleep 5 ; ansible-playbook -i ${values(azurerm_network_interface.unic).*.id} vmlinux-playbook.yml"
  }
}

Any help would be appreciated.

Thanks
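A sketch of the one-NIC-per-VM wiring, assuming one azurerm_network_interface.unic is created per VM with for_each over the same var.u_name keys: index the NIC map by each.key instead of taking all its values:

```hcl
resource "azurerm_virtual_machine" "ubuntu" {
  for_each            = var.u_name
  name                = each.key
  location            = var.vm_location
  resource_group_name = var.rg_vm
  vm_size             = each.value

  # Pick only this VM's NIC by sharing the for_each key:
  network_interface_ids = [azurerm_network_interface.unic[each.key].id]
}
```

The same idea applies to the provisioner: inside a for_each null_resource, azurerm_network_interface.unic[each.key].private_ip_address yields one address at a time for the Ansible inventory.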

Posts: 2

Participants: 2

Read full topic

Specify Source Module directory


@vikas027 wrote:

I source a git module as below in my Terraform 0.12 code

module "autospotting" {
  source                                    = "github.com/autospotting/terraform-aws-autospotting?ref=0.1.1"
  ...
  ...
}

At the time of terraform initialization, the directory structure below gets created. Is there a way to move this into another directory and specify that with terraform init?

Posts: 3

Participants: 2

Read full topic

Adding more than one instance to a target group attachment


@gurdeepsira wrote:

Hi,
I am using "aws_lb_target_group_attachment" in my Terraform code. I provision several (EC2) instances in my code. Is there a way I can add all those instances to the target group attachment? I have seen code like:

"${aws_instance.test.*.id}"

Does this add all the instances created in the resource, e.g. all 3 if I specify 3?
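A splat alone won't do it, since each aws_lb_target_group_attachment attaches exactly one target; the usual sketch (resource names hypothetical) loops the attachment itself over the instances with count:

```hcl
resource "aws_lb_target_group_attachment" "test" {
  # One attachment per EC2 instance created by aws_instance.test.
  count = length(aws_instance.test)

  target_group_arn = aws_lb_target_group.test.arn
  target_id        = aws_instance.test[count.index].id
  port             = 80
}
```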

Posts: 1

Participants: 1

Read full topic

How to output the hostnames of app services created with for_each in terraform?


@MarkKharitonov wrote:

I have the following terraform module:

provider "azurerm" {
}

variable "env" {
    type = string
    description = "The SDLC environment (qa, dev, prod, etc...)"
}

variable "appsvc_names" {
    type = list(string)
    description = "The names of the app services to create under the same app service plan"
}

locals {
    location = "eastus2"
    resource_group_name = "app505-dfpg-${var.env}-web-${local.location}"
}

resource "azurerm_app_service_plan" "asp" {
    name                = "${local.resource_group_name}-asp"
    location            = local.location
    resource_group_name = local.resource_group_name
    kind                = "Linux"
    reserved            = true

    sku {
        tier = "Basic"
        size = "B1"
    }
}

resource "azurerm_app_service" "appsvc" {
    for_each            = toset(var.appsvc_names)

    name                = "${local.resource_group_name}-${each.value}-appsvc"
    location            = local.location
    resource_group_name = local.resource_group_name
    app_service_plan_id = azurerm_app_service_plan.asp.id
}

# output "hostnames" {
#     value       = azurerm_app_service.appsvc[*].default_site_hostname
#     description = "The hostnames of the created app services"
# }

It works, but I want to output the hostnames, preferably as a map; for now just a list would be fine too.

When I uncomment the output statement and run terraform apply, I get this:

Error: Unsupported attribute

  on ..\..\modules\web\main.tf line 42, in output "hostnames":
  42:     value       = azurerm_app_service.appsvc[*].default_site_hostname

This object does not have an attribute named "default_site_hostname".

So how do I output the list (or better the map) of hostnames of the new app services?

(The question is also posted here - https://stackoverflow.com/questions/59906907/how-to-output-the-hostnames-of-app-services-created-with-for-each-in-terraform)
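Because a for_each resource is a map of objects rather than a list, the [*] splat does not apply; a for expression over the map produces the desired output. A sketch:

```hcl
output "hostnames" {
  description = "Map of app service name to its default hostname"
  value = {
    for name, svc in azurerm_app_service.appsvc :
    name => svc.default_site_hostname
  }
}
```

For a plain list instead, [for svc in azurerm_app_service.appsvc : svc.default_site_hostname] works the same way.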

Posts: 1

Participants: 1

Read full topic


