Channel: Terraform - HashiCorp Discuss

AWS security group source (CIDR / source SG)


Hello All,

I am trying to create a security group with multiple ingress rules (let's assume 2 ingress rules): one rule with a CIDR block as the source, and another rule with a different security group as the source.

So we have to use the cidr_blocks argument for the CIDR source and source_security_group_id for the second scenario.

Here is my code.
variables.tf

variable "ingress_rules" {
  type = map(object({
    from_port    = number
    to_port      = number
    protocol     = string
    cidr_blocks  = list(string)
    description  = string
    source_sg_id = string
  }))
}

main.tf

resource "aws_security_group_rule" "managed_node_ssh_access" {
  for_each = var.ingress_rules

  security_group_id        = aws_security_group.default.id
  description              = lookup(each.value, "description", null)
  type                     = "ingress"
  from_port                = lookup(each.value, "from_port", null)
  to_port                  = lookup(each.value, "to_port", null)
  protocol                 = lookup(each.value, "protocol", null)
  cidr_blocks              = lookup(each.value, "cidr_blocks", null)
  source_security_group_id = lookup(each.value, "source_sg_id", null)
}

terraform.tfvars

ingress_rules = {
  rule1 = {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
    description = "test"
    # source_sg_id = ""
  },
  rule2 = {
    from_port = 80
    to_port   = 80
    protocol  = "tcp"
    # cidr_blocks = ["0.0.0.0/0"]
    description  = "test"
    source_sg_id = "sg-123456"
  },
}

I am getting an error that says the cidr_blocks / source security group attribute is a required field.

I want to use cidr_blocks for rule1 and source_sg_id for rule2; please advise.
Thanks in advance.
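
One likely cause, sketched here as an assumption rather than a verified fix: an object(...) type constraint requires every attribute to be present in every element of the map, so the commented-out attributes in terraform.tfvars fail validation. Supplying the unused source explicitly as null keeps the type check happy while still letting the resource arguments fall back to null:

ingress_rules = {
  rule1 = {
    from_port    = 22
    to_port      = 22
    protocol     = "tcp"
    cidr_blocks  = ["10.0.0.0/16"]
    description  = "test"
    source_sg_id = null # explicitly null instead of omitted
  },
  rule2 = {
    from_port    = 80
    to_port      = 80
    protocol     = "tcp"
    cidr_blocks  = null # explicitly null instead of omitted
    description  = "test"
    source_sg_id = "sg-123456"
  },
}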

1 post - 1 participant



How do I get a list of all the folders in Terraform


I am trying to list all the folders in ${path.root}.

Is it possible?
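
There is no built-in function that returns directories directly, but here is a rough sketch (assuming every folder of interest contains at least one file) that derives them from a fileset() match:

locals {
  # fileset() matches files, so folder names are derived from the matched paths
  all_files = fileset(path.root, "**")
  folders   = distinct([for f in local.all_files : dirname(f)])
}

output "folders" {
  value = local.folders
}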

1 post - 1 participant


Undocumented behavior around nested, aliased providers


Hello! I ran into a problem with the way Terraform 0.13 seems to handle providers within modules.

Specifically, the issue crops up when using a submodule with an implicitly-inherited, aliased provider. A reason this might come up is that the submodule needs two different google providers (to create resources in two google projects).

I have the following (example) root module and submodule:

# main.tf (root module, defines providers)
provider "google" {
  project = "my-primary-project"
}

provider "google" {
  alias = "other-project"
  project = "my-other-project"
}

module "some_submodule" {
  source = "./some_submodule"
}

# some_submodule/main.tf (submodule, uses both google providers)
provider "google" {
  alias = "my-other-project"
}

resource "google_service_account" "some_primary_project_resource" {
  provider = google
  
  # ... other config
}

resource "google_service_account" "some_other_project_resource" {
  provider = google.other-project
  
  # ... other config
}

Note that I am not explicitly passing down the other-project alias to the submodule. The docs say that this shouldn’t work:

Additional provider configurations (those with the alias argument set) are never inherited automatically by child modules, and so must always be passed explicitly using the providers map.

Given the documentation, I would expect this terraform config not to work at all. The submodule should fail with some kind of error like Provider not found: google.other-project. The submodule, which is not explicitly passed the aliased configuration, shouldn’t have access to that particular provider. Therefore some_submodule should not know to create some_other_project_resource in the my-other-project GCP project.

However, this config can actually be applied, and it leads to a messy state. Somehow the submodule is actually able to find the other-project alias (and therefore knows to create some_other_project_resource in the GCP project named my-other-project), but in terraform state, the resource’s provider is named module.some_submodule.providers[registry.terraform.io/hashicorp/google].other-project. So it is reusing the same provider configuration, but giving it a new name and tying it to the some_submodule module. This leads to some confusion, because now the some_submodule module cannot be deleted (because terraform thinks the provider definition is gone, even though it isn’t).

Is this actually a valid setup? My sense is that this configuration should cause an error to be thrown during terraform plan, perhaps with an error message that recommends adding a providers = { ... } argument to the some_submodule definition.
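
For reference, a sketch of the documented explicit form in Terraform 0.13 (this is my assumption of what such an error message would steer towards): the submodule declares the alias it expects with an empty proxy configuration block, and the root module passes the aliased configuration through the providers map.

# main.tf (root module)
module "some_submodule" {
  source = "./some_submodule"

  providers = {
    google.other-project = google.other-project
  }
}

# some_submodule/main.tf: empty "proxy" block declaring the expected alias
provider "google" {
  alias = "other-project"
}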

1 post - 1 participant


Terraform apply is failing. Getting internal server error



Using Terraform v0.13.2.
Expected behavior is that changes should be applied to the existing workspace, but I am getting an internal server error. I am running terraform apply through Azure DevOps services. On Sep 9 HashiCorp sent an email saying that, due to some changes, we would have to revoke the existing VCS connection and reconnect. I had created a new one instead and updated the configuration to use the new oath_vcs_id and TFE token.

1 post - 1 participant



Community Office Hours: Terraform Registry


Join us tomorrow, as we host a session of live public, virtual office hours. This 60-minute session will focus on the Terraform Registry.

We encourage community members to submit questions and topics to discuss, and to up-vote topics identified by fellow practitioners ahead of time. We will have experts available who can provide advice on your technical architecture, give recommendations for operational best practices, review current GitHub issues, or dive into the open source code itself.

Join us live tomorrow, from 10-11am EDT!

2 posts - 1 participant


Dynamic backend configuration


Hi there,

I’m trying to have a single tf file that can cater to local state or remote AWS S3 for a build pipeline running in a container.

I tried having a dynamic block:

dynamic "s3_backend" {
  for_each = var.use_remote_terraform_state == false ? [] : [1]

  terraform {
    backend "s3" {
      bucket         = var.terraform_state_s3_bucket
      key            = var.terraform_state_s3_key
      region         = var.terraform_state_s3_region
      dynamodb_table = var.terraform_state_dynamo_table
      encrypt        = true
    }
  }
}

However this results in the error:
Error: Unsupported block type

  on main.tf line 10:
  10: dynamic "s3_backend" {

Blocks of type "dynamic" are not expected here.

Any suggestions on how to achieve this?

Thanks in advance.

Tim
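
A note on what is possible here, sketched under assumptions about the pipeline: backend blocks cannot contain variables or be generated with dynamic, so the usual pattern is a minimal backend block plus partial configuration passed to terraform init. Switching between local and S3 state then comes down to which backend block, if any, is present when init runs. The file names below are hypothetical.

# backend.tf: only present (or generated) for runs that should use S3
terraform {
  backend "s3" {
    encrypt = true
  }
}

# s3.backend.tfvars: hypothetical file holding the per-environment settings
bucket         = "example-terraform-state"
key            = "example/terraform.tfstate"
region         = "eu-west-1"
dynamodb_table = "example-terraform-locks"

# Pipeline: terraform init -backend-config=s3.backend.tfvars
# Local:    omit backend.tf and run terraform init to keep local state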

1 post - 1 participant


Module dependencies


Hello, I have an issue with module dependencies; here's what I am doing.

I have 2 modules: one for creating custom IAM policies, and the other for creating users and attaching these custom policies (this is done via CloudFormation).

module "global-iam-policies" {
  source = "…/modules/global-iam-policies"
}

module "service-accounts" {
  source                  = "…/modules/service-accounts"
  svcNames                = var.svcNames
  default_custom_policies = module.global-iam-policies.default_custom_policies
}

I have no issues during creation: the policies get created first, then the user is created and the policies are attached.

The problem is that when I try to destroy one policy, I get this error: DeleteConflict: Cannot delete a policy attached to entities

So it tries to destroy the IAM policy before applying the changes to the CloudFormation template, which should first detach the policy.

Any idea how to solve this?

Thanks a lot

1 post - 1 participant



How to configure a snapshot scheduler to copy 1200 PVs


Hello:
I am using Terraform v0.12.21 to deploy 100+ WordPress sites on Google Kubernetes Engine (GKE). I used Terraform to create one Persistent Volume Claim for each site.

Now I have a problem: I need to configure a snapshot schedule for all of those Persistent Volumes, but after reading a lot of the Google and Terraform documentation about the snapshot scheduler, I cannot find any example of how to do that.




Here is an example of my code:

#VOLUME CLAIM
resource "kubernetes_persistent_volume_claim" "wordpress_volumeclaim" {
  for_each = var.wordpress_site

  metadata {
    name      = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
    namespace = "default"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = { storage = each.value.disk }
    }
  }
}

resource "kubernetes_deployment" "wordpress" {
  for_each = var.wordpress_site

  metadata {
    name   = each.value.name
    labels = { app = each.value.name }
  }
  spec {
    replicas = 1
    selector {
      match_labels = { app = each.value.name }
    }
    template {
      metadata {
        labels = { app = each.value.name }
      }
      spec {

        volume {
          name = "wordpress-persistent-storage-${terraform.workspace}-${each.value.name}"
          persistent_volume_claim {
            claim_name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
          }
        }

Could you help me attach my snapshot schedule (created manually in GCE and named "snapshot-pvc")? Maybe this cannot be done at all.

Thanks in advance,

aconde
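
One possible direction, sketched under the assumption that the names of the GCE disks backing the PVCs are known (GKE derives them from the PVCs), is to attach the manually created "snapshot-pvc" schedule to each disk with the google provider. The variable and zone below are hypothetical.

resource "google_compute_disk_resource_policy_attachment" "pvc_snapshots" {
  # var.pvc_disk_names is a hypothetical set of the GCE disk names backing the PVCs
  for_each = toset(var.pvc_disk_names)

  name = "snapshot-pvc"   # the manually created snapshot schedule
  disk = each.value
  zone = "europe-west1-b" # assumption: the zone where the disks live
}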

1 post - 1 participant


Switch from using count/element and a data source to for_each


I’m looking to switch from count to for_each but I’m having trouble with the following.

The following is an example of what I'm currently using; it takes advantage of the wrap-around behaviour of element.

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_instance" "example" {
  count                  = length(var.example_ips)
  subnet_id              = element(module.vpc.private_subnets, count.index)
  private_ip             = element(var.example_ips, count.index)
}

Any ideas on how to recreate this with for_each? I've tried the code below, but a change of order in var.example_ips still results in recreating the instances, because I'm still using the list index to choose subnet_id.

resource "aws_instance" "example" {
  for_each = { for ip in var.example_ips : ip => index(var.example_ips, ip) }
  subnet_id              = element(module.vpc.private_subnets, each.value)
  private_ip = each.key
}
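
One direction, sketched with a hypothetical variable shape, is to carry the subnet assignment alongside each IP so that nothing depends on list order any more:

# Hypothetical replacement for var.example_ips: each IP carries its own
# subnet assignment, so reordering entries no longer changes anything.
variable "example_instances" {
  type = map(object({
    subnet_index = number
  }))
  # e.g. { "10.0.1.10" = { subnet_index = 0 }, "10.0.2.10" = { subnet_index = 1 } }
}

resource "aws_instance" "example" {
  for_each = var.example_instances

  subnet_id  = element(module.vpc.private_subnets, each.value.subnet_index)
  private_ip = each.key
}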

1 post - 1 participant


Issue with attaching a Spotinst VMSS to an Application Gateway load balancer


I have created 2 modules in Terraform: 1. Application Gateway load balancer; 2. Spotinst elastigroup creation.

I want to attach the scale set created in Spotinst to the Application Gateway backend pool.

I believe the syntax below is used to attach an Application Gateway load balancer in the Spotinst elastigroup creation, but I am unsure how to get these values.
Spotinst module:
load_balancers {
  type          = ""
  balancer_id   = ""
  target_set_id = ""
  auto_weight   = true
}
Can anyone please guide me on what values to use in the block above to attach an Azure Application Gateway load balancer to the Spotinst elastigroup?

1 post - 1 participant


For_each depends_on previous item


Good afternoon, Terraform community. I have a challenge I am trying to solve and have not found a good solution yet, other than breaking my for_each into separate resources.

I am trying to mount a variable number of data disks on an Azure VM with the following code:

resource "azurerm_virtual_machine_data_disk_attachment" "data_disks" {
  for_each = var.data_disks

  managed_disk_id    = azurerm_managed_disk.data_disks[each.key].id
  virtual_machine_id = azurerm_linux_virtual_machine.VM.id
  lun                = each.value.lun
  caching            = "ReadWrite"
}

The code above works fine, but sometimes the disks get attached to the VM out of order, e.g. LUN 1 is attached before LUN 0. This appears to be an Azure issue where it does not necessarily create resources in the order the requests are received.

So to fix this I would need to add a depends_on inside each item of the for_each loop to make it depend on the preceding item, if it is not the first one.

It appears depends_on applies to the resource as a whole, before the for_each is expanded, and not to each individual item.

Any suggestions? Breaking the code into separate resources would be awful, as I don't know in advance how many data disks will be requested.

1 post - 1 participant


Azure VMSS with custom Image - update failed


Hi,

TF 0.13.2 with latest AzureRM provider.

I provisioned a VMSS with a custom image. The problem is that if I now add data disks to the VMSS, the update request fails with:

Error: Error updating LWindowsinux Virtual Machine Scale Set "vmss-apptf" (Resource Group "RG-TFCICDAPP105"): compute.VirtualMachineScaleSetsClient#Update: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="Required parameter 'imageReference' is missing (null)." Target="imageReference"

  on vmss.tf line 5, in resource "azurerm_windows_virtual_machine_scale_set" "vmss":
   5: resource "azurerm_windows_virtual_machine_scale_set" "vmss" {

Of course, imageReference is missing, as I use source_image_id for the VMSS as documented.

Any idea what's causing the issue?

1 post - 1 participant


`terraform plan -out` with temporary credentials


Hello there,

I have Terraform part of my CI/CD pipeline - we segregate the plan stage from the apply stage, with the output of the plan stage (terraform plan -out plan.tfplan) as the input to the apply stage.

This works great, as long as the backend and providers use credentials that are consistent between the stages. It appears that if each job in the pipeline has different credentials (because they are ephemeral and specific to that job), the plan includes the credentials, and when applying, despite running terraform init -reconfigure with the new job's credentials, the credentials in the plan are used, resulting in a 401 from my HTTP backend.

I could probably work around this problem by writing fancy scripts that check that the plans match except for the backend configuration, but I'm wondering if there's a better way to accomplish this, or whether this behaviour is expected (it definitely was not expected by me, despite having used Terraform for quite a while now).

If necessary – I’m on Terraform 0.12.29.

Joel

1 post - 1 participant


How can I predict the names of created resources


Hi there.
I am trying to use the CDK with Python for deploying our environment, but the algorithm for generating resource names is still unclear to me: I cannot predict which resource names will be generated. This is important for us because we want to use the CDK with Jenkins jobs.
I'm afraid that after some deployments these names could change and our remote state could get out of sync.
Could someone give some information about this algorithm, and about which activities can change the current name of a resource?

1 post - 1 participant



Run command cdktf deploy with parameters


Hi there.
I use Python for developing my scripts. The script should be able to deploy the same environment with different parameters.
Right now I solve this problem with the following approach:

  1. pipenv run ./main.py eu-west-3c blue
     then I go to the cdktf.out folder and after that run
  2. terraform apply

Is it possible to combine those parameters with the cdktf deploy command?

2 posts - 2 participants


Property Handling at scale


Hi,

we are currently using Terraform to manage 20 AWS accounts, each requiring about 60 variables split across about 10 YAML files (for trigger reasons: separate Terraform projects for networking, permissions, EKS, etc.), all stored in Git.

As the number of accounts and values to manage is about to grow rapidly, we are looking for a new way to store the parameters.

Current ideas:

  • build a ui + api around the configs in git (not preferred)
  • store most properties in AWS Parameter Store (leaves us only with managing the "pointer" to the account); see the sketch after this list
  • add a product that manages the properties for us
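
A rough sketch of the second idea, with a hypothetical parameter path and variable names, reading a per-account value from AWS Systems Manager Parameter Store:

data "aws_ssm_parameter" "vpc_cidr" {
  # hypothetical naming convention: /accounts/<account>/<project>/<key>
  name = "/accounts/${var.account_name}/networking/vpc_cidr"
}

locals {
  vpc_cidr = data.aws_ssm_parameter.vpc_cidr.value
}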

Adding Terragrunt or Terraform workspaces does not seem to solve the issues we have right now.

Thanks ahead

1 post - 1 participant


Initialization of the CDK project


Hi there.
I have a question about initializing a CDK for Terraform project.
I use Jenkins for deploying the environment. After cloning the code from the repository, the Jenkins workspace contains the project files. I then run the command cdktf get to fetch the providers, and after that the command pipenv run ./main.py.
I get this error:

Traceback (most recent call last):
  File "./main.py", line 3, in <module>
    from constructs import Construct
ModuleNotFoundError: No module named 'constructs'

The Jenkins workspace does not have the necessary libraries. When I created this project on my local desktop I ran cdktf init --template="python" --local; this command installs the cdktf library so that it can be used in the project.
I cannot run that initialization in the Jenkins workspace, because I get the error: ERROR: Cannot initialize a project in a non-empty directory

Is it possible to install all the necessary libraries in the Jenkins environment using the existing code?

2 posts - 2 participants


Conditional block in Terraform for if-else not working for passing a list based on the value of another variable


I am trying to use a conditional block in Terraform for creating AWS IAM groups and adding users to those groups based on count and length.

However, the value of the IAM users variable is a list and depends on the environment value set by another variable.

So this list is not passable by any iteration method in Terraform.

Please check if anyone from the HC family can help me out.

The code snippet is shared below.
I want one value to be used if aws_env is NONPROD and the other value to be used if the env is PROD.

module "iam_users" {
  source = "…/modules/iam_users"

  tf_state_s3_bucket = "aws-${lower(var.aws_env)}-tf-state"
  tf_state_dynamodb  = "aws-${lower(var.aws_env)}-tf-state"
  aws_env            = var.aws_env
  iam_admin_users    = var.iam_admin_users
}

variable "iam_admin_users" {
  description = "List of Admin users to add to IAM"
  type        = list(string)

  NONPROD = [
    "abc@gmail.com",
    "efg@gmail.com",
    "ijk@gmail.com",
    "mno@gmail.com",
    "xyz@gmail.com"
  ]

  PROD = [
    "tap@gmail.com",
    "pat@gmail.com",
  ]
}
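
A variable block cannot hold per-environment branches like this, so one possible shape (a sketch reusing the names from the post; the module source path is an assumption) is a map keyed by environment, indexed when calling the module:

variable "iam_admin_users" {
  description = "Map of environment name to the list of admin users to add to IAM"
  type        = map(list(string))

  default = {
    NONPROD = ["abc@gmail.com", "efg@gmail.com", "ijk@gmail.com", "mno@gmail.com", "xyz@gmail.com"]
    PROD    = ["tap@gmail.com", "pat@gmail.com"]
  }
}

module "iam_users" {
  source = "../modules/iam_users" # path assumed

  tf_state_s3_bucket = "aws-${lower(var.aws_env)}-tf-state"
  tf_state_dynamodb  = "aws-${lower(var.aws_env)}-tf-state"
  aws_env            = var.aws_env
  iam_admin_users    = var.iam_admin_users[var.aws_env]
}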

1 post - 1 participant


Community Office Hours: AWS Provider for Terraform


Community Office Hours: Terraform will be back next week! Join us Thursday, September 24 from 9:00-10:00am EDT. This 60-minute session will focus on the AWS Provider for Terraform.

We encourage community members to submit questions and topics to discuss, and to up-vote topics identified by fellow practitioners ahead of time. We will have experts available who can provide advice on your technical architecture, give recommendations for operational best practices, review current GitHub issues, or dive into the open source code itself.

Join us live September 24, from 9-10am EDT!

2 posts - 1 participant

