Channel: Terraform - HashiCorp Discuss

Conditional object with concat result expressions must have consistent types


My goal is to define a container definition which has a “main” container like nginx or apache, and which optionally has multiple sidecar containers. The sidecar containers are re-used by a bunch of different services, so I’m defining those as a local variable which I can reuse as many times as needed. The sidecar containers also need to be conditionally enabled, so that we can use sidecars in prod but not in dev, or vice versa.

Here is a super minimal reproducible example using apache as a “main” container and conditionally concatting the sidecars into the definition:

variable "use_generic_sidecars" {
  default = true
}

locals {
  # These generic sidecars are conditionally reused by a bunch
  # of different services, so we define them here as their own
  # variable and we re-use that variable in various other
  # container definitions.
  generic_sidecars = [
    {
      name      = "sidecar1"
      image     = "sidecar:latest"
      essential = false
    },
    {
      name      = "sidecar2"
      image     = "sidecar:latest"
      essential = false
    }
  ]
  apache = concat([
    {
      name      = "apache"
      image     = "httpd:latest"
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
    ],
    var.use_generic_sidecars ? local.generic_sidecars : [],
  )
}

resource "aws_ecs_task_definition" "generic_service" {
  family                = "service"
  container_definitions = jsonencode(local.apache)
}

I can run terraform plan and this plans as expected. It will build a task definition with 3 total containers: 1 apache container plus two sidecars brought in by way of concat(). This suggests to me that the general method I’m using for composing this resource is valid.

  # aws_ecs_task_definition.generic_service will be created
  + resource "aws_ecs_task_definition" "generic_service" {
      + arn                   = (known after apply)
      + arn_without_revision  = (known after apply)
      + container_definitions = jsonencode(
            [
              + {
                  + essential    = true
                  + image        = "httpd:latest"
                  + name         = "apache"
                  + portMappings = [
                      + {
                          + containerPort = 80
                          + hostPort      = 80
                        },
                    ]
                },
              + {
                  + essential = false
                  + image     = "sidecar:latest"
                  + name      = "sidecar1"
                },
              + {
                  + essential = false
                  + image     = "sidecar:latest"
                  + name      = "sidecar2"
                },
            ]
        )
      + family                = "service"
      + id                    = (known after apply)
      + network_mode          = (known after apply)
      + revision              = (known after apply)
      + skip_destroy          = false
      + tags_all              = (known after apply)
      + track_latest          = false
    }

Plan: 1 to add, 0 to change, 0 to destroy.

But when I try the same method with a more real-world example, like using Datadog sidecar containers, it doesn’t work. Here is a reproducible example using nginx as the “main” container, plus two Datadog containers with the same conditional concat method:

variable "use_datadog" {
  default = true
}

locals {
  # These datadog sidecars are conditionally reused by a bunch
  # of different services, so we define them here as their own
  # variable and we re-use that variable in various other
  # container definitions.
  datadog_containers = [
    {
      # https://docs.datadoghq.com/integrations/ecs_fargate/?tab=cloudformation#aws-cloudformation-task-definition
      name      = "datadog-agent"
      image     = "public.ecr.aws/datadog/agent:latest"
      essential = false
      cpu       = null
      memory    = null
      environment = [
        {
          name  = "ECS_FARGATE"
          value = "true"
        },
        # https://docs.datadoghq.com/security/cloud_security_management/setup/csm_pro/agent/ecs_ec2/
        {
          name  = "DD_CONTAINER_IMAGE_ENABLED"
          value = "true"
        },
        {
          name  = "DD_SBOM_ENABLED"
          value = "true"
        },
        {
          name  = "DD_SBOM_CONTAINER_IMAGE_ENABLED"
          value = "true"
        },
      ]
    },
    {
      name      = "cws-instrumentation-init"
      image     = "datadog/cws-instrumentation:latest"
      essential = false
      user      = "0"
      command = [
        "/cws-instrumentation",
        "setup",
        "--cws-volume-mount",
        "/cws-instrumentation-volume"
      ]
      mountPoints = [
        {
          sourceVolume  = "cws-instrumentation-volume"
          containerPath = "/cws-instrumentation-volume"
          readOnly      = false
        }
      ]
    },
  ]
  nginx = concat([
    {
      name      = "nginx"
      image     = "nginx:latest"
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
    ],
    var.use_datadog ? local.datadog_containers : [],
  )
}

resource "aws_ecs_task_definition" "service_with_datadog" {
  family                = "service"
  container_definitions = jsonencode(local.nginx)
}

Error message from terraform plan:

│ Error: Inconsistent conditional result types
│ 
│   on main.tf line 71, in locals:
│   71:     var.use_datadog ? local.datadog_containers : [],
│     ├────────────────
│     │ local.datadog_containers is tuple with 2 elements
│ 
│ The true and false result expressions must have consistent types. The 'true' tuple has length 2, but the 'false' tuple
│ has length 0.

In the nginx/datadog example, if I comment out the cws-instrumentation-init container, then I can get a valid plan with only one sidecar. But I do need that other sidecar.

Anybody know how I can unblock this inconsistent usage of conditional concatting?
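One workaround that I believe sidesteps the type-unification check is to replace the conditional with a filtered for expression, so Terraform never has to unify a 2-element tuple with an empty one (a sketch, untested):

nginx = concat(
  [
    {
      name      = "nginx"
      image     = "nginx:latest"
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ],
  [for c in local.datadog_containers : c if var.use_datadog],
)

The for expression returns either the full tuple or an empty one, and because there is no conditional expression, the “consistent result types” rule never applies.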

And in case it is helpful, both of the above HCL examples are reproducible by pasting the HCL into a main.tf file in an empty directory, then running an init and a plan. The apache example will plan cleanly and the nginx example will break with the same error seen above.

$ terraform version
Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.46.0

Thanks.

1 post - 1 participant

Read full topic


Resource recreating loop with route53 zones


Hello,
I have an issue where Terraform (1.7.5) always wants to add or remove all Route 53 zone VPC associations.

My module creates a Route 53 zone for each item in a list and then associates each zone with each VPC given as a list. I use a local value to build a map of the associations.

Any pointers on how to stop the resources recreating? I’m guessing it’s to do with ordering in the for_each in the association resource, but I’m not sure how to fix it. (One other possibility is sketched after the code below.)

locals {
 associations = merge([for item in var.vpc_ids_to_associate_with_phz : {
   for k, v in aws_route53_zone.private_hosted_zone : "${k}/${item}" => {
     zone_id = v.zone_id
     vpc_id  = item
   }
   }
 ]...)
}

variable "vpc_endpoints" {
 description = "A list of services to create endpoints for"
 type        = list(string)
 default     = []
}

variable "vpc_ids_to_associate_with_phz" {
 type        = list(string)
 description = "List of VPC IDs to associate with the private hosted Route 53 zone"
 default     = []
}

resource "aws_route53_zone" "private_hosted_zone" {
 for_each = toset(var.vpc_endpoints)

 name = "${each.key}.${data.aws_region.current.name}.amazonaws.com"
 vpc {
   vpc_id = aws_vpc.main_vpc.id
 }

 tags = {
   Name = "${each.key}-phz"
   env  = "${var.environment_tag}"
 }
}

resource "aws_route53_zone_association" "phz_association" {
 for_each = local.associations

 zone_id = each.value.zone_id
 vpc_id  = each.value.vpc_id
}
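One documented gotcha worth ruling out: when a zone’s main VPC is attached with an inline vpc block while the other associations are managed by separate aws_route53_zone_association resources, the AWS provider docs recommend making the zone ignore changes to vpc, otherwise every plan tries to reconcile the two (a sketch based on those docs):

resource "aws_route53_zone" "private_hosted_zone" {
  # ... arguments as above ...

  lifecycle {
    ignore_changes = [vpc]
  }
}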

2 posts - 1 participant

Read full topic

Merging a map of Subnets with CidrSubnet List


Hi all, I’m fairly new to Terraform and I’m trying to create a subnet model where I receive the name and subnet mask.
Based on the subnet mask I generate a list of CIDRs using “cidrsubnets”. I would like to merge, based on the index, the map of subnets with the list of CIDRs, to produce subnet_name, subnet_size and subnet_ip.
The expected outcome is:

[ 
   {
    subnet_ip   = "10.0.0.0/26"
    subnet_name = "app"
    subnet_size = 26
    },
   {
      subnet_ip   = "10.0.0.64/27"
      subnet_name = "data"
      subnet_size = 27
    },
   {
      subnet_ip   = "10.0.0.96/26"
      subnet_name = "shd"
      subnet_size = 27
    }
]
locals {
  base_cidr_block = "10.0.0.0/24"

  cidr_mask = tonumber(split("/", local.base_cidr_block)[1])

  #This will be a tfvar input 
  subnet_input = {
    "app" = 26
    "data" = 27
    }

  #this is static, always part of subnetting
  shd_subnet = {
      "shd" = 27
  }

  subnet_merge = merge(local.subnet_input, local.shd_subnet)

  subnet_cidr = cidrsubnets(local.base_cidr_block, [for k,v in local.subnet_merge : v - local.cidr_mask]...)

  first_try = concat(
    [ 
      for i in range(length(local.subnet_cidr)) : 
      [ for k,v in local.subnet_merge : {
            subnet_name = k
            subnet_size = v
            subnet_ip = local.subnet_cidr[i]
      }]])

  second_try = merge(flatten(
    [ 
      for i in range(length(local.subnet_cidr)) : [ 
      for k,v in local.subnet_merge : {
            subnet_name = k
            subnet_size = v
            subnet_ip = local.subnet_cidr[i]
      }]])...)
}

output "subnet_merge" {
  value = local.subnet_merge
}
output "subnet_list" {
  value = local.subnet_cidr
}
output "first_try" {
  value = local.first_try

}
output "second_try" {
  value = local.second_try
}
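One approach that should give the expected shape is to zip the sorted map keys with the CIDR list by index, instead of taking the cross product of the two collections (a sketch, untested; it relies on keys() returning keys in lexical order, which is the same order the for expression fed to cidrsubnets iterates in):

locals {
  subnet_model = [
    for i, name in keys(local.subnet_merge) : {
      subnet_name = name
      subnet_size = local.subnet_merge[name]
      subnet_ip   = local.subnet_cidr[i]
    }
  ]
}

Both first_try and second_try nest one for expression inside the other, which pairs every subnet with every CIDR rather than pairing them positionally.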

2 posts - 1 participant

Read full topic

Terraform hangs, but gives no indication why


I’ve got an issue with terraform apply never timing out, and when it’s cancelled, no reason is given. The rest of the Terraform config is created fine; it’s just this one load balancer listener rule. Other listeners are also created, and the target group is created OK, but this one rule just hangs.

Output:

Terraform will perform the following actions:

module.login_lambda.aws_lb_listener_rule.this_https will be created

  • resource “aws_lb_listener_rule” “this_https” {
    • arn = (known after apply)

    • id = (known after apply)

    • listener_arn = “arn:aws:elasticloadbalancing:eu-west-2:XXXXXXX:loadbalancer/app/XXXXX/f5820bda91b3d2e2”

    • priority = 10

    • action {

      • order = (known after apply)
      • target_group_arn = “arn:aws:elasticloadbalancing:eu-west-2:XXXXX:targetgroup/XXXXXX/8da3d82954fedc6f”
      • type = “forward”
        }
    • condition {

      • path_pattern {
        • values = [
          • “/account/login”,
            ]
            }
            }
            }

Plan: 1 to add, 0 to change, 0 to destroy.
module.login_lambda.aws_lb_listener_rule.this_https: Creating…
module.login_lambda.aws_lb_listener_rule.this_https: Still creating… [10s elapsed]

module.login_lambda.aws_lb_listener_rule.this_https: Still creating… [2m20s elapsed]
Stopping operation…

Interrupt received.
Please wait for Terraform to exit or data loss may occur.
Gracefully shutting down…


│ Error: execution halted

│ Error: execution halted

│ Error: creating ELBv2 Listener Rule: operation error Elastic Load Balancing v2: CreateRule, request canceled, context canceled

│ with module.login_lambda.aws_lb_listener_rule.this_https,
│ on modules\lambda\main.tf line 68, in resource “aws_lb_listener_rule” “this_https”:
│ 68: resource “aws_lb_listener_rule” “this_https” {
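One way to see what the provider is actually waiting on (standard Terraform debugging, not something from this post) is to rerun with debug logging enabled:

$ TF_LOG=debug TF_LOG_PATH=./tf-debug.log terraform apply

The debug log records each CreateRule request and response, which usually shows whether the call is being retried, throttled, or never answered.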

1 post - 1 participant

Read full topic

Terraform keeps ignoring provider constraint strings on terraform init


Hi all,

This is my first question on this forum. I have been using Terraform for some months now, but always via Jenkins pipelines, never via the CLI.

Currently, one of the pipelines generates Terraform code to create resources on GCP, but over the last days I found an issue that I can’t work out how to solve. I’ll explain the issue and the steps taken so far, without luck.

I’m using Terraform 0.13.7, and in my providers.tf file I have defined google and local as providers, with fixed versions, because those exact versions are strictly required (3.13 and 2.2.1, respectively). However, due to the nature of the Jenkins pipeline, terraform init is executed twice before proceeding with terraform plan.

The first init indeed installs the versions defined in my providers.tf file. The second init initially uses the recently installed provider versions, but immediately afterwards tries to upgrade both providers to their latest versions, which does not work for me. The plan then fails because Terraform tries to remove a local_file resource that is no longer required, but the provider version is wrong. That local_file resource was created some time ago with local 2.2.1, but the second terraform init upgrades the provider to 2.5.2, and then the plan fails with this error:

Error: Provider configuration not present

To work with local_file.nameofthefile its original provider configuration at
provider["registry.terraform.io/-/local"] is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy local_file.df-tls-in-json, after which you can remove
the provider configuration again.

Therefore, I need to force local provider to use 2.2.1 and don’t upgrade to 2.5.2 during the second terraform init.

Worth mentioning: each Jenkins job sets up a temp workspace where all the TF files live on the Jenkins workers, so I don’t have access to the terraform.lock.hcl file. All files generated during the run are destroyed, and the code generated for the resources is stored in an external repository.

I tried to use constraints in different ways:

local = {
   source  = "hashicorp/local"
   version = "!=2.5.1,!=2.5.0,!=2.4.1,!=2.4.0,!=2.3.0,!=2.2.3,!=2.2.2"
}

local = {
   source  = "hashicorp/local"
   version = "2.2.1"
}

But the second run still overrides this:

- Using previously-installed hashicorp/local v2.2.1
- Finding latest version of -/local...
- Installing -/local v2.5.1...
- Installed -/local v2.5.1 (signed by HashiCorp)

Since Terraform 0.13 no longer supports -get-plugins=false, I’m running out of ideas.
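For what it’s worth, the -/local in that output is the legacy (pre-0.13) provider address, which Terraform keeps using because the state was written by an older version. The documented 0.13 upgrade step is to repoint the state at the namespaced provider, which should stop init from hunting for the latest -/local (this rewrites the remote state, so test carefully):

$ terraform state replace-provider registry.terraform.io/-/local registry.terraform.io/hashicorp/local

After that, the version constraint on hashicorp/local should be honoured on every init.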

I really appreciate any guidance here.

Thanks in advance.

1 post - 1 participant

Read full topic

How to make child module resources *stay* inside of the child module when synthesizing?


Hi!

I made a ticket for this issue on GitHub because I wasn’t sure if it was a bug or a misunderstanding on my part: https://github.com/hashicorp/terraform-cdk/issues/3604

Given a stack that declares a Terraform module:

// main.ts

import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
import * as ModuleA from "./modules/moduleA/main";

class MyStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    new ModuleA.ModuleA(this, "module_a", {
      source: "./modules/moduleA",
    });
  }
}

const app = new App();
new MyStack(app, "mystack");
app.synth();

And the module contains a scoped resource:

// modules/moduleA/main.ts

import { TerraformModule, TerraformModuleConfig } from "cdktf";
import { Construct } from "constructs";
import { RandomProvider } from "@cdktf/provider-random/lib/provider";
import { Id as RandomId } from "@cdktf/provider-random/lib/id";

export class ModuleA extends TerraformModule {
  public constructor(
    scope: Construct,
    id: string,
    config: TerraformModuleConfig
  ) {
    super(scope, id, {
      ...config,
    });
    new RandomProvider(this, "default");
    new RandomId(this, "mod_a_resource_one", {
      byteLength: 2,
    });
  }
}

I expected to see the resource inside of the module in the synthesized HCL, because it was specified in a different scope. Something like this, for example:

// cdktf.out/stacks/mystack/cdk.tf

terraform {
  required_providers {
    random = {
      version = "3.6.0"
      source  = "hashicorp/random"
    }
  }
  backend "local" {
    path = "<path>/terraform.mystack.tfstate"
  }
}

module "module_a" {
  source = "./assets/__cdktf_module_asset_26CE565C/AED96664F96F513147DECF4060B0C6AE"
}
// ./assets/__cdktf_module_asset_26CE565C/AED96664F96F513147DECF4060B0C6AE/main.tf

provider "random" {
}
resource "random_id" "mod_a_resource_one_0BDEBAC6" {
  byte_length = 2
}

But to my surprise, the resource that was created in the Terraform module wasn’t actually in the module:

// cdktf.out/stacks/mystack/cdk.tf

terraform {
  required_providers {
    random = {
      version = "3.6.0"
      source  = "hashicorp/random"
    }
  }
  backend "local" {
    path = "/<path>/terraform.mystack.tfstate"
  }
}

module "module_a" {
  source = "./assets/__cdktf_module_asset_26CE565C/AED96664F96F513147DECF4060B0C6AE"
}

provider "random" {
}
resource "random_id" "module_a_mod_a_resource_one_0BDEBAC6" {
  byte_length = 2
}

For context, this is how my remote state file currently looks: resources are nested inside of child modules, and those modules are then referenced in the main.tf, but the child resources are not exposed in that file, if that makes sense.

I wanted to migrate my state from HCL files to CDKTF, but keep the same resource IDs/configuration structure, so I was hoping I could reverse-engineer my config with cdktf synth --hcl. Is this possible?

One thing I noticed is that synth produces a state file with a version value equal to 3, while my pre-existing terraform.tfstate in production has a version value equal to 4. I wonder if that’s related somehow?

Any thoughts/advice would be appreciated :pray: thank you

1 post - 1 participant

Read full topic

S3 compatible server (DELL/EMC) Remote State setup


Hello,
We are using Dell/EMC’s Object Storage (ECS), which is S3-compatible storage, as Terraform’s remote state backend.

Since it’s S3-based, I used the S3 backend configuration as shown below:

terraform {
  backend "s3" {
    bucket     = "bucket-name"
    endpoints  = { s3 = "https://our.endpoint/" }
    key        = "test/tf.state"
    secret_key = "****"
    access_key = "****"

    region                      = "us-east-1"
    encrypt                     = "false"
    skip_credentials_validation = "true"
    skip_requesting_account_id  = "true"
  }
}

When initializing, it’s throwing the below error:

│ Error: Failed to get existing workspaces: Unable to list objects in S3 bucket "bucket-name" with prefix "env:/": operation error S3: ListObjectsV2, exceeded maximum number of attempts, 5, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Get "https://bucket-name.our.endpoint/?list-type=2&max-keys=1000&prefix=env%3A%2F": tls: failed to verify certificate: x509: certificate is valid for *.our.endpoint, *.our.endpoint…

Here are my troubleshooting steps:

Direct testing with REST client & S3 Browser application (successful): when I use the REST client from the browser I can connect to the S3 bucket directly, and likewise via the S3 Browser application. The creds have Full Control permissions on the bucket itself.

Testing via AWS CLI commands: when I use the same creds from aws configure, I see the below issue.

C:\Users\my-username>aws s3 ls

An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.

So even though the credentials have Full Access on the bucket itself, initializing via the Terraform backend fails with the above error, and the AWS CLI method fails too.

How can I get this sorted out?
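One setting that often matters for S3-compatible stores is path-style addressing, so requests go to https://our.endpoint/bucket-name instead of a bucket subdomain that the server’s certificate may not cover (a sketch, assuming a Terraform 1.6+ s3 backend; verify the argument names against your version’s backend docs):

terraform {
  backend "s3" {
    # ... settings as above ...

    use_path_style         = true
    skip_region_validation = true
    skip_s3_checksum       = true
  }
}

The InvalidAccessKeyId from aws s3 ls is also expected when the CLI is pointed at real AWS; it needs --endpoint-url https://our.endpoint to talk to the ECS appliance instead.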

Cheers in advance Terraform’ers!!

1 post - 1 participant

Read full topic


EBS volume integration with ECS fargate


Hi Team,

I want to attach an EBS volume to my ECS Fargate task. Through the AWS console I get the option to “Configure at deployment”, however I’m not able to find this option in the task definition resource in the Terraform registry. AWS announced at the start of 2024 that ECS can now use EBS volumes as a storage option for tasks. Can you please confirm whether this option is available in Terraform or not?
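For what it’s worth, recent AWS provider releases appear to model this as configure_at_launch on the task definition’s volume block, with the EBS settings themselves on the service (a sketch from memory of the provider changelog; verify the names against your provider version’s docs):

resource "aws_ecs_task_definition" "this" {
  # ...
  volume {
    name                = "data"
    configure_at_launch = true
  }
}

resource "aws_ecs_service" "this" {
  # ...
  volume_configuration {
    name = "data"
    managed_ebs_volume {
      # hypothetical role; it needs the ECS infrastructure policy for volumes
      role_arn   = aws_iam_role.ecs_infrastructure.arn
      size_in_gb = 20
    }
  }
}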

Thanks

1 post - 1 participant

Read full topic


Changing implicit dependencies of a resource without destroying the resource


Terraform version 0.13.7

I have a resource which uses a provider that is configured using the outputs of a module. According to the state file, my dependent resource directly depends on the resource of that module:

module "service_account_old" { ... }

provider "kafka" {
  bootstrap_servers = [module.service_account_old.bootstrap_servers]
  sasl_username     = module.service_account_old.sasl_user
  sasl_password     = module.service_account_old.sasl_password
}

resource "kafka_topic" "my_topic" {
  name      = "my-topic"
  lifecycle {
    prevent_destroy = true
  }
}

In the state file, I can see the dependency for the topic as "dependencies": ["module.service_account_old.restapi_object.service_account"]

Now, I want to replace the old module with a new one, so I have applied the following:

module "service_account_old" { ... }

module "service_account_new" { ... } # Has no references to old module

provider "kafka" {
  bootstrap_servers = [module.service_account_new.bootstrap_servers]
  sasl_username     = module.service_account_new.sasl_user
  sasl_password     = module.service_account_new.sasl_password
}

resource "kafka_topic" "my_topic" {
  name      = "my-topic"
  lifecycle {
    prevent_destroy = true
  }
}

Even after the apply, the kafka_topic dependency still points to the old module, and thus when I attempt to destroy the old module (which I have to do with a manual target, since it has a nested provider), Terraform also tries to destroy the kafka_topic resource, which is blocked due to its lifecycle config.

My question is, why does the topic resource still have the old module resources as a dependency? Is there some way for me to change this dependency without destroying the kafka_topic resource?
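One heavy-handed workaround (risky on shared state, so take a backup first) is to edit the recorded dependencies directly via state pull/push:

$ terraform state pull > state.json
# edit the "dependencies" list on kafka_topic.my_topic, bump "serial"
$ terraform state push state.json

As far as I can tell, the recorded dependencies are only rewritten when the resource itself is created or updated, which would explain why a no-op apply leaves the stale entry in place.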

3 posts - 2 participants

Read full topic

Assigning Maintenance configuration to multiple existing virtual machines


I have the Terraform code below, which creates a maintenance configuration for only one virtual machine, as you can see.

provider "azurerm" {
  features {}
}
terraform {
  required_providers {
    azapi = {
      source  = "Azure/azapi"
      version = "=0.4.0"
    }
  }
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

# resource "azurerm_maintenance_configuration" "example" {
#   name                = "example-mc"
#   resource_group_name = azurerm_resource_group.example.name
#   location            = azurerm_resource_group.example.location
#   scope               = "InGuestPatch"

#   tags = {
#     Env = "prod"
#   }

  
# }

resource "azapi_resource" "vm_maintenance" {
  type      = "Microsoft.Maintenance/maintenanceConfigurations@2021-09-01-preview"
  name      = "vm-mc"
  parent_id = "/subscriptions/XXXX/resourceGroups/example-resources"
  location  = azurerm_resource_group.example.location

  body = jsonencode({
    properties = {
      visibility          = "Custom"
      namespace           = "Microsoft.Maintenance"
      maintenanceScope    = "InGuestPatch"
      extensionProperties = {
        "InGuestPatchMode" = "User"
      }
      maintenanceWindow = {
        startDateTime      = formatdate("YYYY-MM-DD 17:30", timestamp())
        expirationDateTime = null
        duration           = "PT3H30M"
        timeZone           = "Eastern Standard Time"
        recurEvery         = "120Hour"
      }
      installPatches = {
        linuxParameters = {
          classificationsToInclude  = ["Critical", "Security", "Other"]
          packageNameMasksToExclude = null
          packageNameMasksToInclude = null
        }
        windowsParameters = {
          classificationsToInclude = ["Critical", "Security" , "UpdateRollup", "FeaturePack" , "ServicePack", "Definition" ,"Tools", "Updates"  ]
          kbNumbersToExclude       = null
          kbNumbersToInclude       = null 
        }
        rebootSetting = "RebootIfRequired"
      }
    }
  })


}

resource "azapi_resource" "vm_maintenance_assignment" {
  type      = "Microsoft.Maintenance/configurationAssignments@2021-09-01-preview"
  name      = "vm--mca"
  parent_id = "/subscriptions/XXX/resourceGroups/example-resources/providers/Microsoft.Compute/virtualMachines/test1"
  location  = "East US 2"

  body = jsonencode({
    properties = {
      maintenanceConfigurationId = azapi_resource.vm_maintenance.id
    }
  })
}

How do I assign it to multiple existing virtual machines? Please suggest a fix.
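A sketch of one way to do it, using for_each over a set of VM resource IDs (the variable name is made up for illustration):

variable "vm_ids" {
  type = set(string)
  # e.g. ["/subscriptions/XXX/resourceGroups/example-resources/providers/Microsoft.Compute/virtualMachines/test1", ...]
}

resource "azapi_resource" "vm_maintenance_assignment" {
  for_each  = var.vm_ids
  type      = "Microsoft.Maintenance/configurationAssignments@2021-09-01-preview"
  name      = "${basename(each.value)}-mca"
  parent_id = each.value
  location  = "East US 2"

  body = jsonencode({
    properties = {
      maintenanceConfigurationId = azapi_resource.vm_maintenance.id
    }
  })
}

basename() on an Azure resource ID returns its last segment (the VM name), which keeps each assignment name unique.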

2 posts - 2 participants

Read full topic

Adding resource (VM) details and others to existing dynamic scope


How do I add a dynamic scope to an existing maintenance configuration?

I have created the maintenance configuration using Terraform. Here is the code:

provider "azurerm" {
  features {}
}
terraform {
  required_providers {
    azapi = {
      source  = "Azure/azapi"
      version = "=0.4.0"
    }
  }
}
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}
# resource "azurerm_maintenance_configuration" "example" {
#   name                = "example-mc"
#   resource_group_name = azurerm_resource_group.example.name
#   location            = azurerm_resource_group.example.location
#   scope               = "InGuestPatch"
#   tags = {
#     Env = "prod"
#   }
  
# }
resource "azapi_resource" "vm_maintenance" {
  type      = "Microsoft.Maintenance/maintenanceConfigurations@2021-09-01-preview"
  name      = "vm-mc"
  parent_id = "/subscriptions/XXXX/resourceGroups/example-resources"
  location  = azurerm_resource_group.example.location
  body = jsonencode({
    properties = {
      visibility          = "Custom"
      namespace           = "Microsoft.Maintenance"
      maintenanceScope    = "InGuestPatch"
      extensionProperties = {
        "InGuestPatchMode" = "User"
      }
      maintenanceWindow = {
        startDateTime      = formatdate("YYYY-MM-DD 17:30", timestamp())
        expirationDateTime = null
        duration           = "PT3H30M"
        timeZone           = "Eastern Standard Time"
        recurEvery         = "120Hour"
      }
      installPatches = {
        linuxParameters = {
          classificationsToInclude  = ["Critical", "Security", "Other"]
          packageNameMasksToExclude = null
          packageNameMasksToInclude = null
        }
        windowsParameters = {
          classificationsToInclude = ["Critical", "Security" , "UpdateRollup", "FeaturePack" , "ServicePack", "Definition" ,"Tools", "Updates"  ]
          kbNumbersToExclude       = null
          kbNumbersToInclude       = null 
        }
        rebootSetting = "RebootIfRequired"
      }
    }
  })
}
resource "azapi_resource" "vm_maintenance_assignment" {
  type      = "Microsoft.Maintenance/configurationAssignments@2021-09-01-preview"
  name      = "vm--mca"
  parent_id = "/subscriptions/XXX/resourceGroups/example-resources/providers/Microsoft.Compute/virtualMachines/test1"
  location  = "East US 2"
  body = jsonencode({
    properties = {
      maintenanceConfigurationId = azapi_resource.vm_maintenance.id
    }
  })
}

And I am trying to add a dynamic scope to the above maintenance configuration, but I don’t know where to begin or how to provide the input values.
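From what I can tell from the Azure REST docs, a dynamic scope is just another configurationAssignment created at subscription scope, with a filter block in place of a VM parent (a rough sketch; the API version and filter fields are from memory and should be double-checked):

resource "azapi_resource" "dynamic_scope" {
  type      = "Microsoft.Maintenance/configurationAssignments@2023-04-01"
  name      = "vm-mc-dynamic-scope"
  parent_id = "/subscriptions/XXXX"

  body = jsonencode({
    properties = {
      maintenanceConfigurationId = azapi_resource.vm_maintenance.id
      filter = {
        resourceTypes  = ["Microsoft.Compute/virtualMachines"]
        resourceGroups = ["example-resources"]
        osTypes        = ["Windows", "Linux"]
      }
    }
  })
}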

2 posts - 2 participants

Read full topic


Make terraform process Map argumented in a sorted manner?


I’m using the azurerm_data_factory_pipeline resource. It accepts a parameters map, like this:

 parameters = {
    "source"      = "source"
    "destination" = "destination"
   }

However, upon applying this, I noticed that in Azure the parameters are not added in the order I have them in my Terraform code. From research, it seems that map order is not guaranteed. I wonder if there is some way to guarantee it?

1 post - 1 participant

Read full topic


VCD - loop -Reference to undeclared resource


Hello, I am a beginner with Terraform. I declared the sizing policy (var: compute) in my variables file (a list of objects). When I use a for_each loop to pick the right sizing policy, it does not work:

sizing_policy_id = data.vcd_vm_sizing_policy.[each.value.compute].id

I also tried declaring the variable like compute = "data.vcd_vm_sizing_policy.VM_CRIT_4_8.id" and using it as sizing_policy_id = each.value.compute, but that does not work either.
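As far as I understand it, resource and data source addresses can’t be built from strings; the usual pattern is to give the data source its own for_each and index it by key (a sketch with invented variable names):

data "vcd_vm_sizing_policy" "policies" {
  for_each = toset([for vm in var.vms : vm.compute])
  name     = each.value
}

resource "vcd_vapp_vm" "vm" {
  for_each = { for vm in var.vms : vm.name => vm }
  # ...
  sizing_policy_id = data.vcd_vm_sizing_policy.policies[each.value.compute].id
}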

Any suggestions are welcomed

1 post - 1 participant

Read full topic


Can we create the maintenance configuration with the azurerm provider, not azapi?


Can we create the maintenance configuration with all arguments (install patches, Windows, Linux, etc.) using only the azurerm provider, not the azapi provider?
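For reference, azurerm_maintenance_configuration does appear to cover these via in_guest_user_patch_mode, window and install_patches (a sketch from the azurerm docs, from memory; verify against your provider version):

resource "azurerm_maintenance_configuration" "example" {
  name                     = "example-mc"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  scope                    = "InGuestPatch"
  in_guest_user_patch_mode = "User"

  window {
    start_date_time = "2024-05-01 17:30"
    duration        = "03:30"
    time_zone       = "Eastern Standard Time"
    recur_every     = "5Day"
  }

  install_patches {
    reboot = "IfRequired"
    linux {
      classifications_to_include = ["Critical", "Security", "Other"]
    }
    windows {
      classifications_to_include = ["Critical", "Security"]
    }
  }
}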

1 post - 1 participant

Read full topic

Adding multiple resource group to the resource VM maintenance assignment resource


As you can see in the earlier code, whatever VMs are in the “example-resources” resource group (test1, test2) I could add to the maintenance configuration.
Suppose I need to add VMs from another resource group: do I then need another parent_id in the vm_maintenance_assignment resource?
Please suggest how it can be done.

1 post - 1 participant

Read full topic

Any way to use retrieve an azure access token using terraform cloud deployment?


I’m fairly new to Terraform, so there may be a simpler way to achieve this.

I’m currently using Terraform Cloud with a repo in Azure DevOps for Azure deployments.

The provider is currently authenticating with Azure using a service principal.

I want to know if there’s a way to get or use the access token from this authentication. Usually you would just use the Az CLI, but I can’t do that from Terraform Cloud.

I’m trying to use the following provider but can’t find a way around my issue.

https://registry.terraform.io/providers/XtratusCloud/azureipam/latest/docs
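One workaround I’ve seen (a hedged sketch; note the token would be readable in plan output and state, so treat it as sensitive) is to request a token with the client-credentials grant directly via the http data source, using the same service principal details the provider uses:

data "http" "azure_token" {
  url    = "https://login.microsoftonline.com/${var.tenant_id}/oauth2/v2.0/token"
  method = "POST"

  request_headers = {
    "Content-Type" = "application/x-www-form-urlencoded"
  }

  request_body = "grant_type=client_credentials&client_id=${var.client_id}&client_secret=${var.client_secret}&scope=https%3A%2F%2Fmanagement.azure.com%2F.default"
}

locals {
  access_token = jsondecode(data.http.azure_token.response_body)["access_token"]
}

This assumes the hashicorp/http provider v3+ (for method and request_body) and that the var.* values mirror the service principal Terraform Cloud already authenticates with.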

1 post - 1 participant

Read full topic
