Channel: Terraform - HashiCorp Discuss

Documentation error


@mikeassel wrote:

Terraform newbie here, but I think I found a mistake in a code example in the docs. Is this the correct place to share, or is there a better forum? I saw some GitHub issues opened regarding documentation, but wanted to check here first.

Posts: 1

Participants: 1



Can I reduce the frequency of "Still creating" messages for "remote-exec"


@ljckennedy wrote:

I have a shell script that runs for hours. I would like to set the frequency of the “Still creating” messages to every 5 minutes rather than 10 seconds. Is this possible?

Sounds simple, but I have not found any reference to this being an option.

Thanks.

Posts: 1

Participants: 1


The 'resourceTargetId' property of endpoint 'vm1-TF' is invalid or missing


@DaniBet wrote:

I want to implement Traffic Manager in Terraform between two VMs in different locations (West Europe and North Europe). I’ve attached my code, but I don’t know how to configure “target_resource_id” for each VM, because the VMs (and the networks) were created in a for loop. The Traffic Manager should switch to the secondary VM in case the first VM fails. Any ideas?

My code:

variable "subscription_id" {}
variable "tenant_id" {}
variable "environment" {}
variable "azurerm_resource_group_name" {}
variable "locations" {
  type = map(string)
  default = {
    vm1 = "North Europe"
    vm2 = "West Europe"
  }
}

# Configure the Azure Provider
provider "azurerm" {
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
  version = "=2.10.0"
  features {}
}

resource "azurerm_virtual_network" "main" {
  for_each            = var.locations
  name                = "${each.key}-network"
  address_space       = ["10.0.0.0/16"]
  location            = each.value
  resource_group_name = var.azurerm_resource_group_name
}

resource "azurerm_subnet" "internal" {
  for_each             = var.locations
  name                 = "${each.key}-subnet"
  resource_group_name  = var.azurerm_resource_group_name
  virtual_network_name = azurerm_virtual_network.main[each.key].name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_public_ip" "example" {
  for_each                = var.locations
  name                    = "${each.key}-pip"
  location                = each.value
  resource_group_name     = var.azurerm_resource_group_name
  allocation_method       = "Static"
  idle_timeout_in_minutes = 30

  tags = {
    environment = "dev01"
  }
}

resource "azurerm_network_interface" "main" {
  for_each            = var.locations
  name                = "${each.key}-nic"
  location            = each.value
  resource_group_name = var.azurerm_resource_group_name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal[each.key].id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.example[each.key].id
  }
}

resource "random_password" "password" {
  length = 16
  special = true
  override_special = "_%@"
}

resource "azurerm_virtual_machine" "main" {
  for_each              = var.locations
  name                  = "${each.key}t-vm"
  location              = each.value
  resource_group_name   = var.azurerm_resource_group_name
  network_interface_ids = [azurerm_network_interface.main[each.key].id]
  vm_size               = "Standard_D2s_v3"


  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
  storage_os_disk {
    name              = "${each.key}-myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name  = "${each.key}-hostname"
    admin_username = "testadmin"
    admin_password = random_password.password.result
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  tags = {
    environment = "dev01"
  }

}

resource "random_id" "server" {
  keepers = {
    azi_id = 1
  }

  byte_length = 8
}

resource "azurerm_traffic_manager_profile" "example" {
  name                   = random_id.server.hex
  resource_group_name    = var.azurerm_resource_group_name
  traffic_routing_method = "Priority"

  dns_config {
    relative_name = random_id.server.hex
    ttl           = 100
  }

  monitor_config {
    protocol                     = "http"
    port                         = 80
    path                         = "/"
    interval_in_seconds          = 30
    timeout_in_seconds           = 9
    tolerated_number_of_failures = 3
  }

  tags = {
    environment = "dev01"
  }
}

resource "azurerm_traffic_manager_endpoint" "first-vm" {
  for_each            = var.locations
  name                = "${each.key}-TF"
  resource_group_name = var.azurerm_resource_group_name
  profile_name        = "${azurerm_traffic_manager_profile.example.name}"
  target_resource_id  = "[azurerm_network_interface.main[each.key].id]"
  type                = "azureEndpoints"
  priority              = "${[each.key] == "vm1" ? 1 : 2}"
}

My error:

Error: trafficmanager.EndpointsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. 
Status=400 Code="BadRequest" Message="The 'resourceTargetId' property of endpoint 'vm1-TF' is invalid or missing. 
The property must be specified only for the following endpoint types: AzureEndpoints, NestedEndpoints. 
You must have read access to the resource to which it refers."
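A hedged sketch of one possible fix: target_resource_id needs to be a Terraform expression rather than a quoted string, and for an azureEndpoints endpoint fronting a VM it typically points at the VM's public IP resource (which generally also needs a domain_name_label set so Traffic Manager can route to it); the priority expression simplifies to a plain conditional. Assuming the existing azurerm_public_ip.example resources:

resource "azurerm_traffic_manager_endpoint" "first-vm" {
  for_each            = var.locations
  name                = "${each.key}-TF"
  resource_group_name = var.azurerm_resource_group_name
  profile_name        = azurerm_traffic_manager_profile.example.name
  # Reference the public IP resource directly, not a string literal.
  target_resource_id  = azurerm_public_ip.example[each.key].id
  type                = "azureEndpoints"
  priority            = each.key == "vm1" ? 1 : 2
}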

Posts: 1

Participants: 1


Defining a variable which reads from a remote storage


@afshinm wrote:

Hello Folks!

I’m currently using local variables in a TF v0.12 project, but I’m hoping to use a remote store (e.g. Redis) to feed a variable which is a list of strings. My question is:

  • Is it possible to define a variable that can read from a remote key/value store?
  • If yes, what would be an appropriate store for this purpose?

Thank you
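There is no variable type that reads from a remote store directly, but a data source can feed the value at plan time. A minimal sketch using the built-in external data source, where fetch_list.sh is a hypothetical script that queries Redis and prints a JSON object of strings:

# Sketch only: fetch_list.sh is a hypothetical script that must print a JSON
# object whose values are strings, e.g. {"items": "[\"a\",\"b\"]"}.
data "external" "remote_list" {
  program = ["bash", "${path.module}/fetch_list.sh"]
}

locals {
  # The external data source only returns strings, so the list is
  # JSON-encoded by the script and decoded here.
  remote_items = jsondecode(data.external.remote_list.result["items"])
}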

Posts: 1

Participants: 1


Custom Providers in Terraform Cloud


@deasunk wrote:

Is there anything better than this for using custom providers in Terraform Cloud?

I’ve tried the submodule and symlink approach, but it’s unworkable:

  • Submodule clones fail on TFC with Internal error: SIC-001 and fatal: Authentication failed; I’ve tried both https and git@ URLs for the submodules.
  • mklink for Git symlinks on our Windows dev machines is a pain.
  • Committing the provider binary plugin to the Git repo makes Git operations slow.

Posts: 2

Participants: 2


If Conditionals with for_each


I’m having trouble making an if condition work with for_each:

locals {
  services = {
    api = {
      task_definition           = "api.json"
      service_discovery_enabled = true
      application_loadbalancer  = true
    }
    auth = {
      task_definition           = "auth.json"
      service_discovery_enabled = true
    }
    post = {
      task_definition           = "post.json"
      service_discovery_enabled = true
      application_loadbalancer  = true
    }
  }
}

I’m trying to create the resource only for services where application_loadbalancer = true:

resource "aws_cloudwatch_dashboard" "this" {
  for_each = [for s in var.services : s if lookup(s, "application_loadbalancer", false)]

  dashboard_name = "${var.name}-${terraform.workspace}-${each.key}-metrics-dashboard"

  dashboard_body = data.template_file.metric_dashboard[each.key].rendered
}

It fails with:
Error: Invalid for_each argument

on …/…/…/terraform/main.tf line 77, in resource “aws_cloudwatch_dashboard” “this”:
1178: for_each = [for s in var.services : s if lookup(s, “application_loadbalancer”, false)]

The given “for_each” argument value is unsuitable: the “for_each” argument
must be a map, or set of strings, and you have provided a value of type tuple.

Any ideas on how to resolve it?
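A hedged sketch of one way around this: for_each needs a map (or set of strings), so the list comprehension can be replaced with a map comprehension keyed by service name, assuming var.services has the shape of the locals above:

resource "aws_cloudwatch_dashboard" "this" {
  # for_each needs a map or set of strings, so build a map keyed by service name
  for_each = {
    for name, svc in var.services : name => svc
    if lookup(svc, "application_loadbalancer", false)
  }

  dashboard_name = "${var.name}-${terraform.workspace}-${each.key}-metrics-dashboard"
  dashboard_body = data.template_file.metric_dashboard[each.key].rendered
}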

1 post - 1 participant


Terraform v0.13.0 beta program


I’m very excited to announce that we will start shipping public Terraform 0.13.0 beta releases on June 3rd. The full release announcement is posted as a GitHub issue, and I’m opening up this discuss post as a place to host community discussions so that the GitHub issue can be used just for announcements.

1 post - 1 participant


Inconsistent "value depends on resource attributes that cannot be determined until apply"


(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant



Basic examples for serverless functions


As a starting point, I just want a single function that responds to a POST.

  const response = {
    statusCode: 200,
    headers: {
      'Content-Type': 'text/html; charset=utf-8'
    },
    body: '<p>Hello world!</p>'
  }

I want to deploy it to AWS Lambda and Google Firebase, but I am really struggling to find good examples and comprehensive documentation.

Maybe I am just not looking in the right place?
Thanks for any pointers.
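For the AWS side, a minimal hedged sketch of wiring that handler up with Terraform (assuming the response object above is returned from exports.handler in index.js and the file is zipped into lambda.zip; the API Gateway and Firebase pieces are not shown):

provider "aws" {
  region = "us-east-1" # assumption: any region works
}

# Execution role the function assumes at runtime
resource "aws_iam_role" "lambda" {
  name = "hello-world-lambda"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "hello" {
  function_name = "hello-world"
  filename      = "lambda.zip"    # assumption: zip built beforehand
  handler       = "index.handler" # file index.js, exported function "handler"
  runtime       = "nodejs12.x"
  role          = aws_iam_role.lambda.arn
}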

1 post - 1 participant


Secondary IP is getting removed on one apply then re-added on the subsequent apply. What gives?


I am configuring an aws_network_interface, setting a private IP, and requesting a secondary private IP. I’m seeing a problem where I run apply once and it adds the secondary IP, then I run apply a second time and it removes the IP. If I keep running apply, it alternates between adding and removing the secondary. I’ve tried adding prevent_destroy to both the aws_network_interface and the aws_instance, but it keeps destroying and re-adding the secondary. Is there any way to stop this?

Here’s my config for the aws_network_interface:

 resource "aws_network_interface" "tips_sql_1_secondary_ip" {
  count                 = "${element(var.tips_sql_enabled, 0) ? 1 : 0}"
  subnet_id             = "${element(aws_subnet.data.*.id, 0)}"
  private_ips           = ["${cidrhost(element(aws_subnet.data.*.cidr_block, 0), module.config.data_subnet_tipssql_host_number)}"]
  private_ips_count     = 2
  security_groups       = ["${aws_security_group.tips_sql_serverports.id}", "${aws_security_group.tips_sql_sqlports.id}"]

  lifecycle {
    prevent_destroy     = true
    # ignore_changes            = ["private_ips_count"]
  }
}

Here’s the config for the aws_instance that references it:

resource "aws_instance" "tips_sql_1" {
  count                       = "${element(var.tips_sql_enabled, 0) ? 1 : 0}"
  ami                         = "${data.aws_ami.mssql.id}"
  instance_type               = "${element(var.tips_sql_instance_type, 0)}"
  iam_instance_profile        = "${aws_iam_instance_profile.tips_sql_profile.name}"
  key_name                    = "${aws_key_pair.tips_sql_key.id}"
  user_data                   = "${data.template_file.userdata_sql_server_1_setup.rendered}"
  network_interface {
    device_index = 0
    network_interface_id = "${aws_network_interface.tips_sql_1_secondary_ip.id}"
  }
  monitoring                  = true
  disable_api_termination     = "${element(var.tips_sql_disable_api_termination, 0)}"
  root_block_device {
    volume_type               = "gp2"
    volume_size               = "${element(var.tips_sql_root_volume_size, 0)}"
    delete_on_termination     = "${element(var.tips_sql_delete_on_termination, 0)}"
  }
  
  tags = "${merge(
    local.tips_sql_common_tags,
    map("Name","${format("%[1]s-%[2]s", var.name_prefix, element(var.tips_sql_instances, 0))}")
  )}"
    #map("Name","${format(module.config.name_format_var_dif, var.name_prefix, element(var.tips_sql_instances, count.index + 1), var.name_suffix)}")

  lifecycle {
    # prevent_destroy           = true
    # ignore_changes            = ["network_interface"]
  }
}
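Not a definitive answer, but given the commented-out line in the config above, one heavily hedged workaround sketch is to have Terraform ignore the attributes that keep flip-flopping on the ENI, so it keeps whichever secondary IP AWS assigned:

resource "aws_network_interface" "tips_sql_1_secondary_ip" {
  # ... arguments as above ...

  lifecycle {
    prevent_destroy = true
    # Assumption: the add/remove cycle comes from private_ips /
    # private_ips_count disagreeing between plans.
    ignore_changes  = ["private_ips", "private_ips_count"]
  }
}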

1 post - 1 participant


Help with getting values from lists and maps


I am trying to output values for an object which is a combination of a map and a list.
The variable looks like this:

variable "iam-policy-users-map" {
  default = {
    # following dataset does not work
    "policy3" = [{ key1 = "value1", key2 = "value2" }, { key1 = "value3", key2 = "value4" }]
    "policy4" = [{ key1 = "value5", key2 = "value6" }, { key1 = "value7", key2 = "value8" }]
    "policy5" = { key1 = "value5", key2 = "value6" }
  }
}

I would like to output all the policies with their keys and values, which is a list(map), e.g.:

policy3_value1_value2
policy3_value3_value4
policy4_value5_value6
policy4_value7_value8
policy5_value5_value6

Following is my local definition:

locals {
  user_policy_pairs = flatten([
    for policy, users in var.iam-policy-users-map : [ # list
      for value1, value2 in users : { # map
        policy = policy
        value1 = value1
        value2 = value2
      }
    ]
  ])
}

Following is the output block I have, which does not produce the expected output:

output "association-map" {
  value = {
    for obj in local.user_policy_pairs :
    "${obj.policy}_${obj.value1}_${obj.value2}" => [obj.policy, obj.value1, obj.value2]
  }
}

Instead, I get the following error:

obj.value2 is object with 2 attributes

Eventually, I want to use the object to create dynamic content in the following manner:

resource "resourcename" r1{
....
....
dynamic "element"{
   for_each=local.user_policy_pairs
   iterator=next
   }
   content {
       policy = next.value.policy
       value1 = next.value.value1
       value2 = next.value.value2
   }
}
...
...

Can someone help?
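A hedged sketch of one way to get there: the inner for has to walk the list of maps (so policy5 would need to be wrapped as a one-element list), and each element's key1/key2 are then read directly:

locals {
  user_policy_pairs = flatten([
    for policy, entries in var.iam-policy-users-map : [
      # entries is assumed to be a list of maps; each element carries key1/key2
      for entry in entries : {
        policy = policy
        value1 = entry.key1
        value2 = entry.key2
      }
    ]
  ])
}

output "association-map" {
  value = {
    for obj in local.user_policy_pairs :
    "${obj.policy}_${obj.value1}_${obj.value2}" => [obj.policy, obj.value1, obj.value2]
  }
}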

1 post - 1 participant


Getting timeout error in Jenkins pipeline for synthetics monitor (using terraform code)


Terraform Version

Terraform v0.12.24

NewRelic Provider Version

provider.newrelic v1.18.0

Affected Resource(s)

newrelic_synthetics_monitor

Issue

I have set up New Relic monitoring using Terraform, with a pipeline job configured in Jenkins. Resource creation works fine, but when we execute the job a second time without changes in the code, refreshing state throws errors for random Synthetics URLs (ideally it should show the “No changes. Infrastructure is up-to-date.” message). The Jenkins server is configured in a cloud environment. I am facing this issue on both Windows-based and Linux-based Jenkins.

Note: the same code works fine on my local Jenkins (Windows-based) and from the Terraform command line. I already have admin access to the New Relic account. I have also checked URL access using the command below and it looks good.

curl -v  -H "X-Api-Key:MyApiKey" https://synthetics.newrelic.com/synthetics/api/v4/monitors/2ce45bd5-f98a-4c4a-b703-111114b920a9

Error Logs:

2020/05/21 11:39:04 [ERROR] : eval: *terraform.EvalRefresh, err: GET https://synthetics.newrelic.com/synthetics/api/v4/monitors/2ce45bd5-f98a-4c4a-b703-111114b920a9 giving up after 4 attempts
2020/05/21 11:39:04 [ERROR] : eval: *terraform.EvalSequence, err: GET https://synthetics.newrelic.com/synthetics/api/v4/monitors/2ce45bd5-f98a-4c4a-b703-111114b920a9 giving up after 4 attempts

2020/05/21 11:39:31 [TRACE] vertex “provider.newrelic (close)”: evaluating
2020/05/21 11:39:31 [TRACE] [walkRefresh] Entering eval tree: provider.newrelic (close)
2020/05/21 11:39:31 [TRACE] : eval: *terraform.EvalCloseProvider
2020/05/21 11:39:31 [TRACE] GRPCProvider: Close
2020-05-21T11:39:31.499Z [DEBUG] plugin: plugin process exited: path=/var/jenkins_home/workspace/wdms-amg-newrelic-infra/.terraform/plugins/linux_amd64/terraform-provider-newrelic_v1.18.0_x4 pid=1769
2020-05-21T11:39:31.499Z [DEBUG] plugin: plugin exited
2020/05/21 11:39:31 [TRACE] [walkRefresh] Exiting eval tree: provider.newrelic (close)
2020/05/21 11:39:31 [TRACE] vertex “provider.newrelic (close)”: visit complete
2020/05/21 11:39:31 [TRACE] dag/walk: upstream of “root” errored, so skipping
2020/05/21 11:39:31 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2020/05/21 11:39:31 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock

Error: GET https://synthetics.newrelic.com/synthetics/api/v4/monitors/2ce45bd5-f98a-4c4a-b703-111114b920a9 giving up after 4 attempts

2020-05-21T11:39:31.503Z [DEBUG] plugin: plugin process exited: path=/var/jenkins_home/workspace/wdms-amg-newrelic-infra/.terraform/plugins/linux_amd64/terraform-provider-newrelic_v1.18.0_x4 pid=1761
2020-05-21T11:39:31.503Z [DEBUG] plugin: plugin exited
[Pipeline] }
[Pipeline] // ansiColor
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Terraform Apply)
Stage “Terraform Apply” skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

Thanks,
Selvan.

1 post - 1 participant


How to add tags to the created snapshot when using aws_ami_from_instance resource


We are using Terraform to create AWS AMIs from instances with the aws_ami_from_instance resource.

The resource supports adding tags to the created AMI, but I couldn’t find a parameter for the tags of the created snapshot.

Is there a way to provide aws_ami_from_instance with a map of tags to be assigned to the AMI’s snapshot?

1 post - 1 participant


How to destroy and recreate a resource or a workspace from TF Enterprise


Hi,
I’d like to destroy (not delete) either a set of resources or a workspace managed on TF Enterprise and linked to a VCS. In a CLI-driven workflow, it’s enough to simply go to the appropriate directory in the hierarchy and issue from the command line:

$ terraform destroy
$ terraform apply

How can I do that with a (workspace + VCS) driven workflow?

1 post - 1 participant


Why do I get VMExtensionProvisioningError. Failed to download all specified files?


Why do I get VMExtensionProvisioningError. Failed to download all specified files?

provider "azurerm" {
    version = "~>1.44.0"
}

# Create a new resource group
resource "azurerm_resource_group" "rg" {
    name     = "myTFResourceGroup"
    location = "eastus2"
    
    tags = {
        Environment = "Terraform Getting Started"
        Team = "DevOps"   
    }
}
variable "prefix" {
  default = "vmtrial"
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "main" {
  name                = "${var.prefix}-publicip"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
}

resource "azurerm_network_security_group" "main" {
  name                = "${var.prefix}-nsg"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  

  security_rule {
    name                       = "test123"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags = {
    environment = "Production"
  }
}

resource "azurerm_network_interface" "main" {
  name                      = "${var.prefix}-nic"
  location                  = azurerm_resource_group.rg.location
  resource_group_name       = azurerm_resource_group.rg.name
  network_security_group_id = azurerm_network_security_group.main.id

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.main.id
  }
}

resource "azurerm_storage_account" "main" {
  name                     = "*******************"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "main" {
  name                  = "vhds"
  resource_group_name   = azurerm_resource_group.rg.name
  storage_account_name  = azurerm_storage_account.main.name
  container_access_type = "private"
}



resource "azurerm_virtual_machine" "main" {
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.main.id]
  vm_size               = "Standard_D2s_v3"

  # Uncomment this line to delete the OS disk automatically when deleting the VM
   delete_os_disk_on_termination = true

  # Uncomment this line to delete the data disks automatically when deleting the VM
   delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }
  storage_os_disk {
    name              = "myosdisk1"
    vhd_uri       = "${azurerm_storage_account.main.primary_blob_endpoint}${azurerm_storage_container.main.name}/myosdisk1.vhd"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    #managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name  = "hostname"
    admin_username = "*********"
    admin_password = "*************"
  }

  os_profile_windows_config{
    provision_vm_agent = true
  }
  
  tags = {
    Owner = "staging"
  }
}

resource "azurerm_virtual_machine_extension" "test" {
  name                 = "hostname"
  location             = azurerm_resource_group.rg.location
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_machine_name = azurerm_virtual_machine.main.name
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  protected_settings = <<PROTECTED_SETTINGS
    {
      "commandToExecute": "powershell.exe -Command \"./dservicerole.ps1; exit 0;\""
    }
  PROTECTED_SETTINGS

  settings = <<SETTINGS
    {
        "fileUris": [
          "https://github.com/Aditya0311/Scripts/blob/master/dservicerole.ps1"
        ]
    }
  SETTINGS
}

This is the error that I receive:

Code=“VMExtensionProvisioningError” Message="VM has reported a failure when processing extension ‘hostname’. Error message: “Failed to download all specified files. Exiting. Error Message: The request was aborted: The connection was closed unexpectedly.”\r\n\r\nMore information on troubleshooting is available at https://aka.ms/VMExtensionCSEWindowsTroubleshoot "
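Not certain this is the cause, but the fileUris entry points at the GitHub web page (…/blob/…) rather than the raw file, which the Custom Script Extension cannot download as a script. A sketch of the adjusted settings, assuming the repository is public:

resource "azurerm_virtual_machine_extension" "test" {
  # ... other arguments as above ...

  settings = <<SETTINGS
    {
        "fileUris": [
          "https://raw.githubusercontent.com/Aditya0311/Scripts/master/dservicerole.ps1"
        ]
    }
  SETTINGS
}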

1 post - 1 participant



Terraform Cloud run limit


This may be a feature request, but as TFC only supports one run at a time, is there a plan to increase the limit to allow more simultaneous runs?

1 post - 1 participant


Dynamic block with for_each resource


Hi, I’m having some trouble with dynamic blocks used in a for_each resource:


locals {
  services = {
    api = {
      container_port = "4000"
      service_discovery_enabled = true
      application_loadbalancer = true
    }
    auth = {
      container_port = "4001"
      service_discovery_enabled = true
    }
    post = {
      container_port = "4003"
      service_discovery_enabled = true
      application_loadbalancer = true
    }
  }
}


resource "aws_ecs_service" "this" {
  for_each = local.services

  name          = each.key
  cluster       = aws_ecs_cluster.this.name
  desired_count = lookup(each.value, "replicas", "1")
  launch_type   = "FARGATE"

  task_definition = "${aws_ecs_task_definition.this[each.key].family}:${max(
    aws_ecs_task_definition.this[each.key].revision, data.aws_ecs_task_definition.this[each.key].revision
  )}"

  deployment_minimum_healthy_percent = 100
  deployment_maximum_percent         = 200

  network_configuration {
    security_groups = [
      aws_security_group.services[each.key].id,
      aws_security_group.services_dynamic[each.key].id
    ]

    subnets          = var.vpc_create_nat ? local.vpc_private_subnets_ids : local.vpc_public_subnets_ids
    assign_public_ip = ! var.vpc_create_nat
  }

  dynamic "load_balancer" {
  for_each = { for s,v in local.services: s => v if lookup(v, "application_loadbalancer", false) }

    content {
      target_group_arn = aws_lb_target_group.this[each.key].arn
      container_name   = local.services[each.key]
      container_port   = local.services[each.key].container_port
    }
  }

  depends_on = [aws_lb_target_group.this, aws_lb_listener.this, aws_ecs_task_definition.this]

  lifecycle {
    ignore_changes = [desired_count]
  }
}

and I can’t figure out how to make it work:
on …/…/…/terraform/tf-aws-fargate/main.tf line 55, in resource “aws_ecs_service” “this”:
55: target_group_arn = aws_lb_target_group.this[each.key].arn
|----------------
| aws_lb_target_group.this is object with 1 attribute “api”
| each.key is “auth”

The given key does not identify an element in this collection value.

Also, if I hardcode it:

  dynamic "load_balancer" {
  for_each = { for s,v in local.services: s => v if lookup(v, "application_loadbalancer", false) }

    content {
      target_group_arn = "arn:aws:elasticloadbalancing:eu-west-2:xxxxx:targetgroup/test-api-lb-tg/xxxx"
      container_name   = "api"
      container_port   = "4000"
    }
  }

the hardcoded load balancer gets attached to all services:

  # aws_ecs_service.this["auth"] will be created
  + resource "aws_ecs_service" "this" {
      + cluster                            = "1111"
      + task_definition                    = "auth"

      + load_balancer {
          + container_name   = "api"
          + container_port   = 4000
          + target_group_arn = "arn:aws:elasticloadbalancing:xxxx:targetgroup/test-api-lb-tg/xxxx"
        }

please help =(
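A hedged sketch of one fix, replacing the dynamic "load_balancer" block inside aws_ecs_service.this: drive it off the current service (each.value) rather than the whole services map, so a load_balancer block is emitted only when that service has application_loadbalancer = true, and only existing target groups are referenced (this assumes aws_lb_target_group.this is itself created with for_each over the load-balancer-enabled services):

  dynamic "load_balancer" {
    # Emits one block when this service wants a load balancer, none otherwise
    for_each = lookup(each.value, "application_loadbalancer", false) ? [each.key] : []

    content {
      target_group_arn = aws_lb_target_group.this[load_balancer.value].arn
      container_name   = load_balancer.value
      container_port   = local.services[load_balancer.value].container_port
    }
  }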

1 post - 1 participant


Data source with depends_on always forces a resource recreation


(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant


Error with code - help


Hello all,
I am using Terraform version
Terraform v0.12.25

  • provider.aws v2.62.0

I have a problem creating a security group resource using the code below:

resource "aws_security_group" "dev_sg" {
  name        = "dev_sg"
  description = "Allow port80 and ssh access to dev"
  vpc_id      = "{aws_vpc.Myvpc.id}"

  ingress {
    description = "http from vpc"
    from_port   = 22
    to_port     = 22
    protocol    = "ssh"
    cidr_blocks = ["0.0.0.0/0"]
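For reference, a hedged sketch of how that resource might look once it parses, assuming the intent is to allow SSH and HTTP: protocol must be a transport protocol such as "tcp" ("ssh" is not a valid value), the vpc_id reference needs to be an expression rather than a plain string, and the braces need closing.

resource "aws_security_group" "dev_sg" {
  name        = "dev_sg"
  description = "Allow port 80 and ssh access to dev"
  vpc_id      = aws_vpc.Myvpc.id # bare 0.12 expression, no quotes needed

  ingress {
    description = "ssh from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp" # "ssh" is not a valid protocol value
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "http from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}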

2 posts - 2 participants


No data source available for AWS API GW - http api


The corresponding REST API has a data source, aws_api_gateway_rest_api, but the docs don’t list one for the HTTP API, e.g. aws_api_gateway_http_api.

1 post - 1 participant

