Channel: Terraform - HashiCorp Discuss
Viewing all 11402 articles

Azure - virtual machine was not found


@idokaplan wrote:

Hi,

I deleted a VM via the Azure console and now I cannot redeploy it when running “apply”.

Error: Virtual Machine “XXXX1” (Resource Group “XXX-resource-group”) was not found

Please advise.
Thanks,
Ido
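One possible fix (an assumption, not from the thread): if the VM was deleted outside Terraform but is still tracked in state, removing the stale entry lets the next apply recreate it. The resource address below is hypothetical; check the real one with terraform state list.

```shell
# The VM was deleted out-of-band, so the state still references it.
# Find the stale address, remove it from state, then re-apply.
terraform state list | grep virtual_machine
terraform state rm azurerm_virtual_machine.example   # hypothetical address
terraform apply
```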

Posts: 1

Participants: 1



Wait for ALB Health Check Until After Codebuild Pushes Image To ECS


@MonteKrysto wrote:

Hey Everyone,

I’m new to Terraform. We are using CodePipeline and CodeBuild to deploy our app. The issue seems to be that TF creates all the resources, and when it creates the load balancer it starts the health check immediately, but the check fails because CodeBuild hasn’t finished pushing the image to ECS. Is there a way to delay the health check until after CodeBuild finishes?
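One approach worth trying (a sketch, not a confirmed fix): give the ECS service a health-check grace period so ALB health check results are ignored for a window after tasks start, covering the time CodeBuild needs to push the first image. All resource names below are hypothetical:

```hcl
# Sketch: tolerate failing ALB health checks while the first image lands.
resource "aws_ecs_service" "app" {
  name            = "app"                          # hypothetical names
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 1

  # Ignore ALB health check results for this long after a task starts.
  health_check_grace_period_seconds = 300

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "app"
    container_port   = 80
  }
}
```

Another common pattern is to seed the ECR repository with a known placeholder image so the very first deployment has something healthy to run.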

Posts: 1

Participants: 1


Cloud Functions environment variables


@nikhilbalekundargi wrote:

I have many Cloud Functions and I want an environment variable added to only one of them. I have created a Terraform module and use it to create multiple functions in main.tf. I want to add an environment variable (name = test) to the one Cloud Function for which a variable (var.cf_env) is true. The env variable should not be added to Cloud Functions where var.cf_env is false. How can I achieve this?
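A minimal sketch of one way to do this, assuming a Google Cloud Function resource and a boolean var.cf_env passed into the module (all names are illustrative):

```hcl
# Sketch: merge the extra variable in only when var.cf_env is true.
variable "cf_env" {
  type    = bool
  default = false
}

resource "google_cloudfunctions_function" "fn" {
  name    = "my-function"     # hypothetical
  runtime = "python37"

  environment_variables = merge(
    { OTHER_VAR = "always-set" },              # vars every function gets
    var.cf_env ? { name = "test" } : {}        # added only when cf_env is true
  )
}
```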

Posts: 1

Participants: 1


AWS Lambda creation fails with ValidationException


@hash1024 wrote:

Trying to create a very trivial Lambda: plan passes, but apply fails with JUST “ValidationException” and nothing else.

This is my TF code:

resource "aws_lambda_function" "odp_deployment" {
  function_name    = var.odp_deploy_lambda_name
  handler          = var.odp_deploy_lambda_handler
  runtime          = var.odp_deploy_lambda_runtime
  # filename         = var.odp_deploy_lambda_zip
  s3_bucket = "<REDUCTED>"
  s3_key    = "/lambda.zip"
  # source_code_hash = filebase64sha256(var.odp_deploy_lambda_zip)
  role             = "arn:aws:lambda:us-west-2:<REDUCTED>:function:deploy_odp_job"
  # memory_size      = var.odp_deploy_lambda_memory_size
  timeout          = 120   # Default is 3
}

This is what plan reports:

  # aws_lambda_function.odp_deployment will be created
  + resource "aws_lambda_function" "odp_deployment" {
      + arn                            = (known after apply)
      + function_name                  = "deployment_monitoring_1_odp_deploy"
      + handler                        = "deploy_odp_job.deploy_job"
      + id                             = (known after apply)
      + invoke_arn                     = (known after apply)
      + last_modified                  = (known after apply)
      + memory_size                    = 128
      + publish                        = false
      + qualified_arn                  = (known after apply)
      + reserved_concurrent_executions = -1
      + role                           = "arn:aws:lambda:us-west-2:<REDUCTED>:function:deploy_odp_job"
      + runtime                        = "python3.6"
      + s3_bucket                      = "<REDUCTED>"
      + s3_key                         = "/lambda.zip"
      + source_code_hash               = (known after apply)
      + source_code_size               = (known after apply)
      + timeout                        = 120
      + version                        = (known after apply)

      + tracing_config {
          + mode = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

… and then I get this:

aws_lambda_function.odp_deployment: Creating...

Error: Error creating Lambda function: ValidationException:
	status code: 400, request id: d7daf6c6-20d6-4bc6-be68-3f00d41d6223

  on lambdas.tf line 1, in resource "aws_lambda_function" "odp_deployment":
   1: resource "aws_lambda_function" "odp_deployment" {
$ terraform -version
Terraform v0.12.19
+ provider.aws v2.49.0
+ provider.random v2.2.1

Any pointers are appreciated!
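A hedged guess at the cause, based only on the snippet above: the role argument expects an IAM role ARN (arn:aws:iam::…:role/…), not a Lambda function ARN, and S3 keys normally have no leading slash. A corrected sketch with placeholder values:

```hcl
# Sketch with illustrative values -- the key changes are "role" (must be an
# IAM role ARN, not a Lambda function ARN) and "s3_key" (no leading slash).
resource "aws_lambda_function" "odp_deployment" {
  function_name = var.odp_deploy_lambda_name
  handler       = var.odp_deploy_lambda_handler
  runtime       = var.odp_deploy_lambda_runtime
  s3_bucket     = "my-bucket"                                      # placeholder
  s3_key        = "lambda.zip"
  role          = "arn:aws:iam::123456789012:role/odp_deploy_role" # placeholder
  timeout       = 120
}
```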

Posts: 2

Participants: 1


How to set cluster_version for redshift in order to use preview maintenance track


@ruipan wrote:

Since there is no option to set the “maintenance track” when creating a cluster, I am trying to set a cluster_version instead. However, I have tried a few cluster_versions listed in the Cluster Management Guide, and I get an error like this:

Error: InvalidParameterCombination: Cannot find Cluster version 1.0.12911
status code: 400, request id: 99a74232-5278-11ea-a6c3-999510296f83

How do I set cluster_version correctly in order to use the preview maintenance track? Any help will be appreciated.

Posts: 1

Participants: 1


Invalid version constraint with non registry URL


@prasadnh wrote:

How do I resolve the following error message?
Error: Invalid version constraint: Cannot apply a version constraint to module “xxxxxx-policy” (at xxxxx-policy.tf:81) because it has a non-Registry URL.

I have a non-null module version, but TFE still complains about an invalid version constraint.
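For context, version constraints only apply to registry modules; for a git or HTTP source, the usual workaround is to pin the revision in the source URL instead. A sketch with an illustrative URL:

```hcl
# "version" is only valid for registry-sourced modules. For a non-registry
# source, drop "version" and pin the revision in the URL (URL is illustrative).
module "xxxxxx-policy" {
  # version = "1.2.0"   # not allowed with a non-registry source

  source = "git::https://example.com/org/policy-module.git?ref=v1.2.0"
}
```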

Posts: 1

Participants: 1


Iterating help needed: map of lists, or list of maps, for subnet cidr+name matching?


@law wrote:

Hello everyone, and thanks in advance for any assistance you may be able to lend. I am just beating my head against a wall here. I want to avoid using a simple list to define subnets, because then I need to add a new resource block every time I build a new VPC with a different subnet name. For example:

With that example (taken more or less whole-cloth from the “terraform-aws-modules/vpc/aws” module), if I want to create a new subnet named “foo”, I need to build a whole new ‘resource’ block that iterates over a whole new list called “cidr_list_foo”, and it’s just… messy.

I want to be able to define a tree that matches my preferred subnet name to the various cidr_blocks associated with those subnets. For example, I’d like the end result to be as if I had defined my subnets with standard Terraform primitives like so:

But I don’t want to have to ‘unroll’ a loop like that. I don’t know what an appropriate data type would be, but I’m thinking it’s going to look something like a “list of maps” or a “map of lists”. If that’s the case, I just don’t know how to properly iterate over it. A “map of lists” might look something like:

mapped_subnets = {
    mgmt = {
      subnets = ["10.105.160.0/26", "10.105.160.64/26", "10.105.160.128/26", "10.105.160.192/26"]
    },
    app = {
      subnets = ["10.105.161.0/27", "10.105.161.32/27", "10.105.161.64/27", "10.105.161.96/27"]
    }
  }

whereas a “list of maps” would be:

mapped_subnets = [
{      
      name = "mgmt" 
      subnets = ["10.105.160.0/26", "10.105.160.64/26", "10.105.160.128/26", "10.105.160.192/26"]
    },
     {
      name = "app"
      subnets = ["10.105.161.0/27", "10.105.161.32/27", "10.105.161.64/27", "10.105.161.96/27"]
    }
  ]

Either way, I suspect I’m barking up the wrong tree because I can’t get the iteration right. If “for_each” could be nested, this would be straightforward, but it can’t, so it’s not. Any thoughts on how I can better structure my data, and then iterate over it, so I can be just a bit more dynamic?
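One possible shape (a sketch, not the only answer): keep the “map of lists” and flatten it into a map keyed by name and index, which a single for_each can consume. Resource names and the VPC reference are hypothetical:

```hcl
# Sketch: flatten {name => [cidrs]} into {"name-index" => {name, cidr}}
# so one resource block covers every subnet of every name.
locals {
  mapped_subnets = {
    mgmt = ["10.105.160.0/26", "10.105.160.64/26"]
    app  = ["10.105.161.0/27", "10.105.161.32/27"]
  }

  subnet_objects = {
    for pair in flatten([
      for name, cidrs in local.mapped_subnets : [
        for i, cidr in cidrs : { key = "${name}-${i}", name = name, cidr = cidr }
      ]
    ]) : pair.key => pair
  }
}

resource "aws_subnet" "this" {
  for_each   = local.subnet_objects
  vpc_id     = aws_vpc.main.id        # hypothetical VPC reference
  cidr_block = each.value.cidr
  tags       = { Name = each.value.name }
}
```

Adding a new subnet name then only requires a new entry in the map, not a new resource block.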

Posts: 3

Participants: 2


Unable Import Azure Network Security group?


@hungnm2527 wrote:

Currently, I am using “terraform import” to put all my existing infrastructure under the control of Terraform.
Everything went fine until I tried to import an Azure Network Security Group.
Even though I can see it in the Azure Portal and found it on resources.azure.com, the result is always “not found”.
The error message is: resource address “azurerm_subnet_network_security_group.xx-xxx-xxxx” does not exist in the configuration.
Please share if anyone has an idea how to solve this issue.
Thanks.
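The error suggests the target resource block is missing from the configuration: terraform import only maps an existing Azure object onto a block you have already written. A sketch with hypothetical names:

```shell
# terraform import needs a matching resource block in your .tf files first,
# for example (names are hypothetical):
#
#   resource "azurerm_network_security_group" "example" {
#     name                = "my-nsg"
#     location            = "westeurope"
#     resource_group_name = "my-rg"
#   }
#
# then import using that address and the full Azure resource ID:
terraform import azurerm_network_security_group.example \
  /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/networkSecurityGroups/my-nsg
```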

Posts: 1

Participants: 1



Rancher2 provider - Cluster creation - Existing nodes


@raghuveerakumar wrote:

I am using the Rancher2 provider to create a K8s cluster through the rancher2_node_template resource (which creates new K8s cluster nodes), but I have a new requirement: I want to create a K8s cluster from existing nodes. That is, I already have Azure VMs and want to add these nodes when creating the K8s cluster. Please help me with this; is there any way to achieve it?

Posts: 1

Participants: 1


For_each value depends on resource attributes that cannot be determined until apply


@vmorkunas wrote:

I have a root module which calls child module with this call:

module "routing_extapp_data" {
    source = "../../modules/Stack/Routing"
    routing = { 
        src_rt_ids = module.extapp_vpc.private_route_table_ids
        dst_rt_ids = module.data_vpc.private_route_table_ids
        src_cidr = module.extapp_vpc.vpc.cidr_block
        dst_cidr = module.data_vpc.vpc.cidr_block
        peering_connection_id = module.peering_extapp_data.peering_connection_ids
        name = "extapp-data"
        src_provider = "aws.stack"
        dst_provider = "aws.stack"
    }
    providers = {
        aws.stack = aws.stack
        aws.ops = aws.ops
    }
}

In child module I have:

resource "aws_route" "src_stack" {
    timeouts {
        create = "5m"
        delete = "5m"
    }
    for_each = {for rt_id in var.routing.src_rt_ids: format("%s-%s", var.routing.name, rt_id) => rt_id if var.routing.src_provider == "aws.stack"}

    provider = aws.stack
    route_table_id = each.value
    destination_cidr_block = var.routing.dst_cidr
    vpc_peering_connection_id = var.routing.peering_connection_id

}

However it gives me this error:

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

Logs says:

2020/02/19 14:49:38 [TRACE] evalVariableValidations: not active for module.routing_extapp_intapp.var.routing, so skipping

2020/02/19 14:49:38 [TRACE] [walkPlan] Exiting eval tree: module.routing_extapp_intapp.var.routing

2020/02/19 14:49:38 [TRACE] vertex “module.routing_extapp_intapp.var.routing”: visit complete

2020/02/19 14:49:38 [TRACE] dag/walk: visiting “module.routing_extapp_intapp.aws_route.dst_stack”

2020/02/19 14:49:38 [TRACE] vertex “module.routing_extapp_intapp.aws_route.dst_stack”: starting visit (*terraform.NodePlannableResource)

2020/02/19 14:49:38 [TRACE] vertex “module.routing_extapp_intapp.aws_route.dst_stack”: evaluating

2020/02/19 14:49:38 [TRACE] [walkPlan] Entering eval tree: module.routing_extapp_intapp.aws_route.dst_stack

2020/02/19 14:49:38 [TRACE] module.routing_extapp_intapp: eval: *terraform.EvalWriteResourceState

2020/02/19 14:49:38 [WARN] Provider “registry.terraform.io/-/aws” produced an invalid plan for module.peering_extapp_intapp.aws_vpc_peering_connection_accepter.stack_accept_peering[0], but we are tolerating it because it is using the legacy plugin SDK.

The following problems may be the cause of any confusing errors from downstream operations:

  • .accepter: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead

  • .requester: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead

2020/02/19 14:49:38 [TRACE] module.peering_extapp_intapp: eval: *terraform.EvalCheckPreventDestroy

2020/02/19 14:49:38 [TRACE] module.peering_extapp_intapp: eval: *terraform.EvalWriteState

2020/02/19 14:49:38 [TRACE] EvalWriteState: writing current state object for module.peering_extapp_intapp.aws_vpc_peering_connection_accepter.stack_accept_peering[0]

2020/02/19 14:49:38 [ERROR] module.routing_extapp_intapp: eval: *terraform.EvalWriteResourceState, err: Invalid for_each argument: The “for_each” value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.

2020/02/19 14:49:38 [TRACE] [walkPlan] Exiting eval tree: module.routing_extapp_intapp.aws_route.dst_stack

2020/02/19 14:49:38 [TRACE] vertex “module.routing_extapp_intapp.aws_route.dst_stack”: visit complete

2020/02/19 14:49:38 [TRACE] GRPCProvider: Close
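One commonly suggested workaround (a sketch; it only helps when the number of route tables is known at plan time): key the for_each map by the list index rather than by the unknown route-table IDs, so the keys themselves are predictable:

```hcl
# Sketch: keys come from indexes (known at plan time); the unknown IDs are
# only used as values, which for_each tolerates.
resource "aws_route" "src_stack" {
  for_each = var.routing.src_provider == "aws.stack" ? {
    for i in range(length(var.routing.src_rt_ids)) :
    "${var.routing.name}-${i}" => i
  } : {}

  provider                  = aws.stack
  route_table_id            = var.routing.src_rt_ids[each.value]
  destination_cidr_block    = var.routing.dst_cidr
  vpc_peering_connection_id = var.routing.peering_connection_id
}
```

Failing that, the -target workaround from the error message (apply the VPC modules first) is the documented escape hatch.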

Posts: 3

Participants: 2


Provider endpoint


@skydion wrote:

Hello

I’m trying to write a custom provider and have a question: is it possible to configure the provider endpoint from a resource object?

For example, we have an API endpoint, and when some resource is created by Terraform, I need to run a task on that API endpoint over SSH.

Is it possible to do something like remote-exec, but for the provider endpoint? For example, I need something like this:

resource "null_resource" "ds" {
  triggers = {
    datastore_identifier = join(",", test_data_store.ds.*.identifier)
  }

  connection {
    host = provider.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      ## Login into CP over SSH and run some commands
    ]
  }
}

Posts: 1

Participants: 1


Maintenance status TLS provider?


@carlpett wrote:

Hey,
The TLS provider is supposedly maintained by Hashicorp. However, there are quite a few PRs (and issues) untouched for months. Is this a similar situation to how the Kubernetes provider was in a while ago?
I was considering making a PR on something I think would fit well in that provider (generating DH params), but then it would be good to know it would have a chance of getting reviewed/merged.

Posts: 1

Participants: 1



With Terraform cloud can I have Terraform state that behaves more like cloudformation stacks?


@red8888 wrote:

I asked this on github: https://github.com/hashicorp/terraform/issues/23807

And got this response:

The “Apply a Run” operation lets you run Terraform against the latest configuration known to Terraform Cloud (by omitting the explicit configuration version id). You can set is-destroy to instruct Terraform Cloud to run terraform destroy instead of terraform apply .

I’m confused about how I do this. Is there a CLI command for it? To test, I set up Terraform Cloud and a git provider. I ran an apply and I see the state. Now how can I have Terraform Cloud destroy that state and workspace (without having to supply a config)?

I want to be able to delete a Terraform workspace/state like I can delete a CloudFormation stack.
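If the goal is to trigger that destroy run without re-supplying configuration, the Terraform Cloud Runs API accepts is-destroy on a new run. A sketch with placeholder workspace ID and token:

```shell
# Queue a destroy run against the workspace's latest known configuration.
# TFC_TOKEN and ws-XXXXXXXX are placeholders.
curl -s \
  --header "Authorization: Bearer $TFC_TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{
    "data": {
      "attributes": { "is-destroy": true, "message": "Destroy via API" },
      "relationships": {
        "workspace": { "data": { "type": "workspaces", "id": "ws-XXXXXXXX" } }
      }
    }
  }' \
  https://app.terraform.io/api/v2/runs
```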

Posts: 1

Participants: 1


Provision Infrastructure in OCI(Oracle Cloud Infrastructure)


@joshimithilesh002 wrote:

I have created scripts which contain 1 VCN and 2 subnets (public and private). I have 1 Windows instance (bastion server) in my public subnet and 1 in the private subnet.
When I try to connect to my private-subnet instance from the bastion host to attach a block volume, I am not able to connect; I get a connection error. I also opened the network (0.0.0.0/0) but still cannot connect.
I searched on Google as well, but couldn’t find anything.
So if anyone can help, please do.

Posts: 1

Participants: 1



Join the domain depending on what network zone I pick


@roccas-86 wrote:

Hello,

I am trying to find a way to join the domain depending on which network zone I pick.
I have 3 zones: ADM, RES and DMZ.

If I pick ADM or RES I want to join the domain, but if I pick DMZ I don’t want to join the domain.
I am using Ansible as well.

My script looks like this now, where I use join_domain = "${var.domain}" to join the domain.

data "vsphere_datacenter" "dc" {
  name = "Datacenter1"
}

data "vsphere_datastore_cluster" "datastore_cluster" {
  name          = "StorageCluster2"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_compute_cluster" "cluster" {
  name          = "Servercluster"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

#data "vsphere_network" "network" {
#  name          = "${lookup(var.portgroup, var.vlan)}"
#  datacenter_id = "${data.vsphere_datacenter.dc.id}"
#}

data "vsphere_virtual_machine" "template" {
  name          = "Windows_Server_2016_template_marsaf"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_virtual_machine" "vm" {
  name                 = "${var.vmname}"
  resource_pool_id     = "${data.vsphere_compute_cluster.cluster.resource_pool_id}"
  datastore_cluster_id = "${data.vsphere_datastore_cluster.datastore_cluster.id}"

  num_cpus = 2
  memory   = 4096
  guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

  firmware = "${data.vsphere_virtual_machine.template.firmware}"

  cpu_hot_add_enabled    = "true"
  cpu_hot_remove_enabled = "true"
  memory_hot_add_enabled = "true"

  efi_secure_boot_enabled = "true"

  scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"

  network_interface {
    network_id   = "${lookup(var.portgroup, var.vlan)}"
    adapter_type = "${data.vsphere_virtual_machine.template.network_interface_types[0]}"
  }

  disk {
    label            = "disk0"
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label            = "disk1"
    unit_number      = 1
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"

    customize {
      windows_options {
        computer_name         = "${var.vmname}"
        admin_password        = "${var.localadminpw}"
        join_domain           = "${var.domain}"
        domain_admin_user     = "${var.domainuser}"
        domain_admin_password = "${var.domainpw}"
        product_key           = "WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY"
        time_zone             = "110"
      }

      network_interface {
        ipv4_address = "${var.ipaddr}"
        ipv4_netmask = "${lookup(var.netmask, var.vlan)}"
      }

      ipv4_gateway    = "${lookup(var.ipgw, var.vlan)}"
      dns_server_list = ["10.127.0.142", "10.127.0.143"]
    }
  }
}
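One way to make the join conditional (a sketch using the post's variable names; the zone list and the WORKGROUP fallback are assumptions): compute a boolean from the zone and null out the domain arguments when it is false, since arguments set to null are treated as unset:

```hcl
# Sketch: only ADM and RES machines join the domain.
locals {
  join_zones = ["ADM", "RES"]                       # assumption
  do_join    = contains(local.join_zones, var.vlan)
}

# Then, inside customize { windows_options { ... } }:
#   join_domain           = local.do_join ? var.domain : null
#   domain_admin_user     = local.do_join ? var.domainuser : null
#   domain_admin_password = local.do_join ? var.domainpw : null
#   workgroup             = local.do_join ? null : "WORKGROUP"   # assumption
```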

Posts: 1

Participants: 1


Variable_validation feature


@manishingole-coder wrote:

Hello

I was expecting the variable validation feature to be included in the new Terraform version 0.12.21; it was an experimental opt-in feature in version 0.12.20. I feel this is a very useful feature when you want to validate variable input. The error message we write gives the user a clear view of what they need to provide as input.

Posts: 1

Participants: 1


Use variables from gitlab to variables file


@darrellburgher wrote:

Hi, I have a bunch of .tf files committed to gitlab.com. I set variables in the ci/cd setting and pass them on to my .gitlab-ci.yml as per normal.
In my .gitlab-ci.yml I have the following:

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
    - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
    - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
    - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
    - 'AWS_CLUSTER_NAME=${AWS_CLUSTER_NAME}'

I also have a variables.tf file with the following:

variable "cluster-name" {
  default     = "name_from_gitlab ci"
  type        = string
  description = "The name of your EKS Cluster"
}

How do I get the variable from gitlab-ci into the “name_from_gitlab ci” value in the variables file?
I have tried using TF_VAR_AWS_CLUSTER_NAME in the variables.tf file, but it seems you can’t pull a variable from a variable in this file.

Terraform v0.12.21

  • provider.aws v2.49.0
  • provider.http v1.1.1
  • provider.null v2.1.2
  • provider.random v2.2.1

Regards
Darrell
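For what it's worth (an assumption about the fix, not from the thread): Terraform only reads environment variables named TF_VAR_<exact variable name>, and a hyphenated name like cluster-name is hard to export from a shell, so renaming the variable to cluster_name makes the mapping straightforward:

```shell
# Assumes the variable is renamed to "cluster_name" in variables.tf.
# In .gitlab-ci.yml, map the CI variable before running terraform:
export TF_VAR_cluster_name="$AWS_CLUSTER_NAME"
terraform plan    # var.cluster_name now comes from the GitLab CI setting
```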

Posts: 1

Participants: 1



Fail to deploy mongo atlas


@simond-b2 wrote:

Hi guys, before I raise this as a bug I thought I would ask here first, in case I am doing something stupid (which wouldn’t be the first time).

Terraform version 0.12.20
mongodbatlas provider version 0.4.0

While deploying to an Azure environment, we attempt to create a MongoDB Atlas cluster via Terraform, but get the following error:

Error: error getting Project IP Whitelist information: whiteListEntry is invalid because must be set

The following entry validates OK (IP address redacted), but generates the above error at run time:

resource "mongodbatlas_project_ip_whitelist" "test" {
  project_id = mongodbatlas_project.test[0].id

  cidr_block = "1.2.3.4/32"
  comment    = "Test whitelist (terraform managed)"
}

Any clues suggestions or pointers gladly received!

Posts: 1

Participants: 1

