Channel: Terraform - HashiCorp Discuss
Viewing all 11357 articles

Does Terraform communicate with Azure API using TLS 1.2?


@ayusmadi wrote:

As per the title, our organization was asked to address this security question.

Does Terraform communicate with Azure API using TLS 1.2?

Posts: 1

Participants: 1

Read full topic


Unable to get memory or num_cpus from template


@harshit0921 wrote:

While allocating memory and num_cpus for a virtual machine, I am trying to extract both values from the template, but I get an error when doing so.
Here is my resource:

resource "vsphere_virtual_machine" "vm" {

  count = var.vm-count

  name = "${var.vm-name}-${count.index + 1}"

  resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id

  datastore_id = data.vsphere_datastore.datastore.id

  folder = var.vm-folder

  num_cpus = data.vsphere_virtual_machine.template.num_cpus

  memory = data.vsphere_virtual_machine.template.memory

  guest_id = data.vsphere_virtual_machine.template.guest_id

  scsi_type = data.vsphere_virtual_machine.template.scsi_type

  firmware = data.vsphere_virtual_machine.template.firmware

  network_interface {

    network_id = data.vsphere_network.network.id

    adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]

  }

  disk {

      label            = "disk0"

      size             = data.vsphere_virtual_machine.template.disks.0.size

      eagerly_scrub    = data.vsphere_virtual_machine.template.disks.0.eagerly_scrub

      thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned

    }

  clone {

    template_uuid = data.vsphere_virtual_machine.template.id

    customize {

        windows_options {

          auto_logon_count = 2

          computer_name = "HarshitDev"

          organization_name = "Philips"

        }

      }

  }

}

And here is the error I am getting:

Error: Unsupported attribute

  on main.tf line 39, in resource "vsphere_virtual_machine" "vm":
  39: num_cpus = data.vsphere_virtual_machine.template.num_cpus

This object has no argument, nested block, or exported attribute named
"num_cpus".


Error: Unsupported attribute

  on main.tf line 40, in resource "vsphere_virtual_machine" "vm":
  40: memory = data.vsphere_virtual_machine.template.memory

This object has no argument, nested block, or exported attribute named
"memory".

Posts: 1

Participants: 1

Read full topic

Using git bash in Windows machine and terragrunt init failed


@louiekwan wrote:

It should not fail, but I get the following:

provider "aws": failed to create .terraform\plugins\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe: open .terraform\plugins\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe: The system cannot find the path specified…

The plugin cache dir is defined as:
TF_PLUGIN_CACHE_DIR=c:/Users/louie.kwan/.terraform.d/plugin-cache

Actual behavior

2020/04/14 22:16:57 [DEBUG] installing aws 2.57.0 to .terraform\plugins\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe from local cache c:\Users\louie.kwan.terraform.d\plugin-cache\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe

Error installing provider "aws": failed to create .terraform\plugins\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe: open .terraform\plugins\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe: The system cannot find the path specified…

Terraform analyses the configuration and state and automatically downloads
plugins for the providers used. However, when attempting to download this
plugin an unexpected error occurred.

This may be caused if for some reason Terraform is unable to reach the
plugin repository. The repository may be unreachable if access is blocked
by a firewall.

If automatic installation is not possible or desirable in your environment,
you may alternatively manually install plugins by downloading a suitable
distribution package and placing the plugin’s executable file in the
following directory:
terraform.d/plugins/windows_amd64

Error: failed to create .terraform\plugins\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe: open .terraform\plugins\windows_amd64\terraform-provider-aws_v2.57.0_x4.exe: The system cannot find the path specified.
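For reference, the cache directory can also be set in the CLI configuration file instead of the environment variable; a sketch mirroring the path above (note Terraform does not create this directory — it must already exist):

```hcl
# %APPDATA%\terraform.rc on Windows (or ~/.terraformrc elsewhere)
plugin_cache_dir = "c:/Users/louie.kwan/.terraform.d/plugin-cache"
```

Forward slashes are accepted on Windows, which avoids backslash-escaping issues like the one visible in the DEBUG line above.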

Posts: 1

Participants: 1

Read full topic

Error: Cycle - Two AWS resources that reference each other


@gadgetmerc wrote:

I have two AWS SQS resources that need to reference each other. Unfortunately that creates a circular dependency for TF. Is there any way to work around this?

My initial thought was a conditional based on if the ARN was already computed. Use the computed value or use a blank string. Then run a second apply to populate the values correctly. In theory it would work but I don’t think it’s a feature of TF.

Any suggestions would be appreciated. Thanks!
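One common workaround, since SQS ARNs follow a predictable format, is to build one side's ARN from the queue name instead of referencing the resource attribute. A hedged sketch — var.region and var.account_id are hypothetical inputs, and the redrive policy is only an example of where the ARN might be used:

```hcl
resource "aws_sqs_queue" "a" {
  name = "queue-a"
  # Reference queue B normally...
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.b.arn
    maxReceiveCount     = 4
  })
}

resource "aws_sqs_queue" "b" {
  name = "queue-b"
  # ...but build queue A's ARN by convention to break the cycle.
  redrive_policy = jsonencode({
    deadLetterTargetArn = "arn:aws:sqs:${var.region}:${var.account_id}:queue-a"
    maxReceiveCount     = 4
  })
}
```

This works because the ARN never depends on queue A actually existing first, so Terraform sees no cycle.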

Posts: 1

Participants: 1

Read full topic

How to force a sequential execution of child modules in main.tf


@mizunos wrote:

I am trying to set up sequential execution of the following child modules in main.tf, where add-membership depends on the gitlab-project-ids output list from add-gitlab-project, but it is not working for me.

module "add-gitlab-project" {
    source = "<module location>"
    modinput_token = var.tplinput_token
    modinput_input-file = var.tplinput_projects
    modinput_tags = var.tplinput_tags
    modinput_shared_groups = var.tplinput_shared_groups
}

module "add-membership" {
    source = "<module location>"
    modinput_token = var.tplinput_token
    modinput_input-file = var.tplinput_users
    modinput_access_level = var.tplinput_useraccess
    modinput_projectid = module.add-gitlab-project.gitlab-project-ids
}

terraform graph shows that both modules are still loaded in parallel, and since I use for_each in add-membership to loop through all the project_id/user pairs, it always errors out complaining that it does not know the number of project objects to use.

Each child module executes flawlessly on its own, even with the for_each construct.
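The usual way out is to make the for_each keys derivable from static input, so the instance count is known at plan time, while only the values come from the other module's output. A rough sketch, assuming the project names are available as a list and the gitlab-project-ids output is (or can be made) a map keyed by project name — all of that is an assumption here:

```hcl
module "add-membership" {
  source                = "<module location>"
  modinput_token        = var.tplinput_token
  modinput_input-file   = var.tplinput_users
  modinput_access_level = var.tplinput_useraccess
  # Keys come from a static variable, so Terraform knows how many
  # instances to plan; only the values are computed at apply time.
  modinput_projectid = {
    for name in var.tplinput_projects :
    name => module.add-gitlab-project.gitlab-project-ids[name]
  }
}
```

The dependency on add-gitlab-project is still expressed through the value references, so ordering is preserved without the unknown-count error.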

Posts: 3

Participants: 3

Read full topic

Terraform provisioner "file" and "remote-exec" not working


@tdubb123 wrote:

I can't get the aws instance provisioned because it's stuck and failing at this block.

Any idea why it is failing?

Copy Scripts to EC2 instance

provisioner "file" {
  source      = "${path.module}/activedirectory/"
  destination = "C:\\scripts"
  connection {
    host     = coalesce(self.public_ip, self.private_ip)
    type     = "winrm"
    user     = "Administrator"
    password = var.admin_password
    agent    = false
  }
}

Set Execution Policy to Remote-Signed, Configure Active Directory

provisioner "remote-exec" {
  connection {
    host     = coalesce(self.public_ip, self.private_ip)
    type     = "winrm"
    user     = "Administrator"
    password = var.admin_password
    agent    = false
    https    = true
    port     = 5986
    insecure = true
  }
  inline = [
    "powershell.exe Set-ExecutionPolicy RemoteSigned -force",
    "powershell.exe -version 4 -ExecutionPolicy Bypass -File C:\\scripts\\01-ad_init.ps1",
  ]
}
}
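One thing that stands out: the "file" provisioner's connection block lacks the https/port/insecure settings that the remote-exec one has, so it will use WinRM's defaults (HTTP on port 5985) and can hang if only HTTPS/5986 is open. A sketch of an aligned connection block:

```hcl
# Same WinRM settings in both provisioners, so the file upload and
# the remote-exec both use HTTPS on 5986.
connection {
  host     = coalesce(self.public_ip, self.private_ip)
  type     = "winrm"
  user     = "Administrator"
  password = var.admin_password
  https    = true
  port     = 5986
  insecure = true
}
```

Also note that `\s` is not a valid escape in an HCL string, so Windows paths like C:\scripts need doubled backslashes (C:\\scripts).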

Posts: 1

Participants: 1

Read full topic

Issues with Terraform JSON


@Unfairz wrote:

Hello folks! I have been dealing with this problem for quite some time now and hope I am missing something really simple at this point!

Basically, when I use the JSON syntax for my Terraform files, I cannot get the disk portion to function correctly, and the error given is quite odd. When I use the HCL2 format, it works like a charm!

I am not using one of the well-known providers but the libvirt one, for creating VMs with KVM:

{
  "data": [
    {
      "template_file": [
        {
          "user_data": [
            {
              "template": "${file(\"/home/terraform/config/cloud_init.cfg\")}"
            }
          ]
        }
      ]
    },
    {
      "template_file": [
        {
          "meta_data": [
            {
              "template": "${file(\"/home/terraform/config/network_config.cfg\")}"
            }
          ]
        }
      ]
    }
  ],
  "resource": [
    {
      "libvirt_volume": [
        {
          "vm-001": [
            {
              "format": "qcow2",
              "name": "vm-001.qcow2",
              "pool": "vm-storage",
              "source": "/home/terraform/images/CentOS-7.qcow2"
            }
          ]
        }
      ]
    },
    {
      "libvirt_cloudinit_disk": [
        {
          "cloudinit": [
            {
              "meta_data": "${data.template_file.meta_data.rendered}",
              "name": "cloudinit.iso",
              "pool": "vm-storage",
              "user_data": "${data.template_file.user_data.rendered}"
            }
          ]
        }
      ]
    },
    {
      "libvirt_domain": [
        {
          "vm-001": [
            {
              "autostart": "true",
              "cloudinit": "${libvirt_cloudinit_disk.cloudinit.id}",
              "memory": "2048",
              "name": "vm-001",
              "network_interface": [
                {
                   "bridge": "br0"
                }
              ],
              "disk": [
                {
                  "volume_id": "${libvirt_volume.vm-001.id}"
                }
              ],
              "running": "true",
              "vcpu": "2"
            }
          ]
        }
      ]
    }
  ]
}

This is my .tf.json file and the error given is:

Error: Incorrect attribute value type

  on vm.tf.json line 69, in resource[2].libvirt_domain[0].vm-001[0]:
  69:               "disk": [
  70:                 {
  71:                  "volume_id": "${libvirt_volume.vm-001.id}"
  72:                 }
  73:               ],

Inappropriate value for attribute "disk": element 0: attributes
"block_device", "file", "scsi", "url", and "wwn" are required.

If someone would give me a hand with this it would be greatly appreciated!

Posts: 1

Participants: 1

Read full topic

How to integrate Terraform Cloud in your CI pipeline


@kvrhdn wrote:

Hi, I’m looking for best practices and experiences with integrating Terraform Cloud in your CI pipeline.
In a nutshell: we use Terraform Cloud in VCS mode, but we would like to start a TF Cloud run when the CI pipeline is done, not immediately when the commit/merge happens.

So some context: for my project I have some CI (GitHub Actions) that builds and publishes a Docker image to AWS. This takes about 12 minutes (unfortunately).
Our infrastructure is managed by Terraform: we have some TF code that will deploy this Docker image on AWS Fargate. The version of the Docker image (the task definition) is managed by Terraform.

But when we use Terraform Cloud in VCS mode, we end up with a race condition: when we commit/merge code, both CI and TF Cloud start. Since CI takes 10+ minutes, TF Cloud is ready to update the infrastructure well before the image it wants to deploy actually exists…
We can work around this by waiting until the CI is done to manually confirm and apply. But this makes it impossible for us to switch to the auto apply method.

Does anyone else have a similar setup / issue?

Cheers!

Posts: 2

Participants: 2

Read full topic


Api-gateway integration response settings are removed after second terraform apply with same code


@mmiot-dev-iiot wrote:

I’m trying to create REST API in AWS API Gateway with terraform.

To enable CORS, an OPTIONS method and the related integration settings are prepared in my tf code. It works well when I run "terraform plan" -> "terraform apply" for the first time. Checking from the AWS management console, I found the OPTIONS method was created as I wrote it.

However, when I ran "terraform plan" -> "terraform apply" a second time without any change to the API Gateway, the Integration Response settings for the OPTIONS method were removed even though the apply completed ("removed" means all Integration Responses disappear from the management console).

Is this the usual behavior? Do I need additional settings in my terraform code?

My present code is following:

resource "aws_api_gateway_rest_api" "rest_api_test" {
  name = "rest_api_test"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

resource "aws_api_gateway_deployment" "rest_api_test_deploy" {
  depends_on = [
    "aws_api_gateway_integration_response.integration_response",
    "aws_api_gateway_method.rest_api_test_method",
    "aws_api_gateway_integration.rest_api_test_integration",
  ]

  rest_api_id = "${aws_api_gateway_rest_api.rest_api_test.id}"
  stage_name  = "dev"
}

# resource
resource "aws_api_gateway_resource" "rest_api_test_resource" {
  rest_api_id = "${aws_api_gateway_rest_api.rest_api_test.id}"
  parent_id   = "${aws_api_gateway_rest_api.rest_api_test.root_resource_id}"
  path_part   = "rest_api_test_resource"
}

resource "aws_api_gateway_method" "rest_api_test_method" {
  rest_api_id   = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id   = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "rest_api_test_integration_request" {
  rest_api_id             = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id             = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method             = "${aws_api_gateway_method.rest_api_test_method.http_method}"
  integration_http_method = "POST"
  type                    = "AWS"
  uri                     = "arn:aws:apigateway:ap-northeast-1:lambda:path/2015-03-31/functions/${var.lambda_func_arn}/invocations"
}

resource "aws_api_gateway_method_response" "http_status_value" {
  rest_api_id = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method = "${aws_api_gateway_method.rest_api_test_method.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = true
  }
}

resource "aws_api_gateway_integration_response" "integration_response" {
  rest_api_id = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method = "${aws_api_gateway_method.rest_api_test_method.http_method}"
  status_code = "${aws_api_gateway_method_response.http_status_value.status_code}"

  response_templates = {
    "application/json" = "${file("../../module/apigw/src/json/response_template.json")}"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = "'*'"
  }
}

# cors
resource "aws_api_gateway_method" "rest_api_test_method_options" {
  rest_api_id      = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id      = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method      = "OPTIONS"
  authorization    = "NONE"
  api_key_required = false
}

resource "aws_api_gateway_method_response" "rest_api_test_method_response_options_200" {
  rest_api_id = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method = "${aws_api_gateway_method.rest_api_test_method_options.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = true,
    "method.response.header.Access-Control-Allow-Methods" = true,
    "method.response.header.Access-Control-Allow-Origin"  = true
  }
  depends_on = ["aws_api_gateway_method.rest_api_test_method_options"]
}

resource "aws_api_gateway_integration" "rest_api_test_integration" {
  rest_api_id             = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id             = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method             = "${aws_api_gateway_method.rest_api_test_method_options.http_method}"
  integration_http_method = "OPTIONS"
  type                    = "MOCK"

  request_templates = {
    "application/json" = "${file("../../module/apigw/src/json/request_template.json")}"
  }
  depends_on = ["aws_api_gateway_method.rest_api_test_method_options"]
}

resource "aws_api_gateway_integration_response" "rest_api_test_integration_response_options_200" {
  depends_on = [
    "aws_api_gateway_integration.rest_api_test_integration",
    "aws_api_gateway_method_response.rest_api_test_method_response_options_200"
  ]
  rest_api_id       = "${aws_api_gateway_rest_api.rest_api_test.id}"
  resource_id       = "${aws_api_gateway_resource.rest_api_test_resource.id}"
  http_method       = "${aws_api_gateway_method.rest_api_test_method_options.http_method}"
  status_code       = "${aws_api_gateway_method_response.rest_api_test_method_response_options_200.status_code}"
  selection_pattern = ""

  response_templates = {
    "application/json" = ""
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'",
    "method.response.header.Access-Control-Allow-Methods" = "'GET,OPTIONS'",
    "method.response.header.Access-Control-Allow-Origin"  = "'*'"
  }
}
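One thing worth checking (a guess, not a confirmed fix): the deployment's depends_on lists the GET integration response but not the OPTIONS one, so the deployment can be created before rest_api_test_integration_response_options_200 exists, and subsequent applies may then churn it. A sketch of the extended list:

```hcl
resource "aws_api_gateway_deployment" "rest_api_test_deploy" {
  depends_on = [
    "aws_api_gateway_integration_response.integration_response",
    "aws_api_gateway_integration_response.rest_api_test_integration_response_options_200",
    "aws_api_gateway_method.rest_api_test_method",
    "aws_api_gateway_integration.rest_api_test_integration",
  ]

  rest_api_id = "${aws_api_gateway_rest_api.rest_api_test.id}"
  stage_name  = "dev"
}
```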

Posts: 1

Participants: 1

Read full topic

For_each type conversions


@favoretti wrote:

Given the following variable:

locals {
  hubs = ["sep", "fusion", "trip"]
}

These 2 pieces of code behave differently:

resource "azurerm_storage_account" "this" {
  for_each = var.separate_namespaces ? local.hubs : toset(["data-ingest"])
...
}

the above works fine, although a ternary's list argument would normally also require an explicit toset() conversion.

resource "azurerm_eventhub" "this" {
  for_each = toset(local.hubs)

above requires explicit toset() conversion.

I’m not sure I understand why in the first example local.hubs just works.
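A plausible explanation: a conditional expression unifies the types of its two result arms, so in the first example the tuple local.hubs is converted to set(string) to match toset(["data-ingest"]); a bare local.hubs gets no such conversion, hence the explicit toset() in the second case. Roughly:

```hcl
# Both arms of a conditional are unified to a single type; the tuple
# local.hubs is converted to set(string) because the other arm is
# already a set, so for_each accepts it.
for_each = var.separate_namespaces ? local.hubs : toset(["data-ingest"])

# Standalone, no unification happens, so the cast must be explicit.
for_each = toset(local.hubs)
```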

Posts: 1

Participants: 1

Read full topic

Terraform & Openstack - Zero downtime flavor change


@N0zz wrote:

I’m using openstack_compute_instance_v2 to create instances in openstack. There is a lifecycle setting create_before_destroy = true present. And it works just fine in case I e.g. change volume size, where instances needs to be replaced.

But when I do a flavor change, which can be done using openstack's resize instance option, it does just that and doesn't care about HA: all instances in the cluster are unavailable for 20-30 seconds before the resize finishes.

How can I change this behaviour?

Something like ansible's serial setting, or some other option, would come in handy, but I can't find anything.
I just need a solution that lets me say "at least half of the instances need to be online at all times".

Terraform version: 0.12.20.
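One workaround sketch, assuming a full replacement (rather than an in-place resize) is acceptable: embed the flavor in an attribute that forces a new resource, such as the instance name, so a flavor change goes through the create_before_destroy path that already works for volume changes. The names here are hypothetical:

```hcl
resource "openstack_compute_instance_v2" "web" {
  # Flavor in the name: changing var.flavor_name forces replacement
  # instead of an in-place resize.
  name        = "web-${var.flavor_name}-${count.index}"
  flavor_name = var.flavor_name
  # ...

  lifecycle {
    create_before_destroy = true
  }
}
```

The trade-off is that every flavor change rebuilds the instances, but new ones come up before old ones are destroyed.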

Posts: 1

Participants: 1

Read full topic

Team "create but not admin old workspaces" permission


@simonklb wrote:

Currently it looks like you have two options if you want to allow teams to administer workspaces. Either the team has the “Manage Workspaces” permission which lets the team create and manage all workspaces in the organization or someone else needs to create the workspace first and then give access to the team.

It would be nice if you could have a third option which allows a team to create new workspaces but not gain admin access to all workspaces in the organization.

Is this already possible and I’ve just missed it or would this be something you would consider implementing?

Posts: 1

Participants: 1

Read full topic

Azure Private endpoint AKS cluster recreation issue


@msivarami wrote:

When I tried to create a private endpoint AKS cluster by enabling "private_link_enabled = true", the AKS cluster was initially created without any issue. After creation, when I run "terraform plan", instead of showing zero resources to change, it shows 1 to create, 1 to update and 1 to destroy. It forcefully asks me to set the "windows_profile" property. If I set this property, it works as expected. But windows_profile is an optional property, and mine is a Linux cluster node. I'm really not sure why I need to set it during AKS cluster creation. Please let me know if anyone has come across this situation.
AzureRm version: 2.5.0
AKS version: 1.15.10
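Until the cause is found, one possible stopgap (a sketch, not a confirmed fix) is to tell Terraform to ignore the spurious diff on that attribute:

```hcl
resource "azurerm_kubernetes_cluster" "this" {
  # ... existing configuration ...

  lifecycle {
    # Suppress the plan churn on the optional windows_profile block
    # until the provider behavior is resolved.
    ignore_changes = [windows_profile]
  }
}
```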

Posts: 1

Participants: 1

Read full topic

Datadog - reuse of widgets


@eilon47 wrote:

Hi,

I want to create 2 datadog dashboards with the same widget but with different parameters - for example, the same query_value_definition with a different query on the metric tags.
Is there a way to do it using modules?
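One pattern that fits: wrap the widget (or the dashboard containing it) in a local module and parameterize the query. A sketch with hypothetical module and variable names:

```hcl
# "./modules/query_value_widget" is a hypothetical module exposing a
# "query" input variable and building the widget definition from it.
module "latency_widget_prod" {
  source = "./modules/query_value_widget"
  query  = "avg:app.latency{env:prod}"
}

module "latency_widget_staging" {
  source = "./modules/query_value_widget"
  query  = "avg:app.latency{env:staging}"
}
```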

Thanks!

Posts: 1

Participants: 1

Read full topic

Terraform vSphere Local-exec


@Quinvalen wrote:

Hi all, I am a cloud engineer and I'm new here. Currently I'm working with terraform and ansible to deploy a simple vm to a vsphere environment. It all works when I use Terraform only; then I use ansible (remote-exec and local-exec) to configure my simple vm, but it fails. Maybe you will understand with this picture:

As you can see, I also use the same resource to make an output file, and it shows my deployed vm, so that part works perfectly. But when I use it with remote- and local-exec, it doesn't seem to work correctly. Do you have any suggestions regarding this issue?

Posts: 1

Participants: 1

Read full topic


CloudFront Conditional Custom Error Response


@aashitvyas wrote:

I am using TF 0.11.14 to manage the CloudFront Distributions of multiple environments for our applications.

I would like to make a change on certain CF distributions, and only want to add a Custom Error Response if the given variable exists in the variables.tf file.

I am wondering how I can do that?

Below is the variable I have defined in my variables.tf file

variable "spa" {
  type        = "string"
  default     = ""
  description = "if spa is enable, Cloudfront will have the routing of the custom pages/endpoints in  client by modifying the Error Pages"
}

Below is the TF configs I have tried.

I tried using count to enable/disable only the custom_error_response, however that fails with the following error.

  custom_error_response = {
    count                 = "${var.spa == "enable" ? 1 : 0}"
    error_caching_min_ttl = 0
    error_code            = 404
    response_code         = 200
    response_page_path    = "index.html"
  }
}

module.nightly-client.aws_cloudfront_distribution.cf: custom_error_response.0: invalid or unknown key: count

I also tried the following way, setting each sub-attribute of custom_error_response conditionally:

  custom_error_response = {
    error_caching_min_ttl = "${var.spa == "enable" ? 0:0 }"
    error_code            = "${var.spa == "enable"? 404:0}"
    response_code         = "${var.spa == "enable" ? 200:0}"
    response_page_path    = "${var.spa == "enable" ? "/index.html":""}"
  }
}

The above doesn’t work and errors out because http code for the false evaluation is not valid.

  • aws_cloudfront_distribution.cf: error updating CloudFront Distribution (E2WYMYBXQLZC0Z): InvalidArgument: The parameter ErrorCode is invalid.
    status code: 400, request id: 61cabc92-6aee-44c8-ba4d-e7ff619682ac
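In TF 0.11, one pattern that sometimes works for optional blocks is feeding the whole custom_error_response from a list variable that is empty unless the feature is enabled — a sketch, not guaranteed against every provider version (the variable name is hypothetical):

```hcl
variable "spa_error_responses" {
  type    = "list"
  default = []
}

resource "aws_cloudfront_distribution" "cf" {
  # ... existing configuration ...

  # An empty list produces no custom_error_response block at all;
  # a one-element list of the settings above enables it.
  custom_error_response = "${var.spa_error_responses}"
}
```

The caller then sets spa_error_responses to a one-element list containing error_code, response_code, etc. when spa is enabled, avoiding the invalid placeholder values for the disabled case.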

Posts: 1

Participants: 1

Read full topic

Why use 'terraform destroy'?


@kpfleming wrote:

This is a really basic question, but I’m trying to understand why the ‘destroy’ subcommand exists :slight_smile:

In my usage of Terraform, I’ve only found a few cases where resources need to be destroyed, all of which can be done without this subcommand:

  • If I no longer need a resource, I can remove it from the configuration and run plan/apply.

  • If a resource is damaged in some way and needs to be recreated, I can use the ‘taint’ subcommand and then run plan/apply.

What sorts of situations do people have where ‘destroy’ is useful on its own?

Posts: 3

Participants: 3

Read full topic

Using some loop with count and for_each. Availability zones and server count

$
0
0

@tluv2006 wrote:

I'm trying to build a module. I have a count statement for the vm build, but now I need to use a for_each to rotate through a list.

locals {
  zone = toset(["1", "2", "3"])
}

resource "azurerm_windows_virtual_machine" "main" {
  for_each = local.zone
  zone     = each.value
  count    = var.itemCount
  # ...
}

and the module would look like this:

module "webservers" {
  source    = "./module"
  itemCount = "3"
  # ...
}

The problem is that terraform does not allow count and for_each in the same block. I need this because I'm using count for the number of servers and for_each to place the vms in availability zones.

Help please!!!
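The usual way out is a single for_each over a map derived from both the count and the zone list, so each instance gets a unique key and a zone by round-robin. A sketch under the assumption that itemCount is a number:

```hcl
locals {
  zones = ["1", "2", "3"]
  # One map entry per VM; zones are assigned round-robin by index.
  vms = {
    for i in range(var.itemCount) :
    "vm-${i + 1}" => local.zones[i % length(local.zones)]
  }
}

resource "azurerm_windows_virtual_machine" "main" {
  for_each = local.vms
  name     = each.key
  zone     = each.value
  # ... remaining required arguments ...
}
```

With itemCount = 3 this yields vm-1/vm-2/vm-3 in zones 1/2/3, and larger counts wrap around the zone list.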

Posts: 2

Participants: 2

Read full topic

Terraform traffic manager for multiple environments

$
0
0

@kalis777 wrote:

Hello,

We are using Terraform to configure the Azure Traffic Manager. Current folder structure is

<>
main.tf
var.tf
region1.tf
region2.tf

I also use Azure CI/CD pipeline for automation. If I want to have the same thing for both test and prod but with different endpoints, is my option to use workspace? Are there other ways to do it? By doing so, how do I automatically choose to deploy to an environment based on the changes made to the configs?
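Workspaces are one option; a sketch of selecting per-environment endpoints via a map keyed on terraform.workspace (the endpoint names are hypothetical):

```hcl
locals {
  endpoints = {
    test = "test-app.trafficmanager.net"
    prod = "prod-app.trafficmanager.net"
  }
  # Fails fast if run in a workspace with no entry in the map.
  endpoint_fqdn = local.endpoints[terraform.workspace]
}
```

In the pipeline, running `terraform workspace select test` (or prod) before plan/apply then picks the environment; which workspace to select can be driven by the branch or pipeline variables.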

Posts: 1

Participants: 1

Read full topic

Error In Terraform apply in google_container_cluster

$
0
0

@AS011 wrote:

Whenever I run terraform apply it throws an error, and I could not find out how to resolve it:

project: required field is not set

on cluster.tf.json line 34, in resource.google_container_cluster.guestbook:
34: }
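The error says the google provider has no project to fall back on; a sketch of setting it at the provider level ("my-project-id" is a placeholder):

```hcl
provider "google" {
  project = "my-project-id"   # placeholder project ID
  region  = "us-central1"
}
```

Alternatively, "project" can be set directly on the google_container_cluster resource, or via the GOOGLE_PROJECT environment variable.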

Posts: 1

Participants: 1

Read full topic


