Channel: Terraform - HashiCorp Discuss
Viewing all 11357 articles

Azure VM Recreate Every Apply


Hi there, newish to Terraform.

I’m creating a new VM via Terraform. When I rerun plan/apply, it wants to destroy and recreate the resource because the private_ip_addresses attribute changes. I’m assigning a static private IP when creating the NIC resource.

I’ve added the attribute to ignore_changes, but that doesn’t seem to work.
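For reference, the usual shape of such a lifecycle block (the resource type and attribute name here are illustrative, and must match what the plan output actually reports as forcing replacement):

```hcl
resource "azurerm_virtual_machine" "example" {
  # ... other arguments ...

  lifecycle {
    # Suppress diffs on the attribute the plan says is forcing replacement.
    # The name must match the attribute shown in the plan output exactly.
    ignore_changes = [private_ip_addresses]
  }
}
```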

1 post - 1 participant

Read full topic


Optional Resource fields are not optional?


Am I right that managing only a subset of a resource’s fields is not possible in Terraform? Optional fields are still tracked even when they are not specified in the configuration. If Terraform detects that such an optional field has changed, it tries to set it back to the value remembered from the last read.

This means that for any change to an “optional” field on the remote side, I need to add that field to the configuration, so the field is effectively no longer optional. I cannot just “import” the value of an optional field into state, and I cannot tell Terraform to notify me about changes to the field and simply update the state.

When I omit an optional field, such as the description of a GitLab project, Terraform remembers its value, and if the remote side changes it, Terraform tries to change it back.

resource "gitlab_project" "default" {
  name             = "terraform-empty-project"
  visibility_level = "public"
}

I edited the project description from GitLab website, and Terraform tries to reset it to null.

...
      - description                                      = "Update description" -> null
        http_url_to_repo                                 = "https://gitlab.com/abitrolly/terraform-empty-project.git"
...

Am I right that there is no way to instruct Terraform to just use the remote value if the description is not set in the config? Use it once, or simply don’t pay attention to it.
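A partial workaround (a sketch, not a full answer to the question) is to ignore drift on that attribute; this suppresses the reset-to-null plan, though it keeps the last-read value in state rather than adopting the remote one:

```hcl
resource "gitlab_project" "default" {
  name             = "terraform-empty-project"
  visibility_level = "public"

  lifecycle {
    # Suppress the plan that would reset description back to null.
    ignore_changes = [description]
  }
}
```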

1 post - 1 participant

Read full topic

AWS Security group rules delete existing rules


I have a Terraform script that creates a security group and its security group rules. The script is provided by the product vendor and cannot be edited. If I need to update the security group rules, how can I do that from another script?

1 post - 1 participant

Read full topic

Credentials & syntax problem


Hi folks,

I’m experiencing some difficulty with creating resources in AWS via Terraform and was hoping somebody could help. FYI, I am using:

Terraform v0.12.26
provider.aws v2.67.0

My IAM user has full admin privileges and programmatic access, but when I try to create a resource (e.g. a VPC) I get the following error:

Warning: Interpolation-only expressions are deprecated

  on provider.tf line 2, in provider "aws":
   2: region = "${var.aws_region}"

To silence this warning, remove the "${" sequence from the start and the "}"

aws_vpc.my_vpc: Creating...

Error: Error creating VPC: AuthFailure: AWS was not able to validate the provided access credentials
	status code: 401, request id: 828ad3fe-8c39-4fdc-b3a8-5c22f25909a7

  on network.tf line 1, in resource "aws_vpc" "my_vpc":
   1: resource "aws_vpc" "my_vpc" {

My provider file is set up like below:

provider "aws" {
  region                      = "${var.aws_region}"
  access_key                  = "var.aws_access_key"
  secret_key                  = "var.aws_secret_key"
  version                     = "~> 2.67"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
}

terraform init and terraform plan work fine. I had to add the two “skip” arguments above to get past terraform plan, which previously I did not have to do. It has been about four months since I last used Terraform, and I have upgraded versions since. I am not using aws configure; instead I am hard-coding my access/secret keys in a variables file in clear text for testing purposes.

Some questions:

a) Is this a version issue?

b) If I remove the $ and {} from the region variable above, my terraform plan fails, but ironically the warning above says that syntax is deprecated in recent Terraform versions. Is this a bug?

c) I am about to start building out a production environment, so I thought the latest versions would be best. If that is not the case, can you recommend stable Terraform and provider versions?
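For what it’s worth, the AuthFailure is consistent with the access_key and secret_key lines above being literal strings rather than variable references (they are quoted without ${ }, so AWS receives the text "var.aws_access_key" as the key). A corrected sketch, using the same variable names as above:

```hcl
provider "aws" {
  region     = var.aws_region
  # Unquoted references so the actual variable values are sent to AWS.
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  version    = "~> 2.67"
}
```

With valid credentials in place, the two skip_* workarounds should no longer be necessary.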

Any help would be much appreciated.

1 post - 1 participant

Read full topic

This object has no argument, nested block, or exported attribute named when using null_resource


Hi All,

I need your kind help with the issue below, which I am hitting while using null_resource. I have created two buckets and simply want to run a command for both of them with the help of a null resource. However, I am getting the error “This object has no argument, nested block, or exported attribute named arn”. Could anyone please point out what I am doing wrong here? I am still fairly new to Terraform. :slight_smile:

Code:

provider "aws" {
  region = "eu-west-1"
}

provider "null" {}

########################################

variable "bucketname" {
  type    = "list"
  default = ["mytestbuckettfmodule1", "mytestbuckettfmodule2"]
}

######################################## below resource to create two buckets, working fine

resource "aws_s3_bucket" "mybucket" {
  for_each = toset(var.bucketname)
  bucket   = each.value
  acl      = "private"
  tags = {
    Company     = "ABC"
    CostCenter  = "21102353"
    Environment = "NonProd"
    Name        = "TestInstane"
    Role        = "Test"
    Service     = "Test"
    User        = "kapil.teotia@ABC.com"
  }
}

######################################## below null resource I am using to execute a command under local-exec for both the buckets, but unfortunately I am getting the error

########################################

[null_resource code and error attached as a screenshot, “Error2”]
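For comparison, here is a sketch of a null_resource that runs a command per bucket. The “no attribute named arn” error typically comes from referencing aws_s3_bucket.mybucket.arn directly: with for_each, that name is a map of resources, not a single resource, so you iterate it again and use each.value (the echo command here is just a placeholder):

```hcl
resource "null_resource" "per_bucket" {
  # aws_s3_bucket.mybucket is a map of resources (one per for_each key),
  # so iterate that map and reference each.value, not .arn on the map.
  for_each = aws_s3_bucket.mybucket

  provisioner "local-exec" {
    command = "echo ${each.value.arn}"
  }
}
```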

1 post - 1 participant

Read full topic

Backup Azure File Storage to Block Storage


Is it possible to use Terraform to back up Azure File Storage to Azure Block Storage on a scheduled basis, as well as to back up based on file size as a policy?

1 post - 1 participant

Read full topic

Using AWS Systems Manager Parameter as a variable for Windows Ec2 bootstrapping


Hi,

I am wondering if there is a way to use a pre-existing AWS Systems Manager parameter as a variable in the variables file? I need to use the parameter as a password during the bootstrapping process for a Windows EC2 template.
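One common pattern (a sketch; the parameter path and variable names are hypothetical) is to read the parameter with a data source rather than a variable, then interpolate its value into the instance user data:

```hcl
data "aws_ssm_parameter" "admin_password" {
  name            = "/example/windows/admin-password" # hypothetical path
  with_decryption = true
}

resource "aws_instance" "windows" {
  ami           = var.windows_ami_id # assumed variable
  instance_type = "t3.medium"

  # Inject the decrypted parameter value into the bootstrap script.
  user_data = <<-EOF
    <powershell>
    net user Administrator "${data.aws_ssm_parameter.admin_password.value}"
    </powershell>
  EOF
}
```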

1 post - 1 participant

Read full topic

Dynamic blocks and lookup

$
0
0

Using TF 0.12.20 I am trying to pass along arguments to a custom module where I use dynamic blocks.

      load_balancers = [{
         target_group_arn = data.aws_lb_target_group.syslog.arn
         container_name   = local.container_name
         container_port   = 10514
    }]

In the module I define a balancers variables like:

      variable "load_balancers" {
        type = list(map(any))
        default = []
      }

And my dynamic block looks like:

  dynamic "load_balancer" {
    for_each = var.load_balancers
    content {
      elb_name         = lookup(load_balancer, "elb_name", null)
      target_group_arn = lookup(load_balancer, "target_group_arn", null)
      container_name   = load_balancer.value["container_name"]
      container_port   = load_balancer.value["container_port"]
   }
 }

The issue is with the lookup calls. They always return null and skip adding target_group_arn or elb_name, even when I provide a value in the object I pass as an argument. Any suggestions?
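The likely cause is that lookup is being called on the iterator symbol itself rather than its value: inside a dynamic block, each element of for_each is exposed as load_balancer.value, as the last two lines of the block already do. A corrected sketch:

```hcl
dynamic "load_balancer" {
  for_each = var.load_balancers
  content {
    # lookup must operate on .value, the current map in the list.
    elb_name         = lookup(load_balancer.value, "elb_name", null)
    target_group_arn = lookup(load_balancer.value, "target_group_arn", null)
    container_name   = load_balancer.value["container_name"]
    container_port   = load_balancer.value["container_port"]
  }
}
```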

1 post - 1 participant

Read full topic


Assigning one group to each created user


(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant

Read full topic

Terraform plan output as azure devops PR comment


I am using Terraform to configure our Azure Traffic Manager profiles through an Azure DevOps pipeline. One of the goals is to post the terraform plan command output as a PR comment. I am using the “Create PR Comment task” to post the output. It works, but the formatting is lost and the comment appears in a form that is not easily readable.

Does anyone have any suggestions or workarounds for this?

1 post - 1 participant

Read full topic

Terraform Cloud provider credentials


Hi!

I’m just getting started with Terraform Cloud and remote execution mode. Following documentation, I need to add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables to the workspace, which makes sense.

Normally (pre-TF Cloud), we specify a role within the AWS provider to assume, which each person on our team has the right to assume with their own personal ID/key (set via env vars locally). How does this work with TF Cloud and multiple developers? Must the workflow be changed to a single shared ID/key per workspace that the provider uses?

3 posts - 2 participants

Read full topic

Conditional for_each resources


Hi!

I’m currently creating an array of resources using the for_each construct. Here’s an example:

    # variables.tf
    variable "sub_environments" {
      type        = list(string)
      description = "List of sub-environments we'd like to create"
      default = ["one", "two"]
    }

    # resource.tf
    resource "my_resource" "hi" {
      for_each  = toset(var.sub_environments)
      name      = "thing-${each.value}-resource"
      ... etc ...
    }

This works great, and I successfully get two resources created. My problem, however, is that this code is in a re-usable module, and there are times when I do not want multiple resources to be made. I only want one resource made.

If I make the sub_environments variable above an empty list, Terraform thinks that nothing should be created. I also don’t want to pass a single item to sub_environments (e.g. sub_environments = ["one"]) because I don’t want my resource name to be thing-one-resource; I just want thing-resource.

Are there any tricks I can do to be able to pass in an empty list to sub_environments and have only one resource be created?
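One possible trick (a sketch, under the assumption that an empty-string for_each key is acceptable in your setup): substitute a single placeholder entry when the list is empty, and make the name suffix conditional:

```hcl
locals {
  # Fall back to a single empty-string entry so exactly one resource is made.
  envs = length(var.sub_environments) > 0 ? var.sub_environments : [""]
}

resource "my_resource" "hi" {
  for_each = toset(local.envs)
  # Collapse "thing--resource" to "thing-resource" for the placeholder entry.
  name     = each.value == "" ? "thing-resource" : "thing-${each.value}-resource"
  # ... etc ...
}
```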

Thanks!

2 posts - 2 participants

Read full topic

Adding AWS Accounts to AWS Organization OU


I am unable to find code for adding an AWS account to an AWS Organizations OU. Can you let us know if that is possible?
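It should be possible with the aws_organizations_account resource, whose parent_id argument places the account under an OU. A sketch (the names, email, and OU are illustrative):

```hcl
resource "aws_organizations_organizational_unit" "example" {
  name      = "example-ou" # hypothetical
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_account" "member" {
  name      = "example-account"       # hypothetical
  email     = "aws-admin@example.com" # hypothetical
  # parent_id places the account in the OU instead of the org root.
  parent_id = aws_organizations_organizational_unit.example.id
}
```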

1 post - 1 participant

Read full topic

For_each over which data structure?


I’ve attempted to get this working many, many different ways but keep falling short with the implementations. I need to grab a list of server names from a map and get the product of that list and another list of ports in order to iterate over each grouping into a bigip_ltm_pool_attachment resource block:

    node   = {
      "node1" = "192.168.1.1"
      "node2" = "192.168.1.2"
    }
    ports = [7001,7002,7003,7004,7005,7006,7007,7008,7009,7010]

    poolattachports = [for pair in setproduct(keys(local.node), local.ports): {host: pair[0], port: pair[1]}]

    resource "bigip_ltm_pool_attachment" "pool_attach" {
      for_each = ...
    }

The high-level idea is to create an attachment for each possible combination of server/port. I’ve never really seen an online example of this where you start with two lists and create a data structure that can be for_each’d in a resource. The only way I’ve seen it done is with dynamic blocks, and I don’t think they apply here, right?
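A sketch of the usual approach: project the setproduct result into a map keyed by a unique string, then for_each over that map (the pool name and the name:port node form are assumptions about the bigip_ltm_pool_attachment schema; check the provider docs):

```hcl
locals {
  node = {
    "node1" = "192.168.1.1"
    "node2" = "192.168.1.2"
  }
  ports = [7001, 7002, 7003, 7004, 7005, 7006, 7007, 7008, 7009, 7010]

  # Key each pair as "host-port" so every combination gets a stable
  # for_each address in state.
  poolattachports = {
    for pair in setproduct(keys(local.node), local.ports) :
    "${pair[0]}-${pair[1]}" => { host = pair[0], port = pair[1] }
  }
}

resource "bigip_ltm_pool_attachment" "pool_attach" {
  for_each = local.poolattachports

  pool = "/Common/example_pool" # hypothetical pool name
  node = "/Common/${each.value.host}:${each.value.port}"
}
```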

3 posts - 2 participants

Read full topic

Problem with the use of JSON config syntax for a built-in function in a resource block


I use terraform 0.12.26 and aws provider version 2.67.0. I work locally on a linux fedora 32 computer, and run servers in AWS cloud.

I hope that someone more experienced can help me with this problem.

I want to create a subnet with IPv4 and IPv6 CIDR blocks, and custom subnets in a custom VPC. AWS allocates an IPv6 CIDR-block during the deployment of the new VPC. In the module covering the subnet(s), I believe I can write in native syntax:

resource "aws_subnet" "mainVPC" {
  vpc_id          = aws_vpc.mainVPC.id
  cidr_block      = var.subnet_cidrs_X1A
  ipv6_cidr_block = cidrsubnet(aws_vpc.mainVPC.ipv6_cidr_block, 8, 16)

  tags = {
    Name = "Main VPC"
  }
}

NOW MY PROBLEM

I want to write (all) my config files in JSON, not in native syntax / HCL. [I have reasons why; I know it’s not recommended; but Hashicorp has also stated that they want JSON to be a first-class alternative language; and this is not the place for that discussion] I have not managed to find a way of doing it.

I have run three scenarios.

Scenario 1

My most-likely-to-succeed version was the following:

subnet.tf.json

{
  "resource": [
    {
      "aws_subnet": {
        "X1A-subnet": [
          {
            "vpc_id": "${aws_vpc.mainVPC.id}",
            "cidr_block": "${var.public_cidrs_X1A}",
            "ipv6_cidr_block": "cidrsubnet(${aws_vpc.mainVPC.ipv6_cidr_block},8,16)",
            "tags": {
              "Name": "X1A Subnet"
            }
          }
        ]
      }
    }
  ]
}

This passes terraform init and terraform validate. I can run terraform plan without an error. But the terraform engine refuses to deploy a subnet; running terraform apply produces the error message:

Error: Error creating subnet: InvalidParameterValue: invalid value for parameter ipv6-cidr-block: cidrsubnet(2a05:d014:aac:7700::/56,8,16)
status code: 400, request id: aed1>>>

Scenario 2

The variant:

"ipv6_cidr_block": "cidrsubnet(aws_vpc.THEVPC77.ipv6_cidr_block,8,16)"

also passes init and validate, but is rejected when I run terraform plan as an error:

Error: "ingress.4.ipv6_cidr_blocks.0" must contain a valid CIDR, got error parsing: invalid CIDR address: cidrsubnet(aws_vpc.mainVPC.ipv6_cidr_block,8,32)

  on NAT_sec_group.tf.json line 223, in resource[0].aws_security_group.NAT_instance[0]:
 223:           }

This error is different, and I understand it even less. It comes from a security group file, and does not seem directly related to the issue I’m trying to solve here.

In a very small-scale separate pure test-case that I ran, the error was located differently. It got through without errors all the way to terraform plan. But when I ran terraform apply, it could not create the subnet, and the error message was:

Error: Error creating subnet: InvalidParameterValue: invalid value for parameter ipv6-cidr-block: cidrsubnet(aws_vpc.mainVPC.ipv6_cidr_block,8,16)
status code: 400, request id: e1d162>>>

  on subnet.tf.json line 15, in resource[0].aws_subnet.X1A-subnet[0]:
  15:           }

Scenario 3

The following variant passes terraform init and terraform validate :

"ipv6_cidr_block": "cidrsubnet("${aws_vpc.mainVPC.ipv6_cidr_block}",8,16)"

This causes a terraform crash when running terraform plan, with the initial crash message reading:

panic: runtime error: index out of range

goroutine 1 [running]:
github.com/hashicorp/hcl/v2/json.(*peeker).Read(...)
	/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/hashicorp/hcl/v2@v2.3.0/json/peeker.go:20
github.com/hashicorp/hcl/v2/json.parseObject.func1(0x0, 0xc000458bb6, 0x1, 0x8ae, 0xc000049240, 0x1a, 0xa, 0x2d, 0x136, 0xa, ...)

In my very small-scale test run with only a few files, and the most basic of set-ups, just running terraform init crashed. First few lines:

panic: runtime error: index out of range

goroutine 1 [running]:
github.com/hashicorp/hcl/v2/json.(*peeker).Read(...)
	/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/hashicorp/hcl/v2@v2.3.0/json/peeker.go:20
github.com/hashicorp/hcl/v2/json.parseObject.func1(0x0, 0xc0003c612a, 0x1, 0x2d6, 0xc000044600, 0xe, 0xa, 0x2d, 0x12a, 0xa, ...)

MY COMMENTS / UNDERSTANDING

I am pretty baffled about what’s going on. I understand and can work with the different “interpolation” (old name) syntax between HCL and JSON files. But for this particular expression I don’t know what to do. I understand that Scenario 3 could not work, though I find it a bit disappointing that the parser can do nothing better than crashing.

But I don’t know how to make this work, and it is essential for my main use-case.
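For what it’s worth, in JSON syntax a function call is only evaluated when it sits inside a template interpolation sequence; wrapping the entire call in one ${ ... } should behave like the native-syntax version:

```json
{
  "ipv6_cidr_block": "${cidrsubnet(aws_vpc.mainVPC.ipv6_cidr_block, 8, 16)}"
}
```

In Scenario 1 above, only the inner reference is interpolated, so Terraform sends the literal text cidrsubnet(2a05:...,8,16) to AWS, which matches the InvalidParameterValue error.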

There are probably quite involved work-arounds to deal with this - including separating a run for creating the VPC, subnets, security groups, etc from a run for deploying the instances. But I thought that the whole idea of Terraform was that you could do it all in one go. And overall my application is tiny compared with the industrial-scale usage of Terraform. At the moment 4 instances, maybe over the years growing to between 10 and 20. All small; nothing really sophisticated.

I would be very grateful if anyone could throw light on this, tell me how to solve this specifically, and ideally, also with some indication of where in the documentation I can find more background to this.

3 posts - 2 participants

Read full topic


Error in tf file created by previous Terraform version when using Terraform v0.12.26


Hello,

To build an Azure environment, I’m trying to use Terraform 0.12 syntax. When I ran a grammar check with terraform validate against the files that built the environment a year ago, I got a lot of errors. The Terraform version has changed since I last built the environment, so the files no longer seem to mesh with the latest version.

We plan to hand over the environment on the 24th, so I need a clear solution by the 23rd at the latest.

Sorry for the inconvenience, and thank you for your cooperation.

–procedure–
(1) Starting from the files that built the environment last time, the first terraform validate run produced the error “Error: "features": required field is not set”. I found that a features {} block is now required, so I added it to the provider.tf file as follows:

provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
  features {}
}

(2) After that, running terraform validate again output several errors.

–error messages–

terraform validate

Warning: Interpolation-only expressions are deprecated

  on core.tf line 3, in resource "azurerm_availability_set" "avset-core":
   3: location = "${var.location}"

Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${" sequence from the start and the "}"
sequence from the end of this expression, leaving just the inner expression.

Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.

(and 133 more similar warnings elsewhere)

Warning: "address_prefix": [DEPRECATED] Use the address_prefixes property instead.

  on core.tf line 17, in resource "azurerm_subnet" "sbnt-core":
  17: resource "azurerm_subnet" "sbnt-core" {

Error: Missing required argument

  on database.tf line 3, in resource "azurerm_mysql_server" "db_mysql":
   3: resource "azurerm_mysql_server" "db_mysql" {

The argument "sku_name" is required, but no definition was found.

Error: Unsupported argument

  on database.tf line 8, in resource "azurerm_mysql_server" "db_mysql":
   8: sku = {

An argument named "sku" is not expected here.

Error: Unsupported argument

  on storage.tf line 17, in resource "azurerm_storage_share" "storeshare":
  17: resource_group_name = "${var.rg_name}"

An argument named "resource_group_name" is not expected here.

Error: Unsupported argument

  on web01vm.tf line 7, in resource "azurerm_network_interface" "web01-network":
   7: network_security_group_id = "${azurerm_network_security_group.web_server_nsg.id}"

An argument named "network_security_group_id" is not expected here.

Error: Unsupported argument

  on web02vm.tf line 7, in resource "azurerm_network_interface" "web02-network":
   7: network_security_group_id = "${azurerm_network_security_group.web_server_nsg.id}"

An argument named "network_security_group_id" is not expected here.
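For the azurerm_mysql_server errors, the provider replaced the old sku block with a single sku_name string. A sketch of the migrated form (the tier value here is an example; it must match what the server actually used):

```hcl
resource "azurerm_mysql_server" "db_mysql" {
  # ... existing arguments ...

  # Old syntax (no longer accepted):
  # sku = { name = "GP_Gen5_2", capacity = 2, tier = "GeneralPurpose", family = "Gen5" }

  # New syntax: a single string combining tier, family, and capacity.
  sku_name = "GP_Gen5_2"
}
```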

1 post - 1 participant

Read full topic

Updating Cloned VM with additional disks fails with Error: could not find SystemId


Hi,
I am using Terraform v0.12.26 and provider.vsphere v1.19.0.
I’m attempting to add more disks to a cloned VM. However, I’m getting the following error when I try to apply the change:

vsphere_virtual_machine.cloned_virtual_machine: Modifying... [id=42184ef1-b2b7-5216-6966-23dc1b010149]

Error: could not find SystemId

  on lmain.tf line 44, in resource "vsphere_virtual_machine" "cloned_virtual_machine":
  44: resource "vsphere_virtual_machine" "cloned_virtual_machine" {

Here is a copy of my main.tf file.

provider "vsphere" {
  version              = "~> 1.5"
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = var.vsphere_datacenter
}

data "vsphere_compute_cluster" "Production" {
  name          = "Production"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_datastore" "Datastore02" {
  name          = var.vsphere_Datastore02
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_datastore" "Datastore03" {
  name          = var.vsphere_Datastore03
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  #name         = var.vsphere_resource_pool
  name          = "Resources"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
  name          = var.vsphere_network
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  name          = var.vsphere_virtual_machine_template
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "cloned_virtual_machine" {
  name                        = var.vsphere_virtual_machine_name
  wait_for_guest_net_routable = false
  wait_for_guest_net_timeout  = 0
  #resource_pool_id           = data.vsphere_resource_pool.pool.id
  resource_pool_id            = data.vsphere_compute_cluster.Production.resource_pool_id
  datastore_id                = data.vsphere_datastore.Datastore03.id
  num_cpus                    = var.vsphere_virtual_machine_template_num_cpus
  memory                      = var.vsphere_virtual_machine_template_memory
  guest_id                    = data.vsphere_virtual_machine.template.guest_id

  scsi_type = data.vsphere_virtual_machine.template.scsi_type

  network_interface {
    network_id   = data.vsphere_network.network.id
    adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.template.disks[0].size
    eagerly_scrub    = data.vsphere_virtual_machine.template.disks[0].eagerly_scrub
    thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
  }

  disk {
    keep_on_remove   = false
    label            = "disk1"
    size             = 1
    thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
    unit_number      = 1
    datastore_id     = data.vsphere_datastore.Datastore02.id
    disk_mode        = "independent_persistent"
  }

  disk {
    keep_on_remove   = false
    label            = "disk2"
    size             = 1
    thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
    unit_number      = 2
    datastore_id     = data.vsphere_datastore.Datastore02.id
    disk_mode        = "independent_persistent"
  }

  disk {
    keep_on_remove   = false
    label            = "disk3"
    size             = 1
    thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
    unit_number      = 3
    datastore_id     = data.vsphere_datastore.Datastore02.id
  }

  disk {
    keep_on_remove   = false
    label            = "disk4"
    size             = 1
    thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
    unit_number      = 4
    datastore_id     = data.vsphere_datastore.Datastore02.id
  }

  disk {
    keep_on_remove   = false
    label            = "disk5"
    size             = 1
    thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
    unit_number      = 5
    datastore_id     = data.vsphere_datastore.Datastore02.id
  }

  clone {
    #template_uuid = "${data.vsphere_virtual_machine.template.id}"
    template_uuid = data.vsphere_virtual_machine.template.id
    customize {
      dns_server_list = ["10.2.2.120"]
      dns_suffix_list = ["${var.dns_suffix_list}"]
      timeout         = "0"
      linux_options {
        #host_name = var.vsphere_virtual_machine_name
        host_name = var.vsphere_virtual_machine_name
        domain    = "homelab.in"
      }
      network_interface {
        #ipv4_address = "${var.vsphere_virtual_machine_ip}"
        ipv4_address = var.vsphere_virtual_machine_ip
        ipv4_netmask = 24
      }
    }
  }
}

2 posts - 1 participant

Read full topic

Unable to run the apply command after changing the backend


I am using Terraform 0.12. I have a root module with main.tf and variables.tf, which was working fine earlier.

Recently we moved our state file from local to Terraform Cloud as the backend. We can log in to Terraform Cloud and see the states/runs etc. on the web console.

The issue is that the same .tf config now throws an error stating “OpenStack connection error, retries exhausted. Aborting. Last error was: dial tcp: lookup iadosvip01.ece.ellucian.com on 127.0.0.53:53: no such host”.

After changing the state file from local to remote, do we need to make changes to the .tf files?

1 post - 1 participant

Read full topic

WAFv2 scope=cloudfront


Hi,
I’m pretty new to Terraform and I’ve been trying to build a WAFv2 web ACL with little success.
I’ve got the REGIONAL scope working OK, but when I change the scope from REGIONAL to CLOUDFRONT I get the following error:

Error: Error creating WAFv2 WebACL: WAFInvalidParameterException: Error reason: The scope is not valid., field: SCOPE_VALUE, parameter: CLOUDFRONT
{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "37cec571-6aa1-4ae5-916d-e5103e6de9b2"
  },
  Field: "SCOPE_VALUE",
  Message_: "Error reason: The scope is not valid., field: SCOPE_VALUE, parameter: CLOUDFRONT",
  Parameter: "CLOUDFRONT",
  Reason: "The scope is not valid."

I’ve specified the region in my provider.tf, and I can’t add a region argument alongside scope, as that errors.
I’m running the latest Terraform and AWS provider.

Could anyone help me with this please as I’ve been trying for a few days with no joy?
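For reference, CLOUDFRONT-scoped WAFv2 resources must be created through the us-east-1 API endpoint regardless of where the default provider points; a common pattern is an aliased provider (the names here are illustrative):

```hcl
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_wafv2_web_acl" "cloudfront_acl" {
  # Route this resource's API calls through the us-east-1 provider.
  provider = aws.us_east_1
  name     = "example-cloudfront-acl" # hypothetical
  scope    = "CLOUDFRONT"

  default_action {
    allow {}
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "example-cloudfront-acl"
    sampled_requests_enabled   = false
  }
}
```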

Thanks

1 post - 1 participant

Read full topic

Dynamic Secrets Terraform Cloud / Enterprise


Are there plans to add Vault integration for dynamic secrets with TFC / TFE?

Reading the docs it appears the Vault integration is only used for encrypting static variables. Is that correct?

1 post - 1 participant

Read full topic
