Channel: Terraform - HashiCorp Discuss
Viewing all 11363 articles

Recreating aws_security_group for loadbalancer


@pzowghi wrote:

I am using aws_security_group in my Terraform code.
Terraform always recreates the aws_security_group for the load balancer, even though I didn't make any changes to the code.
This issue only happens for the load balancer SG.
I see this bug in both Terraform 0.11 and 0.12.
How can I fix it?
Thanks

Posts: 1

Participants: 1

Read full topic


Importing aws_iam_role_policy_attachment with multiple policies


@victorhdamian wrote:

I have a custom role with multiple policies attached. It was created using the UI, and I would like to import it rather than recreate it.
The aws_iam_role_policy_attachment resource can be imported for a single policy ARN, but not for a list of policy ARNs. How can I import the actual list of policy ARNs with v0.12.20?
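One pattern that may help (a sketch; the variable, resource, and role names here are hypothetical): manage the attachments with for_each over a set of ARNs, then import each instance individually, since aws_iam_role_policy_attachment imports as role-name/policy-arn:

```hcl
variable "policy_arns" {
  type = set(string)
  # hypothetical ARNs; replace with the policies attached via the UI
  default = [
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    "arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
  ]
}

resource "aws_iam_role_policy_attachment" "this" {
  for_each   = var.policy_arns
  role       = "my-custom-role" # hypothetical role name
  policy_arn = each.value
}

# Then import one instance per ARN, e.g.:
#   terraform import 'aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"]' \
#     my-custom-role/arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```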

Posts: 1

Participants: 1

Read full topic

Docker -ports 8080:8080


@ninjaboy224 wrote:

I have created a docker container in AWS using terraform to provision the server and then the container. I have exposed a couple of ports on the host system from the container. If I wanted to create more than one container on the same host, how would I be able to increment the exposed ports so that they don’t clash?
Thanks
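One common approach (a sketch, assuming the Docker provider and containers that listen on 8080 internally; the image and resource names are hypothetical) is to use count and derive each host port from count.index:

```hcl
resource "docker_image" "app" {
  name = "myorg/myapp:latest" # hypothetical image
}

resource "docker_container" "app" {
  count = 3
  name  = "app-${count.index}"
  image = docker_image.app.latest

  ports {
    internal = 8080
    external = 8080 + count.index # 8080, 8081, 8082 on the host, no clashes
  }
}
```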

Posts: 1

Participants: 1

Read full topic

AWS codebuild github Integration Issue


@ashish141287 wrote:

Hi All,

I am trying to create an AWS CodeBuild project using Terraform, with GitHub as the source:

resource "aws_codebuild_project" "training-service" {
  name          = "${var.github_repository}"
  build_timeout = "5"
  service_role  = "${aws_iam_role.codebuild.arn}"
  badge_enabled = "${var.codebuild_badge_enabled}"

  source {
    type      = "GITHUB"
    location  = "${data.template_file.codebuild_source_location.rendered}"
    buildspec = "${var.codebuild_buildspec}"

    auth {
      type     = "OAUTH"
      resource = "${var.github_oauth_token}"
    }
  }

  environment {
    compute_type    = "${var.codebuild_compute_type}"
    type            = "LINUX_CONTAINER"
    image           = "${var.codebuild_image}"
    privileged_mode = "${var.codebuild_privileged_mode}"
  }

  artifacts {
    type           = "S3"
    location       = "${aws_s3_bucket.training-service-codebuild-bucket.arn}"
    name           = "${var.github_repository}"
    namespace_type = "BUILD_ID"
    packaging      = "ZIP"
  }
}

I verified this against the Terraform docs as well, and it appears to be the correct way of doing it.

variable "github_oauth_token" {
  description = "GitHub OAuth token for repository access"
  default     = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}

I am getting the exception below. Can anyone help?

Error: Error creating CodeBuild project: InvalidInputException: No Access token found, please visit AWS CodeBuild console to connect to GitHub

  on main.tf line 103, in resource "aws_codebuild_project" "training-service":
  103: resource "aws_codebuild_project" "training-service" {
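This error usually means CodeBuild has no GitHub credential registered for the account/region. One possible fix (a sketch): the AWS provider exposes an aws_codebuild_source_credential resource for registering a personal access token, which may work where the source auth block does not:

```hcl
resource "aws_codebuild_source_credential" "github" {
  auth_type   = "PERSONAL_ACCESS_TOKEN"
  server_type = "GITHUB"
  token       = var.github_oauth_token
}
```

With this in place, the auth block inside source may no longer be needed.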

Posts: 1

Participants: 1

Read full topic

For loop with multiple if conditions - help for newbie on syntax


@slave02 wrote:

I am currently stuck on the correct syntax (if what I am doing is even possible). I would like a single if condition, evaluated from within a for expression, using multiple ORs:

[for s in v : upper(s) if s != "Server1" || s != "Server3"]

Any tips on the correct use of || inside a for expression's if condition?

Thanks
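A note on the logic: `s != "Server1" || s != "Server3"` is true for every element, since no value can equal both strings at once, so nothing gets filtered out. To exclude both servers, combine the conditions with &&. A minimal sketch:

```hcl
locals {
  v        = ["Server1", "Server2", "Server3"]
  filtered = [for s in local.v : upper(s) if s != "Server1" && s != "Server3"]
  # filtered is ["SERVER2"]
}
```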

Posts: 2

Participants: 1

Read full topic

How to refer to a terraform resource by a resource attribute containing an environmental variable


@anarinsky wrote:

For example we have a resource defined as

resource "aws_iam_role_policy" "example" {
  name = "${local.env}-role-policy"

I would like to refer to this Terraform resource not by “example” but by “${local.env}-role-policy”. The reason is that I include an environment variable in the attribute, so that I can deploy different resource stacks within the same AWS account.
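Resource addresses in Terraform are static identifiers, so a resource can't be addressed by a computed attribute. A common workaround (a sketch; names are hypothetical) is to build a local map keyed by the computed name and look resources up through it:

```hcl
locals {
  role_policies = {
    # parentheses allow a computed expression as the map key
    (aws_iam_role_policy.example.name) = aws_iam_role_policy.example
  }
}

# elsewhere: local.role_policies["${local.env}-role-policy"].id
```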

Posts: 1

Participants: 1

Read full topic

Docker Hub Verified Publisher for HashiCorp


@elreydetoda wrote:

NOTE: posted in the terraform category, because that was the image I was looking for and there was no general HashiCorp topic that I could post to.

Hello,

I was curious whether there is any plan to become a Docker Hub Verified Publisher, so that when people search for official HashiCorp Docker images they will see that it is definitely HashiCorp's account and not someone posing as HashiCorp on Docker Hub. (I believe your official account is currently here: https://hub.docker.com/u/hashicorp, but it still doesn't hurt to have that extra verification.)

Posts: 1

Participants: 1

Read full topic

Extract key value from json and populate list in Terraform


@rbankole wrote:

{
  "Events": [
    {
      "Topic": "value1"
    },
    {
      "Topic": "value2"
    },
    {
      "Topic": "value3"
    }
  ]
}

With the above JSON, I'd like to extract the value of each Topic and use them as a list in a Terraform template, in the format ["value1","value2","value3"]. I've tried using jsondecode but I'm getting errors; perhaps I'm not doing it correctly. I also tried the following, but to no avail:

data "external" "topic" {
  program = ["jq", ".dictionary_name", "files.json"]
  query   = {}
}

What is the best way to get this done? I'd imagine it's jsondecode that I need to figure out, but I'm just not sure. Thank you.
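For reference, jsondecode plus a for expression can produce exactly that list (a sketch, assuming the JSON above is saved as files.json next to the module):

```hcl
locals {
  events = jsondecode(file("${path.module}/files.json"))
  topics = [for e in local.events.Events : e.Topic]
  # topics is ["value1", "value2", "value3"]
}
```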

Posts: 1

Participants: 1

Read full topic


HCL Blocks in order using the API


@billgraziano wrote:

I'm using HCL v2.4.0 to parse some custom HCL files. Overall I've been very happy with it. HCL seems to be a good "syntax" that works well for what I need.

I have a situation where I want either (1) the blocks in order, or (2) to get back their range information so I can sort them myself. I'm currently doing something like this (code is approximate):

type HCLFile struct {
  Services    []Service       `hcl:"service,block"`
  Apps        []Database      `hcl:"app,block"`
}

var hclfile HCLFile
parser := hclparse.NewParser()
f, diags := parser.ParseHCL(src, fileName)

diags = gohcl.DecodeBody(f.Body, nil, &hclfile)

I have a requirement to go through these in order. I assume each type goes into its respective slice in order, but I'm not sure how to determine the overall order.

I saw this thread on HCL and the reply from @apparentlymart led me to his syntax cleaner.

That led me to look at a pattern using the hclwrite package and something like hclwrite.File.Body().Blocks(). That returns them all in order and I can use the Type() method to figure out what I have.

Is there a more elegant way to do this? Also, is it possible to get back the position information for each block? I can see ranges deeper in the data structures, but I don't see a way to bring them back with the blocks. I can get them for attributes, but I don't see a way for the block itself.

(And sorry for a non-Terraform question in the Terraform forum but I didn’t see where else to put it)
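For what it's worth, one way to get blocks in source order along with their positions (a sketch, untested, assuming native HCL syntax so that f.Body can be type-asserted to *hclsyntax.Body):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/hclparse"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	src := []byte("service \"a\" {}\napp \"b\" {}\n")
	parser := hclparse.NewParser()
	f, diags := parser.ParseHCL(src, "example.hcl")
	if diags.HasErrors() {
		panic(diags.Error())
	}

	// For native-syntax files the body is an *hclsyntax.Body, whose
	// Blocks slice preserves source order and exposes range info.
	if body, ok := f.Body.(*hclsyntax.Body); ok {
		for _, blk := range body.Blocks {
			fmt.Println(blk.Type, blk.Labels, blk.TypeRange.Start.Line)
		}
	}
}
```

This sidesteps gohcl for the ordering question while still letting you decode each block's body separately once sorted.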

Posts: 1

Participants: 1

Read full topic


What is wrong with this trivial usage of local variable?


@MarkKharitonov wrote:

I have posted this question on SO - https://stackoverflow.com/questions/61218173/why-terraform-is-unable-to-compute-a-local-variable-correctly-in-the-following-t

Here it is:

Given is the following configuration (main.tf):

locals {
    locations = toset(["a", "b"])
}

resource "local_file" "instance" {
    for_each = local.locations

    content  = each.value
    filename = "${path.module}/${each.value}.txt"
}

output "primary_filename" {
    value = local_file.instance["a"].filename
}

And it seems to work fine:

C:\work\test> dir


    Directory: C:\work\test


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        4/15/2020  11:47 PM            280 main.tf


C:\work\test> terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...

...
C:\work\test> terraform apply -auto-approve
local_file.instance["b"]: Creating...
local_file.instance["a"]: Creating...
local_file.instance["a"]: Creation complete after 0s [id=86f7e437faa5a7fce15d1ddcb9eaeaea377667b8]
local_file.instance["b"]: Creation complete after 0s [id=e9d71f5ee7c92d6dc9e92ffdad17b8bd49418f98]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

primary_filename = ./a.txt
C:\work\test>

Now I delete the file a.txt and rerun:

C:\work\test> del .\a.txt
C:\work\test> terraform apply -auto-approve
local_file.instance["a"]: Refreshing state... [id=86f7e437faa5a7fce15d1ddcb9eaeaea377667b8]
local_file.instance["b"]: Refreshing state... [id=e9d71f5ee7c92d6dc9e92ffdad17b8bd49418f98]

Error: Invalid index

  on main.tf line 13, in output "primary_filename":
  13:     value = local_file.instance["a"].filename
    |----------------
    | local_file.instance is object with 1 attribute "b"

The given key does not identify an element in this collection value.

It can be fixed by using the try function:

    value = try(local_file.instance["a"].filename, "")

Which does make it work:

C:\work\test> terraform apply -auto-approve
local_file.instance["b"]: Refreshing state... [id=e9d71f5ee7c92d6dc9e92ffdad17b8bd49418f98]
local_file.instance["a"]: Refreshing state... [id=86f7e437faa5a7fce15d1ddcb9eaeaea377667b8]
local_file.instance["a"]: Creating...
local_file.instance["a"]: Creation complete after 0s [id=86f7e437faa5a7fce15d1ddcb9eaeaea377667b8]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

primary_filename = ./a.txt
C:\work\test>

Now I know we are not supposed to delete resources outside of terraform, but things happen and my expectation is that terraform handles it gracefully. And it does, except for this local variable behavior.

I do not like using the try function, because it would hide a real problem. Ideally, it should behave like try during the plan phase and without try during the apply phase.

Anyway, I have a feeling I am missing something important here, like I am not using the local variables correctly or something else. So, what am I missing?

Posts: 1

Participants: 1

Read full topic

Passing the array to Azure policy Parameters


@Chirag1233 wrote:

Hi There,

I am trying to pass a list of strings to the parameters value of azurerm_policy_assignment. Here is the code:

resource "azurerm_policy_definition" "policy" {
  name         = var.policy_name
  policy_type  = var.policy_type
  mode         = var.policy_mode
  display_name = var.display_name

  metadata = <<METADATA
{
  "category": "General"
}
METADATA

  policy_rule = <<POLICY_RULE
{
  "if": {
    "not": {
      "field": "location",
      "in": "[parameters('allowedLocations')]"
    }
  },
  "then": {
    "effect": "[parameters('effect')]"
  }
}
POLICY_RULE

  parameters = <<PARAMETERS
{
  "allowedLocations": {
    "type": "Array",
    "metadata": {
      "description": "The list of allowed locations for resources.",
      "displayName": "Allowed locations",
      "strongType": "location"
    }
  },
  "effect": {
    "type": "string",
    "metadata": {
      "description": "Provide the list of the effect that will take place",
      "displayName": "Allowed effect that should take place"
    },
    "allowedValues": [
      "Audit",
      "Deny",
      "Disabled"
    ]
  }
}
PARAMETERS
}

resource "azurerm_policy_assignment" "policy_assignment" {
  name                 = var.policy_assignment_name
  scope                = var.policy_scope
  policy_definition_id = azurerm_policy_definition.policy.id
  description          = var.policy_description
  display_name         = var.policy_assignment_name

  parameters = <<PARAMETERS
{
  "allowedLocations": {
    "value": "${var.allowedLocations}"
  },
  "effect": {
    "value": "${var.policy_effect}"
  }
}
PARAMETERS
}

but I am getting the error as shown below.


Error: Invalid template interpolation value

  on Modules/AzurePolicy/main.tf line 69, in resource "azurerm_policy_assignment" "policy_assignment":
  69:     "${var.allowedLocations}"
    |----------------
    | var.allowedLocations is list of string with 4 elements

Cannot include the given value in a string template: string required.

Do you know if I am doing something wrong here, or if there is a way to do something like this?
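The error says it directly: a list of strings can't be interpolated into a string template, which is what the heredoc attempts. One way around it (a sketch) is to build the whole parameters document with jsonencode, which encodes the list as a proper JSON array:

```hcl
  parameters = jsonencode({
    allowedLocations = {
      value = var.allowedLocations # list of strings -> JSON array
    }
    effect = {
      value = var.policy_effect
    }
  })
```

This replaces the PARAMETERS heredoc inside azurerm_policy_assignment and avoids hand-writing JSON entirely.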

Posts: 1

Participants: 1

Read full topic

Terraform hangs on terraform plan


@cottagefarmerwwt wrote:

Hello,
Terraform v0.12.24

  • provider.aws v2.58.0
    Win 10

I've reverted to a simple example from the terraform.io docs. New directory containing just two files:

example.tf:

provider "aws" {
  region = var.region
}

resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"
}

and variables.tf:

variable "region" {
  default = "us-east-1"
}

I saved these, ran terraform init, then terraform apply: it just hangs, and I have to break out of it by force.

I didn't use a package manager to install Terraform; should I try removing and re-installing it? I'm not seeing any documentation on how to reinstall Terraform. Were there changes made in the machine registry?

Thanks,
CF

I started fresh with

Posts: 1

Participants: 1

Read full topic

Terraform 0.12 dynamic block for subnet


@pzowghi wrote:

I am wondering: is it possible to use a dynamic block for the aws_subnet resource?
If so, could you send me an example?
I tried it, but I got an error: “The argument "cidr_block" is required, but no definition was found. Blocks of type "cidr_block" are not expected here.”
resource "aws_subnet" "subnetp" {
  vpc_id                  = aws_vpc.vpc22.id
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available.names[1]

  dynamic "cidr_block" {
    for_each = [for cidrblock in var.cidr_blocks_subnet : {
      cidr_block = cidrblock.CidrBlock
      name       = cidrblock.Name
    }]
    content {
      cidr_block = cidr_block.value.cidr_block
      tags       = merge(map("Name", cidr_block.value.name), var.default_tags)
    }
  }
}
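The error arises because cidr_block is a plain argument of aws_subnet, not a nested block, so dynamic can't generate it. A sketch of an alternative: put for_each on the resource itself, creating one subnet per CIDR entry:

```hcl
resource "aws_subnet" "subnetp" {
  for_each = { for s in var.cidr_blocks_subnet : s.Name => s.CidrBlock }

  vpc_id                  = aws_vpc.vpc22.id
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available.names[1]
  cidr_block              = each.value

  tags = merge({ Name = each.key }, var.default_tags)
}
```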

Posts: 1

Participants: 1

Read full topic

Creating dualstack alias in aws_route53_record


@HashiBeliver wrote:

Hi,
I wrote the following in my tf file:

resource "aws_route53_record" "just_a_name" {
  zone_id = "zone_id_string"
  name    = "just_a_name2"
  type    = "A"

  alias {
    name                   = aws_lb.lb_object.dns_name
    zone_id                = aws_lb.lb_object.zone_id
    evaluate_target_health = false
  }
}

Running terraform apply results in an A record with the DNS alias. How do I add the dualstack prefix?
Thanks
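One approach that may work (a sketch): since the alias name is just a string, the dualstack prefix can be prepended to the load balancer's DNS name directly:

```hcl
resource "aws_route53_record" "just_a_name" {
  zone_id = "zone_id_string"
  name    = "just_a_name2"
  type    = "A"

  alias {
    # prepend "dualstack." to the ALB DNS name; the rest is unchanged
    name                   = "dualstack.${aws_lb.lb_object.dns_name}"
    zone_id                = aws_lb.lb_object.zone_id
    evaluate_target_health = false
  }
}
```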

Posts: 1

Participants: 1

Read full topic


Azure Front Door multiple endpoints in list


@theRyanElliott wrote:

Currently, we are using a variables file to configure our Azure Front Door resource. We are also deploying an Azure Key Vault resource. Our Key Vault config uses an outputs.tf so that the Front Door config can pick up those variables and utilize them (Key Vault certificate secret name, Key Vault certificate current version, Key Vault ID).

We want to switch up the config and use a dynamic block, with for_each set to a variable of type = list(map(string)), to configure our Front Door frontend endpoints.

Is there any way to use variables inside another variable in the list? Or what would be the best way to deploy multiple frontend endpoints dynamically with a list?
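A sketch of the dynamic-block shape (endpoint names and hostnames are hypothetical). Values produced by other resources can't appear inside a variable's default, so they would be merged in via locals before the dynamic block iterates:

```hcl
variable "frontend_endpoints" {
  type = list(map(string))
  default = [
    { name = "fe1", host_name = "fe1.example.com" }, # hypothetical
    { name = "fe2", host_name = "fe2.example.com" },
  ]
}

resource "azurerm_frontdoor" "this" {
  # ...other required arguments elided...

  dynamic "frontend_endpoint" {
    for_each = var.frontend_endpoints
    content {
      name      = frontend_endpoint.value["name"]
      host_name = frontend_endpoint.value["host_name"]
    }
  }
}
```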

Posts: 1

Participants: 1

Read full topic

Create 2 resource with for_each where one of resource 2 elements needs to refer to instance1


@qjqdave1 wrote:

Hi,

I have 2 resources each created using “for_each” as shown below in a generic fashion.

Each of the instances of the 2nd resource has an element (of type string) whose value needs to be the full reference to one of the instances of resource 1.

The issue I face is that the reference needs to include a key surrounded with quotes, such as “key”. This is required for the reference to be resolved, and removing the quotes fails to reference the instance with the specific key.

My current implementation does not seem to work: resource1_instance_ref is a string, so it can’t contain another string to surround the resource1 keys. I tried escapes, but that did not help, and it resulted in a URL-encoded value for " being passed to the API.

I was wondering if there is a solution to this scenario?

resource "resource_type1" "type1_instances" {
  for_each  = dataset1
  element1a = ...
  element1b = ...
}

resource "resource_type2" "type2_instances" {
  for_each               = dataset2
  resource1_instance_ref = resource_type1.type1_instances["resource1_instance_key"]
  element2a              = ...
  element2b              = ...
}

where one of the elements of dataset2 is resource1_instance_ref, defined as a string.

variable "dataset2" {
  type = map(object({
    resource1_key          = string
    resource1_instance_ref = string
  }))
}

Referring to this document, specifically the for_each part, I am interested to know whether it is possible to construct such references and multiple addresses dynamically, as opposed to hard-coding the “key” part.

Thanks
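If the goal is to resolve the reference dynamically rather than store it as a string, one sketch (the resource types and the .id attribute are placeholders from the question) is to keep only the key in dataset2 and index the first resource with it inside the second resource, where the quoting problem disappears because the key is an ordinary string value:

```hcl
resource "resource_type2" "type2_instances" {
  for_each = var.dataset2

  # each.value.resource1_key selects the matching instance of resource 1,
  # so no quote characters ever need to be embedded in the stored value
  resource1_instance_ref = resource_type1.type1_instances[each.value.resource1_key].id
}
```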

Posts: 1

Participants: 1

Read full topic

Starting with output variables


@amrikst wrote:

I am using Azure, and I have started to play with output variables in order to reuse code.

I have run into the following error when I perform a terraform apply

"Error: Unsupported attribute
  on output.tf line 2, in output "azurerm_resource_group":
  2: value = "${azurerm_resource_group.demotest.name.id}"
This value does not have any attributes"

Within the main.tf file I have the following block:

resource "azurerm_resource_group" "demotest" {
  name     = "rg001"
  location = "uksouth"
}

When I refer to the Terraform online docs, I can see that azurerm_resource_group has an attribute called id (https://www.terraform.io/docs/providers/azurerm/r/resource_group.html), so I am unsure why I get the error above. I get a similar error when I use the following block within the output.tf for my network module; again, the guid attribute is listed in the Terraform online docs.

output "virtual_networkconfiguration_id" {
  value = "${azurerm_virtual_network.example.name.guid}"
}

The goal is to work on outputs so that I can play with re-using code through the use of modules within Azure
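The error comes from chaining attributes: azurerm_resource_group.demotest.name is already a plain string, and a string has no .id (or .guid) attribute. A sketch of the likely intent, reading the attribute directly off the resource:

```hcl
output "azurerm_resource_group_id" {
  value = azurerm_resource_group.demotest.id
}

output "virtual_network_guid" {
  value = azurerm_virtual_network.example.guid
}
```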

Posts: 1

Participants: 1

Read full topic

Azure Terraform SQL Backup Restore


@arvindancloud1982 wrote:

In Azure, using Terraform: I have a SQL backup in blob storage and need to restore it to a VM. I tried azurerm_sql_database, but using Terraform is required. Is there any script to do this?

Posts: 1

Participants: 1

Read full topic

How to use when = Destroy?


@rfc791 wrote:

I am trying to run a script when a module gets removed from a project. The workflow we use is to remove the reference to the module in the .tf file to perform the destroy. This causes the script not to kick off; if I run terraform destroy, then it works. Is there any other method to get this to work?
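A likely explanation: destroy-time provisioners only run while their provisioner block is still present in configuration, so deleting the module reference removes the provisioner before it can fire. A sketch of the syntax, plus one possible workaround (the script path and module name are hypothetical):

```hcl
resource "null_resource" "cleanup" {
  provisioner "local-exec" {
    when    = destroy
    command = "./cleanup.sh" # hypothetical cleanup script
  }
}

# Workaround sketch: destroy the module while it is still configured,
# then remove the reference from the .tf file afterwards:
#   terraform destroy -target=module.example
```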

Posts: 1

Participants: 1

Read full topic


