Channel: Terraform - HashiCorp Discuss

Terraform Repository Example Projects

$
0
0

Hi there, could anyone share or point me to any repositories with example .tf projects, use of functions, etc.?

1 post - 1 participant

Read full topic


Terraform/Jenkins CI/CD with modules from private repo (ssh-key)

Hello,

I have been using Terraform for quite some time now and I really love it. I am also familiar with Terraform Cloud. Recently I have started working on a private Jenkins CI/CD pipeline and it seems I am missing something. I am also familiar with Jenkins and its “credentials” structure etc.

However, I have the following problem, for which I have not been able to find a suitable solution. We are using Terraform from private repositories: the main code is in one repo and the child modules are in separate repos. In the main Terraform code the child modules are called like this:

module "s3-test" { source = "git::ssh://git@stash.prod.tis.loc:12345/s3.git" }

I would like to mention that I have configured Jenkins with credentials and an SSH key to be able to pull the main repo, but during “terraform init” it is not able to pull the child modules because of SSH key access permissions.

Can we tell Terraform on the command line which SSH key to use for that? Has anyone had this problem? There is something like a solution here, but I do not much like it:

Thank you in advance.
Regards
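
A minimal sketch of one workaround, assuming the Jenkins job can export environment variables before running Terraform: terraform init shells out to git for git:: sources, and git honours GIT_SSH_COMMAND, so pointing that at the key Jenkins provides lets the child modules be fetched with the right identity. MODULES_KEY below is a hypothetical file credential exposed by the Jenkins credentials plugin.

# shell step in the Jenkins pipeline (MODULES_KEY is hypothetical)
export GIT_SSH_COMMAND="ssh -i $MODULES_KEY -o IdentitiesOnly=yes -o StrictHostKeyChecking=no"
terraform init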

1 post - 1 participant

Read full topic

Using AWS credential environment variables with TF Cloud

I am trying to pass the access key ID, secret access key, and session token returned by a call to sts.AssumeRole() to my Terraform Cloud workspace. After reviewing the documentation and several posts, here is my current approach, which is failing with “No valid credential sources found for AWS Provider”:

  1. Remote backend correctly configured to point to my TF Cloud Workspace and authenticate using an API token obtained from terraform login.

  2. Variables in a credentials.auto.tfvars file in the same directory as my main.tf file:

aws_access_key = "ASIA......"
aws_secret_key = "[my_secret_key]"
aws_session_token = "[my_session_token]"
  3. Variables in my main.tf file:
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_session_token" {}
  4. Use those variables to create tfe_variables of category “env” in my remote workspace:
locals {
  common_variables = {
    AWS_ACCESS_KEY_ID = var.aws_access_key
    AWS_SECRET_ACCESS_KEY = var.aws_secret_key
    AWS_SESSION_TOKEN = var.aws_session_token
  }
}

resource "tfe_organization" "my-tfe-org" {
  name  = "my-org-name"
  email = "myemail@company.com"
}

resource "tfe_workspace" "my-tfe-workspace" {
  name         = "my-workspace-name"
  organization = tfe_organization.my-tfe-org.id
}

resource "tfe_variable" "shared" {
  for_each = local.common_variables

  workspace_id = tfe_workspace.my-tfe-workspace.id
  category     = "env"
  key          = each.key
  value        = each.value
  sensitive    = true
}
  5. Then I reference the AWS provider and create some resources:
provider "aws" {
  region  = "us-east-1"
}

resource "aws_iam_role" "role" {
...
}

As stated above, the error is: “Error: No valid credential sources found for AWS Provider.”
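
For reference, a minimal sketch under the assumption that the current run itself needs AWS access: the assumed-role credentials can be passed straight to the provider block from the same variables, instead of (or in addition to) being written into the workspace with tfe_variable, since variables created by tfe_variable generally only take effect for later runs in the target workspace, not the run that creates them.

provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  token      = var.aws_session_token # session token argument of the AWS provider
}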

3 posts - 2 participants

Read full topic

Terraform not pulling variable value from a for loop inside a for_each

(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant

Read full topic

Supplying variables to modules

Is there some way to provide variable values to my modules besides supplying them in my module {} section of the root main.tf file? That would make for a very large and ugly file.

I tried putting a terraform.tfvars file in the modules/module directory but that doesn’t work.

The end goal is to manage the variables for each module independently.

Thanks.
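
A minimal sketch of one common pattern, with hypothetical names: group each module's settings into a single object variable in the root, set the values in terraform.tfvars (or a per-module *.auto.tfvars file in the root), and pass the object through once, so the module block in main.tf stays small.

# root variables.tf (hypothetical)
variable "network" {
  type = object({
    cidr_block = string
    az_count   = number
  })
}

# root terraform.tfvars (or network.auto.tfvars)
network = {
  cidr_block = "10.0.0.0/16"
  az_count   = 3
}

# root main.tf: one short block per module; modules/network declares a matching "settings" variable
module "network" {
  source   = "./modules/network"
  settings = var.network
}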

2 posts - 2 participants

Read full topic

Terraform 0.13 Beta 2 released!

The Terraform Team is excited to announce the availability of Terraform 0.13 beta 2.

We’ve fixed a number of issues from the first beta. We’re also incredibly grateful for all the feedback and input we’ve received from the Terraform community.

Please have a look at our changelog and draft upgrade guide for details.

You can download Terraform 0.13 here: https://releases.hashicorp.com/terraform/0.13.0-beta2/

Information about the beta program and getting started can be found here: https://github.com/hashicorp/terraform/blob/guide-v0.13-beta/README.md

Please see the beta guide above for information about reporting issues and providing feedback.

Don’t forget to join us at HashiConf Digital for updates about Terraform, the Terraform Provider Registry, and more.

2 posts - 1 participant

Read full topic

CrossAccount IAM Role in AWS

Is there a way I can create a cross-account IAM role in Terraform? I don't see any option to specify the other account's account ID when creating the role.
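
A minimal sketch, assuming “cross-account” means a role in this account that another account is allowed to assume: the other account's ID goes into the role's trust (assume-role) policy rather than into a dedicated argument. The role name and the 111122223333 account ID are placeholders.

resource "aws_iam_role" "cross_account" {
  name = "cross-account-role" # hypothetical name

  # trust policy: lets principals in the other account assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111122223333:root" } # the other account's ID
    }]
  })
}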

3 posts - 2 participants

Read full topic

Null resources depends_on doesn't work on replacement

Hi,
I am having issues with null_resource depends_on strategies. I am trying to recreate a null_resource whenever the template file used during provisioning changes. The resource whose template file changed is force-replaced correctly; however, the dependent resource is not recreated.

Terraform Version

0.12.26

Terraform Configuration Files

  • resource 1:
data "template_file" "disk_tmp_file" {
    count       =   var.node_count
    template    =   file(var.script_source_tpl)
    vars = {
        disk                =   var.disk
    }
}

resource "null_resource" "first_resource" {
    count       =   var.node_count
    triggers    =   {
        private_ip  =   element(var.private_ip,count.index)
        data        =   element(data.template_file.disk_tmp_file.*.rendered,count.index)
    }
    connection {
        host            =   element(var.private_ip,count.index)
        type            =   var.login_type
        user            =   var.user_name
        password        =   var.instance_password
        port            =   var.port
    }
    provisioner "file" {
        content     =   element(data.template_file.disk_tmp_file.*.rendered,count.index)
        destination =   var.script_destination
    }
}

  • resource 2:
resource "null_resource" "second_resource" {
    depends_on  =   [var.md_depends_on]
}

  • module 1:
module "first_module" {
    source     = ${resource1}
    script_source_tpl               =   "../../scripts/${var.platform}/diskPart.tpl"
}
  • module 2:
module "second_module" {
    source     = ${resource2}
    md_depends_on                       =   [module.first_module]
}
  • md_depends_on var:
variable "md_depends_on" {
    type    =   any
    default =   null
}

Expected Behavior

On template change - Should recreate both the resources - resource 1 and resource 2

Actual Behavior

On template change - Updates only the resource 1

Additional Context

The dependency works perfectly if I create or destroy resources. The problem only occurs when the template file changes and the triggers cause the resource to be replaced.
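
A minimal sketch of one possible workaround, under the assumption that the first module can expose the rendered template (or its null_resource IDs) as an output: depends_on only orders operations and never forces replacement, so the second resource needs its own triggers tied to the same changing value. The md_trigger variable and the rendered_templates output are hypothetical names.

# modules/second (hypothetical)
variable "md_trigger" {
  type    = string
  default = ""
}

resource "null_resource" "second_resource" {
  triggers = {
    upstream = var.md_trigger # changes whenever the template content changes
  }
}

# caller (hypothetical):
# module "second_module" {
#   source     = "..."
#   md_trigger = sha256(join(",", module.first_module.rendered_templates))
# }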

3 posts - 2 participants

Read full topic


Data Remote State Unsupported attribute

Hello everybody,
I am using Terraform 0.12.24 and running into an issue when trying to use data.terraform_remote_state.project.outputs.project_id: Terraform reports that this object does not have an attribute named project_id.
This is happening on terraform plan.
My main.tf looks like this

# ------------------------------------------------------------
# BACKEND BLOCK
# ------------------------------------------------------------

terraform {
  backend "gcs" {}
}

# ------------------------------------------------------------
# PROVIDER BLOCK
# ------------------------------------------------------------

provider "google-beta" {
  credentials = var.credentials_path
  region      = "us-central1"
  version     = "~> 2.0"
}

provider "google" {
  credentials = var.credentials_path
  region      = "us-central1"
  version     = "~> 2.0"
}

# ------------------------------------------------------------
# TERRAFORM REMOTE STATE
# ------------------------------------------------------------

data "terraform_remote_state" "project" {
  backend = "gcs"
  config = {
    bucket      = var.bucket
    prefix      = var.prefix
    credentials = var.credentials_path
  }
}

# ------------------------------------------------------------
# MAIN BLOCK
# ------------------------------------------------------------

module "simple_vpc" {
  source  = "terraform-google-modules/network/google"
  version = "~>2.3"

  project_id   = "${data.terraform_remote_state.project.outputs.project_id}"
  network_name = var.network_name
  routing_mode = var.routing_mode
  subnets = [
    {
      subnet_name           = var.subnet_name
      subnet_ip             = var.subnet_ip
      subnet_region         = var.subnet_region
      subnet_private_access = var.subnet_private_access
      subnet_flow_logs      = var.subnet_flow_logs
      description           = var.description
    },
  ]
}

outputs.tf

output "network_name" {
  value = module.simple_vpc.network_name
}

output "subnets_names" {
  value = module.simple_vpc.subnets_names
}

output "subnets_self_links" {
  value = module.simple_vpc.subnets_self_links
}

output "subnets_ips" {
  value = module.simple_vpc.subnets_ips
}

output "project_id" {
  value       = module.simple_vpc.project_id
  description = "VPC project id"
}

variables.tf

variable "credentials_path" {
  description = "./credentials.json"
}

variable "bucket" {
  description = "GCS bucket name that stores state"
}

variable "prefix" {
  description = "GCS folder name"
}

variable "project_id" {
  description = "Project this vpc should be attached to."
}

variable "network_name" {
  description = "Name of the VPC."
}

variable "routing_mode" {
  description = "Routing mode. GLOBAL or REGIONAL"
  default     = "GLOBAL"
}

variable "subnet_name" {
  description = "Name of the subnet."
}

variable "subnet_ip" {
  description = "Subnet IP CIDR."
}

variable "subnet_region" {
  description = "Region subnet lives in."
}

variable "subnet_private_access" {
  default = "true"
}

variable "subnet_flow_logs" {
  default = "true"
}

variable "description" {
  default = "Deployed through Terraform."
}
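
One small debugging sketch (an assumption about a likely cause, not a confirmed diagnosis): this error usually means the state found under var.bucket/var.prefix does not actually publish a project_id output, so temporarily exposing everything that state does publish can confirm what is available.

# temporary output showing which outputs the "project" remote state exposes
output "remote_project_outputs" {
  value = data.terraform_remote_state.project.outputs
}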

1 post - 1 participant

Read full topic

Terraform Use Case

I am wondering if Terraform can be used in the following scenario:

(A) We have an Azure File Share which has grown rapidly
(B) We need to move files from Azure File Share to Azure Block Storage with parameters on moving files/containers older than X days or greater than Y size

Would it be easier/more maintainable writing an Azure Function to do the same job?

1 post - 1 participant

Read full topic

0.13 beta2 vendored providers

Provider is here:
./terraform.d/plugins/example.com/vendor/onepassword/v0.6.3/darwin_amd64/terraform-provider-onepassword_v0.6.3

And sourced like this:

terraform {
  required_providers {
    onepassword = {
      source  = "example.com/vendor/onepassword"
      version = "0.6.3"
    }
  }
}

I’ve looked at the docs here: https://gist.github.com/mildwonkey/54ce5cf5283d9ea982d952e3c04a5956 and https://github.com/hashicorp/terraform/blob/guide-v0.13-beta/draft-upgrade-guide.md#new-filesystem-layout-for-local-copies-of-providers

As well as reviewing some related issues and PRs on GitHub.

Note: I’ve replaced references to my company domain with example.com

$ terraform init
2020/06/17 18:09:58 [INFO] Terraform version: 0.13.0 beta2
2020/06/17 18:09:58 [INFO] Go runtime version: go1.14.2
2020/06/17 18:09:58 [INFO] CLI args: string{"/usr/local/Cellar/tfenv/2.0.0/versions/0.13.0-beta2/terraform", “init”}
2020/06/17 18:09:58 [DEBUG] Attempting to open CLI config file: /Users/matt/.terraformrc
2020/06/17 18:09:58 [DEBUG] File doesn’t exist, but doesn’t need to. Ignoring.
2020/06/17 18:09:58 [DEBUG] will search for provider plugins in terraform.d/plugins
2020/06/17 18:09:58 [DEBUG] ignoring non-existing provider search directory /Users/matt/.terraform.d/plugins
2020/06/17 18:09:58 [DEBUG] ignoring non-existing provider search directory /Users/matt/Library/Application Support/io.terraform/plugins
2020/06/17 18:09:58 [DEBUG] ignoring non-existing provider search directory /Library/Application Support/io.terraform/plugins
2020/06/17 18:09:58 [INFO] CLI command args: string{“init”}
2020/06/17 18:09:58 [TRACE] ModuleInstaller: installing child modules for . into .terraform/modules
Initializing modules…
2020/06/17 18:09:58 [DEBUG] Module installer: begin k8svnet
2020/06/17 18:09:58 [TRACE] ModuleInstaller: Module installer: k8svnet already installed in …/modules/masteryvnet
2020/06/17 18:09:58 [DEBUG] Module installer: begin kubectl
2020/06/17 18:09:58 [TRACE] ModuleInstaller: Module installer: kubectl already installed in …/modules/test
2020/06/17 18:09:58 [DEBUG] Module installer: begin masteryk8s
2020/06/17 18:09:58 [TRACE] ModuleInstaller: Module installer: masteryk8s already installed in …/modules/masteryk8s
2020/06/17 18:09:58 [TRACE] modsdir: writing modules manifest to .terraform/modules/modules.json

Initializing the backend…
2020/06/17 18:09:58 [TRACE] Meta.Backend: no config given or present on disk, so returning nil config
2020/06/17 18:09:58 [TRACE] Meta.Backend: backend has not previously been initialized in this working directory
2020/06/17 18:09:58 [DEBUG] New state was assigned lineage “0bb35e91-ce2d-18dd-036d-d3b063fae6cc”
2020/06/17 18:09:58 [TRACE] Meta.Backend: using default local state only (no backend configuration, and no existing initialized backend)
2020/06/17 18:09:58 [TRACE] Meta.Backend: instantiated backend of type
2020/06/17 18:09:58 [TRACE] providercache.fillMetaCache: scanning directory .terraform/plugins
2020/06/17 18:09:58 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/azuread v0.10.0 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/azuread/0.10.0/darwin_amd64
2020/06/17 18:09:58 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/azurerm v2.14.0 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/azurerm/2.14.0/darwin_amd64
2020/06/17 18:09:58 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/null v2.1.2 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/null/2.1.2/darwin_amd64
2020/06/17 18:09:58 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/random v2.2.1 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/random/2.2.1/darwin_amd64
2020/06/17 18:09:58 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/random/2.2.1/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/random 2.2.1
2020/06/17 18:09:58 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/azuread/0.10.0/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/azuread 0.10.0
2020/06/17 18:09:58 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/azurerm/2.14.0/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/azurerm 2.14.0
2020/06/17 18:09:58 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/null/2.1.2/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/null 2.1.2
2020/06/17 18:09:58 [TRACE] providercache.fillMetaCache: using cached result from previous scan of .terraform/plugins
2020/06/17 18:09:58 [TRACE] providercache.fillMetaCache: using cached result from previous scan of .terraform/plugins
2020/06/17 18:09:59 [TRACE] providercache.fillMetaCache: using cached result from previous scan of .terraform/plugins
2020/06/17 18:09:59 [DEBUG] checking for provisioner in “.”
2020/06/17 18:09:59 [DEBUG] checking for provisioner in “/usr/local/Cellar/tfenv/2.0.0/versions/0.13.0-beta2”
2020/06/17 18:09:59 [DEBUG] checking for provisioner in “terraform.d/plugins/darwin_amd64”
2020/06/17 18:09:59 [DEBUG] checking for provisioner in “.terraform/plugins/darwin_amd64”
2020/06/17 18:09:59 [TRACE] Meta.Backend: backend does not support operations, so wrapping it in a local backend
2020/06/17 18:09:59 [TRACE] backend/local: state manager for workspace “default” will:

  • read initial snapshot from terraform.tfstate
  • write new snapshots to terraform.tfstate
  • create any backup at terraform.tfstate.backup
    2020/06/17 18:09:59 [TRACE] statemgr.Filesystem: reading initial snapshot from terraform.tfstate
    2020/06/17 18:09:59 [TRACE] statemgr.Filesystem: snapshot file has nil snapshot, but that’s okay
    2020/06/17 18:09:59 [TRACE] statemgr.Filesystem: read nil snapshot

2020/06/17 18:09:59 [TRACE] providercache.fillMetaCache: scanning directory .terraform/plugins
Initializing provider plugins…
2020/06/17 18:09:59 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/azuread v0.10.0 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/azuread/0.10.0/darwin_amd64
2020/06/17 18:09:59 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/azurerm v2.14.0 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/azurerm/2.14.0/darwin_amd64
2020/06/17 18:09:59 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/null v2.1.2 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/null/2.1.2/darwin_amd64
2020/06/17 18:09:59 [TRACE] getproviders.SearchLocalDirectory: found registry.terraform.io/hashicorp/random v2.2.1 for darwin_amd64 at .terraform/plugins/registry.terraform.io/hashicorp/random/2.2.1/darwin_amd64
2020/06/17 18:09:59 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/null/2.1.2/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/null 2.1.2
2020/06/17 18:09:59 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/random/2.2.1/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/random 2.2.1
2020/06/17 18:09:59 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/azuread/0.10.0/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/azuread 0.10.0
2020/06/17 18:09:59 [TRACE] providercache.fillMetaCache: including .terraform/plugins/registry.terraform.io/hashicorp/azurerm/2.14.0/darwin_amd64 as a candidate package for registry.terraform.io/hashicorp/azurerm 2.14.0
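
For comparison, a sketch of a layout that matches the cached provider paths in the log above (an observation, not a confirmed fix): the version segment of those cache directories has no “v” prefix, so mirroring that for the vendored provider may be worth trying.

# hypothetical re-layout of the vendored provider, dropping the v prefix on the version directory
mkdir -p terraform.d/plugins/example.com/vendor/onepassword/0.6.3/darwin_amd64
cp terraform-provider-onepassword_v0.6.3 terraform.d/plugins/example.com/vendor/onepassword/0.6.3/darwin_amd64/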

1 post - 1 participant

Read full topic

0.13 Plan: `Changes to Outputs`

With 0.13.0-beta2 I’m seeing that we now get Changes to Outputs: after the actual plan modifications. Is there a way to suppress this (both for plan and apply)? Or at least swap things so the actual resource changes come last? One of the biggest issues with the current layout is that putting Changes to Outputs after the actual resource modifications shoves the add/change/destroy counts into the middle, which makes it much harder to verify at a glance whether a plan meets expectations.

1 post - 1 participant

Read full topic

How to hide sensitive data when calling a custom script in null resource

Hi All,

When calling a custom script using a null_resource, my input parameters are displayed in plain text. Example:

provisioner "local-exec" {
    when    = "destroy"
    command = "./scripts/smctl-delete-broker.sh ${var.cf_org_name} ${var.cf_ending_domain_name} ${var.cf_username} ${var.cf_password} ${var.service_broker_name} ${var.sub_account_id}"
  }

All the parameters passed to the script are displayed as below:

module.service-broker.null_resource.smctl-registration (local-exec): Executing: ["/bin/sh" "-c" "./scripts/smctl-delete-broker.sh myOrg mydomain.com dummy@gmail.com 12345 myBrokername a9d29bbc-bc4a-467f-bbe1-753a7509f0f6"]

Is it possible to hide that? Or is there a better way to implement this using a different method than null_resource?

Thanks :slight_smile:
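
A minimal sketch of one option, assuming the script can be changed to read its inputs from environment variables: local-exec accepts an environment map, and those values are not included in the Executing: [...] command line that gets logged (they may still surface elsewhere, e.g. in state, so this only hides them from the console output).

provisioner "local-exec" {
  when    = "destroy"
  command = "./scripts/smctl-delete-broker.sh" # hypothetical: script reads its inputs from the environment
  environment = {
    CF_ORG_NAME         = var.cf_org_name
    CF_ENDING_DOMAIN    = var.cf_ending_domain_name
    CF_USERNAME         = var.cf_username
    CF_PASSWORD         = var.cf_password # not shown in the logged command line
    SERVICE_BROKER_NAME = var.service_broker_name
    SUB_ACCOUNT_ID      = var.sub_account_id
  }
}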

1 post - 1 participant

Read full topic

For_each on aws_subnet_ids

Hello… terraform newbie here =)

I’m trying to follow this doc using for_each. Here’s my twist:

data "aws_subnet_ids" "public" {
    vpc_id = aws_vpc.js_vpc.id

    tags = {
        Scope = "Public"
    }
}

resource "aws_route_table_association" "public" {
    depends_on = [aws_subnet.public_subnets]
    for_each = data.aws_subnet_ids.public.ids
    subnet_id = each.value
    route_table_id = aws_route_table.public.id
}

Understandably, for_each doesn’t know what’s going on and screamed at me with:

$ terraform plan
...
------------------------------------------------------------------------
Error: Invalid for_each argument

  on ../../modules/vpc/route-tables.tf line 88, in resource "aws_route_table_association" "public":
  88:     for_each = data.aws_subnet_ids.public.ids

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

So I proceeded to append -target to the command:

$ terraform plan -target=aws_subnet.public_subnets

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

Warning: Resource targeting is in effect

You are creating a plan with the -target option, which means that the result
of this plan may not represent all of the changes requested by the current
configuration.
		
The -target option is not for routine use, and is provided only for
exceptional situations such as recovering from errors or mistakes, or when
Terraform specifically suggests to use it as part of an error message.

This suggests to me that Terraform didn’t do anything… What’s the recommended command to go about this?
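
A minimal sketch of an alternative, assuming aws_subnet.public_subnets is created with for_each in the same configuration: iterating over the managed subnet resource instead of the aws_subnet_ids data source gives for_each keys that are known at plan time, so neither -target nor a two-step apply is needed.

resource "aws_route_table_association" "public" {
  # keys come from the subnet resource's own for_each, so they are known at plan time
  for_each       = aws_subnet.public_subnets
  subnet_id      = each.value.id
  route_table_id = aws_route_table.public.id

  # if public_subnets uses count instead:
  #   for_each = { for i, s in aws_subnet.public_subnets : tostring(i) => s }
}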

1 post - 1 participant

Read full topic

Default_action block conditional or a variable for wafv2 resource

I have following terraform resource:

resource "aws_wafv2_web_acl" "main" {
  name  = var.name_prefix
  scope = "REGIONAL"

  default_action {
    block {}
  }
}

Question: how can I make the default_action block configurable, so that it can be passed in as a variable? Is there some solution using a dynamic block that I am not aware of?

I have tried:

  dynamic "default_action" {
    for_each = var.default_action
    content {
      default_action.value
    }
  }

but this is simply failing with an error: An argument or block definition is required here. To set an argument, use the equals sign “=” to introduce the argument value.

Please advise :slight_smile:
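
A minimal sketch of one way to do it, assuming var.default_action is reduced to a string (“allow” or “block”): a dynamic block's content {} must contain arguments or nested blocks rather than a bare value, so the trick is to emit exactly one of the two empty nested blocks. The visibility_config block is required by the provider and was simply omitted from the snippet above.

variable "default_action" {
  type    = string
  default = "block" # or "allow"
}

resource "aws_wafv2_web_acl" "main" {
  name  = var.name_prefix
  scope = "REGIONAL"

  default_action {
    dynamic "allow" {
      for_each = var.default_action == "allow" ? [1] : []
      content {}
    }
    dynamic "block" {
      for_each = var.default_action == "block" ? [1] : []
      content {}
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = var.name_prefix
    sampled_requests_enabled   = false
  }
}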

1 post - 1 participant

Read full topic


An error for map variable after upgrading to Terraform 0.12.26

Hi,

I have the following error after upgrading from Terraform 0.11.14 to 0.12.26:

Environment:
- provider “helm” (hashicorp/helm) 1.2.3
- provider “azurerm” (hashicorp/azurerm) 1.33.1
- provider “kubernetes” (hashicorp/kubernetes) 1.10.0
- provider “tls” (hashicorp/tls) 2.1.1

Command:
terraform plan -var-file myvalues.tfvars

Error message:

Error: Invalid index

on main.tf line 196, in module "mod1":
196: ingress_host = var.ingress["myval1"]
|----------------
| var.ingress is map of string with 1 element

The given key does not identify an element in this collection value.

Configuration files:

main.tf:

module "mod1" {
  source               = "../modules/mod1"
  ingress_host         = var.ingress["myval1"]   #I believe here's the offending line
  ...
}

variables.tf:

variable "ingress" {
  type        = map(string)
  description = "Map of ingress hosts"

  default = {
    myval1    = "address1.domain.com"
    myval2    = "address1.domain.com"
    myval3    = "address1.domain.com"
  }
}

myvalues.tfvars file:

...
ingress={myval1="address1.mydomain.com", myval2="address2.mydomain.com", myval3="address3.mydomain.com"}
...

Module mod1 variables.tf:

variable "ingress_host" {
  type        = string
  default     = ""
  description = ""
}

And honestly, I don’t understand what’s wrong with the var.ingress["myval1"] line. The original line for Terraform 0.11.14 was:

ingress_host = "${var.ingress["myval1"]}"

so I can’t see any issue with the new format.

The upgrade from 0.11.14 to 0.12.26 went fine, without any issues reported by the ‘terraform 0.12upgrade’ command. The problem is that I can’t create new environments using the new version of Terraform. I would like to make sure that I can declare a variable (a map of strings) and then pass particular values as strings to locally installed modules.

Could you help me with this issue, please?
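
A small debugging sketch (an assumption about where to look, not a confirmed fix): since the error says the map arrived with only one element, printing the whole map and using lookup() with a default while investigating makes it easy to see which keys Terraform actually parsed from myvalues.tfvars.

# temporary debugging aid
output "ingress_debug" {
  value = var.ingress # shows exactly which keys were parsed
}

# a more forgiving expression for main.tf while investigating:
#   ingress_host = lookup(var.ingress, "myval1", "missing-myval1")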

1 post - 1 participant

Read full topic

Terraform File Naming Best Practice?

I would like to ask what the best way is to organise related Terraform resources.

For example.

If you have a use case in which you want to do the following things.
The idea is to generate a key and a signed URL for the key file, so that the key is shared securely with the intended audience only.

  • Create a Google Cloud bucket
  • Create a Google service account
  • Create a Google service account key
  • Upload the service account key to the bucket
  • Create a signed URL for the uploaded key file
  • Upload the signed URL file to the bucket

Now my question is: what is the best way to organise this use case?
Is it good to put it all in one file, something like generate_signed_url.tf, or should it be split across multiple files like:

  • bucket.tf
  • serviceaccount.tf
  • iam.tf
  • signedurl.tf

1 post - 1 participant

Read full topic

Terraform Plan JSON data inconsistently ordered

Greetings,

Our TF workflow uses a pair of pipelines (one for the plan (PR), one for the apply). We are mindful of stale PRs, but to ensure there are no changes between the time the PR is planned and the time it’s applied, we’d like to re-plan the PR and calculate a checksum based on the JSON output of the plan file.

The problem is, the JSON output from the same plan file can change from one invocation to the next. For example, given the file myplan.tfout, if one were to run terraform show -json myplan.tfout | shasum twice and then compare the outputs, it may actually produce different results. It doesn’t always happen, but it’s almost sure to be a problem in stacks with a large number of resources.

I’ve traced this behaviour back to a single node in the JSON, prior_state: the sorting of this data is inconsistent (some kind of parallel data assembly, maybe).

Given this, can anybody enlighten me as to the purpose of this data in the planfile? I am not sure its presence even figures into a calculation of change. Moreover, should this behaviour be considered a bug?
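
As a stopgap for the checksum use case, a minimal sketch assuming jq is available on the build agents: normalising the JSON (sorted keys) and dropping the prior_state node before hashing keeps the checksum stable even if that node's internal ordering varies between invocations.

# hash the plan JSON with keys sorted and the unstable prior_state node removed
terraform show -json myplan.tfout | jq -S 'del(.prior_state)' | shasum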

1 post - 1 participant

Read full topic

Route53 - Alias name adding "." to end such as "website.com." instead of "website.com"

Hello All,

I am new to Terraform and have spent at most a few hours in the program. I reported this issue on GitHub but also wanted to cross-post here. Does anyone have suggestions or a workaround, besides going into AWS and removing the period from the alias URL?

If the configuration below is run, the alias shows up in AWS as “s3-website.us-east-2.amazonaws.com.” (with a trailing dot) instead of “s3-website.us-east-2.amazonaws.com”.

resource "aws_route53_record" "web_secondary" {
  zone_id        = "xxxxxx"
  name           = "xxx.net"
  type           = "A"
  set_identifier = "Secondary"

  alias {
    name                   = "s3-website.us-east-2.amazonaws.com" # aws_s3_bucket.b.website_domain
    zone_id                = aws_s3_bucket.b.hosted_zone_id
    evaluate_target_health = false
  }

  failover_routing_policy {
    type = "SECONDARY"
  }
}

1 post - 1 participant

Read full topic

Setting Azure Monitor Diagnostic Settings to NSG

Hi everyone,

First off, thanks in advance for taking the time to read through my subject!

I’m experiencing an issue with azurerm_monitor_diagnostic_setting. I’m trying to create diagnostic log settings for a Network Security Group, targeting the NSG by its ID, but it returns an “Unknown service error”. Has anyone faced this error when setting diagnostic log settings for an NSG?
Thanks in advance.

Here is a simple version of the code that I’m using:
resource "azurerm_network_security_group" "my_nsg" {
  name                = "00test-nsg"
  location            = "West Europe"
  resource_group_name = "test_RG"

  tags = {
    environment = "Terraform test"
  }
}

variable "nsg_log_category" {
  type    = list(string)
  default = ["NetworkSecurityGroupEvent", "NetworkSecurityGroupRuleCounter"]
}

resource "azurerm_monitor_diagnostic_setting" "nsg_diagnostic_setting" {
  name                       = "00testDiagnostics-NSG"
  target_resource_id         = azurerm_network_security_group.my_nsg.id
  storage_account_id         = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_RG/providers/Microsoft.Storage/storageAccounts/00teststorage"
  log_analytics_workspace_id = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxx/resourcegroups/test_RG/providers/microsoft.operationalinsights/workspaces/test-analytics"

  dynamic "log" {
    for_each = var.nsg_log_category

    content {
      category = log.value
      enabled  = true

      retention_policy {
        enabled = true
        days    = 365
      }
    }
  }

  metric {
    category = "AllMetrics"

    retention_policy {
      enabled = true
      days    = 365
    }
  }
}

The error that I’m getting is below:
Error: Error creating Monitor Diagnostics Setting "00testDiagnostics-NSG" for Resource "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/test_RG/providers/Microsoft.Network/networkSecurityGroups/00test-nsg": insights.DiagnosticSettingsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="Unknown" Message="Unknown service error" Details=[{"code":"BadRequest","message":""}]

1 post - 1 participant

Read full topic
