Channel: Terraform - HashiCorp Discuss

How to maintain different state paths in a Terraform S3 backend using Terragrunt


@venu1428 wrote:

I need to store state files for different accounts under different paths in a single S3 bucket, without hardcoding the values; I need to supply them dynamically from an app.

I have seen some solutions involving Terragrunt, but I don't know how to pass a different path for the S3 backend dynamically.

I have also seen solutions using terraform init -backend-config=... and -reconfigure from the CLI, but I need to drive this from a UI.

The repo will live in one place; each time, I need to pass the values from the app dynamically.

Please suggest some solutions, ideally with examples of how to achieve this with Terragrunt.
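
For illustration, a minimal terragrunt.hcl sketch, assuming one shared bucket (the bucket name, region, and environment variable here are hypothetical): path_relative_to_include() gives each account directory its own state key, and get_env() lets an external app inject values at run time.

remote_state {
  backend = "s3"
  config = {
    bucket  = "my-shared-state-bucket"                          # hypothetical bucket
    # each account/component directory gets its own key in the same bucket
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "us-east-1"
    profile = get_env("TF_STATE_PROFILE", "default")            # injected per account
  }
}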

Thanks in advance.

Posts: 1

Participants: 1



VCS Integration


@venkatd02 wrote:

Hi Team,

I am trying to integrate Terraform Cloud with GitHub Enterprise.
I am getting the error below:

Error

There was a problem connecting the OAuth client to the VCS provider. Please verify the URL, credentials, and permissions of the OAuth application and try again.

GitHub Enterprise is running on a separate EC2 instance.
I have created a new OAuth application in GitHub Enterprise and generated the client ID and client secret.

I also created a VCS provider in Terraform Cloud, supplying the GitHub Enterprise URLs and API URL.

Please suggest what configuration I am missing.

Posts: 1

Participants: 1


Working with list of maps


@eladazary wrote:

Hi,

In order to create a MongoDB cluster, I need to create a resource that contains some static and some dynamic configuration.

I would like to take the dynamic configuration section and insert two different sets of values into it.

Let me clarify, please check the following resource:

resource "mongodbatlas_cluster" "mongo_cluster_two_regions" {

  project_id                  = var.mongo_project_id
  name                        = var.mongo_cluster_name
  num_shards                  = var.mongo_num_shards
  replication_factor          = var.mongo_replication_factor
  backup_enabled              = var.mongo_backup_boolean
  mongo_db_major_version      = var.mongo_version
  provider_name               = var.mongo_provider_name
  disk_size_gb                = var.mongo_disk_size
  provider_disk_iops          = var.mongo_disk_iops
  provider_volume_type        = var.mongo_volume_type
  provider_encrypt_ebs_volume = var.mongo_encryption_boolean
  provider_instance_size_name = var.mongo_instance_size
  cluster_type                = var.mongo_cluster_type
  replication_specs {
    num_shards = var.mongo_replication_num_shards

    # dynamic block (Terraform 0.12+): generates one regions_config
    # block per element of var.regions_config
    dynamic "regions_config" {
      for_each = var.regions_config
      content {
        region_name     = regions_config.value.region_name
        electable_nodes = regions_config.value.electable_nodes
        priority        = regions_config.value.priority
        read_only_nodes = regions_config.value.read_only_nodes
      }
    }
  }
}

For my purposes I would like to create a list of maps which will look like this:

regions_config = [
{
  region_name = "eu-west-1"
  electable_nodes = "2"
  priority = "7"
  read_only_nodes = "0"
},
{
  region_name = "eu-central-1"
  electable_nodes = "1"
  priority = "6"
  read_only_nodes = "0"
}
]
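
Declared as a typed input variable (a sketch, assuming Terraform 0.12 type-constraint syntax), that list would be:

variable "regions_config" {
  type = list(object({
    region_name     = string
    electable_nodes = number
    priority        = number
    read_only_nodes = number
  }))
}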

Note that I don't want to create several clusters, so I guess count won't help me; I just want to add additional regions_config sections to the specified cluster.

The resource should look like this:

resource "mongodbatlas_cluster" "cluster-test" {
  project_id     = "<YOUR-PROJECT-ID>"
  name           = "cluster-test-multi-region"
  disk_size_gb   = 100
  num_shards     = 1
  backup_enabled = true
  cluster_type   = "REPLICASET"

  //Provider Settings "block"
  provider_name               = "AWS"
  provider_disk_iops          = 300
  provider_volume_type        = "STANDARD"
  provider_instance_size_name = "M10"

  replication_specs {
    num_shards = 1
    regions_config {
      region_name     = "US_EAST_1"
      electable_nodes = 3
      priority        = 7
      read_only_nodes = 0
    }
    regions_config {
      region_name     = "US_EAST_2"
      electable_nodes = 2
      priority        = 6
      read_only_nodes = 0
    }
    regions_config {
      region_name     = "US_WEST_1"
      electable_nodes = 2
      priority        = 5
      read_only_nodes = 2
    }
  }
}

Please assist.

Thanks !

Posts: 1

Participants: 1


Terraform 0.11 Support and Development Question


@virtualbubble wrote:

We currently have all our modules configured to use Terraform 0.11 and are planning to move to 0.12. I have a couple of questions to help prioritise the upgrade:

  • Is Terraform 0.11 still supported, and if so, when will support end?
  • When will the providers for 0.11 stop being actively supported and updated?

I would like to know this so that we can plan how we update our modules. Since we use versioning, we can pick and choose which modules to update and when. We could also choose to create new modules from scratch for immutable infrastructure rather than upgrade the existing code.

Posts: 1

Participants: 1


Way to work with directory structure


@bentinata wrote:

Hello everyone.

I’ve been using Terraform for around 6 months now, and it’s been great.
I have a directory layout where a shared directory is referenced using remote_state.
For example:

├── provider.tf
├── shared
│   ├── main.tf
│   ├── outputs.tf
│   └── provider.tf -> ../provider.tf
├── prod
│   ├── a
│   │   ├── data.tf
│   │   ├── main.tf
│   │   ├── provider.tf -> ../../provider.tf
│   │   └── variables.tf
│   └── b
│       ├── data.tf
│       ├── main.tf
│       ├── provider.tf -> ../../provider.tf
│       └── variables.tf
└── dev
    ├── a
    │   ├── data.tf
    │   ├── main.tf
    │   ├── provider.tf -> ../../provider.tf
    │   └── variables.tf
    └── b
        ├── data.tf
        ├── main.tf
        ├── provider.tf -> ../../provider.tf
        └── variables.tf

How do I arrange things so that when shared/ is updated, everything that depends on it is also updated?
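
For context, the dependent directories would read the shared outputs with a data source along these lines (a sketch, assuming an S3 backend; the bucket and key are hypothetical):

data "terraform_remote_state" "shared" {
  backend = "s3"
  config = {
    bucket = "my-state-bucket"
    key    = "shared/terraform.tfstate"
    region = "us-east-1"
  }
}

# consumed downstream as, e.g., data.terraform_remote_state.shared.outputs.vpc_id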

Thanks.

Posts: 5

Participants: 3


Notification - Post webhook to Splunk Cloud


@fraserc182 wrote:

I am looking to configure the notification settings in our Terraform Cloud environment.
Ideally, I would like to post a webhook to our Splunk Cloud instance.
From what I can see this should be possible, since I can create an HTTPS endpoint in Splunk. But when I try it, I just get an error stating the endpoint cannot be reached.

Has anyone managed to get this working, or am I totally misunderstanding webhooks?

Posts: 1

Participants: 1


Terraform machine readable output


@dimkinv wrote:

Hello,
I’m running the Terraform CLI from a Node.js application, and I want to produce machine-readable (JSON/YAML) logs from the terraform apply command.
I can’t seem to find an option for this. Is it only possible to produce human-readable logs from the CLI?
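
One partial option that does exist in Terraform 0.12+ (a sketch; it covers the plan rather than live apply output): save a plan file and render it as JSON with terraform show.

$ terraform plan -out=tfplan
$ terraform show -json tfplan > plan.json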

Thanks for the help

Posts: 2

Participants: 2


Behavior of random_password


@Justin-DynamicD wrote:

A simple question really, but one the docs don’t make very clear.

If I use random_password to generate a password for a service, will the next apply rotate the password, or honor the original one stored in the Terraform state?

In other words, would I have to wrap the resulting account resource in an “ignore changes” lifecycle block to keep it from rotating the password on every apply?
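
For reference, a minimal sketch assuming the hashicorp/random provider: the generated value is persisted in state and reused on later applies; it is only regenerated when the resource's arguments or keepers change.

resource "random_password" "svc" {
  length  = 24
  special = true

  # regeneration happens only when a keeper changes;
  # bumping this string would force a rotation
  keepers = {
    rotation = "v1"
  }
}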

Posts: 1

Participants: 1



Terraform Module is erroring out "Error: module "customer-services": "APIGW_CUSTOMER_DOMAIN_FLAG" is not a valid argument"


@avasant21 wrote:

I have a variable "APIGW_CUSTOMER_DOMAIN_FLAG" in the variable.tf file with a default value of "false". I pass the value to a variable in a module called customer-services. In the customer-services module, the same variable is declared in the variables file and referenced in an S3 object resource. I am getting this error:

Error: module "customer-services": "APIGW_CUSTOMER_DOMAIN_FLAG" is not a valid argument

But it works in another location when pulled from the master branch as part of the automation.
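
One thing worth ruling out (an assumption, not a confirmed diagnosis): Terraform 0.11 caches module code under .terraform/modules, and a cached copy that predates the newly added variable would be rejected in exactly this way. Refreshing the module cache may help:

$ terraform get -update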

The variable representation:

$ grep APIGW_CUSTOMER_DOMAIN_FLAG *.tf

AUTOGEN-variable-provider.tf:variable "APIGW_CUSTOMER_DOMAIN_FLAG" {default = "false"}
customer-services-module.tf:  APIGW_CUSTOMER_DOMAIN_FLAG = "${var.APIGW_CUSTOMER_DOMAIN_FLAG}"

$ grep APIGW_CUSTOMER_DOMAIN_FLAG customer-services/*.tf

customer-services/apigw-customer-info-out.tf:  cust-domain-flag = "${var.APIGW_CUSTOMER_DOMAIN_FLAG}" # Added as part of SC-8851
customer-services/apigw-customer-services-var.tf:variable "APIGW_CUSTOMER_DOMAIN_FLAG" {} # Added as part of SC-8851

$ terraform init

Initializing modules…

  • module.deploy-instances-01030003063
  • module.deploy-instances-01030003084
  • module.deploy-instances-01040003175
  • module.customer-services

Error: module "customer-services": "APIGW_CUSTOMER_DOMAIN_FLAG" is not a valid argument

Attaching the files

terraform_files.txt

Posts: 1

Participants: 1


Assign variable using tfe_variable


@bentinata wrote:

While experimenting with managing Terraform Cloud using the tfe_* provider, I had the idea of managing Terraform Cloud workspace variables with Terraform itself, keeping a “meta” workspace to manually add or remove variables. I’m having difficulty reusing terraform.tfvars, since you can’t iterate over var like this:

variable "shared_vars" {
  default = ["aws_access_key", "aws_secret_key"]
}

resource "tfe_variable" "shared" {
  count = "${length(var.shared_vars)}"

  workspace_id = tfe_workspace.shared.id
  category     = "terraform"
  key          = var.shared_vars[count.index]
  
  # this got error "var object cannot be accessed directly"
  #value = "${lookup(var, var.shared_vars[count.index])}"
  
  # this, too, got error "var object cannot be accessed directly"
  #value = var[var.shared_vars[count.index]]

  # this evaluate as plain "var.variablename"
  #value = "${format("var.%s", var.shared_vars[count.index])}"
}

I’ve also tried defining the variables as a map, using var.map instead of var:

variable "aws_access_key" {}
variable "aws_secret_key" {}

variable "map"
  default = {
    # this got error "variables not allowed"
    aws_access_key = var.aws_access_key
    aws_secret_key = var.aws_secret_key
  }
}

# later on tfe_variables
value = "${lookup(var.map, var.shared_vars[count.index])}"

A workaround is to define the map variable in terraform.tfvars.json, but that feels ugly and backwards. I’d like to know what I can do with Terraform and Terraform Cloud here. I feel like tfe_workspace should just accept a variables argument instead of making each variable its own resource.
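
For what it’s worth, a sketch of that map-based workaround (assuming Terraform 0.12.6+ for resource for_each; the variable contents are hypothetical):

variable "shared" {
  type = map(string)
}

resource "tfe_variable" "shared" {
  # one workspace variable per map entry
  for_each = var.shared

  workspace_id = tfe_workspace.shared.id
  category     = "terraform"
  key          = each.key
  value        = each.value
  sensitive    = true
}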

Any thoughts on this?

Posts: 1

Participants: 1


Executing a module according to a variable value


@mkelnermishal wrote:

Hey

I would like to execute a module only when a certain variable is set to a specific value. In pseudo-code:

if ok_to_exec == true
  exec module stuff

Please advise on how to do that. I found something that might be similar, but not exactly what I need: in the module call, add the following line:

to_exec_count = "${var.ok_to_exec ? 1 : 0}"

The default for ok_to_exec is "false"; when it is set to true, all resources related to the module are created. Is that correct?
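
As a sketch of that pattern (the ./stuff module path is hypothetical; note that count directly on a module block only arrived in Terraform 0.13):

variable "ok_to_exec" {
  type    = bool
  default = false
}

module "stuff" {
  source        = "./stuff"
  to_exec_count = var.ok_to_exec ? 1 : 0
}

# inside the module, each resource is gated with:
#   count = var.to_exec_count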

thank you in advance,
Michal

Posts: 1

Participants: 1


Problem using for_each and indexing


@nathankodilla wrote:

I have the following code:

data "aws_vpc" "main" {
  for_each = var.vpcs
  id    = each.key
}

resource "mongodbatlas_network_peering" "main" {
  for_each = var.vpcs
  project_id             = "${mongodbatlas_project.main.id}"
  container_id           = "${mongodbatlas_network_container.main.container_id}"
  provider_name          = "AWS"
  accepter_region_name   = "us-east-1"
  aws_account_id         = "${data.aws_caller_identity.current.account_id}"
  route_table_cidr_block = data.aws_vpc.main[each.key].cidr_block
  vpc_id                 = each.key
}

resource "aws_vpc_peering_connection_accepter" "peer" {
  for_each = var.vpcs
  vpc_peering_connection_id = "${mongodbatlas_network_peering.main[each.key].connection_id}"
  auto_accept               = true
}

But trying to access mongodbatlas_network_peering.main with each.key throws the following error:

74:   vpc_peering_connection_id = "${mongodbatlas_network_peering.main[each.key].connection_id}"
    |----------------
    | each.key is "vpc-xxxxxxxxx"
    | mongodbatlas_network_peering.main is tuple with 2 elements

The given key does not identify an element in this collection value: a number
is required

From my understanding, since I used for_each with mongodbatlas_network_peering, I should be able to access it via the var.vpcs map key; the data.aws_vpc.main object works fine that way.
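
One way to check what Terraform is actually tracking (a debugging sketch, not a definitive diagnosis; the addresses below are illustrative): list the state and see whether the peering instances are indexed numerically, which would indicate count-style state left over from an earlier configuration, rather than keyed by map key as for_each expects.

$ terraform state list | grep mongodbatlas_network_peering
# count-style (tuple):  mongodbatlas_network_peering.main[0]
# for_each-style (map): mongodbatlas_network_peering.main["vpc-xxxxxxxxx"]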

For reference, the var.vpcs object is:

variable "vpcs" {
  description = "List of vpc and route table ids to pair mongo atlas with"
  type = map(object({
    route_table_ids = list(string)
  }))
}

Posts: 7

Participants: 2


Terraform 12 - aws subnetting


@aricwilisch wrote:

I’m just starting with Terraform and seem to be stuck on getting some syntax correct.

I’m creating an AWS instance in a non-default VPC and need to specify the subnet_id so it’s created in the right place, but I don’t want to hard-code the subnet ID, which is what most of the examples I’m finding do.

I have CIDR blocks defined in a separate variables file, but the code I’m inheriting doesn’t seem to define any subnets explicitly, so I’m not sure whether that’s something I need to do beforehand.
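
For illustration, one common approach (a sketch, assuming the subnets already exist and carry Name tags; the tag value and variable name are hypothetical) is to look the subnet up with a data source instead of hard-coding its ID:

data "aws_subnet" "app" {
  filter {
    name   = "tag:Name"
    values = ["app-subnet-a"]
  }
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnet.app.id
}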

I appreciate any help anyone can give.

Posts: 3

Participants: 3


Redirecting the console output for a custom UI


@jghantous1977 wrote:

I’m building an app that will call the Terraform CLI in the back end; however, I would like to provide UI updates on progress while terraform is executing. Is this possible, and if so, how?

I would be OK with polling if I can’t hook into an event stream of some sort.

Posts: 1

Participants: 1


VSphere Provider, Instant Clones and managing post-clone state


@brian57860 wrote:

Hello,

I’m currently adding functionality to support Instant Clones in the Terraform vSphere provider, and I’m looking for advice on the best way to reconcile the cloned virtual machine with the desired Terraform state.

Specifically, once I’ve created the Instant Clone, its state is inherited from the VM it was cloned from, and any deviations from the Terraform plan need to be corrected by invoking the reconfigure method.

This is a simple enough task, but since it is an Instant Clone I only want to invoke a reconfiguration of the VM if one is actually required, as any reboot will eradicate the memory-sharing benefits.

Looking at how a resourceVSphereVirtualMachineUpdate operation handles deviations, it invokes the function expandVirtualMachineConfigSpec, which compares the state with the diff. However, as this is a create operation, I have no existing state to work with, and therefore get false positives about whether changes have taken place and whether any reboots are required.

So my question is: do I need to reproduce all the code in expandVirtualMachineConfigSpec (and any functions it invokes) in order to compare the state of the Instant Clone with the desired Terraform state? Can I somehow force an update of the diff prior to invoking expandVirtualMachineConfigSpec, or is there some other means that currently evades me?

Posts: 1

Participants: 1



Techniques to address an extremely slow plan


@josephholsten wrote:

I have a scale test: I’m trying to create 300 instances, with 5 volumes each, and the associated attachments between them.

Roughly:

resource "instance" "i" {
  count = 300
}
resource "volume" "v" {
  count = 1500
}
resource "volume_attachment" {
  count = 1500
  instance_id = instance.i[floor(count.index / 5)].id
  volume_id = volume.v[count.index].id
}

Now if I exclude the attachments, I get a 16-second plan. But if I include them, I end up with a 28-minute plan.

I’ve tested having all the attachments go to a single instance and volume pair, and I get a nice snappy plan.

Is there some way to tell Terraform that it doesn’t have to walk all n*m resource combinations while planning? The best I can come up with is to convert my config to a JSON template and have a script generate all the resource groups without counts.

Does anyone have a less terrible approach? This was supposed to be my medium-size scale test; I don’t think I can justify going larger without a workaround.

Posts: 1

Participants: 1


Using aws_ssm_parameter with Workspaces to share data across modules in different repositories


@SenhorCastor wrote:

I need your help. I am trying to use workspaces to differentiate environments, plus aws_ssm_parameter to store configuration to be shared.

For example, in one repo:

resource "aws_ssm_parameter" "cluster_name" {
  name  = "${terraform.workspace}-cluster-name"
  type  = "String"
  value = aws_ecs_cluster.main.name
}

which creates the parameter production-cluster-name in Parameter Store,

and in the other repo, when I try to retrieve it:

data "aws_ssm_parameter" "cluster_name" {
 name  = "${terraform.workspace}-cluster-name"
}

It complains with:

Error: Error describing SSM parameter: ParameterNotFound:
	status code: 400, request id: 49a02e8a-8974-4504-a1a4-e52593b7657a

  on inputs.tf line 1, in data "aws_ssm_parameter" "cluster_name":
   1: data "aws_ssm_parameter" "cluster_name" {

There is something basic I am not getting.
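
One quick thing to verify (an assumption, not a confirmed diagnosis): each repository resolves terraform.workspace independently, so if the second repo is still on the default workspace, the data source would look up default-cluster-name rather than production-cluster-name. The current workspace can be checked with:

$ terraform workspace show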
Thanks in advance

Posts: 2

Participants: 1


New to Terraform - Problem with aws_security_group_rule import


@Mrg77 wrote:

Hello all,

I have a little problem with my Terraform setup. I use version 0.11, and I want to import an existing AWS infrastructure.

Many of my resources have already been imported, but when I import an aws_security_group, Terraform absolutely wants to destroy and recreate it. I think it cannot import the inline rules, so I changed my approach to creating the resource.

Before:

resource "aws_security_group" "WinRM" {
  name   = "toto-${terraform.workspace}-WinRM"
  vpc_id = "${aws_vpc.toto.id}"

  tags {
    Environment = "${terraform.workspace}"
    Name        = "toto-${terraform.workspace}-winrm"
  }

  ingress {
    from_port   = 5986
    to_port     = 5986
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    ignore_changes = ["name", "description"]
  }
}

And after:

resource "aws_security_group" "WinRM" {
  name   = "toto-${terraform.workspace}-WinRM"
  vpc_id = "${aws_vpc.toto.id}"

  tags {
    Environment = "${terraform.workspace}"
    Name        = "toto-${terraform.workspace}-winrm"
  }

  lifecycle {
    ignore_changes = ["name", "description"]
  }
}

resource "aws_security_group_rule" "WinRM" {
  type              = "ingress"
  from_port         = 5986
  to_port           = 5986
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.WinRM.id}"
}

Unfortunately, after importing the security group and its rules, when I run terraform plan, Terraform wants to destroy aws_security_group_rule.WinRM-1.

Any advice ? :slight_smile:

Thank you !

Posts: 1

Participants: 1


Issue with latest version Terraform 0.12.13

Terraform plan refresh issue with aws_ssm_parameter resource


@denisbr wrote:

I have imported some pre-existing AWS SSM Parameter Store parameters using terraform import. I also wrote the Terraform code to generate these parameters:

resource "aws_ssm_parameter" "resource_foo" {
  name = "resource_foo"
  description = "Lorem ipsum"
  type = "String"
  value = "foo_value"
  tags = {
    "Project" = "foo_project",
    "aws:cloudformation:stack-name" = "foo_project_stack",
    "aws:cloudformation:logical-id" = "foo_project_logical-id",
    "aws:cloudformation:stack-id" = "arn:aws:cloudformation:xxx:xxx",

  }
}

The imported state looks like this (the first issue: the aws:* tags did not survive the import):

    {
      "mode": "managed",
      "type": "aws_ssm_parameter",
      "name": "resource_foo",
      "provider": "provider.aws",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "allowed_pattern": "",
            "arn": "arn:aws:ssm:xxx:xxx",
            "description": "Lorem ipsum",
            "id": "resource_foo",
            "key_id": "",
            "name": "resource_foo",
            "overwrite": null,
            "tags": {
              "Project": "foo_project"
            },
            "tier": "Standard",
            "type": "String",
            "value": "foo_value",
            "version": 2
          }
        }
      ]
    },

So naturally, when I run terraform plan, terraform says:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_ssm_parameter.resource_foo will be updated in-place
  ~ resource "aws_ssm_parameter" "resource_foo" {
        arn         = "arn:aws:ssm:xxx:xxxx"
        description = "Lorem ipsum"
        id          = "resource_foo"
        name        = "resource_foo"
      ~ tags        = {
            "Project"                       = "foo_project"
          + "aws:cloudformation:logical-id" = "foo_project_logical-id"
          + "aws:cloudformation:stack-id"   = "arn:aws:cloudformation:xxx:xxx"
          + "aws:cloudformation:stack-name" = "foo_project_stack"
        }
        tier        = "Standard"
        type        = "String"
        value       = (sensitive value)
        version     = 2
    }

I even tried manually adding the missing tags to the state file; if I then run terraform plan -refresh=false, terraform says everything is up to date.

If I do a normal terraform plan, it performs a state refresh and then thinks the three extra tags are missing again…

Is this a bug? Should I report it as such?

Posts: 2

Participants: 2

