Channel: Terraform - HashiCorp Discuss



How to use a condition inside a for loop?


How do I use a condition inside a for_each so that the resource below is created only when validation_method == "DNS", and otherwise skipped? In earlier Terraform (0.11) we could use count = var.validation_method == "DNS" ? length(var.domain_names) : 0.
My var.domain_names is a map(list(string)), e.g.:

domain_names = {
  "foo.com" = ["*.foo.com"]
}

Please suggest some solutions. I'm using Terraform 0.12.20.

resource "aws_route53_record" "validation" {
  for_each   = var.validation_method == "DNS" ? var.domain_names :
  name       = aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_name
  type       = aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_type
  zone_id    = data.aws_route53_zone.selected[each.key].zone_id
  ttl        = "300"
  records    = [aws_acm_certificate.certificate.domain_validation_options.0.resource_record_value]
  depends_on = [aws_acm_certificate.certificate.domain_name]
}
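One approach that works in 0.12 (a sketch, untested against this exact config) is to make the false branch of the conditional an empty map, so for_each receives zero keys and creates no instances:

```hcl
resource "aws_route53_record" "validation" {
  # With validation_method != "DNS", the empty map means no instances are created.
  for_each = var.validation_method == "DNS" ? var.domain_names : {}

  name    = aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_name
  type    = aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_type
  zone_id = data.aws_route53_zone.selected[each.key].zone_id
  ttl     = "300"
  records = [aws_acm_certificate.certificate[each.key].domain_validation_options.0.resource_record_value]
}
```

If 0.12 complains that the two conditional branches have inconsistent types, forcing both branches to a map (e.g. with tomap) may be needed.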

2 posts - 2 participants

Read full topic

Ability to mark a resource's availability blocked until conditional is met


I have run into two situations in the last few weeks where Terraform has felt like it is missing some functionality, or (more likely) there is a pattern I'm not aware of that would remove that feeling.

One pattern is where a resource is created, but needs human interaction before a dependent resource can be created. My specific example is using AWS's Secrets Manager as the source for injecting a secret into a module. We use Terraform to create the aws_secretsmanager_secret resource, then a human needs to set the value, then we can add the module utilizing the value of that resource. What I (think) I'd like to do is define all resources at once, but have some sort of dependency on the module akin to depends_on = [data.aws_secretsmanager_secret_version.secret_string != '']. The output of a run would then be "x resource skipped for failed dependency", informing the user of this incomplete change.
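There is no built-in "blocked until" dependency today; one common workaround (a sketch using a hypothetical var.secret_is_set flag, not a built-in feature) is to gate the dependent resources behind a boolean that a human flips after setting the secret value:

```hcl
variable "secret_is_set" {
  description = "Flip to true once a human has populated the secret value."
  type        = bool
  default     = false
}

resource "aws_secretsmanager_secret" "example" {
  name = "example"
}

data "aws_secretsmanager_secret_version" "example" {
  # Not read (and nothing downstream is created) until the flag is flipped.
  count     = var.secret_is_set ? 1 : 0
  secret_id = aws_secretsmanager_secret.example.id
}
```

This still takes two applies, but it keeps the whole configuration defined in one place.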

The second example I ran into was with AWS's requirement that certain resource types be accepted before they can be used (mostly in cross-account scenarios). Specifically, sharing a transit gateway across accounts is a five-step back-and-forth process to set up completely. The workflow to create this in Terraform then becomes (the most relevant part being the constant back and forth):

In account A:
Define the Transit Gateway (TGW) in account A
Define the Resource Access Manager (RAM) share A
Share the RAM share with account B

In account B:
Accept the RAM share

In account A:
Define a VPC peer request

In account B:
Accept the VPC peer request
Create routes to the TGW using the new peer connection

In account A:
Create the routes using the now accepted VPC peer

This is a lot of back and forth in small snippets, and in reality the end-state config could not be re-applied anywhere else (or again) without commenting out various resources, because it is a back-and-forth process. Again, a similar ability for "x depends on y being in z state" does not fix the back and forth, but does make the config more reusable without commenting/uncommenting parts.
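For what it's worth, the acceptance steps themselves can stay inside Terraform via the dedicated accepter resources; a rough sketch using two illustrative provider aliases:

```hcl
# Account A: share the TGW via a RAM resource share.
resource "aws_ram_resource_share" "tgw" {
  provider = aws.account_a
  name     = "tgw-share"
}

# Account B: accept the share in the same configuration.
resource "aws_ram_resource_share_accepter" "receiver" {
  provider  = aws.account_b
  share_arn = aws_ram_resource_share.tgw.arn
}
```

This doesn't remove the ordering between the steps, but it avoids the manual console acceptance and makes the end state re-applyable.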

Does anyone have a pattern for these types of scenarios that helps with this?

1 post - 1 participant

Read full topic

Trouble resolving subnet-id using tf 0.12


I have defined two subnets like this:

resource "aws_subnet" "archer-public-1" {
  vpc_id                  = aws_vpc.archer.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "${var.AZ1}"
}

resource "aws_subnet" "archer-public-2" {
  vpc_id                  = aws_vpc.archer.id
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "${var.AZ2}"
}

Now I want to create an EC2 instance and deploy each instance in each Public subnet defined above. I tried this with no success:

resource "aws_instance" "nginx" {
  count = 2
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # the VPC subnet
  subnet_id = "aws_subnet.archer-public-${count.index}.id"  <=== ?????????
  #subnet_id = aws_subnet.archer-public-2.id
...
}

The above subnet_id assignment doesn't work, and removing the double quotes doesn't help either. So how can I dynamically deploy an EC2 instance into each public subnet defined above?
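Terraform doesn't allow building a resource reference out of a string; a sketch of one workaround is to put both subnet ids into a list and index it with count.index:

```hcl
resource "aws_instance" "nginx" {
  count         = 2
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # Pick the subnet matching this instance's index.
  subnet_id = element(
    [aws_subnet.archer-public-1.id, aws_subnet.archer-public-2.id],
    count.index,
  )
}
```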

1 post - 1 participant

Read full topic

Duplicate modules downloaded while using git repo as source


I'm using a remote git repo as the source for some of my modules. I have multiple different modules in the remote repo, such as vpc, staticconfigs, datasource etc.
I refer to the remote repo in my module blocks using a git source.

module "jfm_static_configs" {
  source = "git@github.com:xxxx/tfv12_modules.git//modules/jfm_static_configs"
}
module "datasource" {
  source = "git@github.com:xxxx/tfv12_modules.git//modules/datasource"
}
module "vpc" {
  source = "git@github.com:xxxx/tfv12_modules.git//modules/network/vpc"
  name   = "test"
}

The remote repo contents are as below

The remote repo has different modules and I import them as listed above. When I do a terraform get, I notice the modules in the remote repo are downloaded into the .terraform folder multiple times, once per module name listed above.

Duplicate imported remote module & folders

This works fine without any issues. But as the modules in my remote repo grow, the number of downloaded folders in the .terraform folder will also grow.

I would like to understand whether I'm referencing the modules incorrectly, or whether this is just the way Terraform behaves, which does not seem right. Is there a way to avoid this?

1 post - 1 participant

Read full topic

Tainting a resource with Terraform Cloud


I'm just starting out using Terraform. I have my tf files in a Bitbucket repo and it's linked to Terraform Cloud, creating infrastructure in AWS. All working fine on the whole.

I’m having some issues with a particular AWS resource which I want to taint. How can I do this given I’m using Cloud and not the CLI?

I've tried the CLI against my local clone of the repo but it tells me "backend reinitialization required. Please run terraform init", which I'm loath to do in case it screws things up.

So:

  1. can I use CLI and Cloud concurrently on the same config? If so, how?
  2. using Cloud, how do I taint a resource?

1 post - 1 participant

Read full topic

Multiple accounts and aws providers


Hi

First time posting. I have a situation, and I don't know enough of the terminology to search properly for the answer.

I have 20 AWS accounts and I'd like to create a centrally managed transit gateway in AWS. This part I can do; all OK here. When I need to create TGW associations in multiple accounts, the way I found to do this was to use multiple aliased AWS providers:

provider "aws" {
  alias               = "account1"
  region              = "eu-west-1"
  allowed_account_ids = ["72938479233"]
  assume_role {
    role_arn = "arn:aws:iam::72938479233:role/terraform"
  }
}

provider "aws" {
  alias               = "account2"
  region              = "eu-west-1"
  allowed_account_ids = ["72938479233"]
  assume_role {
    role_arn = "arn:aws:iam::72938479233:role/terraform"
  }
}

I have 20 accounts and probably more to come. How can I use a map here instead of listing each account?

Can I create a local data bundle (say in YAML) and then use it to parameterise the AWS provider aliases?
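Provider blocks can't be generated from a map in 0.12 (each alias must be written out), but the account data itself can live in YAML; a sketch assuming a hypothetical accounts.yaml (the second account id is made up for illustration):

```hcl
# accounts.yaml (hypothetical):
#   account1: "72938479233"
#   account2: "84756473829"

locals {
  accounts = yamldecode(file("${path.module}/accounts.yaml"))
}

# The provider aliases still have to be declared one by one,
# but everything that consumes the account list can iterate the map:
output "account_role_arns" {
  value = { for name, id in local.accounts : name => "arn:aws:iam::${id}:role/terraform" }
}
```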

Any thoughts? Am I on the wrong track?

Many Thanks
Tag

1 post - 1 participant

Read full topic

Differences between inline blocks and attribute syntax when defining AWS routes


Hi, I'm trying to set up routes dynamically in an aws_route_table resource definition. Writing it the classic way:

 resource "aws_route_table" "tm_private_route_table" {
  vpc_id = "vpc-1111111111"
  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = "nat-1111111111"
  }

it works perfectly, but if I try to write it as a list of objects, as documented in https://www.terraform.io/docs/configuration/attr-as-blocks.html#defining-a-fixed-object-collection-value:

resource "aws_route_table" "tm_private_route_table" {
  vpc_id = "vpc-1111111111"
  route = [
    {
      cidr_block = "0.0.0.0/0"
      nat_gateway_id = "nat-1111111111"
    }
  ]

It throws an error:

Inappropriate value for attribute "route": element 0: attributes "egress_only_gateway_id", "gateway_id", "instance_id", "ipv6_cidr_block", "network_interface_id", "transit_gateway_id", and "vpc_peering_connection_id" are required.

Any ideas?
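In attributes-as-blocks mode Terraform treats route as a full object type, so (as the error says) every attribute must be present; a sketch of the workaround people use, setting the unused attributes to empty strings:

```hcl
resource "aws_route_table" "tm_private_route_table" {
  vpc_id = "vpc-1111111111"
  route = [
    {
      cidr_block     = "0.0.0.0/0"
      nat_gateway_id = "nat-1111111111"
      # Every remaining attribute must still be listed, even if unused.
      egress_only_gateway_id    = ""
      gateway_id                = ""
      instance_id               = ""
      ipv6_cidr_block           = ""
      network_interface_id      = ""
      transit_gateway_id        = ""
      vpc_peering_connection_id = ""
    }
  ]
}
```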

1 post - 1 participant

Read full topic


Terraform Versioning


Hi Everyone,

Thank you in advance for reading and helping me wrap my mind around this issue. At my organization, we just started leveraging Terraform to create and manage our cloud resources. At the moment, we are using Terraform in GitLab CI/CD pipelines in a few different repositories. In my environment, Terraform is executed from the terraform Docker container on DockerHub. My question is around best practices for versioning Terraform in CI/CD pipelines. When Terraform 0.12.13 came out, we decided to pin to that version back in November to keep things relatively stable while we learned Terraform. Now we are quite a few versions behind, but more mature in our Infrastructure as Code platform.

It looks like in Terraform 0.12.14, lots of syntax changes came in, deprecating the interpolation-only ${var.variable} syntax, which is a huge change for us. We are currently working on migrating all our pipelines to the new syntax and Terraform 0.12.26. In an effort not to repeat this upgrade exercise with each new Terraform release, what is the best practice for managing Terraform in a CI/CD pipeline? I see it working like this:

  1. Pin the Terraform version and do periodic upgrades: this works well for keeping things stable, but requires a lot of syntax changes to be backported and code changes across our Terraform repositories.

  2. Use the latest or light Docker tag: this approach means we will always be running the latest Terraform version, which could introduce breaking changes into our CI/CD pipelines. Not to mention that running Terraform locally will mean our team is always downloading the latest release. This may also have implications for when Terraform 0.13 arrives: any new syntax changes in that release could potentially cause the pipelines to break.

Are there any other options or best practices for using Terraform in a CICD pipeline?
Ideally, I wish there were a terraform:stable Docker tag that would be a little slower to update. This way we could avoid running bleeding edge, but not get stuck a few versions behind. Also, a Docker tag for terraform:0.12 which always tracked the most recent 0.12.x release would be helpful as well.
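Whichever Docker tag you settle on, a required_version constraint in the configuration itself guards against an unexpected CLI version; for example:

```hcl
terraform {
  # Accept any 0.12.x from .26 up, but refuse 0.13 and later.
  required_version = "~> 0.12.26"
}
```

With this in place, a pipeline accidentally running a newer major version fails fast instead of applying with incompatible syntax rules.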

I’m wondering as well how other folks version their Terraform projects? Run latest? or pin to a specific version and periodically upgrade?

Thanks in advance!

1 post - 1 participant

Read full topic

Workspace job not pulling changes from Azure DevOps branch


I switched my VCS configuration from the default branch to "develop" and it stopped working. The workspace doesn't get triggered when I push changes, and if I run it manually it is not pulling the latest commits.

Here we see I did set the branch to “develop”.

And here is my "develop" branch on ADO. The last two commits are not being pulled by the job.


And finally, when the job runs, it confirms the branch is "develop" but the last commit it picks up is the old one.

1 post - 1 participant

Read full topic

The object 'vim.Folder:group-v54745' has already been deleted or has not been completely created


Hey guys, I'm just starting out on Terraform and I'm trying to implement it within our company infrastructure, for use with vSphere.

I ran a test with the following bit of syntax and everything went well; it did what I expected it to do: it created a folder within the correct data center:

data "vsphere_datacenter" "dc" {
    name = "LON01"
}

resource "vsphere_folder" "instance" {
    datacenter_id = "${data.vsphere_datacenter.dc.id}"
    path = "test"
    type = "vm"
}

Unfortunately I deleted the folder within the vSphere GUI, not using terraform destroy. Now whenever I run the same bit of syntax, I get the following error:

vsphere_folder.folder: Refreshing state… [id=group-v54745]

Error: cannot locate folder: ServerFaultCode: The object 'vim.Folder:group-v54745' has already been deleted or has not been completely created

How do I go about fixing this? I've read the Terraform documentation and couldn't find anything; I looked in different forums, nothing. I then started playing around with the resource part of the script by adding different values, and also with the data segment, but no joy.
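Since the folder was deleted outside of Terraform, the state still points at the stale vim.Folder object; one way to clear it (a sketch; back up your state file first) is to drop the resource from state and re-apply:

```shell
# Forget the stale folder, then let Terraform recreate it.
terraform state rm vsphere_folder.instance
terraform apply
```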

any help is a blessing.

many thanks
D

1 post - 1 participant

Read full topic

Is there a way to do this in terraform (12)?

resource "rancher2_project" "myproject" {
    count       = var.enable_namespace ? 1 : 0                         
    name        = "myproject"                                                                           
    ...                                                                                                                       
}                                                                               
                                                                    
resource "rancher2_namespace" "mynamespace" {                             
    count       = var.enable_namespace ? 1 : 0                         
    name        = "mynamespace"                                                        
    project_id  = rancher2_project.myproject.id
    ...
}                                                                               

So both the project and the namespace depend on var.enable_namespace; in theory this could work. But in reality, if var.enable_namespace is false, Terraform says that rancher2_project.myproject.id is not defined.

I thought about dynamic blocks, but I guess they are just for inside a resource block. If not, I have no idea how to define it for resources.

So is there any way to do this?
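With count set, the project resource is a list, so it has to be indexed; a sketch:

```hcl
resource "rancher2_namespace" "mynamespace" {
  count      = var.enable_namespace ? 1 : 0
  name       = "mynamespace"
  # Index into the counted project; with count = 0 this line is never evaluated.
  project_id = rancher2_project.myproject[count.index].id
}
```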

1 post - 1 participant

Read full topic

Terraform destroy azure load balancer


Hi Guys,

I've been trying to create a virtual machine scale set with Terraform and it creates fine, but when I try to perform terraform destroy I receive the message below. Any ideas on how I could solve this issue?

thanks in advance

Error: Error waiting for completion of Load Balancer "vmss-see-d-01-LB" (Resource Group "RG-VMSS-D-SEE-01"): Code="Canceled" Message="Operation was canceled." Details=[{"code":"CanceledAndSupersededDueToAnotherOperation","message":"Operation PutLoadBalancerOperation (81ab2118-37e3-4552-a2f7-e1e12bccb1e5) was canceled and superseded by operation InternalOperation (1d4e2e27-f457-4941-b3b8-e6352f84ddd1)."}]

1 post - 1 participant

Read full topic

External deployment configuration with terraform and helm

$
0
0

I'm quite new to Terraform, so my problem might be naive, but I still don't know how to solve it.
I have a Terraform script that uses helm_release (https://www.terraform.io/docs/providers/helm/r/release.html). The problem is that the deployment configuration lives in a separate git repository. Let's say: https://project.git/configuration.
What I would like to achieve is to be able to add all files from e.g. https://project.git/configuration/dev (for a dev env deployment) to a config map.
The structure of terraform module is:

+project
+-helm
+--templates
+---configmap.yaml
+---deployment.yaml
+---ingress.yaml
+---service.yaml
+--chart.yaml
+--values.yaml
+-helm.tf
+-other tf files

I need all files from the configuration repository to be placed under

+project
+-helm
+--configuration

because only then am I able to use this:

apiVersion: v1
kind: ConfigMap
metadata:
...
data:
{{- (.Files.Glob "configuration/*").AsConfig | nindent 2 }}

in my configmap.yaml; Helm requires those files to be placed in the chart directory.

I'm also keen on making https://project.git/configuration a Terraform module, so I can use it as a submodule. But the problem remains the same: how do I make those files available under project/helm/configuration?

1 post - 1 participant

Read full topic

Unable to view EKS cluster created from terraform resources


We are creating an EKS cluster using terraform resources: aws_eks_cluster and aws_eks_node_group.
After applying these resources, when we query for the nodes using kubectl, we are able to see the nodes in the cluster with all the auto scaling settings. We are even able to fetch the kubeconfig file of this cluster on our bastion instance. But we are not able to view this cluster in the AWS console under the EKS service. Is this expected behavior? Is there a way in which the EKS cluster can be seen in the console?
Below is our code:

resource "aws_eks_cluster" "eks_cluster" {
  name            = "${var.eks_cluster_name}"
  role_arn        = "${var.iam_role_master}"
  vpc_config {
    security_group_ids = ["${var.sg-eks-master}"]
    subnet_ids = ["${var.subnet_private1}", "${var.subnet_private2}"]
    endpoint_private_access= true
    endpoint_public_access = true
        public_access_cidrs = ["<ip_range>"]
  }
}

resource "aws_eks_node_group" "example" {
  cluster_name    = "${var.eks_cluster_name}"
  node_group_name = "ng-${var.eks_cluster_name}"
  node_role_arn   = "${var.iam_role_node}"
  subnet_ids      = ["${var.subnet_private1}", "${var.subnet_private2}"]
  ami_type = "${var.image_id}"
  instance_types = "${var.instance_type}"
  
  scaling_config {
    desired_size = 1
    max_size     = 4
    min_size     = 2
  }
}

1 post - 1 participant

Read full topic


How should I define this output?


I have the following aws_instance definition, which works fine. Most annoying is the fact that I can't get the right syntax to output the public_ip for each instance produced. Here is what I've got:

resource "aws_instance" "nginx" {
  for_each      = aws_subnet.archer-public
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"
  subnet_id     = each.value.id
  vpc_security_group_ids = [aws_security_group.allow-ssh.id]
  key_name   = aws_key_pair.archerkeypair.key_name
  monitoring = true
  tags = {
    Name = "archer-nginx"
  }
}

output "nginx_public_ip" {
  value = aws_instance.nginx[0].public_ip   <==== this doesn't work!
}

How can I properly specify the syntax to print out the public_ip of each EC2 instance created? I’ve tried:
aws_instance.nginx[0].public_ip
aws_instance.nginx[ * ].public_ip
aws_instance.nginx.*.public_ip

… I can’t get it right…
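Since for_each makes the resource a map keyed by each.key rather than a list, numeric and splat indexing both fail; a for expression over the map works (output name illustrative):

```hcl
output "nginx_public_ips" {
  # Map each subnet key to that instance's public IP.
  value = { for k, inst in aws_instance.nginx : k => inst.public_ip }
}
```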

2 posts - 2 participants

Read full topic

Terraform 0.13 Beta released!


The Terraform Team is excited to announce the availability of Terraform 0.13 beta 1.

This release is all about community. Terraform 0.13 brings the ability to use count, for_each, and depends_on for modules. We’ve also made some changes to the way we install third-party providers as part of the upcoming ability to use partner & community providers in the Terraform Registry. Please have a look at our changelog and draft upgrade guide for details.
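For example, module-level for_each in the 0.13 beta allows something like this sketch (module path and names illustrative):

```hcl
module "cluster" {
  source   = "./modules/cluster"
  for_each = toset(["dev", "staging", "prod"])

  name = each.key
}
```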

You can download Terraform 0.13 here: https://releases.hashicorp.com/terraform/0.13.0-beta1/

Information about the beta program and getting started can be found here: https://github.com/hashicorp/terraform/blob/guide-v0.13-beta/README.md

Please see the beta guide above for information about reporting issues and providing feedback.

Don’t forget to join us at HashiConf Digital for updates about Terraform, the Terraform Provider Registry, and more.

2 posts - 1 participant

Read full topic

How do we setup Backups in Google Cloud Spanner Instance using Terraform


How do we set up backups for a Google Cloud Spanner instance using Terraform? The google_spanner_database documentation does not specify which parameters are required.

1 post - 1 participant

Read full topic

Terraform 0.11: is there a way to create iam policy statements dynamically?


(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant

Read full topic

Terraform: is there a way to create iam policy statements dynamically?


Terraform version: 0.11

I am running multiple EKS clusters and trying to enable IAM Roles for Service Accounts in all clusters, following this doc:

This works when I hardcode the cluster name in the policy statement and create multiple statements:

data "aws_iam_policy_document" "example_assume_role_policy" {

# for cluster 1

  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.example1.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-node"]
    }

    principals {
      identifiers = ["${aws_iam_openid_connect_provider.example2.arn}"]
      type        = "Federated"
    }
  }
}

Since I have multiple clusters, I want to be able to generate the statements dynamically, so I made the following changes:

I created a count variable and changed the values in principals and condition:

count = "${length(var.my_eks_cluster)}" 

    condition {
      test     = "StringEquals"
      variable = "${replace(element(aws_iam_openid_connect_provider.*.url, count.index), "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-node"]
    }

    principals {
      identifiers = ["${element(aws_iam_openid_connect_provider.*.url, count.index)}"]
      type        = "Federated"
    }

Terraform is now able to find the clusters, BUT it also generates multiple policy documents.
And this will not work, since in the following syntax assume_role_policy doesn't take a list:

resource "aws_iam_role" "example" {
  assume_role_policy = "${data.aws_iam_policy_document.example_assume_role_policy.*.json}"
  name               = "example"
}

It seems like instead of creating multiple policies, I need to generate multiple statements in one policy (so I can attach it to a single IAM role). Has anyone done something similar before? Thanks.
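One 0.11-era option (a sketch, untested) is to lean into the duplication: put count on the aws_iam_role as well and pair each role with its generated document via element(). This yields one role per cluster rather than one role carrying many statements:

```hcl
resource "aws_iam_role" "example" {
  count              = "${length(var.my_eks_cluster)}"
  name               = "example-${count.index}"
  assume_role_policy = "${element(data.aws_iam_policy_document.example_assume_role_policy.*.json, count.index)}"
}
```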

1 post - 1 participant

Read full topic
