Channel: Terraform - HashiCorp Discuss

Terraform support for AWS ElastiCache creation from an external Redis Export in S3


@leigu wrote:

From the AWS Console, I can create a new ElastiCache cluster from an external Redis export in S3. In Terraform I only see input parameters for an ElastiCache snapshot, but no support for a Redis export file. How can I create a new cluster from an external Redis export using Terraform?
Thanks.
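
For reference, this is the shape of configuration I was hoping to write. A minimal sketch, assuming the snapshot_arns argument of aws_elasticache_cluster accepts the ARN of a Redis RDB export stored in S3 (the bucket and key here are made up):

resource "aws_elasticache_cluster" "restored" {
  cluster_id      = "restored-from-export"
  engine          = "redis"
  node_type       = "cache.t3.micro"
  num_cache_nodes = 1

  # ARN of the external Redis RDB export in S3 (hypothetical object)
  snapshot_arns = ["arn:aws:s3:::my-export-bucket/redis-export.rdb"]
}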

Posts: 1

Participants: 1



How do I prepare an S3 bucket to receive S3 logs?


@nhnicwaller wrote:

I’m using Terraform to create two S3 buckets, one to contain my website and a second bucket to store logs generated by S3. My trouble is that I’m not sure how to prepare the second bucket to allow S3 to write logs into it. My declaration looks something like this:

resource "aws_s3_bucket" "website" {
  bucket = "website"
  acl = "private"

  logging {
    target_bucket = "${aws_s3_bucket.logs.id}"
    target_prefix = "s3/"
  }
}

resource "aws_s3_bucket" "logs" {
  bucket = "logs
  acl = "private"
}

But when I try to apply this configuration, Terraform gives me a reasonable error:

1 error occurred:

  • aws_s3_bucket.website: 1 error occurred:
  • aws_s3_bucket.website: Error putting S3 logging: InvalidTargetBucketForLogging: You must give the log-delivery group WRITE and READ_ACP permissions to the target bucket

I found a relevant question on Stack Overflow, but the top voted answer has a comment about needing to “tighten this up” so I don’t feel comfortable copying the answer there.

Amazon has pretty good instructions for granting access to the Log Delivery Group, but of course that doesn’t really help when using Terraform.
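
Reading between the lines of those instructions, the Terraform equivalent may be as simple as the canned log-delivery-write ACL on the target bucket. A sketch based on my reading of the AWS docs, not something I have verified yet:

resource "aws_s3_bucket" "logs" {
  bucket = "logs"

  # Canned ACL that grants the S3 log-delivery group the WRITE and
  # READ_ACP permissions the error message asks for.
  acl = "log-delivery-write"
}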

Posts: 2

Participants: 1


Connect an Azure Windows VM created with Terraform to the Ansible provisioner


@mohamed3laa33 wrote:

I am trying to create a Windows VM on Azure using Terraform and then connect to it directly with Ansible. I have followed a lot of approaches to auto-connect to the VM using WinRM over HTTP and HTTPS, but every time I get a connectivity issue. Can you advise me on the best approach?
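
For context, this is roughly the connection shape I have been attempting. A sketch only; the host, user, and password references are placeholders, and insecure = true skips TLS certificate validation, so it is suitable for testing only:

provisioner "remote-exec" {
  connection {
    type     = "winrm"
    host     = azurerm_public_ip.example.ip_address  # placeholder resource
    user     = "adminuser"
    password = var.admin_password
    https    = true
    insecure = true   # skip TLS certificate validation (testing only)
    timeout  = "10m"
  }

  inline = ["powershell.exe -Command \"Write-Host connected\""]
}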

Posts: 1

Participants: 1


How to create ecsAutoscaleRole with Terraform?


@fhcat wrote:

I am trying to set up auto-scaling for my Fargate tasks, but I am confused about the IAM role that I need. All the articles I am reading refer to a role "${aws_iam_role.ecs_autoscale_role.arn}", but I am not finding it in our account. The Terraform section that refers to this role is the target:

resource "aws_appautoscaling_target" "target" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.web.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  role_arn           = "${aws_iam_role.ecs_autoscale_role.arn}"
  min_capacity       = 1
  max_capacity       = 4
}

If I go to the UI to manually configure autoscaling, it says it will create role ecsAutoscaleRole but that fails with

Failed creation of IAM Autoscale role
IAM Autoscale role could not create ecsAutoscaleRole: User: arn:aws:sts::xxxxxxxxx:assumed-role/Resource-Admin/shenv is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::xxxxxxx:role/ecsAutoscaleRole with an explicit deny (Service: AmazonIdentityManagement; Status Code: 403; Error Code: AccessDenied; Request ID: xxxxxxx-xxxx-xxxx-x-xxx93b9b33d0fe9)

which I think makes sense, because my user does not have permission to create IAM roles. We always use Terraform to create everything. But how can I create this ecsAutoscaleRole? I am not finding any documentation that helps me.
The articles I am following are:
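
For what it is worth, here is a sketch of how I imagine the role could be created in Terraform. This is my assumption, pairing an assume-role policy for Application Auto Scaling with the AWS-managed AmazonEC2ContainerServiceAutoscaleRole policy; the role name just mirrors what the console would have created:

data "aws_iam_policy_document" "ecs_autoscale_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["application-autoscaling.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecs_autoscale_role" {
  name               = "ecsAutoscaleRole"
  assume_role_policy = data.aws_iam_policy_document.ecs_autoscale_assume.json
}

resource "aws_iam_role_policy_attachment" "ecs_autoscale" {
  role       = aws_iam_role.ecs_autoscale_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceAutoscaleRole"
}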

Posts: 1

Participants: 1


Cleaning up failed build with create_before_destroy


@nickgrealy wrote:

I have an intermittent scenario where a Terraform deploy fails while provisioning a server. The server has create_before_destroy = true, but because of the error (below) I end up with two resources: the original server (tainted) and the new server (failed).

e.g. communication failed, but the droplet was created (so now I have two instances).

digitalocean_droplet.web: Creating...
digitalocean_droplet.web: Still creating... [10s elapsed]

Error: Error waiting for droplet () to become ready: strconv.Atoi: parsing "": invalid syntax

  on main.tf line 47, in resource "digitalocean_droplet" "web":
  47: resource "digitalocean_droplet" "web" {

A couple of questions:

  1. is this second (new) instance tracked by Terraform, or is the reference to this instance lost?

  2. how would I re-apply my plan (i.e. recover and proceed with deployment), so that I only end up with one server instance?

  3. how would I roll back this plan (i.e. backout deployment), so that I only end up with the one original server instance?

Here’s the script I’m using to run terraform:

terraform init -input=false
terraform workspace select ${TF_WS}
terraform taint digitalocean_droplet.web || true
terraform plan -var-file=${TF_WS}.auto.tfvars -input=false -out=tfplan
terraform apply -input=false tfplan

(deployment is occurring on a CI/CD server - can provide TF config if required)
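
For question 1, I have been inspecting state after a failure with something like the following (a sketch; I am assuming the failed droplet may or may not have been recorded in state):

# List what Terraform is tracking after the failed apply
terraform state list | grep digitalocean_droplet

# A fresh plan shows whether the tainted original would be replaced
terraform plan -var-file=${TF_WS}.auto.tfvars -input=false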

Posts: 2

Participants: 2


AWS WAF: restrict requests to specific referer domains


@RaghvendraGit wrote:

I want to create an AWS WAF with rules that allow only specific domains, like example1.com and example2.com, to access my CloudFront distribution. This can be done in the AWS Console, where I can specify the header, referer, match type, string to match, etc., but in Terraform I am not able to find a WAF resource that does this for me. It is available for IP sets, but I want to use domains. Please help with this.
I am referring to the link below for my requirement and want to achieve it using Terraform: https://aws.amazon.com/blogs/security/how-to-prevent-hotlinking-by-using-aws-waf-amazon-cloudfront-and-referer-checking/
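
The closest I have found so far is the classic (global) WAF byte-match resources. A sketch of what I think the referer check from that blog post might look like in Terraform; untested, and it would still need a second tuple per domain plus an aws_waf_web_acl attached to the CloudFront distribution:

resource "aws_waf_byte_match_set" "referer" {
  name = "allowed-referers"

  byte_match_tuples {
    field_to_match {
      type = "HEADER"
      data = "referer"
    }

    target_string         = "example1.com"
    positional_constraint = "CONTAINS"
    text_transformation   = "NONE"
  }
}

resource "aws_waf_rule" "referer_check" {
  name        = "referer-check"
  metric_name = "refererCheck"

  predicates {
    data_id = aws_waf_byte_match_set.referer.id
    negated = false
    type    = "ByteMatch"
  }
}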

Posts: 1

Participants: 1


Can depends_on (explicit) take precedence over an implicit dependency?


@rayuduc wrote:

Hello,
Is there any way to pass extra args/flags so that an explicit dependency takes precedence over an implicit dependency?
I have a use case where we created a module using google_compute_instance and google_dns_record_set; VMs created with the compute resource automatically update DNS records. Creating and updating works correctly: count increments and decrements add, update, and remove DNS record sets in the specified DNS zones. But when we destroy a specific VM, it automatically destroys all of the DNS records, because google_dns_record_set has an implicit dependency on google_compute_instance.

terraform destroy -target="module.testing-vm.google_compute_instance.node[1]"

Google provider version: google-beta and google, latest
Terraform version: 0.12.10

module/google_vm/google-compute.tf

count = (var.add_external_ip == true ? var.amount : 0)

resource "google_compute_instance" "node" {
  count = var.amount

  name                      = format("%s%02d", var.name, count.index + 1)
  machine_type              = var.machine_type
  project                   = var.test_project
  allow_stopping_for_update = true
  zone                      = element(var.zones, count.index % length(var.zones))
  tags                      = var.tags

  # DNS records: explicit dependency
  depends_on = [google_dns_record_set.dns_record]

  boot_disk {
    initialize_params {
      image = var.boot_disk_image
      size  = var.boot_disk_size
    }
  }

  network_interface {
    subnetwork = var.subnetwork
  }

  metadata = {
    env = var.res_env
  }
}

resource "google_dns_record_set" "dns_record" {
  count        = var.amount
  name         = "${format("%s%02d", var.name, count.index + 1)}.${var.dns_zone}"
  managed_zone = "${var.dns_name}"
  type         = "${var.dns_record_type}"
  ttl          = "${var.dns_record_ttl}"

  # compute: implicit dependency
  rrdatas = ["${google_compute_instance.node.*.network_interface.0.network_ip[count.index]}"]
}

Usage of the above module:

module "dev-test" {
  source          = "./module/google_vm"
  name            = "dev-test"
  amount          = 2
  machine_type    = var.app_machine
  project         = var.test-project
  subnetwork      = google_compute_subnetwork.test1_sub_network.self_link
  zones           = var.vm_zone
  dns_record      = true
  dns_name        = var.dns_name
  dns_zone        = var.dns_zone
  dns_record_ttl  = var.dns_record_ttl
  dns_record_type = var.dns_record_type

  tags = [
    "env",
  ]

  label = "dev-test"
}

Posts: 1

Participants: 1



AWS: Module organisation for the VPC configuration


@shamonshan wrote:

I am a little confused about how to organize the VPC as a module. Right now I have a single file called vpc.tf which contains all the configuration for the VPC.

Here is the list of the resources:

  • VPC
  • Subnet (3 private and 3 public)
  • Route table (3 private-subnet and 3 public-subnet)
  • Elastic IP (3)
  • Internet gateway (3)
  • NAT gateway (3)

As you can see above, the resource configuration is duplicated for everything except the VPC. What is the best approach to modularise this configuration?
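
One common pattern, sketched below rather than prescribed, is to drive the per-AZ resources from a single list with count, so each resource block exists only once (aws_vpc.main is assumed to be defined elsewhere):

variable "azs" {
  default = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
}

resource "aws_subnet" "public" {
  count             = length(var.azs)
  vpc_id            = aws_vpc.main.id
  availability_zone = var.azs[count.index]
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
}

# The EIPs, NAT gateways and route tables can follow the same
# count-per-AZ pattern, keeping one block per resource type.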

Posts: 1

Participants: 1


Is Terraform really cloud agnostic?


@ds33s wrote:

Hi,

Does Terraform support an easy transition from one cloud provider to another, for example OpenStack to AWS, without having to rewrite all the configuration files?

I’m taking this course because my team is considering switching from OpenStack (in our own organisation data centre) to an external cloud provider (AWS, GCP, etc.). This is part of a bigger vision of DevOps and CI/CD which requires us to automate our infrastructure creation and management, hence the interest in Terraform and Kubernetes.

When I first heard of Terraform, one of the attractive features seemed to be the ability to write infrastructure in a cloud agnostic way. Perhaps naively I thought it meant that I could write one set of files that could then be used to deploy our infrastructure on any cloud provider (OS, AWS, GCP, etc.).

However, now that I am learning a bit more about Terraform, this seems less straightforward. For example, if I want to create a web server, I need to write different *.tf files depending on the cloud provider I’m targeting.

For example, for AWS I need to define an aws_instance resource and specify things like an AMI ID, etc. All of those are AWS-specific and require an understanding of how the Amazon platform works. If I want the same infrastructure on OpenStack or Azure, I have to rewrite all of my Terraform files. There is no automatic conversion between providers, is there?

If that is correct, in what way does Terraform claim to be cloud agnostic? Am I missing something here?

Posts: 1

Participants: 1


Are provisioners really "a last resort"?


@mwatts15 wrote:

I was looking at the getting started guide here: https://learn.hashicorp.com/terraform/getting-started/provision

which links to this page on provisioners: https://www.terraform.io/docs/provisioners/index.html

which indicates that provisioners are a “last resort”. That suggests to me that provisioners should be a special-case tool for most users, yet putting them in the getting-started guide suggests they’re one of the first things you reach for. Is there some cognitive dissonance among the documentation writers?

Posts: 1

Participants: 1


Create multiple resources with multiple attributes without using count


@mizunos wrote:

I need to create multiple GitLab users using Terraform without using the count function.

A user has the following attributes: name, username, email, initial password. I want to apply that pattern to a list of 20 users.
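
A sketch of what I have in mind, assuming Terraform 0.12.6+ (for for_each on resources) and the GitLab provider’s gitlab_user resource; the sample users are made up:

variable "users" {
  default = {
    "jdoe"   = { name = "Jane Doe",  email = "jdoe@example.com",   password = "ChangeMe123!" }
    "bsmith" = { name = "Bob Smith", email = "bsmith@example.com", password = "ChangeMe456!" }
  }
}

resource "gitlab_user" "this" {
  for_each = var.users

  username = each.key
  name     = each.value.name
  email    = each.value.email
  password = each.value.password
}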

Posts: 1

Participants: 1



Create resources iterating through the values of a map rather than its keys


@harshavmb wrote:

Hi All,

I couldn’t figure out how to loop through the sum of the values of the map below.

variable "images" {
  default = {
    "rhel-8-factory-os-ready" = {
       "availability_zone" = "eu-fra-1ah"
       "flavor" = 4
       "instance_count" = 2
       "image_name" = "rhel-8-factory-os-ready"
    },
    "rhel-7-factory-os-ready" = {
       "availability_zone" = "eu-fra-1ai"
       "instance_count" = 3
       "flavor" = 3
       "image_name" = "rhel-7-factory-os-ready"
    },
    "rhel-6-factory-os-ready" = {
       "availability_zone" = "eu-fra-1ah"
       "instance_count" = 3
       "flavor" = 3
       "image_name" = "rhel-6-factory-os-ready"
    }
  }
}

Here, I have to iterate through the sum of the instance_count attributes across all the keys and create instances based on each instance_count.

I could calculate the sum of instance_count with the built-in functions below.

locals {
  list_sum = length(flatten([for i in var.images: range(i["instance_count"])]))
}

How can I iterate using the list_sum value and create the resources based on instance_count?

I created the lists below to create the resources:

locals {
  list_images = tolist(keys(var.images))
  list_instance_count = [for i in var.images: i["instance_count"]]
  list_flavors = [for i in var.images: i["flavor"]]
  list_image_names = [for i in var.images: i["image_name"]]
  list_availability_zones = [for i in var.images: i["availability_zone"]]
}

My resource:

resource "openstack_compute_instance_v2" "instance" {
  count = local.list_sum
  image_name = element(local.list_image_names, count.index +1 )
  flavor_id = element(local.list_flavors, (count.index + 1) )
  name = element(local.list_image_names, (count.index + 1) )
  security_groups = var.security_group
  availability_zone = element(local.list_availability_zones, (count.index + 1) )
  key_pair = "foptst"
  network {
    name = var.network_name
  }
}

By now you may have noticed that my iteration is incorrect. My resource block has to create resources based on the instance_count variable, i.e., 2 instances of rhel-8-factory-os-ready, 3 instances of rhel-7-factory-os-ready, and 3 instances of rhel-6-factory-os-ready.

Because of the incorrect looping, I couldn’t get this to work. It would be great if someone could show me how to iterate properly to create the resources as expected.
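
For comparison, the pattern I have seen suggested elsewhere is to flatten the map into one element per desired instance first, and then index that list with count. A sketch against my variable above, untested:

locals {
  # One element per instance, expanded from each image's instance_count
  instances = flatten([
    for key, img in var.images : [
      for n in range(img.instance_count) : {
        name              = format("%s-%02d", img.image_name, n + 1)
        image_name        = img.image_name
        flavor            = img.flavor
        availability_zone = img.availability_zone
      }
    ]
  ])
}

resource "openstack_compute_instance_v2" "instance" {
  count             = length(local.instances)
  name              = local.instances[count.index].name
  image_name        = local.instances[count.index].image_name
  flavor_id         = local.instances[count.index].flavor
  availability_zone = local.instances[count.index].availability_zone
  security_groups   = var.security_group
  key_pair          = "foptst"

  network {
    name = var.network_name
  }
}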

Many Thanks in advance,
Harsha

Posts: 1

Participants: 1



Azure Subnet module with misleading "Private Link..." Flags


@ruirosamendes wrote:

The flags:
enforce_private_link_endpoint_network_policies
enforce_private_link_service_network_policies

should probably be named:
disable_private_link_endpoint_network_policies
disable_private_link_service_network_policies

The default value is FALSE, which sets the private_link_endpoint_network_policies flag to “Enabled” on the Azure side.

If I want to use private endpoints, for example, I have to “Disable” private_link_endpoint_network_policies, which means setting enforce_private_link_endpoint_network_policies = TRUE in Terraform.

So “enforce” does not seem to be the right word.

What do you think about it?

Regards, Rui

NOTE:
I built a Vnet module and my code now is:

resource "azurerm_subnet" "subnets" {
  for_each = local.subnets

  name                 = each.value["name"]
  address_prefix       = each.value["address_prefix"]
  virtual_network_name = azurerm_virtual_network.vnet.name
  resource_group_name  = var.resource_group_name

  enforce_private_link_endpoint_network_policies = each.value["disable_private_link_endpoint_network_policies"]
  enforce_private_link_service_network_policies  = each.value["disable_private_link_service_network_policies"]

  service_endpoints = each.value["service_endpoints"]
}

Posts: 1

Participants: 1


Importing infrastructure into Terraform Cloud


@camechis wrote:

Is there a way to begin using Terraform Cloud with infrastructure that has been created by other means? Meaning, can you write Terraform modules and import the existing infrastructure into state?
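
As a sketch of what I mean (the resource address and bucket name are made up): after writing a matching resource block and configuring the workspace, I would expect to run something like

terraform import aws_s3_bucket.website my-existing-bucket-name

and have the existing object pulled into the workspace’s state.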

Posts: 1

Participants: 1


Reference to variable in locals


@vmorkunas wrote:

Hello,

I have the locals block which looks like this:

routing = {
        "Ldap_Data" = {
            src_rt_ids = var.ldap_ops_private_route_table_ids
            dst_rt_ids = var.base_info["data_vpc"].private_route_table_ids
            src_cidr = var.ldap_ops_vpc_cidr
            dst_cidr = var.base_info["data_vpc"].cidr_block
            peering_connection_id = var.peering_connection_ids["Ldap-Data"]
            name = "ldap-data"
            src_provider = "aws.ldap"
            dst_provider = "aws.stack"
        }
}

In some cases var.base_info["data_vpc"] doesn’t exist, and I get this error:
The given key does not identify an element in this collection value.

How can I check whether this key is set before the assignment?
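
The workaround I am experimenting with is guarding each lookup on the key actually being present, falling back to an empty value. A sketch (the fallbacks are my choice):

dst_rt_ids = contains(keys(var.base_info), "data_vpc") ? var.base_info["data_vpc"].private_route_table_ids : []
dst_cidr   = contains(keys(var.base_info), "data_vpc") ? var.base_info["data_vpc"].cidr_block : null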

Posts: 2

Participants: 1


Retrieving DNS name from aws_efs_file_system resource


@nyue wrote:

I am using the following code to retrieve the DNS name of the AWS EFS file system. Validation passes and the plan is created fine, but when I run it, it errors out. I tried looking for the /tmp log file it mentions but could not find it. The instance was not created, so I was not able to SSH in to look at the logs on the remote host.

provisioner "remote-exec" {
  inline = [
    "sudo echo ${aws_efs_file_system.cluster_efs.dns_name} > /tmp/nfs.txt"
  ]
}

I have the following resource

---cut----
"root_module": {
  "resources": [
    {
      "address": "aws_efs_file_system.cluster_efs",
      "mode": "managed",
      "type": "aws_efs_file_system",
      "name": "cluster_efs",
      "provider_name": "aws",
      "schema_version": 0,
      "values": {
        "arn": "arn:aws:elasticfilesystem:ca-central-1:083230063072:file-system/fs-fc78d411",
        "creation_token": "cluster_efs",
        "dns_name": "fs-fc78d411.efs.ca-central-1.amazonaws.com",
        "encrypted": true,
        "id": "fs-fc78d411",
        "kms_key_id": "arn:aws:kms:ca-central-1:083230063072:key/387e89d0-53cd-42e0-a10f-06c0f956dd26",
        "lifecycle_policy": [],
        "performance_mode": "generalPurpose",
        "provisioned_throughput_in_mibps": 0,
        "reference_name": null,
        "tags": {
          "Name": "EfsExample"
        },
        "throughput_mode": "bursting"
      }
    },
---cut----

Posts: 1

Participants: 1


each.value and nested maps not returning attributes (0.12)


@ClusterDaemon wrote:

I’m authoring an r53_zones module, and I’m having issues with the way for_each seems to be addressing the nested map data structure I’m using as input.

For example, given the below input variable:

variable "zones" {
  type = map(
    object({})
  )
  default = {
    "test0" = {
      "vpc_ids" = ["vpc-obscura", "vpc-blerg"],
      "force_destroy" = true,
      "tags" = {"test" = "true"}
    },
    "test1" = {
      "vpc_ids" = ["vpc-blerg"],
      "force_destroy" = true,
      "tags" = {
        "test" = "true",
        "zone_name" = "test1"
      },
    }
  }
}

And the below resource block:

resource "aws_route53_zone" "this" {
  for_each = var.zones
# Some lines
  force_destroy = each.value.force_destroy
# More lines
}

I get the plan output of:

on main.tf line 20, in resource "aws_route53_zone" "this":
20:   force_destroy = each.value.force_destroy
|----------------
| each.value is object with no attributes
This object does not have an attribute named "force_destroy".

But the object in question does have force_destroy as one of its keys. So what’s up with that? Or, more aptly, is this a valid way to grab a key that’s nested in the parent map’s value defined in for_each? I’ve tried various forms of lookup() with similar results.
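
One thing I am now suspecting, based on my reading of the 0.12 type system: map(object({})) declares objects with no attributes at all, so Terraform converts each value to that type and silently discards every key, which would explain “object with no attributes”. Spelling the attributes out (or using map(any)) should preserve them. A sketch:

variable "zones" {
  # object({}) means "an object with zero attributes"; declaring the
  # attributes keeps vpc_ids, force_destroy and tags intact.
  type = map(object({
    vpc_ids       = list(string)
    force_destroy = bool
    tags          = map(string)
  }))
}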

Posts: 4

Participants: 2

