Channel: Terraform - HashiCorp Discuss

Mixing older versions of remote modules with the latest Terraform 0.12.20


@krish7919 wrote:

Hi,

We use numerous modules with the source set to source = git://.... However, our remote modules are still written for Terraform v0.10.8, and I am trying to add one new, independent module written for v0.12.20.
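For reference, a git module source of that form is usually pinned to a specific revision with a ref query string (the repo URL and tag below are placeholders):

module "network" {
  source = "git::https://example.com/modules.git//network?ref=v0.10.8"
}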

I get numerous interpolation warnings and errors such as Error: Unsupported block type and Error: Reference to undeclared input variable.

Is it not possible to upgrade the modules one by one? Isn’t it a big drawback if this upgrade path isn’t supported?

If it is possible, can you please provide some suggestions on how this is achieved? I have googled a bit but have found nothing so far.

Thanks!


Krish

Posts: 3

Participants: 2



How to pass an array via local-exec


@rmattier wrote:

I have a local-exec provisioner running an Ansible playbook, and I'm trying to pass it a list of private_ips. The line looks like:

command = "ansible-playbook -e 'hostname=elastic-master-${count.index + 1} node_type=master environ=${var.environ} master_list=${aws_instance.elastic-master.*.private_ip}' -u ec2-user -i '${self.private_ip},' --private-key '~/.ssh/id_rsa' packages.yml"

So, it seems the section "master_list=${aws_instance.elastic-master.*.private_ip}" is the problem.
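A common fix (a sketch, not tested against this exact config) is to serialize the list yourself with join(), since interpolating a list straight into a string produces Terraform's own rendering of the list rather than something the shell and Ansible can parse:

command = "ansible-playbook -e 'hostname=elastic-master-${count.index + 1} node_type=master environ=${var.environ} master_list=${join(",", aws_instance.elastic-master.*.private_ip)}' -u ec2-user -i '${self.private_ip},' --private-key '~/.ssh/id_rsa' packages.yml"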

Posts: 1

Participants: 1


Terraform Cloud IPs to whitelist for a private repo: where can we find them?


@thcp wrote:

Hello everyone,

My team is currently using the free tier of Terraform Cloud (small team of 3 people) and we are facing an issue setting up access to Bitbucket Cloud. Since our repository is private, where can I find which IPs from Terraform Cloud I should whitelist?

Same question posted on reddit: https://www.reddit.com/r/Terraform/comments/f8rnr6/anyone_had_success_connecting_terraform_cloud/

Posts: 1

Participants: 1


Best practices on keeping terraform code and application code


@mystycalprorok wrote:

Hi.

Are there any good practices on keeping terraform code and application code?

Currently, for e.g. AWS Lambdas, we keep the Terraform code alongside the application code in a single repository, but I am not sure if this approach aligns with best practices.

I have only found articles that describe how to organize the Terraform code itself, not application code + Terraform code.

Do you have any thoughts on this?

Regards,
Adam

Posts: 1

Participants: 1


iam_user_access_to_billing change is initiating replacement of the AWS account


@supratiksekhar wrote:

Hello

I have imported an AWS account that does not have IAM User Access to Billing enabled.

I enabled the same in Terraform (aws_organizations_account) and ran plan.
Terraform is trying to recreate the account.

Why does changing "iam_user_access_to_billing" force a replacement?
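For context, this is the kind of change in question (account details below are placeholders). As far as I can tell from the provider docs, the AWS provider treats iam_user_access_to_billing as ForceNew because AWS only honors it at account creation, so flipping it on an imported account plans a destroy/create:

resource "aws_organizations_account" "this" {
  name  = "example-account"        # placeholder
  email = "aws-admin@example.com"  # placeholder

  # Only applied at CreateAccount time; changing it later forces replacement.
  iam_user_access_to_billing = "ALLOW"
}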

Thanks

Posts: 1

Participants: 1


Validation block over multiple values


@avarmaavarma wrote:

This validation block works for a single input variable.

variable "mytestname" {

     validation {
        condition = length(regexall("^test", var.mytestname)) > 0
        error_message = "Should start with 'test'"
     }
}

I need it to work inside a for_each, or I need some workaround to accomplish this. The issue is that there is a restriction on the condition statement: the condition HAS to refer to the input variable itself (i.e., it cannot accept an each.value):

variable "mytestnames" {

listnames = split(",",var.mytestnames)     

for_each = var.listnames

     validation {
        condition = length(regexall("^test", each.value)) > 0
        error_message = "Should start with test"
      }
}

The above snippet does not work. I need a way to iterate over a list of values and validate each of them using the validation block.
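One workaround (a sketch; alltrue and can require Terraform 0.13+, and custom validation itself was still experimental in 0.12) is to keep the condition anchored on the variable but iterate inside it with a for expression:

variable "mytestnames" {
  type = list(string)

  validation {
    # The condition still references var.mytestnames directly,
    # but the for expression checks every element of the list.
    condition     = alltrue([for n in var.mytestnames : can(regex("^test", n))])
    error_message = "Every name must start with 'test'."
  }
}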

Posts: 1

Participants: 1


Accessing a file from outside .terraform


@MPersaud38704 wrote:

Hey there,

I've created a module that lives in a GitHub repo. However, instead of passing all the variables in the module block, I'm trying to do something like the following in the variables.tf inside the module once it's initialized:

locals {
  # Directory in which to find the workspace's yaml file
  tfsettingsfile = "../../../environments/${terraform.workspace}.yaml"

  # Loads the yaml file for the workspace; if none is found, fall back to default content
  tfsettingsfilecontent = fileexists(local.tfsettingsfile) ? file(local.tfsettingsfile) : "NoTFSettingsFileFound: true"

  # Decodes the yaml file
  tfworkspacesettings = yamldecode(local.tfsettingsfilecontent)

  vars = merge(local.default, local.tfworkspacesettings)
}

The goal is for the module to read all the values from the local yaml file instead of having them passed in one by one. Is this possible? Does anyone have insight on how to do this?

When I run terraform init and apply, I keep getting the NoTFSettingsFileFound fallback.
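A likely explanation (an educated guess, not verified against this repo): relative paths in a remote module resolve against the module's own directory under .terraform/modules, not against the root configuration, so the ../../../environments path points somewhere unexpected after terraform init. Anchoring the path on path.root is one sketch of a fix:

locals {
  # path.root is the directory of the root module that ran terraform init,
  # so this resolves the same way no matter where the module is cached.
  tfsettingsfile = "${path.root}/environments/${terraform.workspace}.yaml"
}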

Posts: 1

Participants: 1


Hierarchical resource creation in loop (AWS Provider)


@anovak-sbs wrote:

Hoping for some advice on how to create hierarchical resources in a loop in Terraform v0.12.x.
The exact use case I have is trying to create aws_api_gateway_resource in a parent child relationship.

I would like to model the hierarchy as a flat list ['parent_path_step', 'child_path_step'], pass this list to a module and have the module use a looping construct in Terraform to build the path steps top down:

parent_path_step
    child_path_step

The hierarchy in the AWS provider is obtained via the parent_id property of the aws_api_gateway_resource. The first iteration of the loop has to refer to the root_id provided by the API Gateway resource, while the second iteration would have to refer to the id generated by the resource creation of 'parent_path_step', i.e. refer to the id returned by loop iteration [current - 1].
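For what it's worth, a count or for_each expression can't reference the resource it is declaring, so the "iteration [current - 1]" reference isn't expressible directly; a common workaround is to unroll the hierarchy, one resource per level. A sketch (resource names and the path_steps variable are placeholders):

resource "aws_api_gateway_resource" "parent" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id  # level 0 hangs off the root
  path_part   = var.path_steps[0]
}

resource "aws_api_gateway_resource" "child" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_resource.parent.id             # level 1 chains to level 0
  path_part   = var.path_steps[1]
}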

Any ideas on how I could achieve this please?

Cheers
AN

Posts: 1

Participants: 1



For loop returns list of map of maps


@mhumeSF wrote:

Trying to return a single flat map, but I keep getting a list of maps of maps.

locals {
  emails = {
    us-east-1 = {
      prod_account = {
        me-at-example-com = {
          "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com",
          "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com",
          "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com"
        },
      },
    },
    us-west-2 = {
      dev_account = {
        other-at-example-com = {
          "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com",
          "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com",
          "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com",
        },
      },
      prod_account = {
        me-at-example-com = {
          "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com",
          "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com",
          "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com",
        },
      },
    }
  }
}
output "emails" {
  value = flatten([
    for region, accounts in local.emails: [
      for account, emails in accounts: {
        for email, records in emails: "${region}_${account}_${email}" => records
      }
    ]
  ])
}

The output is a list of maps of maps, when I just want a single flat map:

emails = [
  {
    "us-east-1_prod_account_me-at-example-com" = {
      "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com"
      "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com"
      "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com"
    }
  },
  {
    "us-west-2_dev_account_other-at-example-com" = {
      "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com"
      "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com"
      "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com"
    }
  },
  {
    "us-west-2_prod_account_me-at-example-com" = {
      "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com"
      "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com"
      "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com"
    }
  },
]

Looking to get

emails = {
    "us-east-1_prod_account_me-at-example-com" = {
      "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com"
      "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com"
      "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com"
    },
    "us-west-2_dev_account_other-at-example-com" = {
      "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com"
      "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com"
      "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com"
    }
    "us-west-2_prod_account_me-at-example-com" = {
      "xxx._domainkey.goodrx.com" = "xxx.dkim.amazonses.com"
      "yyy._domainkey.goodrx.com" = "yyy.dkim.amazonses.com"
      "zzz._domainkey.goodrx.com" = "zzz.dkim.amazonses.com"
    }
  }
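One way to get there (a sketch using the same locals) is to spread the flattened list into merge() with the 0.12 expansion symbol, which collapses the single-entry maps into one map:

output "emails" {
  value = merge(flatten([
    for region, accounts in local.emails : [
      for account, emails in accounts : {
        for email, records in emails : "${region}_${account}_${email}" => records
      }
    ]
  ])...)  # the trailing ... expands the list into merge()'s arguments
}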

Posts: 1

Participants: 1


Using element() in output


@mooperd wrote:

Hi,

I cannot get element() to work in my tf output.

output "kube_hosts" {
  description = "Control plane endpoints to SSH to"

  value = {
    control_plane = {
      cluster_name         = var.cluster_name
      cloud_provider       = "vsphere"
      private_address      = []
      public_address       = element(vsphere_virtual_machine.control_plane.*.guest_ip_addresses, 1)
      ssh_agent_socket     = var.ssh_agent_socket
      ssh_port             = var.ssh_port
      ssh_private_key_file = var.ssh_private_key_file
      ssh_user             = var.ssh_username
    }
  }
}

Am I doing something stupid?

kube_hosts = {
  "control_plane" = {
    "cloud_provider" = "vsphere"
    "cluster_name" = "testing-2"
    "private_address" = []
    "public_address" = [
      "10.2.2.20",
      "10.1.32.211",
      "fe80::250:56ff:fead:f818",
      "fe80::250:56ff:fead:72c1",
    ]
    "ssh_agent_socket" = "env:SSH_AUTH_SOCK"
    "ssh_port" = 22
    "ssh_private_key_file" = "testing-2/id_rsa"
    "ssh_user" = "ubuntu"
  }
}

I am expecting to only see one public_address - the 2nd entry.
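A possible explanation (a guess from the output shown): guest_ip_addresses is itself a list per VM, so the splat produces a list of lists, and element(..., 1) returns one VM's entire address list rather than one address. Indexing into a single VM's list is a sketch of what may have been intended:

# second guest address of the first control-plane VM
public_address = element(vsphere_virtual_machine.control_plane[0].guest_ip_addresses, 1)
# or, the second address of every VM:
# public_address = [for vm in vsphere_virtual_machine.control_plane : vm.guest_ip_addresses[1]]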

Cheers,

Andrew

Posts: 1

Participants: 1


AzureRM 2.0: Disable Windows updates and delete OS disk on termination


@philthynz wrote:

With the release of version 2.0 of the AzureRM provider and its "azurerm_windows_virtual_machine" resource, how can we now disable Windows updates and delete the OS disk on termination? The old "enable_automatic_upgrades" and "delete_os_disk_on_termination" arguments do not work with "azurerm_windows_virtual_machine".
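If memory serves from the 2.0 provider docs (worth verifying), updates moved to an enable_automatic_updates argument on the new resource, and OS-disk deletion became a provider-level behaviour configured in the features block:

provider "azurerm" {
  features {
    virtual_machine {
      # delete the OS disk when the VM is deleted
      delete_os_disk_on_deletion = true
    }
  }
}

resource "azurerm_windows_virtual_machine" "example" {
  # ... name, size, image, network_interface_ids, etc. ...
  enable_automatic_updates = false  # must be set at create time
}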

Thanks

Posts: 2

Participants: 2


Azure pipelines and terraform


@RussellMaycock wrote:

Hi,
I'm having a problem trying to get my pipelines and Terraform to work together.
I have two separate Azure build pipelines, one for a website and one for the infrastructure, which is Terraform code. When it comes to the release, I have two artifacts, one from each build.

I use terraform to deploy the infrastructure, my release pipeline was configured to run a plan (without saving it to a file) and then run apply. The last task in my release pipeline has an azure web app deployment task which deploys the website code to the web app.

This works fine, except lately I have seen that most pipelines use plan the way it was meant to be used: save the plan to a file and then apply that file. The problem I have with this is that if I update the website code and try to redeploy, then because there are no changes to the Terraform code, the release pipeline fails saying the plan is stale, so it never gets to the task that updates the website code.

If I create two stages, Infra and Dev, how do I set up Azure DevOps release pipelines so that Dev can update the website provided the infrastructure exists and the artifact has been updated, and so that I can update just the infrastructure without redeploying the website?

Do I need two separate release pipelines? I don't want to go that way.

Posts: 1

Participants: 1


How to Assign an EIP to a Bastion host running in an ASG


@sulemanb wrote:

We have an EC2 bastion host running in an ASG (Auto Scaling group). I have added 2 "Scheduled Actions" - [shutdown and startup] - to the ASG to use the host efficiently, e.g. scale down to zero during non-working hours.

The scheduled actions are all fine, but the problem is that when the bastion host terminates as per the shutdown schedule and then comes back as per the startup schedule, it gets assigned a new/different public IP. With a new public IP, the problem is that users have to change the DNS/IP in their PuTTY clients every time they need to make an SSH connection. This is not good!

I then adjusted my Terraform to assign an EIP to the bastion ASG instance. I could not find a straightforward way to do it other than through the AWS CLI, as per this article and similar explanations from a couple of other sources.

However, even after applying the changes explained in that article, I am unfortunately still unable to assign/associate an EIP to my ASG instance such that it stays the same when the instance comes back up.

Has anyone addressed a similar problem and has some pointers or solutions for it?

resource "aws_launch_configuration" "bastion-host" {
  count           = var.deploy_bastion ? 1 : 0
  name_prefix     = var.bastion_host_launch_configuration_name
  image_id        = var.amis[var.aws_region]
  instance_type   = var.bastion_host_instance_type
  key_name        = aws_key_pair.public_key.key_name
  security_groups = [aws_security_group.bastion-host[count.index].id]

  associate_public_ip_address = true

  #root_block_device {
  #  delete_on_termination = false
  #  volume_size = 10
  #  volume_type = "gp2"
  #}

  # NOTE: for the associate-address call below to succeed, the instance needs
  # an IAM instance profile allowing ec2:AssociateAddress (and the wait call's
  # describe permissions); none is attached here.
  # The <<-EOF form strips the leading indentation so "#cloud-config" lands in
  # the first column, which cloud-init requires.
  user_data = <<-EOF
    #cloud-config
    runcmd:
      - aws ec2 wait instance-running --instance-id $(curl http://169.254.169.254/latest/meta-data/instance-id)
      - aws ec2 associate-address --instance-id $(curl http://169.254.169.254/latest/meta-data/instance-id) --allocation-id ${aws_eip.bastion-host.id} --allow-reassociation
  EOF
}

resource "aws_eip" "bastion-host" {
  vpc = true
}

resource "aws_autoscaling_group" "bastion-host" {
  count                     = var.deploy_bastion ? 1 : 0
  name                      = var.bastion_host_autoscaling_group_name
  vpc_zone_identifier       = [var.dxyz_eks_public_subnet_1, var.dxyz_eks_public_subnet_2]
  launch_configuration      = aws_launch_configuration.bastion-host[count.index].name
  min_size                  = var.deploy_bastion ? 1 : 0
  max_size                  = var.deploy_bastion ? 2 : 0
  health_check_grace_period = 300
  health_check_type         = "EC2"
  force_delete              = true

  tag {
    key                 = "Name"
    value               = var.bastion_host_autoscaling_group_tag_name
    propagate_at_launch = true
  }
}

# Stop all instances each weekday at 8pm
resource "aws_autoscaling_schedule" "bastions-host-weekdays-shutdown" {
  count                  = var.deploy_bastion ? 1 : 0
  scheduled_action_name  = "bastions-host-weekdays-shutdown"
  min_size               = 0
  max_size               = 0
  desired_capacity       = 0
  recurrence             = var.bastion_host_autoscaling_weekdays_shutdown_schedule  # "00 20 * * MON-FRI"
  autoscaling_group_name = aws_autoscaling_group.bastion-host[count.index].name
}

# Start up 1 instance each weekday at 7am
resource "aws_autoscaling_schedule" "bastions-host-weekdays-startup" {
  count                  = var.deploy_bastion ? 1 : 0
  scheduled_action_name  = "bastions-host-weekdays-startup"
  min_size               = var.deploy_bastion ? 1 : 0
  max_size               = var.deploy_bastion ? 2 : 0
  desired_capacity       = 1
  recurrence             = var.bastion_host_autoscaling_weekdays_startup_schedule  # "00 07 * * MON-FRI"
  autoscaling_group_name = aws_autoscaling_group.bastion-host[count.index].name
}

When I look at the Auto Scaling launch configuration's user data in the AWS web console, it shows as:

  #cloud-config
  runcmd:
    - aws ec2 wait instance-running --instance-id $(curl http://169.254.169.254/latest/meta-data/instance-id)
    - aws ec2 associate-address --instance-id $(curl http://169.254.169.254/latest/meta-data/instance-id) --allocation-id eipalloc-035039833565d2d30 --allow-reassociation

Now, I am not sure if that had any impact or caused any error.

On the other hand, the ASG and everything in general look fine. It is just that when the instance goes down and comes back, it gets a new IP.

Posts: 1

Participants: 1


Terraform with AWS - Can't destroy IG sometimes!


@john-morsley wrote:

Hi guys,
From time to time Terraform hangs whilst attempting to destroy my infrastructure. Its usual culprit is my Internet Gateway! Is anyone else having this issue?
Many thanks,
John

Posts: 1

Participants: 1


Adding a default certificate to aws network load balancer error: certificate not found


@juanluisbaptiste wrote:

Hi, I'm trying to create an AWS network load balancer that attaches to some certificates in ACM. When running Terraform I get this error while setting up the default certificate:

module.swarm_cluster.module.network_lb.aws_lb_listener.listener-https-certs[0]: Still creating... [5m0s elapsed]
Error: Error creating LB Listener: CertificateNotFound: Certificate 'arn:aws:acm:us-east-2:828535259631:certificate/7632c411-02b1-4ac3-ad3c-c3de09b5b212' not found
        status code: 400, request id: 7a48106e-beff-4c62-a441-a31154470e6d

But the certificate does exist; I can fetch it with the AWS ACM CLI:

$ aws acm get-certificate --region us-east-2 --profile work --certificate-arn arn:aws:acm:us-east-2:828535259631:certificate/7632c411-02b1-4ac3-ad3c-c3de09b5b212
{
    "Certificate": "-----BEGIN CERTIFICATE-----
...

This is the load balancer listener code:

resource "aws_lb_listener" "listener-https-certs" {
	count = var.attach_certificates ? 1 : 0
    load_balancer_arn       = aws_lb.load_balancer.arn
	port                = 443
	protocol            = "TLS"
	certificate_arn     = "arn:aws:acm:us-east-2:828535259631:certificate/7632c411-02b1-4ac3-ad3c-c3de09b5b212"

	default_action {
	target_group_arn = aws_lb_target_group.tg-https.arn
	type             = "forward"
	}
}

The terraform apply output for that resource:

# module.swarm_cluster.module.network_lb.aws_lb_listener.listener-https-certs[0] will be created
  + resource "aws_lb_listener" "listener-https-certs" {
      + arn               = (known after apply)
      + certificate_arn   = "arn:aws:acm:us-east-2:828535259631:certificate/7632c411-02b1-4ac3-ad3c-c3de09b5b212"
      + id                = (known after apply)
      + load_balancer_arn = "arn:aws:elasticloadbalancing:us-east-2:828535259631:loadbalancer/net/nlb-prod/be5676851ad42121"
      + port              = 443
      + protocol          = "TLS"
      + ssl_policy        = (known after apply)

      + default_action {
          + order            = (known after apply)
          + target_group_arn = "arn:aws:elasticloadbalancing:us-east-2:828535259631:targetgroup/prod-nlb-tg-443/ba6e028afd6683a7"
          + type             = "forward"
        }
    }

The curious thing is that this code was tested and working some weeks ago; the only thing that has changed since is that new certs were imported into ACM.

What could be making Terraform think the certificate does not exist?
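One thing that might be worth trying (a sketch, not a confirmed diagnosis): look the certificate up through the aws_acm_certificate data source instead of hardcoding the ARN, which both verifies that the provider's credentials/region can actually see the certificate and tracks re-imports:

data "aws_acm_certificate" "default" {
  domain      = "example.com"  # placeholder for the cert's domain
  statuses    = ["ISSUED"]
  most_recent = true
}

resource "aws_lb_listener" "listener-https-certs" {
  # ...
  certificate_arn = data.aws_acm_certificate.default.arn
}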

Full balancer code here: https://pastebin.com/aNh5F8sh

Posts: 2

Participants: 1



Lists with trailing commas templated by terraform, rejected by AWS


@piyat wrote:

Hey!

Background

To give an overview of what I’m doing - I’m working on a project in tf 0.12. The goal is to stand up some databases and data migration infrastructure. I’ve put together a couple of modules to handle that and a calling module to supply input.

In my test environment I have 3 RDS instances, 3 DMS replication instances (1 per RDS), and 6 DMS endpoints (3 source, 3 target). What I want to do is allow some database dev types into the AWS console so they can configure the endpoints and replication tasks manually, and then once happy I can port that into the module.

Issue

I want to create a few sets of aws_iam_policy, aws_iam_group and aws_iam_group_policy_attachment resources so that per environment all users in the group will have certain permissions to explicitly defined DMS resources.

The IAM policy that is being rendered contains syntax errors, and as far as I can tell terraform is adding a trailing comma to list objects in the policy definition.

Versions
Terraform: 0.12.19
provider.aws: version = "2.46"
provider.onepassword: version = "0.5"
provider.random: version = "2.2"

What I’m doing

  • I'm using outputs/data sources to source 3 tuples of DMS ARNs from a separate child module - one 'list' each for source endpoints, target endpoints and replication instances - some example data below.

  • I'm merging these in a local var into a single list.

  • I'm formatting this var with jsonencode and then attempting to use it as a templating var in a json iam policy template.

    • This didn't work, so I've attempted the same using the data source aws_iam_policy_document, but ran into the same error.

In terraform console:

> local.all_dms_merged
[
  "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
  "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
  "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:rep:somevalue",
]
> jsonencode(local.all_dms_merged)
["arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue","arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue","arn:aws:dms:eu-west-1:xxxxxxxxxxxx:rep:somevalue"]

At this point, it looks as though jsonencode has done the trick and I now have a properly formatted string to inject into my template, so I run this:

resource "aws_iam_policy" "dms_policy" {
name        = "tf-dms-permissions-${terraform.workspace}"
description = "Policy allowing console users to access dms resources created in terraform workspace."
path        = "/"
policy = templatefile("${path.module}/policies/dms_permissions.json", { dms_resources = jsonencode(local.all_dms_merged) })
}

The json policy template saved in ${path.module}/policies/dms_permissions.json:

{
  "Version": "2012-10-17",
  "Sid": "DMSAllowedOperations",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        ${dms_resources}
      ],
      "Action": [
        "dms:DescribeSchemas",
        "dms:DescribeRefreshSchemasStatus",
        "dms:ModifyReplicationTask",
        "dms:StartReplicationTask",
        "dms:DescribeEventSubscriptions",
        "dms:DescribeEndpointTypes",
        "dms:DescribeEventCategories",
        "dms:StartReplicationTaskAssessment",
        "dms:DescribeOrderableReplicationInstances",
        "dms:ListTagsForResource",
        "dms:DescribeConnections",
        "dms:DescribeReplicationInstances",
        "dms:DeleteReplicationTask",
        "dms:TestConnection",
        "dms:DescribeEndpoints"
      ]
    }
  ]
}

Expected

The rendered policy file is formatted as specified in either the template supplied to the templatefile function or in the data.aws_iam_policy_document renderer.

Actual

The aws iam/CreatePolicy API rejects the rendered policy with: MalformedPolicyDocument

2020/02/27 11:25:38 [DEBUG] aws_iam_policy.dms_policy: apply errored, but we’re indicating that via the Error pointer rather than returning it: Error creating IAM policy example: MalformedPolicyDocument: Syntax errors in policy.

The terraform plan looks like this:

# aws_iam_policy.dms_policy will be created
  + resource "aws_iam_policy" "dms_policy" {
      + arn         = (known after apply)
      + description = "Policy allowing console users to access dms resources created in terraform workspace."
      + id          = (known after apply)
      + name        = "tf-dms-permissions-dev"
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "dms:DescribeSchemas",
                          + "dms:DescribeRefreshSchemasStatus",
                          + "dms:ModifyReplicationTask",
                          + "dms:StartReplicationTask",
                          + "dms:DescribeEventSubscriptions",
                          + "dms:DescribeEndpointTypes",
                          + "dms:DescribeEventCategories",
                          + "dms:StartReplicationTaskAssessment",
                          + "dms:DescribeOrderableReplicationInstances",
                          + "dms:ListTagsForResource",
                          + "dms:DescribeConnections",
                          + "dms:DescribeReplicationInstances",
                          + "dms:DeleteReplicationTask",
                          + "dms:TestConnection",
                          + "dms:DescribeEndpoints",
                        ]
                      + Effect   = "Allow"
                      + Resource = [
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:endpoint:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:rep:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:rep:somevalue",
                            "arn:aws:dms:eu-west-1:xxxxxxxxxxxx:rep:somevalue",
                        ]
                      + Sid      = "DMSAllowedOperations"
                    },
                ]
              + Version = "2012-10-17"
            }
        )
    }

The problem with this policy, it seems, is the trailing commas in both the Action list and the Resource list.

If I drop this policy into the policy simulator and remove these trailing commas, the JSON is valid, and I'd expect the call to AWS to succeed.

If this is a known bug - is there a workaround?
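For what it's worth, the trailing commas may be a red herring: the plan output above is Terraform's diff rendering of the decoded policy, and jsonencode never emits trailing commas. Two things in the template itself would each produce MalformedPolicyDocument: "Sid" is not valid at the top level of a policy document (only "Version", "Id" and "Statement" are allowed there), and since ${dms_resources} already renders as a complete JSON array, wrapping it in another [ ] nests two arrays. A corrected sketch of the template:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DMSAllowedOperations",
      "Effect": "Allow",
      "Resource": ${dms_resources},
      "Action": [
        "dms:DescribeSchemas",
        "dms:DescribeRefreshSchemasStatus",
        "dms:ModifyReplicationTask",
        "dms:StartReplicationTask",
        "dms:DescribeEventSubscriptions",
        "dms:DescribeEndpointTypes",
        "dms:DescribeEventCategories",
        "dms:StartReplicationTaskAssessment",
        "dms:DescribeOrderableReplicationInstances",
        "dms:ListTagsForResource",
        "dms:DescribeConnections",
        "dms:DescribeReplicationInstances",
        "dms:DeleteReplicationTask",
        "dms:TestConnection",
        "dms:DescribeEndpoints"
      ]
    }
  ]
}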

Posts: 2

Participants: 2


Best practice needed: Dedicated Rootservers, Cloud Instances, Terraform and Ansible

Migrate from count to for_each


@juanjojulian wrote:

Good morning,

We are in the process of rewriting our Terraform modules to use "for_each" instead of "count", and we are finding difficulties migrating networks deployed with the old module to the new one. I would like to know if there is an official procedure for this.

The main problems we are finding and solution approach we are taking:

  1. If you try to apply the new code to an already existing vnet, Terraform will plan to destroy every subnet and create it with the new nomenclature:

destroy: module.network.azurerm_subnet.subnet[0]
add: module.network.azurerm_subnet.subnet["westeurope-dev-app"]

  2. Ok, don't panic, let's rename all the subnets in the state file instead of destroying/adding them:

terraform state mv 'module.network.azurerm_subnet.subnet[0]' 'module.network.azurerm_subnet.subnet["westeurope-dev-app"]'

The main problem is that "count" deploys a 'list' of resources while "for_each" creates a 'map'. The mv command only renames the resource; it doesn't change the list to a map, so once you finish renaming subnets you end up with what Terraform sees as an empty list, because it cannot access its elements (a list element is accessed by numeric position, not by 'key'). You will also experience some Terraform crashes after this movement because of a "nil interface"…

Clue is in the state file:

"each": "map", versus "each": "list",

  3. Ok, after many tries/tests we found two ways to "fix" this new problem:
  • By removing only one subnet from the state file and importing it again, Terraform changes from list to map and everything seems fine - or is it?

terraform state rm 'module.network.azurerm_subnet.subnet["westeurope-dev-app"]'
terraform import 'module.network.azurerm_subnet.subnet["westeurope-dev-app"]' "/subscriptions/BLAHBLAH/providers/Microsoft.Network/virtualNetworks/westeurope-vnet/subnets/westeurope-dev-app"

  • Edit the state file with your favourite vi flavour and change from list to map. Is this correct?

We would like to script a solution so we can migrate all our networks, but we would appreciate official confirmation from HashiCorp to be sure we are not fixing one thing and breaking another.

Posts: 2

Participants: 2


Creating multiple AWS SNS topic and subscriptions


@bradroe wrote:

Hi,

I'm trying to create a module that will create multiple AWS SNS topics and subscriptions. I can create multiple topics no problem, but when the subscriptions try to use the topics' ARNs I get an error.

Inappropriate value for attribute "topic_arn": string required.

The code I am using is below -

main.tf

resource "aws_sns_topic" "this" {
  count        = var.create_sns_topic && length(var.name) > 0 ? length(var.name) : 0
  name         = var.name[count.index]
  display_name = var.display_name
}

resource "aws_sns_topic_subscription" "this" {
  count     = var.create_sns_topic && length(var.name) > 0 ? length(var.name) : 0
  topic_arn = aws_sns_topic.this[count.index]
  protocol  = "var.protocol"
  endpoint  = "var.endpoint"
}

outputs.tf

output "sns_topic_arn" {
  description = "ARN of SNS topic"
  value       = aws_sns_topic.this.*.arn
}

variables.tf

variable "create_sns_topic" {
  type    = bool
  default = true
}

variable "name" {
  type    = list(string)
  default = ["test1", "test2", "test3"]
}

variable "display_name" {
  type    = string
  default = "test"
}

Can anyone offer any advice?
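For what it's worth, a sketch of the two likely culprits: topic_arn is being given the whole resource object rather than its arn attribute (hence "string required"), and quoting var.protocol / var.endpoint passes the literal text instead of the variables' values:

resource "aws_sns_topic_subscription" "this" {
  count     = var.create_sns_topic && length(var.name) > 0 ? length(var.name) : 0
  topic_arn = aws_sns_topic.this[count.index].arn  # reference the arn attribute
  protocol  = var.protocol                         # unquoted: use the variable's value
  endpoint  = var.endpoint
}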

Posts: 3

Participants: 2


Trigger module based on input


@dhineshbabuelango wrote:

I need to trigger a module only if a variable is present. Is that achievable in Terraform?

So the module should run only if var.lbname has some value.

module "record_set_creation" {
  source          = "…/r53-private-hosted-zone/record-set"
  enable          = var.create_subdomain
  private_zone_id = var.private_zone_id
  record_set_name = var.record_set_name
  elb_hostname    = var.lbname
}
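Modules themselves can't take count in Terraform 0.12 (that arrived in 0.13), so the usual workaround is exactly the enable-style flag shown above, gating every resource inside the module. A hypothetical sketch of the module's internals:

# inside r53-private-hosted-zone/record-set/main.tf (hypothetical)
resource "aws_route53_record" "this" {
  # create nothing unless the module is enabled and a hostname was supplied
  count   = var.enable && var.elb_hostname != "" ? 1 : 0
  zone_id = var.private_zone_id
  name    = var.record_set_name
  type    = "CNAME"
  ttl     = 300
  records = [var.elb_hostname]
}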

Posts: 4

Participants: 3



