Channel: Terraform - HashiCorp Discuss

Lifecycle ignore_changes for a key in all blocks


@Evesy wrote:

Hi there,

I have a resource similar to the below which has a dynamic number of origins blocks based on some environmental overrides:

resource "cloudflare_load_balancer_pool" "delivery-platform" {
  for_each = var.load_balancer_pools

  name = "gcp-delivery-platform-${each.key}"
  description = "GCP Delivery Platform - ${title(each.key)}"
  enabled = true
  minimum_origins = 1
  notification_email = ""

  dynamic "origins" {
    for_each = each.value
    content {
      name = origins.key
      address = origins.value["address"]
      weight = lookup(origins.value, "weight", 1)
    }
  }

  monitor = cloudflare_load_balancer_monitor.ingress-nginx[each.key].id
}

I’m looking to be able to add a lifecycle rule to ignore changes for the weight key in all origin blocks.

I thought I could perhaps use some sort of splat syntax but that doesn’t seem to be valid:

  lifecycle {
    ignore_changes = [
      // origins[*]["weight"]
      origins[*].weight
    ]
  }

The only workaround I can currently think of is to add ignore rules for origins[0], origins[1], origins[2], etc. up to a sensible number that I don’t think will be exceeded.

Is anyone aware of a better way to achieve this?

Cheers!
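One coarser workaround (a sketch, not a per-key solution): `ignore_changes` accepts a whole attribute name, so ignoring the entire `origins` attribute would suppress weight drift, at the cost of also ignoring every other change to the origins:

```hcl
  lifecycle {
    # Note: this ignores *all* changes to origins, not just weight.
    ignore_changes = [origins]
  }
```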

Posts: 1

Participants: 1



Conditional null resource


@manishingole-coder wrote:

Hi Team,

I would like someone to help me with the below use case.

I have a null resource which runs a bash script on a remote server. If the bash script fails, it should trigger another script which reverts the changes made by the previous script.

I have tried writing the Terraform config file like this:

resource "null_resource" "efm" {
  triggers = {
    the_trigger = "${var.always_switch}"
  }

  connection {
    host        = "${aws_instance.myfirstec2instance.public_ip}"
    private_key = "${file(var.pem_file_path)}"
    user        = "${var.ssh_user}"
  }

  provisioner "local-exec" {
    # Bootstrap script called with private_ip of each node in the cluster
    command = "/bin/bash ${path.module}/utilities/scripts/setup.sh '${local.ssh_ip_list}' '${local.config_ip_list}' ${var.pemserverip} ${var.ssh_user} ${var.pem_file_path} ${var.region_name} ${aws_instance.myfirstec2instance.private_ip} ${aws_instance.myfirstec2instance.public_ip}"

    on_failure = "fail"
  }

  provisioner "local-exec" {
    # Revert script, intended to run only when always_switch is false
    command = "var.always_switch == false ? /bin/bash -xxx ${path.module}/utilities/scripts/revert.sh '${local.ssh_ip_list}' '${local.config_ip_list}' ${var.pemserverip} ${var.ssh_user} ${var.pem_file_path} ${var.region_name} ${aws_instance.myfirstec2instance.private_ip} ${aws_instance.myfirstec2instance.public_ip} : echo wrongstpes"
  }
}

Can someone help me to achieve the above use case?

Terraform version: 0.12.9
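A provisioner `command` string is passed to the shell, not evaluated as a Terraform conditional, so one common pattern (a sketch, with the script paths abbreviated from the post) is to let the shell handle the failure branch with `||`:

```hcl
  provisioner "local-exec" {
    # If setup.sh exits non-zero, the shell runs revert.sh instead.
    # Script paths and arguments are illustrative.
    command = "/bin/bash ${path.module}/utilities/scripts/setup.sh || /bin/bash ${path.module}/utilities/scripts/revert.sh"
  }
```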

Posts: 1

Participants: 1


Azure - get an ID from vnet


@pc-dok wrote:

hi everyone

I’m trying to find out how, in my main.tf file where I create a vnet peering after creating an Azure AADDS, I can obtain the remote_virtual_network_id. I don’t want to hardcode it, so how can I get this value into an output or variable or something similar?

my code now is:

resource "azurerm_virtual_network_peering" "vnetpeering1" {
  name                         = "N4K-TO-AADS"
  resource_group_name          = var.rg
  virtual_network_name         = var.vnet-n4k
  remote_virtual_network_id    = "/subscriptions/blablabla/aadds-vnet"
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
}

resource "azurerm_virtual_network_peering" "vnetpeering2" {
  name                         = "AADS-TO-N4K"
  resource_group_name          = var.rg
  virtual_network_name         = var.vnet-aadds
  remote_virtual_network_id    = "/subscriptions/blablabla/n4k-v01-we-vn-001"
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
}
I don’t create this in the same step as all the other resources, because the AADDS must be created separately, and then I must peer that vnet with my own vnet. So I need a way to get the correct ID from my subscription. Can anyone help?

regards
frank
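A data source lookup is one way to avoid hardcoding the ID (a sketch; the vnet and resource group names are placeholders, not values from the post):

```hcl
data "azurerm_virtual_network" "aadds" {
  name                = "aadds-vnet"            # placeholder name
  resource_group_name = "aadds-resource-group"  # placeholder name
}

resource "azurerm_virtual_network_peering" "vnetpeering1" {
  name                         = "N4K-TO-AADS"
  resource_group_name          = var.rg
  virtual_network_name         = var.vnet-n4k
  remote_virtual_network_id    = data.azurerm_virtual_network.aadds.id
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
}
```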

Posts: 1

Participants: 1


EKS cluster vpc nets and big nesting of variables failing


@qubusp wrote:

resource "aws_eks_cluster" "alpha-cluster" {
    name = "${var.ClusterName}"
    role_arn = "${aws_iam_role.lama-eks-master-policy.arn}"
    vpc_config{
        security_group_ids = ["${aws_security_group.alpha-eks-sg-int.id}", "${aws_security_group.lama-eks-sg-ext.id}"]
        subnet_ids = [["${aws_subnet.alpha-external.*.id}"], ["${aws_subnet.alpha-internal.*.id}"]]
    }
}

This one returns an Error: Incorrect attribute value type.
Should I add a join or something?
Any help highly appreciated.
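The error is likely because subnet_ids receives a list of lists. Flattening the two splat expressions into a single list of strings (a sketch in 0.12 syntax) would match the expected type:

```hcl
    subnet_ids = concat(
      aws_subnet.alpha-external[*].id,
      aws_subnet.alpha-internal[*].id,
    )
```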

Posts: 2

Participants: 2


When specifically is Terraform 0.11 deprecated?


@jrobison-sb wrote:

This deprecation notice says that Terraform 0.11 will be deprecated “starting later in mid-November and continuing into next year”. Is it known when specifically this will happen? The AWS provider is the one I’m most interested in.

Thanks.

Posts: 1

Participants: 1


Aws_dx_gateway_association / output property seems to be malformed?


@opsrom wrote:

Hi everyone :grinning:

I try to build IaC workflow to create global Network layer for AWS Multi Accounts customer.

I have a big problem with the aws_dx_gateway_association resource.

This is an example :

#Creates Direct Connect Gateway
resource "aws_dx_gateway" "this" {
  name            = "mydxgateway"
  amazon_side_asn = 64514
}

#Creates Direct Connect Gateway association with Transit Gateway
resource "aws_dx_gateway_association" "this" {
  dx_gateway_id         = aws_dx_gateway.this.id
  associated_gateway_id = "tgw-rtb-05bbb377acb7ecf46"

  allowed_prefixes = ["192.0.0.0/8"]
}

Now I try to retrieve this association id like this:

resource "aws_ec2_transit_gateway_route_table_association" "dx" {
   transit_gateway_route_table_id = "tgw-rtb-05bbb377acb7ecf46"
   transit_gateway_attachment_id  = aws_dx_gateway_association.this.dx_gateway_association_id
}

I got an error during “apply” execution:

Error: error associating EC2 Transit Gateway Route Table (tgw-rtb-05bbb377acb7ecf46) association (f3454ce1-4387-42e9-986a-b762f46f3c90): InvalidTransitGatewayAttachmentID.Malformed: Invalid Transit Gateway Attachment id f3454ce1-4387-42e9-986a-b762f46f3c90.
status code: 400, request id: 05560c5c-86f1-4951-b71f-e27bf4979169

If I take a look at the plan… indeed… the property seems to be malformed :

"associated_gateway_id": "tgw-0f61e6aa07906bf7c",
"associated_gateway_owner_account_id": "XXXXXXXXXX",
"associated_gateway_type": "transitGateway",
"dx_gateway_association_id": "f3454ce1-4387-42e9-986a-b762f46f3c90",
"dx_gateway_id": "8a44646d-336c-4621-b032-d9a83252ce0e",
"dx_gateway_owner_account_id": "XXXXXXXXXX",
"id": "ga-8a44646d-336c-4621-b032-d9a83252ce0etgw-0f61e6aa07906bf7c",

But if I look on AWS Console, the correct attachment id == “tgw-attach-05770a0a1186d57d3”.

Any idea ?
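The `dx_gateway_association_id` attribute is the association’s own ID, not the `tgw-attach-*` attachment ID the route table association needs. One way to look up the attachment ID (a sketch; to my understanding this data source exists in recent AWS provider versions, and the transit gateway ID below is a placeholder):

```hcl
data "aws_ec2_transit_gateway_dx_gateway_attachment" "this" {
  transit_gateway_id = "tgw-XXXXXXXXXXXX"  # placeholder transit gateway ID
  dx_gateway_id      = aws_dx_gateway.this.id
}

resource "aws_ec2_transit_gateway_route_table_association" "dx" {
  transit_gateway_route_table_id = "tgw-rtb-05bbb377acb7ecf46"
  transit_gateway_attachment_id  = data.aws_ec2_transit_gateway_dx_gateway_attachment.this.id
}
```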

Thanks a lot.
Romain.

Posts: 1

Participants: 1


Any way to disable new deprecation warnings?


@daveadams wrote:

So as of the latest Terraform binary from last week, I’m now being flooded with deprecation notices for quoted provider references, type constraints, and interpolation-only expressions. Right now for some of my projects, when I run a terraform apply, I’m receiving literally thousands of lines of warning messages about syntax deprecation, and it shows up below the list of pending changes, so I have to scroll through an enormous quantity of text before I can see what changes are pending.

Is there any flag or env var that I’m missing to quiet these warnings for now? Better yet would be a simple message that deprecated syntax exists and “type this other command to see details”. The current level of verbosity about the deprecated syntax is not user-friendly. I know about the deprecations and am actively migrating projects to the new syntax; the flood of warnings only makes Terraform a pain to use.

Posts: 2

Participants: 1


Create a resource of every resource created by for_each


@popopanda wrote:

Hello,

I have a resource of aws_kinesis_stream that is created by using a for_each expression from a variable that contains a map.

variable "stream_envs" {
  default = {
    dev = "dev-stream"
    staging = "staging-stream"
    prod = "prod-stream"
  }
}

resource "aws_kinesis_stream" "cwl_kinesis_stream" {
  for_each = var.stream_envs
  name             = each.value
  shard_count      = 1
  retention_period = 24

  shard_level_metrics = [
    "IncomingBytes",
    "OutgoingBytes",
    "IncomingRecords",
    "OutgoingRecords",
    "WriteProvisionedThroughputExceeded",
    "ReadProvisionedThroughputExceeded",
    "IteratorAgeMilliseconds"
  ]

  tags = {
    Environment = each.key
    Team        = "Devops"
    managed_by  = "Terraform"
    Role        = each.value
  }
}

This portion looks to be working as expected. However, I want to create an aws_kinesis_firehose_delivery_stream that uses my Kinesis stream as a source for every environment; for example, the dev Kinesis stream correlates to the dev Firehose stream:

resource "aws_kinesis_firehose_delivery_stream" "s3_delivery_stream" {
  for_each = var.stream_envs
  name        = format("%s-firehose_s3_destination", each.key)
  destination = "extended_s3"

  kinesis_source_configuration {
    kinesis_stream_arn = <kinesis stream>
    role_arn = "arn:aws:iam::my_role"
  }

  extended_s3_configuration {
    role_arn           = "arn:aws:iam::my_role"
    bucket_arn         = "arn:aws:s3:::my_bucket"
    buffer_size        = "5"
    buffer_interval    = "300"
    compression_format = "UNCOMPRESSED"

    processing_configuration {
      enabled = "true"

      processors {
        type = "Lambda"

        parameters {
          parameter_name  = "LambdaArn"
          parameter_value = "arn:aws:lambda::my_lambda:$LATEST"
        }
      }
    }
  }

  tags = {
    Environment = each.key
    Team        = "Devops"
    managed_by  = "Terraform"
    Role        = format("%s-firehose_s3_destination", each.key)
  }

  depends_on = [
    aws_kinesis_stream.cwl_kinesis_stream,
  ]
}

How can I achieve this?

Thanks
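Since both resources iterate over the same map, `each.key` can index the streams created by the first resource directly (a sketch replacing the `<kinesis stream>` placeholder):

```hcl
  kinesis_source_configuration {
    # Reference the stream created for the same environment key.
    # This implicit reference also makes the explicit depends_on unnecessary.
    kinesis_stream_arn = aws_kinesis_stream.cwl_kinesis_stream[each.key].arn
    role_arn           = "arn:aws:iam::my_role"
  }
```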

Posts: 2

Participants: 2



Can I use S3 modules with Terraform Cloud?


@pbostrom wrote:

I’m trying to use an S3 module with Terraform Cloud and I’m running into some issues. Is this use case supported? I don’t want to spend any time debugging if not…

Posts: 4

Participants: 2


Error: Unsupported block type


@ochuko3d wrote:

Hi team,

I get this error when I run the command

terraform.exe validate

Error: Unsupported block type

on main.tf line 62:
62: clone {

Blocks of type “clone” are not expected here.

snippet

clone {
  template_uuid = "data.vsphere_virtual_machine.template.id"

customize {
  windows_options {
    computer_name    = "var.server_name"
    admin_password   = "var.winadmin_password"
    auto_logon       = true
    auto_logon_count = 1
    #join_domain = "var.join_domain"
    #domain_admin_user = "var.domain_admin_user"
    #domain_admin_password = "domain_admin_password"

    # Run these commands after autologon. Configure WinRM access and disable windows firewall.
    run_once_command_list = [
      "winrm quickconfig -force",
      "winrm set winrm/config @{MaxEnvelopeSizekb=\"100000\"}",
      "winrm set winrm/config/Service @{AllowUnencrypted=\"true\"}",
      "winrm set winrm/config/Service/Auth @{Basic=\"true\"}",
      "netsh advfirewall set allprofiles state off",
    ]
  }
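This error usually means the `clone` block sits outside a `vsphere_virtual_machine` resource (or the provider version in use predates it). A minimal placement sketch, with most arguments elided and all names assumed rather than taken from the post:

```hcl
resource "vsphere_virtual_machine" "vm" {
  name             = var.server_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id

  # clone must be nested directly inside the vsphere_virtual_machine resource.
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
}
```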

Posts: 2

Participants: 2


How to get formatted string of account id's and names from aws_organizations_organization


@jdeluyck wrote:

Hello,

Using Terraform 0.11.

I’m trying to get a string of account IDs and names out of aws_organizations_organization.

The format I want is:
"accountid": "account name" - "accountid2":"account name2"

I thought this would be easy as

data "aws_organizations_organization" "org" {}

locals {
   accountid_name_map = "${formatlist("\"%s\" - \"%s\"", data.aws_organizations_organization.org.accounts.*.id, data.aws_organizations_organization.org.accounts.*.name)}"
}

yet this keeps returning me an error like
formatlist: formatlist requires at least one list argument

I’ve been trying all kinds of iterations, but so far it’s stumping me. How do I get this to work?

Posts: 1

Participants: 1


Create_before_destroy for azurerm_virtual_machine and associated resources


@creidmiller wrote:

We have a Windows VM deployed on Azure via Terraform. We are using 11.14 due to issue 22006. This VM is provisioned via additional resources to create and run a Docker image. This Docker image will need to be updated occasionally and we hoped that adding create_before_destroy to all of the resources might allow us to replace the VM with almost zero downtime. When we ran an apply to test this, the output messages indicated that the new VM and all associated resources were created successfully, followed by messages that the VM and all associated resources were deposed and destroyed. Finally, the output stated that an error occurred applying the plan. A more detailed message is displayed that “The Resource ‘Microsoft.Compute/virtualMachines/eastus2-app-testserver’ under resource group ‘eastus2-app-test’ was not found.” There is no eastus2-app-testserver included in the Terraform script so perhaps this is the internal name assigned to the new VM? The final result is that neither the old VM nor the replacement VM exists in Azure. Should it be possible to replace a VM via create_before_destroy with Terraform?

Posts: 2

Participants: 1


Random_password example error - what am I doing wrong?


@golightlyb wrote:

Can anyone help me with this error? I’m following the example in the docs here

provider "random" {
    version = "~> 2.2"
}

resource "random_password" "password" {
    length  = 16
    special = true
}

resource "<any>_instance" "example" {
  ...
  password = random_string.password.result
}

result:

Error: Reference to undeclared resource

A managed resource "random_string" "password" has not been declared in the
root module.
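The resource is declared as `random_password`, but the reference uses `random_string`; pointing the reference at the declared type resolves the error:

```hcl
resource "<any>_instance" "example" {
  ...
  password = random_password.password.result
}
```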

Posts: 5

Participants: 2


Linode_domain_record inappropriate value for attribute target


@golightlyb wrote:

Hi, another newbie question! I’ve followed a bunch of Linode examples, and I’m tripping up on this bit:

resource "linode_domain_record" "record_foo" {
    domain_id   = linode_domain.domain_example_org.id
    name        = linode_instance.instance_foo.label
    target      = linode_instance.instance_foo.ipv4
    record_type = "A"
}

I’m getting:

Inappropriate value for attribute "target": string required.

Any help would be really appreciated :slight_smile:
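To my understanding of the Linode provider, `ipv4` is a set of strings while `target` wants a single string, so taking one element satisfies the type (a sketch based on the post’s config):

```hcl
resource "linode_domain_record" "record_foo" {
    domain_id   = linode_domain.domain_example_org.id
    name        = linode_instance.instance_foo.label
    # ipv4 is a set of strings; pick one address as the A record target.
    target      = tolist(linode_instance.instance_foo.ipv4)[0]
    record_type = "A"
}
```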

Posts: 1

Participants: 1


Route53 SRV Records


@carrgeo wrote:

Does anyone know the correct syntax (and ideally an example!) for creating a multi-server entry SRV record via an aws_route53_record resource?

For example there is no mention of support for the ‘port’ variable that is needed for an SRV record entry.

Thanks,
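Route 53 encodes priority, weight, port, and target inside each record string rather than as separate arguments, which is why there is no `port` argument. A multi-server SRV entry would look like this (a sketch with placeholder zone and hostnames):

```hcl
resource "aws_route53_record" "sip_srv" {
  zone_id = aws_route53_zone.example.zone_id
  name    = "_sip._tcp.example.com"
  type    = "SRV"
  ttl     = 300

  # Each entry is "priority weight port target".
  records = [
    "10 5 5060 sip1.example.com",
    "20 5 5060 sip2.example.com",
  ]
}
```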

Posts: 2

Participants: 1



How to add new instance and remove old instance without destroying all instances


@lvonk wrote:

Hi,

We use:

Terraform v0.12.16
+ provider.aws v2.31.0

The use case is that we have a single instance of our webserver. We need to add a new server in order to renew a token for an external API. After the new server is added and the token is renewed successfully, we want to destroy the old server.

Our initial setup is as follow (simplified):

module "web-server" {
  source               = "../modules/web-server"
  hostnames            = ["s01.web.com"]
  private_ip_addresses            = ["172.25.25.25"]
}

The module web-server:

resource "aws_instance" "web" {
  count = length(var.hostnames)
  private_ip = var.private_ip_addresses[count.index]
  tags = {
    Name = var.hostnames[count.index]
  }
}

Each hostnames binds to a fixed ip address, which we later need to provision the servers via Ansible.

When we create this, all is well. To add another server we do:

module "web-server" {
  source               = "../modules/web-server"
  hostnames            = ["s01.web.com", "s02.web.com"]
  private_ip_addresses            = ["172.25.25.25", "172.25.25.26"]
}

And run terraform apply, and all is still well. Now we want to remove s01.web.com, but removing it from the list will cause the destruction of s02.web.com and the “re-creation” of s01.web.com as s02.web.com. This is not acceptable for us since it causes downtime.

This seems like a typical scenario: adding one server with specific values for variables and later removing an old one. What is the Terraform way of implementing this scenario?

Best regards,
Lars
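Keying the instances with for_each instead of count ties each instance to its hostname, so removing one map entry destroys only that instance (a sketch of the module’s resource; the `servers` variable name is an assumption):

```hcl
variable "servers" {
  # hostname => private IP
  type = map(string)
}

resource "aws_instance" "web" {
  for_each   = var.servers
  private_ip = each.value
  tags = {
    Name = each.key
  }
}
```

Existing count-based instances would need to be moved to the new addresses with `terraform state mv` to avoid being recreated.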

Posts: 2

Participants: 2


Terraform in Interactive vs. non-interactive mode


@timpinkerton wrote:

I’m using Terraform (v0.12) in a CI/CD pipeline. My build is not completing because it is waiting for user feedback. It seems that I need to run Terraform in non-interactive mode, but I cannot find a way to change the mode. Is it correct that I need non-interactive mode? And can someone point me to documentation on how to change this? Thanks
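The usual flags for unattended runs are `-input=false` to disable interactive prompts and a saved plan (or `-auto-approve`) to skip the apply confirmation; a typical pipeline sequence looks like:

```shell
terraform init -input=false
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan   # applying a saved plan does not prompt
```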

Posts: 2

Participants: 2


Internal module resource access


@ronjarrell wrote:

So, I’m refactoring some legacy code, in which there was a project with multiple tf files, and turning it into a module called by a driver project.

So say there’s a driver configuration in the “driver” directory:

driver/main.tf

module "old_module" {
  source = "../old-module-dir"

  param1 = var.old-value1
}

old-value1 represents a value that used to be a local or a var in the old module, that’s now being populated by the driver before calling it.

In old-module-dir there are two files in question:

aws-hosts.tf:

locals {
  sgs = {
    "tag" = aws_security_group.one.id
  }
}

(stuff happens)

in aws-net.tf:

resource "aws_security_group" "one" {
  # stuff that does create the sg if I apply
}

If I apply it, I get an error from aws-hosts that

"tag" = aws_security_group.one

A managed resource "aws_security_group" "consul-server-ap-southwest-1" has not been declared in old_module.

If I comment out the bit in aws-hosts.tf, the plan shows that module.old_module.aws_security_group.one will be created.

If I reference it that way, I get a `no module call named "old_module" is declared in old_module` error.

So how the heck do I write a reference to the resource that the module I’m in is creating?

Posts: 1

Participants: 1


Unable to create Cross Account VPC Peering


@cloe-tang wrote:

I am trying to create a cross-account VPC peering as a requester through a Terraform script, but it keeps throwing InvalidVpcID, referring to my accepter VPC. I have provided the accepter account id in peer_owner_id, but it seems to be looking up the VPC ID in the requester account. Following is my configuration.

data "aws_vpc" "acceptor_vpc" {
  id = var.acceptor_vpc_id
}

data "aws_vpc" "requestor_vpc" {
  id = var.requestor_vpc_id
}

resource "aws_vpc_peering_connection" "vpc_peering" {
  count = var.enable ? 1 : 0

  peer_owner_id = var.peer_owner_id == null ? null : var.peer_owner_id

  peer_vpc_id = var.acceptor_vpc_id
  vpc_id      = var.requestor_vpc_id
  auto_accept = true

  tags = {
    Name = "vpc-peer-${data.aws_vpc.requestor_vpc.tags["Name"]}-${data.aws_vpc.acceptor_vpc.tags["Name"]}"
    Env  = var.env
  }
}

===============

Error

Error: InvalidVpcID.NotFound: The vpc ID 'vpc-XXXXX' does not exist
status code: 400, request id: 80362a8a-f24c-49dd-8054-38c68XXXXX

I’ve tried hardcoding the peer_owner_id but it didn’t work.

Not too sure what went wrong. Would appreciate if anyone could help.
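Cross-account peering cannot be auto-accepted from the requester side; the accepter account needs its own accepter resource. An illustrative sketch (the `aws.accepter` provider alias is an assumption):

```hcl
# Requester side: auto_accept must be false across accounts.
resource "aws_vpc_peering_connection" "vpc_peering" {
  peer_owner_id = var.peer_owner_id
  peer_vpc_id   = var.acceptor_vpc_id
  vpc_id        = var.requestor_vpc_id
  auto_accept   = false
}

# Accepter side, using a provider aliased to the accepter account.
resource "aws_vpc_peering_connection_accepter" "peer" {
  provider                  = aws.accepter  # assumed alias
  vpc_peering_connection_id = aws_vpc_peering_connection.vpc_peering.id
  auto_accept               = true
}
```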

Posts: 1

Participants: 1


Inconsistency when creating Aurora RDS global cluster when snapshot_identifier is defined


@jamengual wrote:

Hi.

I have been working with terraform to create RDS global cluster without many issues until now.

I’m using the same code I use to create my prod global cluster to create another cluster based on the original prod cluster’s snapshot. When snapshot_identifier is provided, the cluster gets created as a regional cluster and is not attached to the newly created global cluster, BUT if I use exactly the same code without specifying snapshot_identifier, the global cluster is created and the new regional cluster is attached immediately to the global cluster.

Exactly the same behavior happens when using the console, but in the console I can successfully create the global cluster from the snapshot.

Keep in mind that I replaced some text to hide personal information.

the sample code :

# Global mydata RDS cluster

resource "aws_rds_global_cluster" "mydata_clone" {
  count                     = var.create_clone ? 1 : 0
  engine_version            = "5.6.10a"
  global_cluster_identifier = "clone-test-mydata-global"
  storage_encrypted         = true
  deletion_protection       = false
  provider                  = aws.primary
}


module "test_mydata_us_east_2_clone_cluster" {
  source         = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=0.17.0"
  enabled        = var.create_clone
  engine         = "aurora"
  engine_version = "5.6.10a"
  cluster_family = "aurora5.6"
  cluster_size   = 1
  namespace      = var.namespace
  stage          = var.environment
  name           = "us-east-2-${var.mydata_name}-clone"
#   admin_user     = var.mydata_db_user
#   admin_password = random_string.db_password.result
  db_name        = var.mydata_db_name
  instance_type  = "db.r5.2xlarge"
  vpc_id         = local.vpc_id
  security_groups = [
    local.sg-web-server-us-east-2,
    local.sg-app-scan-us-east-2,
    local.sg-app-scan-us-east-2,
    local.sg-app-scan-us-east-2
  ]
  allowed_cidr_blocks                 = var.mydata_allowed_cidr_blocks
  subnets                             = local.private_subnet_ids
  engine_mode                         = "global"
  global_cluster_identifier           = join("", aws_rds_global_cluster.mydata_clone.*.id)
  iam_database_authentication_enabled = true
  storage_encrypted                   = true
  deletion_protection                 = false
  iam_roles                           = ["${aws_iam_role.AuroraAccessToDataBuckets.arn}"]
  ##enabled_cloudwatch_logs_exports     = ["audit", "error", "general", "slowquery"]
  tags                                = local.complete_tags
  snapshot_identifier                 = var.snapshot_identifier
  skip_final_snapshot                 = true

  # DNS setting
  cluster_dns_name = "test-${var.environment}-mydata-writer-clone-us-east-2"
  reader_dns_name  = "test-${var.environment}-mydata-reader-clone-us-east-2"
  zone_id          = data.aws_route53_zone.ds_example_com.zone_id

  # enable monitoring every 30 seconds
  ##rds_monitoring_interval = 15

  # reference iam role created above
  ##rds_monitoring_role_arn      = aws_iam_role.mydata_enhanced_monitoring.arn
  ##performance_insights_enabled = true

  cluster_parameters = [
    {
      name         = "binlog_format"
      value        = "row"
      apply_method = "pending-reboot"
    },
    {
      apply_method = "immediate"
      name         = "max_allowed_packet"
      value        = "16777216"
    },
    {
      apply_method = "pending-reboot"
      name         = "performance_schema"
      value        = "1"
    },
    {
      apply_method = "immediate"
      name         = "server_audit_logging"
      value        = "0"
    }
  ]
  providers = {
    aws = aws.primary
  }
}

Plan output :

    + resource "aws_rds_global_cluster" "mydata_clone" {
        + arn                        = (known after apply)
        + deletion_protection        = false
        + engine                     = "aurora"
        + engine_version             = "5.6.10a"
        + global_cluster_identifier  = "clone-test-mydata-global"
        + global_cluster_resource_id = (known after apply)
        + id                         = (known after apply)
        + storage_encrypted          = true
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_db_parameter_group.default[0] will be created
    + resource "aws_db_parameter_group" "default" {
        + arn         = (known after apply)
        + description = "DB instance parameter group"
        + family      = "aurora5.6"
        + id          = (known after apply)
        + name        = "test-staging-us-east-2-mydata-clone"
        + name_prefix = (known after apply)
        + tags        = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_db_subnet_group.default[0] will be created
    + resource "aws_db_subnet_group" "default" {
        + arn         = (known after apply)
        + description = "Allowed subnets for DB cluster instances"
        + id          = (known after apply)
        + name        = "test-staging-us-east-2-mydata-clone"
        + name_prefix = (known after apply)
        + subnet_ids  = [
            + "subnet-1111111111111",
            + "subnet-1111111111111",
            + "subnet-1111111111111",
          ]
        + tags        = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_rds_cluster.default[0] will be created
    + resource "aws_rds_cluster" "default" {
        + apply_immediately                   = true
        + arn                                 = (known after apply)
        + availability_zones                  = (known after apply)
        + backup_retention_period             = 5
        + cluster_identifier                  = "test-staging-us-east-2-mydata-clone"
        + cluster_identifier_prefix           = (known after apply)
        + cluster_members                     = (known after apply)
        + cluster_resource_id                 = (known after apply)
        + copy_tags_to_snapshot               = false
        + database_name                       = "testdb"
        + db_cluster_parameter_group_name     = "test-staging-us-east-2-mydata-clone"
        + db_subnet_group_name                = "test-staging-us-east-2-mydata-clone"
        + deletion_protection                 = false
        + enabled_cloudwatch_logs_exports     = []
        + endpoint                            = (known after apply)
        + engine                              = "aurora"
        + engine_mode                         = "global"
        + engine_version                      = "5.6.10a"
        + final_snapshot_identifier           = "test-staging-us-east-2-mydata-clone"
        + global_cluster_identifier           = (known after apply)
        + hosted_zone_id                      = (known after apply)
        + iam_database_authentication_enabled = true
        + iam_roles                           = [
            + "arn:aws:iam::1111111111:role/AuroraAccessToDataBuckets",
          ]
        + id                                  = (known after apply)
        + kms_key_id                          = (known after apply)
        + master_username                     = "admin"
        + port                                = (known after apply)
        + preferred_backup_window             = "07:00-09:00"
        + preferred_maintenance_window        = "wed:03:00-wed:04:00"
        + reader_endpoint                     = (known after apply)
        + skip_final_snapshot                 = true
        + snapshot_identifier                 = "snapshot-prep-for-data-load"
        + storage_encrypted                   = true
        + tags                                = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
        + vpc_security_group_ids              = (known after apply)
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_rds_cluster_instance.default[0] will be created
    + resource "aws_rds_cluster_instance" "default" {
        + apply_immediately               = (known after apply)
        + arn                             = (known after apply)
        + auto_minor_version_upgrade      = true
        + availability_zone               = (known after apply)
        + cluster_identifier              = (known after apply)
        + copy_tags_to_snapshot           = false
        + db_parameter_group_name         = "test-staging-us-east-2-mydata-clone"
        + db_subnet_group_name            = "test-staging-us-east-2-mydata-clone"
        + dbi_resource_id                 = (known after apply)
        + endpoint                        = (known after apply)
        + engine                          = "aurora"
        + engine_version                  = "5.6.10a"
        + id                              = (known after apply)
        + identifier                      = "test-staging-us-east-2-mydata-clone-1"
        + identifier_prefix               = (known after apply)
        + instance_class                  = "db.r5.2xlarge"
        + kms_key_id                      = (known after apply)
        + monitoring_interval             = 0
        + monitoring_role_arn             = (known after apply)
        + performance_insights_enabled    = false
        + performance_insights_kms_key_id = (known after apply)
        + port                            = (known after apply)
        + preferred_backup_window         = (known after apply)
        + preferred_maintenance_window    = (known after apply)
        + promotion_tier                  = 0
        + publicly_accessible             = false
        + storage_encrypted               = (known after apply)
        + tags                            = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
        + writer                          = (known after apply)
      }

Version:
terraform_0.12.16
provider "local" (hashicorp/local) 1.4.0
provider "aws" (hashicorp/aws) 2.38.0…
provider "null" (hashicorp/null) 2.1.2…
provider "template" (hashicorp/template) 2.1.2
provider "mysql" (terraform-providers/mysql) 1.9.0
provider "random" (hashicorp/random) 2.2.1

Posts: 1

Participants: 1



