Channel: Terraform - HashiCorp Discuss

Terraform unable to query AMI ID


@vmorkunas wrote:

Hello,

I have recently been getting this error after creating a new AMI from an instance and referencing it in Terraform:

Expected 1 AMI for ID: ami-022a462ecc0b2d590, got none

API call looks like this:
[DEBUG] plugin.terraform-provider-aws_v2.53.0_x4: Action=DescribeImages&ImageId.1=ami-022a462ecc0b2d590&Version=2016-11-15

The same query via the AWS CLI returns the image correctly, but Terraform fails.
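One thing worth ruling out (an assumption on my part; the thread doesn't confirm it) is a region or account mismatch between the Terraform provider and the CLI: DescribeImages only finds AMIs in the provider's configured region that are visible to its credentials. A data source lookup scoped to your own account makes this explicit:

```hcl
# Sketch: resolve the AMI via a data source restricted to the current account.
# If this also fails, the provider is looking in the wrong region or account.
data "aws_ami" "custom" {
  owners = ["self"]

  filter {
    name   = "image-id"
    values = ["ami-022a462ecc0b2d590"]
  }
}
```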

Posts: 2

Participants: 1



Use aws_eks_node_group with public and private subnets


@dj-wasabi wrote:

Hi,

I’m trying to create a node group on AWS whose nodes use one public and one private subnet in each AZ, but I can’t get it working. I have the following code:

Subnets:

resource "aws_subnet" "public" {
    count = length(var.aws_subnet_public_cidr)

    availability_zone = data.aws_availability_zones.available.names[count.index]
    cidr_block        = var.aws_subnet_public_cidr[count.index]
    vpc_id            = aws_vpc.main.id

    tags = {
        Name                                        = "public-${var.cluster_name}-${data.aws_availability_zones.available.names[count.index]}"
        "kubernetes.io/cluster/${var.cluster_name}" = "shared"
        "kubernetes.io/role/elb"                    = 1
    }
}

resource "aws_subnet" "private" {
    count             = length(var.aws_subnet_private_cidr)
    availability_zone = data.aws_availability_zones.available.names[count.index]
    cidr_block        = var.aws_subnet_private_cidr[count.index]
    vpc_id            = aws_vpc.main.id

    tags = {
        Name                                        = "private-${var.cluster_name}-${data.aws_availability_zones.available.names[count.index]}"
        "kubernetes.io/cluster/${var.cluster_name}" = "shared"
        "kubernetes.io/role/internal-elb"           = 1
    }
}

Node group:

resource "aws_eks_node_group" "worker-node" {
  # count           = length(var.aws_subnet_private)
  count = 1
  cluster_name    = aws_eks_cluster.eks-cluster.name
  node_group_name = "${var.cluster_name}-${data.aws_availability_zones.available.names[count.index]}"
  node_role_arn   = aws_iam_role.worker-node.arn
  subnet_ids      = concat(aws_subnet.private.*.id,aws_subnet.public.*.id)
  instance_types  = ["t3.small"]

  scaling_config {
    desired_size = var.aws_node_scaling_desired_size
    max_size     = var.aws_node_scaling_max_size
    min_size     = var.aws_node_scaling_min_size
  }

  depends_on = [
    aws_iam_role_policy_attachment.tf_AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.tf_AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.tf_AmazonEC2ContainerRegistryReadOnly,
  ]
}

I’m probably doing something wrong, but I cannot find the issue. In some cases the nodes get two IPs from the public subnets; other times both IPs come from the private subnets. The goal is one IP from the public subnet and one from the private subnet.

Thanks in advance.
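For what it's worth, an EKS managed node group simply spreads its instances across all subnets listed in subnet_ids; it does not pair one public and one private subnet per node. A common pattern (a sketch, not verified against the original setup) is to place nodes only in the private subnets and keep the public subnets for load balancers:

```hcl
resource "aws_eks_node_group" "worker-node" {
  # ... other arguments as in the original ...

  # Nodes go in the private subnets only; the public subnets
  # (tagged kubernetes.io/role/elb) remain available for ELBs.
  subnet_ids = aws_subnet.private.*.id
}
```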

Posts: 1

Participants: 1


Error in Function call


@TomHowarth wrote:

I am using a Terraform remote backend to store my state, but since migrating, a section of my code has stopped working. The stanza is shown below:

data "template_file" "user_data" {
   template = "${file("${path.module}\\user-data.sh")}"
}

When I run terraform plan, I receive the following error:

Error: Error in function call
 
on .terraform/modules/Webserver-cluster/webserver-cluster/main.tf line 26, in data "template_file" "user_data":
26:   template = "${file("${path.module}\\user-data.sh")}"
     |----------------
     | path.module is ".terraform/modules/Webserver-cluster/webserver-cluster"
 
Call to function "file" failed: no file exists at
.terraform/modules/Webserver-cluster/webserver-cluster\user-data.sh.

The file is in the location shown.
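A likely cause (a hedged guess based on the error text) is the backslash in the path: `\\` is an escaped backslash in a Terraform string, so the lookup appends a literal `\` that only works on Windows, while the module is being resolved with forward slashes. Forward slashes work on every platform:

```hcl
data "template_file" "user_data" {
  # Use a forward slash; path.module already uses "/" separators,
  # and file() accepts them on Windows as well.
  template = file("${path.module}/user-data.sh")
}
```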

Posts: 1

Participants: 1


Dial tcp: lookup xxxxx.eu-west-1.eks.amazonaws.com on 192.168.x.x:53: no such host


@vrathore18 wrote:

I am getting this Tiller error. Last time I fixed it with some workaround (which I have since forgotten). Is there a permanent fix for this?

Cloud: AWS

provider "helm" {
  service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
  namespace       = "${kubernetes_service_account.tiller.metadata.0.namespace}"
  version = "~> 0.10.4"

  kubernetes {
    config_path = ".kube_config.yaml"
  }
}
Error: Error refreshing state: 2 error(s) occurred:
* kubernetes_cluster_role_binding.tiller: 1 error(s) occurred:
* kubernetes_cluster_role_binding.tiller: kubernetes_cluster_role_binding.tiller: Get https://xxxxxxx/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller: dial tcp: lookup xxxxxxxx on x.x.x.x: no such host
* kubernetes_service_account.tiller: 1 error(s) occurred:
* kubernetes_service_account.tiller: kubernetes_service_account.tiller: Get xxxxxxx.eks.amazonaws.com/api/v1/namespaces/kube-system/serviceaccounts/tiller: dial tcp: lookup xxxxx.eu-west-1.eks.amazonaws.com on xxxxxx: no such host
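The "no such host" error suggests the EKS endpoint recorded in the saved kubeconfig (or state) no longer resolves, typically because the cluster was recreated or deleted. A common remedy (a sketch assuming the standard AWS CLI workflow; the cluster name is a placeholder) is to regenerate the kubeconfig before refreshing:

```shell
# Rewrite the kubeconfig with the cluster's current endpoint
aws eks update-kubeconfig \
  --name <cluster-name> \
  --region eu-west-1 \
  --kubeconfig .kube_config.yaml
```

If the cluster itself is gone, the stale kubernetes_* resources may instead need to be removed from state with `terraform state rm`.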

Posts: 1

Participants: 1


Get List of Variable on Terraform


@rhgenius wrote:

I am trying to pass a list variable into a Terraform module; below is my directory structure:

.
├── main.tf
├── path_modules
│   └── module_name
│       ├── main.tf
│       └── variables.tf
└── variables.tf

I put the variables in the root main.tf file as below:

module "module_name"
...
  ssh_users                   = ["user1", "user2", "user3", "user4", "user5", "user6", "user7", "user8", "user9"]
  ssh_keys                    = ["user1.pem.pub", "user2.pem.pub", "user3.pem.pub", "user4.pem.pub", "user5.pem.pub", "user6.pem.pub", "user7.pem.pub", "user8.pem.pub", "user9.pem.pub"]

Then I reference those variables in the ./path_modules/module_name/main.tf file as below:

resource "google_compute_instance" "module_name" {
...
  metadata = {
    count = length(var.ssh_keys)
    ssh-keys = format("%s:%s", "${var.ssh_users[count.index]}", file("${path.module}/${var.ssh_keys[count.index]}"))
  }

After running terraform validate, I get this error:

Error: Reference to "count" in non-counted context

  on path_module/module_name/main.tf line number, in resource "google_compute_instance" "module_name":
  line number:     ssh-keys = format("%s:%s", "${var.ssh_users[count.index]}", file("${path.module}/${var.ssh_keys[count.index]}"))

The "count" object can be used only in "resource" and "data" blocks, and only
when the "count" argument is set.


Error: Reference to "count" in non-counted context

  on path_module/module_name/main.tf line number, in resource "google_compute_instance" "module_name":
  line number:     ssh-keys = format("%s:%s", "${var.ssh_users[count.index]}", file("${path.module}/${var.ssh_keys[count.index]}"))

The "count" object can be used only in "resource" and "data" blocks, and only
when the "count" argument is set.

If anybody has experience with this issue in Terraform, I would appreciate the help.
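count is only valid as a meta-argument on resource/data/module blocks, not inside a metadata map, which is what the error is saying. Since GCE expects all keys in a single ssh-keys metadata value anyway, a for expression joined with newlines achieves the same result without count (a sketch assuming var.ssh_users and var.ssh_keys stay index-aligned):

```hcl
resource "google_compute_instance" "module_name" {
  # ... other arguments ...

  metadata = {
    # Build one "user:key" line per user and join them with newlines,
    # which is the format GCE expects for the ssh-keys metadata entry.
    ssh-keys = join("\n", [
      for i, user in var.ssh_users :
      format("%s:%s", user, file("${path.module}/${var.ssh_keys[i]}"))
    ])
  }
}
```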

Posts: 1

Participants: 1


Is terraform module available to create support role/member for GCP organization


@comptan wrote:

Hello,
Is a Terraform module available to create a support role/member for a GCP organization? I need to add a member with a role that enables him/her to create a support case with GCP as and when required.
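A dedicated module may not be needed: the plain google_organization_iam_member resource can grant an org-level role. A minimal sketch (the role name roles/cloudsupport.techSupportEditor and the member address are assumptions; check which support roles your plan offers):

```hcl
# Grant a single member the ability to open support cases at the org level.
resource "google_organization_iam_member" "support" {
  org_id = var.org_id
  role   = "roles/cloudsupport.techSupportEditor" # assumption: Tech Support Editor
  member = "user:jane@example.com"                # hypothetical member
}
```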

Posts: 1

Participants: 1


Terraform & Gitlab-CI


@bradroe wrote:

Hi,

We are trying to integrate Terraform with GitLab CI but are running into an issue when pipelines run. The pipeline does a validate, a plan, and then an apply once merged to master; however, our plan always fails with:

Error: No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

  on main.tf line 1, in provider "aws":
   1: provider "aws" {

ERROR: Job failed: exit status

The runner and GitLab are self-hosted in AWS, and an IAM instance profile attached to the EC2 instance allows resources to be built, so there is no need for access keys and no .aws/credentials file. When a plan is run from the CLI as the gitlab-runner user it works without a problem; it only fails when run as part of the pipeline. Likewise, aws commands work from the CLI but not in the pipeline.

I have spun up a runner in my own AWS account using the same code, IAM instance role permissions, etc., registered it against my own GitLab account, and the pipelines run fine. Again, no credentials are stored on the server and it uses the exact same gitlab-ci.yml file; there is no role_arn in the provider block in the TF code. It just has the below:

provider "aws" {
  region = "eu-central-1"
}

I have enabled debugging and can see the below:

Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id

I can run a terraform plan from the CLI and it assumes the role fine.

AWS EC2 instance detected via default metadata API endpoint, EC2RoleProvider added to the auth chain

plugin.terraform-provider-aws_v2.53.0_x4: 2020/03/17 08:01:15 [INFO] AWS Auth provider used: "EC2RoleProvider"

plugin.terraform-provider-aws_v2.53.0_x4: 2020/03/17 08:01:15 [INFO] Attempting to AssumeRole arn:aws:iam::00000000000:role/gitlab_runner_role (SessionName: "SESSION_NAME", ExternalId: "EXTERNAL_ID", Policy: "")

Any help would be appreciated; I’m at a loss.
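One frequently reported cause for exactly this symptom (an assumption here; the thread doesn't confirm the runner executor) is that the job runs inside a Docker container, and the instance metadata service's default hop limit of 1 blocks requests that cross the container's network bridge — matching the "Ignoring AWS metadata API endpoint" log line. Raising the hop limit on the runner instance often fixes it:

```shell
# Allow one extra network hop so containers on this instance
# can reach the instance metadata service (and thus the IAM role).
aws ec2 modify-instance-metadata-options \
  --instance-id <runner-instance-id> \
  --http-put-response-hop-limit 2 \
  --http-endpoint enabled
```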

Posts: 1

Participants: 1


Error tagging resources


@vmorkunas wrote:

Hello,

sometimes I get errors while tagging resources in Terraform. The resource block:

resource "aws_vpc_peering_connection" "src_peering" {
provider = aws.src
peer_owner_id = var.peering.different_account ? var.peering.account_id : null
vpc_id = var.peering.src_vpc_id
peer_vpc_id = var.peering.dst_vpc_id
peer_region   = var.stackCommon.stack_region
auto_accept   = false

tags = merge(
    map(
        "Name", "${var.stackCommon.stack_name}-${var.peering.peering_connection_name}"
    ),
    var.stackCommon.common_tags
)

lifecycle {
    create_before_destroy = true
}

Error:

    error updating EC2 VPC Peering Connection (pcx-002e3d030ff430f7c) tags: error tagging resource (pcx-002e3d030ff430f7c): InvalidVpcPeeringConnectionID.NotFound: The vpcPeeringConnection ID 'pcx-002e3d030ff430f7c' does not exist
	status code: 400, request id: 0ade2c59-4cea-4ec9-bcef-ba1a480a41ec

Tagging fails not only for peering but often for security groups and EIPs, with the same kind of error. Dependencies are OK. Am I doing something wrong? 95% of the time it works properly.

Posts: 1

Participants: 1



GCP Cloud Run support


@LeoMouyna wrote:

Hi there !

I’m a new Terraform user and I just want to deploy a Docker image to Google Cloud Run.
I took a look at the documentation, and my final main.tf looks like this:

resource "google_service_account" "sac_pyframedecrypt" {
  account_id   = "sac-dev-01-${var.service_name}"
  display_name = "Service account for decrypt frames"
}

resource "google_cloud_run_service" "pyframedecrypt" {
  name     = var.service_name
  location = var.region

  template {
    spec {
      containers {
        image = "eu.gcr.io/${var.project}/${var.service_name}:latest"
      }
      service_account_name = google_service_account.sac_pyframedecrypt.email
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}

I use GitLab CI to deploy it, and during terraform apply I got this error:

The requested URL /apis/serving.knative.dev/v1/namespaces/pyframedecrypt-dev/services was not found on this server. That’s all we know.

The documentation doesn’t mention namespaces, so I don’t understand why the namespace isn’t created automatically (if that is indeed the right error).

Indeed, I don’t really know whether it’s the right error, because during terraform apply I found this log entry:

[WARN] Provider "registry.terraform.io/-/google" produced an invalid plan for google_cloud_run_service.pyframedecrypt, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .template[0].metadata: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
      - .template[0].spec[0].containers[0].resources: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
      - .metadata: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead

Here is the terraform plan result:

# google_cloud_run_service.pyframedecrypt will be created
  + resource "google_cloud_run_service" "pyframedecrypt" {
      + id       = (known after apply)
      + location = "europe-west"
      + name     = "pyframedecrypt"
      + project  = (known after apply)
      + status   = (known after apply)
      + metadata {
          + annotations      = (known after apply)
          + generation       = (known after apply)
          + labels           = (known after apply)
          + namespace        = (known after apply)
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }
      + template {
          + metadata {
              + annotations      = (known after apply)
              + generation       = (known after apply)
              + labels           = (known after apply)
              + name             = (known after apply)
              + namespace        = (known after apply)
              + resource_version = (known after apply)
              + self_link        = (known after apply)
              + uid              = (known after apply)
            }
          + spec {
              + container_concurrency = (known after apply)
              + service_account_name  = "sac-dev-01-pyframedecrypt@pyframedecrypt-dev.iam.gserviceaccount.com"
              + serving_state         = (known after apply)
              + containers {
                  + image = "eu.gcr.io/pyframedecrypt-dev/pyframedecrypt:latest"
                  + resources {
                      + limits   = (known after apply)
                      + requests = (known after apply)
                    }
                }
            }
        }
      + traffic {
          + latest_revision = true
          + percent         = 100
        }
    }

Does anyone have an explanation for me ? :cat:
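One detail that stands out in the plan (a guess, not confirmed in the thread): location = "europe-west" is not a valid Cloud Run region; Cloud Run regions carry a numeric suffix, such as europe-west1. An invalid location yields a regional endpoint that doesn't exist, which would explain the 404 from the Knative serving API rather than any namespace problem:

```hcl
resource "google_cloud_run_service" "pyframedecrypt" {
  name     = var.service_name
  location = "europe-west1" # note the numeric suffix; "europe-west" is not a Cloud Run region
  # ... rest of the configuration unchanged ...
}
```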

Posts: 1

Participants: 1


Can I use conditional logic (% if ) in data.aws_iam_policy_document resources


@makingsa wrote:

Hi

I have a load of ECS services to build. They have similar configs, but some of them need different, optional AWS Secrets added to the task definition templates and, more importantly, to the IAM task execution policy.

I was trying to keep it DRY and use:
“%{if example_credential_required }aws_secretsmanager_secret.example_credentials[each.value].arn%{endif}”

Whatever I do syntax-wise, it won’t look up the ARN; it only adds it as a literal string, which obviously doesn’t work.

I’ve checked the documentation, none of the examples I have found seem to help or let me know if this is supposed to work.

I could split it all up into different data resources, but as 85% is the same that seems a bit wasteful. I could also make it a template, but the docs suggest that for 0.12 this is the way I’m supposed to do it.

Please help.

data "aws_iam_policy_document" "example_execution_iam_policy_document" {
  for_each = toset(var.namespace)
  statement {
    # Allows access to required secrets and SSM parameters defined in the task definition only
    effect = "Allow"
    actions = [
      "ssm:GetParameters",
      "secretsmanager:GetSecretValue",
      "kms:Decrypt"
    ]
    resources = [
      aws_secretsmanager_secret.snoop_xxxxxx_credentials[each.value].arn, # This works hardcoded; don't want to do that though.
      "%{if var.insight_analytics_db_con_required}aws_secretsmanager_secret.snoop_xxxxxx_credentials[each.value].arn%{endif}", # I want this to work
      data.terraform_remote_state.terraform_layer_xyz_db.outputs.rds_aurora_connection_parameter_arn
    ]
  }
}

The above produces this in the terraform plan:

~ Resource = [
  + "aws_secretsmanager_secret.snoop_xxxxxx_credentials[each.value].arn",
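String directives like %{if} always produce a string, so an expression placed inside the quotes is never evaluated as a reference — which is exactly why the plan shows it as literal text. The usual 0.12 pattern is to build the resources list conditionally instead (a sketch using the names from the post):

```hcl
resources = concat(
  [aws_secretsmanager_secret.snoop_xxxxxx_credentials[each.value].arn],
  # Include the optional ARN only when the flag is set; otherwise add nothing.
  var.insight_analytics_db_con_required ? [aws_secretsmanager_secret.snoop_xxxxxx_credentials[each.value].arn] : [],
  [data.terraform_remote_state.terraform_layer_xyz_db.outputs.rds_aurora_connection_parameter_arn],
)
```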

Posts: 4

Participants: 2


Terraform remote state tls: oversized record received


@watsaro wrote:

I’ve currently got myself into a problem and can’t figure out how to get out of it. For any command I run, I get this error:

Failed to load state: RequestError: send request failed
caused by: Get https://s3.**** tls: oversized record received with length 65535

Any suggestions on how to fix this ?

Posts: 1

Participants: 1


Terraform plan - wait for lock to be released


@kurbar wrote:

I’ve integrated Terraform into Bamboo. I have a modular application where all pieces run under the same state.

In cases where multiple modules have changes and automatic deployment is triggered, terraform plan fails one of them since the state is locked.

The question is: can I wait for the state lock to be released instead of failing the deployment altogether?

I’m using Terraform Cloud remote backend for state storage.
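Terraform has a flag for exactly this: -lock-timeout makes a command retry acquiring the state lock for the given duration instead of failing immediately (a sketch; whether it helps depends on how long the other run holds the lock, and with the Terraform Cloud remote backend concurrent runs are also queued per workspace):

```shell
# Wait up to 10 minutes for the state lock instead of failing immediately
terraform plan -lock-timeout=10m
```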

Posts: 1

Participants: 1


Undeclared input variable in root module


@aricwilisch wrote:

I’m relatively new to Terraform, so this confuses me.
I have a rather simple module to create an AWS instance, cent7_base, and a tf file in the root, server.tf, to specify the things specific to the instance.

If I do a targeted plan it says everything is fine, ready to go. However, in our build environment we tend to deploy with just terraform plan to make sure we didn’t disrupt the rest of the infrastructure. With terraform plan I get:

Error: Reference to undeclared input variable

  on server.tf line 13, in module "server":
  13: var.vpc_security_group_ids,

An input variable with the name "vpc_security_group_ids" has not been
declared. This variable can be declared with a variable
"vpc_security_group_ids" {} block.

This variable exists in modules\cent7_base\variables.tf

So I’m uncertain what I’m missing. If I declare a variable in a module, do I have to do something else so it won’t raise issues when running a plan on the entire infrastructure?

Hoping this is just something I’m forgetting to do.
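Declaring a variable inside modules/cent7_base only makes it available within that module. To reference var.vpc_security_group_ids from the root's server.tf, the root module needs its own declaration, whose value is then passed down to the module (a sketch; adjust the type and default as appropriate):

```hcl
# In the root module's variables.tf (separate from the module's own declaration)
variable "vpc_security_group_ids" {
  type = list(string)
}
```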

Posts: 2

Participants: 2


Dynamic providers inside modules


@dperitch wrote:

Hey there.

We have an issue with dynamic providers inside modules, and I am searching for a better implementation than ours. The solution works, but a problem occurs when we want to delete a module.


What are we doing?

With the Terraform solution described below, we can define a new Serverless project, for which we then set up resources in multiple AWS accounts.

What does it look like?

Because we need to set up resources in multiple AWS accounts for each project, our project structure looks like this:

  • root
    • project_aws module
      • stage module

Before you continue, please keep in mind that the code below has been greatly simplified.

In root, we define a list of projects in a projects.tf file, and one of the blocks/modules looks like this:

module "example-project" {
  name               = "example-project"
  source             = "./project_aws"

  stages = {
    "dev"  = "account-dev",
    "prod" = "account-prod",
  }
}

Then, in project_aws we split this into two new modules, one for dev and one for prod, which basically looks like the below. Here you can see that account_role_arn_dev and account_role_arn_prod are dynamically assigned and passed to the stage module.

locals {
  account_role_arns = {
    "account-dev"  = "role_arn"
    "account-prod" = "role_arn"
    "account-temp" = "role_arn"
  }

  account_role_arn_dev  = contains(keys(var.stages), "dev") ? local.account_role_arns[var.stages["dev"]] : ""
  account_role_arn_prod = contains(keys(var.stages), "prod") ? local.account_role_arns[var.stages["prod"]] : ""
}

module "prod" {
  source              = "./stage"
  name                = var.name
  stage               = "prod"
  enabled             = contains(keys(var.stages), "prod") ? true : false
  account_role_arn    = local.account_role_arn_prod
}

module "dev" {
  source              = "./stage"
  name                = var.name
  stage               = "dev"
  enabled             = contains(keys(var.stages), "dev") ? true : false
  account_role_arn    = local.account_role_arn_dev
}

In the stage module, we dynamically initialize the provider (based on account_role_arn) and create all the needed resources with it:

provider "aws" {
  region = "eu-west-1"

  assume_role {
    role_arn = var.account_role_arn
  }
}

Once again, where is the problem?

As mentioned at the beginning, this works, but the problem appears when we want to remove a module/project from the root. In that case we receive Error: Provider configuration not present, which happens because we use dynamic providers and the module cannot detect which provider to use to destroy its resources.

We want to put the providers in the root, but we also need them to be dynamic; we would like to pass aliased providers from the root into a module, but we have not found a way to make that work here.

Anyway - is there any other implementation that you suggest?

We wanted to make it more dynamic and avoid repeating code, but we were not aware it would bring this kind of complication :slight_smile:

Thanks in advance.
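For destroy to work, the provider configuration has to outlive the module that uses it. The usual approach is to define aliased providers in the root and pass them into each module call explicitly via the providers argument, so removing a module block still leaves its provider configuration available (a sketch with hypothetical aliases; the per-stage choice remains static in the module call):

```hcl
# Root: one aliased provider per target account
provider "aws" {
  alias  = "dev"
  region = "eu-west-1"

  assume_role {
    role_arn = local.account_role_arn_dev
  }
}

module "dev" {
  source = "./stage"

  # The stage module uses this provider for all its resources;
  # it stays defined in the root even if the module block is removed.
  providers = {
    aws = aws.dev
  }
}
```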

Posts: 1

Participants: 1


Create Customer_gateway in specific aws region using terraform


@troy-mac wrote:

I am in the process of terraforming our site-to-site VPN connections in multiple regions, and I cannot find any documentation on how to specify a region while creating the CGW. The below will create it in my default us-east-1, but I want to create it in us-west-1 and a host of others; one thing at a time, though.

resource "aws_customer_gateway" "cgw-singapore" {
    bgp_asn     = 65000
    ip_address  = "111.111.111.111"
    type        = "ipsec.1"

    tags = {
        Name =  "cgw-singapore"
    }
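The region comes from the provider configuration, not from the resource itself; the usual pattern is one aliased provider per region, selected on the resource with the provider argument (a sketch):

```hcl
# One aliased provider per additional region
provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"
}

resource "aws_customer_gateway" "cgw-singapore-usw1" {
  provider   = aws.us-west-1 # create this CGW in us-west-1
  bgp_asn    = 65000
  ip_address = "111.111.111.111"
  type       = "ipsec.1"

  tags = {
    Name = "cgw-singapore"
  }
}
```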

Posts: 1

Participants: 1



Terraform extract


@shanyangqu wrote:

Is there a way to extract the details of existing infrastructure (set up manually) using Terraform?

e.g. get a list of each Linux server’s version, firewall policy, open ports, installed software packages, etc.

My aim is to generate a block of code describing the current server setup, which I can then validate against a checklist so that security loopholes can be identified and fixed.
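Terraform can adopt existing cloud resources into state with terraform import, but it does not generate configuration from them, and OS-level details (package versions, open ports, firewall rules) are outside its model — those are better audited with a configuration-scanning or compliance tool. The import workflow looks like this (hypothetical resource address and instance ID):

```shell
# Write an empty matching resource block first, then attach the real resource to it
terraform import aws_instance.web i-1234567890abcdef0
```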

Posts: 1

Participants: 1


Updated Learn Guide: State Migration to Terraform Cloud


@judithpatudith wrote:

Terraform Cloud lets teams collaborate on shared Terraform state. Our newly-updated Learn guide on migrating state walks you through migrating local state to the cloud, using an example state file and a fresh workspace to practice on.

Try the hands-on, command-line tutorial

If you have any trouble or the tutorial sparks questions drop them here; I’ll do my best to answer :smiley: Happy hacking!

Posts: 2

Participants: 1


Terraform Cloud: How to read the state of a workspace through the API


@chjoerg wrote:

I am considering hosting my Terraform project on Terraform Cloud (app.terraform.io), but it is critical for me to be able to query the state file (terraform.tfstate.d/<workspace>/terraform.tfstate) through the API. However, reading through the docs, it is unclear to me how to do this. Can anyone enlighten me?
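Terraform Cloud exposes the latest state through its State Versions API: fetch the workspace's current-state-version, then download the raw state file from the hosted-state-download-url field in the response (a sketch; <WORKSPACE_ID> and the token variable are placeholders):

```shell
# Returns JSON:API metadata for the latest state version,
# including attributes.hosted-state-download-url
curl -s \
  -H "Authorization: Bearer $TFC_TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/<WORKSPACE_ID>/current-state-version
```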

Posts: 1

Participants: 1


Private provider hosting


@mhumeSF wrote:

I use a lot of community modules with https://github.com/runatlantis/atlantis. Using community modules requires building a Docker image with many of the providers rolled into it, and if a provider needs updating, we must roll a new image again. Is there a way to point Terraform at a proxy so it searches for self-maintained providers externally? Just to mimic the behaviour of official providers?
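At least for provider binaries, Terraform (0.12-era) already searches a local user plugins directory before downloading from the registry, so self-maintained builds can be baked into, or mounted onto, the Atlantis image rather than proxied (a sketch; the path is the documented Linux location, and the provider filename is hypothetical):

```shell
# terraform init checks this directory for provider plugins first
mkdir -p ~/.terraform.d/plugins/linux_amd64
cp terraform-provider-custom_v1.0.0 ~/.terraform.d/plugins/linux_amd64/
```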

Posts: 2

Participants: 1


Failure to extrapolate security group ID


@aflatto wrote:

Terraform Version
Terraform v0.12.23

provider.aws v2.53.0
Terraform Configuration Files

...resource "aws_elasticache_subnet_group" "redis-group" {
  name       = "${var.ENVIRONMENT}-redis-group"
  subnet_ids = "${data.aws_subnet_ids.private.ids}"
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elasticache_cluster" "redis" {
  cluster_id               = "${var.ENVIRONMENT}"
  engine                   = "redis"
  node_type                = "cache.t2.small"
  num_cache_nodes          = 1
  parameter_group_name     = "default.redis3.2"
  engine_version           = "3.2.4"
  port                     = 6379
  subnet_group_name        = "${aws_elasticache_subnet_group.redis-group.name}"
  security_group_ids       = ["${data.aws_security_groups.vpn-sg.id}", "${data.aws_security_groups.peering-sg.id}", "${data.aws_security_groups.office-sg.id}"]
  snapshot_retention_limit = 14
  tags = {
    Name        = "[${var.ENVIRONMENT}-Redis]",
    Team        = "Infra",
    Provisioner = "Terraform",
    Environment = "[${var.ENVIRONMENT}]"
  }
  lifecycle {
    create_before_destroy = true
  }
}

Debug Output
terraform.tfstate

{
 "mode": "data",
       "type": "aws_security_groups",
       "name": "office-sg",
       "provider": "provider.aws",
       "instances": [
         {
           "schema_version": 0,
           "attributes": {
             "filter": [
               {
                 "name": "group-name",
                 "values": [
                   "*Site*"
                 ]
               },
               {
                 "name": "vpc-id",
                 "values": [
                   "vpc-0a6ec6b4"
                 ]
               }
             ],
             "id": "terraform-20200318113653210400000001",
             "ids": [
               "sg-076fabfab18"
             ],
             "tags": null,
             "vpc_ids": [
               "vpc-0a6ec6b4b"
             ]
           }
         }
       ]
     },
     {
       "mode": "data",
       "type": "aws_security_groups",
       "name": "peering-sg",
       "provider": "provider.aws",
       "instances": [
         {
           "schema_version": 0,
           "attributes": {
             "filter": [
               {
                 "name": "group-name",
                 "values": [
                   "*peer*"
                 ]
               },
               {
                 "name": "vpc-id",
                 "values": [
                   "vpc-0a6ec6b"
                 ]
               }
             ],
             "id": "terraform-20200318113653211800000002",
             "ids": [
               "sg-08246fbbcb"
             ],
             "tags": null,
             "vpc_ids": [
               "vpc-0a6ec6b4b"
             ]
           }
         }
       ]
     },
     {
       "mode": "data",
       "type": "aws_security_groups",
       "name": "vpn-sg",
       "provider": "provider.aws",
       "instances": [
         {
           "schema_version": 0,
           "attributes": {
             "filter": [
               {
                 "name": "group-name",
                 "values": [
                   "*VPN*"
                 ]
               },
               {
                 "name": "vpc-id",
                 "values": [
                   "vpc-0a6ec6b"
                 ]
               }
             ],
             "id": "terraform-20200318113653217700000003",
             "ids": [
               "sg-09f626b389"
             ],
             "tags": null,
             "vpc_ids": [
               "vpc-0a6ec6b4b"
             ]
           }
         }
       ]
     },

Expected Behavior
The security groups exist and are in use by other EC2 instances, so Terraform should read the SG IDs and add them to the array to associate with the ElastiCache cluster.

Actual Behavior
Error: error creating Elasticache Cache Cluster: InvalidParameterValue: Some security group Id not recognized by EC2: securityGroupIds[[terraform-20200318113653211800000002, terraform-20200318113653210400000001, terraform-20200318113653217700000003]], awsAccountId[008770191051]
status code: 400, request id: 2a588f10-c29d-4b57-87ff-dd7213b0adfd

  on databases.tf line 10, in resource "aws_elasticache_cluster" "redis":
  10: resource "aws_elasticache_cluster" "redis" {

I was told to use ‘ids’ and not ‘id’ in the call for the SG, but when I do that I get:
Error: Incorrect attribute value type

  on databases.tf line 19, in resource "aws_elasticache_cluster" "redis":
  19:   security_group_ids       = ["${data.aws_security_groups.vpn-sg.ids}", "${data.aws_security_groups.peering-sg.ids}", "${data.aws_security_groups.office-sg.ids}"]
    |----------------
    | data.aws_security_groups.office-sg.ids is list of string with 1 element
    | data.aws_security_groups.peering-sg.ids is list of string with 1 element
    | data.aws_security_groups.vpn-sg.ids is list of string with 1 element

Inappropriate value for attribute "security_group_ids": element 0: string
required.

Can anyone help me understand what is wrong?
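The strings in the first error (terraform-2020…) are the id of each aws_security_groups data source itself, a synthetic identifier Terraform generates for the query, not a security group ID; the actual group IDs live in the ids list attribute. And since ids is already a list, wrapping each one in brackets produces a list of lists, which explains the second error. Concatenating the lists instead should work (a sketch against the data sources shown above):

```hcl
security_group_ids = concat(
  data.aws_security_groups.vpn-sg.ids,
  data.aws_security_groups.peering-sg.ids,
  data.aws_security_groups.office-sg.ids,
)
```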

Posts: 1

Participants: 1
