Channel: Terraform - HashiCorp Discuss

Referencing module (cache) directory?


Let’s assume I have a module that looks something like this:

resource "null_resource" "deployment_package" {
  triggers = {
    source_code = base64sha256(file("code/lambda_function.py"))
  }

  provisioner "local-exec" {
    command = "code/create_deployment_package.sh"
  }
}

Where the code (lambda_function.py & create_deployment_package.sh) is located in the “code” directory inside the terraform module.

The module itself is hosted in a git repo, so I use it like that:

module "foo" {
  source = "git::git@mygitlab.my.domain:terraform-modules/my_module.git"
}

Now when I run terraform init, a cache of the module is created at .terraform/modules/foo/.

Is there a way to reference that directory in code?

The problem is that code/lambda_function.py of course won’t find the correct file; I had to use .terraform/modules/foo/code/lambda_function.py instead. But “foo” here obviously depends on what I name my module.

So how would I resolve this issue?
Is there a way to determine the cache directory dynamically or to otherwise reference files in the module code?
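
For reference, Terraform exposes the module’s own directory as path.module, which resolves correctly no matter where the module cache lives or what the calling configuration names the module. A minimal sketch of the same resource using it:

resource "null_resource" "deployment_package" {
  triggers = {
    # path.module always points at the directory containing this .tf file,
    # even when the module is cached under .terraform/modules/<name>/
    source_code = base64sha256(file("${path.module}/code/lambda_function.py"))
  }

  provisioner "local-exec" {
    command = "${path.module}/code/create_deployment_package.sh"
  }
}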

3 posts - 1 participant



Terraform 0.13.0 - for_each loop in module


Hi,

I want to create Azure groups and roles (users already exist) using Terraform. I have working code, but I wanted to make use of for_each for modules in Terraform 0.13.0.

My original root module calls a child module multiple times (ie the root module has multiple module blocks calling the same module), and feeds in different user/group/role variables for each module block. My .tfvars file has a long list of users/groups/roles which is required.

The original module blocks look like this; I have one for group1, one for group2, and so on. The three variables are read from .tfvars and passed into the module.

module "group1" {
  source = "…/modules"
  group  = var.group1-group
  users  = var.group1-users
  roles  = var.group1-roles
}

Using for_each in a module block, I wanted to have just one module block. So far I have this:

module "groups" {
  source   = "…/modules"
  for_each = var.groups
  # above is a new "set" variable I defined in tfvars that holds "group1", "group2" and so on
  group = format("%s%s%s", "var.", each.key, "-group")
  users = format("%s%s%s", "var.", each.key, "-users")
  roles = format("%s%s%s", "var.", each.key, "-roles")
}

The problem is the “format” command creates strings like:
“var.group1-group”
“var.group1-users”
“var.group1-roles”

and that literal string is passed to the module rather than the value of the variable var.group1-group. Basically I want to pass var.group1-group (the variable) and not "var.group1-group" (the string),
and so on for all the variables.

Any ideas - or do I need to rethink my entire design to make use of for_each?

Regards,
Scott
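
For reference, Terraform has no way to turn a string like "var.group1-group" into a reference to that variable. The usual restructuring is to fold the per-group values into a single map-of-objects variable so each.value carries the data directly; a sketch, assuming the .tfvars entries can be merged into one groups map:

variable "groups" {
  type = map(object({
    group = string
    users = list(string)
    roles = list(string)
  }))
}

module "groups" {
  source   = "…/modules"
  for_each = var.groups

  # each.value now holds the settings for this group directly
  group = each.value.group
  users = each.value.users
  roles = each.value.roles
}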

1 post - 1 participant


Accessing resources indirectly


So (thanks to Constructing resource names within Terraform) I have managed to remove a pre-processing step and now feed a static map straight to resources using for_each.

What I now have is a map that has, as one of its properties, the name/reference of a resource.

(I’ve trimmed the map down to just the one property that references a pre-existing resource.)

organisation_accounts = {
  devops = {
    organization_unit = "aws_organizations_organizational_unit.devops"
  }
  old-production = {
    organization_unit = "aws_organizations_organizational_unit.production"
  },
  old-staging = {
    organization_unit = "aws_organizations_organizational_unit.staging"
  },
  production-compute = {
    organization_unit = "aws_organizations_organizational_unit.production"
  },
  staging-compute = {
    organization_unit = "aws_organizations_organizational_unit.staging"
  },
  production-store = {
    organization_unit = "aws_organizations_organizational_unit.production",
  }
  staging-store = {
    organization_unit = "aws_organizations_organizational_unit.staging",
  }
  production-testing = {
    organization_unit = "aws_organizations_organizational_unit.testing"
  }
  staging-testing = {
    organization_unit = "aws_organizations_organizational_unit.testing"
  }
}

What I want to do is use the value as a reference to the actual resource, but I can’t find the appropriate syntax for this.

resource "aws_organizations_account" "organization_account" {
  for_each  = var.organisation_accounts
  email     = each.value.email
  name      = "digitickets-${each.key}"
  parent_id = "${each.value.organization_unit}.id"
  role_name = "OrganizationAccountAccessRole"

  lifecycle {
    ignore_changes = [role_name]
  }

  tags = merge(local.base_tags, {
    "Name" = "${var.environment}-account-${each.key}"
  })
}

The above produces the error

Error: invalid value for parent_id (see https://docs.aws.amazon.com/organizations/latest/APIReference/API_MoveAccount.html#organizations-MoveAccount-request-DestinationParentId)

  on organization_accounts.tf line 1, in resource "aws_organizations_account" "organization_account":
   1: resource "aws_organizations_account" "organization_account" {

for each of the accounts.

I can’t work out how to “lookup” resources.

Any help on this would be appreciated.
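
For reference, Terraform can’t dereference a resource from a string of its address at runtime. The usual pattern is a lookup map in locals keyed by short names, whose values are the real resource references; the variable then shrinks to bare keys like "production". A sketch:

locals {
  organizational_units = {
    devops     = aws_organizations_organizational_unit.devops
    production = aws_organizations_organizational_unit.production
    staging    = aws_organizations_organizational_unit.staging
    testing    = aws_organizations_organizational_unit.testing
  }
}

# with organisation_accounts entries shortened to bare keys, e.g.:
#   old-production = { organization_unit = "production" }
# the resource can then resolve the reference:
#   parent_id = local.organizational_units[each.value.organization_unit].id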

1 post - 1 participant


How to flatten 2-dim lists by "column" in terraform?


According to https://www.terraform.io/docs/configuration/functions/flatten.html, we can flatten 2-dim lists by “row”.

> flatten([["a", "b"], ["c","d"], ["e"]])
["a", "b", "c", "d", "e"]

The question is how to flatten 2-dim lists by “column” in Terraform?

input:

> [["a", "b"], ["c","d"], ["e"]]

pretty print:

[
    ["a","b"], 
    ["c","d"], 
    ["e"]
]


output:

["a", "c", "e", "b", "d"]

i.e. “a”, “c”, “e” is the first column; “b”, “d” is the second column.

It is easy in a language like Python or Java, but I cannot find how to implement it in Terraform.

Thank you !
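
For reference, a column-wise flatten can be built from for expressions plus range() (0.12+); a sketch that also handles the ragged rows in the example:

locals {
  input   = [["a", "b"], ["c", "d"], ["e"]]
  max_len = max([for row in local.input : length(row)]...)

  # take column i from every row that is long enough, then flatten
  by_column = flatten([
    for i in range(local.max_len) : [
      for row in local.input : row[i] if length(row) > i
    ]
  ])
}

# local.by_column == ["a", "c", "e", "b", "d"]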

2 posts - 2 participants


Looking for inspiration on how to build a UX/UI for non-technical Terraform users


The goal is to either build a custom UI, with drop-downs, check-boxes, and so on, that would allow non-technical users to spin up resources on Azure, or to read data from an existing IT-infrastructure data repository DB that could be used as a mapping for a workload migration from on-premise to cloud.

Does anyone have info that could be used as a starting point?

Thanks!

1 post - 1 participant


Create each sg rule for each node group?


Hey folks, I’ve been banging my head against the wall on this one, and I’m not sure if it is possible.

We are spinning up EKS clusters and would like to use managed node groups. However, the limitation of managed node groups is that you cannot provide the security group for each node group; Amazon creates one for you.

We are currently barred from using managed node groups because we need to control the ports on the node security groups, but then I thought of a hack I could try that might work (and it sort of does, just not to the extent I would like).

I have a null data source that waits for the node group to be created, then a data block that looks up the security group Amazon created by name, which lets us reference it in an aws_security_group_rule resource and therefore manage the SG.

Here is the request: I would like to be able to create security group rules for each rule defined within each node group configuration. Here is some example code that I have that is working:

node_groups = {
  default = {
    instance_type = "t3.medium"
    disk_size     = 20
  }
  node_group_2 = {
    instance_type = "m4.large"
  }
}

additional_nodegroup_sg_rules = {
  k8s = {
    node_group               = "default"
    description              = "app database",
    type                     = "ingress",
    from_port                = 5432
    to_port                  = 5432
    protocol                 = "tcp"
    source_security_group_id = "sg-12345"
  },
  ssh = {
    node_group  = "node_group_2"
    description = "SSH",
    type        = "ingress",
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.60.192.0/28"]
  },
}

data "aws_security_group" "node_group_sg" {
  for_each = var.node_groups
  filter {
    name   = "tag:eks"
    values = [aws_eks_node_group.node_group[each.key].node_group_name]
  }
  depends_on = [data.null_data_source.node_groups_sg]
}

resource "aws_security_group_rule" "managed_node_ssh_access" {
  for_each                 = var.additional_nodegroup_sg_rules
  security_group_id        = data.aws_security_group.node_group_sg[lookup(each.value, "node_group", null)].id
  description              = lookup(each.value, "description", null)
  type                     = lookup(each.value, "type", null)
  from_port                = lookup(each.value, "from_port", null)
  to_port                  = lookup(each.value, "to_port", null)
  protocol                 = lookup(each.value, "protocol", null)
  cidr_blocks              = lookup(each.value, "cidr_blocks", null)
  source_security_group_id = lookup(each.value, "source_security_group_id", null)
}

The limitation here is that if the user has a single rule they wish to apply to ALL node groups, it has to be defined multiple times, once for each specific node group. I would like to be able to set the value of node_group within additional_nodegroup_sg_rules to "all" and only have to define it once, but have it propagate down to all the node group sgs.

I have tried every manner I can come up with of manipulating the data to create values and a workflow that supports this, but I just don’t think it is possible.

I think the only way would be to somehow convert the data with a local so that:

additional_nodegroup_sg_rules = {
  ssh = {
    node_group  = "all"
    description = "SSH",
    type        = "ingress",
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.60.192.0/28"]
  },
}

turns into a map that looks like:

global_nodegroup_sg_rules = {
  ssh_default = {
    node_group  = "default"
    description = "SSH",
    type        = "ingress",
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.60.192.0/28"]
  },
  ssh_node_group_2 = {
    node_group  = "node_group_2"
    description = "SSH",
    type        = "ingress",
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.60.192.0/28"]
  },
}

and then have an additional sg rule resource block that runs against local.global_nodegroup_sg_rules

Any ideas?

Thanks in advance!
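
For reference, the "all" fan-out sketched above can be produced with for expressions and merge(); a sketch, assuming every rule carries a node_group key:

locals {
  expanded_nodegroup_sg_rules = merge(
    # rules pinned to a single node group pass through unchanged
    {
      for name, rule in var.additional_nodegroup_sg_rules :
      name => rule if rule.node_group != "all"
    },
    # rules marked "all" are duplicated once per node group
    merge([
      for ng_name, ng in var.node_groups : {
        for name, rule in var.additional_nodegroup_sg_rules :
        "${name}_${ng_name}" => merge(rule, { node_group = ng_name })
        if rule.node_group == "all"
      }
    ]...)
  )
}

# the aws_security_group_rule resource then iterates the expanded map:
#   for_each = local.expanded_nodegroup_sg_rules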

2 posts - 2 participants


Error creating pipeline in Azure DevOps

Unable to destroy resources using "terraform destroy"


I have tested the command “terraform destroy” on both Terraform versions 0.11.13 & 0.12.26, installed on Ubuntu 18.04 & Windows 10 respectively.

Once I confirm by answering yes to the destroy prompt, it shows the result as “0 destroyed”. I can see the resources being created using “terraform apply”, but “terraform destroy” is not deleting them.

Any suggestions?

1 post - 1 participant



Custom VPC, cannot ping/ssh EC2 in Private Subnet from Public Subnet


Hello Gurus, I’m creating a custom VPC with a public and a private subnet. All looks good as pasted below; however, when I try to ping from the EC2 instance in the public subnet to the EC2 instance in the private subnet, it does not work. Please let me know what I’m missing.

provider "aws" {
  region     = "eu-west-2" # London
  access_key = "My KEY"
  secret_key = "MY SECRET KEY"
}

Custom VPC

resource "aws_vpc" "MyVPC" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default" # For Prod use "dedicated"

  tags = {
    Name = "MyVPC"
  }
}

Creates “Main Route Table”, “NACL” & “default Security Group”

Create Public Subnet, Associate with our VPC, Auto assign Public IP

resource "aws_subnet" "PublicSubNet" {
  vpc_id                  = aws_vpc.MyVPC.id # Our VPC
  availability_zone       = "eu-west-2a"     # AZ within London, 1 Subnet = 1 AZ
  cidr_block              = "10.0.1.0/24"    # Check using this later > "${cidrsubnet(data.aws_vpc.MyVPC.cidr_block, 4, 1)}"
  map_public_ip_on_launch = true             # Auto assign Public IP for Public Subnet

  tags = {
    Name = "PublicSubNet"
  }
}

Create Private Subnet, Associate with our VPC

resource "aws_subnet" "PrivateSubNet" {
  vpc_id            = aws_vpc.MyVPC.id # Our VPC
  availability_zone = "eu-west-2b"     # AZ within London region, 1 Subnet = 1 AZ
  cidr_block        = "10.0.2.0/24"    # Check using this later > "${cidrsubnet(data.aws_vpc.MyVPC.cidr_block, 4, 1)}"

  tags = {
    Name = "PrivateSubNet"
  }
}

Only 1 IGW per VPC

resource "aws_internet_gateway" "MyIGW" {
  vpc_id = aws_vpc.MyVPC.id

  tags = {
    Name = "MyIGW"
  }
}

New Public route table, so we can keep “default main” route table as Private. Route out to MyIGW

resource "aws_route_table" "MyPublicRouteTable" {
  vpc_id = aws_vpc.MyVPC.id # Our VPC

  route { # Route out IPV4
    cidr_block = "0.0.0.0/0" # IPV4 Route Out for all
    # ipv6_cidr_block = "::/0" # The parameter destinationCidrBlock cannot be used with the parameter destinationIpv6CidrBlock
    gateway_id = aws_internet_gateway.MyIGW.id # Target: Internet Gateway created earlier
  }

  route { # Route out IPV6
    ipv6_cidr_block = "::/0" # IPV6 Route Out for all
    gateway_id      = aws_internet_gateway.MyIGW.id # Target: Internet Gateway created earlier
  }

  tags = {
    Name = "MyPublicRouteTable"
  }
}

Associate “PublicSubNet” with the public route table created above, removes it from default main route table

resource "aws_route_table_association" "PublicSubNetnPublicRouteTable" {
  subnet_id      = aws_subnet.PublicSubNet.id
  route_table_id = aws_route_table.MyPublicRouteTable.id
}

Create new security group “WebDMZ” for WebServer

resource "aws_security_group" "WebDMZ" {
  name        = "WebDMZ"
  description = "Allows SSH & HTTP requests"
  vpc_id      = aws_vpc.MyVPC.id # Our VPC: SGs cannot span VPCs

  ingress {
    description = "Allows SSH requests for VPC: IPV4"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP /32
  }

  ingress {
    description = "Allows HTTP requests for VPC: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # You can use Load Balancer
  }

  ingress {
    description      = "Allows HTTP requests for VPC: IPV6"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    description = "Allows SSH requests for VPC: IPV4"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # SSH restricted to my laptop public IP /32
  }

  egress {
    description = "Allows HTTP requests for VPC: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description      = "Allows HTTP requests for VPC: IPV6"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }
}

Create new EC2 instance (WebServer01) in Public Subnet

Get ami id from Console

resource "aws_instance" "WebServer01" {
  ami             = "ami-01a6e31ac994bbc09"
  instance_type   = "t2.micro"
  subnet_id       = aws_subnet.PublicSubNet.id
  key_name        = "MyEC2KeyPair"                 # To connect using key pair
  security_groups = [aws_security_group.WebDMZ.id] # Assign WebDMZ security group created above
  #vpc_security_group_ids = [aws_security_group.WebDMZ.id]

  tags = {
    Name = "WebServer01"
  }
}

Create new security group “MyDBSG” for WebServer

resource "aws_security_group" "MyDBSG" {
  name        = "MyDBSG"
  description = "Allows Public WebServer to Communicate with Private DB Server"
  vpc_id      = aws_vpc.MyVPC.id # Our VPC: SGs cannot span VPCs

  ingress {
    description     = "Allows ICMP requests: IPV4" # For ping, communication, error reporting etc
    from_port       = -1
    to_port         = -1
    protocol        = "icmp"
    cidr_blocks     = ["10.0.1.0/24"]                # Public Subnet CIDR block, can be "WebDMZ" security group id too as below
    security_groups = [aws_security_group.WebDMZ.id] # Tried this as above was not working, but still doesn't work
  }

  ingress {
    description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using the DBServer private IP address and copying the private key to the WebServer
    from_port   = 22                          # ssh ec2-user@<Private IP Address> -i MyPvKey.pem, private key pasted into MyPvKey.pem
    to_port     = 22                          # Not good practice to store the private key on the WebServer; instead use a Bastion Host (hardened image, secure) to connect to the private DB
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  ingress {
    description = "Allows HTTP requests: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  ingress {
    description = "Allows HTTPS requests: IPV4"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  ingress {
    description = "Allows MySQL/Aurora requests"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows ICMP requests: IPV4" # For ping, communication, error reporting etc
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["10.0.1.0/24"] # Public Subnet CIDR block, can be "WebDMZ" security group id too
  }

  egress {
    description = "Allows SSH requests: IPV4" # You can SSH from WebServer01 to DBServer, using the DBServer private IP address and copying the private key to the WebServer
    from_port   = 22                          # ssh ec2-user@<Private IP Address> -i MyPvtKey.pem, chmod 400 MyPvtKey.pem
    to_port     = 22                          # Not good practice to store the private key on the WebServer; instead use a Bastion Host (hardened image, secure) to connect to the private DB
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows HTTP requests: IPV4"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows HTTPS requests: IPV4"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }

  egress {
    description = "Allows MySQL/Aurora requests"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
  }
}

Create new EC2 instance (DBServer) in Private Subnet, Associate “MyDBSG” Security Group

resource "aws_instance" "DBServer" {
  ami             = "ami-01a6e31ac994bbc09"
  instance_type   = "t2.micro"
  subnet_id       = aws_subnet.PrivateSubNet.id
  key_name        = "MyEC2KeyPair"                 # To connect using key pair
  security_groups = [aws_security_group.MyDBSG.id] # THIS WAS GIVING AN ERROR WHEN ASSOCIATING
  #vpc_security_group_ids = [aws_security_group.MyDBSG.id]

  tags = {
    Name = "DBServer"
  }
}

Elastic IP required for NAT Gateway

resource "aws_eip" "nateip" {
  vpc = true

  tags = {
    Name = "NATEIP"
  }
}

DBServer in private subnet cannot access internet, so add “NAT Gateway” in Public Subnet

Add Target as NAT Gateway in default main route table. So there is route out to Internet.

Now you can do yum update on DBServer

resource "aws_nat_gateway" "NATGW" { # Create a NAT Gateway in each AZ so in case of failure it can use the other
  allocation_id = aws_eip.nateip.id          # Elastic IP allocation
  subnet_id     = aws_subnet.PublicSubNet.id # Public Subnet

  tags = {
    Name = "NATGW"
  }
}

Main Route Table add NATGW as Target

resource "aws_default_route_table" "DefaultRouteTable" {
  default_route_table_id = aws_vpc.MyVPC.default_route_table_id

  route {
    cidr_block     = "0.0.0.0/0"              # IPV4 Route Out for all
    nat_gateway_id = aws_nat_gateway.NATGW.id # Target: NAT Gateway created above
  }

  tags = {
    Name = "DefaultRouteTable"
  }
}
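
One observation, not a verified fix: per the AWS provider docs, security_groups expects security group names and is intended for EC2-Classic or the default VPC; inside a custom VPC the commented-out vpc_security_group_ids is the argument to use, which matches the association error noted in the comments. A minimal change:

resource "aws_instance" "DBServer" {
  # ... everything as above, but associate the SG by ID:
  vpc_security_group_ids = [aws_security_group.MyDBSG.id]
}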

1 post - 1 participant


Terraform support for rolling deployments on ECS with draining


We have a bunch of C# game services running in ECS (AWS). When we make updates to these services, my preferred deployment strategy looks like this:

  • we roll out the new version of the service
  • all new players will hit the new service version
  • we let existing players finish their game on the old servers, which takes some minutes (draining)
  • when all players have left the old service version, we want it to shut down

How does Terraform support this scenario?
Cheers,
Reik

1 post - 1 participant


Azure webhook creation using terraform


I have created an Azure Automation account and deployed a runbook using Terraform. I want to trigger the runbook through a webhook, but in Terraform there is no such resource for webhooks.

Is there any way to create a webhook using Terraform? I have searched many sources but didn’t find any useful option.

1 post - 1 participant


Object type variable definition syntax and variable validation


Is there a way to define variable validation as part of an object type definition?

E.g. say we have the following variable defined:

  variable "VMSize" {
    type = string
    default = "Small"
    validation {
      condition = can(regex("^Small$|^Medium$|^Large$", var.VMSize))
      error_message = "Invalid VM Size."
    }
  }

How can one define it as part of an object?

The following doesn’t work (with v0.13-beta1).

variable "VMCluster" {
  type = object({
    "VMSize" = {
      type    = string
      default = "Small"
      validation {
        condition     = can(regex("^Small$|^Medium$|^Large$", var.VMSize))
        error_message = "Invalid VM Size."
      }
    }
  })
}

In general, will HCL be revising the object type definition to allow them to contain normally defined variables rather than the <KEY> = <TYPE> syntax?

The following would be great:

variable "VMCluster" {
  type = object({
    variable "VMSize" {
      type    = string
      default = "Small"
      validation {
        condition     = can(regex("^Small$|^Medium$|^Large$", var.VMSize))
        error_message = "Invalid VM Size."
      }
    }
  })
}
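
For what it’s worth, 0.13 doesn’t allow validation (or default) inside a type expression, but a validation block on the object variable itself can reach into its attributes, which covers this case. A sketch:

variable "VMCluster" {
  type = object({
    VMSize = string
  })
  default = {
    VMSize = "Small"
  }

  validation {
    # the condition references the whole object variable, then drills into the attribute
    condition     = can(regex("^Small$|^Medium$|^Large$", var.VMCluster.VMSize))
    error_message = "Invalid VM Size."
  }
}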

1 post - 1 participant


Terraform state mv commands in powershell fail using names with spaces


We have an issue where, when we try to move items using terraform state mv with spaces in the resource addresses, PowerShell parses the command incorrectly.

As you can see below, the CLI interprets the space as a new argument and the state mv call fails.

If we run the command in Command Prompt with modified escaping it will work, but we can’t find a good way to make it work in PowerShell.

Command in PS (fails)
terraform state mv 'local_file.file["b c"]' 'local_file.file["e f"]'

Command in CMD
terraform state mv "local_file.file["b c"]" "local_file.file["e f"]"

Output

2020/06/14 23:59:22 [INFO] Terraform version: 0.12.23
2020/06/14 23:59:22 [INFO] Go runtime version: go1.12.13
2020/06/14 23:59:22 [INFO] CLI args: []string{"C:\Windows\terraform.exe", "state", "mv", "local_file.file[\"b", "c\"]", "local_file.file[\"e", "f\"]"}
2020/06/14 23:59:22 [DEBUG] Attempting to open CLI config file: C:\Users\p120b60.CORPADDS\AppData\Roaming\terraform.rc
2020/06/14 23:59:22 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/06/14 23:59:22 [INFO] CLI command args: []string{"state", "mv", "local_file.file[\"b", "c\"]", "local_file.file[\"e", "f\"]"}
Exactly two arguments expected.

main.tf
provider "azurerm" {
  version                    = "=2.13.0"
  skip_provider_registration = "true"
  features {}
}

locals {
  filenames = toset(["a", "b c"])
}

terraform {
  backend "local" {
    path = "./terraformconfig.tfstate"
  }
}

resource "local_file" "file" {
  for_each = local.filenames
  filename = each.value
}

TF version
Terraform v0.12.23

  • provider.azurerm v2.13.0
  • provider.local v1.4.0
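
For reference, on Windows the Go runtime re-parses the process’s argument string itself, so the inner double quotes generally need backslash escapes even inside PowerShell single quotes. A sketch of the form that usually works (untested on this exact setup):

terraform state mv 'local_file.file[\"b c\"]' 'local_file.file[\"e f\"]'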

1 post - 1 participant


EC2 Instance with 2 Interfaces in 2 private subnets


Hi

I have created this Terraform template referencing a variable file. Basically I want to create 2 instances, each with two interfaces, and each of those interfaces should reside in a separate subnet. My brain is hurting trying to figure it out. What is happening with subnet_id is that it applies subnet 1 to instance 1 and subnet 2 to instance 2. I want each instance to launch across subnet 1 and subnet 2: eth0 pointing to subnet xxxx and eth1 pointing to subnet bbbb. Hope this makes sense. Thanks

Here is my code

vars.tf

variable "ami" {
  default = "ami-xxxx"
}

variable "instance_count" {
  default = "2"
}

variable "instance_tags" {
  type    = "list"
  default = ["testpc1", "testpc22"]
}

variable "subnet_id" {
  type    = "string"
  default = "subnet-xxxxxxx"
}

variable "subnet_ids" {
  description = "A list of VPC Subnet IDs to launch in"
  type        = "list"
  default     = ["subnet-xxxx", "subnet-bbbb"]
}

variable "instance_type" {
  default = "t2.micro"
}

main.tf

resource "aws_instance" "my-instance" {
  count         = "${var.instance_count}"
  ami           = "${var.ami}"
  subnet_id     = "${element(var.subnet_ids, count.index)}"
  instance_type = "${var.instance_type}"

  tags = {
    Name = "${element(var.instance_tags, count.index)}"
  }
}
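
For reference, one way to give every instance a NIC in each subnet is to create the network interfaces explicitly and attach them by device index. A sketch in 0.12+ syntax (resource names eth0/eth1 are hypothetical; note AWS requires both subnets of a single instance to be in the same AZ):

resource "aws_network_interface" "eth0" {
  count     = var.instance_count
  subnet_id = var.subnet_ids[0] # every instance's eth0 lands in subnet-xxxx
}

resource "aws_network_interface" "eth1" {
  count     = var.instance_count
  subnet_id = var.subnet_ids[1] # every instance's eth1 lands in subnet-bbbb
}

resource "aws_instance" "my-instance" {
  count         = var.instance_count
  ami           = var.ami
  instance_type = var.instance_type
  # no subnet_id here; the attached NICs determine placement

  network_interface {
    network_interface_id = aws_network_interface.eth0[count.index].id
    device_index         = 0
  }

  network_interface {
    network_interface_id = aws_network_interface.eth1[count.index].id
    device_index         = 1
  }

  tags = {
    Name = element(var.instance_tags, count.index)
  }
}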

2 posts - 1 participant


Update/Replace resource when source code has changed


I would like to ask how to configure an update/replace of a Google Cloud Function whenever its source code changes.

Terraform runs in a GitLab CI pipeline and does not store state. It creates the cloud function when it does not exist, but if I run the CI again I get the error: 409 Function unzip already exists.

I wonder if someone could explain to me how to detect a change to a specific resource and redeploy it.

Currently I have the following tf file:

provider "google" {
  project = "test-project"
  region  = "europe-west1"
}

resource "google_cloudfunctions_function" "unzip" {
  name        = "unzip"
  runtime     = "python37"
  entry_point = "extractor"
  project     = "test-project"
  available_memory_mb   = 256

  event_trigger   {
    event_type = "google.storage.object.finalize"
    resource = "temp-bucket"
  }
  
  source_repository{
    url = "https://source.developers.google.com/projects/test-project/repos/infra/moveable-aliases/develop/paths/cloud_functions/unzip"
  } 
}
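
Worth noting: because the pipeline doesn’t persist state, Terraform has no record that the function already exists, which is exactly what produces the 409 on the second run. A remote backend fixes both the 409 and the change detection, since the stored state lets Terraform diff the resource instead of re-creating it. A minimal sketch (bucket name hypothetical; the bucket must exist beforehand):

terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket" # hypothetical
    prefix = "cloud-functions"
  }
}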

1 post - 1 participant



Difficulty using JSON as the language for my configuration files


I am a small-scale individual user of Terraform for the configuration of a few servers I run in the AWS cloud. I dislike the proliferation of data serialization formats, and believe that JSON is fully adequate for almost all purposes, so I try to standardise on using JSON files. The Terraform literature says a number of times that HCL (“native syntax”) is “easier” and “recommended”, but it also says that the terraform software can use JSON files as well as HCL files, and that for automation this may be an acceptable choice.

However, the documentation of using JSON is very limited. The one link to a full spec of the HCL use of JSON in 0.12+ is broken. And there are very few examples to be found of json config files. There is also no conversion tool provided.

I have tried a number of times to set up a fairly basic set of files to create a couple of servers in a custom VPC with one private and one public subnet, but terraform does not accept my JSON files. I know that under Terraform 0.11, with maybe a few quirks, it did. Now it does not: neither when I convert using one of the web or command-line tools, nor when I make all the tweaks I can think of.

Has anyone else had this experience? Is there any source of ideas or examples that someone can suggest? I’d be most grateful for any pointers on where to look. If needed I can give an example of simple JSON files that are rejected by terraform init and terraform validate, but I think the point itself is general and clear enough.
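
For what it’s worth, one frequent stumbling block is the file name: Terraform only parses a file as JSON configuration when it ends in .tf.json (e.g. main.tf.json); a plain .json file is silently ignored by terraform init and terraform validate. A minimal example in 0.12+ JSON syntax (AMI value hypothetical):

{
  "resource": {
    "aws_instance": {
      "example": {
        "ami": "ami-0abc1234def567890",
        "instance_type": "t2.micro"
      }
    }
  }
}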

1 post - 1 participant


Using AWS Cli Statement in locals block


Hey all,

I want to store the result of an AWS CLI statement in a locals variable. My actual code looks like this:

locals {
  vpnendpointid = "$${aws ec2 describe-client-vpn-endpoints --filters Name=tag:Name,Values=endpoint --query 'ClientVpnEndpoints[*].[ClientVpnEndpointId]' --output text --region eu-central-1}"
}

I need to store this so I can execute a delete statement later…

Can anyone help me?

Friendly regards,
Samir
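
For reference, Terraform won’t execute shell commands embedded in a locals block; interpolation only evaluates Terraform expressions. One option is the external data source, which runs a program that must print a JSON object of strings. A sketch, assuming bash and the AWS CLI are available wherever Terraform runs:

data "external" "vpn_endpoint" {
  program = ["bash", "-c", <<-EOT
    echo "{\"id\": \"$(aws ec2 describe-client-vpn-endpoints \
      --filters Name=tag:Name,Values=endpoint \
      --query 'ClientVpnEndpoints[0].ClientVpnEndpointId' \
      --output text --region eu-central-1)\"}"
  EOT
  ]
}

locals {
  # the program's JSON output is exposed as a map of strings
  vpnendpointid = data.external.vpn_endpoint.result.id
}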

1 post - 1 participant


Terraform Cloud and Google GCP credentials


How do I add Google GCP credentials as variables? The documentation only shows how to supply credentials via a file; it doesn’t say how to add and use the credentials in Terraform Cloud.
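
For what it’s worth, the usual pattern is to paste the entire contents of the service-account key JSON into a workspace variable (marked sensitive; the value may need its newlines stripped so it fits on one line), because the google provider’s credentials argument accepts the key contents as well as a file path. A sketch (project/region hypothetical):

variable "gcp_credentials" {
  description = "Contents of the service-account key JSON, set as a sensitive variable in the Terraform Cloud workspace"
  type        = string
}

provider "google" {
  credentials = var.gcp_credentials
  project     = "my-project"   # hypothetical
  region      = "europe-west1" # hypothetical
}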

2 posts - 1 participant


Terraform tree structure


Hi all, I’m pretty green on TF and need some guidance with directory structure etc. I have a small project that is growing, but I am running into issues which I assume are limitations of how I’m referencing modules/resources.

My TF tree looks like this:

Tree:
├── acm
│ ├── main.tf
│ └── variables.tf
├── cluster1
│ ├── service1
│ │ ├── alb
│ │ │ ├── main.tf
│ │ │ └── variables.tf
│ │ ├── main.tf
│ │ ├── iam
│ │ │ └── role.tf
│ │ └── messaging
│ │ ├── queue.tf
│ │ ├── sns.tf
│ │ ├── sqs.tf
│ │ └── variables.tf
│ ├── main.tf
│ ├── rds
│ │ ├── rds.tf
│ │ └── variables.tf
│ └── variable.tf
├── main.tf
├── variables.tf

Problem:
When I was trying to get the ACM cert ARN and pass it to the ALB that gets created for service1, I could not reference it in any way.

I assume I need to restructure and rethink how I’m terraforming before it becomes too cumbersome to manage.

Here’s a GitHub link with the code I’m working with:
git@github.com:nhatfield/temp-terraform.git

I appreciate any guidance, feedback, etc.

Thanks
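
For reference, a module can only see values that are explicitly passed to it; the usual wiring is an output in the acm module and a variable on the consuming module, threaded through the root. A sketch with hypothetical file and resource names:

# acm/outputs.tf (hypothetical; assumes the cert resource is named "cert")
output "certificate_arn" {
  value = aws_acm_certificate.cert.arn
}

# root main.tf: pass the output down to the consuming module
module "acm" {
  source = "./acm"
}

module "cluster1" {
  source          = "./cluster1"
  certificate_arn = module.acm.certificate_arn # cluster1 declares a matching variable
}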

1 post - 1 participant


Is there a way in terraform to give users choice of authentication ( like password or keyvault... etc) at the input level?


I am using the Azure provider, and for VM authentication I want to offer users two types of authentication: a randomly generated password, or one based on Key Vault.

Below is a snapshot of how I code each approach individually, but I need to combine them based on the user’s authentication choice. For example, if the user wants Key Vault authentication, it should ask for the vault-related inputs and take the password from there; if the user wants a dynamic password, it should not ask for vault parameters or execute the vault resources, but simply run the random resource and create a password.

data "azurerm_resource_group" "rg_keyvault" {
  name = "${var.azure_secret_rg}"
}

data "azurerm_key_vault" "keyvault" {
  name                = "${var.azure_keyvault_name}"
  resource_group_name = "${data.azurerm_resource_group.rg_keyvault.name}"
}

data "azurerm_key_vault_secret" "bigip_admin_password" {
  name         = "${var.azure_keyvault_secret_name}"
  key_vault_id = "${data.azurerm_key_vault.keyvault.id}"
}

resource "random_password" "password" {

}

resource "azurerm_virtual_machine" "x" {

  os_profile {
    admin_username = var.f5_username

    #admin_password = random_password.password.result
    #admin_password = data.azurerm_key_vault_secret.bigip_admin_password.value
  }
}
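
For reference, one way to express this choice (a sketch; it assumes the Key Vault data sources are also given the same count so they are only read when chosen, and relies on 0.12+ conditionals only evaluating the selected branch):

variable "auth_method" {
  description = "How to obtain the admin password: \"vault\" or \"random\""
  type        = string
  default     = "random"
}

# Only query Key Vault when the user chose it; the azurerm_key_vault and
# resource group data sources would carry the same count expression.
data "azurerm_key_vault_secret" "bigip_admin_password" {
  count        = var.auth_method == "vault" ? 1 : 0
  name         = var.azure_keyvault_secret_name
  key_vault_id = data.azurerm_key_vault.keyvault[0].id
}

locals {
  admin_password = (
    var.auth_method == "vault"
    ? data.azurerm_key_vault_secret.bigip_admin_password[0].value
    : random_password.password.result
  )
}

# then in the VM's os_profile:
#   admin_password = local.admin_password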

1 post - 1 participant

