@dhineshbabuelango wrote:
Is there a way to get an aws_elb name from just knowing its tags?
Posts: 2
Participants: 2
@dhineshbabuelango wrote:
I have a resource like this, where I need to specify multiple counts: one to trigger the resource based on a condition, and the other to apply the same value to all my records.
variable "enable" {
  default = true
}

variable "records" {
  default = ["example1.com", "example2.com", "example3.com"]
}

resource "aws_route53_record" "www" {
  count   = length(var.records)
  zone_id = var.private_zone_id
  name    = var.records[count.index]
  type    = "A"
  count   = var.enable ? 1 : 0

  alias {
    name                   = var.elb_hostname
    zone_id                = data.aws_elb_hosted_zone_id.main.id
    evaluate_target_health = true
  }
}
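Since count can only appear once per resource, one possible way to express both requirements (a sketch, untested) is to fold the enable toggle and the record fan-out into a single expression:
resource "aws_route53_record" "www" {
  # 0 records when var.enable is false, otherwise one per entry in var.records
  count   = var.enable ? length(var.records) : 0
  zone_id = var.private_zone_id
  name    = var.records[count.index]
  type    = "A"

  alias {
    name                   = var.elb_hostname
    zone_id                = data.aws_elb_hosted_zone_id.main.id
    evaluate_target_health = true
  }
}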
Posts: 1
Participants: 1
@grimm26 wrote:
Much like application software releases can have accompanying database migrations to massage tables and schemas to match with the new application code, I think it would be nice to have a feature like this for terraform. Here’s the issue:
Sometimes when I make an update to a module, I make a change to an underlying resource that requires a state mv operation to be done before the first apply with the upgraded module. I (or someone else who paid attention to the README) have to manually run a terraform state mv foo bar before the apply, or incur a possible outage because a resource will be destroyed and recreated. If there could be an accompanying “migration” definition that triggered certain actions, so that the state mv command(s) were automatically run along with the apply, this would make life easier :).
Has anyone else had this thought? Implementation ideas?
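For concreteness, the kind of manual step being described looks like this (both addresses are hypothetical):
terraform state mv 'module.app.aws_instance.server' 'module.app.aws_instance.web_server'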
Posts: 2
Participants: 2
@fgarcia-cnb wrote:
Forgive me, as this is more an ARM template issue combined with a Terraform limitation. We have an ARM template with the following outputs (a static list of managed identities):
"outputs": { "managedIdentity1": { "condition": "[if(greaterOrEquals(variables('webAppCountNum'), 1), bool('true'), bool('false'))]", "type": "string", "value": "[reference(variables('managedIdentities').managedIdentity[0].name, '2016-08-01', 'Full').identity.principalId]" }, "managedIdentity2": { "condition": "[if(greaterOrEquals(variables('webAppCountNum'), 2), bool('true'), bool('false'))]", "type": "string", "value": "[reference(variables('managedIdentities').managedIdentity[1].name, '2016-08-01', 'Full').identity.principalId]" }, "managedIdentity3": { "condition": "[if(greaterOrEquals(variables('webAppCountNum'), 3), bool('true'), bool('false'))]", "type": "string", "value": "[reference(variables('managedIdentities').managedIdentity[2].name, '2016-08-01', 'Full').identity.principalId]" } //continues,
I’d like to refactor this to be a dynamic list that will match the webAppCountNum. I believe I can accomplish this using output iteration, although there might be an issue using the reference function with count (haven’t tested yet):
From the MS docs: “You can’t use it with count because the count must be determined before the reference function is resolved.”
There is also a separate issue where Terraform can only access scalar outputs.
With all these limitations…
is there a way to generate this dynamic output list and have it be consumable by Terraform?
My idea was to convert the output array into a comma-delimited string and have Terraform split the data, but the restrictions on the “reference” function limit its use. If I could generate the array, I could probably convert it using this method.
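A sketch of the Terraform side of that idea, assuming the deployment is wrapped in an azurerm_template_deployment resource (which only surfaces string outputs) and that the template can emit a single comma-delimited output named principalIds — both names are hypothetical:
locals {
  # Hypothetical output name; azurerm_template_deployment outputs are strings,
  # so a comma-delimited value can be split back into a list.
  principal_ids = split(",", azurerm_template_deployment.web.outputs["principalIds"])
}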
Posts: 1
Participants: 1
@mikek wrote:
In earlier versions of Terraform (before the inception of for_each), all of our resource definitions used the count meta-argument. If we wanted to access a particular module that was sourcing one of our resources, that could be done with module.example.default[0] or module.example.default[1].
If we wanted to do the same for a resource that now implements the for_each meta-argument - is the only way to do so to specify the actual name of the key we’d like? I’m primarily concerned about the case where there are many keys to choose from and we’d like to select all of them, or perhaps a few specific ones. Would we have to use some sort of for expression with an if to filter for something specific? Is there something similar to the splat operator that can be used - or the splat operator itself?
What if we’d like to pass the module to depends_on - could we simply do that via depends_on = [module.example.default] as is?
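For what it’s worth, a sketch of the patterns in question, using a hypothetical aws_instance.example managed with for_each:
locals {
  # Every instance, roughly analogous to the old splat over count:
  all_ids = values(aws_instance.example)[*].id

  # A filtered subset via a for expression with an if clause:
  selected_ids = [
    for key, inst in aws_instance.example : inst.id
    if contains(["alpha", "beta"], key)
  ]
}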
Posts: 1
Participants: 1
@LondonAppDev wrote:
Hello!
I have some Terraform that deploys an app to AWS ECS.
My question is, is it safe to run Terraform on a CI/CD tool where the logs are open to the public (for example, Travis-CI.org or GitLab CI/CD with public repos).
In my pipelines I run a plan and a deploy stage, and I cannot see any sensitive outputs in the logs; however, I wanted to check whether it is possible to safely run these projects publicly.
Posts: 1
Participants: 1
@Altern1ty wrote:
Hello,
Currently our team is using ARM templates and we are looking to move to using Terraform for our Azure environment.
One issue is we generally link every possible resource to send logs to our log analytics so that we have all logging centralized in one place.
The documentation for azurerm_log_analytics_linked_service says it only works for automation accounts.
Is there a way to use Terraform to link other resources to log analytics? This is a critical feature for us and is required by our security and compliance team.
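One possible direction (not the linked-service resource, and the wiring below is an assumption on my part, with placeholder names): azurerm_monitor_diagnostic_setting can point an individual resource’s logs at a workspace, e.g. for a hypothetical key vault:
resource "azurerm_monitor_diagnostic_setting" "example" {
  name                       = "send-to-law"
  target_resource_id         = azurerm_key_vault.example.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id

  log {
    category = "AuditEvent"

    retention_policy {
      enabled = false
    }
  }
}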
Thanks!
Posts: 1
Participants: 1
@gadgetmerc wrote:
I’m wondering about the performance difference between data resources. The environment I’m working on is somewhat of a “Power Terramod” configuration. It currently has lots of data resources, which causes plans/applies to take forever. I’m looking for a path forward to make plans/applies go faster.
My understanding is that I can make external calls (as we are) using a data resource that hits a provider (data.aws_security_group.name.id), split the env into multiple states and then use remote state (data.terraform_remote_state.sgid), or consolidate states and extend modules with outputs that give direct pathing to the resource (module.env-sec-groups.sg-id-1).
They each have pros and cons, but I’m currently trying to solve the performance problem. Obviously keeping them all in one state would be the most performant since it’s all in memory, but that has other problems. I’m curious what the performance difference is between a standard data.aws_security_group.name.id and data.terraform_remote_state.sgid. They are both making an external call to AWS (due to my remote backend in S3), but one has to be faster than the other. Maybe it’s not much, but doing it hundreds of times adds up.
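For reference, the remote-state variant I mean looks like this (bucket, key, and output name are placeholders):
data "terraform_remote_state" "security_groups" {
  backend = "s3"

  config = {
    bucket = "example-tf-state"
    key    = "security-groups/terraform.tfstate"
    region = "us-east-1"
  }
}

# consumed as data.terraform_remote_state.security_groups.outputs.sg_id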
I thought about trying to build some sort of test harness to measure this accurately, but I figured I would ask the community before starting a whole project.
Thanks!
Posts: 1
Participants: 1
@madpipeline wrote:
I have the following aws_subnet definition:
resource "aws_subnet" "private-db" {
  count      = 2
  vpc_id     = aws_vpc.app.id
  cidr_block = cidrsubnet(aws_vpc.app.cidr_block, 3, count.index)
}
And the following aws_db_subnet_group definition:
resource "aws_db_subnet_group" "main" {
  name       = local.prefix
  subnet_ids = ["${aws_subnet.private-db.*.id}"]
}
I am not clear on what the proper syntax is to provide the list of subnet_ids here. I always get this or a similar error during planning:
Error: Incorrect attribute value type

  on database.tf line 24, in resource "aws_db_subnet_group" "main":
  24:   subnet_ids = ["${aws_subnet.private-db.*.id}"]
    |----------------
    | aws_subnet.private-db is tuple with 2 elements

Inappropriate value for attribute "subnet_ids": element 0: string required.
I’ve tried the following syntaxes:
aws_subnet.private-db
aws_subnet.private-db.*.id
[aws_subnet.private-db]
[aws_subnet.private-db.*.id]
list(aws_subnet.private-db.*.id)
"${aws_subnet.private-db.*.id}" - this works in 0.11
"${list(aws_subnet.private-db.*.id)}"
What is the proper syntax to use in this scenario?
What don’t I understand about how Terraform 0.12 processes this situation differently from 0.11?
Please advise.
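For comparison, the form I would expect 0.12 to accept (a sketch; the splat already produces a list, so no extra brackets) is:
resource "aws_db_subnet_group" "main" {
  name = local.prefix

  # aws_subnet.private-db[*].id is already a list of strings; wrapping it
  # in [ ... ] yields a list of lists, which triggers the type error above.
  subnet_ids = aws_subnet.private-db[*].id
}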
Posts: 1
Participants: 1
@lindaburns wrote:
I am having issues with my ‘variables.tf’ code.
The type = string for variable "resource_group_name" gets this error message: Unknown token: 4:20 IDENT string [4,20]
I cannot figure out how to fix this…
variable "resource_group_name" {
  type        = string
  description = "The name of the resource group"
}
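A guess: that error message looks like Terraform 0.11’s parser, which only understands the quoted type syntax (unquoted type keywords arrived in 0.12). The 0.11-compatible form would be:
variable "resource_group_name" {
  # 0.11 requires the type keyword to be quoted
  type        = "string"
  description = "The name of the resource group"
}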
Posts: 5
Participants: 3
@RaviKumar1209 wrote:
I am trying to create an NLB using a static private IP address, but I don’t see an option for that on the Terraform aws_nlb resource. Is there a workaround for this?
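One workaround I’m aware of (a sketch; names and the address are placeholders): the NLB is created through aws_lb with load_balancer_type = "network", and its subnet_mapping block accepts a static private address for internal NLBs:
resource "aws_lb" "example" {
  name               = "example-nlb"
  internal           = true
  load_balancer_type = "network"

  # One mapping per subnet; the static private address lives here.
  subnet_mapping {
    subnet_id            = aws_subnet.example.id
    private_ipv4_address = "10.0.1.15"
  }
}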
Posts: 1
Participants: 1
@philthynz wrote:
I am using “azurerm_windows_virtual_machine” instead of “azurerm_virtual_machine”. I have seen some examples here for how to enable WinRM on new VMs.
Can we have some examples on how to do the same with “azurerm_windows_virtual_machine”? Some of the code and blocks mentioned are not compatible with “azurerm_windows_virtual_machine”.
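For what it’s worth, a minimal sketch of the block I’d expect to carry over (all surrounding values are placeholders, and the in-guest WinRM service/firewall configuration is a separate concern):
resource "azurerm_windows_virtual_machine" "example" {
  name                  = "example-vm"
  resource_group_name   = azurerm_resource_group.example.name
  location              = azurerm_resource_group.example.location
  size                  = "Standard_DS1_v2"
  admin_username        = "adminuser"
  admin_password        = var.admin_password
  network_interface_ids = [azurerm_network_interface.example.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }

  # HTTP listener; an Https listener additionally needs certificate_url
  winrm_listener {
    protocol = "Http"
  }
}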
Thanks
Posts: 1
Participants: 1
@eliob83 wrote:
For “maintenance” reasons, I am trying to get a list of specified AWS subnets. That way, I could add subnets to my files and they would join the list on their own if correctly specified (with a specific tag, for example).
So I tried to look the subnets up with a data source; however, they are created within the same Terraform files and therefore cannot be loaded. I thought about adding “depends_on”, but that means I would have to add the new ones explicitly, which is exactly what I am trying to avoid.
Could I simply “declare” a list where I could put my subnets, like a variable but depending on resource creation?
Am I trying something I should not? Should I simply abandon this idea?
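One shape this could take (a sketch; the resource name and the “role” tag are assumptions): build the list from the managed resources themselves rather than a data source, so new subnets join automatically:
locals {
  # Collect the IDs of every managed subnet carrying the chosen tag.
  db_subnet_ids = [
    for s in aws_subnet.private : s.id
    if lookup(s.tags, "role", "") == "db"
  ]
}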
Posts: 1
Participants: 1
@nyue wrote:
How should I be referencing the generated S3 ARN?
The following fails:
provider "aws" {
  region = "ca-central-1"
}

resource "aws_s3_bucket" "b" {
  bucket = "nicholas-yue-my-tf-test-bucket"
}

resource "aws_s3_bucket_policy" "b" {
  bucket = aws_s3_bucket.b.id

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::083230063072:role/ACI-Webhooks"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": "${aws_s3_bucket.b.arn}"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::083230063072:role/ACI-Webhooks"
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::nicholas-yue-my-tf-test-bucket/*"
    }
  ]
}
POLICY
}

output "S3-ARN" {
  value = aws_s3_bucket.b.arn
}
Posts: 1
Participants: 1
@alchemist_ubi wrote:
I am using the following configuration:
terraform {
  required_version = "0.12.21"

  backend "gcs" {
    bucket      = "testing-5"
    prefix      = "infrastructure/github/testing"
    credentials = "./.tmp/credentials.json"
  }
}
I was expecting this to fail when using Terraform Cloud, since there is no file called ./tmp/credentials.json. But it seems that Terraform Cloud ignores my backend configuration. The run went ahead just fine, but it used Terraform Cloud for saving the state instead of my backend configuration.
Is this intentional by design?
I am confused about how to make this work with the GCS backend for saving the state.
Posts: 1
Participants: 1
@glitchcrab wrote:
I have a bit of a chicken-and-egg situation and I’d welcome some input from some more knowledgeable folks.
I’m attempting to use Mastercard’s restapi provider to interact with an API, but the difficulty is that I need to make use of an ID returned on creation of the resource.
I can get the ID of the cluster from the API’s response when initially creating the cluster, but then I would need to use that in the nodepool creation in order to specify the correct API path. Below is some code which I know won’t work, but it shows what I want to achieve.
resource "restapi_object" "cluster" { ... } output "clusterid" { value = jsondecode(restapi_object.cluster.api_response).id } resource "restapi_object" "nodepool" { path = "/v5/clusters/${output.clusterid}/nodepools/" ... depends_on = [ restapi_object.cluster, ] }
Any suggestions welcome!
Posts: 2
Participants: 1
@belliap wrote:
Hi all,
I tried to create a VM with multiple hard disks but only one mount point; is that possible?
What I’m expecting is that when I create a VM, inside it I see only one hard disk, but when I click on it I can access five different hard disks.
I hope this is clear.
Thank you in advance for your help.
Posts: 1
Participants: 1
@gchamon wrote:
I have configured experiments in the terraform block for variable validation. I can’t make Terraform Cloud work with it, though. It keeps flagging validation as an experimental opt-in feature, even with the necessary configuration.
Am I doing something wrong, or are experiments unsupported by Terraform Cloud?
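For reference, the opt-in I have configured looks like this:
terraform {
  # 0.12 opt-in for the variable validation experiment
  experiments = [variable_validation]
}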
Posts: 2
Participants: 2
@4n0nym1ty wrote:
I have two AWS environments, dev and QA. In dev, a DynamoDB table with auto-scaling is managed by a Terraform script with billing mode PROVISIONED. In QA, I am trying to switch the billing mode to PAY_PER_REQUEST through the Terraform script, and while updating the billing mode I get an error.
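A sketch of the target shape (names are placeholders); one guess at the error’s cause is that PAY_PER_REQUEST cannot be combined with provisioned read/write capacity or the auto-scaling targets carried over from dev:
resource "aws_dynamodb_table" "example" {
  name         = "example"
  billing_mode = "PAY_PER_REQUEST" # no read_capacity/write_capacity with on-demand
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}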
Posts: 1
Participants: 1
@mcraw wrote:
Has anyone successfully created and/or used an S3 event trigger/notification (once an object is uploaded to the S3 bucket) to run a script in ECS/EC2?
The architecture for this is S3 --> Trigger --> SQS --> ECS/EC2 instance.
I used the resource “aws_sqs_queue” “queue” found at https://www.terraform.io/docs/providers/aws/r/s3_bucket_notification.html. Then I created my ECS and EC2 instance(s).
I created permissions so the ECS will have access to the trigger (once the object is PUT into the S3 bucket). However, the python script I am trying to run is not running.
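For comparison, the S3 → SQS leg from that docs page looks roughly like this (bucket and queue names are placeholders); note the queue also needs a policy allowing s3.amazonaws.com to send messages, which the same page covers:
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.bucket.id

  # Fire a message into the queue whenever an object is PUT into the bucket.
  queue {
    queue_arn = aws_sqs_queue.queue.arn
    events    = ["s3:ObjectCreated:Put"]
  }
}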
Any pointers would help. I am quite new to terraform.
Posts: 1
Participants: 1