Channel: Terraform - HashiCorp Discuss

Terraform - Customize VM question -- credentials to be used


@jasonwilliams14 wrote:

This is a pretty basic question, but I want to understand how Terraform will customize the OS of a VM (a new Linux VM, for example). I know you specify in main.tf what you want customized (networking, hostname, etc.), but wouldn't Terraform need credentials to log into the newly cloned/created VM to make those changes? That being said, how/where do you specify those credentials? I've been looking at the docs, but I am unable to locate that section.

Thanks.
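For what it's worth, the answer may depend on the mechanism: with the vSphere provider, for instance, guest customization via the clone/customize block is applied through VMware Tools, so no guest credentials are needed for that part. Credentials only come into play when provisioners log into the guest, via a connection block. A minimal sketch with placeholder values (the host, user and key path are assumptions, not the official pattern):

resource "vsphere_virtual_machine" "vm" {
  # ... clone / customize settings omitted ...

  provisioner "remote-exec" {
    inline = ["hostnamectl"]

    # Credentials for logging into the guest live here
    connection {
      type        = "ssh"
      host        = self.default_ip_address
      user        = "root"
      private_key = file("~/.ssh/id_rsa")
    }
  }
}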

Posts: 4

Participants: 2



How to manage multiple customer deployment in Terraform


@thgsiddhimorajkar wrote:

I have a question regarding how to manage multiple customer deployments in Terraform. I want to know a convenient way to do this, if there is one.
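One common pattern (a sketch, not the only way; all names here are hypothetical) is to factor the shared infrastructure into a module and instantiate it once per customer, each call with its own variables:

module "customer_a" {
  source        = "./modules/deployment"
  customer_name = "customer-a"
  instance_size = "t3.small"
}

module "customer_b" {
  source        = "./modules/deployment"
  customer_name = "customer-b"
  instance_size = "t3.large"
}

Workspaces, or one root configuration and state per customer, are common alternatives when customers must be isolated from each other's state.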

Posts: 2

Participants: 2


Question about mongodbatlas provider


@simon-guerin wrote:

I’m trying to automate the deployment of our MongoDB Atlas estate. MongoDB Atlas has an API option as part of the cluster configuration which allows you to enable point-in-time restore (pitEnabled):

https://docs.atlas.mongodb.com/reference/api/clusters-create-one/

Looking at the documentation for the Terraform resource mongodbatlas_cluster, it doesn't have this option.

Is there a plan to implement this and if so do you know when?

Thanks

Simon
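(Note for later readers: newer versions of the mongodbatlas provider did gain a pit_enabled argument on mongodbatlas_cluster; whether it is available depends on your provider version. A sketch, assuming a version that supports it:)

resource "mongodbatlas_cluster" "example" {
  project_id = var.project_id
  name       = "example-cluster"
  # ... provider/size settings omitted ...

  # Assumption: requires a provider version that supports this argument
  pit_enabled = true
}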

Posts: 1

Participants: 1


AWS Client VPN Endpoint


@donsamiro wrote:

Hello,

How can I use AWS Client VPN? I found AWS VPC Endpoint and AWS EC2 Client VPN Endpoint. Which one should I use if I want to create an infrastructure where I get access to a private subnet in a VPC through a Client VPN endpoint? Does anyone have a little code example for me, or a hint? I would be thankful for any advice.

Friendly regards,

Sam
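A client VPN into a private subnet uses the EC2 Client VPN resources rather than a VPC endpoint. A minimal sketch, assuming server and client certificates already exist in ACM (the variables are placeholders):

resource "aws_ec2_client_vpn_endpoint" "vpn" {
  description            = "client-vpn"
  server_certificate_arn = var.server_cert_arn
  client_cidr_block      = "10.100.0.0/22"

  authentication_options {
    type                       = "certificate-authentication"
    root_certificate_chain_arn = var.client_cert_arn
  }

  connection_log_options {
    enabled = false
  }
}

# Attach the endpoint to the private subnet it should reach
resource "aws_ec2_client_vpn_network_association" "vpn_subnet" {
  client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.vpn.id
  subnet_id              = var.private_subnet_id
}

An authorization rule for the subnet's CIDR is typically also needed before clients can reach anything.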

Posts: 1

Participants: 1


Best-practice for Terraform access to Azure subscription


@JohnDelisle wrote:

I’m looking for best-practice guidance for granting App Registration (Service Principal) access to an Azure Subscription for use with Terraform.

As I understand Azure RBAC, you require Owner privilege at the Subscription scope to create/delete Resource Groups and to manage RBAC of a Resource Group. This implies that the App Registration used by Terraform requires Owner at the Subscription scope, if you wish to use Terraform to provision Resource Groups.

I anticipate having many teams deploying products/ services to the same Azure Subscription. If they’re all using the same App Registration (or even multiple App Registrations) with Owner privilege at the Subscription scope, they can inadvertently damage each others’ work. A member of Team A can accidentally delete a resource belonging to Team B, for example.

Is there a best-practice model for this scenario? I want to limit the access of each App Registration in a more compartmentalized way, where each App Registration is restricted to a subset of Resource Groups, without needing to use dozens of Subscriptions to do so.
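One pattern that may help (a sketch, not authoritative guidance): keep a single privileged principal whose only job is to create Resource Groups, and grant each team's Service Principal a role scoped to its own Resource Group rather than to the Subscription:

resource "azurerm_resource_group" "team_a" {
  name     = "rg-team-a"
  location = "eastus"
}

# Team A's Service Principal gets Contributor only on its own Resource Group
resource "azurerm_role_assignment" "team_a" {
  scope                = azurerm_resource_group.team_a.id
  role_definition_name = "Contributor"
  principal_id         = var.team_a_sp_object_id
}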

How are other large organizations solving for this?

Thanks!

Posts: 1

Participants: 1


Template over map


@okgolove wrote:

Hello!

Terraform 0.12 introduced new string templates:

<<EOT
%{ for ip in aws_instance.example.*.private_ip }
server ${ip}
%{ endfor }
EOT

I'd like to ask: can it iterate over a map? Like this:

<<EOT
%{ for ip, name in var.my_cool_map }
${name} ${ip}
%{ endfor }
EOT

If not, how can I achieve that?
Like in 0.11, by creating an additional template and using count?
Thanks!
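For the record, the %{ for } directive in 0.12 does support two-symbol iteration over maps, yielding each key and value, so the form above works. A quick self-contained sketch (the map contents are made up):

variable "my_cool_map" {
  type = map(string)
  default = {
    "10.0.0.1" = "web1"
    "10.0.0.2" = "web2"
  }
}

output "server_list" {
  value = <<EOT
%{ for ip, name in var.my_cool_map ~}
${name} ${ip}
%{ endfor ~}
EOT
}

The ~ markers strip the surrounding newlines, which keeps the rendered output tidy.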

Posts: 1

Participants: 1


Azure NetApp Files - Volume mount path


@sudev5678 wrote:

Hello,
I can't seem to find a way to get the complete mount path (including the IP address) once an Azure NetApp Files volume is created. I don't think it is being exported in the attributes.
I also checked the data source azurerm_netapp_volume.

Do I need to report this elsewhere?

Thanks in advance!
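(Note for later readers: newer versions of the azurerm provider export mount_ip_addresses on azurerm_netapp_volume, which can be combined with volume_path to build the full mount path. A sketch, assuming a provider version that exports it:)

output "netapp_mount_path" {
  # e.g. 10.0.2.4:/myvolumepath
  value = "${azurerm_netapp_volume.example.mount_ip_addresses[0]}:/${azurerm_netapp_volume.example.volume_path}"
}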

Posts: 1

Participants: 1


Unable to associate eip


@sebpo wrote:

I'm getting the following error:

Error: Error associating EIP: MissingParameter: Either public IP or allocation id must be specified
status code: 400

Here is my configuration file:

provider "aws" {
  region     = "us-east-1"
  access_key = ""
  secret_key = ""
}

resource "aws_eip_association" "myeip" {
  instance_id = "aws_instance.myweb.id"
}

resource "aws_instance" "myweb" {
  ami             = "ami-09d069a04349dc3cb"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.mysg.name}"]
}

resource "aws_security_group" "mysg" {
  name = "web-server-sg"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["116.75.30.5/32"]
  }

  ingress {
    from_port   = 21
    to_port     = 21
    protocol    = "tcp"
    cidr_blocks = ["116.75.30.5/32"]
  }

  ingress {
    from_port   = 25
    to_port     = 25
    protocol    = "tcp"
    cidr_blocks = ["116.75.30.5/32"]
  }
}
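The error most likely stems from instance_id being a quoted literal string rather than a reference, so the provider receives neither a usable instance nor an allocation ID. A sketch of the fix in 0.12 syntax (note the association also needs an EIP to exist):

resource "aws_eip" "myeip" {
  vpc = true
}

resource "aws_eip_association" "myeip" {
  instance_id   = aws_instance.myweb.id  # unquoted reference, not a string
  allocation_id = aws_eip.myeip.id
}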

Posts: 2

Participants: 2




Use template_body and parameters_body in azurerm_template_deployment as lists


@belliap wrote:

Hi all!

I have this problem:

I am trying to create an azurerm_template_deployment this way:

resource "azurerm_template_deployment" "test-logic-app-template" {
  count               = "${length(azurerm_logic_app_workflow.test-logic-app-name)}"
  name                = "${element(azurerm_logic_app_workflow.test-logic-app-name.*.id, count.index)}"
  template_body       = "${file(element(var.legacy_tb_la, count.index))}"
  parameters_body     = "${file(element(var.legacy_pb_la, count.index))}"
  resource_group_name = "${var.resource_group_name}"
  deployment_mode     = "Incremental"
}

The variables legacy_tb_la and legacy_pb_la are lists:

legacy_tb_la = ["./file1.json", "./file2.json"]
legacy_pb_la = ["./file1.parameters_DEV.json", "./file2.parameters_DEV.json"]

variable "legacy_tb_la" {
  description = "Path for the template of Logic App"
  type        = "list"
}

variable "legacy_pb_la" {
  description = "Path for the parameters of Logic App"
  type        = "list"
}

When I run terraform apply, I receive this error:

Error creating deployment: resources.DeploymentsClient#CreateOrUpdate: Invalid input: autorest/validation: validation failed: parameter=deploymentName constraint=MaxLength value="…" details: value length must be less than or equal to 64

Can someone help me?
Thanks in advance.
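The constraint here is on the Azure side: the deployment name must be at most 64 characters, and element(azurerm_logic_app_workflow.test-logic-app-name.*.id, count.index) yields the full Azure resource ID (/subscriptions/.../workflows/...), which is far longer. Using the workflow's short name instead should stay under the limit. A sketch of that change:

resource "azurerm_template_deployment" "test-logic-app-template" {
  count               = "${length(azurerm_logic_app_workflow.test-logic-app-name)}"
  # Short workflow name instead of the full resource ID (<= 64 characters)
  name                = "${element(azurerm_logic_app_workflow.test-logic-app-name.*.name, count.index)}"
  template_body       = "${file(element(var.legacy_tb_la, count.index))}"
  parameters_body     = "${file(element(var.legacy_pb_la, count.index))}"
  resource_group_name = "${var.resource_group_name}"
  deployment_mode     = "Incremental"
}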

Posts: 1

Participants: 1


How to automatically destroy a resource while destroying other?


@pmgupte wrote:

I need to create two resources, A and B. B depends on A for a key that A provides when created. Creating them all is fine, no issues whatsoever.

Now, when I say terraform destroy -target B, I want A to be destroyed automatically as well. That is, I do not want to say destroy -target A B. (I don't want to have to remember their relationship.)

Is there a way to do that?
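For what it's worth (a sketch of the behaviour, not official guidance): targeting works in the direction you need if you aim at the dependency instead. terraform destroy -target plans the destruction of the targeted resource plus everything that depends on it, so targeting A takes B down with it:

# Hypothetical resources illustrating the dependency
resource "null_resource" "a" {
}

resource "null_resource" "b" {
  triggers = {
    # B consumes a value from A, creating the dependency
    key = null_resource.a.id
  }
}

# terraform destroy -target=null_resource.a
#   destroys null_resource.b first (it depends on A), then null_resource.a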

Posts: 2

Participants: 2


Unable to pass a module's output as input to another module


@sai-ns wrote:

Hello everybody, this is my first post in this community and I am also fairly new to Terraform. Please correct me on, or disregard, any deviation from the community guidelines.

I currently have a directory structure as below, and I am trying to output db_endpoint from the cosmosdb module and use it in environments/dev/keyvault.tf to pass it as a secret:
├── environments
│ └── dev
│ ├── cosmosdb.tf
│ ├── keyvault.tf
│ ├── variables.tf
├── modules
│ ├── cosmosdb
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── keyvault
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
Here is my output from the cosmosdb module (modules/cosmosdb/outputs.tf):

output "db_endpoint" {
  value       = azurerm_cosmosdb_account.example.endpoint
  description = "Database endpoint url"
}
Below is what I have in environments/dev/keyvault.tf (I tried using a data source, but it still comes back with the same error):

/*
data "azurerm_cosmosdb_account" "dbconnectionurl" {
  name                = module.cosmosdb.name
  resource_group_name = var.resourcegroupname
  endpoint            = module.cosmosdb.db_endpoint
}
*/

module "keyvault-dev" {
  source = "../../modules/keyvault"
  secrets = {
    "dbConnectionUrl" = module.cosmosdb.db_endpoint
  }
}
When I run plan, it errors out with "Reference to undeclared module":

Error: Reference to undeclared module

  on keyvault.tf line 21, in module "keyvault-dev":
  21:   "dbConnectionUrl" = module.cosmosdb.db_endpoint

No module call named "cosmosdb" is declared in the root module.

Can someone please help with how I can achieve this? Please let me know if you need more information.
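The error means the root module being planned (environments/dev) contains no module "cosmosdb" block; a module's outputs are only addressable from the configuration that calls the module. Assuming environments/dev is the working directory, a call like this in cosmosdb.tf (a sketch; the variable names are placeholders) makes module.cosmosdb.db_endpoint resolvable from keyvault.tf in the same directory:

# environments/dev/cosmosdb.tf
module "cosmosdb" {
  source            = "../../modules/cosmosdb"
  resourcegroupname = var.resourcegroupname
}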

Posts: 1

Participants: 1


Multiple simultaneous "runs" in different workspaces on Terraform Cloud


@briananstett wrote:

I'm looking for the ability to have DIFFERENT workspaces execute "runs" at the same time. I understand you wouldn't want simultaneous runs in the SAME workspace, but runs from all workspaces seem to get queued if there's already a run happening in any one workspace.

From a screenshot in a Terraform blog post, it seems like it is possible to have multiple runs executing at once in different workspaces. Is this something you just have to pay more for?

Posts: 2

Participants: 1


Allocate new static IP and create new instance EC2


@thiagodeandrade wrote:

Hi!

How do I allocate a new static IP (Elastic IP) with Terraform?
And how do I get this static IP and associate it with a new EC2 instance?

I tried this, but I don't get the same IP:

resource "aws_eip" "default" {
  instance = "${aws_eip.default.public_ip}"
  vpc      = true
}

resource "aws_instance" "newserver" {
  ami           = "${aws_ami.client-ami.id}"
  instance_type = "t2.micro"
}

ERROR: Incorrect attribute value type

  on snap-to-ec2.tf line 19, in resource "aws_route53_record" "client-dns":
  19: records = aws_eip.default.public_ip
    |----------------
    | aws_eip.default.public_ip is "18.211.142.50"

Inappropriate value for attribute "records": set of string required.
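Two things look off here: the aws_eip references itself instead of the instance, and the aws_route53_record records argument needs a set of strings, not a single string. A sketch of both fixes (the Route 53 zone and record name are placeholders):

resource "aws_eip" "default" {
  instance = aws_instance.newserver.id  # reference the instance, not the EIP itself
  vpc      = true
}

resource "aws_instance" "newserver" {
  ami           = data.aws_ami.client-ami.id
  instance_type = "t2.micro"
}

resource "aws_route53_record" "client-dns" {
  zone_id = var.zone_id
  name    = "client.example.com"
  type    = "A"
  ttl     = 300
  records = [aws_eip.default.public_ip]  # wrap the string in a list
}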

Posts: 2

Participants: 2


Specify resource to always run last


@vtolstov wrote:

I'm a newbie with Terraform. I have a specific use case where I want a resource that always runs last, for example a Slack notification. No matter how many resources come before it, it must always run last.
How can I do that? I saw the docs about depends_on, but it requires specifying the resources by hand.
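There is no built-in "run last" hook, so the usual approach is still depends_on, typically on a null_resource that lists everything it must wait for. A sketch with hypothetical resource names and webhook variable:

resource "null_resource" "slack_notification" {
  # Terraform has no implicit "after everything" ordering, so the
  # dependencies do have to be named by hand here.
  depends_on = [aws_instance.web, aws_db_instance.db]

  provisioner "local-exec" {
    command = "curl -X POST -d '{\"text\": \"deploy finished\"}' ${var.slack_webhook_url}"
  }
}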

Posts: 4

Participants: 3



Waiting for a reboot with Terraform


@phaer wrote:

Hi,

I am provisioning Hetzner Cloud servers which run an unattended install on first boot and then reboot. I need to run a local-exec provisioner after this installation procedure has finished.

If I add the provisioner to the server resource, it seems to run as soon as the server has booted, long before the installation has finished.

My second approach was to use a null_resource with a trigger on the server's status, but Terraform seems to finish its run as soon as all resources are provisioned, before the trigger has a chance to act on the 'reboot' status.

How can I make terraform wait for this reboot?
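One workaround (a sketch, assuming the unattended install drops a marker file when it finishes; the file path is hypothetical): a null_resource with a remote-exec provisioner that polls until the install is done, with the local-exec running after it:

resource "null_resource" "wait_for_install" {
  connection {
    type = "ssh"
    host = hcloud_server.example.ipv4_address
    user = "root"
  }

  # Blocks until the marker file written at the end of the
  # unattended install exists on the server.
  provisioner "remote-exec" {
    inline = ["until [ -f /var/lib/install-finished ]; do sleep 10; done"]
  }

  # Runs only after the remote-exec above has completed.
  provisioner "local-exec" {
    command = "echo server ready"
  }
}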

Posts: 1

Participants: 1


Terraform with GCP Bigquery - explicit dataset access issue


@sajalda23409 wrote:

This is regarding deploying a GCP BigQuery dataset with Terraform CLI version 0.12.19.
While deploying, I found that Terraform removes project-level dataset access if I explicitly grant access to another user at the dataset level through the Terraform CLI. The same does not happen via the GCP console.
I am describing the scenario below.
I have a GCP project named "X", and under it I have the below users/service account with project-level roles assigned in IAM.

  1. 1st user email address with Project OWNER role assigned.
  2. 2nd user email address with Project Viewer role assigned.
  3. A service account with OWNER role assigned.

I am using the same SA, which has the OWNER role at project level, to deploy the dataset from the Terraform CLI.

I observed that once the below code was executed and deployed, the 2nd user with the project-level Viewer role lost access to the dataset. That user cannot see any tables under the deployed dataset. Only the explicitly assigned user/SA can access the dataset/tables.
I am using the "bigquery.googleapis.com" GCP API here.

Is this a bug in Terraform or the GCP BigQuery API? Kindly check and confirm. This is impacting my production deployment, as I am stuck here.

Here is a sample of the code I am using:

resource "google_bigquery_dataset" "my-bigquery-dataset" {
  project       = "X"
  dataset_id    = "my_ds"
  friendly_name = "dataset:my_ds"
  description   = "This is the dataset:my_ds"
  location      = "US"

  labels = {
    airid = "111"
    env   = "prod"
  }

  access {
    role          = "roles/bigquery.admin"
    user_by_email = "${data.google_service_account.admin_sa.email}" # Assigning the bigquery admin role to the SA
  }

  access {
    role          = "roles/bigquery.dataViewer"
    user_by_email = "1st_user_email_address" # Assigning dataset-level Viewer access to the 1st user, who already has OWNER access at project level
  }
}
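This is expected behaviour rather than a bug: the access blocks on google_bigquery_dataset are authoritative, so declaring any of them replaces the dataset's default ACL, which is what normally grants project-level roles access. To keep the defaults, declare them explicitly alongside the additions, e.g. via special_group. A sketch of extra blocks to put inside the same dataset resource:

access {
  role          = "roles/bigquery.dataViewer"
  special_group = "projectReaders" # keeps project-level Viewers on the dataset
}

access {
  role          = "roles/bigquery.dataOwner"
  special_group = "projectOwners"
}

(Newer versions of the google provider also offer a separate google_bigquery_dataset_access resource for additive, non-authoritative grants.)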

Posts: 1

Participants: 1


Understanding attributes as blocks error for aws security group


@omerosaienni wrote:

I am trying to use the attributes as blocks feature in Terraform 0.12.

If I build the following script, Terraform will configure the security group.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port = 443
    to_port   = 443
    protocol  = "tcp"
  }

  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }
}

If I convert the ingress block into attribute format:

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_vpc.main.id

  ingress = [{
    from_port = 443
    to_port   = 443
    protocol  = "tcp"
  }]

  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }
}

Terraform returns an error:

Error: Incorrect attribute value type

  on main.tf line 15, in resource "aws_security_group" "allow_tls":
  15:   ingress = [{
  16:     from_port = 443
  17:     to_port   = 443
  18:     protocol  = "tcp"
  19:   }]

Inappropriate value for attribute "ingress": element 0: attributes
"cidr_blocks", "description", "ipv6_cidr_blocks", "prefix_list_ids",
"security_groups", and "self" are required.

Is this expected?
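For what it's worth, yes, this matches the documented attributes-as-blocks behaviour: with the attribute (=) syntax, Terraform type-checks the value as a plain object, so every attribute of the object type must be set explicitly; the optional-argument handling only applies to the block syntax. A sketch of the attribute form with everything the error message lists spelled out:

  ingress = [{
    description      = ""
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = []
    prefix_list_ids  = []
    security_groups  = []
    self             = false
  }]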

Posts: 1

Participants: 1


Variables Warning: External references from destroy provisioners are deprecated


@invad0r wrote:

Hello,

after updating to Terraform v0.12.19 I've already removed some of the external reference warnings, as mentioned here:

But I don’t understand how to fix this warning:

  on ../../modules/dockerhost/main.tf line 167, in resource "vsphere_virtual_machine" "vm":
 167:     private_key = file(var.private_key_path)

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.

(and 7 more similar warnings elsewhere)

From main.tf:

resource "vsphere_virtual_machine" "vm" {
...
  connection {
    host        = self.default_ip_address
    type        = "ssh"
    user        = "root"
    private_key = file(var.private_key_path)
  }

What should private_key be to avoid dependency cycles?

How do I see the other 7 similar warnings?

(Is there a reverse of -compact-warnings?)
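One common workaround (a sketch, not the only fix; the structure here is hypothetical): move the connection details into something the provisioner can reach via self, e.g. the triggers of a companion null_resource, so the destroy-time connection references only self:

resource "null_resource" "vm_teardown" {
  triggers = {
    host        = vsphere_virtual_machine.vm.default_ip_address
    private_key = file(var.private_key_path)
  }

  provisioner "remote-exec" {
    when   = destroy
    inline = ["echo goodbye"]

    # Only 'self' is referenced here, so the warning goes away
    connection {
      type        = "ssh"
      host        = self.triggers.host
      user        = "root"
      private_key = self.triggers.private_key
    }
  }
}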

Posts: 1

Participants: 1


Terraform Cloud Notification - Webhook - Microsoft teams


@leetrollope-hf wrote:

Hey Folks,

I was wondering if anyone in the community has succeeded in integrating the Terraform Cloud webhook notification with an incoming webhook connector in Microsoft Teams?

I've followed the steps on the Microsoft side to create the webhook, but I'm getting a 400 error on the Terraform Cloud side when testing it. There don't appear to be many configuration options for me to change, just Name, URL and Token.

Any help would be much appreciated. Thanks!

Posts: 1

Participants: 1

