Channel: Terraform - HashiCorp Discuss

S3 Bucket Module Source "Failed to download module"


@outthought wrote:

When running terraform init from an AWS CodeBuild instance using the image hashicorp/terraform:0.12.19, the module fails to download.

The Terraform configuration references a module package stored in an S3 bucket.

 module "outbound" {
   source        = "s3::https://s3.amazonaws.com/my-modules/outbound-0.0.1.zip"
...

The command fails with this output:

Error: Failed to download module

Could not download module "outbound" (main.tf:17) source code from
"s3::https://s3.amazonaws.com/my-modules/outbound-0.0.1.zip":
NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors

This should be valid per the documentation:

The S3 endpoint is correct because the region is us-east-1.

The CodeBuild project has an IAM role with the correct permissions for S3.

The code works from a local CLI command terraform init.

How do I get more information from the command output?

What am I doing wrong fetching the module package from s3?

I have tried many different s3 URLs.

The environment has the variable that would supply the task_role to terraform: ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/40292219-6c1c-424e-b072-8456e6ea8165

Posts: 1

Participants: 1

Read full topic


Deployment best practices


@jota wrote:

I’ve been developing an application that I would like to share with a private network. This app would be hosted remotely; not at the private network where it’s used. Hosting this app on a cloud provider or at the end-user network are not viable options in my case.

So far, Vagrant is fine for my localhost setup. But, once you want to deploy an app, security best practices have to be checked, and this is why I am writing. Maybe I should be using Terraform instead.

At the end of the day, all I need is for users to enter an address (like any website address) and see the application. This app would require a login to use. And I want to configure the host with all the security best practices.

For the app, I currently have a Next.js app, which sends requests to another local Next.js app that fetches data from MongoDB and returns it in the response. You get the picture: a simple app; I’m having fun. The data used by the app is on a dedicated external TB drive at the host, which is nice: I can destroy the Vagrant box if needed and keep my content for the next one. And the host is behind a VPN.

My primary concerns are security (firewall and configuration) and being able to adapt/throttle the network load so it doesn’t alarm the ISP. Terraform seems to be the way to go to set all of this up, and I’m looking for the free, single admin solution. :slight_smile: Or maybe I should use Apache Web Server.

Your assistance is greatly appreciated. Any suggestions to pull this off?

Posts: 1

Participants: 1

Read full topic

Querying aws_default_security_group ID without creating?


@nyue wrote:

I would like to query and print the ID of the default security group, but when I apply the following code, it indicates that it wants to make a change to the default security group.

I was hoping to be able to query the default security group the same way I query the default VPC.

variable "availability_zone_names" {
  type    = list(string)
  default = ["ca-central-1a", "ca-central-1b"]
}

provider "aws" {
  region = "ca-central-1"
}

data "aws_vpc" "default" {
  default = true
}

resource "aws_default_security_group" "default" {
  vpc_id = data.aws_vpc.default.id
}

data "aws_subnet" "default" {
  vpc_id            = data.aws_vpc.default.id
  default_for_az    = true
  availability_zone = var.availability_zone_names[0]
}

output "aws_vpc_id" {
  value = data.aws_vpc.default.id
}

output "aws_subnet_vpc_id" {
  value = data.aws_subnet.default.id
}

output "aws_security_group_id" {
  value = aws_default_security_group.default.id
}
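One way to read the ID without managing the group might be the aws_security_group data source, which can look up the default group by name within the default VPC. A sketch (replacing the aws_default_security_group resource and its output above):

data "aws_security_group" "default" {
  vpc_id = data.aws_vpc.default.id
  name   = "default"    # the default group in a VPC is always named "default"
}

output "aws_security_group_id" {
  value = data.aws_security_group.default.id
}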

Posts: 2

Participants: 2

Read full topic

Support multiple values per one cost filter


@ButterflyServiceDesk wrote:

Hi There

I have been trying to use the AWS budget resource to create budgets that filter across multiple accounts, but unfortunately it seems this can’t be done and has been a bug for a while. I’m just wondering when this will be fixed, as there have been two possible fixes/issues open and waiting to be merged on GitHub for quite a while now. I have linked both below.


Terraform errors when you use an array/list value and says it expects a string. But when you use a comma-separated string of account numbers, i.e. something like

cost_filters = { LinkedAccount = "111111111,222222222,3333333333,444444444,555555555" }

you get a filter that interprets that whole string as a single account number.

Am I using it wrong or?
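One possible workaround, sketched here with placeholder account IDs and amounts, is to create one budget per linked account with for_each so each cost filter only ever holds a single string:

variable "linked_accounts" {
  type    = list(string)
  default = ["111111111111", "222222222222"]   # placeholder account IDs
}

resource "aws_budgets_budget" "per_account" {
  for_each = toset(var.linked_accounts)

  name              = "monthly-${each.value}"
  budget_type       = "COST"
  limit_amount      = "100"                    # placeholder amount
  limit_unit        = "USD"
  time_unit         = "MONTHLY"
  time_period_start = "2020-01-01_00:00"

  cost_filters = {
    LinkedAccount = each.value
  }
}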

Regards

Posts: 1

Participants: 1

Read full topic

Remote State File in a Build Pipeline


@piedmont01 wrote:

Good morning. What is best practice when using remote state in a pipeline in GitHub or Jenkins? Any simple examples would be helpful; I cannot wrap my head around this. After the initial init, will future build jobs refer to the remote state and act accordingly?
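For reference, a minimal sketch assuming an S3 backend (bucket, key, and table names are placeholders); once this is committed, every pipeline run that does terraform init reads and writes the same remote state, and the DynamoDB table prevents two jobs from applying at once:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"              # placeholder bucket
    key            = "pipelines/app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                 # optional, enables state locking
  }
}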

Posts: 1

Participants: 1

Read full topic

Data Sources and Destroy Time Provisioners


@ajchiarello wrote:

I’m trying to identify a way to rewrite my destroy-time provisioners so that they no longer use external data sources (since that has been deprecated), and I’m not sure how to go about it.

My use case is this: when I destroy a VM, I have a number of cleanup tasks that need to be performed, and they use credentials for some external systems. Currently, these pull credentials from a Vault data source. I don’t want to hard-code the credentials, because they would then end up in my git repo. I know that with null resources, I can define the items I need for destroy as part of the triggers block. Is there anything equivalent for other types of resources? Or is there another method I can use to provide credentials to a destroy-time provisioner that doesn’t require hardcoding them into the Terraform configuration?
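A sketch of that triggers pattern on a null resource, with a hypothetical Vault path and cleanup script; note that copying the secret into triggers means it is stored in the state file:

data "vault_generic_secret" "cleanup" {
  path = "secret/cleanup"                # hypothetical Vault path
}

resource "null_resource" "vm_cleanup" {
  # Copy the secret into triggers so the destroy-time provisioner can read it
  # from self.triggers instead of reaching out to a data source at destroy time.
  triggers = {
    api_token = data.vault_generic_secret.cleanup.data["token"]
  }

  provisioner "local-exec" {
    when    = destroy
    command = "./cleanup.sh"             # hypothetical cleanup script
    environment = {
      API_TOKEN = self.triggers.api_token
    }
  }
}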

Posts: 1

Participants: 1

Read full topic

Help with an IF statement


@kendall-link wrote:

At the heart of what I’m trying to do: I want to set the map_public_ip_on_launch value to true if the map key equals "public". Thoughts?

variables.tf

variable    "az1_subnets"    {
    type    =    map
    default    = {
        public    =    "10.10.10.0/24"
        private   =    "10.10.11.0/24"
    }
}

main.tf

    ...
    resource    "aws_subnet"    "az1"    {
        for_each                =    var.az1_subnets
        ...
        map_public_ip_on_launch =    {each.key = "public" ? 1 : 0}
        ...
    }

It’s quite possible I’m being stupid… Please call me out on it as I’m still learning all of this stuff.
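One way this could be written, assuming the subnet also gets a VPC reference (hypothetical aws_vpc.main here), is a comparison on each.key, since map_public_ip_on_launch takes a boolean:

resource "aws_subnet" "az1" {
  for_each   = var.az1_subnets
  vpc_id     = aws_vpc.main.id           # hypothetical VPC reference
  cidr_block = each.value

  # true only for the map entry whose key is "public"
  map_public_ip_on_launch = each.key == "public"
}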

Posts: 2

Participants: 2

Read full topic

Null Resource create new resource without destroying original


@snoopytrb wrote:

I have a .tf file that includes several resources and a null resource that runs a script to create a region-specific configuration in AWS. I want to be able to change the AWS region variable and have Terraform create all new resources. For everything except the null resource that seems to work fine, but the null resource insists on destroying before creating the new resource in the new region. Is there a way to make Terraform leave the original resource alone and create the new one? I’d like it to destroy the null resource only when terraform destroy is explicitly run.

resource "null_resource" "config-s3-remediation" {
  triggers = {
    account_name = var.account_name
    region       = var.region
  }

  depends_on = [
    aws_config_config_rule.s3_access_logging_rule,
    aws_ssm_document.s3_access_logging_ssm
  ]

  provisioner "local-exec" {
    command = "python3 ${path.module}/remediation_config.py add ${self.triggers.region}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "python3 ${path.module}/remediation_config.py remove ${self.triggers.region}"
  }
}
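One possible approach, sketched with a hypothetical regions variable, is to key the null resource on the region with for_each; adding a region then creates a new instance alongside the old one instead of replacing it (depends_on and account_name omitted for brevity):

variable "regions" {
  type    = set(string)
  default = ["us-east-1", "us-west-2"]   # hypothetical region list
}

resource "null_resource" "config-s3-remediation" {
  for_each = var.regions

  triggers = {
    region = each.value
  }

  provisioner "local-exec" {
    command = "python3 ${path.module}/remediation_config.py add ${self.triggers.region}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "python3 ${path.module}/remediation_config.py remove ${self.triggers.region}"
  }
}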

Posts: 1

Participants: 1

Read full topic

Referencing a generated resource in a var list of objects


@Altern1ty wrote:

I’m trying to build out an Azure firewall, some public IPs, and some NAT rules associated with them.

I’m running into difficulty referencing the public IPs I’ve created in my NAT rules. Since the NAT rules use a count and then a nested for_each loop, I can’t figure out how to have them properly reference the public IP they will be using.

My current attempt is to add the name of the public IP manually in var.firewall_nat_collection and then filter azurerm_public_ip.rg_public_IPs, but this only returns an empty list and doesn’t seem to match.

  destination_addresses = [for ip_address, name in azurerm_public_ip.rg_public_IPs : ip_address if name == rule.value.public_ip]

Can you tell me what is wrong with my for expression, or whether there is a better way to achieve this?

relevant code: https://privatebin.net/?ec855204f5cc2d07#4YeSTx7wUjNpQq134AqZH2chiSRt2m3u5Ck4haQyPZxf
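Assuming azurerm_public_ip.rg_public_IPs is created with for_each keyed by the public IP name, iterating it yields name/object pairs, so the address has to be read from the object rather than the key; a sketch:

# "pip" is the whole azurerm_public_ip object, so read its ip_address attribute
destination_addresses = [
  for name, pip in azurerm_public_ip.rg_public_IPs :
  pip.ip_address if name == rule.value.public_ip
]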

Posts: 1

Participants: 1

Read full topic

Is there a way to associate an Integration Service Environments with Logic Apps via Terraform?


@ved3690 wrote:

Is there a way to associate an Integration Service Environment with Logic Apps via Terraform?

I have been trying various options over the past few days and couldn’t get the Logic Apps listed under the Logic Apps section in the ISE via Terraform.

Could you please advise?

Thanks in advance…

Posts: 1

Participants: 1

Read full topic

Specifying security group in a cidr_blocks


@nyue wrote:

I am attempting to enable SSH between the head node and a cluster of compute nodes (and among the compute nodes themselves).

In the AWS console, I am allowed to use a security group where I would otherwise enter CIDR blocks.

How can I achieve the same via Terraform HCL?

resource "aws_security_group" "head_node_sg" {
  name        = "head_node_sg"
  description = "Allow SSH inbound traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "compute_node_sg" {
  name        = "compute_node_sg"
  description = "Allow SSH inbound traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [
      aws_security_group.head_node_sg.id
    ]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
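A security group ID is not a CIDR block, so it cannot go in cidr_blocks; the ingress block has a separate security_groups argument for this. A sketch of the compute node group using it:

resource "aws_security_group" "compute_node_sg" {
  name        = "compute_node_sg"
  description = "Allow SSH inbound traffic"

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    # reference the head node's security group instead of a CIDR block
    security_groups = [aws_security_group.head_node_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}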

Posts: 1

Participants: 1

Read full topic

Add GraphQL API to Terraform Cloud?


@binaryfire wrote:

Hi guys,

We’ve been migrating our internal GitLab integrations from their REST API to their new GraphQL API (https://docs.gitlab.com/ee/api/graphql/) and it’s been an absolute joy. Queries are simple, they only return the data asked for, and the new code is super lightweight because there’s really only one endpoint. Plus we no longer have to worry about versioning (/v1, /v2) in URLs.

Could we get a GraphQL API for Terraform Cloud?

Cheers :slight_smile:

Posts: 1

Participants: 1

Read full topic

Provider version conflict


@itzzmeshashi wrote:

Hi,

I’ve recently upgraded to v0.12.19 of Terraform, and I’m getting the below error when performing a plan:

Error: Resource instance managed by newer provider version
The current state of google_compute_address.gcp-test was created by a
newer provider version than is currently selected. Upgrade the
registry.terraform.io/-/google provider to work with this state.

And my current versions are:

terraform version
Terraform v0.12.19

  • provider.google v3.4.0

v3.4.0 is the latest for the Google provider, so I’m unable to understand the error.
Can someone please help?

Thanks!!
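This error usually appears when the state was written by a provider release newer than the one currently cached in .terraform; one thing worth trying is constraining the provider version and re-running terraform init -upgrade so a matching release is selected. A sketch, with the constraint itself an assumption:

provider "google" {
  # require at least the release that wrote the state; adjust as needed
  version = ">= 3.4"
}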

Posts: 1

Participants: 1

Read full topic

AWS EC2 passwordless ssh - key management/sharing best practices


@nyue wrote:

I am setting up an MPI cluster and need to set up passwordless SSH between the head node and all the compute nodes.

I have learned about aws_security_group_rule via this article so I believe the ports are fine.

I am hoping to find information about the Terraform way (best practice) to manage SSH keys for passwordless access between instances. Should I be doing all the setup via user_data?

Cheers
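One pattern sometimes used for this, sketched here with a hypothetical key name, is to generate the cluster key pair in Terraform and register the public half with EC2; the private half can then be pushed to the nodes (for example via user_data or a file provisioner), with the caveat that it is stored in the Terraform state:

resource "tls_private_key" "cluster" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "cluster" {
  key_name   = "mpi-cluster"                                 # hypothetical name
  public_key = tls_private_key.cluster.public_key_openssh
}

# tls_private_key.cluster.private_key_pem holds the matching private key for
# distribution to the nodes so they can reach each other without passwords.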

Posts: 2

Participants: 2

Read full topic


Blue/Green Deployment


@moula wrote:

Hi,

I want to implement blue/green deployment using Terraform for my application deployment. Here is my use case:

The application architecture is ALB -> Listener -> Target Group -> Launch Configuration -> ASG -> EC2.
I’m looking for a solution where every deployment creates a new Launch Configuration, ASG, and Target Group, updates the existing listener, and then removes the old Target Group, LC, and ASG.
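One commonly described pattern for this, sketched below with placeholder names, AMIs, and sizes, is name_prefix plus create_before_destroy on the launch configuration, with the ASG name tied to the launch configuration so the new ASG comes up before the old one is removed:

resource "aws_launch_configuration" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id            # changing the AMI forces a new launch configuration
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  # tying the ASG name to the launch configuration name forces the ASG to be
  # replaced (new before old) whenever the launch configuration changes
  name                 = aws_launch_configuration.app.name
  launch_configuration = aws_launch_configuration.app.name
  min_size             = 2
  max_size             = 4
  vpc_zone_identifier  = var.subnet_ids                  # hypothetical subnet list
  target_group_arns    = [aws_lb_target_group.app.arn]   # hypothetical target group

  lifecycle {
    create_before_destroy = true
  }
}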

If you have any ideas, please help me with this.

Regards,
Moula

Posts: 1

Participants: 1

Read full topic

Enforce Policy Confirmation on Terraform Cloud


@andrescolodrero wrote:

Is there a way to enforce confirmation of an apply by more than one member?
The point is that users are contributors to the workspace, and I’d like the apply to be confirmed by at least two of them.

Posts: 1

Participants: 1

Read full topic

Update provisioners


@twanc wrote:

Hi there!
Is there a way to define a provisioner that would be triggered only as the action for resource replacement? I need to handle a case where the typical replacement (destroy then create) would not work.
Thanks in advance for your answer.

Posts: 1

Participants: 1

Read full topic

How to escape double-quotes in the local-exec provisioner


@HugoSecteur4 wrote:

Hi, everybody,

I’m trying to execute a command with the Terraform "local-exec" provisioner, but the command contains double quotes (") and I can’t escape them to run it correctly.

I have reduced my problem to a simpler one; here is my code:

resource "null_resource" "Hello_World" {
  provisioner "local-exec" {
    command = "echo \"Hello World\""
  }
}

I would like the result of this command to be :

"Hello World"

But the result is :

\"Hello World\"

Execution :

null_resource.Hello_World: Destroying... [id=2381325006212666147]
null_resource.Hellgo_World: Creating...
null_resource.Hello_World: Destruction complete after 0s
null_resource.Hellgo_World: Provisioning with 'local-exec'...
null_resource.Hellgo_World (local-exec): Executing: ["cmd" "/C" "echo \"Hello World\""]
null_resource.Hellgo_World (local-exec): \"Hello World\"
null_resource.Hellgo_World: Creation complete after 0s [id=5337014350841532259]

Could you please tell me how to properly escape the double quotes so I don’t end up with backslashes in my final result?

Thank you in advance.
Hugo
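On Windows the default interpreter is cmd /C, and its quoting is the likely source of the literal backslashes; one option worth trying is selecting the interpreter explicitly, for example PowerShell (a sketch):

resource "null_resource" "Hello_World" {
  provisioner "local-exec" {
    # run the command through PowerShell instead of the default cmd /C
    interpreter = ["PowerShell", "-Command"]
    command     = "echo '\"Hello World\"'"
  }
}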

Posts: 1

Participants: 1

Read full topic
