Channel: Terraform - HashiCorp Discuss
Viewing all 11357 articles

Panic: runtime error: invalid memory address or nil pointer dereference


Hi there,

Our team has recently adopted Terraform Cloud. In addition to managing infrastructure with TC, we’re trying to also manage TC with TC. To do this, we have a workspace for TC management. We set up another workspace for GitHub teams using our TC management workspace, but then decided to delete it, thinking we could simply rebuild it by re-applying our TC management workspace. When we queue a plan in our TC management workspace, we get the following error:

panic: runtime error: invalid memory address or nil pointer dereference

As well as,

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

SECURITY WARNING: the "crash.log" file that was created may contain 
sensitive information that must be redacted before it is safe to share 
on the issue tracker.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Has anybody else run into this? Any idea why we might be having this issue?

All the best,
E

1 post - 1 participant



Accessing variables declared and defined in parent from child module


Hi,

I am quite new to Terraform, so please bear with me if my questions are silly.
My code structure is as follows:

Root/
  main.tf
  variables.tf
  main_values.tfvars
  nva/    (child module folder)
    create_firewall.tf
    variables.tf
    firewall_values.tfvars

I have tried executing just the child module as a separate standalone configuration, and it worked successfully.
As a next step I wanted to try out a module configuration:
Root: creates only the vnet and subnets.
Child module: creates the AzureFirewall subnet in the vnet created by the parent, and an Azure Firewall in that subnet.
My issues:

  1. How do I refer to the vnet that is created in the parent?
  2. Although all values are provided in firewall_values.tfvars, it still asks for the values in the parent’s main.tf where the child module is called.

I know that per the module configuration I need to pass all the values at the time of calling the module. Is there any way I can point to values defined in the firewall_values.tfvars file? If this is possible, then the child module’s variable definitions can be segregated.
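For reference, a child module only receives values through the arguments of its module block; *.tfvars files are only read for the root module's variables. A minimal sketch (the resource and variable names here are assumptions, not from the question) of forwarding the parent-created vnet into the child:

```hcl
# Root main.tf: forward parent-created network details into the child module.
module "nva" {
  source = "./nva"

  # assumed names; the child declares matching variables in nva/variables.tf
  vnet_name            = azurerm_virtual_network.main.name
  resource_group_name  = azurerm_resource_group.main.name
  firewall_subnet_cidr = var.firewall_subnet_cidr
}
```

Values currently kept in firewall_values.tfvars would then be declared as root variables and supplied alongside the others, e.g. `terraform apply -var-file=main_values.tfvars -var-file=firewall_values.tfvars`.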

Thanks in advance.

1 post - 1 participant


Terraform with GitHub Actions AWS MFA issue


How can I get GitHub Actions to create an MFA token for AWS?

I need to run this shell script to get the token:


unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN

export AWS_ACCESS_KEY_ID=<<YOUR_KEY>>
export AWS_SECRET_ACCESS_KEY=<<YOUR_SECRET>>

aws sts get-session-token --duration-seconds 36000 \
  --serial-number arn:aws:iam::<<YOUR_IAM_ACCOUNT_NUMBER>>:mfa/<<YOUR_IAM_ACCOUNT>> \
  --token-code <<YOUR_MFA_OTP>> \
  --output json

export AWS_ACCESS_KEY_ID=<<GET_FROM_JSON>>
export AWS_SECRET_ACCESS_KEY=<<GET_FROM_JSON>>
export AWS_SESSION_TOKEN=<<GET_FROM_JSON>>

aws sts assume-role --role-arn arn:aws:iam::<<YOUR_DEV_ACCOUNT_NUMBER>>:role/<<YOUR_ROLE>> \
  --role-session-name <<YOUR_ROLE>> \
  --duration-seconds 3600 \
  --output json

export AWS_ACCESS_KEY_ID=<<GET_FROM_JSON>>
export AWS_SECRET_ACCESS_KEY=<<GET_FROM_JSON>>
export AWS_SESSION_TOKEN=<<GET_FROM_JSON>>
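The copy-values-from-JSON steps above can be scripted; here is a sketch using only `sed` (jq would be tidier if it's available on the runner). `CREDS` stands in for the output of the `aws sts` call:

```shell
# Extract a field from the STS JSON response without jq (sketch).
json_field() {
  printf '%s' "$2" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

# Stand-in for: CREDS=$(aws sts get-session-token ... --output json)
CREDS='{"Credentials":{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"secretExample","SessionToken":"tokenExample"}}'

export AWS_ACCESS_KEY_ID=$(json_field AccessKeyId "$CREDS")
export AWS_SECRET_ACCESS_KEY=$(json_field SecretAccessKey "$CREDS")
export AWS_SESSION_TOKEN=$(json_field SessionToken "$CREDS")
```

The same pattern applies to the `assume-role` output.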

Any help is appreciated if Terraform already handles this.

1 post - 1 participant


Is a per-module state file supported?


I have several modules shared by different projects, e.g. projectA/main.tf and projectB/main.tf, written in Terraform like below:

module "moduleA" {
  source = "./modules/moduleA"
}

I have set the remote backend to use the AWS S3 bucket "moduleA" to store the state file, and would like a) moduleA to generate its own state file, separate from other modules, and b) moduleA to reuse the same state file regardless of whether it’s used by projectA or projectB.

Is it doable?
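For context, Terraform tracks state per root configuration, not per module, so one common arrangement (bucket, key and region names below are assumptions) is to promote moduleA to its own root configuration with a dedicated backend key, and have projectA/projectB read its outputs via terraform_remote_state:

```hcl
# moduleA as its own root configuration, with a dedicated state object:
terraform {
  backend "s3" {
    bucket = "moduleA"                   # bucket name from the question
    key    = "moduleA/terraform.tfstate" # assumed key
    region = "us-east-1"                 # assumed region
  }
}

# projectA/main.tf (and projectB likewise): read moduleA's outputs.
data "terraform_remote_state" "moduleA" {
  backend = "s3"
  config = {
    bucket = "moduleA"
    key    = "moduleA/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Both projects then reference `data.terraform_remote_state.moduleA.outputs.*` instead of instantiating the module themselves.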

Thanks

1 post - 1 participant


Determine list of module dependencies


Is there a canonical way to determine which modules a given workspace depends on? We’re currently looking at .terraform/modules/modules.json, but found that it does not seem to “clean up” over time when removing/renaming modules, despite running terraform init again.
For context, we use this in linting to verify that our Terraform Cloud workspace trigger directories are correct in a TF monorepo setup (to avoid unnecessary plan runs), so alternatives that achieve this in other ways are also valid.

1 post - 1 participant


Any way to fetch a computed value?


Hello,

For Azure’s Private Endpoint resource, private_ip_address is a computed value.

For a Private DNS A Record, only IP addresses are allowed.

Is there a way I can fetch the computed value (private_ip_address) and use it in the Private DNS A Record?
(records = [azurerm_private_endpoint.example.private_ip_address])

If I try to use it like an attribute, I get an error:
Object does not have an attribute named private_ip_address.
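For what it's worth, on azurerm_private_endpoint the computed IP appears to live inside the private_service_connection block rather than at the top level, so the reference would be indexed (a sketch; resource names are assumptions and the attribute path should be checked against your provider version):

```hcl
resource "azurerm_private_dns_a_record" "example" {
  name                = "example"
  zone_name           = azurerm_private_dns_zone.example.name
  resource_group_name = azurerm_resource_group.example.name
  ttl                 = 300

  # assumed path to the computed IP on the private endpoint
  records = [azurerm_private_endpoint.example.private_service_connection[0].private_ip_address]
}
```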


2 posts - 2 participants


Terraform apply command destroys my newly imported resources


I’m using Terraform to deploy infrastructure in Oracle Cloud (OCI), and I have some resources, such as virtual machines, that were deployed manually.
I created the Terraform script below to import a VM:

variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}

variable "compartment_ocid" {
  type       = string
  default    = "ocid1.compartment.oc1..aaaaaaaa....."
}

variable "availability_domain" {
  type       = string
  default    = "phiX:US-ASHBURN-AD-1"
}

variable "subnet_ocid" {
  type = string
  default    = "ocid1.subnet.oc1.iad.aaaaaaaa......"
}

variable "ssh_public_key_file" {
  type    = string
  default = "/Users/cgperalt/.ssh/id_rsa.pub"
}

provider "oci" {
  tenancy_ocid      = var.tenancy_ocid
  user_ocid         = var.user_ocid
  fingerprint       = var.fingerprint
  private_key_path  = var.private_key_path
  region            = var.region
  version           = "~> 3.17"
}

# get latest Oracle Linux 7.7 image
data "oci_core_images" "oraclelinux-7-7" {
  compartment_id = var.compartment_ocid
  operating_system = "Oracle Linux"
  operating_system_version = "7.7"
  filter {
    name = "display_name"
    values = ["^([a-zA-z]+)-([a-zA-z]+)-([\\.0-9]+)-([\\.0-9-]+)$"]
    regex = true
  }
}

resource "oci_core_instance" "example" {
  count               = 2
  compartment_id      = var.compartment_ocid
  availability_domain = var.availability_domain
  subnet_id           = var.subnet_ocid
  display_name        = "example_TF-1${count.index}"
  image               = lookup(data.oci_core_images.oraclelinux-7-7.images[0], "id")
  shape               = "VM.Standard1.1"

  metadata = {
    ssh_authorized_keys = "${file(var.ssh_public_key_file)}"
  }
}

then I ran

terraform import oci_core_instance.example <vm_id>

and then if I run

terraform plan

Terraform marks the imported machine to be destroyed.
Does anybody know why this is happening?

Thanks so much

3 posts - 2 participants


AWS AppSync - can we fetch the schema from S3?


The aws_appsync_graphql_api resource for AWS AppSync has a schema attribute for the GraphQL schema, which can be either a multi-line heredoc or loaded via the file function. I’m currently using the file function, but as I’m using Terraform Cloud, that means my GraphQL schema file needs to be committed in my Terraform repository, as opposed to the code repository where it belongs — it’s application code, not configuration.

Is there a way to specify that the schema should be loaded from S3? That’s what I do with my Lambda functions, as aws_lambda_function allows you to specify which aws_s3_bucket_object you want to load your function from, as an alternative to using a straight file reference - however to do that, aws_lambda_function has explicit s3_bucket, s3_key and s3_object_version attributes which are not available in aws_appsync_graphql_api.
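One workaround sketch (an assumption, not a documented AppSync feature): the aws_s3_bucket_object data source exposes the content of text objects through its body attribute, which could feed the schema argument directly (bucket and key names below are made up):

```hcl
data "aws_s3_bucket_object" "schema" {
  bucket = "my-app-artifacts"        # assumed bucket
  key    = "graphql/schema.graphql"  # assumed key
}

resource "aws_appsync_graphql_api" "api" {
  name                = "example"
  authentication_type = "API_KEY"
  schema              = data.aws_s3_bucket_object.schema.body
}
```

Note that body is only populated for objects with a human-readable content type, so the schema object may need to be uploaded as text/plain.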

4 posts - 2 participants



Terraform apply fails to create EC2 instances, complaining about vpc_security_group_ids


here is the code:

resource "aws_instance" "PublicEC2" {
  ami                    = "ami-0e9089763828757e1"
  instance_type          = "t2.micro"
  vpc_security_group_ids = "${aws_security_group.allow_ssh.id}"
  subnet_id              = "${aws_subnet.lme_public-1a.id}"
  key_name               = "mykeyname"

  tags = {
    Name = "PublicEC2"
  }

  depends_on = [aws_vpc.lme_vpc, aws_subnet.lme_public-1a, aws_security_group.allow_ssh]
}

it is throwing the following error:

Error: Incorrect attribute value type

  on ec2.tf line 5, in resource "aws_instance" "PublicEC2":
   5:   vpc_security_group_ids = "${aws_security_group.allow_ssh.id}"

Inappropriate value for attribute "vpc_security_group_ids": set of string
required.
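The error message names the type it wants: vpc_security_group_ids expects a set of strings, so the presumed fix is to wrap the single ID in a list:

```hcl
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
```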

2 posts - 1 participant


Terraform Plan Auto Apply Risks


We need to provision multiple sets of predefined AWS infrastructure components. The set of components is fixed and defined by a database of properties; e.g. the properties may include a Swagger file for a Gateway API, some URL patterns for a CloudFront distribution, etc.

There is only a limited and defined variation among these components and their properties. The properties may change over lifetime of the components.

We were weighing the option of using Terraform to provision and maintain these resources, versus writing scripts ourselves using the AWS SDK and APIs.

Terraform with AWS provider makes things super easy, as compared to manually writing all the provisioning and modification code. However, the team is concerned about the degree of determinism in the process, especially since we want to auto-apply the changes without human review or interaction.

Let’s say my database of properties changes and I want to make changes to a few resource sets, and add a whole new resource set.

Suppose I can guarantee that only this Terraform configuration is changing my AWS infrastructure (no one going to the console to change anything manually; strict infrastructure as code). Will the plan always work in the same way? Of course it’s a computer program, so it’s ultimately deterministic and will always “work in the same way”, but what I want to ask those who know the internal workings of Terraform better is: what are the risks of using this approach to managing our infrastructure? What risk do we avoid by taking the more painstaking SDK/API-based change management approach versus Terraform? Does Terraform have complex, conditional optimizations built in which can result in different upgrade and change paths for different change types, making human review of the plan always necessary to ensure the adopted change path is not risky?

Putting the same question differently: do you see any risk in using a database to auto-generate .tfvars for fixed Terraform provisioning modules, and assuming that repeated re-application of different variations of those tfvars, and adding/removing whole modules, will always result in the same AWS infrastructure?

Thanks
Asif

2 posts - 2 participants


Specifying different providers in a resource using "for_each"


I’m trying to create a postgres user management configuration, and I’ve got several independent postgres instances across which to manage users.

I’d like to be able to have a single module for the “user”, and loop through each provider (postgres instance), creating the user with the given username and password and specific roles per instance without adding a new module for each instance-user combination, but I can’t figure out how to specify a different provider for each instance of a resource/module using “for_each”.

I’ve even upgraded to 0.13-beta-2, to try using “for_each” on the module, since I had seen examples defining module providers as a string, but apparently those were typos, because I just get an error saying providers can’t be specified in quotes.

Here’s what I currently have that doesn’t work:

provider "postgresql" {
  alias = "instance_a"
  ...
}
provider "postgresql" {
  alias = "instance_b"
  ...
}
provider "postgresql" {
  alias = "instance_c"
  ...
}

module "user_joe" {
  source = "../../modules/postgres-users"
  for_each = {
    instance_a = [a_roles.read_only["dbname"]]
    instance_b = [b_roles.admin["dbname2"]]
  }
  providers = {
    postgresql = "postgresql.${each.key}"
  }

  username = "joe"
  password = var.joes_password
  roles = each.value
}

It seems like this would be a useful pattern to allow for, given that certain providers refer to infrastructure that might be array-like in nature. Any ideas on how to approach this?
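For the record, provider references in a providers map must be static identifiers, not strings or expressions, so the only shape I'm aware of is one module block per alias. A verbose sketch of that workaround, reusing the values from the example above:

```hcl
module "user_joe_instance_a" {
  source = "../../modules/postgres-users"
  providers = {
    postgresql = postgresql.instance_a
  }

  username = "joe"
  password = var.joes_password
  roles    = [a_roles.read_only["dbname"]]
}

module "user_joe_instance_b" {
  source = "../../modules/postgres-users"
  providers = {
    postgresql = postgresql.instance_b
  }

  username = "joe"
  password = var.joes_password
  roles    = [b_roles.admin["dbname2"]]
}
```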

1 post - 1 participant


Accessing output of child module that is enabled/disabled using count


Terraform version: 0.12.24

AzureRM: 2.0.0

I am using “count” to enable/disable a child module. When I try to feed the output of the child module to the parent module, I run into issues.

Child module:
Backend address pool creation resource block of the child module.

resource "azurerm_lb_backend_address_pool" "lb-backendpool" {
  count = var.enable_lb == true ? 1 : 0
  resource_group_name = var.rgname
  loadbalancer_id     = element(azurerm_lb.lb.*.id, count.index)
  name                = "BackEndAddressPool"
}

Output of child module:

output "modout_poolid" {
  value      = azurerm_lb_backend_address_pool.lb-backendpool.*.id
}

Parent module:

resource "azurerm_network_interface_backend_address_pool_association" "nic-lbpool" { 
  for_each                = var.enable_lb == true ? azurerm_network_interface.nic : {}
  network_interface_id    = each.value.id
  ip_configuration_name   = "ipconfig01"
  backend_address_pool_id = [module.azure_lb.modout_poolid]
}

Terraform plan


Error: Incorrect attribute value type

 main.tf line 47, in resource "azurerm_network_interface_backend_address_pool_association" "nic-lbpool":
  47:   backend_address_pool_id = [module.azure_lb.modout_poolid]

Inappropriate value for attribute "backend_address_pool_id": string required.
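Because the child output is a splat over count, it is a list of IDs; indexing it (rather than wrapping it in another list) would presumably satisfy the string-typed attribute:

```hcl
backend_address_pool_id = module.azure_lb.modout_poolid[0]
```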

1 post - 1 participant


Partial Terraform


Once in a while I discover an interesting setting in the resources I manage, and I want to track only that setting. Sometimes even without prior state — restoring the state directly from the resource endpoint I supply to it, which is enough to identify the resource for a single request, such as a GitLab project URL in this case.

This would not only make Terraform care about a partial resource definition, but would also make it stateless. I wonder how hard it would be to patch Terraform to allow this use case?

1 post - 1 participant


How can I use the team token to automate plan/apply via CI?


Hi,

Using Terraform Cloud, how can I use the team token to authenticate to our account in our CI process?

Currently, I tried to add the contents of /home/tclaro/.terraform.d/credentials.tfrc.json with the newly generated token in the CI container that we are using:

mkdir -p /root/.terraform.d/
echo -n "$TF_TOKEN" > /root/.terraform.d/credentials.tfrc.json
chmod 600 /root/.terraform.d/credentials.tfrc.json

But still, when the CI process executes, it prompts:

Initializing the backend…

Error: Required token could not be found

Run the following command to generate a token for app.terraform.io:
terraform login

So, how can I pass the team token as an environment variable?

edit:

After checking the following document, it doesn’t seem quite clear how I can achieve that:


on the CLI config, using:

credentials "app.terraform.io" {
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
}
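Worth noting: that credentials block is the HCL CLI-configuration syntax (e.g. for ~/.terraformrc), while credentials.tfrc.json expects the same information as JSON — echoing the bare token into the file will not parse. The documented JSON shape is:

```json
{
  "credentials": {
    "app.terraform.io": {
      "token": "xxxxxx.atlasv1.zzzzzzzzzzzzz"
    }
  }
}
```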

Would this go directly in the .tf files? Sorry for all the questions, but I’m confused about how to achieve CI integration with Terraform.

1 post - 1 participant


Create multiple rules in AWS security Group


Hi,

I tried to create an AWS security group with multiple inbound rules. Normally we need multiple ingress rules in the SG for multiple inbound rules. Instead of creating the ingress rules separately, I tried to create a list of ingress rules so that I can easily reuse the module for different applications.

PFB,

module/sg/sg.tf >>

resource "aws_security_group" "ec2_security_groups" {
  name   = var.name_security_groups
  vpc_id = var.vpc_id
}

module/sg/rules.tf >>

resource "aws_security_group_rule" "ingress_rules" {
  count             = length(var.ingress_rules)
  type              = "ingress"
  from_port         = var.ingress_rules[count.index][0]
  to_port           = var.ingress_rules[count.index][1]
  protocol          = var.ingress_rules[count.index][2]
  cidr_blocks       = var.ingress_rules[count.index][3]
  description       = var.ingress_rules[count.index][4]
  security_group_id = aws_security_group.ec2_security_groups.id
}

module/sg/variable.tf >>

variable "vpc_id" {
}
variable "name_security_groups" {
}
variable "ingress_rules" {
    type = list(string)
}

In the application folder,

application/dev/sg.tf >>

module "sg_test" {
  source = "../modules/sg"

  vpc_id               = "vpc-xxxxxxxxx"
  name_security_groups = "sg_test"
  ingress_rules        = var.sg_ingress_rules
}

application/dev/variable.tf >>

variable "sg_ingress_rules" {
    type        = list(string)
    default     = {
        [22, 22, "tcp", "1.2.3.4/32", "test"]
        [23, 23, "tcp", "1.2.3.4/32", "test"]
    }
}
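One way to make the per-rule fields explicit is a list of objects (a sketch; it requires Terraform 0.12+ and changes the variable's shape from the question):

```hcl
variable "sg_ingress_rules" {
  type = list(object({
    from_port   = number
    to_port     = number
    protocol    = string
    cidr_block  = string
    description = string
  }))
  default = [
    { from_port = 22, to_port = 22, protocol = "tcp", cidr_block = "1.2.3.4/32", description = "test" },
    { from_port = 23, to_port = 23, protocol = "tcp", cidr_block = "1.2.3.4/32", description = "test" },
  ]
}
```

The rule resource would then read `var.ingress_rules[count.index].from_port` etc., and since cidr_blocks takes a list, `cidr_blocks = [var.ingress_rules[count.index].cidr_block]`.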

Please help to correct this or if there is any other method please suggest.

Regards,

1 post - 1 participant



Is there a tfe_organization datasource or similar?


I’m trying to access the current organization’s name in Terraform Cloud to use in terraform_remote_state blocks for accessing outputs from other workspaces - is there a way to get the “current” organization, or do I need to manually specify it as a workspace variable? I’ve just come across the situation where I have a second organization and a hardcoded organization name no longer cuts it.

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = "my_org"
    workspaces = {
      name = var.tf_vpc
    }
  }
}
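A common fallback, assuming no data source exposes the current organization: declare it as a variable and set it per workspace, so the remote-state block stays portable:

```hcl
variable "tf_organization" {
  type = string
}

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = var.tf_organization
    workspaces = {
      name = var.tf_vpc
    }
  }
}
```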

1 post - 1 participant


Vnet and subnet creation using Terraform is not working


Terraform v0.12.26
provider.azuread v0.10.0
provider.azurerm v2.15.0
provider.external v1.2.0
provider.helm v0.10.5
provider.kubernetes v1.11.3
provider.local v1.4.0
provider.null v2.1.2
provider.random v2.2.1
provider.template v2.1.2
provider.tls v2.1.1

Terraform Configuration Files

I have 3 different configuration parts to create the network and subnet:

1. Everything works, but the cluster comes up in the custom region network.

resource "azurerm_virtual_network" "network" {
  name                = "${var.prefix}-${var.environment}-vnet"
  address_space       = var.vnet_cidr
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name

  subnet {
    name           = "${var.prefix}-${var.environment}-subnet"
    address_prefix = var.subnet_cidr
  }
}

Can’t use vnet_subnet_id because there is no subnet.id for this variant.

default_node_pool {
  name           = "default"
  node_count     = 1
  vm_size        = "Standard_D2_v3"
  vnet_subnet_id = azurerm_subnet.subnet.id
}

2. I can specify the subnet, and everything comes up as needed, but nginx does not deploy.

resource "azurerm_virtual_network" "network" {
  name                = "${var.prefix}-${var.environment}-vnet"
  address_space       = var.vnet_cidr
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name
}

resource "azurerm_subnet" "subnet" {
  name                 = "${var.prefix}-${var.environment}-subnet"
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.network.name
  address_prefixes     = var.subnet_cidr
}

default_node_pool {
  name           = "default"
  node_count     = 1
  vm_size        = "Standard_D2_v3"
  vnet_subnet_id = azurerm_subnet.subnet.id
}

3. The implementation of the first option through a data source, in order to get the subnet_id. Terraform shows an error:
   Error: Subnet "wm-test6-kubsubnet" (Virtual Network "wm-test6-vnet" / Resource Group "wm-test6") was not found.
All resources are created; using depends_on doesn't help.

resource "azurerm_virtual_network" "network" {
  name                = "${var.prefix}-${var.environment}-vnet"
  address_space       = var.vnet_cidr
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name

  subnet {
    name           = "${var.prefix}-${var.environment}-subnet"
    address_prefix = var.subnet_cidr
  }
}

data "azurerm_resource_group" "rg" {
  name       = var.resource_group_name
  depends_on = [azurerm_virtual_network.network]
}

data "azurerm_subnet" "k8ssubnet" {
  name                 = "${var.prefix}-${var.environment}-kubsubnet"
  virtual_network_name = azurerm_virtual_network.network.name
  resource_group_name  = var.resource_group_name
}

default_node_pool {
  name           = "default"
  node_count     = 1
  vm_size        = "Standard_D2_v3"
  vnet_subnet_id = data.azurerm_subnet.k8ssubnet.id
}

1 post - 1 participant


0.12upgrade messing up with code logic and functionality


Hello!
We are migrating our infrastructure from Terraform 0.11 to 0.12, using the integrated 0.12upgrade tool. It is well documented and does its job well.

But on one of the layers I hit a weird problem: during code migration, the 0.12upgrade tool made a huge mess of the code logic and even the functionality. I spent a bunch of time investigating it, but no luck.

Here are some snippets of its changes:

EXAMPLE 1

Terraform v0.11 (original)

count = (local.logstash_node_count) * (data.terraform_remote_state.pki.outputs.ssm_agent_enabled ? 1 : 0) > 0 ? 1 : 0

Terraform v0.12.24 (converted code)

count = local.logstash_node_count * data.terraform_remote_state.pki.outputs.ssm_agent_enabled ? 1 : 0 > 0 ? 1 : 0
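The upgraded line drops the parentheses, and since the conditional binds more loosely than `*` and `>`, it now parses as `(local.logstash_node_count * …ssm_agent_enabled) ? 1 : (0 > 0 ? 1 : 0)` — a different expression. A parenthesization that should preserve the 0.11 semantics (my reading, worth verifying against your plan output):

```hcl
count = (local.logstash_node_count * (data.terraform_remote_state.pki.outputs.ssm_agent_enabled ? 1 : 0)) > 0 ? 1 : 0
```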

EXAMPLE 2

Terraform v0.11 (original)

port = "${lookup(var.additional_context_based_tg[element(keys(var.additional_context_based_tg),count.index)], "port")}"

Terraform v0.12.24 (converted code)

port = var.additional_context_based_tg[element(keys(var.additional_context_based_tg), count.index)]["port"]

Perhaps someone has faced something similar, or knows the root cause and can point me in the right direction to resolve it.

1 post - 1 participant


VPC Peering problem cross account


Hi,

I have an existing VPC peering connection between AWS Account 1 and AWS Account 2 which was created manually. I am now trying to do the same via Terraform between the same accounts, but the result seems to put the accepter/requester back to front, and when I try to switch the peer_owner_id and peer_vpc_id around, it then complains that the vpc_id is incorrect. I’ve tried making changes to the syntax, but no luck so far.

Currently, I have a working VPC peer which was set up manually. But this is a new AWS account and I need to recreate everything as infrastructure as code via Terraform. I need to replicate the configuration below:

Requester VPC owner = 716270604444
Requester VPC ID  = vpc-48947456
Requester VPC Region = London (eu-west-2)
Requester VPC CIDRs = 10.222.0.0/16
VPC Peering Connection = pcx-0ff4f1637e9f9e432

Accepter VPC owner = 874855963333
Accepter VPC ID = vpc-0fe237630b00f9387
Accepter VPC Region = London (eu-west-2)
Accepter VPC CIDRs = 10.240.0.0/16
Peering connection status = Active

Currently, my Terraform .tf file is set up with the syntax below, and as mentioned above it does create the peer, but back to front:

resource "aws_vpc_peering_connection" "vpc-peer-mgmt" {
  peer_owner_id = "874855963333"
  peer_vpc_id   = "vpc-48947456"
  vpc_id        = aws_vpc.prod-vpc.id
}
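For comparison, the usual cross-account shape (a sketch; the aliased provider for the second account is an assumption) keeps vpc_id pointing at the requester's VPC and puts the accepter's account and VPC in the peer_* arguments, with an explicit accepter resource on the other side:

```hcl
# Requester side (account 716270604444)
resource "aws_vpc_peering_connection" "vpc-peer-mgmt" {
  vpc_id        = aws_vpc.prod-vpc.id     # requester VPC
  peer_owner_id = "874855963333"          # accepter account
  peer_vpc_id   = "vpc-0fe237630b00f9387" # accepter VPC
}

# Accepter side, via a provider aliased to account 874855963333 (assumed alias)
resource "aws_vpc_peering_connection_accepter" "peer" {
  provider                  = aws.accepter
  vpc_peering_connection_id = aws_vpc_peering_connection.vpc-peer-mgmt.id
  auto_accept               = true
}
```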

Any advice would be much appreciated.

Thanks.

1 post - 1 participant


Terraform import azurerm_virtual_machine.XXXX


Hi all,
I’ve been working on importing Azure resources, and so far I’ve had no problem with
azurerm_resource_group, azurerm_virtual_network, azurerm_subnet, and azurerm_network_interface. Now I’m trying to import an azurerm_virtual_machine that I created yesterday through the Azure portal, so I can test the import on it (before I move on to my production resources).
I made sure that the VM resource ID is right and that the resource address is in the configuration file, like I did before for the other resources, but I get this error:

    Error: resource address "azurerm_virtual_machine.MGTest_VM1" does not exist in the configuration.

Before importing this resource, please create its configuration in the root module. For example:

resource "azurerm_virtual_machine" "MGTest_VM1" {
  # (resource arguments)
}

I’m not sure what is going on.
I also tried defining the block using the newer azurerm_windows_virtual_machine, but I get the same problem.
Does anyone have any ideas?

1 post - 1 participant

