Channel: Terraform - HashiCorp Discuss

Source of providers might cause an (Error: Provider configuration not present)


I had a null_resource that I deleted, and now terraform plan is failing because the provider no longer exists. The provider is needed in the configuration to allow Terraform to clean up the resource.

So, in order to solve the issue, I added the following to main.tf:

terraform {
  required_providers {
    archive = "~> 1.3.0"
    null = "~> 2.1.2"
  }
}

However, when I run terraform plan again, it fails with:

Error: Provider configuration not present

To work with null_resource.build_deps its original provider configuration at
provider["registry.terraform.io/-/null"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy null_resource.build_deps, after which you can remove the provider
configuration again.

So I ran terraform providers and I got the following:

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/archive] ~> 1.3.0
├── provider[registry.terraform.io/hashicorp/null] ~> 2.1.2
└── provider[registry.terraform.io/hashicorp/aws] ~> 2.0

Providers required by state:

    provider[registry.terraform.io/-/archive]
    provider[registry.terraform.io/-/aws]
    provider[registry.terraform.io/-/null]

Question
Looking at the two lists of providers (required by configuration vs. required by state), you can see a slight difference in the source path: the configuration providers are pulled from hashicorp in the path, while the state providers have a dash (-) in the path.

Any idea if this is what's causing the problem? If so, how can I fix it? By declaring the providers differently, maybe?
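
For what it's worth, provider["registry.terraform.io/-/null"] is the legacy placeholder address that Terraform 0.13 assigns to providers recorded in state by 0.12 and earlier, and 0.13 ships a state command to migrate them. A sketch, one command per legacy provider in the state:

terraform state replace-provider registry.terraform.io/-/null registry.terraform.io/hashicorp/null
terraform state replace-provider registry.terraform.io/-/archive registry.terraform.io/hashicorp/archive
terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws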

1 post - 1 participant




Unable to Create local secondary index


I am unable to create a local secondary index in Terraform; I'm getting the error below.

error creating DynamoDB Table: ValidationException: One or more parameter values were invalid: ProjectionType is INCLUDE, but NonKeyAttributes is not specified
status code: 400, request id: 5CEKG1JU2HAQH51L0APBB8G0QFVV4KQNSO5AEMVJF66Q9ASUAAJG

Code is:

resource "aws_dynamodb_table" "GroupTable" {
  name         = "${var.dynamo_environment}-group-table"
  billing_mode = "PAY_PER_REQUEST"

  attribute {
    name = "owner_instance_id"
    type = "S"
  }

  attribute {
    name = "id"
    type = "S"
  }

  attribute {
    name = "lower_group_name"
    type = "S"
  }

  attribute {
    name = "spec_hash"
    type = "S"
  }

  hash_key  = "owner_instance_id"
  range_key = "id"

  local_secondary_index {
    name               = "InstanceIdGroupNameIndex"
    range_key          = "lower_group_name"
    projection_type    = "INCLUDE"
    non_key_attributes = ["id"]
  }

  local_secondary_index {
    name               = "InstanceIdSpecHashIndex"
    range_key          = "spec_hash"
    projection_type    = "INCLUDE"
    non_key_attributes = ["id"]
  }

  point_in_time_recovery {
    enabled = true
  }
}

Can someone help?

1 post - 1 participant


Terraform import: Gitlab variable


Does anyone know the proper syntax for filtering resources by environment_scope when trying to import existing Gitlab variables? Below is my command and the error I receive. I have multiple variables with the same name but they are contained within separate Gitlab environments. Thanks for any help.
Command
terraform import module.cicd_sre.gitlab_project_variable.key_id 1234567:ACCESS_KEY_ID
Error
Error: GET https://gitlab.com/api/v4/projects/1234567/variables/ACCESS_KEY_ID: 409 {message: There are multiple variables with provided parameters. Please use 'filter[environment_scope]'}
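
In case it's useful: recent versions of the GitLab provider document a three-part import ID for scoped variables, of the form <project>:<key>:<environment_scope>. A sketch (the production scope below is a placeholder):

terraform import module.cicd_sre.gitlab_project_variable.key_id 1234567:ACCESS_KEY_ID:production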

1 post - 1 participant


404 error from RHEL Repo


Good day! I'm working on setting up some systems for my fellow devs, and the repo for RHEL seems to be giving me 404 errors.

This is what I am seeing:

failure: repodata/repomd.xml from hashicorp: [Errno 256] No more mirrors to try.
https://rpm.releases.hashicorp.com/RHEL/7Workstation/x86_64/stable/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
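
A guess at the cause: on RHEL Workstation variants, yum expands $releasever to 7Workstation, and rpm.releases.hashicorp.com appears to publish only a plain RHEL/7 tree, so the generated URL 404s. Pinning the release in the repo file may help (an untested sketch):

sudo sed -i 's/\$releasever/7/g' /etc/yum.repos.d/hashicorp.repo
sudo yum clean all && sudo yum makecache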

1 post - 1 participant


aws_acm_certificate domains when there is more than one aws_acm_certificate.this


I’m trying to follow the example at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/acm_certificate#referencing-domain_validation_options-with-for_each-based-resources, but I can’t seem to work out how to handle a situation where there’s more than one aws_acm_certificate.example.

It seems like the following should work, but I keep getting the error below.

variable "create_load_balancer" {
  default     = true
  description = "Controls if the Load Balancer should created (it effects almost all resources)."
  type        = bool
}

variable "target_groups" {
  type = map(list(string))
}

locals {
  tuple_of_maps = [
    for target_group, domains in var.target_groups :
    { for domain in domains : domain => target_group }
  ]

  domains = merge(flatten([local.tuple_of_maps])...)
}

resource "aws_acm_certificate" "this" {
  for_each = var.create_load_balancer ? local.domains : {}

  domain_name       = each.key
  validation_method = "DNS"

  tags = {
    ManagedBy = "Terraform"
  }

  lifecycle {
    create_before_destroy = true
  }
}

locals {
  domain_validation_options = var.create_load_balancer ? flatten([
    for d in local.domains :
    {
      for dvo in aws_acm_certificate.this[d.key].domain_validation_options : dvo.domain_name => {
        resource_record_name  = dvo.resource_record_name
        resource_record_value = dvo.resource_record_value
        resource_record_type  = dvo.resource_record_type
      }
    }
  ]) : []
}

resource "aws_route53_record" "this" {
  for_each = {
    for dvo in local.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  type            = each.value.type
  zone_id         = var.zone_id
  records         = [each.value.record]
  ttl             = 300
}

Error: Unsupported attribute

  on modules/util/aws/https-load-balancer/main.tf line 80, in locals:
  80:       for dvo in aws_acm_certificate.this[d.key].domain_validation_options : dvo.domain_name => {

This value does not have any attributes.
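
For reference, local.domains is a map, and for d in local.domains iterates the map's values (the target-group names), so d.key is undefined; that matches the "does not have any attributes" error. One way around it is to iterate the certificates themselves. A sketch under the same variable layout:

locals {
  domain_validation_options = var.create_load_balancer ? merge([
    for domain, cert in aws_acm_certificate.this : {
      for dvo in cert.domain_validation_options : dvo.domain_name => {
        resource_record_name  = dvo.resource_record_name
        resource_record_value = dvo.resource_record_value
        resource_record_type  = dvo.resource_record_type
      }
    }
  ]...) : {}
}

resource "aws_route53_record" "this" {
  for_each = local.domain_validation_options

  allow_overwrite = true
  name            = each.value.resource_record_name
  type            = each.value.resource_record_type
  zone_id         = var.zone_id
  records         = [each.value.resource_record_value]
  ttl             = 300
}

Keeping the result a map (a merge of per-certificate maps) rather than a flattened list also lets the route53 for_each consume it directly.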

1 post - 1 participant


Retrieve an item from an object, then manipulate it


Hi all,
Firstly, excuse me if the question seems silly. My excuse: I come from an Ops background :worried:

I am working with the azurerm_log_analytics_solution resource, and it seems that the solution_name must match the product name.
For example, the product would be OMSGallery/Updates and the solution_name must be Updates.

My challenge is to extract “Updates” from product.

I have declared this variable to be used:

variable "plan" {
  type = list(object(
    {
      publisher = string
      product   = string
    }
  ))

  default = [
    {
      publisher = "Microsoft"
      product   = "OMSGallery/Updates"
    }
  ]
}

I am struggling to extract “Updates” from this variable.

I have tried various functions, and using the Terraform console this is easily done when not tackling an object: element(split("/", "Product/Updates"), 1)
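
The same functions work against the object attribute; a sketch against the variable above (valid on 0.12):

locals {
  solution_names = [for p in var.plan : element(split("/", p.product), 1)]
}

For the single default entry, element(split("/", var.plan[0].product), 1) yields "Updates".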

Using Terraform v0.12.26

Any help would be appreciated.
Thanks.

1 post - 1 participant


Variables locals and output


Hi

I am trying to sort out a scenario/module where I want to

a. define a variable in the variables.tf file
b. perform some business logic inside main.tf using the null provider and a null_resource block
c. in the null_resource block, store the processed output in the variable defined in step (a)
d. output the variable in output.tf, which can in turn be consumed by other modules

My variables.tf contains the definition for a variable called value; it's of type string and has no default value.

This is how my main.tf looks:
provider "null" { }

resource "null_resource" "get-valuefromfile" {var.value= file("/var/result/value.txt")}

In the output.tf file,

    output "valuefromfile" {value = ["${var.value}"]}

Whenever I run this, I get:
on main.tf line 15, in resource “null_resource” “get-valuefromfile”:
15: var.value= file("/var/result/value.txt")

An argument or block definition is required here. To set an argument, use the
equals sign “=” to introduce the argument value.

I am not able to figure out why the value assignment throws an error.
What are the possible ways of accomplishing this?

Going by the same analogy, can I do this inside the null_resource block?
var.value1 = local.local_variable_lookup_from_a_map1
var.value2 = local.local_variable_lookup_from_a_map2
var.value3 = “{var.value1}{var.value2}”
output var.value3
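
Worth noting: Terraform input variables are read-only from inside a configuration; they can only be set by the caller, which is why the parser rejects var.value = … as an argument definition. The usual stand-in for a computed, reusable value is a local. A minimal sketch:

locals {
  value = file("/var/result/value.txt")
}

output "valuefromfile" {
  value = local.value
}

file() is evaluated at plan time, so the file has to exist before the run; a null_resource isn't needed just to read it. The same applies to the analogy above: locals, not variables, are the place for derived values such as "${local.value1}${local.value2}".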

1 post - 1 participant



Pattern for Managing AWS EFS in terraform


So our use case is that we want to create some AWS infrastructure using the last good backup of an EFS drive. I keep hitting a dead end, so I'm wondering if anyone has a suggested approach or best practices. What I currently have is:

1 - Have a shell script, called from Terraform using a local-exec provisioner, that uses the AWS CLI to restore to a new EFS volume. It then mounts the volume, copies files into the correct location, etc.
2 - Have a tf file with the following, which then sets up mount targets etc. on the EFS file system:

data "aws_efs_file_system" "eks_cluster_persistence" { creation_token = "${var.cluster_name}-efs" }

so that the new EFS volume is created with the same creation token and picked up here. On v0.13.3 this errors on apply with:

Error: Search returned 0 results, please revise so only one is returned

I've tried several approaches and can't seem to find a sensible solution. Ideally this would be a one-step apply.

  • I could run the script first and then import the EFS resource, I guess.
  • I could create the EFS volume in Terraform and then manually copy all the data from another restored EFS, but this seems really heavyweight and slow.

Any ideas gratefully accepted.
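
One possibility for keeping this a one-step apply (a sketch; restore_efs.sh stands in for the actual restore script): run the script from a null_resource and make the data source wait on it with depends_on, so the lookup only happens once the volume exists:

resource "null_resource" "efs_restore" {
  provisioner "local-exec" {
    command = "./restore_efs.sh ${var.cluster_name}-efs"  # hypothetical restore script
  }
}

data "aws_efs_file_system" "eks_cluster_persistence" {
  creation_token = "${var.cluster_name}-efs"
  depends_on     = [null_resource.efs_restore]
}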

1 post - 1 participant


Assistance with upgrading from 0.11.x to 0.12.x


I’m getting a lot of errors like this

Error: Missing item separator
on <value for var.diskspace_status> line 1:
(source code not available)
Expected a comma to mark the beginning of the next item.

When trying to run terraform apply to install my project.
I pass the value into Terraform like this:

-var diskspace_status=$(extract_tf_list_value $check_tfstate_path diskspace_status)

I pass this variable

variable "diskspace_status" {
description = "Diskspace pre requisite"
type        = list(string)
default     = []
}

into the checker module like this:

module "registry_validate_prerequisites" {
  source           = "./../../common/checkers/prerequisites/validate"
  diskspace_status = var.diskspace_status
}

And from there, this is the contents of the main.tf for this resource:

resource "null_resource" "prerequisite_diskspace" {

  count = "${var.enable == "true" && contains(var.diskspace_status, "not_pass") ? 1 : 0}"
}
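
One reading of the error (an assumption, since the output of extract_tf_list_value isn't shown): in 0.12, a -var value for a list(string) variable must be valid HCL, i.e. bracketed, quoted, and comma-separated. If the helper emits 0.11-style or unquoted output, 0.12 stops parsing with "Missing item separator". The expected shape is:

terraform apply -var 'diskspace_status=["pass","not_pass"]'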

2 posts - 2 participants


Help On Dynamic Blocks


I am a newbie and I am trying to create the security group resource as explained below, but I'm getting an error. Can any expert please guide me?

locals {
	Security_Group_Config = {
		# 1st Security Group Details
		Resource_Type		=	"Security Group"
		Security_Group_List	=	[
			{
				Name = "SG_Database_RDS"
				Description = "Allow only specific traffic to hit RDS Databases."
				Ingress_Rules_List =	[
					{
						from_port   = "8080"
						to_port     = "8080"
						protocol    = "tcp"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Inbound Description"
					},
					{
						from_port   = "8081"
						to_port     = "8081"
						protocol    = "tcp"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Inbound Description"
					}
				]
				Egress_Rules_List	=	[
					{
						from_port   = "0"
						to_port     = "0"
						protocol    = "0"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Outbound Description"
					},
					{
						from_port   = "1"
						to_port     = "1"
						protocol    = "2"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Outbound Description"
					}
				]
			},
			{
				Name = "SG_Database_Redshift"
				Description = "Allow only specific traffic to hit RDS Databases."
				Ingress_Rules_List =	[
					{
						from_port   = "8080"
						to_port     = "8080"
						protocol    = "tcp"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Inbound Description"
					},
					{
						from_port   = "8081"
						to_port     = "8081"
						protocol    = "tcp"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Inbound Description"
					}
				]
				Egress_Rules_List	=	[
					{
						from_port   = "0"
						to_port     = "0"
						protocol    = "0"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Outbound Description"
					},
					{
						from_port   = "1"
						to_port     = "1"
						protocol    = "2"
						cidr_blocks	= "1.2.3.4/32"
						description = "Testing Security Group Outbound Description"
					}
				]
			}
		]
		
	}
  
}

	module "Security_Group_Module" {
		source 						= 	"./Networking/SecurityGroup"
		Input_VPC_ID 				=	"${module.Module_VPC.Output_Private_VPC_1_id}"
		Input_Standard_Tags			=	local.Standard_Tags
		Input_Resource_Type			=	local.Security_Group_Config.Resource_Type
		Input_Security_Group		=	local.Security_Group_Config	
	}

And my SecurityGroup_main.tf is something like this:

resource "aws_security_group" "TF_SG_1" {
	vpc_id		=	var.Input_VPC_ID

	dynamic "SG_Config" {
		for_each	=	var.Input_Security_Group.Security_Group_List
		iterator	=	"SG_Cnt"
		content {
				name		=	SG_Cnt.Name
				description	=	SG_Cnt.Description
				tags = merge(
				var.Input_Standard_Tags,
				{
					Name 			= 	SG_Cnt.Name
					Resource_Type	=	var.Input_Resource_Type
				},
				)
			 
				lifecycle {
					ignore_changes = [tags.Created_On]
				}
			
			dynamic "Ingress_Config" {
				for_each 	=	SG_Cnt[value].Ingress_Rules
				iterator	=	"Ingress_Cnt"
				content {
							from_port   = Ingress_Cnt.value.from_port
							to_port     = Ingress_Cnt.value.to_port
							protocol    = Ingress_Cnt.value.protocol
							cidr_blocks	= Ingress_Cnt.value.cidr_blocks
							description = Ingress_Cnt.value.description				
						}
			}
		}
	}
}
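
For orientation (not from the original thread): a dynamic block can only generate nested blocks such as ingress and egress inside one resource; it cannot stamp out multiple security groups, name/description/tags are plain arguments rather than blocks, lifecycle cannot be generated dynamically, and an iterator is an identifier, not a quoted string. The usual pattern is for_each on the resource itself; a sketch against the locals above:

resource "aws_security_group" "TF_SG" {
  for_each = { for sg in var.Input_Security_Group.Security_Group_List : sg.Name => sg }

  name        = each.key
  description = each.value.Description
  vpc_id      = var.Input_VPC_ID

  dynamic "ingress" {
    for_each = each.value.Ingress_Rules_List
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = [ingress.value.cidr_blocks] # the local stores a single CIDR string
      description = ingress.value.description
    }
  }

  dynamic "egress" {
    for_each = each.value.Egress_Rules_List
    content {
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = [egress.value.cidr_blocks]
      description = egress.value.description
    }
  }

  tags = merge(
    var.Input_Standard_Tags,
    {
      Name          = each.key
      Resource_Type = var.Input_Resource_Type
    },
  )

  lifecycle {
    ignore_changes = [tags["Created_On"]]
  }
}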

3 posts - 2 participants


Conditionally create a service linked policy


I’m hitting an idempotency issue: https://github.com/terraform-community-modules/tf_aws_elasticsearch/issues/23

My initial gut reaction is to use count to make this conditional, like so:

data "aws_iam_role" "service_linked_role" {
  name = "AWSServiceRoleForAmazonElasticsearchService"
}

resource "aws_iam_service_linked_role" "es" {
  aws_service_name = "es.amazonaws.com"

  count = data.aws_iam_role.service_linked_role.id != "" ? 0 : 1
}

But HashiCorp has decided they don't want to support this: https://github.com/hashicorp/terraform/issues/16380

The next option I can think of is to move this out of my modules/aws-elasticsearch up into my main.tf, but it belongs with the Elasticsearch code, imho.

Am I stuck making a modules/aws-elasticsearch-setup module which only gets called once??? There has to be a better way!
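
One escape hatch (a sketch, not an official pattern): since the data source lookup fails outright when the role is absent, push the decision to the caller as a boolean flag and let count consume it:

variable "create_service_linked_role" {
  description = "Set to false when AWSServiceRoleForAmazonElasticsearchService already exists in the account."
  type        = bool
  default     = true
}

resource "aws_iam_service_linked_role" "es" {
  count            = var.create_service_linked_role ? 1 : 0
  aws_service_name = "es.amazonaws.com"
}

This keeps the resource inside modules/aws-elasticsearch; the module just has to be told about a pre-existing role rather than discovering it.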

1 post - 1 participant


EKS create worker group with same sg as existing managed node group


We began with a single managed node group created in terraform, but realized we needed to create some groups with node taints and found that using worker_groups_launch_template was the easiest way to accomplish that.

However, we need connectivity between both of these ASGs. I first tried specifying additional_security_group_ids = [module.eks.cluster_primary_security_group_id], which works networking-wise. However, our nginx-ingress-controller errors when the EC2 instance is part of multiple SGs.

Is it possible to create a worker group with only the cluster_primary_security_group_id SG?

Thanks in advance for any help!

1 post - 1 participant


What happens once a plan finishes


Once a plan is finished, what other tasks does Terraform Cloud perform? I have some simple workspaces where the plan takes 3 minutes to return changes, then it takes another 5 to 7 minutes where it stays in planning before returning and allowing the apply.

1 post - 1 participant


Explicit Providers in Modules


What is the recommended method to delete the resources created by a module with external providers? Is “terraform destroy -target=xxx” the only option?

module "windows_vm_xxx" {
  source = "../virtual_machines/windows"
  ...
  ...
  providers = {
    azurerm.terraform_keyvault = azurerm.terraform_keyvault
  }
}

Removing the above module block from the .tf file and running terraform apply throws the error below:

Error: Provider configuration not present

To work with
module.windows_vm_xxx.data.azurerm_key_vault_secret.domain_username
its original provider configuration at
module.windows_vm_xxx.provider.azurerm.terraform_keyvault is required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.windows_vm_xxx.data.azurerm_key_vault_secret.domain_username,
after which you can remove the provider configuration again.
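
As the error suggests, the provider configuration has to outlive the resources it manages, so the sequence matters more than the syntax. A sketch of the usual route:

terraform destroy -target=module.windows_vm_xxx
# once the destroy completes, remove the module block (and the now-unused
# azurerm.terraform_keyvault provider configuration) and run terraform apply

So yes, a targeted destroy, run before deleting the block, is the standard answer for modules wired to explicit provider configurations.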

2 posts - 1 participant



Error: Failed to query available provider packages


I am new to Terraform, although I did get it running about a year ago.

I am trying again now, but I am having problems at the first step: terraform init on a brand-new project (the terraform-docker-demo project from the https://learn.hashicorp.com/tutorials/terraform/install-cli instructions).

It could be some old config somewhere, but I did delete my /usr/local/bin/terraform and reinstall the latest version, so it should be quite clean.

Here is my trace log…

paul@MyMac terraform-docker-demo % terraform init
2020/09/22 09:59:31 [INFO] Terraform version: 0.13.3
2020/09/22 09:59:31 [INFO] Go runtime version: go1.14.7
2020/09/22 09:59:31 [INFO] CLI args: []string{"/usr/local/bin/terraform", "init"}
2020/09/22 09:59:31 [DEBUG] Attempting to open CLI config file: /Users/paul/.terraformrc
2020/09/22 09:59:31 [DEBUG] File doesn’t exist, but doesn’t need to. Ignoring.
2020/09/22 09:59:31 [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2020/09/22 09:59:31 [DEBUG] ignoring non-existing provider search directory /Users/paul/.terraform.d/plugins
2020/09/22 09:59:31 [DEBUG] ignoring non-existing provider search directory /Users/paul/Library/Application Support/io.terraform/plugins
2020/09/22 09:59:31 [DEBUG] ignoring non-existing provider search directory /Library/Application Support/io.terraform/plugins
2020/09/22 09:59:31 [INFO] CLI command args: []string{"init"}

Initializing the backend…
2020/09/22 09:59:31 [TRACE] Meta.Backend: no config given or present on disk, so returning nil config
2020/09/22 09:59:31 [TRACE] Meta.Backend: backend has not previously been initialized in this working directory
2020/09/22 09:59:31 [DEBUG] New state was assigned lineage “22591307-22ae-5837-7e84-15958333507f”
2020/09/22 09:59:31 [TRACE] Meta.Backend: using default local state only (no backend configuration, and no existing initialized backend)
2020/09/22 09:59:31 [TRACE] Meta.Backend: instantiated backend of type
2020/09/22 09:59:31 [DEBUG] checking for provisioner in “.”
2020/09/22 09:59:31 [DEBUG] checking for provisioner in “/usr/local/bin”
2020/09/22 09:59:31 [INFO] Failed to read plugin lock file .terraform/plugins/darwin_amd64/lock.json: open .terraform/plugins/darwin_amd64/lock.json: no such file or directory
2020/09/22 09:59:31 [TRACE] Meta.Backend: backend does not support operations, so wrapping it in a local backend
2020/09/22 09:59:31 [TRACE] backend/local: state manager for workspace "default" will:
  - read initial snapshot from terraform.tfstate
  - write new snapshots to terraform.tfstate
  - create any backup at terraform.tfstate.backup
2020/09/22 09:59:31 [TRACE] statemgr.Filesystem: reading initial snapshot from terraform.tfstate
2020/09/22 09:59:31 [TRACE] statemgr.Filesystem: snapshot file has nil snapshot, but that's okay
2020/09/22 09:59:31 [TRACE] statemgr.Filesystem: read nil snapshot

2020/09/22 09:59:31 [TRACE] providercache.fillMetaCache: scanning directory .terraform/plugins
Initializing provider plugins…
2020/09/22 09:59:31 [TRACE] getproviders.SearchLocalDirectory: .terraform/plugins is a symlink to .terraform/plugins
2020/09/22 09:59:31 [DEBUG] Service discovery for registry.terraform.io at https://registry.terraform.io/.well-known/terraform.json
2020/09/22 09:59:31 [TRACE] HTTP client GET request to https://registry.terraform.io/.well-known/terraform.json

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
terraform-providers/docker: could not connect to registry.terraform.io: Failed
to request discovery document: Get
"https://registry.terraform.io/.well-known/terraform.json": net/http: request
canceled while waiting for connection (Client.Timeout exceeded while awaiting
headers)
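
The timeout points at connectivity between this machine and the registry rather than at Terraform configuration. A quick check worth running from the same shell (plain curl, nothing Terraform-specific):

curl -v https://registry.terraform.io/.well-known/terraform.json

If that also hangs, the usual suspects are proxies, VPNs, or firewalls; Terraform honors the standard HTTP_PROXY/HTTPS_PROXY environment variables.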

1 post - 1 participant


Terraform Actions


Hello all,

I was wondering what the best way is to pass credentials for GitHub Actions when you are using the local CLI version, not Enterprise or Cloud.

I can generate access keys from an IAM user in AWS and use a run step that exports the key variables, which may work; however, I'm not sure how safe that would be.

Does anyone have any ideas on the best way to manage this? I would normally go with Terraform Cloud; however, I know some companies are really hesitant about changing to that due to the extra cost.

1 post - 1 participant


Runs from TFE Standalone: resource "local_file" not writing to a file

resource "local_file" "command" {
  content  = <<EOT ${local.command}
    EOT
  filename = "/var/result/command.sh"
}

Hi, I am trying to write some content to a file using the local_file resource.
Whenever I execute this code from my local PC, with an open-source Terraform installation and the path pointing to a file share, the entire module works.

But when I try to execute the same thing from the TFE server, it is simply not able to write to the file at the specified path. The path /var/result/command.sh is a valid path on the Linux machine hosting TFE.

1 post - 1 participant


Best Practices for creating/mounting new AWS EBS volumes


I was wondering if there are recommended best practices for creating extfs filesystems and then mounting them after you provision/attach the EBS volume through Terraform:

resource "aws_ebs_volume" "bookie0" {
  availability_zone = "eu-west-1a"
  type              = "gp2"
  size              = 40
}

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.bookie0.id
  instance_id = aws_instance.bookie0.id
}

What I get here is an EBS volume attached to the AWS instance as /dev/sdf, which is what I would expect. But now we need to create a filesystem on that device and mount it.

  1. Should something like this be done through Terraform?
  2. What is the recommended way of doing this?
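
One common answer to both questions (a sketch, assuming a Linux AMI on which the /dev/sdf attachment appears as /dev/xvdf, and a hypothetical /data mount point) is to leave filesystem creation to user data on the instance, keeping it idempotent so a reboot never reformats the disk:

resource "aws_instance" "bookie0" {
  # ... AMI, instance type, etc. as before ...

  user_data = <<-EOT
    #!/bin/bash
    # make a filesystem only if the device doesn't already carry one
    if ! blkid /dev/xvdf; then
      mkfs -t ext4 /dev/xvdf
    fi
    mkdir -p /data
    mount /dev/xvdf /data
  EOT
}

Since user data runs at first boot, possibly before the volume attachment completes, the other common option is a remote-exec provisioner on the aws_volume_attachment, which is guaranteed to run after the attach; configuration-management tooling is also a frequent choice for keeping this step out of Terraform entirely.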

1 post - 1 participant


Formatting array into string for Azure policy


Does anyone know how to format an "Array" parameter in Azure policy? Here is the code. All parameters require a string except listOfLocations, which requires an array. I've tried everything under the sun, even passing literals and jsonencode-ing the array. Here are the current code and the error produced.

variable "policy_initiatives" {
  type = list(object({
    policy_initiative = string
    policy_list = list(object({
      policy              = string
      effect              = string
      set_log_location    = bool # Used for rules that require a Log Analytics Location with parameter "logAnalytics"
      set_log_location_id = bool # Used for rules that require a Log Analytics Location with parameter "logAnalyticsWorkspaceID"
      regions             = list(string)
      operation           = string
      retention_days      = string # The required diagnostic logs retention in days with parameter "requiredRetentionDays"
      set_log_rg_name     = bool   # The resource group where the storage location resides. Set using parameter "rgName"
      set_storage_prefix  = bool   # The storage prefix name to prepend to the storage blob. Set using parameter "storagePrefix"
    }))
  }))

  description = "List of policy definitions (display names) for the Security Governance policyset"

  default = [
    {
      policy_initiative = "Security Governance"
      policy_list = [
        {
          policy              = "Network Watcher should be enabled"
          effect              = null
          set_log_location    = false
          set_log_location_id = false
          regions             = ["East US", "West US"]
          operation           = null
          retention_days      = null
          set_log_rg_name     = false
          set_storage_prefix  = false
        }
        # … (remaining policies in the default omitted)
      ]
    }
  ]
}

resource "azurerm_policy_set_definition" "security_governance" {
  name         = "Security governance"
  policy_type  = "Custom"
  display_name = "Security Governance"
  description  = "Assignment of the Security Governance initiative to subscription."

  metadata = <<METADATA
    {
    "category": "${var.policyset_definition_category}"
    }
METADATA

  dynamic "policy_definition_reference" {

    for_each = data.azurerm_policy_definition.security_governance
    content {
      policy_definition_id = policy_definition_reference.value.id
      reference_id         = policy_definition_reference.value.id
      parameters = {
        Effect                  = lookup(local.policy_initiatives_policy_map, policy_definition_reference.key, null)
        logAnalytics            = lookup(local.policy_initiatives_log_map, policy_definition_reference.key, null)
        logAnalyticsWorkspaceID = lookup(local.policy_initiatives_log_workspace_id_map, policy_definition_reference.key, null)
        operationName           = lookup(local.policy_initiatives_operation_map, policy_definition_reference.key, null)
        listOfLocations       = lookup(local.policy_initiatives_region_map, policy_definition_reference.key, null) == null ? null : lookup(local.policy_initiatives_region_map, policy_definition_reference.key, null)
        requiredRetentionDays = lookup(local.policy_initiatives_retention_days_map, policy_definition_reference.key, null)
        rgName                = lookup(local.policy_initiatives_rgname_map, policy_definition_reference.key, null)
        storagePrefix         = lookup(local.policy_initiatives_storageprefix_map, policy_definition_reference.key, null)
      }
    }
  }
}

Inappropriate value for attribute "parameters": element "listOfLocations":
string required.

The Azure policy definition says it requires an Array type. Here is the policy definition that I’m attempting to assign into a policy initiative (policy set):

  "properties": {
    "displayName": "Network Watcher should be enabled",
    "policyType": "BuiltIn",
    "mode": "All",
    "description": "Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights to your network in Azure.",
    "metadata": {
      "version": "1.0.0",
      "category": "Network"
    },
    "parameters": {
      "listOfLocations": {
        "type": "Array",
        "metadata": {
          "displayName": "Locations",
          "description": "Audit if Network Watcher is not enabled for region(s).",
          "strongType": "location"
        }
      }
    }
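
A possible explanation (hedged; it depends on the azurerm provider version in use): the parameters argument of policy_definition_reference is a map of strings, so an array value is rejected no matter how it's encoded. Later azurerm 2.x releases add parameter_values, which takes one JSON document for all of a reference's parameters and can therefore carry arrays. A sketch of the reference rewritten that way:

dynamic "policy_definition_reference" {
  for_each = data.azurerm_policy_definition.security_governance
  content {
    policy_definition_id = policy_definition_reference.value.id
    reference_id         = policy_definition_reference.value.id

    parameter_values = jsonencode({
      listOfLocations = {
        value = lookup(local.policy_initiatives_region_map, policy_definition_reference.key, [])
      }
    })
  }
}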

1 post - 1 participant

