Infrastructure as Code – Expert Network Consultant | Networking | Cloud | DevOps | IaC

Access Secrets from Azure Key Vault in Azure Kubernetes Service

Before we begin to discuss how to access secrets from Azure Key Vault in Azure Kubernetes Service, let us have a quick intro to Secrets in Kubernetes.

When you hear "secrets", what comes to mind is confidentiality and secrecy. In the world of Kubernetes, Secrets are essentially any values that you don't want the world to know about.

A password, an API key, a connection string to a database: these all fall under what a secret is. When comparing Secrets and ConfigMaps in Kubernetes, the main difference is the confidentiality of the data.

Both ConfigMaps and Secrets store data the same way, as key/value pairs, but ConfigMaps are designed for plain-text configuration data, while Secrets are meant for data that must be kept secure and confidential to the application exclusively.
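As a quick illustration, here is how a plain configuration value and a confidential value would typically be created with kubectl (the resource names below are only examples):

# Plain-text configuration goes into a ConfigMap
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug

# Confidential values go into a Secret (base64-encoded and access-controlled via RBAC)
kubectl create secret generic db-credentials --from-literal=DB_PASSWORD='S3cr3tValue'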

With the Azure Key Vault provider for the Secrets Store CSI Driver, secrets remain at rest in Key Vault, in a secure encrypted store. A secret is only present in the AKS cluster while a pod that mounts it as a volume is running; as soon as the hosting pod is removed, the secret is removed from the cluster. This is a better approach than native Kubernetes Secrets, which are retained in the cluster after the hosting pod is removed.

Start by defining a few variables for the names used throughout this walkthrough:

RESOURCE_GROUP=corp-infrastructure-rg
KV_RESOURCE_GROUP=corp-kv-infrastructure-rg
LOCATION=eastus
AKS_CLUSTER=corpakscluster

#Create a resource group for the AKS cluster:

az group create --name $RESOURCE_GROUP --location $LOCATION


 az aks create \
   --resource-group $RESOURCE_GROUP \
   --name $AKS_CLUSTER \
   --network-plugin azure \
   --enable-managed-identity \
   --enable-addons azure-keyvault-secrets-provider \
   --generate-ssh-keys

The az aks create output includes the user-assigned managed identity created for the Key Vault secrets provider add-on:

 "identity": {
        "clientId": "1456c162-3f04-40bc-a079-f1f3f7d22b16",
        "objectId": "9f8165b6-206f-4596-932f-31e80469700f",
}

Download the cluster credentials and configure kubectl to use them:

az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER
Merged "corpakscluster" as current context in /home/%user%/.kube/config

Check that the Secrets Store CSI Driver and the Azure Key Vault Provider are installed in the cluster:

$ kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'


When we enable the Azure Key Vault secrets provider, the add-on creates a user-assigned managed identity in the node managed resource group. We will store its client ID in a variable for later use.

View the client ID of the user-assigned managed identity:

az aks show -g $RESOURCE_GROUP -n $AKS_CLUSTER --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
1456c162-3f04-40bc-a079-f1f3f7d22b16


Store the client ID of the user-assigned managed identity in a variable:

KV_IDENTITY_RESOURCE_ID=$(az aks show -g $RESOURCE_GROUP -n $AKS_CLUSTER --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv)

Create Azure Key Vault
Create a resource group for Azure Key Vault:

az group create --name $KV_RESOURCE_GROUP --location $LOCATION

Create a key vault while storing its name in a variable:

KEY_VAULT_NAME="akscorpkeyvault${RANDOM}"
az keyvault create --name $KEY_VAULT_NAME --resource-group $KV_RESOURCE_GROUP --location $LOCATION
{
  "name": "akscorpkeyvault5493",
  "objectId": "ebejced9-2f89-8176-a9u3-657f75eb36bb",
  "tenantId": "46edb775-xy69-41z6-7be1-03e4a0997e49"
}

Create a secret and a key in the Vault for later demonstration:

az keyvault secret set --vault-name $KEY_VAULT_NAME -n FirstSecret --value StoredValueinFirstSecret
 "name": "FirstSecret",
  "tags": {
    "file-encoding": "utf-8"
  },
  "value": "StoredValueinFirstSecret"
}

Create a key in the Vault for later demonstration:

az keyvault key create --vault-name $KEY_VAULT_NAME -n FirstKey --protection software
    "n": "t6PMnN5hTR2Oicy/fuTzQgXo49EgkS7B61gJWOeQjfw8u9tO+YoRbnPgWMnDsQWE3xE/MJyt6R0w0QwHsQa28KjdzCfq6qvJSlTSyhFfU9VJIf2YkjFtSlOpoyqYXKmHC6cS3pLrWsxDdVZTpZrgcZ8ec2deowrLDnn9mL5OKljGHmEaptocVHGWGfs9VNlxNqDAhRC4IKCQSIt6pnXc+eLo6Es0J50WhqHTGdqMG5brJGSlgEVaZobeBuvyFIxEvtt33MDjjkdiXCjKoTl8IS7/LNlvLYtDTWRvazK390IUXpldICw0xAp3layR/IDZA0diLEwQzbdESkyO18osPQ==",

Grant the AKS key vault managed identity permissions to read (GET) your key vault and view its contents:

Set policy to access keys in your key vault

az keyvault set-policy -n $KEY_VAULT_NAME --key-permissions get --spn $KV_IDENTITY_RESOURCE_ID
"objectId": "ebejced9-2f89-8176-a9u3-657f75eb36bb", granted the permissions to read the object     "objectId": "9f8165b6-206f-4596-932f-31e80469700f"
 "keys": [
            "get"
          ],

Set policy to access secrets in your key vault

az keyvault set-policy -n $KEY_VAULT_NAME --secret-permissions get --spn $KV_IDENTITY_RESOURCE_ID
"objectId": "ebejced9-2f89-8176-a9u3-657f75eb36bb", granted the permissions to read the object     "objectId": "9f8165b6-206f-4596-932f-31e80469700f"
"secrets": [
            "get"
          ]

Set policy to access certs in your key vault

az keyvault set-policy -n $KEY_VAULT_NAME --certificate-permissions get --spn $KV_IDENTITY_RESOURCE_ID
 "certificates": [
            "get"
          ],
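To double-check that all three access policies landed on the vault, you can list them with the Azure CLI:

az keyvault show --name $KEY_VAULT_NAME --query properties.accessPolicies -o json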
Create Kubernetes resources
Store the tenant ID in a variable; you can get the value from the Azure AD tenant overview page:
TENANT_ID=<your tenant ID>   # e.g. TENANT_ID=46edb775-xy69-41z6-7be1-03e4a0997e49
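If you prefer the CLI, the tenant ID of the current subscription can be read directly:

TENANT_ID=$(az account show --query tenantId -o tsv)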

Create a SecretProviderClass with the following YAML, substituting your own values for userAssignedIdentityID, keyvaultName, tenantId, and the objects to retrieve from your key vault:


cat <<EOF | kubectl apply -f -
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-user-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true" # true since using managed identity
    userAssignedIdentityID: 1456c162-3f04-40bc-a079-f1f3f7d22b16 #$KV_IDENTITY_RESOURCE_ID
    keyvaultName: akscorpkeyvault5493    #$KEY_VAULT_NAME
    cloudName: ""
    objects:  |
      array:
        - |
          objectName: FirstSecret        #ExampleSecret
          objectType: secret    # object types: secret, key, or cert
          objectVersion: ""     # default to latest if empty
        - |
          objectName: FirstKey        #ExampleKey
          objectType: key
          objectVersion: ""
    tenantId: 46edb775-xy69-41z6-7be1-03e4a0997e49 #$TENANT_ID
EOF

secretproviderclass.secrets-store.csi.x-k8s.io/azure-kvname-user-msi configured
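You can confirm the resource exists before moving on:

kubectl get secretproviderclass azure-kvname-user-msi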

At this point, you need a pod that mounts the secret and the key using the SecretProviderClass we just created:


cat <<EOF | kubectl apply -f -
---
kind: Pod
apiVersion: v1
metadata:
  name: busybox-secrets-store-inline-user-msi
spec:
  containers:
    - name: busybox
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command:
        - "/bin/sleep"
        - "10000"
      volumeMounts:
      - name: secrets-store01-inline
        mountPath: "/mnt/secrets-store"
        readOnly: true
  volumes:
    - name: secrets-store01-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "azure-kvname-user-msi"
EOF


pod/busybox-secrets-store-inline-user-msi created

Validate that the secrets were mounted into the pod created earlier:

kubectl exec busybox-secrets-store-inline-user-msi -- ls /mnt/secrets-store/

Read the content(s) of the secret and key:

kubectl exec busybox-secrets-store-inline-user-msi -- cat /mnt/secrets-store/FirstSecret
kubectl exec busybox-secrets-store-inline-user-msi -- cat /mnt/secrets-store/FirstKey
How to Create Azure Standard Load Balancer with Backend Pools in Terraform
Image Reference: https://docs.microsoft.com/en-us/azure/load-balancer/media/load-balancer-overview/load-balancer.svg
Building infrastructure with code is where the majority of future cloud deployments are headed. In this architecture for how to create an Azure Standard Load Balancer with backend pools in Terraform, I have created an Azure Standard Load Balancer with backend pools to accommodate two Linux virtual machines.

Configure a Linux virtual machine in Azure using Terraform

How to Create Azure Standard Load Balancer with Backend Pools in Terraform

Below is a list of parts which constitutes this build.

  • Resource Group
  • Virtual Machines
  • Network Interfaces
  • Standard Loadbalancer
  • Availability Sets

As it appears in Azure: the backend address pool with the virtual machine NICs attached.

Open your IDE and create the following Terraform files:
providers.tf
network.tf
loadbalancer.tf
virtualmachines.tf

Clone the Git Code Repository

git clone https://github.com/expertcloudconsultant/createazureloadbalancer.git

#Create the provider configuration in providers.tf

#IaC on Azure Cloud Platform | Declare Azure as the Provider
# Configure the Microsoft Azure Provider
terraform {

  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }


}

provider "azurerm" {
  features {}
}

#Create the virtual network and subnets with Terraform in network.tf

#Create Resource Groups
resource "azurerm_resource_group" "corporate-production-rg" {
  name     = "corporate-production-rg"
  location = var.avzs[0] #Availability zone 0 always marks your primary region.
}



#Create Virtual Networks > Create Spoke Virtual Network
resource "azurerm_virtual_network" "corporate-prod-vnet" {
  name                = "corporate-prod-vnet"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name
  address_space       = ["10.20.0.0/16"]

  tags = {
    environment = "Production Network"
  }
}


#Create Subnet
resource "azurerm_subnet" "business-tier-subnet" {
  name                 = "business-tier-subnet"
  resource_group_name  = azurerm_resource_group.corporate-production-rg.name
  virtual_network_name = azurerm_virtual_network.corporate-prod-vnet.name
  address_prefixes     = ["10.20.10.0/24"]
}

#Create Private Network Interfaces
resource "azurerm_network_interface" "corpnic" {
  name                = "corpnic-${count.index + 1}"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name
  count               = 2

  ip_configuration {
    name                          = "ipconfig-${count.index + 1}"
    subnet_id                     = azurerm_subnet.business-tier-subnet.id
    private_ip_address_allocation = "Dynamic"

  }
}

#Create the standard load balancer with Terraform. loadbalancer.tf

#Create Load Balancer
resource "azurerm_lb" "business-tier-lb" {
  name                = "business-tier-lb"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name

  frontend_ip_configuration {
    name                          = "businesslbfrontendip"
    subnet_id                     = azurerm_subnet.business-tier-subnet.id
    private_ip_address            = var.env == "Static" ? var.private_ip : null
    private_ip_address_allocation = var.env == "Static" ? "Static" : "Dynamic"
  }
}


#Create Loadbalancing Rules

#Create Loadbalancing Rules
resource "azurerm_lb_rule" "production-inbound-rules" {
  loadbalancer_id                = azurerm_lb.business-tier-lb.id
  resource_group_name            = azurerm_resource_group.corporate-production-rg.name
  name                           = "ssh-inbound-rule"
  protocol                       = "Tcp"
  frontend_port                  = 22
  backend_port                   = 22
  frontend_ip_configuration_name = "businesslbfrontendip"
  probe_id                       = azurerm_lb_probe.ssh-inbound-probe.id
  backend_address_pool_ids        = ["${azurerm_lb_backend_address_pool.business-backend-pool.id}"]
 

}


#Create Probe

#Create Probe
resource "azurerm_lb_probe" "ssh-inbound-probe" {
  resource_group_name = azurerm_resource_group.corporate-production-rg.name
  loadbalancer_id     = azurerm_lb.business-tier-lb.id
  name                = "ssh-inbound-probe"
  port                = 22
}


#Create Backend Address Pool

#Create Backend Address Pool
resource "azurerm_lb_backend_address_pool" "business-backend-pool" {
  loadbalancer_id = azurerm_lb.business-tier-lb.id
  name            = "business-backend-pool"
}


#Automated Backend Pool Addition

#Automated Backend Pool Addition > Gem Configuration to add the network interfaces of the VMs to the backend pool.
resource "azurerm_network_interface_backend_address_pool_association" "business-tier-pool" {
  count                   = 2
  network_interface_id    = azurerm_network_interface.corpnic.*.id[count.index]
  ip_configuration_name   = azurerm_network_interface.corpnic.*.ip_configuration.0.name[count.index]
  backend_address_pool_id = azurerm_lb_backend_address_pool.business-backend-pool.id

}

This line of configuration is what intelligently adds the network interfaces to the backend pool. I call it a gem because it took me quite some time to figure it all out.

 ip_configuration_name   = azurerm_network_interface.corpnic.*.ip_configuration.0.name[count.index]
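Once applied, you can verify that both NICs landed in the pool with the Azure CLI (using the resource names from this article):

az network lb address-pool show \
  --resource-group corporate-production-rg \
  --lb-name business-tier-lb \
  --name business-backend-pool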


Create the Linux Virtual Machines virtualmachines.tf

# Create (and display) an SSH key
resource "tls_private_key" "linuxvmsshkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

#Custom Data Insertion Here
data "template_cloudinit_config" "webserverconfig" {
  gzip          = true
  base64_encode = true

  part {

    content_type = "text/cloud-config"
    content      = "packages: ['nginx']"
  }
}



# Create Network Security Group and rule
resource "azurerm_network_security_group" "corporate-production-nsg" {
  name                = "corporate-production-nsg"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name


  #Add rule for Inbound Access
  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = var.ssh_access_port # Referenced SSH Port 22 from vars.tf file.
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}


#Connect NSG to Subnet
resource "azurerm_subnet_network_security_group_association" "corporate-production-nsg-assoc" {
  subnet_id                 = azurerm_subnet.business-tier-subnet.id
  network_security_group_id = azurerm_network_security_group.corporate-production-nsg.id
}



#Availability Set - Fault Domains [Rack Resilience]
resource "azurerm_availability_set" "vmavset" {
  name                         = "vmavset"
  location                     = azurerm_resource_group.corporate-production-rg.location
  resource_group_name          = azurerm_resource_group.corporate-production-rg.name
  platform_fault_domain_count  = 2
  platform_update_domain_count = 2
  managed                      = true
  tags = {
    environment = "Production"
  }
}


#Create Linux Virtual Machines Workloads
resource "azurerm_linux_virtual_machine" "corporate-business-linux-vm" {

  name                  = "${var.corp}linuxvm${count.index}"
  location              = azurerm_resource_group.corporate-production-rg.location
  resource_group_name   = azurerm_resource_group.corporate-production-rg.name
  availability_set_id   = azurerm_availability_set.vmavset.id
  network_interface_ids = ["${element(azurerm_network_interface.corpnic.*.id, count.index)}"]
  size                  =  "Standard_B1s"  # "Standard_D2ads_v5" # "Standard_DC1ds_v3" "Standard_D2s_v3"
  count                 = 2


  #Create Operating System Disk
  os_disk {
    name                 = "${var.corp}disk${count.index}"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS" #Consider Storage Type
  }


  #Reference Source Image from Publisher
  source_image_reference {
    publisher = "Canonical"                    #az vm image list -p "Canonical" --output table
    offer     = "0001-com-ubuntu-server-focal" # az vm image list -p "Canonical" --output table
    sku       = "20_04-lts-gen2"               #az vm image list -s "20.04-LTS" --output table
    version   = "latest"
  }


  #Create Computer Name and Specify Administrative User Credentials
  computer_name                   = "corporate-linux-vm${count.index}"
  admin_username                  = "linuxsvruser${count.index}"
  disable_password_authentication = true



  #Create SSH Key for Secured Authentication - on Windows Management Server [Putty + PrivateKey]
  admin_ssh_key {
    username   = "linuxsvruser${count.index}"
    public_key = tls_private_key.linuxvmsshkey.public_key_openssh
  }

  #Deploy Custom Data on Hosts
  custom_data = data.template_cloudinit_config.webserverconfig.rendered

}
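With providers.tf, network.tf, loadbalancer.tf and virtualmachines.tf in place, deploy the stack with the standard Terraform workflow:

terraform init      # download the azurerm provider and initialise the working directory
terraform plan      # review the resources that will be created
terraform apply     # create the load balancer, NICs, availability set and virtual machines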

If you are interested in using the UI to create a solution as above, then follow Microsoft’s Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer and two virtual machines.

Configuring Remote State for Terraform

Configure and Store Terraform Remote State in Azure Storage

They say you should never put all your eggs in one basket, and that is true for your Terraform state file. In this article, Configuring Remote State for Terraform, I take you through the steps of creating a highly available Azure storage account which houses your remote state file and gives you better security and collaboration. The following set of steps is what is needed to successfully create and configure a remote state for your Terraform projects.

So what is Terraform Remote State?

By default, Terraform stores state locally in a file named terraform.tfstate. When working with Terraform in a team, use of a local file makes Terraform usage complicated, because each user must make sure they always have the latest state data before running Terraform and must also make sure that nobody else runs Terraform at the same time.

  1. Create resource group for Terraform state
  2. Create storage account
  3. Create Key Vault – If Non-Existent
  4. Export Storage Account Keys
  5. Create Storage Container
  6. Store the Storage Account Key as a Key Vault Secret
  7. Read the Storage Key back from Key Vault

Store values in variables

RESOURCE_GROUP_NAME=remote-terraform-state
STORAGE_ACCOUNT_NAME=tfstoragetrainingenc
CONTAINER_NAME=remote-terraform-container
KEYVAULT=tfkeyremotestate
KEYVAULTSECRET=tfbackend-state-secret

Create resource group for Terraform state

# Create resource group for Terraform state
az group create --name $RESOURCE_GROUP_NAME --location eastus

Create storage account

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

Create Key Vault

#Create Key Vault
az keyvault create --name $KEYVAULT --resource-group $RESOURCE_GROUP_NAME --location eastus

Export Storage Account Keys

#Export Storage Account Keys
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)
export ARM_ACCESS_KEY=$ACCOUNT_KEY

Create Storage Container

#Create Storage Container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME

Store the Storage Account Key as a Key Vault Secret

#Store the storage account key as a Key Vault secret
az keyvault secret set --name $KEYVAULTSECRET --vault-name $KEYVAULT --value $ACCOUNT_KEY

Read the Storage Key back from Key Vault

#Read the storage key back from Key Vault and expose it to Terraform
export ARM_ACCESS_KEY=$(az keyvault secret show --name $KEYVAULTSECRET --vault-name $KEYVAULT --query value -o tsv)

Verify the created Key Vault and the remote Terraform state container in the Azure portal.

Terraform backend configuration in your providers.tf file

Terraform backend configuration sample – use unique names for your own configuration.

#IaC on Azure Cloud Platform | Declare Azure as the Provider
# Configure the Microsoft Azure Provider
terraform {

  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }

#Azurerm Backend Configuration
   backend "azurerm" {
    resource_group_name  = "remote-terraform-state"
    storage_account_name = "tfstoragetrainingenc"
    container_name       = "remote-terraform-container"
    key                  = "terraform.tfstate"
      }

}

provider "azurerm" {
  features {}
}

Create SSH Key

ssh-keygen -t rsa -b 4096 -f remotekey

Initialise Terraform to use the Remote State | Backend

terraform init

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the 
  newly configured "azurerm" backend. No existing state was found in the newly     
  configured "azurerm" backend. Do you want to copy this state to the new "azurerm"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Once the backend configuration has been placed in the providers.tf file, you start to see the migration away from the local state to the azurerm backend.

PS C:\Workdir\terraform\azterraform> terraform init

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "azurerm" backend. An existing non-empty state already exists in
  the new backend. The two states have been saved to temporary files that will be
  removed after responding to this query.

  Previous (type "local"): C:\Users\CLOUDA~1\AppData\Local\Temp\terraform2362619277\1-local.tfstate
  New      (type "azurerm"): C:\Users\CLOUDA~1\AppData\Local\Temp\terraform2362619277\2-azurerm.tfstate

  Do you want to overwrite the state in the new backend with the previous state?
  Enter "yes" to copy and "no" to start with the existing state in the newly
  configured "azurerm" backend.

Newly created terraform.tfstate file showing unpopulated state.

Selecting yes ensures that the existing terraform state file is now copied to the backend (remote state file).


Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/tls from the dependency lock file
- Using previously-installed hashicorp/random v3.2.0
- Using previously-installed hashicorp/tls v3.4.0
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed hashicorp/azurerm v2.99.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Pre-existing state was found while migrating the previous “local” backend to the newly configured “azurerm” backend.

Newly created terraform.tfstate file showing populated state after the previous local backend was migrated to the newly configured azurerm backend.

Verify the state of a resource – terraform state show

PS C:\Workdir\terraform\azterraform> terraform state show azurerm_resource_group.emc-eus2-corporate-import-rg
# azurerm_resource_group.emc-eus2-corporate-import-rg:
resource "azurerm_resource_group" "emc-eus2-corporate-import-rg" {
    id       = "/subscriptions/31e9c06e-6d3f-4485-836c-ff36c38135a3/resourceGroups/emc-eus2-corporate-import-rg"
    location = "eastus2"
    name     = "emc-eus2-corporate-import-rg"
    tags     = {
        "env" = "resource-group"
    }
}


You can also create your remote state with HCL as per the configuration below in a terraform file like create-remote-storage.tf.

resource "random_string" "resource_code" {
  length  = 5
  special = false
  upper   = false
}

resource "azurerm_resource_group" "tfstate" {
  name     = "tfstate"
  location = "East US"
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "tfstate${random_string.resource_code.result}"
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  allow_blob_public_access = true

  tags = {
    environment = "staging"
  }
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "blob"
}

Below are useful Terraform state commands

terraform state pull

This command downloads the state from its current location, upgrades the local copy to the latest state file version that is compatible with locally-installed Terraform, and outputs the raw format to stdout.

terraform state pull | sc terraform.tfstate
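A few other state subcommands worth knowing (they all operate on the remote backend once it is configured):

terraform state list                   # list every resource tracked in the state
terraform state show <resource_addr>   # show the attributes of a single resource
terraform state rm <resource_addr>     # stop tracking a resource without destroying it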

Troubleshooting Remote State Locks

╷
│ Error: Error acquiring the state lock
│
│ Error message: state blob is already locked
│ Lock Info:
│   ID:        42ff26c4-3103-e54c-e21a-8a37a25e5884
│   Path:      tfbackendcontainer/terraform.tfstate
│   Operation: OperationTypeApply
│   Who:       DESKTOP-ATCRUJV\Cloud Architect@DESKTOP-ATCRUJV
│   Version:   1.1.9
│   Created:   2022-06-06 22:53:18.7745943 +0000 UTC
│   Info:
│
│
│ Terraform acquires a state lock to protect the state from being written
│ by multiple users at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the "-lock=false"
│ flag, but this is not recommended.

terraform destroy -lock=false


│ Error: Failed to save state
│
│ Error saving state: blobs.Client#PutBlockBlob: Failure responding to request: StatusCode=412 -- Original Error: autorest/azure: Service returned an error. Status=412 Code="LeaseIdMissing" Message="There is currently a    
│ lease on the blob and no lease ID was specified in the request.\nRequestId:110febce-c01e-000b-1326-7a1b95000000\nTime:2022-06-07T04:27:53.9123101Z"
╵
╷
│ Error: Failed to persist state to backend
│
│ The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the state has been written to the file "errored.tfstate" in the current working directory.    
│
│ Running "terraform apply" again at this point will create a forked state, making it harder to recover.
│
│ To retry writing this state, use the following command:
│     terraform state push errored.tfstate

The lock occurred because a Terraform action initiated by a user placed it on the state blob. To fix it, go to the storage container in the Azure portal, select the state blob and break the lease.
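Alternatively, the lock can be released from the command line. The lock ID comes from the error output above, and the container and storage account names are the ones configured for the backend:

# Release the Terraform-held lock using the ID reported in the error
terraform force-unlock 42ff26c4-3103-e54c-e21a-8a37a25e5884

# Or break the blob lease directly in Azure
az storage blob lease break --blob-name terraform.tfstate --container-name remote-terraform-container --account-name tfstoragetrainingenc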


terraform destroy -lock=false
Configure a Linux virtual machine in Azure using Terraform

Infrastructure as Code has become the order of the day. In this article, "Configure a Linux virtual machine in Azure using Terraform", I seek to guide you through building your first Linux virtual machine in Azure. Consider this set of steps as a project to reinforce your Terraform knowledge.

Configure Your Environment

  • Create providers.tf file
  • Create main.tf file
  • Create vars.tf file
  • Configure Deployment Parts

  • Create a virtual network
  • Create a subnet
  • Create a public IP address
  • Create a network security group and SSH inbound rule
  • Create a virtual network interface card
  • Connect the network security group to the network interface
  • Create a storage account for boot diagnostics
  • Create SSH key
  • Create a virtual machine
  • Use SSH to connect to virtual machine
    Create your vars.tf file

    #Variable file used to store details of repetitive references
    variable "location" {
      description = "availability zone that is a string type variable"
      type    = string
      default = "eastus2"
    }
    
    variable "prefix" {
      type    = string
      default = "emc-eus2-corporate"
    }
    

    Create your providers.tf file

    #IaC on Azure Cloud Platform | Declare Azure as the Provider
    # Configure the Microsoft Azure Provider
    terraform {

      required_version = ">=0.12"

      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "~>2.0"
        }
      }
    }

    provider "azurerm" {
      features {}
    }
    

    In the next steps, we create the main.tf file and add the following resource blocks.
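    The resources below all reference a resource group named emc-eus2-corporate-resources-rg, so main.tf should begin by declaring it; a minimal sketch using the location variable from vars.tf:

    #Create the resource group that holds all of the resources in this article
    resource "azurerm_resource_group" "emc-eus2-corporate-resources-rg" {
      name     = "emc-eus2-corporate-resources-rg"
      location = var.location
    }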

    Create a virtual network

    #Create virtual network and subnets
    resource "azurerm_virtual_network" "emc-eus2-corporate-network-vnet" {
      name                = "emc-eus2-corporate-network-vnet"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      address_space       = ["172.20.0.0/16"]
    
      tags = {
        environment = "Production"
      }
    }
    

    Create a subnet

    #Create subnet - presentation tier
    resource "azurerm_subnet" "presentation-subnet" {
      name                 = "presentation-subnet"
      resource_group_name  = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      virtual_network_name = azurerm_virtual_network.emc-eus2-corporate-network-vnet.name
      address_prefixes     = ["172.20.1.0/24"]
    }
    
    #Create subnet - data access tier
    resource "azurerm_subnet" "data-access-subnet" {
      name                 = "data-access-subnet"
      resource_group_name  = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      virtual_network_name = azurerm_virtual_network.emc-eus2-corporate-network-vnet.name
      address_prefixes     = ["172.20.2.0/24"]
    }
    

    Create a public IP address

    #Create Public IP Address
    resource "azurerm_public_ip" "emc-eus2-corporate-nic-01-pip" {
      name                = "emc-eus2-corporate-nic-01-pip"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      allocation_method   = "Dynamic"
    }
    

    Create a network security group and SSH inbound rule

    # Create Network Security Group and rule
    resource "azurerm_network_security_group" "emc-eus2-corporate-nsg" {
      name                = "emc-eus2-corporate-nsg"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
    
      security_rule {
        name                       = "SSH"
        priority                   = 1001
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }
    }
    
    

    Create a virtual network interface card

    # Create network interface
    resource "azurerm_network_interface" "corporate-webserver-vm-01-nic" {
      name                = "corporate-webserver-vm-01-nic"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
    
      ip_configuration {
        name                          = "corporate-webserver-vm-01-nic-ip"
        subnet_id                     = azurerm_subnet.presentation-subnet.id
        private_ip_address_allocation = "Dynamic"
        public_ip_address_id          = azurerm_public_ip.emc-eus2-corporate-nic-01-pip.id
      }
    }
    

    Connect the network security group to the network interface

    # Connect the security group to the network interface
    resource "azurerm_network_interface_security_group_association" "corporate-webserver-vm-01-nsg-link" {
      network_interface_id      = azurerm_network_interface.corporate-webserver-vm-01-nic.id
      network_security_group_id = azurerm_network_security_group.emc-eus2-corporate-nsg.id
    }
    

    Create a storage account for boot diagnostics

    # Generate random text for a unique storage account name
    resource "random_id" "randomId" {
      keepers = {
        # Generate a new ID only when a new resource group is defined
        resource_group = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      }
      byte_length = 8
    }
    

    Create a storage account for boot diagnostics

    # Create storage account for boot diagnostics
    resource "azurerm_storage_account" "corpwebservervm01storage" {
      name                     = "diag${random_id.randomId.hex}"
      location                 = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name      = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      account_tier             = "Standard"
      account_replication_type = "LRS"
    }
    

    Create SSH Key

    # Create (and display) an SSH key
    resource "tls_private_key" "linuxsrvuserprivkey" {
      algorithm = "RSA"
      rsa_bits  = 4096
    }
    

    Create a virtual machine

    # Create virtual machine
    resource "azurerm_linux_virtual_machine" "emc-eus2-corporate-webserver-vm-01" {
      name                  = "emc-eus2-corporate-webserver-vm-01"
      location              = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name   = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      network_interface_ids = [azurerm_network_interface.corporate-webserver-vm-01-nic.id]
      size                  = "Standard_DC1ds_v3"
    
      os_disk {
        name                 = "corpwebservervm01disk"
        caching              = "ReadWrite"
        storage_account_type = "Premium_LRS"
      }
    
      source_image_reference {
        publisher = "Canonical"
        offer     = "0001-com-ubuntu-server-focal"
        sku       = "20_04-lts-gen2"
        version   = "latest"
      }
    
      computer_name                   = "corporate-webserver-vm-01"
      admin_username                  = "linuxsrvuser"
      disable_password_authentication = true
    
      admin_ssh_key {
        username   = "linuxsrvuser"
        public_key = tls_private_key.linuxsrvuserprivkey.public_key_openssh
      }
    }
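    The storage account created earlier is intended for boot diagnostics; to actually wire it up, a boot_diagnostics block can be added inside the azurerm_linux_virtual_machine resource, for example:

      #Send boot diagnostics to the storage account created above
      boot_diagnostics {
        storage_account_uri = azurerm_storage_account.corpwebservervm01storage.primary_blob_endpoint
      }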
    

    Terraform Plan

    The terraform plan command evaluates a Terraform configuration to determine the desired state of all the resources it declares, then compares that desired state to the real infrastructure objects being managed with the current working directory and workspace. It uses state data to determine which real objects correspond to which declared resources, and checks the current state of each resource using the relevant infrastructure provider’s API.

    terraform plan
    

    Terraform Apply

    The terraform apply command performs a plan just like terraform plan does, but then actually carries out the planned changes to each resource using the relevant infrastructure provider’s API. It asks for confirmation from the user before making any changes, unless it was explicitly told to skip approval.

    terraform apply
    

    Command to find an image based on the SKU.

    samuel@Azure:~$ az vm image list -s "2019-Datacenter" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer          Publisher               Sku              Urn                                                          UrnAlias           Version
    -------------  ----------------------  ---------------  -----------------------------------------------------------  -----------------  ---------
    WindowsServer  MicrosoftWindowsServer  2019-Datacenter  MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest  Win2019Datacenter  latest
    samuel@Azure:~$ 
    
    samuel@Azure:~$ az vm image list -s "18.04-LTS" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer         Publisher    Sku        Urn                                      UrnAlias    Version
    ------------  -----------  ---------  ---------------------------------------  ----------  ---------
    UbuntuServer  Canonical    18.04-LTS  Canonical:UbuntuServer:18.04-LTS:latest  UbuntuLTS   latest
    

    Command to find an image based on the Publisher.

    samuel@Azure:~$ az vm image list -p "Microsoft" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer          Publisher               Sku                                 Urn                                                                             UrnAlias                 Version
    -------------  ----------------------  ----------------------------------  ------------------------------------------------------------------------------  -----------------------  ---------
    WindowsServer  MicrosoftWindowsServer  2022-Datacenter                     MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest                     Win2022Datacenter        latest
    WindowsServer  MicrosoftWindowsServer  2022-datacenter-azure-edition-core  MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest  Win2022AzureEditionCore  latest
    WindowsServer  MicrosoftWindowsServer  2019-Datacenter                     MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest                     Win2019Datacenter        latest
    
    samuel@Azure:~$ az vm image list -p "Canonical" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer         Publisher    Sku        Urn                                      UrnAlias    Version
    ------------  -----------  ---------  ---------------------------------------  ----------  ---------
    UbuntuServer  Canonical    18.04-LTS  Canonical:UbuntuServer:18.04-LTS:latest  UbuntuLTS   latest
    

    At this point, the required pieces to build a Linux virtual machine on Azure are complete. It's time to test your code.

    You can learn more from Hashicorp by visiting the following link.
    This article was helpful in troubleshooting issues with the Ubuntu SKU.
