Unlocking the Potential of Azure Portal: A Comprehensive Guide

Introduction:

In today’s digital landscape, cloud computing has become an integral part of any organization’s IT strategy. Microsoft Azure stands out as one of the leading cloud platforms, offering a robust set of services to build, deploy, and manage applications and services. At the core of Azure’s capabilities lies the Azure Portal, a comprehensive web-based console that empowers users with streamlined cloud management and administration. In this article, we will delve into the features, functionalities, and benefits of the Azure Portal, and explore how it revolutionizes the way we interact with the cloud.

Before we begin, here’s a useful YouTube video that visually demonstrates the overview of the Azure Portal. Make sure to watch it for a more interactive learning experience:

🔥 Exciting Azure Portal Overview! 🔥 Unleash the Power of Cloud Control! 🚀

Navigating Azure Portal: An All-in-One Management Console

Azure Portal serves as the primary user interface for managing Azure resources and services. It provides a unified view of all your cloud assets, enabling you to access, monitor, and manage them efficiently from a single location. The portal’s user-friendly design caters to developers, IT administrators, and business owners alike, simplifying complex tasks and reducing operational overhead.

Azure Services Catalog: Unleashing a World of Possibilities

One of the most appealing aspects of Azure Portal is its extensive services catalog. From virtual machines to databases, AI and machine learning tools to analytics and IoT solutions, the platform hosts a vast array of services that cater to diverse business needs. This extensive selection empowers users to create tailored solutions, scale applications, and innovate with ease, all within a few clicks.

Resource Groups: Organizing for Success

Azure Portal advocates organizing resources into logical units called Resource Groups. This feature simplifies the management and administration of resources, making it easier to deploy, monitor, and secure applications. Additionally, it aids in better understanding the cost distribution across different projects, allowing for improved financial control and resource optimization.

Insights and Monitoring: Real-time Visibility for Peak Performance

Real-time insights and monitoring are essential to maintain the health and performance of cloud resources. Azure Portal excels in this area, providing a comprehensive set of tools and dashboards to monitor key performance metrics, diagnose issues, and ensure optimal resource utilization. With proactive monitoring, users can take prompt actions to prevent potential bottlenecks and outages, ensuring seamless operations.

Security and Compliance: Safeguarding Your Data

Data security is paramount in the cloud environment. Azure Portal integrates robust security features, identity management, and compliance tools, empowering users to safeguard their data and meet regulatory requirements with confidence. This focus on security ensures that your critical business data remains protected against potential threats.

Accompanying YouTube Video: Hands-on Experience

For a more immersive experience, we have created a YouTube video tour of the Azure Portal. Watch it here: https://youtu.be/Ma8-vgyb9P4. This video takes you through the Azure Portal, highlighting its key features and demonstrating how to efficiently manage cloud resources.

Conclusion: Embrace the Power of Azure Portal

In conclusion, the Azure Portal serves as a gateway to Microsoft Azure’s vast cloud infrastructure. It offers an intuitive and feature-rich platform for users to create, deploy, manage, and secure applications and services. Whether you are a seasoned cloud professional or just starting your cloud journey, the Azure Portal simplifies complex tasks, enhances efficiency, and enables innovation. Embrace the power of Azure Portal and elevate your cloud management experience to new heights.

Learn More: https://learn.microsoft.com/en-us/azure/azure-portal/azure-portal-overview

How to Create a Resource Group in Azure CLI: Step-by-Step Guide

Azure Resource Groups are essential components for organizing and managing resources in Microsoft Azure. They provide a logical container to group related resources, making it easier to manage, monitor, and govern your cloud infrastructure. In this tutorial, we will guide you through the process of creating a resource group in Azure using the Azure Command-Line Interface (CLI). The CLI offers a powerful and efficient way to interact with Azure resources, enabling you to streamline your cloud management tasks.

Video Reference:
Before we begin, here’s a useful YouTube video that visually demonstrates the process of creating a resource group in Azure CLI. Make sure to watch it for a more interactive learning experience:
Mastering Azure CLI: Creating Resource Groups Like a Pro!

Step-by-Step Guide: Creating a Resource Group in Azure CLI

Step 1: Install Azure CLI:
If you haven’t already installed the Azure CLI, you can download and install it from the official website: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli. Follow the installation instructions for your specific operating system.

Step 2: Open a Terminal or Command Prompt:
Once the Azure CLI is installed, open a terminal or command prompt on your computer.

Step 3: Log in to Azure:
In the terminal, type the following command to log in to your Azure account:

az login

This will open a web page where you can enter your Azure credentials. After successful authentication, return to the terminal.

Step 4: Set Azure Subscription (Optional):
If you have multiple subscriptions associated with your account, you can set the desired subscription for resource group creation using the following command:

az account set --subscription <subscription_id>

Replace `<subscription_id>` with the ID of your desired subscription.

Step 5: Create the Resource Group:
To create a resource group, use the following command:

az group create --name <resource_group_name> --location <azure_region>

Replace `<resource_group_name>` with a unique name for your resource group, and `<azure_region>` with the region where you want your resource group to reside. Choose a region closest to your users or services for better performance.
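For illustration, here is what the command looks like with concrete values filled in; the group name and region below are placeholders, so substitute your own. You can list the available regions with az account list-locations:

az account list-locations --output table

az group create --name rg-demo-weu --location westeurope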

Step 6: Verify the Resource Group Creation:
To verify that your resource group has been successfully created, you can list all your resource groups using the command:

az group list

This command will display information about all your resource groups, including the one you just created.
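If you would rather check just the one group, az group show returns its details; querying provisioningState confirms the deployment succeeded (the group name is a placeholder):

az group show --name <resource_group_name> --query properties.provisioningState --output tsv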

Conclusion:
Congratulations! You have successfully created a resource group in Azure using the Azure Command-Line Interface (CLI). Resource groups play a crucial role in organizing and managing your cloud resources effectively. By following this step-by-step guide, you can efficiently structure your Azure resources, making them easier to manage and monitor. Keep exploring Azure CLI’s capabilities to optimize your cloud management experience.

Remember, the YouTube video referenced in this article provides additional visual guidance on creating an Azure resource group via Azure CLI. Happy cloud computing and resource management!

Build and understand APIs with Python: A Comprehensive Step by Step Walkthrough

An API (Application Programming Interface) is a set of rules that define how different software components should interact with each other. The most common way of communicating with an API is through HTTP requests.

HTTP (HyperText Transfer Protocol) is a protocol used to transfer data over the internet. It has several methods, including GET, POST, PUT, and DELETE, which are used to perform specific actions on a resource.

To demonstrate this, we will use Flask, a Python web framework, to create a simple API with four endpoints that correspond to these HTTP methods.

The API has a simple data store consisting of three key-value pairs. Here are the endpoints and their corresponding HTTP methods:

GET method at /api to retrieve all the data from the API (Read)
POST method at /api to submit new data to the API (Create)
PUT method at /api/<id> to update an existing data item in the API by providing its ID (Update)
DELETE method at /api/<id> to delete an existing data item in the API by providing its ID (Delete)

In the example code provided, we have a simple API built with Python’s Flask framework. The API has a data store consisting of three key-value pairs. We have defined four API endpoints, one for each HTTP method, which correspond to the different CRUD (Create, Read, Update, Delete) operations that can be performed on the data store.

client and server api http request methods

The URL is the location where we can access our API, typically consisting of three components:

Protocol: denoting the communication protocol such as http:// or https://.

Domain: the server name that hosts the API, spanning from the protocol to the end of the domain extension (e.g., .com, .org, etc.). As an illustration, the domain for my website is expertnetworkconsultant.com.

Endpoint: equivalent to the pages on a website (/blog, /legal), an API can have multiple endpoints, each serving a distinct purpose. When designing an API with Python, it’s essential to define endpoints that accurately represent the underlying functionality of the API.

To test these endpoints, we can use the command-line tool cURL, or write Python code using the requests library. In the code examples provided, we use Python requests to send HTTP requests to the API endpoints and handle the responses.

Create an API with FLASK

from flask import Flask, jsonify, request

app = Flask(__name__)

# Data store for the API
data = {
    '1': 'John',
    '2': 'Mary',
    '3': 'Tom'
}

# GET method to retrieve data from the API
@app.route('/api', methods=['GET'])
def get_data():
    return jsonify(data)

# POST method to submit data to the API
@app.route('/api', methods=['POST'])
def add_data():
    req_data = request.get_json()
    data.update(req_data)
    return jsonify(req_data)

# PUT method to update data in the API
@app.route('/api/<id>', methods=['PUT'])
def update_data(id):
    req_data = request.get_json()
    data[id] = req_data['name']
    return jsonify(req_data)

# DELETE method to delete data from the API
@app.route('/api/<id>', methods=['DELETE'])
def delete_data(id):
    data.pop(id)
    return jsonify({'message': 'Data deleted successfully'})

if __name__ == '__main__':
    app.run(debug=True)

Note that the endpoint is hosted at http://localhost:5000/api, where localhost refers to the local machine and 5000 is the default port used by Flask. If you want to change the endpoint URL or the response message, you can modify the code accordingly.
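For example, if the default port is already in use, you can pass a different port (and bind address) to app.run(); the values below are illustrative:

if __name__ == '__main__':
    # Bind to all interfaces on port 8080 instead of the default 127.0.0.1:5000
    app.run(debug=True, host='0.0.0.0', port=8080)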

GET request to retrieve all data:

curl -X GET http://localhost:5000/api

POST request to add new data:

curl -d '{"4": "Peter"}' -H "Content-Type: application/json" -X POST http://localhost:5000/api

PUT request to update existing data with ID 2:

curl -d '{"name": "Maria"}' -H "Content-Type: application/json" -X PUT http://localhost:5000/api/2

DELETE request to delete existing data with ID 3:

curl -X DELETE http://localhost:5000/api/3

I hope this helps you understand how APIs work and how to use the main HTTP methods in your API endpoints!

Here are some Python requests examples for the API calls:

To make a GET request to retrieve all data:

import requests

response = requests.get('http://localhost:5000/api')

if response.ok:
    data = response.json()
    print(data)
else:
    print('Failed to retrieve data:', response.text)

To make a POST request to add new data:

import requests

new_data = {'4': 'Peter'}
headers = {'Content-Type': 'application/json'}
response = requests.post('http://localhost:5000/api', json=new_data, headers=headers)

if response.ok:
    data = response.json()
    print('Data added successfully:', data)
else:
    print('Failed to add data:', response.text)

To make a PUT request to update existing data with ID 2:

import requests

updated_data = {'name': 'Maria'}
headers = {'Content-Type': 'application/json'}
response = requests.put('http://localhost:5000/api/2', json=updated_data, headers=headers)

if response.ok:
    data = response.json()
    print('Data updated successfully:', data)
else:
    print('Failed to update data:', response.text)

To make a DELETE request to delete existing data with ID 3:

import requests

response = requests.delete('http://localhost:5000/api/3')

if response.ok:
    print('Data deleted successfully')
else:
    print('Failed to delete data:', response.text)

Note that in each case, we use the requests library to make the HTTP request to the API endpoint, and then check the response status code and content to determine if the request was successful or not.
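As an aside, response.ok is simply shorthand for status_code < 400. If you prefer exceptions over manual checks, raise_for_status() raises requests.HTTPError for 4xx/5xx responses; the sketch below also adds a timeout, which is good practice for any HTTP call:

import requests

try:
    response = requests.get('http://localhost:5000/api', timeout=5)
    response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx responses
    print(response.json())
except requests.RequestException as err:
    print('Request failed:', err)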

So let us perform a real API call. In this case, we are going to add another item to the data set.

import requests

new_data = {'4': 'Peter'}
headers = {'Content-Type': 'application/json'}
response = requests.post('http://localhost:5000/api', json=new_data, headers=headers)

if response.ok:
    data = response.json()
    print('Data added successfully:', data)
else:
    print('Failed to add data:', response.text)

Data added successfully: {'4': 'Peter'}

Now that we have added the data, let us check whether the newly created item has been committed.

import requests

response = requests.get('http://localhost:5000/api')

if response.ok:
    data = response.json()
    print(data)
else:
    print('Failed to retrieve data:', response.text)

{'1': 'John', '2': 'Mary', '3': 'Tom', '4': 'Peter'}

Now let us go ahead and delete an item:

import requests

response = requests.delete('http://localhost:5000/api/3')

if response.ok:
    print('Data deleted successfully')
else:
    print('Failed to delete data:', response.text)

Data deleted successfully
{'1': 'John', '2': 'Mary', '4': 'Peter'}

The following are good resources on the subject:
https://towardsdatascience.com/the-right-way-to-build-an-api-with-python-cd08ab285f8f
https://auth0.com/blog/developing-restful-apis-with-python-and-flask/
https://anderfernandez.com/en/blog/how-to-create-api-python/

Create an Application Gateway with Path Routing to Backend Pools

In this article, we’ll walk you through the process of creating two Linux Ubuntu VMs and an application gateway with path routing to one VM as an image server and the other as a video server. This setup will enable you to serve static assets, such as images and videos, from separate VMs, which can help distribute traffic and improve performance.

Note: I have used this SKU size as it’s lightweight and sufficient for this lab exercise – Standard B1s (1 vcpu, 1 GiB memory)

First, we’ll create two Linux Ubuntu virtual machines in Azure. We’ll use Azure because it offers a quick and easy way to create virtual machines.

Step 1:

  • Sign in to the Azure portal.
  • Click on “Create a resource” in the top left corner of the screen.
  • Search for “Ubuntu Server” and select the “Ubuntu Server 18.04 LTS” option.
  • Choose a subscription, resource group, virtual machine name, region, and size for the virtual machine. You’ll need to create one VM for the image server and another for the video server.
  • Set up a username and password for the VM.
  • Choose “SSH public key” as the authentication type.
  • Create an SSH key pair if you don’t already have one.
  • Click “Review + create” to review your settings and create the VM.

Repeat this process to create a second VM for the video server.

Step 2: Configure the Virtual Machines

create linux virtual machines

Next, we’ll configure the virtual machines to serve static assets. We’ll use Nginx as the web server, but you can use any web server you prefer.

SSH into the image server VM or use Azure Run Command Tool.
Install Nginx by running the following command:

sudo apt-get update && sudo apt-get install nginx

Copy your images to the VM and place them in the “/var/www/html/images” directory.
Repeat this process on the video server VM, but copy your videos to the “/var/www/html/videos” directory.

A step-by-step walkthrough follows below.
Install Nginx

sudo apt-get -y update
sudo apt-get -y install nginx

Create Images Folder Path

sudo mkdir -p /var/www/html/images/
echo "<h1> This is the Images Server </h1>" | sudo tee /var/www/html/images/index.html

Create Videos Folder Path

sudo mkdir -p /var/www/html/videos/
echo "<h1>This is the Videos Server</h1>" | sudo tee /var/www/html/videos/index.html

Step 3: Create the Application Gateway

Now, we’ll create the application gateway in Azure. This will enable us to route traffic to the correct VM based on the URL path.

  • Sign in to the Azure portal.
  • Click on “Create a resource” in the top left corner of the screen.
  • Search for “Application Gateway” and select the “Application Gateway v2” option.
  • Choose a subscription, resource group, name, region, and SKU for the application gateway.
  • Choose the “Backend pools” option in the left menu.
  • Click “Add” to add a backend pool.
  • Choose the “Virtual machines” option for the backend target type.
  • Choose the image server and video server virtual machines as the targets.
  • Choose the “HTTP settings” option in the left menu.
  • Click “Add” to add an HTTP setting.
  • Choose a name for the HTTP setting and configure the protocol, port, and cookie settings.
  • Choose the “Rules” option in the left menu.
  • Click “Add” to add a rule.
  • Choose a name for the rule and configure the listener, backend target, and URL path map settings.
  • Test your application gateway by accessing the image and video servers through the gateway URL with the appropriate path (a quick curl check is sketched below).
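
Once the gateway is deployed, a quick way to exercise the path-based rules is to request each path against the gateway's frontend; the placeholder below stands in for your gateway's public IP or DNS name:

curl http://<appgw-public-ip>/images/index.html
curl http://<appgw-public-ip>/videos/index.html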

Create Application Gateway

create application gateway

create application gateway public ip

create application gateway with images backend pool

create application gateway with videos backend pool

create application gateway routing rules

create application gateway listener

create application gateway images backend setting

create application gateway add multiple targets to create path-based rule

create application gateway add multiple images path-based rule

create application gateway videos backend setting

create application gateway add multiple videos path-based rule

create application gateway add backend targets

create application gateway frontend routing rules for backend pools

Browse to Video Server Resource
this is the videos server

create application gateway and check health

Check Overview of Application Gateway
overview of application gateway http requests

Awesome links for further reading:
Apache web server documentation: https://httpd.apache.org/docs/
Azure documentation: https://docs.microsoft.com/en-us/azure/
Ubuntu server documentation: https://ubuntu.com/server/docs
Virtual machines in Azure: https://docs.microsoft.com/en-us/azure/virtual-machines/
Application Gateway in Azure: https://docs.microsoft.com/en-us/azure/application-gateway/

Process Real-Time IoT Data Streams with Azure Stream Analytics

In my previous article, I explained how to connect an IoT Device to Azure IoT Hub.

In this article of Ingesting and Processing Streaming and IoT Data for Real-Time Analytics, we are going to explore how to get your IoT events captured in a data stream into a database of your choosing. Processing real-time IoT data streams with Azure Stream Analytics is a thing of beauty.

Scenario
Softclap Technologies, which is a company in the vehicle tracking and automation space, has completely automated its vehicle tracking processes. Their vehicles are equipped with sensors that are capable of emitting streams of data in real time. In this scenario, a Data Analyst Engineer wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data.
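
To give a flavour of SAQL, the sketch below selects only warm readings from the incoming stream; the input and output aliases, the field names, and the 25-degree threshold are all illustrative assumptions:

SELECT
    deviceId,             -- assumed field emitted by the vehicle sensor
    temperature,          -- assumed field emitted by the vehicle sensor
    EventEnqueuedUtcTime
INTO
    [sensor-output]
FROM
    [vehicle-sensor-input]
WHERE
    temperature > 25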

Let us look at the prerequisites:

  • Azure IoT Hub
  • Enrolled IoT Device

You can find that setup in a recent post, Connect an IoT Device to Azure IoT Hub.

    With the above requirements in place, go ahead and follow the remaining steps to get Azure Stream Analytics to stream your IoT events to the database of your choice.

    What we are building today:

  • Azure Stream Analytics
  • Azure SQL Database Server with a Database

    Step 1: Create Stream Analytics
    create azure stream analytics

    Step 2: Create SQL Database Server
    create sql database server

    Step 3: Configure Networking to Allow Azure services and resources to access this SQL Database Server
    Allow Azure services and resources to access this server

    Step 4: Create a SQL database
    create a sql database

    Step 5: Create Firewall Rules – this helps you access the Database

    add firewall rules

    set server firewall for sql database

    Step 6: Create Azure Stream Analytics to allow you to perform near real-time analytics on streaming data. Create a job right from your database.
    create stream analytics job

    Step 7: Select IoT Hub as Input
    Stream Analytics jobs enable you Ingest streaming data into your SQL table. Set your input and output, then author your query to transform your data.

    create stream analytics input from iot hub

    create stream analytics input from iot hub device

    You can create a new consumer group but in this setup, I have had to use the existing consumer group $Default.

    IoT Hubs limit the number of readers within one consumer group (to 5). We recommend using a separate group for each job. Leaving this field empty will use the ‘$Default’ consumer group.

    select the existing default consumer group for the stream analytics output
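
    If you do want a dedicated consumer group for the job instead of $Default, one can be created with the Azure CLI; the consumer group name below is a placeholder:

    az iot hub consumer-group create --hub-name mxchip-device-iot-hub --name streamanalyticsjob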

    Step 8: Select Output
    Since you are streaming the telemetry data to your database, select the credentials used for the output table where you can query your data from later on.

    create stream analytics output to database table

    The new table will automatically be created in your database after you initially start your Stream Analytics job

    Now you have completed the configuration for Input and Output from the IoT Hub Telemetry to the Database Table.

    complete stream analytics job with input and output

    Step 9: Telemetry Stream Shows Sample Events from the IoT Device
    sample events from mxchip-device-iot-hub

    Step 10: Click Test Query

    test query for iot events in stream analytics

    Since the objective is really to record the events in our database table, there is a need to create a table matching the schema of your test query results.

    PS: Using the click to create table has not worked well for me in the past. The fields were completely out of sync. I will therefore select view create table SQL script and then connect to the database locally or from Azure Query Editor to create the tables. Let’s dive in.

    create table to capture the events streamed into azure stream analytics
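
    For reference, a table definition for this kind of telemetry might look like the sketch below; the column list is an assumption based on the sample events, so align it with the create table SQL script generated from your own test query results:

    CREATE TABLE dbo.IoTTelemetry (
        deviceId NVARCHAR(100),
        temperature FLOAT,
        humidity FLOAT,
        pressure FLOAT,
        EventProcessedUtcTime DATETIME2,
        EventEnqueuedUtcTime DATETIME2
    );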

    Step 11: Open SQL Database Query Editor

    create table to capture the events using query editor

    Now that this step has completed successfully, head back to Stream Analytics and click on Start Stream Analytics Job. Starting the Stream Analytics job ensures that the incoming IoT device telemetry is captured in the predefined database table, which can be queried later on.

    start stream analytics job

    Authenticate to Database SQL Server where Output Table is stored.
    start stream analytics job to database

    Step 12: Click Start to begin writing stream data into Database table.
    streaming job running successfully

    Back to the Query Editor and below are the results.

    query database table for streamed events from iot

    And so there we have it: a successful stream of IoT events from a remote IoT device sending live telemetry, ingested by our Stream Analytics job and captured in our database table.

    See the Azure Stream Analytics documentation to learn more about other ways of ingesting data.

    Connect an IoT Device to Azure IoT Hub

    The Internet of Things is everywhere these days. In this article, I detail how to connect an MXCHIP AZ3166 devkit to Azure IoT Hub.

    IoT has, as a matter of fact, become commonplace in all spheres of human interaction. IoT devices are in our refrigerators, cars, gardens, submarines, space probes and robots; they are just everywhere, and mainly for good reason. Before we get super excited, and you must be, let us start with the recommended prerequisites.

    Clone Repository for Needed Code | This has been provided by Microsoft

    git clone --recursive https://github.com/azure-rtos/getting-started.git
    

    Prepare Your Build Environment

    To install the tools:

    From File Explorer, navigate to the following path in the repo and run the setup script named get-toolchain.bat:

    getting-started\tools\get-toolchain.bat

    After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.

    Run the following code to confirm that CMake version 3.14 or later is installed.

    cmake --version
    

    Now that your build environment seems to be correctly set up, go through the next steps to complete your local environment setup.

    Install Azure IoT Explorer on Your Computer
    This part requires a utility called the Azure IoT Explorer which must be installed on your computer. In this demonstration, I have installed the Windows version of the program on my operating system.

    Create Azure IoT Hub
    This part requires the creation of Azure IoT Hub which could be done using the CLI or Web UI. I will do this using the Web UI but provide the commands for the same in CLI. Follow along;
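
    For completeness, a CLI sketch of the same steps is below; the resource group name is a placeholder, and the device-identity command requires the azure-iot CLI extension:

    az group create --name iot-demo-rg --location uksouth
    az iot hub create --resource-group iot-demo-rg --name mxchip-device-iot-hub --sku S1
    az iot hub device-identity create --hub-name mxchip-device-iot-hub --device-id mxchipaz366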

    Connect an IoT Device to Azure IoT Hub

    Successfully Created Azure IoT Hub
    Connect an IoT Device to Azure IoT Hub

    Add Device to Azure IoT Hub

    create azure iot device

    add iot device to azure iot hub

    Get Connection String

    $ az iot hub connection-string show --hub-name mxchip-device-iot-hub
    

    Copy the connection string without the surrounding quotation characters.

    {
      "connectionString": "HostName=mxchip-device-iot-hub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=UPEgplrCL+zQyabcdefgHiJkWqEXc2vOqulTAQ1k="
    }
    [ ~ ]$ 
    

    Add Hubs on IoT Explorer using the connection string
    add connection string on azure iot explorer
    Before continuing to the next section, confirm that you’ve copied the following values:

    hostName
    deviceId
    primaryKey
    

    I made a note of the following elements;

    HostName: mxchip-device-iot-hub.azure-devices.net
    Device ID : mxchipaz366
    Primary Key : KUTkSnC6Sn0vVieeabcdefghijkllU9ko0XCOwKy4= 
    

    view devices in the hub

    check devices on iot explorer under connection

    Configure Connection on Local Repo
    Open the following file in a text editor:

    getting-started\MXChip\AZ3166\app\azure_config.h

    Comment out the following line near the top of the file as shown:

    // #define ENABLE_DPS
    

    Set the Wi-Fi constants to the following values from your local environment.

    WIFI_SSID	{Your Wi-Fi SSID}
    WIFI_PASSWORD	{Your Wi-Fi password}
    WIFI_MODE	{One of the enumerated Wi-Fi mode values in the file}
    

    configure build values for wireless connectivity

    Set the Azure IoT device information constants to the values that you saved after you created Azure resources.

    IOT_HUB_HOSTNAME	{Your Iot hub hostName value}
    IOT_DPS_REGISTRATION_ID	{Your Device ID value}
    IOT_DEVICE_SAS_KEY	{Your Primary key value}
    

    configure build values

    Build the image
    In your console or in File Explorer, run the script rebuild.bat at the following path to build the image:

    getting-started\MXChip\AZ3166\tools\rebuild.bat
    

    run rebuild batch file

    iot device flash image

    After the build completes, confirm that the binary file was created in the following path:

    getting-started\MXChip\AZ3166\build\app\mxchip_azure_iot.bin
    

    create iot device mxchip image

    Copy the binary file mxchip_azure_iot.bin to the IoT DevKit. The AZ3166 DevKit presents itself as a USB mass-storage drive when connected, so copying the .bin file onto that drive flashes the device.

    Launch Termite and check connectivity
    My device is on COM5. You can check for that in Command Prompt by typing mode.

    configure serial port settings on termite

    Successful Connection to Azure IoT Hub
    successful connection of iot to azure iot hub

    Check Telemetry on Azure IoT Explorer
    check telemetry for iot device

    Check Telemetry on Termite
    check telemetry for iot device on termite

    Simulate Device Telemetry

    Simply copy and paste the following command into Azure Cloud Shell. It will start simulating the device sending messages to IoT Hub. You can click the ‘Start’ button on the Telemetry page to start monitoring the events.

    az iot device simulate --device-id mxchipdevkitaz3166 --login "HostName=**cloudmxchipiot-01.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=slKRd09jokVHPXNjDabcdeEfgHizDhmq8="
    

    View Telemetry Received from IoT Device

    az iot hub monitor-events --output table --device-id mxchipaz366 --hub-name mxchip-device-iot-hub
    
    Starting event monitor, filtering on device: mxchipaz366, use ctrl-c to stop...
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        magnetometerX: -445.5
        magnetometerY: 531
        magnetometerZ: 496.5
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        accelerometerX: -377.04
        accelerometerY: -917.31
        accelerometerZ: -130.66
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        gyroscopeX: -770
        gyroscopeY: -420
        gyroscopeZ: 770
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        humidity: 60.61
        pressure: 1014.05
        temperature: 19.88
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        magnetometerX: -408
        magnetometerY: 504
        magnetometerZ: 495
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        accelerometerX: -380.33
        accelerometerY: -915.85
        accelerometerZ: -129.93
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        gyroscopeX: -1190
        gyroscopeY: 630
        gyroscopeZ: 2800
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        humidity: 60.2
        pressure: 1014.04
        temperature: 20.2
    
    event:
      component: ''
      interface: dtmi:azurertos:devkit:gsgmxchip;2
      module: ''
      origin: mxchipaz366
      payload:
        magnetometerX: -417
        magnetometerY: 531
        magnetometerZ: 486
    

    Communicate with your IoT Device
    Run the az iot hub invoke-device-method command, and specify the method name and payload. For this method, setting method-payload to true turns on the LED, and setting it to false turns it off.

    az iot hub invoke-device-method --device-id mxchipaz366 --method-name setLedState --method-payload true --hub-name mxchip-device-iot-hub
    {
      "payload": {},
      "status": 200
    }
    

    There are advanced aspects to provisioning IoT devices and the following guide helps you do just that.
    Create a new IoT Hub Device Provisioning Service
    https://learn.microsoft.com/en-gb/azure/iot-dps/quick-setup-auto-provision#create-a-new-iot-hub-device-provisioning-service

    When you encounter the following error:

    "ERROR: azure_iot_nx_client_dps_entry"

    then it is likely you did not comment out #define ENABLE_DPS.

    Access Secrets from Azure Key Vault in Azure Kubernetes Service

    Before we begin to discuss how to access secrets from Azure Key Vault in Azure Kubernetes Service, let us have a quick intro to Secrets in Kubernetes.

    When you hear secrets, what comes to mind is confidentiality and secrecy. In the world of Kubernetes secrets are essentially any value that you don’t want the world to know about.

    The following elements, password, an API key, a connection string to a database, all fall under what a secret is. Now when comparing Secrets and ConfigMaps in Kubernetes, the main difference is the confidential data.

    Both ConfigMaps and Secrets store the data the same way, with key/value pairs, but ConfigMaps are designed for plain text data, and secrets on the other hand are meant for data that must be secured and confidential to the application exclusively.
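
    To make the distinction concrete, the two kubectl commands below create a ConfigMap for a plain setting and a generic Secret for a sensitive value; the names and values are illustrative:

    kubectl create configmap app-config --from-literal=LOG_LEVEL=info
    kubectl create secret generic app-credentials --from-literal=DB_PASSWORD='S3cr3t!'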

    By default, secrets are stored at rest in Key Vault, in a secure encrypted store. Secrets are only present in the AKS cluster while a pod is running with the secret mounted as a volume. As soon as the hosting pods are removed, the secret is removed from the cluster. This is a better approach than native Kubernetes Secrets, which are retained after the hosting pod is removed.

    RESOURCE_GROUP=corp-infrastructure-rg
    KV_RESOURCE_GROUP=corp-kv-infrastructure-rg
    LOCATION=eastus
    AKS_CLUSTER=corpakscluster
    

    #Create a resource group for the AKS cluster:

    az group create --name $RESOURCE_GROUP --location $LOCATION
    


     az aks create \
       --resource-group $RESOURCE_GROUP \
       --name $AKS_CLUSTER \
       --network-plugin azure \
       --enable-managed-identity \
       --enable-addons azure-keyvault-secrets-provider \
       --generate-ssh-keys
    


     "identity": {
            "clientId": "1456c162-3f04-40bc-a079-f1f3f7d22b16",
            "objectId": "9f8165b6-206f-4596-932f-31e80469700f",
    }
    

    Download the cluster credentials and configure kubectl to use them:

    az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER
    
    Merged "corpakscluster" as current context in /home/%user%/.kube/config

    Check that the Secrets Store CSI Driver and the Azure Key Vault Provider are installed in the cluster:

    $ kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'
    


    When we enable the Azure Key Vault secrets provider, the add-on creates a user-assigned managed identity in the node managed resource group. Store its client ID in a variable for later use.
    

    View the client ID of the user-assigned managed identity:

    az aks show -g $RESOURCE_GROUP -n $AKS_CLUSTER --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
    1456c162-3f04-40bc-a079-f1f3f7d22b16
    

    azure key vault secrets provider managed identity

    Store the client ID of the user-assigned managed identity in a variable:

    KV_IDENTITY_RESOURCE_ID=$(az aks show -g $RESOURCE_GROUP -n $AKS_CLUSTER --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv)
    

    Create Azure Key Vault
    Create a resource group for Azure Key vault

    az group create --name $KV_RESOURCE_GROUP --location $LOCATION

    Create a key vault while storing its name in a variable:

    KEY_VAULT_NAME="akscorpkeyvault${RANDOM}"
    az keyvault create --name $KEY_VAULT_NAME --resource-group $KV_RESOURCE_GROUP --location $LOCATION
    
    {
      "name": "akscorpkeyvault5493",
      "objectId": "ebejced9-2f89-8176-a9u3-657f75eb36bb",
      "tenantId": "46edb775-xy69-41z6-7be1-03e4a0997e49"
    }
    

    Create a secret and a key in the Vault for later demonstration:

    az keyvault secret set --vault-name $KEY_VAULT_NAME -n FirstSecret --value StoredValueinFirstSecret
    
     "name": "FirstSecret",
      "tags": {
        "file-encoding": "utf-8"
      },
      "value": "StoredValueinFirstSecret"
    }
    

    Create a key in the Vault for later demonstration:

    az keyvault key create --vault-name $KEY_VAULT_NAME -n FirstKey --protection software
    
        "n": "t6PMnN5hTR2Oicy/fuTzQgXo49EgkS7B61gJWOeQjfw8u9tO+YoRbnPgWMnDsQWE3xE/MJyt6R0w0QwHsQa28KjdzCfq6qvJSlTSyhFfU9VJIf2YkjFtSlOpoyqYXKmHC6cS3pLrWsxDdVZTpZrgcZ8ec2deowrLDnn9mL5OKljGHmEaptocVHGWGfs9VNlxNqDAhRC4IKCQSIt6pnXc+eLo6Es0J50WhqHTGdqMG5brJGSlgEVaZobeBuvyFIxEvtt33MDjjkdiXCjKoTl8IS7/LNlvLYtDTWRvazK390IUXpldICw0xAp3layR/IDZA0diLEwQzbdESkyO18osPQ==",
    

    Grant the AKS key vault managed identity permissions to read (GET) your key vault and view its contents:

    Set policy to access keys in your key vault

    az keyvault set-policy -n $KEY_VAULT_NAME --key-permissions get --spn $KV_IDENTITY_RESOURCE_ID
    "objectId": "ebejced9-2f89-8176-a9u3-657f75eb36bb", granted the permissions to read the object     "objectId": "9f8165b6-206f-4596-932f-31e80469700f"
    
     "keys": [
                "get"
              ],
    

    Set policy to access secrets in your key vault

    az keyvault set-policy -n $KEY_VAULT_NAME --secret-permissions get --spn $KV_IDENTITY_RESOURCE_ID
    "objectId": "ebejced9-2f89-8176-a9u3-657f75eb36bb", granted the permissions to read the object     "objectId": "9f8165b6-206f-4596-932f-31e80469700f"
    "secrets": [
                "get"
              ]
    

    Set policy to access certs in your key vault

    az keyvault set-policy -n $KEY_VAULT_NAME --certificate-permissions get --spn $KV_IDENTITY_RESOURCE_ID
    
     "certificates": [
                "get"
              ],
    
    Create Kubernetes resources
    Store the tenant ID in a variable; you can get the value from the Azure AD tenant overview page:

    TENANT_ID=<your-tenant-id>   # for example: TENANT_ID=46edb775-xy69-41z6-7be1-03e4a0997e49
    

    Create a SecretProviderClass by using the following YAML, using your own values for userAssignedIdentityID, keyvaultName, tenantId, and the objects to retrieve from your key vault:

    
    cat <<EOF | kubectl apply -f -
    ---
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: azure-kvname-user-msi
    spec:
      provider: azure
      parameters:
        usePodIdentity: "false"
        useVMManagedIdentity: "true" # true since using managed identity
        userAssignedIdentityID: 1456c162-3f04-40bc-a079-f1f3f7d22b16 #$KV_IDENTITY_RESOURCE_ID
        keyvaultName: akscorpkeyvault5493    #$KEY_VAULT_NAME
        cloudName: ""
        objects:  |
          array:
            - |
              objectName: FirstSecret        #ExampleSecret
              objectType: secret    # object types: secret, key, or cert
              objectVersion: ""     # default to latest if empty
            - |
              objectName: FirstKey        #ExampleKey
              objectType: key
              objectVersion: ""
        tenantId: 46edb775-xy69-41z6-7be1-03e4a0997e49 #$TENANT_ID
    EOF
    
    secretproviderclass.secrets-store.csi.x-k8s.io/azure-kvname-user-msi configured
    

    At this point, you need a pod that mounts the secret and the key using the SecretProviderClass we created above:

    
    cat <<EOF | kubectl apply -f -
    ---
    kind: Pod
    apiVersion: v1
    metadata:
      name: busybox-secrets-store-inline-user-msi
    spec:
      containers:
        - name: busybox
          image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
          command:
            - "/bin/sleep"
            - "10000"
          volumeMounts:
          - name: secrets-store01-inline
            mountPath: "/mnt/secrets-store"
            readOnly: true
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-kvname-user-msi"
    EOF
    
    
    pod/busybox-secrets-store-inline-user-msi created
    

    Validate secrets were mounted from the pod created earlier:

    kubectl exec busybox-secrets-store-inline-user-msi -- ls /mnt/secrets-store/
    

    Read the content(s) of the secret and key:

    kubectl exec busybox-secrets-store-inline-user-msi -- cat /mnt/secrets-store/FirstSecret
    kubectl exec busybox-secrets-store-inline-user-msi -- cat /mnt/secrets-store/FirstKey
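
    If a workload needs the value as a native Kubernetes Secret (for example, to consume it as an environment variable) rather than a mounted file, the Secrets Store CSI Driver can mirror mounted objects into a Secret through a secretObjects stanza under spec in the SecretProviderClass, alongside provider and parameters. The fragment below is a sketch, with the Secret name and key chosen for illustration; note the mirrored Secret only exists while a pod mounts the volume:

      secretObjects:
        - secretName: firstsecret-k8s      # hypothetical Secret name
          type: Opaque
          data:
            - objectName: FirstSecret      # must match an object defined in the objects array
              key: firstsecret             # key under which the value appears in the Secret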
    How to Successfully RDP into Azure AD-Joined Virtual Machines

    Remote Desktop Connection does not always work with cloud machines. If you want to know how to successfully RDP into Azure AD-joined virtual machines, then this article is all you’d ever need.

    If you have struggled to remote desktop to a virtual machine in Azure, then it is likely to be a Windows Server or Desktop machine.

    Azure uses the AADLoginForWindows extension to enable users to log in with their domain (Azure AD) credentials.

    It doesn’t always work, and in my experience I hadn’t had much success with it until now, when I finally figured out how to successfully RDP into an Azure AD-joined VM in Azure.

    Below are the steps needed to successfully achieve our objective.

  • Create Virtual Machine
  • Install Extensions for Azure Active Directory Login
  • Turn off Network Level Authentication

    Step 1: Create a Virtual Machine

    
    az group create --name your-resourcegroup-name --location westus
    
    az vm create \
        --resource-group your-resourcegroup-name \
        --name your-vm-name \
        --image Win2019Datacenter \
        --assign-identity \
        --admin-username localadminuser \
        --admin-password yourpassword
    
    

    Although this extension can be installed at the time of creation of the virtual machine, running the following Azure CLI command will still install the extension for you.

    Step 2: Install Required Extensions

    
    az vm extension set \
        --publisher Microsoft.Azure.ActiveDirectory \
        --name AADLoginForWindows \
        --resource-group your-resourcegroup-name \
        --vm-name your-vm-name
    
    

    This article is intended to fix a peculiar problem encountered in remote desktop connections to Windows Server Virtual Machines on Azure. With the local administrator account, I could remote desktop to the virtual machine but not with domain accounts.

    Figure 1.0 – The Logon Attempt Failed.
    the logon attempt failed

    Install required extensions for the virtual machine
    Install WindowsAADLogin Extension with RBAC
    aadloginforwindows

    Enable Remote Desktop Access | 3389 on the NSG
    This can be done at the creation of the virtual machine.

    Now that you’ve created the VM and enabled the appropriate extension(s), you need to configure an Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login.

    Add either of these IAM roles to the RBAC user:

  • Virtual Machine User Login: users who have this role assigned can log in to an Azure virtual machine with regular user privileges.

  • Virtual Machine Administrator Login: users who have this role assigned can log in to an Azure virtual machine with administrator privileges.

    
    $username=$(az account show --query user.name --output tsv)
    $rg=$(az group show --resource-group your-resourcegroup-name --query id -o tsv)
    
    az role assignment create \
        --role "Virtual Machine Administrator Login" \
        --assignee $username \
        --scope $rg
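
    You can confirm the assignment took effect before attempting to connect; the command below lists role assignments for the signed-in user at the resource group scope:

    az role assignment list --assignee $username --scope $rg --output table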
    
    

    Mitigation | Steps I followed to fix this issue.

    Windows Key + R
    

    press windows key

    Type sysdm.cpl and press Enter.

    type sysdm.cpl

    Uncheck the Allow connections only from computers running Remote Desktop with Network Level Authentication (recommended) box.
    Uncheck the Allow connections only from computers running Remote Desktop with Network Level Authentication (recommended) box

    Edit the RDP file
    Add the following lines to the RDP connection file with a text editor of your choosing. Save the file, ensuring it keeps the *.rdp extension and is not saved as any other file type.

    
    authentication level:i:2
    enablecredsspsupport:i:0
    

    Add a full-stop (.) character before the \azuread\ domain, as shown in the optional username line below.

    #optional line – make a note of the full-stop character before the \azuread\

    full address:s:10.X.Y.Z:3389
    prompt for credentials:i:1
    administrative session:i:1
    
    
    authentication level:i:2
    enablecredsspsupport:i:0
    
    username:s:.\azuread\username@domain.com

    .\azuread\username@domain.ext

    If you are not interested in the optional line configuration, then you will now need to enter your credentials once connection is initiated as thus;

    username: azuread\user@domain.com
    password: **************
    

    make a note of the full-stop character before the AzureAD domain

    edit rdp connection file

    Initiate Connection to Virtual Machine

    logon to azure virtual machine with add user account

    If you have followed the above steps diligently, then the logon failure should no longer occur.

    If you want to learn more about how to troubleshoot virtual machines, Microsoft’s documentation on the subject is a useful resource.

    How to Create Azure Standard Load Balancer with Backend Pools in Terraform
    create azure standard load balancer with backend pools in terraform
    Image Reference: https://docs.microsoft.com/en-us/azure/load-balancer/media/load-balancer-overview/load-balancer.svg
    Building infrastructure with code is where the majority of future cloud deployments will go. In this architecture of how to create an Azure standard load balancer with backend pools in Terraform, I have created an Azure standard load balancer with backend pools to accommodate two Linux virtual machines.

    Configure a Linux virtual machine in Azure using Terraform

    How to Create Azure Standard Load Balancer with Backend Pools in Terraform

    Below is a list of parts which constitutes this build.

    • Resource Group
    • Virtual Machines
    • Network Interfaces
    • Standard Loadbalancer
    • Availability Sets

    As it appears in Azure
    moving parts to creating backend address pool addition of nics with terraform

    Open your IDE and create the following Terraform files;
    providers.tf
    network.tf
    loadbalancer.tf
    virtualmachines.tf

    Clone the Git Code Repository

    git clone https://github.com/expertcloudconsultant/createazureloadbalancer.git
    

    #Create the providers providers.tf

    #IaC on Azure Cloud Platform | Declare Azure as the Provider
    # Configure the Microsoft Azure Provider
    terraform {
    
      required_version = ">=0.12"
    
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "~>2.0"
        }
      }
    
    
    }
    
    provider "azurerm" {
      features {}
    }
    

    #Create the virutal network and subnets with with Terraform. network.tf

    #Create Resource Groups
    resource "azurerm_resource_group" "corporate-production-rg" {
      name     = "corporate-production-rg"
      location = var.avzs[0] #Availability Zone 0 always marks your Primary Region.
    }
    
    
    
    #Create Virtual Networks > Create Spoke Virtual Network
    resource "azurerm_virtual_network" "corporate-prod-vnet" {
      name                = "corporate-prod-vnet"
      location            = azurerm_resource_group.corporate-production-rg.location
      resource_group_name = azurerm_resource_group.corporate-production-rg.name
      address_space       = ["10.20.0.0/16"]
    
      tags = {
        environment = "Production Network"
      }
    }
    
    
    #Create Subnet
    resource "azurerm_subnet" "business-tier-subnet" {
      name                 = "business-tier-subnet"
      resource_group_name  = azurerm_resource_group.corporate-production-rg.name
      virtual_network_name = azurerm_virtual_network.corporate-prod-vnet.name
      address_prefixes     = ["10.20.10.0/24"]
    }
    
    #Create Private Network Interfaces
    resource "azurerm_network_interface" "corpnic" {
      name                = "corpnic-${count.index + 1}"
      location            = azurerm_resource_group.corporate-production-rg.location
      resource_group_name = azurerm_resource_group.corporate-production-rg.name
      count               = 2
    
      ip_configuration {
        name                          = "ipconfig-${count.index + 1}"
        subnet_id                     = azurerm_subnet.business-tier-subnet.id
        private_ip_address_allocation = "Dynamic"
    
      }
    }
    

    #Create the standard load balancer with Terraform. loadbalancer.tf

    #Create Load Balancer
    resource "azurerm_lb" "business-tier-lb" {
      name                = "business-tier-lb"
      location            = azurerm_resource_group.corporate-production-rg.location
      resource_group_name = azurerm_resource_group.corporate-production-rg.name
    
      frontend_ip_configuration {
        name                          = "businesslbfrontendip"
        subnet_id                     = azurerm_subnet.business-tier-subnet.id
        private_ip_address            = var.env == "Static" ? var.private_ip : null
        private_ip_address_allocation = var.env == "Static" ? "Static" : "Dynamic"
      }
    }
    

    create loadbalancer with terraform

    #Create Loadbalancing Rules

    #Create Loadbalancing Rules
    resource "azurerm_lb_rule" "production-inbound-rules" {
      loadbalancer_id                = azurerm_lb.business-tier-lb.id
      resource_group_name            = azurerm_resource_group.corporate-production-rg.name
      name                           = "ssh-inbound-rule"
      protocol                       = "Tcp"
      frontend_port                  = 22
      backend_port                   = 22
      frontend_ip_configuration_name = "businesslbfrontendip"
      probe_id                       = azurerm_lb_probe.ssh-inbound-probe.id
      backend_address_pool_ids        = ["${azurerm_lb_backend_address_pool.business-backend-pool.id}"]
     
    
    }
    

    create loadbalancing rules with terraform

    #Create Probe

    #Create Probe
    resource "azurerm_lb_probe" "ssh-inbound-probe" {
      resource_group_name = azurerm_resource_group.corporate-production-rg.name
      loadbalancer_id     = azurerm_lb.business-tier-lb.id
      name                = "ssh-inbound-probe"
      port                = 22
    }
    

    create loadbalancing probes with terraform

    created loadbalancing probes with terraform

    #Create Backend Address Pool

    #Create Backend Address Pool
    resource "azurerm_lb_backend_address_pool" "business-backend-pool" {
      loadbalancer_id = azurerm_lb.business-tier-lb.id
      name            = "business-backend-pool"
    }
    

    create backend address pool with terraform

    #Automated Backend Pool Addition

    #Automated Backend Pool Addition > Gem Configuration to add the network interfaces of the VMs to the backend pool.
    resource "azurerm_network_interface_backend_address_pool_association" "business-tier-pool" {
      count                   = 2
      network_interface_id    = azurerm_network_interface.corpnic.*.id[count.index]
      ip_configuration_name   = azurerm_network_interface.corpnic.*.ip_configuration.0.name[count.index]
      backend_address_pool_id = azurerm_lb_backend_address_pool.business-backend-pool.id
    
    }
    

    This line of configuration is what intelligently adds the network interfaces to the backend pool. I call it a gem because it took me quite some time to figure it all out.

     ip_configuration_name   = azurerm_network_interface.corpnic.*.ip_configuration.0.name[count.index]
    

    create backend address pool addition of nics with terraform

    created backend address pool addition of nics with terraform

    Create the Linux Virtual Machines virtualmachines.tf

    # Create (and display) an SSH key
    resource "tls_private_key" "linuxvmsshkey" {
      algorithm = "RSA"
      rsa_bits  = 4096
    }
    
    #Custom Data Insertion Here
    data "template_cloudinit_config" "webserverconfig" {
      gzip          = true
      base64_encode = true
    
      part {
    
        content_type = "text/cloud-config"
        content      = "packages: ['nginx']"
      }
    }
    
    
    
    # Create Network Security Group and rule
    resource "azurerm_network_security_group" "corporate-production-nsg" {
      name                = "corporate-production-nsg"
      location            = azurerm_resource_group.corporate-production-rg.location
      resource_group_name = azurerm_resource_group.corporate-production-rg.name
    
    
      #Add rule for Inbound Access
      security_rule {
        name                       = "SSH"
        priority                   = 1001
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = var.ssh_access_port # references the SSH port (22) defined in vars.tf
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }
    }
    
    
    #Connect NSG to Subnet
    resource "azurerm_subnet_network_security_group_association" "corporate-production-nsg-assoc" {
      subnet_id                 = azurerm_subnet.business-tier-subnet.id
      network_security_group_id = azurerm_network_security_group.corporate-production-nsg.id
    }
    
    
    
    #Availability Set - Fault Domains [Rack Resilience]
    resource "azurerm_availability_set" "vmavset" {
      name                         = "vmavset"
      location                     = azurerm_resource_group.corporate-production-rg.location
      resource_group_name          = azurerm_resource_group.corporate-production-rg.name
      platform_fault_domain_count  = 2
      platform_update_domain_count = 2
      managed                      = true
      tags = {
        environment = "Production"
      }
    }
    
    
    #Create Linux Virtual Machines Workloads
    resource "azurerm_linux_virtual_machine" "corporate-business-linux-vm" {
    
      name                  = "${var.corp}linuxvm${count.index}"
      location              = azurerm_resource_group.corporate-production-rg.location
      resource_group_name   = azurerm_resource_group.corporate-production-rg.name
      availability_set_id   = azurerm_availability_set.vmavset.id
      network_interface_ids = [azurerm_network_interface.corpnic[count.index].id]
      size                  = "Standard_B1s" # alternatives: "Standard_D2ads_v5", "Standard_DC1ds_v3", "Standard_D2s_v3"
      count                 = 2
    
    
      #Create Operating System Disk
      os_disk {
        name                 = "${var.corp}disk${count.index}"
        caching              = "ReadWrite"
        storage_account_type = "Standard_LRS" #Consider Storage Type
      }
    
    
      #Reference Source Image from Publisher
      source_image_reference {
        publisher = "Canonical"                    #az vm image list -p "Canonical" --output table
        offer     = "0001-com-ubuntu-server-focal" # az vm image list -p "Canonical" --output table
        sku       = "20_04-lts-gen2"               #az vm image list -s "20.04-LTS" --output table
        version   = "latest"
      }
    
    
      #Create Computer Name and Specify Administrative User Credentials
      computer_name                   = "corporate-linux-vm${count.index}"
      admin_username                  = "linuxsvruser${count.index}"
      disable_password_authentication = true
    
    
    
      #Create SSH Key for Secured Authentication - on Windows Management Server [Putty + PrivateKey]
      admin_ssh_key {
        username   = "linuxsvruser${count.index}"
        public_key = tls_private_key.linuxvmsshkey.public_key_openssh
      }
    
      #Deploy Custom Data on Hosts
      custom_data = data.template_cloudinit_config.webserverconfig.rendered
    
    }
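
    Once the apply completes, it helps to see where the two machines landed; an output over the NICs defined earlier in this build does the trick (a sketch – the output name is arbitrary):

    output "linuxvm_private_ips" {
      value = azurerm_network_interface.corpnic[*].private_ip_address
    }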
    

    If you would rather use the UI to build the solution above, follow Microsoft's Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer and two virtual machines.

    ]]>
    Securing and Monitoring Your Network Security on Azure https://www.expertnetworkconsultant.com/expert-approach-in-successfully-networking-devices/securing-and-monitoring-your-network-security-on-azure/ Wed, 13 Jul 2022 12:00:23 +0000 http://www.expertnetworkconsultant.com/?p=5173 Continue readingSecuring and Monitoring Your Network Security on Azure]]> Do you know what is really going on with your network? Securing and monitoring your network security on Azure must be a de facto practice for all engineers.

    I love dashboards, the kind that give me useful information, and bless God, Azure has such dashboards in place, with rich detail that helps you, the engineer, truly know what is happening on your network.

    Deployment in an Azure Region showing Benign and Malicious traffic from different Regions

    So here is the whole story. I had a project to deploy new infrastructure, which I gladly did in Terraform. There is a set of practices to follow where cloud infrastructure is concerned, and spinning up virtual machines or instances is only one piece of the story.
    NSGs, as we call them, can show you a great deal of information, but none of it is captured by default until certain pieces of configuration are put in place.

    To test access to the virtual machine, I had created a Public IP and attached it to the Network Interface of the Virtual Machine temporarily.

    Most problems originate from temporary rules and configurations created in the name of testing; I had forgotten to complete the test and remove the rule. This is why it is of utmost importance to enable NSG Flow Logs and to keep an eye on your Network Watcher. Never assume full protection until you have worked to attain it.
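
    For reference, the flow logs and Traffic Analytics could have been switched on in the same Terraform build. A minimal sketch, assuming a storage account ("flowlogs") and a Log Analytics workspace ("law") are defined elsewhere in the configuration, and using the default Network Watcher that Azure creates per region:

    resource "azurerm_network_watcher_flow_log" "production-flow-log" {
      name                      = "production-flow-log"
      network_watcher_name      = "NetworkWatcher_uksouth" # Azure's default per-region watcher (assumed region)
      resource_group_name       = "NetworkWatcherRG"       # Azure's default Network Watcher resource group
      network_security_group_id = azurerm_network_security_group.corporate-production-nsg.id
      storage_account_id        = azurerm_storage_account.flowlogs.id # assumed storage account
      enabled                   = true
      version                   = 2

      retention_policy {
        enabled = true
        days    = 7
      }

      traffic_analytics {
        enabled               = true
        workspace_id          = azurerm_log_analytics_workspace.law.workspace_id
        workspace_region      = azurerm_log_analytics_workspace.law.location
        workspace_resource_id = azurerm_log_analytics_workspace.law.id
        interval_in_minutes   = 10
      }
    }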

    Create NSG Flow Logs to Capture Traffic Flows into the NSG on Azure

    Figure – deployment in an Azure region showing benign and malicious traffic

    Sample Log Analytics (KQL) Query to Check for Malicious Traffic

    AzureNetworkAnalytics_CL
    | where SubType_s == 'FlowLog'
        and (FASchemaVersion_s == '1' or FASchemaVersion_s == '2')
        and FlowStartTime_t between (datetime('2022-06-03T04:44:29.994Z') .. datetime('2022-06-04T04:44:29.994Z'))
        and FlowType_s == 'MaliciousFlow' // Traffic Analytics classifies flows from known-bad IPs as MaliciousFlow
    

    Figure 1.0 – Showing Malicious Traffic to Port 22

    Deployment in an Azure Region showing allowed Malicious Traffic

    Identifying Source of Attacks

    I checked the NSG for allowed inbound rules – the SSH inbound rule was to blame.

    Use Azure CLI to query this information – Include the Default Azure NSG Rules

    az network nsg rule list --include-default --resource-group <resource-group-name> --nsg-name <nsg-name>
    

    Figure – the allowed SSH TCP port 22 inbound rule

    Check Malicious Traffic Flows

    Figure – malicious traffic flow topology on Azure

    What I did to mitigate this security failure:

    1. Removed the offending SSH rule on the NSG
    2. Removed the Public IP

    Network security group rules are evaluated by priority, using the combination of source, source port, destination, destination port, and protocol, to allow or deny traffic. A security rule can't have the same priority and direction as an existing rule. You can't delete the default security rules, but you can override them with rules that have a higher priority (a lower priority number).
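
    A guard rail along these lines, added to the corporate-production-nsg defined earlier, would have kept the temporary allow rule at priority 1001 from ever being reachable from the internet. A sketch – the rule name is illustrative:

      security_rule {
        name                       = "deny-ssh-from-internet"
        priority                   = 100 # a lower number means higher priority, so this is evaluated before the allow at 1001
        direction                  = "Inbound"
        access                     = "Deny"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "Internet"
        destination_address_prefix = "*"
      }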

    Check Inbound Security Rules and delete the offending SSH Inbound Rule

    Disassociate and Delete the Public IP attached to a Virtual Machine Network Interface

    These articles are extremely useful:
    Enable Azure Network Watcher
    Traffic Analytics – frequently asked questions

    ]]>