Saturday, 31 January 2026

Use Terraform to Switch Azure Key Vault from Access Policies to RBAC Permissions Without Downtime for Applications/Users

For Azure Key Vaults, the access policy based permission setup is now legacy, and according to the official Microsoft documentation here, all key vaults will eventually have to use Azure RBAC permissions for data access. We can set up the change using Terraform. However, in production scenarios we have to be careful when switching from access policies to RBAC, to avoid interruptions to applications. Taking a two-step approach, first setting up the RBAC role assignments and then, in a subsequent release, switching the key vault to RBAC, helps keep the transition smooth. Let's look at how to set up this requirement with Terraform.
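As a rough sketch of the two-step idea (resource and variable names below are placeholders, not the post's actual code, and the azurerm 3.x provider is assumed), step one only adds the needed RBAC role assignments while the vault stays on access policies, and step two then flips the vault itself to RBAC:

data "azurerm_client_config" "current" {}

# Step 1: grant the RBAC roles applications and users will need while the
# vault is still using access policies (the principal id is a placeholder).
resource "azurerm_role_assignment" "app_kv_secrets_user" {
  scope                = azurerm_key_vault.demo.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = var.app_principal_id
}

# Step 2 (next release): switch the vault itself from access policies to RBAC.
resource "azurerm_key_vault" "demo" {
  name                      = "kv-demo-dev"      # placeholder name
  location                  = var.location
  resource_group_name       = var.resource_group_name
  tenant_id                 = data.azurerm_client_config.current.tenant_id
  sku_name                  = "standard"
  enable_rbac_authorization = true               # kept false in step 1, set to true in step 2
}

Since the role assignments from step one are already in place when the switch happens, applications keep their data plane access throughout the change.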

The expectation is to have a key vault set up with RBAC permissions as shown below.


 

Thursday, 15 January 2026

Using Remote Terraform State

Sometimes resources common to multiple different setups might need to be created with a shared Terraform codebase. In such cases, the common Terraform resources may need to be referred to, via their state, from other Terraform code. For this requirement we can use Terraform remote state. Let's see how to use Terraform remote state step by step in this post.

The expectation is to refer to the Azure resources in remote Terraform state as shown below. Here you can see we have referred to the resource group name and location, and to the log analytics workspace id, from the remote state.
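Roughly, such a reference could be written as in the sketch below, assuming the common resources' state sits in an azurerm backend and exposes outputs named resource_group_name, location and log_analytics_workspace_id (the backend details, output names and the example resource are placeholders):

# Read the state written by the common Terraform code (backend values are placeholders)
data "terraform_remote_state" "common" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-demo-tfstate"
    storage_account_name = "stdemotfstate"
    container_name       = "tfstate"
    key                  = "common.terraform.tfstate"
  }
}

# Example usage: place a new resource in the shared resource group and point it
# at the shared Log Analytics workspace
resource "azurerm_application_insights" "demo" {
  name                = "appi-demo-dev"
  resource_group_name = data.terraform_remote_state.common.outputs.resource_group_name
  location            = data.terraform_remote_state.common.outputs.location
  workspace_id        = data.terraform_remote_state.common.outputs.log_analytics_workspace_id
  application_type    = "web"
}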


Wednesday, 19 November 2025

Access a Private URL within an Azure vNet with an Azure Pipelines Microsoft Hosted Agent via an AKS Pod in the vNet

If we are using Microsoft hosted agents for Azure Pipelines to deploy Azure infrastructure and need to access vNet protected URLs of the deployed services, we can use a pod in an AKS cluster within the same vNet as a jump host. This gives us access to endpoints in the vNet and the ability to resolve DNS names defined in the private DNS zones of the vNet. Let's look at, step by step, how to achieve this goal while using a Microsoft hosted agent in Azure Pipelines.

The expectation is to access a URL such as http://es-search.sh.aks.ch-demo-dev-euw-002.net/demoindex001/_count, so that the AKS hosted Elasticsearch is accessed via an AKS pod and the results are returned to the pipeline agent as shown below. Since the Microsoft hosted agent is outside the vNet, it cannot directly reach this Elasticsearch (deployed in AKS) URL.
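As a rough illustration of the jump host idea (the service connection, resource group, cluster, namespace and pod names below are made up for the sketch), a pipeline step could run the query through a pod that is already inside the vNet:

steps:
  - task: AzureCLI@2
    displayName: Query Elasticsearch via AKS jump pod
    inputs:
      azureSubscription: demo-azure-service-connection   # placeholder service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # Get credentials for the AKS cluster that sits in the same vNet (placeholder names)
        az aks get-credentials --resource-group rg-demo-dev --name aks-demo-dev --overwrite-existing
        kubelogin convert-kubeconfig -l azurecli   # if the cluster uses Microsoft Entra ID auth
        # Run curl inside an existing pod, so private DNS names and endpoints resolve from within the vNet
        kubectl exec -n tools deploy/jump-host -- \
          curl -s http://es-search.sh.aks.ch-demo-dev-euw-002.net/demoindex001/_count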



Thursday, 13 November 2025

Whitelist Microsoft Hosted Azure Pipeline Agent IPs in Required Azure Resources and Remove Whitelisted IPs Dynamically with Azure Pipelines

If you are deploying Azure infrastructure using Microsoft hosted Azure Pipelines agents, you may have to whitelist the Microsoft hosted agent IP address in resources such as the storage account where you keep your Terraform state, or the key vaults if you add or update secrets in them via Terraform, when such resources are network protected within a vNet in Azure. If the IP is not whitelisted, there will be access issues and pipelines will fail to make the required updates. Let's look at two steps we can implement to add the agent IP to the state storage account and key vault, and then remove the IP once terraform plan or apply is done.

The expectation is to execute two tasks in the pipeline job as shown below.
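A hedged sketch of such a pair of tasks could look like the following (the service connection, resource group, storage account and key vault names are placeholders, and the exact commands in the post may differ):

steps:
  - task: AzureCLI@2
    displayName: Whitelist agent IP on state storage and key vault
    inputs:
      azureSubscription: demo-azure-service-connection   # placeholder service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        agentIp=$(curl -s https://api.ipify.org)   # public IP of the hosted agent
        az storage account network-rule add --resource-group rg-demo-dev --account-name stdemotfstate --ip-address $agentIp
        az keyvault network-rule add --resource-group rg-demo-dev --name kv-demo-dev --ip-address $agentIp

  # ... terraform init / plan / apply steps run here ...

  - task: AzureCLI@2
    displayName: Remove whitelisted agent IP
    condition: always()   # clean up even if terraform fails
    inputs:
      azureSubscription: demo-azure-service-connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        agentIp=$(curl -s https://api.ipify.org)
        az storage account network-rule remove --resource-group rg-demo-dev --account-name stdemotfstate --ip-address $agentIp
        az keyvault network-rule remove --resource-group rg-demo-dev --name kv-demo-dev --ip-address $agentIp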


Tuesday, 21 October 2025

Visualize Dead Letter Counts in RabbitMQ Deployed in AKS

Using the Prometheus data obtained by "Enabling Prometheus Data Scraping for RabbitMQ Cluster Deployed with RabbitMQ Cluster Operator on AKS with Managed Prometheus", let's create a Grafana chart to view any messages that land in dead letter queues in the RabbitMQ cluster deployed in AKS.

The expectation is to have a chart as shown below.


Tuesday, 14 October 2025

Enable Prometheus Data Scraping for RabbitMQ Cluster Deployed with RabbitMQ Cluster Operator on AKS with Managed Prometheus

Once we have "Setup Managed Prometheus for AKS via Terraform" and "Set Up RabbitMQ Cluster in AKS Using RabbitMQ Cluster Operator" in place, we can enable monitoring for RabbitMQ in AKS. To enable Prometheus data scraping for the RabbitMQ cluster on AKS, we need to deploy a service monitor. Additionally, we can deploy a pod monitor as well to scrape metrics from the RabbitMQ cluster operator. When data scraping is enabled, we would be able to get the metrics data for RabbitMQ as shown below in Azure managed Grafana, using Azure managed Prometheus on AKS as the data source, via an Azure Monitor workspace.
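For example, a minimal service monitor for the RabbitMQ metrics endpoint could look roughly like the sketch below; the namespace and the cluster name label are assumptions, and a pod monitor for the operator would be declared similarly with kind: PodMonitor. Azure managed Prometheus uses its own monitor CRDs under the azmonitoring.coreos.com group.

apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rabbitmq-service-monitor
  namespace: rabbitmq                              # assumed namespace of the RabbitMQ cluster
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: rabbitmq-cluster     # assumed RabbitmqCluster name
  endpoints:
    - port: prometheus                             # metrics port on the operator-created service
      interval: 30s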


Friday, 10 October 2025

Enable Windows Data Scraping for AKS Managed Prometheus with Azure Managed Grafana

We have "Setup Managed Prometheus for AKS via Terraform"; however, that setup alone will not provide Windows metrics from the AKS cluster to Azure managed Grafana. We additionally have to set up the Windows exporter and a couple of extra configurations to make it work, as described in the official Microsoft docs here. Let's look at, step by step, how to enable Windows metrics for AKS with managed Prometheus.
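Besides deploying the windows-exporter DaemonSet from the Microsoft docs, the Azure Monitor metrics addon also needs its Windows scrape targets switched on. As a rough sketch, the relevant keys in the ama-metrics-settings-configmap in the kube-system namespace end up looking something like this (only the keys of interest are shown; the full configmap from the docs contains the other scrape targets as well):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ama-metrics-settings-configmap
  namespace: kube-system
data:
  default-scrape-settings-enabled: |-
    windowsexporter = true
    windowskubeproxy = true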

The expected outcome is getting metrics such as those shown below into managed Grafana and visualizing them.
