Wednesday, 19 November 2025

Access Private URLs within an Azure vNet with an Azure Pipelines Microsoft-Hosted Agent via an AKS Pod in the vNet

If we are using Microsoft-hosted agents for Azure Pipelines to deploy Azure infrastructure and need to access vNet-protected URLs of the deployed services, we can use a pod in an AKS cluster within the same vNet as a jump host. This gives us access to endpoints in the vNet and the ability to resolve DNS names defined in the vNet's private DNS zones. Let's look at how to achieve this, step by step, while using a Microsoft-hosted agent in Azure Pipelines.

The expectation is to access a URL such as http://es-search.sh.aks.ch-demo-dev-euw-002.net/demoindex001/_count, so that the AKS-hosted Elasticsearch is reached via an AKS pod and the results are returned to the pipeline agent as shown below. Since the Microsoft-hosted agent is outside the vNet, it cannot directly reach this Elasticsearch (deployed in AKS) URL.
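As a rough sketch of the idea, the pipeline can get credentials for the cluster and run a short-lived curl pod inside the vNet with kubectl, proxying the request through it. The resource group and cluster names below are hypothetical placeholders, and the agent is assumed to be able to authenticate to the cluster (for example via an Azure CLI task, with kubelogin for Entra ID enabled clusters).

```bash
# Sketch: reach the vNet-private URL from a Microsoft-hosted agent via an AKS pod.
# The resource group and cluster names are hypothetical placeholders.
az aks get-credentials --resource-group rg-demo-dev --name aks-demo-dev --overwrite-existing

# Run a temporary pod inside the vNet; it resolves the private DNS zone entry
# and returns the Elasticsearch response to the pipeline agent's console.
kubectl run curl-jump --rm -i --restart=Never --image=curlimages/curl --command -- \
  curl -s "http://es-search.sh.aks.ch-demo-dev-euw-002.net/demoindex001/_count"
```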



Thursday, 13 November 2025

Whitelist Microsoft-Hosted Azure Pipeline Agent IPs in Required Azure Resources and Remove Whitelisted IPs Dynamically with Azure Pipelines

If you are deploying Azure infrastructure using Microsoft-hosted Azure Pipelines agents, you may have to whitelist the Microsoft-hosted agent IP address in resources such as the storage account where you keep your Terraform state, or the key vaults if you add or update secrets in the key vault via Terraform, when such resources are network protected within a vNet in Azure. If the IP is not whitelisted, there will be access issues and pipelines will fail to make the required updates. Let's look at two steps we can implement to add the agent IP to a state storage account and a key vault, and then remove the IP once terraform plan or apply is done.

The expectation is to execute two tasks in the pipeline job as shown below.
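A minimal sketch of the two steps, each of which would run inside an Azure CLI task in the job. The storage account, key vault and resource group names are hypothetical, and the agent's public IP is discovered via an external service such as api.ipify.org. The removal snippet would run in a task with condition: always() so the IP is cleaned up even when plan or apply fails.

```bash
# Task 1 (before terraform init): whitelist the current agent's public IP.
# The resource group, storage account and key vault names are hypothetical.
AGENT_IP=$(curl -s https://api.ipify.org)

az storage account network-rule add \
  --resource-group rg-tfstate --account-name sttfstatedemo --ip-address "$AGENT_IP"
az keyvault network-rule add --name kv-demo-dev --ip-address "$AGENT_IP"

# Task 2 (after terraform plan/apply, in a task with condition: always()):
# remove the rules so the agent IP is not left whitelisted.
az storage account network-rule remove \
  --resource-group rg-tfstate --account-name sttfstatedemo --ip-address "$AGENT_IP"
az keyvault network-rule remove --name kv-demo-dev --ip-address "$AGENT_IP"
```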


Tuesday, 21 October 2025

Visualize Dead Letter Counts in RabbitMQ Deployed in AKS

Using the Prometheus data obtained by "Enable Prometheus Data Scraping for RabbitMQ Cluster Deployed with RabbitMQ Cluster Operator on AKS with Managed Prometheus", let's create a Grafana chart to view any messages landing in dead letter queues in the RabbitMQ cluster deployed in AKS.

The expectation is to have a chart as shown below.
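At its core, the chart is a PromQL query over the per-queue message gauge. Here is a sketch of the kind of query the Grafana panel could use, assuming the dead letter queues follow a naming convention such as a dlq suffix and per-queue metrics are exposed by the RabbitMQ Prometheus plugin; it is tried out against the Azure Monitor workspace's Prometheus query endpoint (the workspace endpoint URL is a placeholder).

```bash
# Sketch: test the panel's PromQL against the Azure Monitor workspace query endpoint.
# The endpoint URL and the ".*dlq.*" queue naming convention are assumptions.
TOKEN=$(az account get-access-token \
  --resource https://prometheus.monitor.azure.com --query accessToken -o tsv)

curl -s -H "Authorization: Bearer $TOKEN" \
  "https://amw-demo-dev-abcd.westeurope.prometheus.monitor.azure.com/api/v1/query" \
  --data-urlencode 'query=sum by (queue) (rabbitmq_queue_messages{queue=~".*dlq.*"})'
```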


Tuesday, 14 October 2025

Enable Prometheus Data Scraping for RabbitMQ Cluster Deployed with RabbitMQ Cluster Operator on AKS with Managed Prometheus

Once we have "Setup Managed Prometheus for AKS via Terraform" and "Set Up RabbitMQ Cluster in AKS Using RabbitMQ Cluster Operator" in place, we can enable monitoring for RabbitMQ in AKS. To enable Prometheus data scraping for the RabbitMQ cluster on AKS, we need to deploy a service monitor. Additionally, we can deploy a pod monitor to scrape metrics from the RabbitMQ cluster operator. When data scraping is enabled, we will be able to get the metrics data for RabbitMQ as shown below, on Azure Managed Grafana using Azure managed Prometheus on AKS as the data source, via an Azure Monitor workspace.
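A minimal sketch of such a service monitor, using the azmonitoring.coreos.com custom resources that the managed Prometheus addon on AKS watches. The namespace, the RabbitMQ cluster name in the label selector, and the scrape interval are assumptions to adjust for your deployment.

```bash
# Sketch: service monitor for the managed Prometheus addon on AKS.
# Namespace and label selector are assumptions; match them to your RabbitmqCluster.
kubectl apply -f - <<'EOF'
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rabbitmq-servicemonitor
  namespace: rabbitmq
spec:
  endpoints:
    - port: prometheus          # metrics port exposed by the cluster's service
      interval: 30s
  selector:
    matchLabels:
      app.kubernetes.io/name: rabbitmq-demo   # hypothetical RabbitmqCluster name
EOF
```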


Friday, 10 October 2025

Enable Windows Data Scraping for AKS Managed Prometheus with Azure Managed Grafana

 We have "Setup Managed Prometheus for AKS via Terraform", however, that setup alone will not provide windows metrics from AKS clsuter to Azure managed Grafana. We have to addtionaly, setup Windows exporter and couple of additional configurations to make it work as decribed in the official Microsoft docs here. Let's look at step by step how to enable windows metrics for AKS with managed prometheus.

The expected outcome is getting metrics such as the ones shown below into Managed Grafana and visualizing them.
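In rough terms, and following the Microsoft docs, this means deploying the windows-exporter DaemonSet to the Windows node pool and then enabling the Windows collectors in the metrics addon's settings configmap. A sketch follows; the manifest file names mirror the docs, so verify them against the current version.

```bash
# Sketch: deploy the windows-exporter DaemonSet (manifest from the Microsoft docs)
# so it runs on the Windows node pool.
kubectl apply -f windows-exporter-daemonset.yaml

# In ama-metrics-settings-configmap.yaml (kube-system), under
# default-scrape-settings-enabled, set windowsexporter and windowskubeproxy
# to true, then apply the configmap so the addon starts scraping Windows metrics.
kubectl apply -f ama-metrics-settings-configmap.yaml

# Confirm the exporter pods landed on the Windows nodes.
kubectl get pods -n kube-system -o wide | grep windows-exporter
```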


Thursday, 25 September 2025

Enable Prometheus Data Scraping for Bitnami Redis Standalone Deployed on AKS with Managed Prometheus

We discussed "Enable Prometheus Data Scraping for Bitnami Redis Cluster Deployed on AKS with Managed Prometheus" in the previous post. In a similar way, we can set up Prometheus data scraping for standalone Redis deployments on AKS as well. Here are the steps.

The expectation is to have the metrics sidecar run with the standalone Redis master and replica pods as shown below, and to set up a service monitor so that Redis metrics data is made available in Azure Managed Grafana via managed Prometheus on AKS.
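A sketch of the standalone variant's service monitor, assuming the Bitnami chart was installed with metrics.enabled=true so the redis-exporter sidecar runs alongside the master and replica pods. The namespace, release name and metrics port name are assumptions matching the chart's usual defaults.

```bash
# Sketch: service monitor for the redis-exporter sidecar (Bitnami chart with
# metrics.enabled=true). Namespace, labels and port name are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-standalone-servicemonitor
  namespace: redis
spec:
  endpoints:
    - port: http-metrics        # redis-exporter port on the chart's service
      interval: 30s
  selector:
    matchLabels:
      app.kubernetes.io/name: redis
      app.kubernetes.io/instance: redis-standalone   # hypothetical release name
EOF
```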

Wednesday, 17 September 2025

Performing Terraform Import via Azure Pipelines for Existing Azure Resources

Importing state into Terraform may be required when a resource was manually created previously and now needs to be managed by Terraform. Or it can be a situation where you move code from one repo to another for reorganizing purposes, and now you need to refer to an existing resource in Azure and map it to the new repo's Terraform code. The state import can be performed manually with the terraform import command. However, performing such a task manually against production environments is not ideal, and close to impossible in automated deployment implementations. In this post, let's discuss an Azure Pipelines task that can be used to perform the state imports in a rerunnable way.

Such a task to update the Terraform state in an Azure pipeline should be placed between the terraform init and terraform plan tasks, as shown below.
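A minimal sketch of the rerunnable pattern inside that task: check whether the address is already tracked in state and only import when it is missing, so reruns of the pipeline do not fail on an already-imported resource. The resource address and Azure resource ID below are hypothetical placeholders.

```bash
# Sketch: rerunnable terraform import, placed between terraform init and plan.
# The resource address and the Azure resource ID are hypothetical placeholders.
ADDRESS="azurerm_storage_account.demo"
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Storage/storageAccounts/stdemo"

# Import only when the address is not already in state, so the task can rerun safely.
if ! terraform state show "$ADDRESS" >/dev/null 2>&1; then
  terraform import "$ADDRESS" "$RESOURCE_ID"
fi
```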

