Tuesday, 26 August 2025

Deploy Redis Insight on AKS to View/Update Data in Redis Cluster Deployed on AKS

 "Setup Redis Cluster with JSON and Search Modules on AKS with Bitnami Redis Using Custom Image" is explained in the previous post. We can use Redis Insight to connect to the Redis cluster on AKS to explore data and add or update data. Let's look at how we can set up Redis Insight on AKS in this post.

The expectation is to have Redis Insight connected to the Redis cluster on AKS as shown below.


Thursday, 21 August 2025

Part 2 - Using Bash Instead of PowerShell for Setting Up Azure Managed Redis with Terraform Using AzApi

 We have discussed "Setting Up Azure Managed Redis with Terraform Using AzApi", since direct Terraform resources are not available until the pull request here is released. There we used a PowerShell script to extract the access key and output it via Terraform output. In this post let's look at how to perform the same steps with a Bash script instead of PowerShell and get Azure Managed Redis deployed as shown below.



Tuesday, 19 August 2025

Setting Up Azure Managed Redis with Terraform Using AzApi

 The new Azure Managed Redis can be deployed with balanced compute and memory, high availability, and useful modules such as RedisJSON and RediSearch. This makes it a really useful offering with good pricing options for using Redis as a managed service in Azure. Note that it is not the Redis Enterprise offering in Azure; Azure Managed Redis has more flexible pricing options. However, Terraform support for it is yet to be added and will be available in another month or so as per the pull request here. Let's see how to get Azure Managed Redis deployed with Terraform for now using AzAPI.

The expectation is to have a deployed Azure Managed Redis as shown below.

Wednesday, 13 August 2025

Setup Redis Cluster with JSON and Search Modules on AKS with Bitnami Redis Using Custom Image

 We have discussed "Build Custom Docker Images with Redis Json and Search Module Support for deploying Bitnami Redis Cluster and Standalone in AKS" previously, where we created a custom Redis cluster image including the Redis modules JSON, Search etc. In this post let's explore how to set up a Redis cluster on AKS using the Bitnami Helm chart, while using the custom-built image. The custom-built image is required to have the JSON and Search modules available, as described in that post.
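As a rough sketch, the Helm install using the custom image could look like the following, assuming the custom image has been pushed to your Azure Container Registry (the registry, repository and tag names below are placeholders):

```shell
# Placeholders: replace myacr.azurecr.io, redis-cluster-modules and the tag
# with your own custom image details.
helm install redis-cluster oci://registry-1.docker.io/bitnamicharts/redis-cluster \
  --namespace redis --create-namespace \
  --set image.registry=myacr.azurecr.io \
  --set image.repository=redis-cluster-modules \
  --set image.tag=8.0
```

The `image.*` values override the chart's default Bitnami image so the pods run the custom build instead.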

Note that a Redis cluster deployed to AKS (or to Kubernetes in general) can only be accessed within the Kubernetes cluster. Therefore, only the apps deployed to AKS/Kubernetes can access the Redis endpoint of the Redis cluster on AKS. To allow local development against Redis on AKS, we need to set up Bitnami Redis in standalone mode, which we will discuss in a future post.

Thursday, 31 July 2025

Build Custom Docker Images with Redis Json and Search Module Support for deploying Bitnami Redis Cluster and Standalone in AKS

 A Redis cluster deployed in AKS (Kubernetes) is a really useful way to use a Redis cache in dotnet projects. To deploy Redis on AKS we can use Bitnami Redis in cluster or standalone mode. A cluster-mode deployment is only accessible within the Kubernetes cluster; therefore, for development environments, to allow local machine access to Redis in an AKS cluster you need the standalone mode of deployment, which we will discuss in the next post. The Bitnami Redis images do not include the JSON and Search modules, which are useful to store JSON documents and search them in a Redis setup. In this post let's explore how to include additional modules in Bitnami Redis by building a custom image, so that the JSON and Search modules of Redis are available in AKS once deployed.

The expectation is to build custom Bitnami Redis images with module support similar to Redis 8 and get them added to Azure Container Registry as shown below.


 

Wednesday, 30 July 2025

Use Directory.Packages.props to Centralize Version Management of Consumed NuGet Packages in a .NET Solution

 Using NuGet packages and keeping their versions up to date is a requirement as well as a challenge if the versions are defined all over many projects in a given dotnet solution. We can limit package references to a single project in a solution, as consuming projects will inherently get the references. However, when it comes to unit test projects we must reference the packages in all test projects to ensure test discoverability in pipelines as well as in the Test Explorer of Visual Studio. Therefore, centralized NuGet package management is essential in complex and large solutions.

The Directory.Packages.props is the saviour for this requirement: it allows us to define all the package versions centrally in the solution, by adding a Directory.Packages.props file to the root of the repo, similar to using nuget.config.
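A minimal Directory.Packages.props could look like the following (the package names and versions here are just illustrative):

```xml
<Project>
  <PropertyGroup>
    <!-- Opt the whole solution into central package management -->
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <!-- All versions live here, once, for every project in the repo -->
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
    <PackageVersion Include="xunit" Version="2.9.0" />
  </ItemGroup>
</Project>
```

Individual project files then reference packages without a version, e.g. `<PackageReference Include="Newtonsoft.Json" />`, and pick up the centrally defined one.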


Tuesday, 22 July 2025

Use DefaultAzureCredential with C# to Work with Azure Cosmos DB Data Using "Cosmos DB Built-in Data Contributor" RBAC

 We have discussed "Add Cosmos DB Built-in Roles to Resource Identities via Terraform to Allow Role Based Access to Data in Cosmos DB" in the previous post. Now that the data contributor role is set up in Azure Cosmos DB, let's look at how to write simple code to access and create Cosmos DB data using DefaultAzureCredential with C#.

The expectation is to get document data created in Cosmos DB similar to what is shown below.



Wednesday, 16 July 2025

Add Cosmos DB Built-in Roles to Resource Identities via Terraform to Allow Role Based Access to Data in Cosmos DB

 Azure Cosmos DB can be used with DefaultAzureCredential in C#. However, enabling the usage of DefaultAzureCredential with Azure Cosmos DB requires special data roles to be added for the Cosmos DB account. There are two built-in roles: data reader and data contributor. Unlike other RBAC roles in Azure, these roles cannot be assigned via the Azure portal and must be added programmatically, via Azure CLI, Bicep, PowerShell, REST API or Terraform.
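For example, with Azure CLI the built-in data contributor role can be assigned roughly as follows (the account name, resource group and principal id are placeholders; 00000000-0000-0000-0000-000000000002 is the built-in data contributor role definition id, and ...0001 is the data reader):

```shell
# Assign "Cosmos DB Built-in Data Contributor" to an identity,
# scoped to the whole account ("/").
az cosmosdb sql role assignment create \
  --account-name my-cosmos-account \
  --resource-group my-rg \
  --role-definition-id 00000000-0000-0000-0000-000000000002 \
  --principal-id <identity-object-id> \
  --scope "/"
```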

Wednesday, 9 July 2025

Expose AKS Deployed RabbitMQ AMQP Access for Local Development via Load Balancer

 We have discussed "Setting Up RabbitMQ Cluster in AKS Using RabbitMQ Cluster Operator" in a previous post. Within the AKS cluster, apps can access RabbitMQ over AMQP at rabbitmq-cluster.rabbitmq.svc.cluster.local using the ClusterIP service. However, it is important to expose the RabbitMQ cluster for local development. For this purpose we have to set up a load balancer service, as the RabbitMQ AMQP protocol is not supported via Nginx. Let's look at how to set up a load balancer service to enable local development of applications using a RabbitMQ cluster deployed in an AKS cluster in this post.
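A minimal LoadBalancer service for AMQP might look like this sketch (the namespace, selector label and the internal load balancer annotation are assumptions to adapt to your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-amqp-lb
  namespace: rabbitmq
  annotations:
    # Keep the IP inside the VNet instead of a public IP
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: rabbitmq-cluster
  ports:
    - name: amqp
      port: 5672
      targetPort: 5672
```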

Tuesday, 24 June 2025

Setting Up Azure Storage Blob Backups with Azure Backup Vault Using Terraform

 To create automated daily backups of Azure storage account blobs we can use the Azure Backup Vault resource. It is possible to configure operational backups and vaulted backups for Azure storage blobs with the backup vault backup policies and instances. We need to consider the limitations specified for operational backups and the limitations in vaulted backups, especially the limitation that only 100 blob containers are allowed to be backed up for a given Azure storage account. Another issue with vaulted backups is that newly created containers in an Azure storage account will not be automatically included in the backups. Therefore, Azure vaulted backups work well only when you have a predefined set of blob containers in your implementation that are configured for vaulted backups.

Saturday, 7 June 2025

Access Management Dashboard Locally for a RabbitMQ Cluster Deployed in AKS via Port Forwarding or Nginx Ingress Controller Inside VNet

 We have discussed "Setting Up RabbitMQ Cluster in AKS Using RabbitMQ Cluster Operator" in a previous post. The RabbitMQ management dashboard is a useful tool for simple monitoring and for inspecting the setup, connections etc. in the deployed RabbitMQ cluster (it is better to implement proper monitoring and alerting, which we will discuss in a future post). Let's look at how to enable access to the dashboard of RabbitMQ deployed in the AKS cluster via port forwarding, as well as by setting up ingress via Nginx.
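The port forwarding option can be sketched as below (the service name and namespace assume the rabbitmq-cluster setup from the earlier post):

```shell
# Forward the management UI port (15672) of the cluster service to localhost
kubectl port-forward svc/rabbitmq-cluster -n rabbitmq 15672:15672
```

The dashboard should then be reachable at http://localhost:15672 while the command is running.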


Friday, 30 May 2025

Run a Test on RabbitMQ Cluster in AKS with perf-test

 Once a RabbitMQ cluster is deployed on AKS as described in "Setting Up RabbitMQ Cluster in AKS Using RabbitMQ Cluster Operator", it would be great to test whether the deployment is working as expected. For this purpose we can use perf-test. Let's run a perf-test step by step for RabbitMQ in an AKS cluster using WSL in this post.
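A sketch of running perf-test inside the cluster could look like the following, assuming the cluster operator setup from the earlier post (so the default-user secret is named rabbitmq-cluster-default-user in the rabbitmq namespace):

```shell
# Read the credentials created by the cluster operator
username=$(kubectl get secret rabbitmq-cluster-default-user -n rabbitmq \
  -o jsonpath='{.data.username}' | base64 --decode)
password=$(kubectl get secret rabbitmq-cluster-default-user -n rabbitmq \
  -o jsonpath='{.data.password}' | base64 --decode)

# Run perf-test as a pod against the cluster service
kubectl run perf-test -n rabbitmq --image=pivotalrabbitmq/perf-test -- \
  --uri "amqp://${username}:${password}@rabbitmq-cluster"
```

The pod's logs (`kubectl logs -f perf-test -n rabbitmq`) then show the publish/consume rates.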

Expectation is to run a performance test as shown below.



Sunday, 25 May 2025

Resolve "System.NotSupportedException: Globalization Invariant Mode is not supported" in Microsoft.Data.SqlClient with Alpine Docker Image

 You may encounter the exception "System.NotSupportedException: Globalization Invariant Mode is not supported" in Microsoft.Data.SqlClient in Linux Alpine Docker images. Exception details are as below.

fail: Poc.Common.Api.Middleware.GlobalExceptionHandlerMiddleware[0]
      Unexpected error occurred.
      System.NotSupportedException: Globalization Invariant Mode is not supported.
         at Microsoft.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry, SqlConnectionOverrides overrides)
         at Microsoft.Data.SqlClient.SqlConnection.Open(SqlConnectionOverrides overrides)
         at Microsoft.Data.SqlClient.SqlConnection.Open()
         at Poc.Domain.Core.Implementation.Sql.SqlCommanRunner.RunAsync(String command) in /build/src/Shared/Domain/Poc.Domain.Core/Implementation/Sql/SqlCommanRunner.cs:line 20
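A common fix, sketched below, is to opt out of invariant globalization and install the ICU packages in the Alpine image (the base image tag is an example; adapt it to your app):

```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine
# Alpine dotnet images default to globalization-invariant mode; turn it off
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
# Install the ICU libraries SqlClient needs for globalization support
RUN apk add --no-cache icu-libs icu-data-full
```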

Friday, 9 May 2025

Setting Up RabbitMQ Cluster in AKS Using RabbitMQ Cluster Operator

 We have discussed "Setting Up RabbitMQ Cluster Operator and Topology Operator via Azure Pipelines in AKS" in the previous post. The deployed cluster operator in AKS can be used to deploy a RabbitMQ cluster. In this post let's explore deploying a production-ready RabbitMQ cluster in AKS (without TLS/SSL - we will explore that in a future post), to be used with apps deployed in the same AKS cluster.

Once the RabbitMQ cluster is successfully deployed, the rabbitmq namespace should have the resources shown in the below image. We are deploying a three-node RabbitMQ cluster, with each pod scheduled on a node in a different Azure availability zone. Cluster access is set up with the service/rabbitmq-cluster ClusterIP, as we only need access within the AKS cluster for apps. We can discuss how to use it for local development in a future post.
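A minimal RabbitmqCluster manifest along these lines covers the three-node, zone-spread setup (the anti-affinity rule and names are a sketch to adapt):

```yaml
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
  namespace: rabbitmq
spec:
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Spread the three pods across availability zones
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: rabbitmq-cluster
          topologyKey: topology.kubernetes.io/zone
```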


Saturday, 3 May 2025

Setting Up RabbitMQ Cluster Operator and Topology Operator via Azure Pipelines in AKS

Using operators to set up a RabbitMQ cluster in AKS (Kubernetes) is a recommended approach. There are two RabbitMQ operators we need to deploy into AKS for setting up a RabbitMQ cluster and managing the messaging setup in the deployed RabbitMQ cluster.

  • Cluster Operator: Automates provisioning, management and operations of a RabbitMQ cluster within AKS.
  • Messaging Topology Operator: Manages messaging topologies within the deployed RabbitMQ cluster in AKS.
In this post let's look at how to set up the above two operators in AKS using an Azure DevOps pipeline step.
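The pipeline step essentially wraps the documented install commands for the two operators, roughly as below (the with-certmanager manifest of the topology operator assumes cert-manager is already installed in the cluster):

```shell
# Cluster Operator
kubectl apply -f https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml

# Messaging Topology Operator (requires cert-manager)
kubectl apply -f https://github.com/rabbitmq/messaging-topology-operator/releases/latest/download/messaging-topology-operator-with-certmanager.yaml
```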

Saturday, 26 April 2025

List All Pods and Their Priority Classes with kubectl

Sometimes it is necessary to identify which priority classes are used by each pod in a Kubernetes environment, especially to plan and reorganize priorities of deployed apps. Let's look at a query to view pods with their priority classes using kubectl.

The below command will get all pods in all namespaces with their priority class name and priority value. The higher the priority number, the higher the priority.
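A command along these lines does the job using kubectl's custom-columns output:

```shell
# List every pod with its priority class name and numeric priority
kubectl get pods --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,PRIORITY_CLASS:.spec.priorityClassName,PRIORITY:.spec.priority'
```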

Saturday, 12 April 2025

Windows AKS Scaled Jobs Handle Graceful Termination for dotnet App Using IHostedService When Preempted

 We have discussed "Gracefully Shut Down dotnet 8 IHostedService App - Deployed as a Windows Container in AKS - While Scale In or Pod Deallocations" previously. The approach works fine for pods deployed as a deployment in Kubernetes. Similarly to deployment pods, a scaled job pod can be terminated abruptly due to preemption in Kubernetes, if a higher priority pod is scheduled. One way to resolve the abrupt termination of scaled job pods due to preemption would be to assign all scaled jobs the highest possible priority. However, setting the highest priority for all scaled jobs is not a good solution, as a job may not require the highest priority, and jobs should be able to be scheduled after other high priority app pods. Let's look at a better solution that can be implemented with a pre stop hook for scaled jobs running with the base Docker image Windows Server Core 2022.

Wednesday, 2 April 2025

Windows Nanoserver Image Pre Stop Hook to Avoid 502 for Requests

The pods deployed to AKS get terminated due to rescheduling and low priority evictions, as well as during scale-in. We can add a termination grace period and a pre stop sleep time, as shown below, in Linux and Windows containers to allow sufficient time for ingress services to get updated about terminating pods. However, Windows nanoserver does not support PowerShell. Therefore, we need a specific mechanism for the pre stop hook with nanoserver images. For nanoserver images the shutdown signal gets correctly sent to the dotnet app, so we can just set up a sleep time for the pre stop hook as necessary.
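One option that avoids needing any shell inside the nanoserver image is the Kubernetes sleep lifecycle action, sketched below (the image name is a placeholder, and this action requires a recent Kubernetes version; on older clusters a command-based hook is needed instead):

```yaml
spec:
  # Give the pod time to drain before it is force-killed
  terminationGracePeriodSeconds: 60
  containers:
    - name: app
      image: myacr.azurecr.io/my-nanoserver-app:1.0
      lifecycle:
        preStop:
          # No shell required: the kubelet itself waits before termination
          sleep:
            seconds: 30
```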

Saturday, 22 March 2025

Gracefully Shut Down dotnet 8 IHostedService App - Deployed as a Windows Container in AKS - While Scale In or Pod Deallocations

 Applications implemented with IHostedService in dotnet, deployed to Azure Kubernetes Services (AKS) as containers in pods, get terminated when pod rescheduling or scaling-in operations happen. However, unlike Linux containers, Windows containers do not receive the signal (similar to SIGTERM or SIGINT) to gracefully shut down. Once the pre stop hook is done, the container is immediately killed, disregarding the value set in the termination grace period. Since the Windows container did not receive a message to start a graceful shutdown and is killed abruptly, the in-flight operations in the Windows app container are abandoned. Such abandoning of operations causes inconsistency in system data and causes system failures. Therefore, it is mandatory to implement a proper graceful shutdown for Windows containers as well. Let's explore the issue in detail and how to implement a proper solution to enable graceful Windows container shutdown, for dotnet apps implemented with IHostedService. The issue happens in mcr.microsoft.com/dotnet/runtime:8.0-windowsservercore-ltsc2022 images and the solution is tested with the same.

Windows app pod scaled in or pod rescheduled


Wednesday, 5 March 2025

Setting Up Alert for AKS Pod Restarts Using Log Analytics Workspace and Grafana

 Azure Kubernetes Services (AKS) pod restarts can be obtained from the KubePodInventory table of the connected Log Analytics workspace. This data can be depicted in a graph in Grafana as described in the post "Pod Restart Counts Grafana Chart with Azure Monitor for AKS". Let's explore how to use the same information to create an alert in Grafana to notify when pod restarts are happening in apps in a given Kubernetes namespace.

The expectation is to fire alerts from Grafana as shown below. Note that the alerts can be targeted to send emails, Slack notifications etc., which is not discussed in this post.
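As a starting point, a Log Analytics (KQL) query along these lines can surface restart counts from KubePodInventory for the Grafana alert rule to evaluate (the namespace is a placeholder):

```kusto
KubePodInventory
| where Namespace == "my-apps"
| summarize Restarts = max(ContainerRestartCount) by Name, bin(TimeGenerated, 15m)
| order by TimeGenerated desc
```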

Monday, 3 March 2025

Using "grep" with "kubectl logs" to Filter Container/Pod Logs

 The kubectl logs command helps us to inspect logs of pods in AKS/Kubernetes and is useful to diagnose issues. However, when there are too many logs it is harder to read through and find errors easily. Further, filtering logs for a given timestamp may be useful at times to identify issues. In this post let's explore the usage of grep with the kubectl logs command to filter logs.

Let's take a first example to filter for a timestamp in the keda-operator pod logs. Here -i tells grep to ignore case.

kubectl logs keda-operator-79d756dd66-69gsc -n keda | grep -i '2025-03-04T07:20:24'


Wednesday, 29 January 2025

Setup Azure File Share Capacity Alert to Slack with Terraform

 Setting up an Azure File Share capacity alert is useful to know when you reach at least 80% of the allocated quota for the file share. This will give the teams ample time to increase the allocation to avoid out-of-space issues. If we are using the standard tier for the storage account, then we need to use one storage account for each file share to get a correct alert. Sending the alert to a Slack channel is a useful way to get properly alerted and take action on time. Let's use an example to learn how to set up alerts for multiple Azure file shares using Terraform.
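The core of such a setup is a metric alert on the FileCapacity metric of the storage account's file service, roughly as sketched below (resource names, the quota and the action group wiring are placeholders to adapt):

```hcl
resource "azurerm_monitor_metric_alert" "file_share_capacity" {
  name                = "file-share-capacity-alert"
  resource_group_name = azurerm_resource_group.main.name
  # Metric scope is the file service of the storage account
  scopes      = ["${azurerm_storage_account.files.id}/fileServices/default"]
  severity    = 2
  frequency   = "PT1H"
  window_size = "PT1H"

  criteria {
    metric_namespace = "Microsoft.Storage/storageAccounts/fileServices"
    metric_name      = "FileCapacity"
    aggregation      = "Average"
    operator         = "GreaterThan"
    # 80% of a 100 GiB quota, in bytes
    threshold = 85899345920
  }

  action {
    # Action group that forwards to the Slack webhook
    action_group_id = azurerm_monitor_action_group.slack.id
  }
}
```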

Expectation is to get the alerts to the Slack channel as shown below.

