Saturday, 2 November 2024

Use Dynamic Block Conditionally in Terraform

Dynamic blocks allow us to create nested, multi-level block structures in Terraform code. Conditional usage of such blocks is really useful in many scenarios. For example, when we create an Azure Cosmos DB account and want read regions only in the production environment, but only a single region set up for dev and QA environments, we can leverage the capabilities of the Terraform dynamic block. Let's explore with an example.

Saturday, 26 October 2024

Copy Blob Between Azure Storage Accounts with C# Using DefaultAzureCredential

Copying a blob between Azure storage accounts without downloading the blob locally is really simple with the azcopy command. Using Azure.Storage.Blobs we can copy blobs between Azure storage accounts without downloading the blob locally with C# code as well. Using DefaultAzureCredential for the copy operation is useful when our apps use managed identities, such as workload identity in AKS. Let's explore the steps necessary to copy a blob from one storage account to another with a C# console app.

How it works

When the simple example console app is executed, it copies a blob from the source storage account to the target storage account as shown below.
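The core of such a console app could look like the following C# sketch (the account, container and blob names here are hypothetical, and the identity behind DefaultAzureCredential is assumed to have the required Storage Blob Data roles on both accounts). The source blob is made readable to the copy operation with a short lived user delegation SAS, so no account key is involved:

using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

// Source and target clients authenticated with DefaultAzureCredential
// (managed identity/workload identity in AKS, developer login locally).
var credential = new DefaultAzureCredential();
var sourceService = new BlobServiceClient(
    new Uri("https://sourcestorage.blob.core.windows.net"), credential); // hypothetical account
var targetService = new BlobServiceClient(
    new Uri("https://targetstorage.blob.core.windows.net"), credential); // hypothetical account

BlobClient sourceBlob = sourceService
    .GetBlobContainerClient("source-container").GetBlobClient("data.json");
BlobClient targetBlob = targetService
    .GetBlobContainerClient("target-container").GetBlobClient("data.json");

// The copy source must be readable by the target storage service, so create
// a short lived user delegation SAS for the source blob (Azure AD based,
// no account key required).
UserDelegationKey delegationKey = await sourceService.GetUserDelegationKeyAsync(
    DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddMinutes(30));

var sasBuilder = new BlobSasBuilder(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddMinutes(30))
{
    BlobContainerName = sourceBlob.BlobContainerName,
    BlobName = sourceBlob.Name
};

Uri sourceUri = new BlobUriBuilder(sourceBlob.Uri)
{
    Sas = sasBuilder.ToSasQueryParameters(delegationKey, sourceService.AccountName)
}.ToUri();

// Server side copy - the blob is never downloaded to the machine running this code.
CopyFromUriOperation copy = await targetBlob.StartCopyFromUriAsync(sourceUri);
await copy.WaitForCompletionAsync();
Console.WriteLine("Copy completed.");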


Saturday, 19 October 2024

Cleanup Strategy for Azure Container Registry Based on Azure Pipeline Retained Builds

We generally use Azure Container Registry to store our application Docker images when we use AKS as the orchestrator for our applications. However, the piling up of previous releases' images, as well as images used for developer testing, in Azure Container Registry increases costs. Therefore it is important to have a periodic cleanup mechanism set up to remove all unused images from the registry. Let's look at a strategy we can use to clean up Azure Container Registry.

Saturday, 12 October 2024

Deploying Azure Managed Grafana with Terraform

Grafana can be used to set up monitoring and alerting with AKS. Azure provides an option to set up a managed Grafana dashboard, which can be integrated with managed Prometheus for AKS. In this post let's explore Terraform code to set up a managed Grafana instance similar to below.


Saturday, 5 October 2024

Publish AsciiDoc Documentation to Confluence via Azure Pipelines

We can use AsciiDoc to write technical documentation. However, Confluence is a popular wiki for keeping documentation related to software projects. In this blog let's look at how to publish AsciiDoc documentation in a repo to Confluence via Azure DevOps pipelines.

Saturday, 28 September 2024

Remove 409 Conflict when Creating an Azure Storage Blob Container with .NET

Limiting unnecessary App Insights .NET dependency logs as discussed in the post "Reducing Log Analytics Cost for App Insights by Removing Successful Dependency Logs Ingestion from .NET Applications" is useful to reduce costs. However, there are some .NET SDK methods which generate unnecessary exceptions when used, filling up App Insights with exceptions. One such method is BlobContainerClient.CreateIfNotExists, which will always throw a 409 conflict if the blob container already exists.
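As a sketch of one way to avoid the noise (not necessarily the exact approach of the post), we can check for the container's existence first, and treat a racing 409 as success. The storage account URI here is hypothetical:

using Azure;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var containerClient = new BlobContainerClient(
    new Uri("https://examplestorage.blob.core.windows.net/example-container"), // hypothetical
    new DefaultAzureCredential());

// Avoid the guaranteed 409 from CreateIfNotExists by checking existence first.
Response<bool> exists = await containerClient.ExistsAsync();
if (!exists.Value)
{
    try
    {
        await containerClient.CreateAsync();
    }
    catch (RequestFailedException ex)
        when (ex.ErrorCode == BlobErrorCode.ContainerAlreadyExists)
    {
        // Another instance created the container between the check and the
        // create call - safe to ignore.
    }
}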


Saturday, 21 September 2024

Reducing Log Analytics Cost for App Insights by Removing Successful Dependency Logs Ingestion from .NET Applications

We have discussed how to reduce Log Analytics cost by reducing container log ingestion from apps deployed to AKS in the post "Reducing Log Analytics Cost by Preventing Container Logs Ingestion from Azure Kubernetes Services (AKS)". However, when we inspected the charts of data ingestion after reducing container logs, the next major data volume was ingested by app dependency logs coming from .NET apps running in the AKS cluster. App dependency logs are mostly information logs coming from the .NET SDK and other dependencies such as Azure storage etc. These logs are useful when there is an issue or error in the dependency calls. However, when a dependency call has completed successfully, there is not much use for that information. Since the highest App Insights Log Analytics cost is incurred by dependency logs after we have removed container logs, it is worth reducing the cost of Log Analytics data ingestion for App Insights by removing the successful dependency logs.
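A common way to implement this is the Application Insights telemetry processor pattern, dropping successful DependencyTelemetry items before they are sent (a sketch; the post's exact implementation may differ):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class SuccessfulDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public SuccessfulDependencyFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // Drop dependency telemetry that completed successfully; failed
        // dependency calls still flow through to App Insights.
        if (item is DependencyTelemetry dependency && dependency.Success == true)
        {
            return;
        }

        _next.Process(item);
    }
}

// Registered at startup, e.g. in an ASP.NET Core app:
// builder.Services.AddApplicationInsightsTelemetryProcessor<SuccessfulDependencyFilter>();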

The expected outcome is as shown below. After the removal of successful dependency logs, data ingestion for dependency logs has reduced to almost zero.


Friday, 13 September 2024

Automate Health Check Validation for AKS Apps with Nginx Ingress Using Azure DevOps Pipelines

We have discussed "Setup Application Ingress for AKS Using Nginx Ingress Controller with Private IP" in a previous post. Once the application and nginx ingress setup is deployed, it takes some time for containers and pods to be ready for accepting traffic. We should validate whether the application containers are deployed with the correct Docker image and running the required replicas as specified in the horizontal pod autoscalers. Then we should verify the ingress for the apps is set up correctly. Only if all these health checks succeed should we enable live traffic into a newly deployed set of applications in an AKS cluster (this is useful in blue-green deployments with a new cluster or node pool representing the blue or green instance). Let's write a health validation script in PowerShell and get it executed in Azure Pipelines to automate health check validation for apps deployed to AKS with nginx ingress.

The expected outcome is to validate the ingress setup for an application in an Azure DevOps pipeline task as below.



Saturday, 7 September 2024

Setup Application Ingress for AKS Using Nginx Ingress Controller with Private IP

We have discussed "Deploy Nginx Ingress Controller with Private IP (Limited Access to vNET) to AKS with Terraform, Helm and Azure DevOps Pipelines" and "Automate Validation of Nginx Ingress Controller Setup in AKS with an Azure Pipeline Task" in previous posts. As the next step, let's explore how to set up ingress for an application deployed in AKS, using the Nginx ingress controller deployed with a private IP. The purpose is exposing the application in AKS within the virtual network, so that other applications in the same virtual network can access the application securely, without having to expose the application in AKS publicly.

The expected outcome is to have ingress for apps setup as shown below.


Saturday, 31 August 2024

Automate Validation of Nginx Ingress Controller Setup in AKS with an Azure Pipeline Task

We have discussed "Deploy Nginx Ingress Controller with Private IP (Limited Access to vNET) to AKS with Terraform, Helm and Azure DevOps Pipelines" in the previous post. Once deployed, it takes a few seconds to a few minutes for the Nginx ingress controller with a private IP to become ready in AKS. Let's explore how to automate AKS Nginx ingress controller validation in Azure Pipelines in this post.

Friday, 23 August 2024

Deploy Nginx Ingress Controller with Private IP (Limited Access to vNET) to AKS with Terraform, Helm and Azure DevOps Pipelines

We have discussed "Create Azure CNI based AKS Cluster with Application Gateway Ingress Controller (AGIC) Using Terraform" in a previous post. However, Nginx is a popular ingress controller for Kubernetes. When using the Nginx ingress controller with AKS we can avoid the cost of the application gateway used for AGIC. In this post we are going to explore setting up the Nginx ingress controller for AKS to expose applications running in AKS within the virtual network privately (without exposing them publicly), so that only the other applications or Azure services within the vNet can access the apps in AKS.

The expected outcome is to set up the Nginx ingress controller in AKS with a private IP from the AKS subnet as shown below.


Saturday, 17 August 2024

Deploying Kubernetes Event Driven Autoscaling (KEDA) AKS add-on with Terraform and Azure Pipelines

We have discussed deploying Kubernetes Event Driven Autoscaling (KEDA) with workload identity in AKS in the post "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity". Then we discussed how to use an Azure DevOps pipeline to automate deployment of KEDA in the post "Deploying Kubernetes Event Driven Autoscaling (KEDA) with Azure Pipelines Using Helm". If you are setting up KEDA with Helm as discussed, instead of using the AKS KEDA add-on, you will have to monitor the documentation here and ensure a supported version is used. However, with AKS it is better to use the Microsoft supported KEDA add-on, as it will have better support in case of an issue. Additionally, the correct supported version of KEDA will be set up with the add-on, based on the AKS Kubernetes version used, according to the documentation here. Let's see what we need to do in Terraform and in Azure Pipelines to get AKS set up with the KEDA add-on.

Saturday, 10 August 2024

Copy Environment Across Team Projects in Azure DevOps

We have discussed how to "Copy Variable Groups Across Team Projects in Azure DevOps" in a previous post. In a similar way we can use a script with the Azure DevOps REST API to copy environments defined in Azure DevOps across team projects.

Friday, 2 August 2024

Copy Variable Groups Across Team Projects in Azure DevOps

You can easily clone a variable group in Azure DevOps within a given team project. However, there is no straightforward way to copy a variable group from one team project to another. We can write a script using the Azure DevOps REST API to achieve the requirement.
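A rough sketch of such a script in C# (the organization, project names and variable group id are placeholders, and the api-version and request body shape should be verified against the Azure DevOps VariableGroups REST API reference):

using System.Net.Http.Headers;
using System.Text;
using System.Text.Json.Nodes;

// Assumed placeholder values - replace with real ones.
const string org = "myorg";
const string sourceProject = "SourceProject";
const string targetProject = "TargetProject";
const string pat = "<personal-access-token>";
const int groupId = 1; // id of the variable group to copy

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

// Read the variable group from the source team project.
JsonNode source = JsonNode.Parse(await http.GetStringAsync(
    $"https://dev.azure.com/{org}/{sourceProject}/_apis/distributedtask/variablegroups/{groupId}?api-version=7.1-preview.2"))!;

// Create it in the target team project. The create call is organization
// scoped; the target project is specified via variableGroupProjectReferences.
// Note: secret variable values are not returned by the API and must be re-set manually.
var newGroup = new JsonObject
{
    ["name"] = source["name"]!.GetValue<string>(),
    ["variables"] = JsonNode.Parse(source["variables"]!.ToJsonString()),
    ["variableGroupProjectReferences"] = new JsonArray(new JsonObject
    {
        ["name"] = source["name"]!.GetValue<string>(),
        ["projectReference"] = new JsonObject { ["name"] = targetProject }
    })
};

HttpResponseMessage response = await http.PostAsync(
    $"https://dev.azure.com/{org}/_apis/distributedtask/variablegroups?api-version=7.1-preview.2",
    new StringContent(newGroup.ToJsonString(), Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode);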

Saturday, 27 July 2024

Resolve "GraphQL: Resource not accessible by integration (addLabelsToLabelable)" in GitHub Actions While Updating an Issue Label

A GitHub Actions workflow can be set up to add a label to any newly created issue, using the code below. If we want to add a label "triage" to a new issue once it is opened, we can create the below workflow.

on:
  issues:
    types:
      - opened

jobs:
  label_issue:
    runs-on: ubuntu-latest
    steps:
      - env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          ISSUE_URL: ${{ github.event.issue.html_url }}
        run: |
          gh issue edit $ISSUE_URL --add-label "triage"

However, you may see the error "GraphQL: Resource not accessible by integration (addLabelsToLabelable)" when the workflow is executed. Let's see how we can resolve the issue in this post.

Friday, 19 July 2024

Resolve "System.ArgumentNullException: Value cannot be null. (Parameter 'sharedKeyCredential')" in Generating SAS Uri for Azure Storage Blob while using DefaultAzureCredential

Sharing an Azure blob for downloading or editing requires sharing a link with a shared access signature (SAS) with the required permissions. The BlobContainerClient.GenerateSasUri method helps to generate a Uri to share. However, the BlobContainerClient.GenerateSasUri method works only if the BlobServiceClient is created using a storage account key as shown below.

string connectionString = "DefaultEndpointsProtocol=https;AccountName=cheuw001assetssthot;AccountKey=xxxxxxxxxxxxxxxxxxxxxxx==;EndpointSuffix=core.windows.net";
BlobServiceClient blobServiceClient = new(connectionString);

The usage of passwordless authentication using managed identities is the recommended approach to access Azure resources. If DefaultAzureCredential is used with a managed identity (user assigned or system assigned) to create the BlobServiceClient as shown below, the BlobContainerClient.GenerateSasUri method fails with the error "System.ArgumentNullException: Value cannot be null. (Parameter 'sharedKeyCredential')".

BlobServiceClient blobServiceClient = new(
    new Uri("https://cheuw001assetssthot.blob.core.windows.net/"),
    new DefaultAzureCredential());
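A sketch of a possible workaround, continuing from the blobServiceClient above: instead of GenerateSasUri, build a user delegation SAS, which is signed with Azure AD credentials rather than the missing shared key (the identity needs a role that allows requesting user delegation keys, such as a Storage Blob Data role, and the container name here is hypothetical):

using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

// Get a user delegation key using the Azure AD credential (no account key).
UserDelegationKey delegationKey = await blobServiceClient.GetUserDelegationKeyAsync(
    DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddHours(1));

// Build a container scoped read SAS - equivalent to what GenerateSasUri
// would have produced, but signed with the user delegation key.
var sasBuilder = new BlobSasBuilder(BlobContainerSasPermissions.Read, DateTimeOffset.UtcNow.AddHours(1))
{
    BlobContainerName = "assets", // hypothetical container name
    Resource = "c"                // container level SAS
};

Uri sasUri = new BlobUriBuilder(blobServiceClient.GetBlobContainerClient("assets").Uri)
{
    Sas = sasBuilder.ToSasQueryParameters(delegationKey, blobServiceClient.AccountName)
}.ToUri();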

Friday, 12 July 2024

Restrict State Transitions in Azure DevOps Work Items

Azure DevOps work items, for example the User Story work item, can be moved from one state to another in a workflow. As per this question in tech communities, there is a requirement to restrict a New state user story from moving to the Closed state directly. But if the user story is in another state such as Active, it should be possible to move it to the Closed state. Let's explore how to implement a solution for this in Azure DevOps.

We can use customized templates in Azure DevOps to customize work items and workflows. To restrict moving a user story from the New to the Closed state, we can implement a rule in the user story work item as below.

  • Open the custom template's user story work item.
  • Then go to the rules tab and add a new rule.
  • Provide a rule name.
  • Add the condition "Work item state is moved from" and select New as the value.
  • Add the action "Restrict transition to state" and select Closed as the value.

Saturday, 6 July 2024

Conditional Whitelisting of IPs in Azure Key Vault with Terraform

Azure key vaults protected by a vNet (virtual network) need local IP addresses added to the allowed IP list if we need to access secrets etc. in the key vault from local machines (not considering VPN and private endpoints). How to conditionally whitelist a dynamic list of IPs in the key vault via Terraform IaC (infrastructure as code) is a bit tricky to implement. In this post let's explore how to dynamically whitelist a set of IPs in Azure key vault using Terraform, with an example.

Consider a situation where a few IPs need to be whitelisted in the key vault always, and a few other IPs (let's say a set of developer machine IPs) only in the development environment.

Saturday, 29 June 2024

Local Docker Container Run with DefaultAzureCredential

We discussed how to enable workload identity for containers running in AKS in the post "Setting Up Azure Workload Identity for Containers in Azure Kubernetes Services (AKS) Using Terraform - Improved Security for Containers in AKS". However, when we use DefaultAzureCredential and try to run Docker containers locally from a development machine, we do not have workload identity support. With Visual Studio we can run with the Azure AD user and run applications successfully. But if we are using a docker run command to run a Docker container locally, we will have to use an app registration/service principal. Then we have to grant the service principal the required roles on the Azure resources we need to access from the application. Let's take a look at an example with Azure App Configuration.
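For example, a minimal sketch (with a hypothetical App Configuration endpoint and setting key) that works both in AKS with workload identity and in a local docker run, because DefaultAzureCredential falls back to the service principal environment variables:

using Azure.Data.AppConfiguration;
using Azure.Identity;

// When running locally, pass the service principal details to the container:
// docker run -e AZURE_TENANT_ID=<tenant> -e AZURE_CLIENT_ID=<client-id> \
//   -e AZURE_CLIENT_SECRET=<secret> myapp:latest
// DefaultAzureCredential picks these up via its EnvironmentCredential
// fallback; in AKS the same code authenticates with workload identity.
var client = new ConfigurationClient(
    new Uri("https://myapp-config.azconfig.io"), // hypothetical endpoint
    new DefaultAzureCredential());

// The identity needs a data access role on the App Configuration resource
// (e.g. App Configuration Data Reader).
ConfigurationSetting setting = await client.GetConfigurationSettingAsync("MyApp:Greeting");
Console.WriteLine($"{setting.Key} = {setting.Value}");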

Tuesday, 11 June 2024

Jump Into a Container Deployed in AKS (Kubernetes)

We may sometimes want to jump into a container deployed in a Kubernetes pod to investigate the contents of the container, such as the files in it, or we may even want to run commands and see how they work inside a deployed container. For that purpose we need to jump into the container and obtain a command shell in it. Let's look at how we can jump into both Linux and Windows containers.

Monday, 3 June 2024

Multiple KEDA Triggers for a Scaled Job with Event Hubs in AKS

A Kubernetes scaled job helps us run one job per event/message we receive from the queue/event hub. We can have an event handler job which can handle more than one type of event message or queue message. Let's look at what we need to consider when defining more than one trigger with the Kubernetes event driven autoscaler (KEDA) for a scaled job.

Thursday, 16 May 2024

Mount Azure Storage Fileshare Created with Terraform on AKS

We can mount an Azure file share to containers in AKS as explained in the documentation here, and we can use static volume mounting to use an existing Azure file share. The documentation only explains how to set up the static volume mount with Azure CLI. In this post let's look at the steps for using Terraform provisioned Azure file share storage as a static volume mount in AKS, using Kubernetes yaml.


Monday, 6 May 2024

FormatterServices.GetUninitializedObject is Obsolete, What can we use instead?

FormatterServices.GetUninitializedObject is obsolete and gives a warning as shown below if we try to use it in our code. The FormatterServices class is obsolete as per the documentation.


What is the alternative for this? Let's find out with an example.
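The documented replacement is RuntimeHelpers.GetUninitializedObject in System.Runtime.CompilerServices, which behaves the same way. A small example:

using System.Runtime.CompilerServices;

// Creates an instance WITHOUT running constructors or field initializers,
// just like the obsolete FormatterServices.GetUninitializedObject.
var person = (Person)RuntimeHelpers.GetUninitializedObject(typeof(Person));

Console.WriteLine(person.Name is null); // True - the initializer did not run
Console.WriteLine(person.Age);          // 0 - the constructor did not run

public class Person
{
    public string Name { get; set; } = "unknown";
    public int Age { get; set; }

    public Person() => Age = 18;
}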

Thursday, 25 April 2024

Installing Missing Fonts in windowsservercore-ltsc2022 Docker Image Using Azure Pipelines with az acr build

The missing fonts in windowsservercore-ltsc2022 Docker images can be installed as described in the blog post here. However, when we are not using hosted agents, and instead use Kubernetes based self hosted build agents, we do not have access to the host machine to perform all the steps described in the post "Adding optional font packages to Windows containers". Since Docker is not supported on a self hosted build agent running as a container in AKS, we have to use az acr build to build the Docker images in such cases. To set up fonts in this kind of a situation in an Azure DevOps pipeline we can take the steps described in this post.

Monday, 15 April 2024

Dynamically Control Matrix of Jobs in GitHub Actions Based on Input Parameter Value

We can use a matrix in GitHub Actions to use a single definition of a job to create multiple jobs, as described here in the documentation. Let's say we input the list of application names we want to build as an input parameter to the action workflow, and need the ability to remove items from the app list at the time of triggering it manually (run workflow). For example, we have 4 apps by default. However, when needed, we should be able to build only one or two of them using the same action workflow without having to change the workflow definition. In this post let's explore how we can achieve that with a GitHub Actions workflow, utilizing the matrix strategy and dynamically setting the matrix value.

Friday, 5 April 2024

Loop Jobs based on Parameter Value in Azure DevOps Pipelines

Consider a situation where we want to perform the same set of steps in a pipeline multiple times. A good example would be building or deploying multiple apps using the same set of steps. Let's explore this example to understand how we can loop through a set of pipeline steps to build multiple apps, using a list of app names provided as a parameter in the pipeline.

Sunday, 31 March 2024

Update Azure Pipeline Library Group Variable Value in Azure Pipeline using CLI

We can set a variable value in Azure Pipelines using task.setvariable. This will only set a variable in the pipeline but not in a variable group. If we want to set a variable in a library variable group in Azure DevOps, we have to use the azure-devops extension for Azure CLI. Let's explore how to update a library variable group variable value using an Azure pipeline step.

Saturday, 16 March 2024

Deploying Kubernetes Event Driven Autoscaling (KEDA) with Azure Pipelines Using Helm

We have discussed how to deploy KEDA using Helm in the post "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity". Instead of deploying KEDA manually, it is better to automate the deployment. Let's look at the steps to get KEDA deployed using Azure Pipelines.

Saturday, 20 January 2024

Scale Pods in AKS with Kubernetes Event Driven Autoscaling (KEDA) ScaledJob Based on Azure Service Bus Queue as a Trigger

In previous posts we discussed "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity" and "Setting Up (KEDA) Authentication Trigger for Azure Storage Queue/Service Bus in AKS". With that, we can now proceed to set up a Kubernetes scaled job in AKS to run a pod when the Azure service bus queue receives a message. Using a scaled job we are going to start a job (pod) once a message is received in the queue, then receive the message in the pod container app, process and complete the message, and finish the job execution with a pod complete. So there will be a different pod and container (Kubernetes job) processing each message received in the Azure service bus queue.
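The message handling part of such a job container could look like this C# sketch (the namespace and queue names are hypothetical, and the ScaledJob definition itself is covered in the post): receive one message, process it, complete it and exit so the pod completes.

using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Connect to the service bus namespace with workload identity (no connection string).
await using var client = new ServiceBusClient(
    "my-namespace.servicebus.windows.net", // hypothetical namespace
    new DefaultAzureCredential());

ServiceBusReceiver receiver = client.CreateReceiver("orders-queue"); // hypothetical queue

// KEDA started this job because a message arrived; wait briefly for it.
ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(30));

if (message is not null)
{
    Console.WriteLine($"Processing message {message.MessageId}: {message.Body}");
    // ... process the message here ...

    // Complete the message so it is removed from the queue.
    await receiver.CompleteMessageAsync(message);
}

// Exiting ends the pod, which marks the Kubernetes job as complete.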

Saturday, 13 January 2024

Setting Up (KEDA) Authentication Trigger for Azure Storage Queue/Service Bus in AKS

We have discussed setting up Kubernetes Event Driven Autoscaling (KEDA) with AKS workload identity in the post "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity". The purpose of KEDA is to scale a scaled job/deployment in Kubernetes once we receive messages in a queue, such as an Azure storage queue or an Azure service bus queue.

For KEDA to communicate with and monitor such a queue to scale a job or deployment, it needs authentication to access the queue. We can set up the required authentication using connection strings for Azure service bus or the storage queue. Instead of using such connection strings or shared access keys, we can authenticate to the queue using the workload identity, since we have already enabled workload identity in KEDA as described in "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity".

Saturday, 6 January 2024

Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity

We have discussed setting up workload identity in AKS to be used with application containers we deploy to AKS in the post "Setting Up Azure Workload Identity for Containers in Azure Kubernetes Services (AKS) Using Terraform - Improved Security for Containers in AKS". Kubernetes Event Driven Autoscaling (KEDA) is the mechanism we need to use when we want to scale our deployments, or especially Kubernetes jobs (a pod that runs to completion). In this post let's look at how to set up KEDA with workload identity, so that we can use KEDA in later posts to run a Kubernetes job, autoscaled based on the messages received in a storage queue or Azure service bus.
