Saturday 6 July 2024

Conditional Whitelisting of IPs in Azure Key Vault with Terraform

Azure key vaults protected by a vNet (virtual network) need local IP addresses added to the allowed IP list if we need to access secrets and other items in the key vault from local machines (not considering VPN and private endpoints). Using a dynamic list of IPs that need to be whitelisted in the key vault, conditionally, via Terraform IaC (infrastructure as code) is a bit tricky to implement. In this post let's explore how to dynamically whitelist a set of IPs in an Azure key vault using Terraform, with an example.

Consider a situation where a few IPs always need to be whitelisted in the key vault, and a few other IPs (let's say a set of developer machine IPs) only in the development environment.
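As a minimal sketch of the idea (the variable names always_allowed_ips, developer_ips and environment, and the referenced resource group and subnet, are illustrative assumptions, not from the post), the conditional list can be built in a local value and passed to network_acls ip_rules:

```hcl
# Illustrative variables; adjust names and defaults to your setup.
variable "always_allowed_ips" {
  type    = list(string)
  default = ["203.0.113.10"]
}

variable "developer_ips" {
  type    = list(string)
  default = ["198.51.100.5", "198.51.100.6"]
}

variable "environment" {
  type    = string
  default = "dev"
}

data "azurerm_client_config" "current" {}

locals {
  # Developer IPs are whitelisted only in the development environment.
  key_vault_allowed_ips = (
    var.environment == "dev"
    ? concat(var.always_allowed_ips, var.developer_ips)
    : var.always_allowed_ips
  )
}

resource "azurerm_key_vault" "example" {
  name                = "kv-example"
  location            = azurerm_resource_group.example.location # assumed to exist elsewhere
  resource_group_name = azurerm_resource_group.example.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  network_acls {
    default_action             = "Deny"
    bypass                     = "AzureServices"
    ip_rules                   = local.key_vault_allowed_ips
    virtual_network_subnet_ids = [azurerm_subnet.example.id] # assumed vNet subnet
  }
}
```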

Saturday 29 June 2024

Local Docker Container Run with DefaultAzureCredential

We discussed how to enable workload identity for containers running in AKS in the post "Setting Up Azure Workload Identity for Containers in Azure Kubernetes Services (AKS) Using Terraform - Improved Security for Containers in AKS". However, when we use DefaultAzureCredential and try to run docker containers locally from a development machine, we do not have workload identity support. With Visual Studio we can run with the Azure AD user and run applications successfully. But if we use a docker run command to run a docker container locally, we have to use an app registration/service principal. Then we have to grant the service principal the required roles in the Azure resources we need to access from the application. Let's take a look at an example with Azure app config.
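As a rough sketch (image name, endpoint and port are placeholders), the service principal credentials can be passed as environment variables so that DefaultAzureCredential picks them up via EnvironmentCredential inside the locally run container:

```bash
# Placeholder values: replace tenant/client ids, secret, endpoint and image.
docker run --rm -p 8080:8080 \
  -e AZURE_TENANT_ID="<tenant-id>" \
  -e AZURE_CLIENT_ID="<app-registration-client-id>" \
  -e AZURE_CLIENT_SECRET="<client-secret>" \
  -e AppConfig__Endpoint="https://myappconfig.azconfig.io" \
  myapp:local
```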

Tuesday 11 June 2024

Jump Into a Container Deployed in AKS (Kubernetes)

We may sometimes want to jump into a container deployed in a kubernetes pod to investigate its contents, such as the files in it, or we may want to run commands and see how they work inside a deployed container. For that purpose we need to jump into the container and obtain a command shell in it. Let's look at how we can jump into both Linux and Windows containers.
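For a quick reference, the commands below show the general pattern with kubectl exec (pod, namespace and container names are placeholders):

```bash
# Linux container: open an interactive shell (use sh if bash is not available)
kubectl exec -it <pod-name> -n <namespace> -- /bin/bash

# Windows container: open a command prompt or PowerShell session
kubectl exec -it <pod-name> -n <namespace> -- cmd.exe
kubectl exec -it <pod-name> -n <namespace> -- powershell.exe

# If the pod has more than one container, target one with -c
kubectl exec -it <pod-name> -n <namespace> -c <container-name> -- /bin/sh
```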

Monday 3 June 2024

Multiple KEDA Triggers for a Scaled Job with Event Hubs in AKS

A kubernetes scaled job helps us run one job per event/message we receive from the queue/event hub. We can have an event handler job which can handle more than one type of event message or queue message. Let's look at what we need to consider when we are defining more than one trigger with the kubernetes event-driven autoscaler (KEDA) for a scaled job.
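A minimal sketch of the shape of such a ScaledJob is shown below, with two azure-eventhub triggers on the same job. All names (namespaces, hubs, storage account, image and the referenced TriggerAuthentication) are illustrative assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: event-handler-job          # illustrative name
  namespace: apps
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: event-handler
            image: myacr.azurecr.io/event-handler:latest   # placeholder image
        restartPolicy: Never
  maxReplicaCount: 10
  triggers:
    # Trigger 1: first event hub
    - type: azure-eventhub
      metadata:
        eventHubNamespace: my-eh-namespace        # placeholder namespace
        eventHubName: orders
        consumerGroup: $Default
        blobContainer: checkpoints-orders         # checkpoint store container
        storageAccountName: mycheckpointstorage   # required with pod identity
      authenticationRef:
        name: eventhub-trigger-auth               # assumed TriggerAuthentication
    # Trigger 2: second event hub handled by the same job
    - type: azure-eventhub
      metadata:
        eventHubNamespace: my-eh-namespace
        eventHubName: shipments
        consumerGroup: $Default
        blobContainer: checkpoints-shipments
        storageAccountName: mycheckpointstorage
      authenticationRef:
        name: eventhub-trigger-auth
```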

Thursday 16 May 2024

Mount Azure Storage Fileshare Created with Terraform on AKS

We can mount an Azure file share to containers in AKS as explained in the documentation here, and we can use static volume mounting to use an existing Azure file share. The documentation only explains how to set up a static volume mount with Azure CLI. In this post let's look at the steps for using Terraform-provisioned Azure file share storage as a static volume mount in AKS, using kubernetes yaml.
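As a rough sketch of the static provisioning pattern with the Azure Files CSI driver (share, storage account, secret and resource group names are placeholders, and the secret is assumed to hold azurestorageaccountname/azurestorageaccountkey):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    # Must be unique across the cluster; a common convention is
    # {resource-group}#{storage-account}#{file-share-name}
    volumeHandle: "my-rg#mystorageaccount#my-file-share"
    volumeAttributes:
      shareName: my-file-share              # the Terraform-provisioned share
    nodeStageSecretRef:
      name: azure-storage-secret            # assumed secret with account name/key
      namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  volumeName: azurefile-pv
  resources:
    requests:
      storage: 10Gi
```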


Monday 6 May 2024

FormatterServices.GetUninitializedObject is Obsolete - What Can We Use Instead?

FormatterServices.GetUninitializedObject is obsolete and shows a warning, as shown below, if we try to use it in our code. The whole FormatterServices class is obsolete as per the documentation.


What is the alternative for this? Let's find out with an example.
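As a quick preview, the documented replacement is RuntimeHelpers.GetUninitializedObject. A minimal example (the Person type is purely illustrative):

```csharp
using System;
using System.Runtime.CompilerServices;

public class Person
{
    public string Name { get; set; } = "Unknown";

    public Person() => Console.WriteLine("Constructor called");
}

public static class Program
{
    public static void Main()
    {
        // Creates the object without running any constructor or field initializers,
        // just like FormatterServices.GetUninitializedObject did.
        var person = (Person)RuntimeHelpers.GetUninitializedObject(typeof(Person));

        Console.WriteLine(person.Name is null); // True: the initializer did not run
    }
}
```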

Thursday 25 April 2024

Installing Missing Fonts in windowsservercore-ltsc2022 Docker Image Using Azure Pipelines with az acr build

The missing fonts in windowsservercore-ltsc2022 docker images can be installed as described in the blog post here. However, when we are not using hosted agents, and instead use kubernetes based self hosted build agents, we do not have access to the host machine to perform all the steps described in the post "Adding optional font packages to Windows containers". Since docker is not supported on a self hosted build agent running as a container in AKS, we have to use az acr build to build the docker images in such cases. To set up fonts in this kind of a situation in an Azure DevOps pipeline we can take the steps described in this post.
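A sketch of the pipeline step that offloads the image build to ACR (the service connection, registry, image and Dockerfile names are placeholders), so no docker daemon is needed on the containerised agent:

```yaml
steps:
  - task: AzureCLI@2
    displayName: Build Windows image with az acr build
    inputs:
      azureSubscription: 'my-azure-service-connection'   # placeholder
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # The build runs on ACR, so the Dockerfile installing the fonts
        # can use a Windows base image even from a Linux agent.
        az acr build \
          --registry myacr \
          --image myapp-with-fonts:$(Build.BuildId) \
          --platform windows \
          --file Dockerfile.fonts \
          .
```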

Monday 15 April 2024

Dynamically Control Matrix of Jobs in GitHub Actions Based on Input Parameter Value

We can use a matrix in GitHub Actions to use a single definition of a job to create multiple jobs, as described here in the documentation. Let's say we input the list of application names we want to build as an input parameter to the action workflow, and need the ability to remove items from the app list when triggering it manually (run workflow). For example, we have 4 apps by default. However, when needed, we should be able to build only one or two of them using the same action workflow without having to change the workflow definition. In this post let's explore how we can achieve that with a GitHub Actions workflow, utilizing the matrix strategy and dynamically setting the matrix value.
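A minimal sketch of the pattern (workflow, input and app names are illustrative): a comma separated workflow_dispatch input is converted to a JSON array in a first job, and fromJson feeds it into the matrix of the build job.

```yaml
name: build-apps
on:
  workflow_dispatch:
    inputs:
      apps:
        description: 'Comma separated list of apps to build'
        required: true
        default: 'app1,app2,app3,app4'

jobs:
  set-matrix:
    runs-on: ubuntu-latest
    outputs:
      app-matrix: ${{ steps.build-matrix.outputs.apps }}
    steps:
      - id: build-matrix
        run: |
          # Convert "app1,app2" into ["app1","app2"]
          apps_json=$(echo -n "${{ github.event.inputs.apps }}" | jq -R -c 'split(",")')
          echo "apps=$apps_json" >> "$GITHUB_OUTPUT"

  build:
    needs: set-matrix
    runs-on: ubuntu-latest
    strategy:
      matrix:
        app: ${{ fromJson(needs.set-matrix.outputs.app-matrix) }}
    steps:
      - run: echo "Building ${{ matrix.app }}"
```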

Friday 5 April 2024

Loop Jobs based on Parameter Value in Azure DevOps Pipelines

Consider a situation where we want to perform the same set of steps in a pipeline multiple times. A good example would be building or deploying multiple apps using the same set of steps. Let's explore this example to understand how we can loop through a set of pipeline steps to build multiple apps, using a list of app names provided as a parameter in the pipeline.
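The core of the pattern looks roughly like this (parameter and app names are placeholders): an object parameter holds the app list and ${{ each }} expands one job per app at compile time.

```yaml
parameters:
  - name: apps
    type: object
    default:
      - app1
      - app2
      - app3

jobs:
  - ${{ each app in parameters.apps }}:
      - job: build_${{ app }}
        displayName: Build ${{ app }}
        steps:
          - script: echo "Building ${{ app }}"
            displayName: Build step for ${{ app }}
```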

Sunday 31 March 2024

Update Azure Pipeline Library Group Variable Value in Azure Pipeline using CLI

We can set a variable value in Azure pipelines using task.setvariable. This will only set a variable in the pipeline, not in a variable group. If we want to set a variable in a library variable group in Azure DevOps, we have to use the azure-devops extension for the Azure CLI. Let's explore how to update a library variable group variable value using an Azure pipeline step.
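A rough sketch of the CLI call from inside a pipeline step (organization, project, group id and variable name are placeholders, and the build service identity is assumed to have permission on the variable group):

```bash
# Install the azure-devops extension and authenticate the CLI with the
# pipeline's access token.
az extension add --name azure-devops
export AZURE_DEVOPS_EXT_PAT=$(System.AccessToken)

az pipelines variable-group variable update \
  --organization https://dev.azure.com/myorg \
  --project MyProject \
  --group-id 42 \
  --name MyVariable \
  --value "new-value"
```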

Saturday 16 March 2024

Deploying Kubernetes Event Driven Autoscaling (KEDA) with Azure Pipelines Using Helm

We have discussed how to deploy KEDA using helm in the post "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity". Instead of deploying KEDA manually it is better to automate the deployment. Let's look at the steps to get KEDA deployed using Azure pipelines.
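A minimal sketch of such a pipeline step (service connection, resource group and cluster names are placeholders, and the agent is assumed to be able to authenticate to the cluster):

```yaml
steps:
  - task: AzureCLI@2
    displayName: Deploy KEDA with helm
    inputs:
      azureSubscription: 'my-azure-service-connection'
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # Get AKS credentials for kubectl/helm on the agent
        az aks get-credentials --resource-group my-aks-rg --name my-aks-cluster --overwrite-existing

        # Add the KEDA chart repo and install/upgrade KEDA idempotently
        helm repo add kedacore https://kedacore.github.io/charts
        helm repo update
        helm upgrade --install keda kedacore/keda \
          --namespace keda \
          --create-namespace
```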

Saturday 20 January 2024

Scale Pods in AKS with Kubernetes Event Driven Autoscaling (KEDA) ScaledJob Based on Azure Service Bus Queue as a Trigger

In previous posts we discussed "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity" and how to "Set Up (KEDA) Authentication Trigger for Azure Storage Queue/Service Bus in AKS". With that in place we can proceed to set up a kubernetes scaled job in AKS to run a pod when the Azure service bus queue receives a message. Using a scaled job we are going to start a job (pod) once a message is received in the queue, receive the message in the pod's container app, process and complete the message, and finish the job execution with a pod complete. So there will be a different pod and container (kubernetes job) processing each message received in the Azure service bus queue.
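The rough shape of such a ScaledJob is sketched below; the queue, namespace, image, service account and TriggerAuthentication names are illustrative assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: servicebus-queue-job            # illustrative name
  namespace: apps
spec:
  jobTargetRef:
    template:
      metadata:
        labels:
          azure.workload.identity/use: "true"
      spec:
        serviceAccountName: workload-identity-sa        # assumed service account
        containers:
          - name: queue-processor
            image: myacr.azurecr.io/queue-processor:latest   # placeholder image
        restartPolicy: Never
  pollingInterval: 10
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders-queue                  # placeholder queue
        namespace: my-servicebus-namespace       # service bus namespace
        messageCount: "1"                        # roughly one job per message
      authenticationRef:
        name: servicebus-trigger-auth            # TriggerAuthentication from the previous post
```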

Saturday 13 January 2024

Setting Up (KEDA) Authentication Trigger for Azure Storage Queue/Service Bus in AKS

We have discussed setting up Kubernetes Event Driven Autoscaling (KEDA) with AKS workload identity in the post "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity". The purpose of KEDA is to scale a scaledjob/deployment in kubernetes once we receive messages in a queue, such as an Azure storage queue or an Azure service bus queue.

To set up authentication for KEDA to communicate with and monitor such a queue to scale a job or deployment, it needs to authenticate to access the queue. We can set up the required authentication using connection strings for the Azure service bus or storage queue. Instead of using such connection strings or shared access keys, we can authenticate to the queue using the workload identity, since we have already enabled workload identity in KEDA as described in "Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity".
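The workload identity based approach boils down to a TriggerAuthentication along these lines (the name, namespace and client id are placeholders; identityId can be omitted to fall back to the identity configured on the KEDA operator):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: servicebus-trigger-auth        # illustrative name
  namespace: apps
spec:
  podIdentity:
    provider: azure-workload
    # Optional: the client id of the user assigned managed identity to use.
    identityId: <user-assigned-client-id>
```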

Saturday 6 January 2024

Setting Up Kubernetes Event Driven Autoscaling (KEDA) in AKS with Workload Identity

We have discussed setting up workload identity in AKS to be used with the application containers we deploy to AKS in the post "Setting Up Azure Workload Identity for Containers in Azure Kubernetes Services (AKS) Using Terraform - Improved Security for Containers in AKS". Kubernetes Event Driven Autoscaling (KEDA) is the mechanism we need to use when we want to scale our deployments, or especially kubernetes jobs (a pod that runs to completion). In this post let's look at how to set up KEDA with workload identity, so that we can use KEDA in later posts to run a kubernetes job, autoscaled based on the messages received in a storage queue or Azure service bus.
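At its core, installing KEDA with workload identity enabled via the helm chart looks roughly like this (client id and tenant id are placeholders for the user assigned identity federated with the KEDA operator's service account):

```bash
# Add the KEDA chart repo and install KEDA with workload identity enabled
# on the operator. Values are from the kedacore/keda chart; ids are placeholders.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update

helm install keda kedacore/keda \
  --namespace keda \
  --create-namespace \
  --set podIdentity.azureWorkload.enabled=true \
  --set podIdentity.azureWorkload.clientId=<user-assigned-client-id> \
  --set podIdentity.azureWorkload.tenantId=<tenant-id>
```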
