We have discussed "Create Azure CNI based AKS Cluster with Application Gateway Ingress Controller (AGIC) Using Terraform" in previous posts. However, Nginx is a popular ingress controller for Kubernetes, and when using the Nginx ingress controller with AKS we can avoid the cost of the application gateway required for AGIC. In this post we are going to explore setting up the Nginx ingress controller for AKS to expose applications running in AKS privately within the virtual network (without exposing them publicly), so that only other applications or Azure services within the vNet can access the apps in AKS.
The expected outcome is to set up the Nginx ingress controller in AKS with a private IP from the AKS subnet, as shown below.
If we are using AGIC as the ingress controller for AKS we cannot use Azure CNI Overlay networking (which lets us use Azure CNI networking without assigning pod IPs from the subnet address space; only nodes use subnet IPs). See more information on Azure CNI Overlay vs. flat networking in the documentation here. With the Nginx ingress controller we can use Azure CNI Overlay networking. Let's first change the AKS cluster setup in Terraform to use overlay networking as below.
In azurerm_kubernetes_cluster we have to set the network plugin mode to overlay and define a CIDR for pods.
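The relevant excerpt from the network_profile block of the cluster resource (the full resource is shown further below):

network_profile {
  network_plugin    = "azure"
  load_balancer_sku = "standard"
  #region Nginx-change01
  network_plugin_mode = "overlay"
  pod_cidr            = "100.112.0.0/12"
  #endregion
}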
We need to use a user assigned identity as the AKS cluster identity, so that AKS can read subnet information when we are using a custom vNet. Therefore, let's set the same user assigned identity we use for setting up AKS workload identity as the AKS cluster identity.
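The cluster identity excerpt, switching from the previously used system assigned identity to the user assigned identity (again from the full resource below):

#region Nginx-change01
# identity {
#   type = "SystemAssigned"
# }
identity {
  type         = "UserAssigned"
  identity_ids = [var.user_assigned_identity]
}
#endregion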
We have to remove the usage of AGIC with AKS if we are already using it.
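The AGIC add-on block is commented out in the cluster resource (excerpt from the full resource below):

#region Nginx-change01
# ingress_application_gateway {
#   gateway_id = azurerm_application_gateway.aks.id
# }
#endregion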
The full AKS cluster setup Terraform code is below.
resource "azurerm_kubernetes_cluster" "aks_cluster" { lifecycle { ignore_changes = [default_node_pool[0].node_count] } name = "${var.prefix}-${var.project}-${var.environment_name}-aks-${var.deployment_name}" kubernetes_version = local.kubernetes_version sku_tier = "Standard" location = var.location resource_group_name = var.rg_name dns_prefix = "${var.prefix}-${var.project}-${var.environment_name}-aks-${var.deployment_name}-dns" node_resource_group = "${var.prefix}-${var.project}-${var.environment_name}-aks-${var.deployment_name}-rg" image_cleaner_enabled = false # As this is a preview feature keep it disabled for now. Once feture is GA, it should be enabled. image_cleaner_interval_hours = 48 network_profile { network_plugin = "azure" load_balancer_sku = "standard" #region Nginx-change01 network_plugin_mode = "overlay" pod_cidr = "100.112.0.0/12" #endregion } storage_profile { file_driver_enabled = true } default_node_pool { name = "chlinux" orchestrator_version = local.kubernetes_version node_count = 1 enable_auto_scaling = true min_count = 1 max_count = 4 vm_size = "Standard_B4ms" os_sku = "Ubuntu" vnet_subnet_id = var.subnet_id max_pods = 30 type = "VirtualMachineScaleSets" scale_down_mode = "Delete" zones = ["1", "2", "3"] upgrade_settings { drain_timeout_in_minutes = 0 max_surge = "10%" node_soak_duration_in_minutes = 0 } } timeouts { update = "180m" delete = "180m" } # Enable workload identity requires both below to be set to true oidc_issuer_enabled = true workload_identity_enabled = true #region Nginx-change01 # identity { # type = "SystemAssigned" # } identity { type = "UserAssigned" identity_ids = [var.user_assigned_identity] } #endregion windows_profile { admin_username = "nodeadmin" admin_password = "AdminPasswd@001" } #region Nginx-change01 # ingress_application_gateway { # gateway_id = azurerm_application_gateway.aks.id # } #endregion key_vault_secrets_provider { secret_rotation_enabled = false } workload_autoscaler_profile { keda_enabled = true } azure_active_directory_role_based_access_control { azure_rbac_enabled = false managed = true tenant_id = var.tenant_id # add sub owners as cluster admin admin_group_object_ids = [ var.sub_owners_objectid] # azure AD group object ID } oms_agent { log_analytics_workspace_id = var.log_analytics_workspace_id } depends_on = [ azurerm_application_gateway.aks ] tags = merge(tomap({ Service = "aks_cluster" }), var.tags) }
The Windows node pool used with this demo AKS cluster is not shown here, as there is no Nginx-specific change for the Windows node pool.
The user assigned identity of AKS must have the Network Contributor role on the subnet used by AKS. We can set it up as shown below.
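A minimal sketch of the role assignment, assuming the principal ID of the user assigned identity is available as var.user_assigned_identity_principal_id (that variable name is an assumption; var.subnet_id is the same subnet used by the cluster above):

resource "azurerm_role_assignment" "aks_subnet_network_contributor" {
  # Scope the role to the AKS subnet so the cluster identity can read and join the subnet
  scope                = var.subnet_id
  role_definition_name = "Network Contributor"
  # Principal ID of the user assigned identity set as the AKS cluster identity (assumed variable name)
  principal_id         = var.user_assigned_identity_principal_id
}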
The next step is setting up a private DNS zone with DNS A records pointing to a private IP. The private IP selected for the Nginx ingress should be within the AKS subnet address range. Since the blue-green deployment approach is used, an Nginx IP should be defined for both the blue and green clusters. It is better to assign IP addresses from the last set of IPs in the subnet range. With overlay networking for AKS only the nodes use subnet IPs, so subnet IP usage stays low and the cluster can scale to a large size.
The IPs can be defined in the pipeline variable group, for example as private_ip_nginx_blue and private_ip_nginx_green, which are referenced in the Helm deployment step later in this post.
The DNS should be set up as below.
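A minimal sketch of the private DNS zone and A records, assuming a zone name of demo.internal, record names of blue and green, and Terraform variables var.vnet_id, var.private_ip_nginx_blue and var.private_ip_nginx_green; these names and values are assumptions for illustration:

resource "azurerm_private_dns_zone" "aks_apps" {
  # Zone name is an assumption; use the domain your apps will be addressed by
  name                = "demo.internal"
  resource_group_name = var.rg_name
}

resource "azurerm_private_dns_zone_virtual_network_link" "aks_apps" {
  # Link the zone to the vNet so resources in the vNet can resolve the records
  name                  = "aks-apps-vnet-link"
  resource_group_name   = var.rg_name
  private_dns_zone_name = azurerm_private_dns_zone.aks_apps.name
  virtual_network_id    = var.vnet_id
}

resource "azurerm_private_dns_a_record" "nginx_blue" {
  # Points the blue hostname to the Nginx private IP of the blue cluster
  name                = "blue"
  zone_name           = azurerm_private_dns_zone.aks_apps.name
  resource_group_name = var.rg_name
  ttl                 = 300
  records             = [var.private_ip_nginx_blue]
}

resource "azurerm_private_dns_a_record" "nginx_green" {
  # Points the green hostname to the Nginx private IP of the green cluster
  name                = "green"
  zone_name           = azurerm_private_dns_zone.aks_apps.name
  resource_group_name = var.rg_name
  ttl                 = 300
  records             = [var.private_ip_nginx_green]
}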
Deploy the AKS cluster with a Terraform task in Azure DevOps pipelines.
- task: TerraformCLI@0
  displayName: 'Run terraform init'
  inputs:
    command: init
    environmentServiceName: '${{ parameters.serviceconnection }}'
    workingDirectory: "$(System.DefaultWorkingDirectory)/infra/Deployment/Terraform"
    backendType: azurerm
    ensureBackend: false
    backendServiceArm: '${{ parameters.serviceconnection }}'
    backendAzureRmSubscriptionId: "79ed27b4-3346-42b2-952c-055955487701"
    backendAzureRmResourceGroupName: "rg-demo-tfstate"
    backendAzureRmStorageAccountName: 'stdemotfstate001'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: '$(tfstatefile)'
- task: TerraformCLI@0
  displayName: 'Run terraform apply'
  name: terraformApply
  inputs:
    command: apply
    environmentServiceName: '${{ parameters.serviceconnection }}'
    workingDirectory: "$(System.DefaultWorkingDirectory)/infra/Deployment/Terraform"
    commandOptions: $(System.ArtifactsDirectory)/$(envname)_$(sys_deployment_phase)_TerraformTfplan/$(envname)-$(sys_deployment_phase).tfplan
The prerequisites described above should be deployed to AKS with the Kubernetes task in Azure DevOps.
- task: qetza.replacetokens.replacetokens-task.replacetokens@5
  displayName: 'Replace tokens in k8s_prerequisites.yaml'
  inputs:
    rootDirectory: '$(System.ArtifactsDirectory)'
    targetFiles: 'k8s_prerequisites.yaml'
    actionOnMissing: fail
    tokenPattern: custom
    tokenPrefix: '${'
    tokenSuffix: '}$'
- task: Kubernetes@1
  displayName: 'Deploy k8s prerequisites'
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: '${{ parameters.serviceconnection }}'
    azureResourceGroup: 'ch-demo-$(envname)-rg'
    kubernetesCluster: 'ch-demo-$(envname)-aks-$(sys_app_deploy_instance_suffix)'
    useClusterAdmin: true
    command: apply
    arguments: '-f k8s_prerequisites.yaml'
    workingDirectory: '$(System.ArtifactsDirectory)'
Then we can use a Helm installer task and use Helm to deploy Nginx with a private IP to AKS via Azure DevOps as below.
- task: HelmInstaller@0
  displayName: 'Install Helm latest'
  inputs:
    helmVersion: latest
- task: AzureCLI@2
  displayName: 'Deploy nginx & update KEDA operator identity'
  inputs:
    azureSubscription: '${{ parameters.serviceconnection }}'
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      $rgName = 'ch-demo-$(envname)-rg';
      $aksName = 'ch-demo-$(envname)-aks-$(sys_app_deploy_instance_suffix)';
      Write-Host (-join('AKS instance: ',$aksName));
      az aks get-credentials -n $aksName -g $rgName --admin --overwrite-existing

      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      helm repo update

      $private_ip_nginx = '$(private_ip_nginx_blue)';
      $sys_app_deploy_instance_suffix = '$(sys_app_deploy_instance_suffix)';
      if ($sys_app_deploy_instance_suffix -eq 'green')
      {
        $private_ip_nginx = '$(private_ip_nginx_green)';
      }
      Write-Host (-join('Ingress internal IP: ',$private_ip_nginx));

      helm upgrade ingress-nginx ingress-nginx/ingress-nginx --install `
        --namespace ingress-nginx `
        --version 4.11.2 `
        --set controller.replicaCount=2 `
        --set controller.nodeSelector."kubernetes\.io/os"=linux `
        --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
        --set controller.service.loadBalancerIP=$private_ip_nginx `
        --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true `
        --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
        --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux

      kubectl config delete-context (-join($aksName,'-admin'))
Once executed, the Nginx ingress controller with the private IP should be set up in AKS.
In the next posts let's explore how to achieve the below tasks.
- Automate Validation of Nginx Ingress Controller Setup in AKS with an Azure Pipeline Task.
- Setup Application Ingress for AKS Using Nginx Ingress controller with Private IP.
- Automate Health Check Validation for AKS Apps with Nginx Ingress Using Azure DevOps Pipelines.