Thursday, 19 February 2026

High Availability Deployment of Nginx Gateway Fabric Replacing Retired Ingress Nginx in AKS - Part 2 - Deploy Nginx-Gateway-Fabric

In part 1, "High Availability Deployment of Nginx Gateway Fabric Replacing Retired Ingress Nginx in AKS - Part 1 - Plan for Smooth Transition", we discussed the plan to transition from the retired ingress-nginx to nginx-gateway in an AKS cluster that hosts Elasticsearch. In this post let's look at the steps necessary to deploy nginx-gateway.

The expectation is to end up with a highly available nginx-gateway deployed with NGINX Gateway Fabric.


As the first step, let's create a namespace for nginx-gateway.

# Namespace in AKS for Nginx Gateway
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-gateway

Note that the current retired ingress-nginx is deployed in the namespace ingress-nginx, and we are deploying the gateway to a new namespace. After the deployment the cluster has the new namespace. The deployment is done in AKS via Azure DevOps pipelines using kubectl.
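The manifest above can also be applied manually; a minimal sketch, assuming the manifest is saved at a path mirroring the repo layout used later in this post (the file name namespace.yaml is an assumption):

```shell
# Sketch: apply the namespace manifest and confirm the namespace exists.
# The path below is an assumption mirroring the repo layout used in this post.
MANIFEST="pipelines/aks_manifests/nginx_gateway/namespace.yaml"
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f "$MANIFEST"          # creates namespace nginx-gateway
  kubectl get namespace nginx-gateway   # verify it exists
else
  echo "kubectl not available; skipping apply of $MANIFEST"
fi
```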


The next step is deploying the Gateway API custom resource definitions (CRDs) to the AKS cluster. We can download the latest standard version from GitHub here. Download the latest standard-install.yaml and rename it to gateway_api_crds.yaml. Then we can add it to our code repo in the path pipelines\aks_manifests\nginx_gateway\gateway_api_crds.yaml.
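Downloading and renaming can be scripted; a hedged sketch, where the release version shown is only an example and should be pinned to the latest standard release on the kubernetes-sigs/gateway-api releases page:

```shell
# Sketch: fetch the Gateway API standard CRDs and store them under the repo
# path used by the pipeline. The version below is an assumption; check the
# kubernetes-sigs/gateway-api releases page for the latest standard release.
GATEWAY_API_VERSION="v1.2.1"
CRDS_URL="https://github.com/kubernetes-sigs/gateway-api/releases/download/${GATEWAY_API_VERSION}/standard-install.yaml"
DEST="pipelines/aks_manifests/nginx_gateway/gateway_api_crds.yaml"
if command -v curl >/dev/null 2>&1; then
  curl -fsSLo "$DEST" "$CRDS_URL" || echo "download failed (offline?)"
else
  echo "curl not available; download $CRDS_URL manually to $DEST"
fi
```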

Then in the pipeline's install service task we are going to call the service installer PowerShell script (pipelines\scripts\deploy_services.ps1) as shown below, passing the required parameters. Note that here the IP address for the internal vNET is obtained from the Azure DevOps variable group. There are additional parameters related to ingress-nginx, Elasticsearch etc., which we do not have to pay much attention to.

- task: AzureCLI@2
  displayName: 'Deploy services'
  inputs:
    azureSubscription: '${{ parameters.serviceconnection }}'
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      $rgName = 'ch-demo-$(envname)-rg';
      $aksName = 'ch-demo-$(envname)-aks';
      $serviceDeployNodepool = '$(sys_sh_svc_deploy_nodepool_suffix)';
      $bluegreenMode = '${{ parameters.deployinfra }}';
      $nginxIngressControllerLoadBalancerIp = '$(sh_private_ip_nginx)';
      $nginxGatewayLoadBalancerIp = '$(sh_private_ip_nginx_gateway)';
      $elasticUrl = 'http://es-search.sh.aks.ch-demo-$(envname).net';
      $keyvaultName = 'ch-demo-$(envname)-kv';
      $appConfigName = 'ch-demo-$(env)-appconfig-ac';

      az aks get-credentials -n $aksName -g $rgName --admin --overwrite-existing

      $(System.DefaultWorkingDirectory)/pipelines/scripts/deploy_services.ps1 `
        -manifestPath '$(System.DefaultWorkingDirectory)/pipelines/aks_manifests/' `
        -bluegreenMode $bluegreenMode `
        -serviceDeployNodepool $serviceDeployNodepool `
        -nginxIngressControllerLoadBalancerIp $nginxIngressControllerLoadBalancerIp `
        -nginxGatewayLoadBalancerIp $nginxGatewayLoadBalancerIp `
        -elasticUrl $elasticUrl `
        -keyvaultName $keyvaultName `
        -appConfigName $appConfigName `
        -rgName $rgName
     
      kubectl config delete-context (-join($aksName,'-admin'))

The script is in the path pipelines\scripts\deploy_services.ps1 and now contains the steps to deploy the CRDs for the Gateway API. Pay special attention to the CRDs deployment section, shown first below, followed by the full script.

#region Gateway-API CRDs
$nginxGatewayCrdsManifest = -join($ManifestPath,'nginx_gateway/','gateway_api_crds.yaml');
Write-Host (-join('Deploying Gateway-API CRDs with: ',$nginxGatewayCrdsManifest, ' ...'));
kubectl apply --server-side -f $nginxGatewayCrdsManifest;
Write-Host ('Successfully deployed Gateway-API CRDs.');
Write-Host ('=========================================================');
#endregion Gateway-API CRDs


param
(
    [string]$manifestPath,
    [string]$bluegreenMode,
    [string]$serviceDeployNodepool,
    [string]$nginxIngressControllerLoadBalancerIp,
    [string]$nginxGatewayLoadBalancerIp,
    [string]$elasticUrl,
    [string]$keyvaultName,
    [string]$appConfigName,
    [string]$rgName
)

#region Functions
function Invoke-Pod-GetIfExists {
    param (
        [string]$podName,
        [string]$aksNamespace,
        [string]$output = 'wide'
    )

    $podOutput = $null;
    $podGetSuccess = $true;

    try 
    {
        $podOutput = kubectl get pod $podName -n $aksNamespace -o $output 2>&1;    
    }
    catch 
    {
        $podGetSuccess = $false;
    }    

    if (($LASTEXITCODE -eq 0) -and ($podGetSuccess)) 
    {
        Write-Host (-join('Pod exists: ',$podName));
        return [PSCustomObject]@{
            Exists = $true
            Output = $podOutput};
    } 
    else 
    {
        Write-Warning (-join('Pod not found: ',$podName));
        $global:LASTEXITCODE = 0;
        return [PSCustomObject]@{
            Exists = $false
            Output = $podOutput};
    }
}

function Invoke-AKS-App-Health-Check
{
    param
    (
        [string]$aksNamespace,
        [string[]]$apps,
        [string]$appLabelName = 'app.kubernetes.io/name',
        [int]$appHealthCheckMaxAttempts = 20, # Wait for maximum 10 minutes till apps are fully deployed and running in AKS
        [int]$appHealthCheckIntervalSeconds = 30, # Check cycle in each 30 seconds
        [int]$appReadyInitialWaitSeconds = 5,
        [string]$readyPodCountValue = '1/1' # Default value for pod ready count, can be changed to 2/2 or 3/3 etc. based on the number of containers in a pod (e.g. for sidecar containers
    )

    $appRestartCheckMaxAttempts = 3; # Max 15 minutes wait for restarts to stabilize
    $appRestartCheckIntervalSeconds = 300; # Wait time for restarts to stabilize (max cap 5 minutes in k8s restart policy https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
    $appRestartCheckMaxCycles = 3; # Ensure $appRestartCheckMaxAttempts are accomodated

    $appHealthCheckAttempt = 0;
    $appRestartCheckAttempt = 0;
    $appRestartCheckCycle = 0;

    Write-Host (-join('AKS app health check max attempts ',$appHealthCheckMaxAttempts));
    Write-Host (-join('AKS app restart check max attempts ',$appRestartCheckMaxAttempts));
    Write-Host (-join('AKS app restart check max cycles ',$appRestartCheckMaxCycles));
    Write-Host ('--------------------------------------------------------');
    Write-Host (-join('Waiting for namespace ',$aksNamespace,' apps to be ready...'));
    Start-Sleep -Seconds $appReadyInitialWaitSeconds;

    do
    {
        $allAppPodsReady = $true;
        $restaredPods = @{}; # Empty hash table
        $appHealthCheckAttempt++;

        foreach($app in $apps)
        {
            Write-Host ('--------------------------------------------------------');
            Write-Host (-join('Inspecting app ',$app,' in namespace ',$aksNamespace, ' app label name is ',$appLabelName,' ...'));


            kubectl get pods -l "$appLabelName=$app" -n "$aksNamespace" # printing pods for pipeline logs
            Write-Host ('--------------------------------------------------------');

            $pods = kubectl get pods -l "$appLabelName=$app" -n "$aksNamespace" -o json | ConvertFrom-Json;

            if (($null -eq $pods) -or ($null -eq $pods.items) -or ($pods.items.Count -le 0))
            {
                $allAppPodsReady = $false;
                Write-Error (-join('No pods found for ',$app,' in namespace ',$aksNamespace));
                exit 1;
            }

            Write-Host (-join('Inspecting pod(s) for the app ',$app,' ...'));

            foreach($pod in $pods.items)
            {
                $podName = $pod.metadata.name;

                $podInfoString = $null;
                $podExistsResult = $null;
                $podExistsResult = Invoke-Pod-GetIfExists -podName $podName -aksNamespace $aksNamespace;

                if (($null -ne $podExistsResult) -and ($podExistsResult.Exists))
                {
                    # Filter pod status - This is required as pod status phase in json, only has Pending and Running status which is not sufficient if pods terminate or crash
                    $podInfoString = $podExistsResult.Output[1] -replace '\s+',';';
                    $podInfo = $podInfoString.Split(';');
                    $podReady = $podInfo[1];
                    $podStatus = $podInfo[2];

                    if(($null -eq $pod.status) -or ($null -eq $pod.spec) -or ($null -eq $pod.status.containerStatuses) -or ($null -eq $pod.spec.containers) -or ($pod.status.containerStatuses.Count -le 0) -or ($pod.spec.containers.Count -le 0))
                    {
                        Write-Warning (-join($podName,' is not yet healthy. Marking app as not healthy...'));
                        $allAppPodsReady = $false;
                    }
                    else
                    {
                        $podRestarts = $pod.status.containerStatuses[0].restartCount
                        $podContainerReady = $pod.status.containerStatuses[0].ready;
                        $podContainerStarted = $pod.status.containerStatuses[0].started;

                        Write-Host (-join($podName,' status is ',$podReady,' ',$podStatus,'. Restarts are ',$podRestarts,'. Container ready state is ',$podContainerReady,' started state is ',$podContainerStarted));

                        if (($podStatus -ne 'Running') -or ($podReady -ne $readyPodCountValue) -or (-not $podContainerReady) -or (-not $podContainerStarted))
                        {
                            Write-Warning (-join($podName,' is not yet healthy. Marking app as not healthy...'));
                            $allAppPodsReady = $false;
                        }
                        elseif ($podRestarts -gt 0)
                        {
                            Write-Warning (-join($podName,' has ',$podRestarts,' restart(s). collecting information for further verification...'));
                            $restaredPods.Add($podName,$podRestarts);
                        }
                    }
                }
                else
                {
                    Write-Warning (-join($podName,' is not found. Marking app as not healthy...'));
                    $allAppPodsReady = $false; 
                }
            }
            Write-Host ('--------------------------------------------------------');
        }
        if (($allAppPodsReady) -and ($restaredPods.Count -le 0))
        {
            Write-Host (-join('All apps are ready in health check attempt ',$appHealthCheckAttempt,'.')) -ForegroundColor Green;
        }
        elseif (($allAppPodsReady) -and ($restaredPods.Count -gt 0))
        {
            $appRestartCheckAttempt++;

            if($appRestartCheckAttempt -eq 1)
            {
                $appRestartCheckCycle++;

                if (($appHealthCheckAttempt + $appRestartCheckMaxAttempts) -gt $appHealthCheckMaxAttempts)
                {
                    $appHealthCheckMaxAttempts = $appHealthCheckAttempt + $appRestartCheckMaxAttempts;
                }
            }

            if (($appRestartCheckAttempt -le $appRestartCheckMaxAttempts) -and ($appRestartCheckCycle -le $appRestartCheckMaxCycles))
            {
                Write-Host (-join('All apps are ready in health check attempt ',$appHealthCheckAttempt,', but have restarts in ',$restaredPods.Count,' pod(s). Waiting for 5 minutes before reinspecting pod restarts...'));
                Start-Sleep -Seconds $appRestartCheckIntervalSeconds

                foreach ($restartedPodName in $restaredPods.Keys)
                {
                    $previousRestarts = $restaredPods[$restartedPodName];

                    $podExistsResult = $null;
                    $currentPod = $null;
                    $podExistsResult = Invoke-Pod-GetIfExists -podName $restartedPodName -aksNamespace $aksNamespace -output json;
                    
                    if (($null -ne $podExistsResult) -and ($podExistsResult.Exists))
                    {
                        $currentPod = $podExistsResult.Output | ConvertFrom-Json;
                    }

                    if (($null -eq $currentPod) -or ($null -eq $currentPod.status) -or ($null -eq $currentPod.status.containerStatuses) -or ($currentPod.status.containerStatuses.Count -le 0))
                    {
                        Write-Warning (-join($restartedPodName,' is not in a stable state.'));
                        $allAppPodsReady = $false;
                    }
                    else
                    {
                        $currentPodRestarts = $currentPod.status.containerStatuses[0].restartCount;

                        if ($currentPodRestarts -gt $previousRestarts)
                        {
                            Write-Warning (-join($restartedPodName,' restarts are not stabilized. Previous: ',$previousRestarts,' Current:',$currentPodRestarts));
                            $allAppPodsReady = $false;
                        }
                        else
                        {
                            Write-Host (-join($restartedPodName,' restarts are stabilized. Previous: ',$previousRestarts,' Current:',$currentPodRestarts));
                        }
                    }
                }

                if ($allAppPodsReady)
                {
                    Write-Host (-join('Pod restarts are stabilized. All apps are ready in health check attempt ',$appHealthCheckAttempt,', in restart check cycle ',$appRestartCheckCycle,' and in restart check attempt ',$appRestartCheckAttempt,'.')) -ForegroundColor Green;
                }
                else
                {
                    Write-Warning (-join('Pod restarts are not stabilized. All apps are not ready in health check attempt ',$appHealthCheckAttempt,', in restart check cycle ',$appRestartCheckCycle,' and in restart check attempt ',$appRestartCheckAttempt,'. Waiting for ',$appHealthCheckIntervalSeconds,' seconds before next check...'));
                    Write-Host ('--------------------------------------------------------');
                    Write-Host ('--------------------------------------------------------');
                    Start-Sleep -Seconds $appHealthCheckIntervalSeconds
                }
            }
            else
            {
                Write-Error (-join('All apps are not ready in health check attempt ',$appHealthCheckAttempt,', in restart check cycle ',$appRestartCheckCycle,' and in restart check attempt ', $appRestartCheckAttempt,'. Deployment failed.'));
                exit 1;
            }
        }
        else
        {
            Write-Warning (-join('All apps are not ready in health check attempt ',$appHealthCheckAttempt,' Waiting for ',$appHealthCheckIntervalSeconds,' seconds before next check...'));
            Write-Host ('--------------------------------------------------------');
            Write-Host ('--------------------------------------------------------');
            $appRestartCheckAttempt = 0;
            Start-Sleep -Seconds $appHealthCheckIntervalSeconds
        }

    } until($allAppPodsReady -or ($appHealthCheckAttempt -ge $appHealthCheckMaxAttempts))

    if ($allAppPodsReady)
    {
        Write-Host (-join('All apps ready in namespace ',$aksNamespace,'.')) -ForegroundColor Green;
        Write-Host ('--------------------------------------------------------');
        Write-Host ('--------------------------------------------------------');
    }
    else
    {
        Write-Error (-join('All apps are not ready in namespace ',$aksNamespace,'. Deployment failed.'));
        exit 1;
    }
}

function Invoke-AKS-Load-Balancer-Health-Check
{
    param
    (
        [string]$aksNamespace,
        [string]$loadBalancerServiceName,
        [string]$loadBalancerIP,
        [int]$maxAttempts = 60, # Wait for maximum 15 minutes till a load balancer is ready
        [int]$intervalSeconds = 15 # Check cycle in each 15 seconds
    )

    $attempts = 0

    do
    {
        Write-Host (-join('Waiting for load balancer ',$loadBalancerServiceName,' to be ready in the namespace ',$aksNamespace,' ...'));
        $attempts++;
        Start-Sleep -Seconds $intervalSeconds;
        $loadBalancerService = kubectl get service/$loadBalancerServiceName -n $aksNamespace -o json | ConvertFrom-Json;

    } until ((($null -ne $loadBalancerService) `
                -and ($loadBalancerService.status.loadBalancer.ingress.ip -eq $loadBalancerIP)) `
            -or ($attempts -ge $maxAttempts))

    if (($null -ne $loadBalancerService) `
        -and ($loadBalancerService.status.loadBalancer.ingress.ip -eq $loadBalancerIP))
    {
        Write-Host (-join('Load balancer ',$loadBalancerServiceName,' is ready in the namespace ',$aksNamespace,'.'));
    }
    else
    {
        Write-Error (-join('Load balancer ',$loadBalancerServiceName,' is not ready in the namespace ',$aksNamespace,'.'));
        exit 1;
    }
}
#endregion Functions

#region Version Check
Write-Host ('=========================================================');
Write-Host ('Checking kubectl version ...');
kubectl version
Write-Host ('--------------------------------------------------------');
Write-Host ('Checking helm version ...');
helm version;
Write-Host ('=========================================================');
#endregion Version Check

#region Gateway-API CRDs
$nginxGatewayCrdsManifest = -join($ManifestPath,'nginx_gateway/','gateway_api_crds.yaml');
Write-Host (-join('Deploying Gateway-API CRDs with: ',$nginxGatewayCrdsManifest, ' ...'));
kubectl apply --server-side -f $nginxGatewayCrdsManifest;
Write-Host ('Successfully deployed Gateway-API CRDs.');
Write-Host ('=========================================================');
#endregion Gateway-API CRDs

Once the CRDs are deployed, we can configure cert-manager to enable the Gateway API by setting the configuration shown below in the cert-manager deployment. You can download the latest cert-manager version YAML from here.
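The relevant change is a sketch along these lines: adding the --enable-gateway-api argument to the cert-manager controller container in the downloaded Deployment manifest. The container name and existing args vary between cert-manager versions, so verify against the YAML you downloaded.

```yaml
# Excerpt from the cert-manager controller Deployment (assumption: container
# name and existing args vary by version; merge into your cert_manager.yaml).
      containers:
        - name: cert-manager-controller
          args:
            - --enable-gateway-api   # enables Gateway API support in cert-manager
```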


Then we can add a section to pipelines\scripts\deploy_services.ps1 to deploy cert-manager, as shown below.


#region cert-manager
$certManagerManifest = -join($ManifestPath,'cert_manager/','cert_manager.yaml');

Write-Host (-join('Deploying cert-manager with: ',$certManagerManifest, ' ...'));
kubectl apply -f $certManagerManifest;
Invoke-AKS-App-Health-Check -aksNamespace 'cert-manager' -apps @('cert-manager','cainjector','webhook');
Write-Host ('Successfully deployed cert-manager.');
Write-Host ('=========================================================');

if ($bluegreenMode -eq 'True')
{
    Write-Host ('Waiting 180 seconds for cert manager web hooks to be operational ...');
    Start-Sleep -Seconds 180; # Need to wait for cert manager web hooks to be fully ready and operational, before deploying operators using it.
}
#endregion cert-manager

Once we have cert-manager deployed we can set up NGINX Gateway Fabric. The first step is setting up the prerequisite certificate configuration. If you are using HTTPS then use Let's Encrypt as described here. Since our traffic stays within the vNET and only uses HTTP, the following basic setup is fine for this implementation.

Create a file in pipelines\aks_manifests\nginx_gateway\nginx_gateway_fabric_prerequisites.yaml and add the content below. Note that this is a basic setup and not suitable for use with HTTPS, but for HTTP access within the vNET it is fine to use. PDBs are set up manually here for the control plane and data plane, since full Helm support for PDBs in NGINX Gateway Fabric will only be available in version 2.5. Note that selfhost-apps-gateway is the name of the gateway that we are going to set up.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: nginx-gateway
spec:
  selfSigned: {}

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-gateway-ca
  namespace: nginx-gateway
spec:
  isCA: true
  commonName: nginx-gateway
  secretName: nginx-gateway-ca
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
    group: cert-manager.io

---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: nginx-gateway-issuer
  namespace: nginx-gateway
spec:
  ca:
    secretName: nginx-gateway-ca

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-gateway
  namespace: nginx-gateway
spec:
  secretName: server-tls
  usages:
  - digital signature
  - key encipherment
  dnsNames:
  - ngf-nginx-gateway-fabric.nginx-gateway.svc
  issuerRef:
    name: nginx-gateway-issuer

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx
  namespace: nginx-gateway
spec:
  secretName: agent-tls
  usages:
  - "digital signature"
  - "key encipherment"
  dnsNames:
  - "*.cluster.local"
  issuerRef:
    name: nginx-gateway-issuer

# Once PDB setup with helm available for nginx-gateway-fabric
# as in https://github.com/nginx/nginx-gateway-fabric/issues/4380
# Remove below PDB and update helm chart to include PDBs for nginx-gateway-fabric
# and selfhost-apps-gateway
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-gateway-fabric-pdb
  namespace: nginx-gateway
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx-gateway-fabric

---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: selfhost-apps-gateway-pdb
  namespace: nginx-gateway
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      gateway.networking.k8s.io/gateway-name: selfhost-apps-gateway

Then create another file, pipelines\aks_manifests\nginx_gateway\nginx_gateway_fabric_helm_values.yaml, with Helm overrides for the NGINX Gateway Fabric Helm deployment. Add the content below to that file and update it according to your node pool constraints. Here the setup uses blue-green switching of node pool deployments, and we ensure that replicas of the control plane and data plane are deployed to different AKS nodes in different availability zones.

certGenerator:
  nodeSelector:
    kubernetes.io/os: "linux"
    agentpool: "sh${sys_sh_svc_deploy_nodepool_suffix}$"
  tolerations:
    - key: "nodepool"
      operator: "Equal"
      value: "sh${sys_sh_svc_deploy_nodepool_suffix}$"
      effect: "NoSchedule"

nginxGateway:
  nodeSelector:
    kubernetes.io/os: "linux"
    agentpool: "sh${sys_sh_svc_deploy_nodepool_suffix}$"
  tolerations:
    - key: "nodepool"
      operator: "Equal"
      value: "sh${sys_sh_svc_deploy_nodepool_suffix}$"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - nginx-gateway-fabric
          topologyKey: kubernetes.io/hostname
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: nginx-gateway-fabric

nginx:
  pod:
    nodeSelector:
      kubernetes.io/os: "linux"
      agentpool: "sh${sys_sh_svc_deploy_nodepool_suffix}$"
    tolerations:
      - key: "nodepool"
        operator: "Equal"
        value: "sh${sys_sh_svc_deploy_nodepool_suffix}$"
        effect: "NoSchedule"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/os
                  operator: In
                  values:
                    - linux
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: gateway.networking.k8s.io/gateway-name
                  operator: In
                  values:
                    - selfhost-apps-gateway
            topologyKey: kubernetes.io/hostname
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            gateway.networking.k8s.io/gateway-name: selfhost-apps-gateway

Then add another file pipelines\aks_manifests\nginx_gateway\nginx_gateway_setup.yaml and add the content below to set up the gateway named selfhost-apps-gateway. Additionally, we have set up a client settings policy to set the max payload size for the gateway. The service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation enables the use of a private IP within the vNET, and a load balancer for the gateway is created with the private IP.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: selfhost-apps-gateway
  namespace: nginx-gateway
spec:
  infrastructure:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            shared-gateway-access: "true"

---
apiVersion: gateway.nginx.org/v1alpha1
kind: ClientSettingsPolicy
metadata:
  name: selfhost-apps-gateway-client-settings
  namespace: nginx-gateway
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: selfhost-apps-gateway
  body:
    maxSize: "4g"

Now we can get the deployment done for NGINX Gateway Fabric using the section below, added to the pipelines\scripts\deploy_services.ps1 file. A minimum of 3 replicas for the control and data planes, together with the Helm value overrides above, ensures we have at least 3 replicas deployed on 3 different AKS nodes in different availability zones.

#region Nginx-Gateway with Nginx-Gateway-Fabric
$nginxGatewayFabricPrerequisites = -join($ManifestPath,'nginx_gateway/','nginx_gateway_fabric_prerequisites.yaml');
$nginxGatewayFabricHelmValuesManifest = -join($ManifestPath,'nginx_gateway/','nginx_gateway_fabric_helm_values.yaml');
$nginxGatewaySetupManifest = -join($ManifestPath,'nginx_gateway/','nginx_gateway_setup.yaml');

Write-Host (-join('Deploying Nginx-Gateway-Fabric prerequisites with: ',$nginxGatewayFabricPrerequisites, ' ...'));
kubectl apply -f $nginxGatewayFabricPrerequisites;
Write-Host ('Successfully deployed Nginx-Gateway-Fabric prerequisites.');
Write-Host ('=========================================================');

Write-Host ('Deploying Nginx-Gateway-Fabric with helm...');
helm upgrade ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --install `
    --namespace nginx-gateway `
    --version 2.4.1 `
    -f $nginxGatewayFabricHelmValuesManifest `
    --set nginx.service.type="LoadBalancer" `
    --set nginx.service.loadBalancerIP=$nginxGatewayLoadBalancerIp `
    --set nginxGateway.autoscaling.enable=true `
    --set nginxGateway.autoscaling.minReplicas=3 `
    --set nginxGateway.autoscaling.maxReplicas=6 `
    --set nginx.autoscaling.enable=true `
    --set nginx.autoscaling.minReplicas=3 `
    --set nginx.autoscaling.maxReplicas=9

Invoke-AKS-App-Health-Check -aksNamespace 'nginx-gateway' -apps @('nginx-gateway-fabric') -appReadyInitialWaitSeconds 20;
Write-Host ('Successfully deployed Nginx-Gateway-Fabric via helm.');
Write-Host ('=========================================================');

Write-Host (-join('Deploying Nginx-Gateway with: ',$nginxGatewaySetupManifest, ' ...'));
kubectl apply -f $nginxGatewaySetupManifest;
Invoke-AKS-App-Health-Check -aksNamespace 'nginx-gateway' -apps @('selfhost-apps-gateway') -appLabelName 'gateway.networking.k8s.io/gateway-name' -appReadyInitialWaitSeconds 30;
Invoke-AKS-Load-Balancer-Health-Check -loadBalancerServiceName 'selfhost-apps-gateway-nginx' -loadBalancerIP $nginxGatewayLoadBalancerIp -aksNamespace 'nginx-gateway';
Write-Host ('Successfully deployed Nginx-Gateway.');
Write-Host ('=========================================================');
#endregion Nginx-Gateway with Nginx-Gateway-Fabric

With this we have nginx-gateway ready and running in our AKS cluster.
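A quick verification sketch, using the resource names from the manifests above (guarded so the script is runnable even without cluster access):

```shell
# Sketch: verify the gateway deployment. Resource names follow the manifests
# in this post; output depends on your cluster.
NAMESPACE="nginx-gateway"
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "$NAMESPACE" -o wide                        # control/data plane pods spread across nodes
  kubectl get gateway selfhost-apps-gateway -n "$NAMESPACE"       # gateway status
  kubectl get service selfhost-apps-gateway-nginx -n "$NAMESPACE" # internal LB with the private IP
else
  echo "kubectl not available; run these commands against the AKS cluster"
fi
```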


A private DNS zone A record can be set up to target the private IP of the gateway's internal load balancer. In the next post let's look at the steps for setting up HTTP routes using the gateway.
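For example, with the Azure CLI; a sketch in which the resource group, zone, record, and IP values are hypothetical placeholders following the naming pattern used earlier in this post:

```shell
# Sketch: create a private DNS A record pointing at the gateway's internal
# load balancer IP. All names and the IP below are hypothetical placeholders.
RG="ch-demo-dev-rg"
ZONE="sh.aks.ch-demo-dev.net"
RECORD="es-search"
GATEWAY_IP="10.10.0.20"
if command -v az >/dev/null 2>&1; then
  az network private-dns record-set a add-record \
    --resource-group "$RG" \
    --zone-name "$ZONE" \
    --record-set-name "$RECORD" \
    --ipv4-address "$GATEWAY_IP"
else
  echo "az CLI not available; create the A record $RECORD.$ZONE -> $GATEWAY_IP manually"
fi
```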


