We have discussed "Deploy Nginx Ingress Controller with Private IP (Limited Access to vNET) to AKS with Terraform, Helm and Azure DevOps Pipelines" and then "Automate Validation of Nginx Ingress Controller Setup in AKS with an Azure Pipeline Task". As the next step, let's explore how to set up ingress for an application deployed in AKS, using the Nginx ingress controller deployed with a private IP. The purpose is to expose the application in AKS within the virtual network, so that other applications in the same virtual network can access it securely, without having to expose the application in AKS publicly.
The expected outcome is to have ingress for apps set up as shown below.
The first step is to deploy the application (an API in this example) to AKS. We can use the below YAML and apply it with kubectl apply, using a Kubernetes task in Azure Pipelines.
Example deploy tasks in Azure Pipelines are shown below.
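A minimal sketch of such deploy steps is given below. It assumes the manifest is stored at manifests/app-deployment.yaml, that the "Replace Tokens" extension task is available to substitute the ${...}$ variables, and that a Kubernetes service connection named aks-demo-connection exists; all of these names are illustrative and should be adjusted to your pipeline.

steps:
  # Replace ${...}$ tokens in the manifest with pipeline variable values
  # (token prefix/suffix configured to match the ${...}$ pattern used in the manifest).
  - task: replacetokens@5
    inputs:
      targetFiles: 'manifests/app-deployment.yaml'
      tokenPattern: 'custom'
      tokenPrefix: '${'
      tokenSuffix: '}$'

  # Apply the manifest to the demo namespace via the Kubernetes service connection.
  - task: Kubernetes@1
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceEndpoint: 'aks-demo-connection' # hypothetical service connection name
      namespace: 'demo'
      command: 'apply'
      useConfigurationFile: true
      configuration: 'manifests/app-deployment.yaml'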
The full application deployment Kubernetes manifest is below. Note that variables defined as ${aksappname}$ will be replaced by Azure Pipelines with the correct values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${aksappname}$
  namespace: demo
  labels:
    app: ${aksappname}$
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 25%
  minReadySeconds: 0
  selector:
    matchLabels:
      service: ${aksappname}$
  template:
    metadata:
      labels:
        app: ${aksappname}$
        service: ${aksappname}$
        azure.workload.identity/use: "true"
    spec:
      serviceAccountName: demo-wi-sa
      nodeSelector:
        "kubernetes.io/os": ${buildos}$
      priorityClassName: ${aksapppriorityclassname}$
      #------------------------------------------------------
      # setting pod DNS policies to enable faster DNS resolution
      # https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
      dnsConfig:
        options:
          # use FQDN everywhere
          # any cluster local access from pods need full CNAME to resolve
          # short names will not resolve to internal cluster domains
          - name: ndots
            value: "2"
          # dns resolver timeout and attempts
          - name: timeout
            value: "15"
          - name: attempts
            value: "3"
          # use TCP to resolve DNS instead of using UDP (UDP is lossy and pods need to wait for timeout for lost packets)
          - name: use-vc
          # open new socket for retrying
          - name: single-request-reopen
      #------------------------------------------------------
      volumes:
        # `name` here must match the name
        # specified in the volume mount
        - name: demo-configmap-${aksappname}$-volume
          configMap:
            # `name` here must match the name
            # specified in the ConfigMap's YAML
            name: demo-configmap
      terminationGracePeriodSeconds: 300 # This must be set to a value that is greater than the preStop hook wait time.
      containers:
        - name: ${aksappname}$
          lifecycle:
            preStop:
              exec:
                command: [${containersleepcommand}$,"180"]
          image: ${aks_shared_container_registry}$.azurecr.io/demo/${aksappname}$:${Build.BuildId}$
          imagePullPolicy: Always
          # probe to determine the startup success
          startupProbe:
            httpGet:
              path: /api/health
              port: container-port
            initialDelaySeconds: 30 # give 30 seconds to get container started before checking health
            failureThreshold: 30 # max 300 (30*10) seconds wait for start up to succeed
            periodSeconds: 10 # interval of probe (300 (30*10) start up to succeed)
            successThreshold: 1 # how many consecutive success probes to consider as success
            timeoutSeconds: 10 # probe timeout
            terminationGracePeriodSeconds: 30 # restarts container (default restart policy is always)
          # readiness probe fail will not restart container but cut off traffic to container with one failure
          # as specified below and keep readiness probes running to see if container works again
          readinessProbe: # probe to determine if the container is ready for traffic
            httpGet:
              path: /api/health
              port: container-port
            failureThreshold: 1 # one readiness fail should stop traffic to container
            periodSeconds: 20 # interval of probe
            successThreshold: 2 # how many consecutive success probes to consider as success after a failure probe
            timeoutSeconds: 10 # probe timeout
          # probe to determine the container is healthy and if not healthy container will restart
          livenessProbe:
            httpGet:
              path: /api/health
              port: container-port
            failureThreshold: 3 # tolerates three consecutive failures before restart trigger
            periodSeconds: 40 # interval of probe
            successThreshold: 1 # how many consecutive success probes to consider as success after a failure probe
            timeoutSeconds: 10 # probe timeout
            terminationGracePeriodSeconds: 60 # restarts container (default restart policy is always)
          volumeMounts:
            - mountPath: /etc/config
              name: demo-configmap-${aksappname}$-volume
          ports:
            - name: container-port
              containerPort: 80
              protocol: TCP
          env:
            - name: ASPNETCORE_URLS
              value: http://+:80
            - name: ASPNETCORE_ENVIRONMENT
              value: Production
            - name: CH_DEMO_CONFIG
              value: /etc/config/config_${envname}$.json
          resources:
            limits:
              memory: ${aksappcontainermemorylimit}$ # the memory limit equals to the request!
              # no cpu limit! this is excluded on purpose
            requests:
              memory: ${aksappcontainermemorylimit}$
              cpu: "${aksappcontainerrequestscpu}$"
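Once the deployment manifest is applied, the rollout can be verified before moving on. The commands below are a hedged example; the deployment and label values depend on what ${aksappname}$ resolves to, and invoice-api is used here purely for illustration.

# wait for the rollout of the deployment to complete
kubectl rollout status deployment/invoice-api -n demo
# confirm the pods are running and ready
kubectl get pods -n demo -l service=invoice-api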
Next, we have to define a ClusterIP service as shown below, to expose the application within the cluster.
---
apiVersion: v1
kind: Service
metadata:
  name: ${aksappname}$-clusterip
  namespace: demo
  labels:
    app: ${aksappname}$
    service: ${aksappname}$
spec:
  type: ClusterIP
  ports:
    - port: 8091
      targetPort: 80
      protocol: TCP
  selector:
    service: ${aksappname}$
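Before wiring up the ingress, the ClusterIP service can be tested from inside the cluster, for example with a temporary curl pod. This is a hedged example; invoice-api is again used as a sample value of ${aksappname}$.

# run a one-off pod in the demo namespace and call the service on its ClusterIP port
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -n demo -- \
  curl -s http://invoice-api-clusterip:8091/api/health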
Then, we can define the ingress with Nginx as follows. The host name for the application should be defined following the private DNS zone setup (refer to "Deploy Nginx Ingress Controller with Private IP (Limited Access to vNET) to AKS with Terraform, Helm and Azure DevOps Pipelines"). The private DNS zone points to the private IP used by Nginx (*.aksgreen.ch-demo-dev-eus-001.net points to the private IP of Nginx). So, the invoice-api app in AKS can have the host name invoice-api.aksgreen.ch-demo-dev-eus-001.net. The path setting of / below allows accessing any endpoint exposed by the API. The equivalent setting with AGIC is /*.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${aksappname}$
  namespace: demo
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    # kubernetes.io/ingress.class: nginx # for nginx deprecated
spec:
  ingressClassName: nginx
  rules:
    - host: ${aksapphostname}$
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ${aksappname}$-clusterip
                port:
                  number: 8091
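After the ingress is applied, a quick check can confirm that Nginx has admitted it and that the host name works from inside the virtual network. Again a hedged example, using invoice-api as the app name and the sample private DNS zone.

# the ADDRESS column should show the private IP of the Nginx ingress controller
kubectl get ingress -n demo invoice-api
# from a VM or pod inside the virtual network, the host name should resolve and return healthy
curl -s http://invoice-api.aksgreen.ch-demo-dev-eus-001.net/api/health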
We can manually validate that the deployed application in AKS can be accessed via the Nginx ingress by using an App Gateway in the same virtual network as below, with a backend setting pointing to invoice-api.aksgreen.ch-demo-dev-eus-001.net and a health probe against http://invoice-api.aksgreen.ch-demo-dev-eus-001.net/api/health (the health validation endpoint of the deployed API). Let's set up the App Gateway for testing the Nginx ingress as below.
App gateway backend pool
App gateway backend setting
App gateway listener
App gateway rule
App gateway health probe
When the probe is tested, it shows connected and healthy, which confirms that our ingress setup with Nginx is working correctly within the virtual network.
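For reference, the same backend pool and health probe can also be created with the Azure CLI instead of the portal. This is a hedged sketch; the resource group and gateway names (rg-demo, agw-demo) are placeholders.

# backend pool pointing to the private host name served by the Nginx ingress
az network application-gateway address-pool create \
  --resource-group rg-demo --gateway-name agw-demo --name invoice-api-pool \
  --servers invoice-api.aksgreen.ch-demo-dev-eus-001.net

# custom health probe hitting the API health endpoint through the ingress
az network application-gateway probe create \
  --resource-group rg-demo --gateway-name agw-demo --name invoice-api-probe \
  --protocol Http --host invoice-api.aksgreen.ch-demo-dev-eus-001.net --path /api/health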
In the next post, let's explore how to automate health checks of AKS-deployed apps and validate the ingress setup with Nginx, using Azure Pipelines.