"Setup Redis Cluster with JSON and Search Modules on AKS with Bitnami Redis Using Custom Image" and "Setup Managed Prometheus for AKS via Terraform" were explained in the previous posts. To set up monitoring and alerting for the Redis cluster deployed in AKS, the first step is to enable Prometheus data scraping in the Redis cluster we deployed on AKS. Let's look at the steps in this post.
The expectation is to have Redis metrics available to Azure Managed Grafana via Managed Prometheus in AKS as shown below.
First, we have to update the Bitnami Helm command we used in "Setup Redis Cluster with JSON and Search Modules on AKS with Bitnami Redis Using Custom Image" with the below parameters. This enables running a sidecar container for metrics scraping in the Redis cluster pods.

--set metrics.enabled=true \
--set metrics.serviceMonitor.enabled=false \
--set metrics.resources.requests.cpu=300m \
--set metrics.resources.requests.memory=1Gi \
--set metrics.resources.limits.memory=1Gi
We have to disable the ServiceMonitor deployment in the Helm chart because it would try to deploy a ServiceMonitor with the monitoring.coreos.com/v1 API, which is not available in AKS with Managed Prometheus. Instead, AKS with Managed Prometheus provides its own ServiceMonitor and PodMonitor custom resources, as shown below. Therefore, we disable the chart's ServiceMonitor deployment above and deploy our own ServiceMonitor.
servicemonitors.azmonitoring.coreos.com
podmonitors.azmonitoring.coreos.com
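Before proceeding, we can confirm these CRDs are present in the cluster, e.g.:

```shell
# List the Azure Managed Prometheus monitoring CRDs installed in the AKS cluster;
# both should be returned if the managed Prometheus add-on is enabled
kubectl get crd servicemonitors.azmonitoring.coreos.com podmonitors.azmonitoring.coreos.com
```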
The full Helm command with the above setup added is below. For more information on the Helm parameters, refer to the README here.
helm upgrade redis-cluster bitnami/redis-cluster --install \
  --namespace redis \
  --version 12.0.13 \
  --set redis.nodeSelector."kubernetes\.io/os"=linux \
  --set redis.podAntiAffinityPreset=soft \
  --set global.defaultStorageClass=redis-storage \
  --set persistence.enabled=true \
  --set redis.useAOFPersistence="yes" \
  --set persistentVolumeClaimRetentionPolicy.enabled=true \
  --set persistentVolumeClaimRetentionPolicy.whenDeleted=Delete \
  --set existingSecret=redis-service-credentials \
  --set existingSecretPasswordKey=password \
  --set redis.extraEnvVars[0].name=REDIS_EXTRA_FLAGS \
  --set redis.extraEnvVars[0].value="--loadmodule /opt/redis/modules/redisbloom.so --loadmodule /opt/redis/modules/redisearch.so --loadmodule /opt/redis/modules/redistimeseries.so --loadmodule /opt/redis/modules/rejson.so" \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=false \
  --set metrics.resources.requests.cpu=300m \
  --set metrics.resources.requests.memory=1Gi \
  --set metrics.resources.limits.memory=1Gi \
  --set cluster.nodes=9 \
  --set cluster.replicas=2 \
  --set pdb.create=true \
  --set pdb.minAvailable="50%" \
  --set persistence.size=16Gi \
  --set redis.resources.requests.memory=4Gi \
  --set redis.resources.limits.memory=4Gi \
  --set redis.resources.requests.cpu=500m \
  --set global.security.allowInsecureImages=true \
  --set image.registry=chdemoacr.azurecr.io \
  --set image.repository=ch/redis/ch_redis_cluster \
  --set image.tag=8.0.3-debian-12-r2
Once we deploy the Redis cluster with the above Helm command in AKS, we can see two containers running in each of the Redis cluster pods. The second one is the metrics sidecar container.
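To verify, listing the containers of one of the pods should show both the Redis container and the metrics sidecar (the pod name below assumes the release name redis-cluster used in this post):

```shell
# Print the container names of the first redis-cluster pod;
# with metrics enabled there should be two (redis plus the exporter sidecar)
kubectl get pod redis-cluster-0 -n redis -o jsonpath='{.spec.containers[*].name}'
```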
Next, we need to create the ServiceMonitor. We can set up a ServiceMonitor with azmonitoring.coreos.com/v1 using the below YAML.
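A minimal sketch of such a ServiceMonitor is below. The label selector and port name here are assumptions based on the Bitnami chart's default metrics service; verify them against your release (e.g. kubectl get svc -n redis --show-labels) and adjust as needed.

```yaml
apiVersion: azmonitoring.coreos.com/v1   # the AKS Managed Prometheus API group, not monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-cluster-metrics
  namespace: redis
spec:
  selector:
    matchLabels:
      # assumed labels of the chart's metrics service; confirm on your cluster
      app.kubernetes.io/name: redis-cluster
  endpoints:
    - port: http-metrics   # assumed metrics port name from the Bitnami chart
      interval: 30s
```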
Once the ServiceMonitor is deployed, the metric data will start to appear in Azure Managed Grafana via the Managed Prometheus data source. In future posts, let's explore how to create visualizations and alerts for the Redis cluster with the metrics obtained from the above setup.
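As a quick sanity check in Grafana Explore against the Managed Prometheus data source, a couple of queries using standard redis_exporter metric names (assuming the Bitnami sidecar uses redis_exporter) are:

```
# Returns 1 per scraped Redis node when the exporter can reach Redis
redis_up

# Memory used per node, a useful starting point for a first dashboard panel
redis_memory_used_bytes
```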