This article provides guidance on installing and configuring the Kemp Ingress Controller in an Azure Kubernetes Service (AKS) cluster.
Table of Contents
- Kemp Ingress Controller overview
- Service Mode vs Ingress Mode
- Pre-requisites
- Deploy Kubernetes cluster in Azure Kubernetes Service
- Run your Application in Azure Kubernetes Service
- Configure the LoadMaster for Kubernetes
- Link the Kubernetes Service to the LoadMaster Virtual Service
- Validate the deployment
Kemp Ingress Controller overview
The Kemp Ingress Controller is included in LoadMaster firmware v7.2.53 (Early Access version) and provides the following capabilities:
- Automated mapping of Kubernetes service object configuration to LoadMaster Virtual Services and Sub-Virtual Services
- Support for reading Kubernetes annotations to ingest metadata information about objects
- Capabilities for communication with a Kubernetes API server
The Kemp Ingress Controller (KIC) supports two modes of operation: Service Mode and Ingress Mode.
Service Mode
Service Mode is designed to help NetOps Teams provide AppDev Teams with self-service publishing of their applications using the Kubernetes API, without giving them access to, or control of, the underlying network infrastructure.
Let’s take a look at the Service Mode Architecture reference, shown in the diagram below:

The Service mode allows you to create a service on your Kubernetes Cluster that is not exposed externally.
You can tag the Kubernetes service with a Virtual Service ID in the LoadMaster.
The LoadMaster makes API calls to determine what pods are linked to the service and then it adds the appropriate Real Servers to the Virtual Service.
Simply put, the Kemp Ingress Controller in Service Mode maps a Virtual Service in the LoadMaster to the Pod IP addresses in your Kubernetes Cluster and enables auto-discovery of new Pods.
For example, if the number of pods scales up from three to five, the Virtual Service in the LoadMaster is updated automatically and will have five Real Servers.
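As a sketch of how that tagging might look (the annotation key shown here is an assumption for illustration; check the Kemp documentation for the exact key used by your firmware version), an internal Service carrying the Virtual Service ID could be defined as:

```yaml
# Minimal sketch: an internal (ClusterIP) Service tagged with a LoadMaster
# Virtual Service ID. The annotation key is hypothetical, for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    kemp.ax/vsid: "1"   # assumed key; the value is the Virtual Service Id
spec:
  ports:
  - port: 80
  selector:
    app: azure-vote-front
```

Because the Service has no external type, it stays internal to the cluster; the LoadMaster discovers the backing pods through the Kubernetes API and adds them as Real Servers.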
Now let’s take a look at the Ingress Mode.
Ingress Mode
In Kubernetes, Ingress is an API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination, and name-based virtual hosting.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource, as shown below:

The Ingress Mode in the Kemp Ingress Controller is designed to allow DevOps Teams to use the LoadMaster as an Ingress Controller for their Kubernetes Clusters in place of containerized ingress controller options such as HAProxy.
Ingress Mode allows you to define the ingress with an ingress controller. The LoadMaster creates one Virtual Service with multiple Sub-Virtual-Services (https://support.kemptechnologies.com/hc/en-us/articles/202138305-How-to-understand-Configure-Sub-Virtual-Services-SubVS-).
Each Sub-Virtual Service in the LoadMaster maps to the corresponding pod. This means you do not need a separate Virtual Service for each service in your Kubernetes Cluster.
The Virtual Service performs routing based on the path. If a new Pod is added to the Cluster, a new Real Server gets automatically added to the relevant Sub-Virtual-Service. The figure below highlights the Ingress Mode:
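To illustrate the path-based routing that the Virtual Service performs, a standard Kubernetes Ingress resource looks like the sketch below. The service names and paths here are illustrative, not part of this deployment:

```yaml
# Illustrative Ingress: routes /vote to one service and /results to another.
# Each path would map to a Sub-Virtual Service in the LoadMaster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /vote
        pathType: Prefix
        backend:
          service:
            name: vote-service
            port:
              number: 80
      - path: /results
        pathType: Prefix
        backend:
          service:
            name: results-service
            port:
              number: 80
```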

The table below summarizes the use cases and pros/cons of the Kemp Ingress Controller Service Mode and Ingress Mode:
| Operation Mode | When to use | Pros | Cons |
| --- | --- | --- | --- |
| Service Mode | NetOps Teams without knowledge of, or access to, the Kubernetes infrastructure; NetOps Teams who use a different deployment and configuration management toolchain than their AppDev Teams; Managed Service Providers operating shared Kubernetes and network infrastructure while their customers self-manage their Kubernetes-based applications | Efficient routing of traffic to pods; eliminates unnecessary East-West traffic; access is restricted | May need routes to pods defined; the pod network must not overlap with network IPs; nodes must be on the same subnet as the LoadMaster; single Virtual Service per service |
| Ingress Mode | Cross-functional DevOps Teams who own and operate both the network and Kubernetes infrastructure in addition to the applications; NetOps Teams with knowledge of and access to the Kubernetes infrastructure; NetOps Teams who use the same deployment toolchain and processes as their AppDev Teams; Managed Service Providers operating shared Kubernetes and network infrastructure in addition to managing their customers' Kubernetes-based applications | Efficient routing of traffic to pods; eliminates unnecessary East-West traffic; single Virtual Service for multiple services; no need for double load balancing; Kubernetes endpoints can be administered along with monolithic load-balanced services | May need routes to pods defined; pod network must not overlap with network IP addresses; nodes must be on the same subnet as the LoadMaster |
Pre-requisites to Deploy Kemp Ingress Controller in an Azure Kubernetes Service:
- An active Azure subscription
- A Contributor role or a service principal
- LoadMaster firmware v7.2.53 or later
Deploy an Azure Kubernetes Service (AKS) Cluster using an ARM template
The first step is to deploy a Kubernetes Cluster in Azure. We will deploy the instance of the Azure Kubernetes Service through an Azure Resource Manager Template.
Log in to your Azure subscription using the Azure Portal (https://portal.azure.com). Then open a new Cloud Shell as shown below; alternatively, you can go to shell.azure.com:

Let’s create an SSH key pair using RSA encryption and a bit length of 2048 with the command below:
ssh-keygen -t rsa -b 2048
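If you would rather not overwrite Cloud Shell's default key, a minimal sketch (the paths are illustrative) that generates the pair non-interactively and prints the public key the ARM template asks for:

```shell
# Generate a 2048-bit RSA key pair non-interactively into a temp directory
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "$tmpdir/id_rsa" -N "" -q
# The public key (id_rsa.pub) is the value to paste into the template
cat "$tmpdir/id_rsa.pub"
```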

Now we are going to deploy the Kubernetes Cluster using an Azure Resource Manager Template. Click on the button below to sign in to Azure and deploy the template.
Provide your own values for the following template parameters:

Then click on Review + Create.

If you’re working locally, ensure you have kubectl installed; you can install it with the command below. If you’re using Azure Cloud Shell, kubectl is already installed.
az aks install-cli
Then we will get the credentials and configure the Kubernetes CLI with the command below:
az aks get-credentials --resource-group yourResourceGroup --name yourAksCluster

Run your Application in Azure Kubernetes Service
In Azure Cloud Shell, type “code” to open the editor, copy in the definition below, and save the file as azure-vote.yaml. You can also download this file from: https://github.com/daveRendon/kemp/tree/master/Labs/Kemp-kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
Now let’s run the application using the kubectl apply command, and specify the name of your YAML manifest:
kubectl apply -f azure-vote.yaml
A Kubernetes service then exposes the application front end to the internet. This process can take a few minutes. You can use the command below to monitor the progress:
kubectl get service azure-vote-front --watch

You can access the application using the external IP of the Service. Note that it is accessible on port 80.
Configure the Kemp Ingress Controller for Kubernetes
Deploy the LoadMaster in Azure.
The next step is to deploy the LoadMaster virtual machine in your Azure subscription, in the same virtual network where your Azure Kubernetes Service is deployed.
You can follow this video to deploy either a single LoadMaster or an HA-Pair in Azure: https://kemptechnologies.com/videos/setup-azure-load-balancing-high-availability/
Configure the LoadMaster
Once the LoadMaster in Azure is provisioned, you can access the WUI through the Public IP Address https://Your-LoadMaster-IP:8443 to install the Kemp Ingress Controller.
After getting the license for your LoadMaster (you can get it online or offline – https://licensing.kemp.ax/offline ), go to Virtual Services > Kubernetes Settings, then click Install.

Wait for the installation to complete and click OK on the confirmation message. Then we need to reboot the LoadMaster.
Go to System Configuration > System Administration > System Reboot > Reboot.
After rebooting, the menu option changes to Kubernetes Settings and you can use this screen to link the LoadMaster with Kubernetes.

Connect the LoadMaster with Kubernetes
Now we will need to allow the LoadMaster Kemp Ingress Controller to communicate with the Kubernetes Cluster.
Go to Azure Cloud Shell and locate your Kube config file. You will find this file at /home/Your-Name/.kube/config.

Download the file using the “Download File” option as shown below:

Now go back to the LoadMaster, Virtual Services > Kubernetes Settings.
Then click “Choose File” and upload the Kube config file you downloaded from Azure Cloud Shell, then click Install as shown below:

Once the Kube config file is successfully installed, you should see some information in the Contexts section as shown below:

Select the relevant Operations Mode. In this case, we will select Service Mode, and then select the Namespace to Watch.
Configure Service Mode
Configure a LoadMaster Virtual Service
In the LoadMaster UI, go to Virtual Services > Add New. Azure populates the “Virtual Address”; ensure the Port is set to 80, provide the Service Name, and then click “Add this Virtual Service”.

Now expand the Real Servers section, and ensure the Real Server Check Method is set to HTTP Protocol, and select GET as the HTTP Method.

Take note of the Virtual Service Id number located at the very top of the page:

The Virtual Service Id number is available at the top of the Virtual Service modify screen. You need this Id to connect your Kubernetes service to the LoadMaster Virtual Service.
Link the Service to the LoadMaster Virtual Service
Now we need to link an existing service in Kubernetes to the LoadMaster Virtual Service:
Go to Azure Cloud Shell, create a new YAML file, and save it as shown below:
https://github.com/daveRendon/kemp/blob/master/Labs/Kemp-kubernetes/kemp-service.yaml

Now run the command below in Azure Cloud Shell:
kubectl apply -f kemp-service.yaml
Now go to the LoadMaster VM in Azure and create a new Inbound Rule in the Network Security Group to allow traffic through port 80:

Go back to your LoadMaster and you should be able to see that the Kemp Ingress Controller now reflects the application in the Virtual Service:

Verify the access to the application.
To validate the configuration of the Kemp Ingress Controller, go to Virtual Services > View/Modify Services; you should see a new Real Server listed as shown below:

To verify the configuration is working properly, you can access the pod through the LoadMaster IP on port 80 (http://Your-LoadMaster-IP:80). You should see the Azure Vote App:

In this article, we reviewed how you can leverage the Kemp Ingress Controller to route traffic to multiple services in a Kubernetes Cluster in Azure, including the configuration of the Azure Kubernetes Service, the deployment of the LoadMaster, and the integration with the application running in the Kubernetes Cluster.