Authorized Source IP for OpenShift Project Identification
Avi Vantage can securely identify OpenShift projects using source IP addresses for traffic initiated from within the OpenShift cluster to outside applications.
Use Case For Authorized Source IP
In some deployments, traffic must be identified by its source IP address so that it can be treated differently per application. For example, in DMZ deployments, firewall, security, visibility, or other requirements may mandate validating clients before passing their traffic on to an application. Such deployments use the source IP to validate the client.
Traffic initiated from within OpenShift clusters to outside applications is masqueraded. The actual source of this traffic is lost to the remote application.
As illustrated below, source IP 10.10.10.10 securely identifies Project Green and source IP 10.10.10.11 securely identifies Project Blue.

Figure 1. An OpenShift cluster in a blue-green deployment.
Avi Vantage network security policies prevent pods belonging to projects other than Green from using source IP 10.10.10.10; the remote application or firewall can therefore securely identify Project Green by the source IP 10.10.10.10.
Configuring an Authorized Source IP Instance
Prerequisites
- For security reasons, Avi Vantage should be providing east-west services for the cluster. Refer to this section within the OpenShift installation guide to learn how this should be configured.
Note: There is an option to not use Avi for east-west services and instead use kube-proxy, with Avi handling egress-pod creation. In that case, Avi will synchronize the egress service created in K8S/OpenShift and create the egress pod for this service. Any access to the cluster IP for the egress service will hit the egress pod created by Avi without being proxied by the Avi SE. In short, Avi will manage the pods for the egress services and kube-proxy will provide load balancing to the egress pod via its egress service’s cluster IP. However, any isolation of namespaces will have to be done by a separate NetworkPolicy object in K8S.
- secure_egress_mode must be enabled in the Avi OpenShift cloud configuration, as in the following Avi CLI session:

> oshiftk8s_configuration
oshiftk8s_configuration> secure_egress_mode
Overwriting the previously entered value for secure_egress_mode
oshiftk8s_configuration> save
> save
- Authentication credentials for access to the OpenShift cluster must have cluster-admin privileges (i.e., they should be able to create SecurityContextConstraints and ServiceAccounts in all projects). Certificates or user-account tokens with such privileges are required to enable this feature.
- Avi Vantage needs credentials bound to a cluster role whose privileges are, at a minimum, as shown below:
apiVersion: v1
kind: ClusterRole
metadata:
  name: avirole
rules:
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - pods
  - replicationcontrollers
  - secrets
  - securitycontextconstraints
  - serviceaccounts
  - services
  verbs:
  - '*'
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - routes/status
  verbs:
  - patch
  - update
- apiGroups:
  - extensions
  attributeRestrictions: null
  resources:
  - daemonsets
  - ingresses
  verbs:
  - create
  - delete
  - get
  - list
  - update
  - watch
- apiGroups:
  - apps
  attributeRestrictions: null
  resources:
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - update
  - watch
Workflow Overview
The user is expected to create one egress service per authorized source IP; to authorize multiple source IPs, create the same number of egress services within OpenShift. Avi Vantage creates a ServiceAccount for every project in OpenShift and adds it to a SecurityContextConstraint so that pods can be created in privileged mode. The following code samples depict the order of configuration when a new project is created in OpenShift.
# oc describe scc avivantage-scc-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Name:                         avivantage-scc-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Priority:                     <none>
Access:
  Users:                      system:serviceaccount:default:avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
  Groups:                     <none>
Settings:
  Allow Privileged:           true
  Default Add Capabilities:   <none>
  Required Drop Capabilities: <none>
  Allowed Capabilities:       <none>
  Allowed Volume Types:       *
  Allow Host Network:         true
  Allow Host Ports:           true
  Allow Host PID:             false
  Allow Host IPC:             false
  Read Only Root Filesystem:  false
  Run As User Strategy: RunAsAny
    UID:                      <none>
    UID Range Min:            <none>
    UID Range Max:            <none>
  SELinux Context Strategy: RunAsAny
    User:                     <none>
    Role:                     <none>
    Type:                     <none>
    Level:                    <none>
  FSGroup Strategy: RunAsAny
    Ranges:                   <none>
  Supplemental Groups Strategy: RunAsAny
    Ranges:                   <none>

# oc describe serviceaccount avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Name:               avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Namespace:          default
Labels:             <none>
Image pull secrets: avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-dockercfg-2j07a
Mountable secrets:  avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-7huln
                    avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-dockercfg-2j07a
Tokens:             avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-7huln
                    avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-zxi5t
Configuring the egress pod involves creating a secure service with the necessary parameters provided in annotations. Avi Vantage uses these annotations (in the order specified below) for the following three purposes:
1. Allocating an IP address from the host network in the OpenShift cluster, as determined by the north-south IPAM configured in the Avi Vantage OpenShift cloud. This IP is used as the EGRESS_SOURCE IP for the egress pod (explained later).
2. Creating an egress ReplicationController with exactly one replica and the right parameters, as picked up from the annotations below.
3. Updating the service selector of the secure service to point to the egress pod created in step 2.
Creating a Secure Egress Service
Service definition for a secure east-west service (secure-egress-service.json):
Note: networksecuritypolicy is optional. If using the Avi egress solution with kube-proxy, no avi_proxy label is required in the sample configuration below for the egress service.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "secure-egress-service",
    "labels": {
      "svc": "secure-egress-service"
    },
    "annotations": {
      "avi_proxy": "{\"networksecuritypolicy\": {\"rules\": [{\"index\": 1000, \"enable\": true, \"name\": \"allowtenant\", \"action\": \"NETWORK_SECURITY_POLICY_ACTION_TYPE_ALLOW\", \"match\": {\"microservice\": {\"match_criteria\": \"IS_IN\", \"group_ref\": \"/api/microservicegroup/?name=default-avi-microservicegroup\"}}}, {\"index\": 2000, \"enable\": true, \"name\": \"defaultdeny\", \"action\": \"NETWORK_SECURITY_POLICY_ACTION_TYPE_DENY\", \"match\": {\"client_ip\": {\"match_criteria\": \"IS_IN\", \"prefixes\": [{\"ip_addr\": {\"type\": \"V4\", \"addr\": \"0.0.0.0\"}, \"mask\": 0}]}}}]}}",
      "egress_pod": "{\"destination_ip\": \"10.10.10.200\"}"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "foo",
        "port": 80
      }
    ],
    "type": "LoadBalancer"
  }
}
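The annotation values above are JSON documents encoded inside JSON strings, which is easy to get wrong by hand (every escaped quote and the colon after each key must survive). As an illustrative sketch, assuming only the annotation shape shown above (the helper name is ours, not an Avi API), such values can be generated with json.dumps so the escaping is always valid:

```python
import json

# Hypothetical helper (not part of Avi): build the "egress_pod" annotation
# value for a secure egress service. The value must itself be a JSON string.
def egress_pod_annotation(destination_ip, hm_port=None):
    body = {"destination_ip": destination_ip}
    if hm_port is not None:
        body["hm_port"] = str(hm_port)  # the samples carry the port as a string
    return json.dumps(body)

annotations = {"egress_pod": egress_pod_annotation("10.10.10.200")}

# The generated value parses back to the intended structure.
assert json.loads(annotations["egress_pod"]) == {"destination_ip": "10.10.10.200"}
print(annotations["egress_pod"])  # {"destination_ip": "10.10.10.200"}
```

The same approach applies to the avi_proxy value: build the networksecuritypolicy as a plain dict and serialize it into the annotation with json.dumps.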
egress_pod is the annotation used to create the corresponding egress pod. destination_ip is the destination IP address of the application outside the cluster. Avi Vantage automatically creates a pod named secure-egress-service-avi-egress-pod, where the suffix avi-egress-pod is appended to the secure service name.
Note: The selector is deliberately omitted from the secure service definition above; Avi Vantage updates the secure service's configuration once the egress pod is created successfully.
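The naming convention can be stated as a one-line rule (the helper below is purely illustrative; the function name is ours):

```python
# Avi names the egress pod by appending "avi-egress-pod" to the
# secure service name (illustrative helper, not Avi code).
def egress_pod_name(service_name: str) -> str:
    return service_name + "-avi-egress-pod"

print(egress_pod_name("secure-egress-service"))
# secure-egress-service-avi-egress-pod
```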
CUSTOMIZATION — Health monitor port
"egress_pod": "{\"hm_port\": \"1000\", \"destination_ip\": \"10.10.10.200\"}"
CUSTOMIZATION — Docker image
"egress_pod": "{\"image\": \"private-repo:5000/avi-egress-router\", \"destination_ip\": \"10.10.10.200\"}"
CUSTOMIZATION — Node selection
There are use cases where only certain nodes in the OpenShift/Kubernetes cluster have access to the north-south external network, and egress pods therefore need to be restricted to those nodes. This can be achieved by specifying the nodeSelector attribute, as described in Assigning Pods to Nodes, and adding it to the egress pod annotation:
"egress_pod": "{\"nodeSelector\": {\"external-accessible-node\": \"true\"}, \"destination_ip\": \"10.10.10.200\"}"
CUSTOMIZATION — Hostnames for destinations
In some cases, the hostname of the external service is desired instead of an IPv4 address. This can be achieved with this annotation:
"egress_pod": "{\"destination_ip\": \"ad-service.mycompany.com\"}"
Note: The FQDN must be resolvable from within the pod context in the OpenShift cluster, and resolution is a one-time configuration. If the IPv4 address changes after the pod has come up, the change is not reflected in the egress pod configuration (the pod must be killed for the new resolution to take effect).
CUSTOMIZATION — Multiple destinations
In cases where the source IP pool is sparse, or a reduced set of firewall rules for the source IP is desired, a single source IP (or a set of source IPs) can be reused for multiple services by specifying this annotation:
"egress_pod": "{\"destinations\": {\"dest-service1.mycompany.com\": {\"8080\": \"80\"}, \"10.10.20.20\": {\"9080\": \"80\"}}}"
This creates an egress service in Avi with service ports 8080 and 9080. The user can reach dest-service1 via VIP:8080 and 10.10.20.20 via VIP:9080.
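To make that port fan-out explicit, the following sketch (assuming exactly the annotation shape shown above; the parser is ours, not Avi code) maps each VIP service port to its destination and destination port:

```python
import json

# The "destinations" annotation from above, built as a dict and then
# serialized, as it would be carried in the service definition.
annotation = json.dumps({
    "destinations": {
        "dest-service1.mycompany.com": {"8080": "80"},
        "10.10.20.20": {"9080": "80"},
    }
})

def vip_port_map(annotation_value):
    """Return {vip_port: (destination, destination_port)}."""
    dests = json.loads(annotation_value)["destinations"]
    mapping = {}
    for dest, ports in dests.items():
        for vip_port, dest_port in ports.items():
            mapping[int(vip_port)] = (dest, int(dest_port))
    return mapping

print(vip_port_map(annotation))
# {8080: ('dest-service1.mycompany.com', 80), 9080: ('10.10.20.20', 80)}
```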
Avi Vantage automatically creates and maintains a microservice group per project that reflects all the current pods in that project. In the networksecuritypolicy above, the first rule allows the microservice group default-avi-microservicegroup, which covers all pods in the default project. The second rule denies all other pods access to the service. The net effect is that only pods in the default project can access this service.
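The two rules behave as an index-ordered, first-match policy. The sketch below models only those semantics; it is not Avi's implementation, and a simple project check stands in for the microservice-group membership lookup:

```python
# First-match evaluation over index-ordered rules, mirroring the policy
# above: rule 1000 allows the project's microservice group, rule 2000
# denies all remaining clients (0.0.0.0/0).
RULES = [
    {"index": 1000, "action": "ALLOW",
     "match": lambda pod: pod["project"] == "default"},  # stand-in for IS_IN group
    {"index": 2000, "action": "DENY",
     "match": lambda pod: True},                         # matches any client IP
]

def evaluate(pod, rules):
    for rule in sorted(rules, key=lambda r: r["index"]):
        if rule["match"](pod):
            return rule["action"]
    return "DENY"  # nothing matched: treat as denied

print(evaluate({"project": "default"}, RULES))  # ALLOW
print(evaluate({"project": "blue"}, RULES))     # DENY
```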
Creating the Service Using OpenShift Client
# oc create -f secure-egress-service.json
Post Secure Service Creation
Creating a secure service will trigger the following actions from the Avi Controller:
Action 1
An egress ReplicationController (a StatefulSet if HA is enabled; refer to the High Availability section) named service-name-avi-egress-pod (in this case, secure-egress-service-avi-egress-pod) is created with the configuration below.
Comments inserted into the below code sample appear as [NOTE: comment text].
apiVersion: v1
kind: ReplicationController [NOTE: StatefulSet when HA is enabled]
metadata:
  ...
  labels:
    name: secure-egress-service-avi-egress-pod
  name: secure-egress-service-avi-egress-pod
  namespace: default
  ...
spec:
  replicas: 1 [NOTE: 2 for StatefulSet]
  selector:
    name: secure-egress-service-avi-egress-pod
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: secure-egress-service-avi-egress-pod
      name: secure-egress-service-avi-egress-pod
    spec:
      containers:
      - env:
        - name: EGRESS_SOURCE
          value: 10.70.112.155 [NOTE: source IP(s) allocated by Avi Vantage IPAM; two comma-separated source IPs in the case of StatefulSets]
        - name: EGRESS_DESTINATION
          value: 10.10.24.85 [NOTE: "destination_ip"/"destinations" from the annotation]
        - name: BRIDGE_IP_ADDR
          value: 172.18.0.1
        - name: BRIDGE_NETMASK
          value: "16"
        - name: TCP_HM_PORT
          value: "4" [NOTE: "hm_port" from the annotation]
        image: avinetworks/avi-egress-router [NOTE: "image" from the annotation; defaults to "avinetworks/avi-egress-router"]
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 3
          successThreshold: 1
          tcpSocket:
            port: 4
          timeoutSeconds: 1
        ...
      serviceAccount: avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
      serviceAccountName: avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5 [NOTE: service account created by Avi for every project]
Pod from the RC/StatefulSet:
# oc describe pod secure-egress-service-avi-egress-pod-s1uhm
Name: secure-egress-service-avi-egress-pod-s1uhm
Namespace: default
Security Policy: avivantage-scc-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Node: 10.70.112.61/10.70.112.61
...
Labels: name=secure-egress-service-avi-egress-pod
Status: Running
IP: 10.129.0.229
Containers:
  avi-egress-router:
    Container ID: docker://0bce263bcb1f7c0afacca23c999f0b154d416bd3c9fdbc3d0774dd868a95be7d
    Image:        avinetworks/avi-egress-router
    Image ID:     docker-pullable://docker.io/avinetworks/avi-egress-router@sha256:57907a14f6164167ae71866116c0a1cf7a73cc7070de5694f5184a63958f0883
    Ports:
    Liveness:     tcp-socket :4 delay=10s timeout=1s period=3s #success=1
    ...
    Environment Variables:
      EGRESS_SOURCE:      10.70.112.155
      EGRESS_DESTINATION: 10.10.24.85
      BRIDGE_IP_ADDR:     172.18.0.1
      BRIDGE_NETMASK:     16
      TCP_HM_PORT:        4
...
- EGRESS_SOURCE uniquely identifies the project. The IP address is auto-allocated by Avi from the network in the north-south IPAM profile. In HA mode, this carries both source IPs.
- EGRESS_DESTINATION is the destination IP address(es) or hostname(s) of the applications outside the cluster.
- BRIDGE_IP_ADDR is the IP address of the Avi bridge; the default is 172.18.0.1. This address is configurable via the avi_bridge_subnet field in the OpenShift cloud object.
- BRIDGE_NETMASK is the netmask bits for the Avi bridge; the default is 16.
- TCP_HM_PORT is the port used for TCP health monitoring; it defaults to port 4 if not set. If set to a different value, change the port field in the livenessProbe section above to match.
The Avi egress pod runs a TCP listener on port TCP_HM_PORT for health monitoring, and the pod is configured with a corresponding livenessProbe.
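Those probe semantics can be illustrated with a minimal stand-in for the listener the egress pod runs on TCP_HM_PORT. This is a sketch, not the avinetworks/avi-egress-router image: a tcpSocket liveness probe passes as long as the TCP connection is accepted.

```python
import socket
import threading

def serve_health(port, ready):
    """Accept one TCP connection and close it; a tcpSocket liveness
    probe only checks that connect() succeeds."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.close()
    srv.close()

def tcp_probe(port, timeout=2.0):
    """Simulate the kubelet's tcpSocket check: healthy iff connect succeeds."""
    try:
        socket.create_connection(("127.0.0.1", port), timeout=timeout).close()
        return True
    except OSError:
        return False

# The default TCP_HM_PORT of 4 needs root; an unprivileged port is used here.
ready = threading.Event()
threading.Thread(target=serve_health, args=(10004, ready), daemon=True).start()
ready.wait()
print(tcp_probe(10004))  # True
```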
Action 2
The secure-egress-service service's selector is updated to point to the newly created egress pod, as shown below:
# oc describe service secure-egress-service
Name: secure-egress-service
Namespace: default
Labels: svc=secure-egress-service
Selector: name=secure-egress-service-avi-egress-pod
Type: LoadBalancer
IP: 172.30.212.151
External IPs: 172.46.161.187
LoadBalancer Ingress: 172.46.161.187
Port: foo 80/TCP
NodePort: foo 30289/TCP
Endpoints: 10.129.0.229:80
Session Affinity: None
No events.
Action 3
In Avi Vantage, the secure service should be UP, with the egress pod as its one pool member, as shown below:
# oc get pod secure-egress-service-avi-egress-pod-s1uhm -o wide
NAME READY STATUS RESTARTS AGE IP NODE
secure-egress-service-avi-egress-pod-s1uhm 1/1 Running 0 5h 10.129.0.229 10.70.112.61
Deleting the egress pod
The egress pod's lifecycle is tied to that of the secure service. When the secure egress service is deleted, Avi Vantage scales the ReplicationController down to 0 replicas and then deletes the ReplicationController for the egress pod.
Deleting ServiceAccounts and SecurityContextConstraints
Service accounts created by Avi Vantage for every project are automatically deleted when the project is deleted in OpenShift or when the OpenShift configuration is removed from Avi Vantage. The SecurityContextConstraint is removed from OpenShift only when the associated cloud configuration is removed from Avi Vantage.
Service Usage
Pods in the default project can access the external application using the name secure-egress-service.default.sub-domain.
- Avi DNS resolves secure-egress-service.default.sub-domain to the service virtual IP, on port 80 or any other port specified in the service definition.
- Access to the virtual IP is proxied to the secure egress Avi pod by the local Avi Service Engine.
- The secure egress Avi pod source-NATs the traffic (using the EGRESS_SOURCE IP address) to the remote application, with a destination IP address of EGRESS_DESTINATION.
- The remote application sees traffic with a source IP address of EGRESS_SOURCE and a destination IP address of EGRESS_DESTINATION on port 80.
Access Patterns
Source | Destination | Comments
---|---|---
Pod in "default" project | Service virtual IP | Allowed
Pod in "default" project | Secure egress Avi pod | Allowed
Pod in a different project | Service virtual IP | Denied by Avi
Pod in a different project | Secure egress Avi pod | Denied by OpenShift SDN
High Availability
When a secure Avi egress pod restarts or its host goes down, OpenShift starts another instance of the pod, and the service virtual IP always proxies to the right pod IP address. If faster recovery is desired, Avi can spin up an additional pod for the egress service (with its own source IP) via this annotation:
"egress_pod": "{\"ha\": true, \"destinations\": {\"dest-service1.mycompany.com\": {\"8080\": \"80\"}, \"10.10.20.20\": {\"9080\": \"80\"}}}"