This is a cosmetic enhancement. The functionality is already implemented.
By default, OpenShift's ingress controller (aka OpenShift Router, based on HAProxy) is deployed using neither NodePort nor ClusterIP but hostNetwork. This can be checked with:
oc -n openshift-ingress get pods router-default-7ffcd9d86b-r4fvr -o=jsonpath='{.spec.hostNetwork}'
This is configured in the operator as shown in [1] below.
We don't provide instructions on how these deployments should be configured.
I found that using clusterIP works as desired: it uses the IPs of the nodes where the router pods are deployed:
[cloud-user@ocp-provisioner routes-bigip]$ oc -n openshift-ingress get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
router-default-7ffcd9d86b-r4fvr 1/1 Running 8 (34m ago) 68d 10.1.10.7 master-2.ocp.f5-udf.com <none> <none>
router-default-7ffcd9d86b-rcptz 1/1 Running 8 (34m ago) 68d 10.1.10.8 master-3.ocp.f5-udf.com <none> <none>
[cloud-user@ocp-provisioner routes-bigip]$ oc -n openshift-ingress get ep router-default-route-a -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2024-05-15T07:45:14Z"
  creationTimestamp: "2024-05-15T07:45:14Z"
  name: router-default-route-a
  namespace: openshift-ingress
  resourceVersion: "7878697"
  uid: 10ea9494-0aeb-49c4-a90b-1bfd25d75128
subsets:
- addresses:
  - ip: 10.1.10.7
    nodeName: master-2.ocp.f5-udf.com
    targetRef:
      kind: Pod
      name: router-default-7ffcd9d86b-r4fvr
      namespace: openshift-ingress
      uid: 8da16f47-a967-49b6-9509-26139dc40397
  - ip: 10.1.10.8
    nodeName: master-3.ocp.f5-udf.com
    targetRef:
      kind: Pod
      name: router-default-7ffcd9d86b-rcptz
      namespace: openshift-ingress
      uid: 6b65aa98-1335-4335-bc23-1d2c380e15bc
  ports:
  - name: https
    port: 443
    protocol: TCP
  - name: http
    port: 80
    protocol: TCP
[cloud-user@ocp-provisioner routes-bigip]$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1.ocp.f5-udf.com Ready control-plane,master,worker 333d v1.25.10+3fe2906 10.1.10.6 <none> Red Hat Enterprise Linux CoreOS 412.86.202305310359-0 (Ootpa) 4.18.0-372.58.1.el8_6.x86_64 cri-o://1.25.3-5.rhaos4.12.git44a2cb2.el8
master-2.ocp.f5-udf.com Ready control-plane,master,worker 333d v1.25.10+3fe2906 10.1.10.7 <none> Red Hat Enterprise Linux CoreOS 412.86.202305310359-0 (Ootpa) 4.18.0-372.58.1.el8_6.x86_64 cri-o://1.25.3-5.rhaos4.12.git44a2cb2.el8
master-3.ocp.f5-udf.com Ready control-plane,master,worker 333d v1.25.10+3fe2906 10.1.10.8 <none> Red Hat Enterprise Linux CoreOS 412.86.202305310359-0 (Ootpa) 4.18.0-372.58.1.el8_6.x86_64 cri-o://1.25.3-5.rhaos4.12.git44a2cb2.el8
worker-1.ocp.f5-udf.com Ready worker 333d v1.25.10+3fe2906 10.1.10.9 <none> Red Hat Enterprise Linux CoreOS 412.86.202305310359-0 (Ootpa) 4.18.0-372.58.1.el8_6.x86_64 cri-o://1.25.3-5.rhaos4.12.git44a2cb2.el8
worker-2.ocp.f5-udf.com Ready worker 333d v1.25.10+3fe2906 10.1.10.10 <none> Red Hat Enterprise Linux CoreOS 412.86.202305310359-0 (Ootpa) 4.18.0-372.58.1.el8_6.x86_64 cri-o://1.25.3-5.rhaos4.12.git44a2cb2.el8
Actual Problem
This is not documented.
Solution Proposed
This should be documented. Optionally, it might be worth considering whether an option pool-member-type=hostnetwork, as an alias of pool-member-type=cluster, is worth adding for clarity.
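On the documentation side, a minimal sketch of how a CIS (k8s-bigip-ctlr) deployment selects this mode today follows; only --pool-member-type is the option under discussion, while the BIG-IP URL, partition, and credentials path are placeholder values:

```yaml
# Excerpt of a k8s-bigip-ctlr Deployment spec (values are placeholders).
# With --pool-member-type=cluster and a hostNetwork router, the BIG-IP
# pool members become the node IPs shown in the Endpoints object above.
containers:
- name: k8s-bigip-ctlr
  image: f5networks/k8s-bigip-ctlr:latest
  args:
  - --bigip-url=https://192.0.2.10        # placeholder BIG-IP address
  - --bigip-partition=ocp                 # placeholder partition
  - --credentials-directory=/tmp/creds    # mounted credentials secret
  - --pool-member-type=cluster            # the mode this RFE asks to document
```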
A non-cosmetic reason to implement pool-member-type=hostnetwork is that, when using hostNetwork with the OVN CNI, we don't need to create static routes to the cluster network IPs, which might not even be directly connected. That is, the static routes might fail to be created.
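For context, a sketch of the CIS options involved in that static-route handling, assuming the current flag names from the CIS documentation (a hostnetwork mode would make these knobs unnecessary):

```yaml
# Excerpt: flags CIS needs today when pod-network IPs are the pool members
# on OVN. With hostNetwork pool members (node IPs), no routing setup is
# required because the nodes are directly reachable from the BIG-IP.
args:
- --pool-member-type=cluster
- --static-routing-mode=true      # ask CIS to program static routes on the BIG-IP
- --orchestration-cni=ovn-k8s     # tell CIS how to discover the pod subnets
```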
Title
RFE: add pool-member-type=hostnetwork or document
Description
This method is also used by other ingress controllers like NGINX+ IC. See for example the controller.hostNetwork variable in https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/
Additional context
[1]