Problem statement
I have a single cluster in EKS where the nodes span three Availability Zones. I want to avoid cross-AZ load balancing because it is extremely costly at my scale. I always want BIG-IP to send traffic to pods that are in the same AZ as the BIG-IP VE itself.
AWS Solution
I can use an NLB in AWS to get traffic into my cluster. But I have a very large amount of network traffic, so the AWS cost of cross-AZ traffic is my biggest pain point, followed by the cost of NLB throughput.
With NLB, I can have topology awareness, i.e., I can configure the NLB so that each load balancer node sends only to targets in its own AZ. For example, see this documentation: Cross-zone load balancing
[NLB] distributes traffic across the registered targets in its Availability Zone only.
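For illustration, this is roughly how that looks when the NLB is provisioned from Kubernetes. This is a minimal sketch, assuming the AWS Load Balancer Controller is installed; the service name and ports are hypothetical:

```yaml
# Sketch: a Service provisioned as an NLB with cross-zone load balancing
# explicitly disabled, so each NLB node forwards only to pod targets in
# its own Availability Zone.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-attributes: "load_balancing.cross_zone.enabled=false"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```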
So the NLB solution works for me, but the NLB charges themselves (i.e., NLB throughput, not cross-AZ traffic) are still prohibitive at high throughput levels. Also, I want to do some mTLS termination outside of the cluster, so I'd prefer to use BIG-IP.
BIG-IP Solution
CIS populates a pool on the BIG-IP with no AZ awareness: the pool contains members from every AZ, and the BIG-IP load balances across all of them. Cross-AZ traffic charges will be extremely high with a typical BIG-IP/CIS deployment.
One idea: instead of 1x service with pods in 3x AZs, I could have 3x services, with the pods in each pinned to a single AZ. Then I could have 3x VEs (standalone), each referencing a different K8s service. Potentially I could use the alternateBackends feature of CIS. But this feels like a poor workaround for topology awareness.
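As a sketch of that workaround, one of the three AZ-pinned Deployment/Service pairs might look like the following (zone, image, and names are hypothetical; the other two pairs would differ only in the zone value):

```yaml
# Pods pinned to a single AZ via nodeSelector on the standard zone label,
# plus a Service that selects only those pods. The VE in us-east-1a would
# reference only this Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-us-east-1a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      zone: us-east-1a
  template:
    metadata:
      labels:
        app: my-app
        zone: us-east-1a
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: us-east-1a   # keep these pods in one AZ
      containers:
        - name: my-app
          image: my-app:latest                    # illustrative image
          ports:
            - containerPort: 8443
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-us-east-1a
spec:
  selector:
    app: my-app
    zone: us-east-1a
  ports:
    - port: 443
      targetPort: 8443
```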
Solution Proposed
Could CIS be made topology-aware? I.e., could a BIG-IP be configured to prefer routing to pods that are in the same AZ as the VE itself?
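For context, the topology information CIS would need already exists in the Kubernetes EndpointSlice API once topology-aware routing is enabled on a Service (via the service.kubernetes.io/topology-mode annotation on recent Kubernetes versions). A trimmed, illustrative EndpointSlice; names, IPs, and zones are hypothetical:

```yaml
# Each endpoint carries its zone and, when hints are enabled, the zones
# it should serve. A topology-aware CIS could map these onto BIG-IP
# priority groups relative to the VE's own AZ.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-app-abc12
  labels:
    kubernetes.io/service-name: my-app
addressType: IPv4
endpoints:
  - addresses:
      - 10.0.1.15
    zone: us-east-1a
    hints:
      forZones:
        - name: us-east-1a   # route this endpoint to consumers in us-east-1a
ports:
  - name: https
    port: 8443
    protocol: TCP
```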
FYI, I wrote this article to show that node-label-selector can be used, especially when using a cloud provider that automatically labels nodes with AZ information. However, it would be nice for CIS to be natively aware of endpoint topology hints, because then CIS could populate BIG-IP pools with all pods but use a priority group to prefer the pods in the same AZ as the BIG-IP.
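A minimal sketch of the per-AZ approach from the article, assuming one CIS instance per BIG-IP VE: each CIS watches only the nodes in its own AZ via --node-label-selector, so the pools it builds contain only same-AZ members. The BIG-IP address, zone, and names are illustrative, and the node filtering applies to node-based (nodeport) pool members:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-us-east-1a
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr-us-east-1a
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr-us-east-1a
    spec:
      serviceAccountName: bigip-ctlr
      containers:
        - name: k8s-bigip-ctlr
          image: f5networks/k8s-bigip-ctlr:latest
          args:
            - --bigip-url=https://10.0.1.100        # the VE deployed in us-east-1a (illustrative)
            - --bigip-partition=kubernetes
            - --credentials-directory=/tmp/creds
            - --pool-member-type=nodeport
            # Only watch nodes in this AZ; EKS applies this label automatically.
            - --node-label-selector=topology.kubernetes.io/zone=us-east-1a
```

Native topology-hint support would remove the need to run one controller per AZ like this.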