
ingress-controller: update ingresses only if the load balancer is active #748


Open

wants to merge 1 commit into master from update-ingress-only-if-elb-is-ready-to-serve

Conversation

@madumalt (Collaborator) commented Jun 16, 2025

As per the existing behavior, once the stack update completes, the ingress controller immediately updates the status of all ingresses and routegroups to reference the new load balancer. This update may sometimes happen before the new load balancer has marked its targets (skipper-ingress) as healthy, leading to clients being routed to a load balancer that is not yet ready to serve traffic.

To improve this behavior we retrieve the ELBs via the AWS API and build the models including the state of each corresponding ELB. Then, before updating the ingresses/routegroups (the updateIngress func), we check whether the ELB is in the active state; if not, we skip updating the ingress/routegroup status with the ELB hostname.
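A minimal sketch of the resulting gate, using simplified stand-in types (LoadBalancer, Ingress, updateIngressStatus and the "active" literal are illustrative names, not the controller's actual model or API):

package controller

import (
    "context"
    "log"
)

// Simplified stand-ins for the controller's models (illustrative only).
type LoadBalancer struct {
    State   string // e.g. "active", "provisioning"
    DNSName string
}

type Ingress struct {
    Name string
    LB   *LoadBalancer
}

// updateIngressStatus stands in for the controller's updateIngress func and
// would patch the ingress/routegroup status with the ELB hostname.
func updateIngressStatus(ctx context.Context, ing Ingress, hostname string) error {
    return nil
}

// syncStatuses updates the status only for ingresses whose load balancer has
// reached the active state; others keep their previous status.
func syncStatuses(ctx context.Context, ingresses []Ingress) {
    for _, ing := range ingresses {
        if ing.LB == nil || ing.LB.State != "active" {
            // The ELB is still provisioning, so skip the update to avoid
            // routing clients to a load balancer that cannot serve traffic yet.
            continue
        }
        if err := updateIngressStatus(ctx, ing, ing.LB.DNSName); err != nil {
            log.Printf("failed to update ingress %s: %v", ing.Name, err)
        }
    }
}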

@madumalt force-pushed the update-ingress-only-if-elb-is-ready-to-serve branch from cfdc821 to 1fee90b on June 17, 2025 05:15
@madumalt (Collaborator, Author) commented:

👍

for _, stack := range stacks {
    if err := stack.Err(); err != nil {
        problems.Add("stack %s error: %w", stack.Name, err)
    }
    awsLB, err := awsAdapter.GetLoadBalancer(ctx, stack.Name)
Member commented:

Here we collect load balancers one by one for each stack. As I mentioned in #745 (comment), the DescribeLoadBalancers API allows querying up to 20 load balancers with a single call. Using it would reduce the number of AWS API calls by up to 20 times; for installations with fewer than 20 load balancers it would be a single call.
So I am wondering: should we fetch the load balancers for the stacks in batches, as sketched below?
@szuecs WDYT?
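For illustration, a sketch of such batching with the AWS SDK for Go v2, assuming the load balancer ARNs are already known (function and variable names are illustrative; 20 is the documented per-call maximum for DescribeLoadBalancers):

package controller

import (
    "context"

    elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
    elbv2types "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

// describeLoadBalancersBatched fetches load balancers in batches of up to 20
// ARNs per DescribeLoadBalancers call instead of one call per stack.
func describeLoadBalancersBatched(ctx context.Context, svc *elbv2.Client, arns []string) ([]elbv2types.LoadBalancer, error) {
    const batchSize = 20
    var result []elbv2types.LoadBalancer
    for start := 0; start < len(arns); start += batchSize {
        end := start + batchSize
        if end > len(arns) {
            end = len(arns)
        }
        resp, err := svc.DescribeLoadBalancers(ctx, &elbv2.DescribeLoadBalancersInput{
            LoadBalancerArns: arns[start:end],
        })
        if err != nil {
            return nil, err
        }
        result = append(result, resp.LoadBalancers...)
    }
    return result, nil
}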

var nextToken *string

for {
    resp, err := svc.ListStackResources(ctx, &cloudformation.ListStackResourcesInput{
@AlexanderYastrebov (Member) commented Jun 17, 2025:

Here we do another API call (or maybe even several) to get the load balancer ARN. I think we could store it in the stack outputs like we do for LoadBalancerDNSName and the target group ARNs, so that we have it right after FindManagedStacks without extra API calls.
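For illustration, assuming a hypothetical LoadBalancerARN output key were added to the template next to LoadBalancerDNSName, reading it back from the stack outputs could look roughly like this:

package controller

import (
    cftypes "github.com/aws/aws-sdk-go-v2/service/cloudformation/types"
)

// loadBalancerARNOutputKey is a hypothetical output key that the template
// would have to export alongside LoadBalancerDNSName.
const loadBalancerARNOutputKey = "LoadBalancerARN"

// loadBalancerARNFromOutputs extracts the load balancer ARN from the stack
// outputs already returned by DescribeStacks, avoiding extra API calls.
func loadBalancerARNFromOutputs(outputs []cftypes.Output) string {
    for _, out := range outputs {
        if out.OutputKey != nil && *out.OutputKey == loadBalancerARNOutputKey && out.OutputValue != nil {
            return *out.OutputValue
        }
    }
    return ""
}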

@AlexanderYastrebov (Member) commented Jun 17, 2025:

If we still want to use an API call, then we should use describe-stack-resource instead of listing all resources and filtering on the client side:

$ aws cloudformation describe-stack-resource --stack-name=$STACK_NAME --logical-resource-id LB
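The equivalent call with the AWS SDK for Go v2 could look roughly like this (a sketch; the physical resource ID of the LB logical resource is the load balancer ARN):

package controller

import (
    "context"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/cloudformation"
)

// loadBalancerARN resolves the ARN of the stack's "LB" logical resource with
// a single DescribeStackResource call, instead of paging through
// ListStackResources and filtering on the client side.
func loadBalancerARN(ctx context.Context, svc *cloudformation.Client, stackName string) (string, error) {
    resp, err := svc.DescribeStackResource(ctx, &cloudformation.DescribeStackResourceInput{
        StackName:         aws.String(stackName),
        LogicalResourceId: aws.String("LB"),
    })
    if err != nil {
        return "", err
    }
    if resp.StackResourceDetail == nil {
        return "", fmt.Errorf("no resource detail returned for stack %s", stackName)
    }
    return aws.ToString(resp.StackResourceDetail.PhysicalResourceId), nil
}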
