feat: add support for identity service server and updating identity service #1385

Conversation
Hi @afarbos. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I ran locally:
force-pushed from 97b4416 to 6bda6de
/ok-to-test
Thanks @afarbos. Could you please rebase your commits to squash them into logical changes? This helps maintain a clean history and simplifies reverts, if needed.
force-pushed from 6bda6de to 3cbacd9
sounds good, done! 3 commits:
force-pushed from 3cbacd9 to 60fdfc2
```diff
-} else if updateErr := s.updateCAPIKubeconfigSecret(ctx, configSecret); updateErr != nil {
-	return fmt.Errorf("updating kubeconfig secret: %w", err)
+} else if kubeConfig, err = s.updateCAPIKubeconfigSecret(ctx, configSecret); err != nil {
+	return nil, fmt.Errorf("updating kubeconfig secret: %w", err)
```
note: this also fixes a bug here — the wrapped variable should have been `updateErr`; now everything uses `err`.
@salasberryfin friendly bump
force-pushed from 60fdfc2 to a3b41e7
rebased
force-pushed from a3b41e7 to 3bad5b6
force-pushed from 6dab332 to f702527
force-pushed from f702527 to f1f8dc7
rebased
Hey @afarbos, thanks for the contributions (and the patience), and sorry for the delay in responding. The only issue I see with this is that we're not actively testing managed cluster functionality, so would it be okay if we wait for the kubernetes/k8s.io#7665 request to be resolved, so we can actually test that GKE provisioning is functioning properly?
Yes, this was started from my other issue #1371 😭 |
force-pushed from f1f8dc7 to 6cf8e76
@salasberryfin I think this is now resolved, thanks to you! Thank you!
/test pull-cluster-api-provider-gcp-test
You will need to rebase this to fix the
force-pushed from 6cf8e76 to e03c99a
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: afarbos. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
ack, rebased. 👀
force-pushed from e03c99a to c8c780a
rebased again.
```diff
@@ -168,6 +173,11 @@ func (s *Service) Reconcile(ctx context.Context) (ctrl.Result, error) {
 		return ctrl.Result{}, err
 	}
 
+	err = s.reconcileIdentityService(ctx, kubeConfig, &log)
```
Nit: I always recommend the one-line form when we don't need the error later: `if err := s.reconcileIdentityService(..); err != nil {` (No need to fix, just my 2c.)
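Spelled out with the arguments and return value visible in the hunk above, the suggested one-line form would look like this (a sketch only):

```go
// One-line form: err is scoped to the if statement and cannot leak.
if err := s.reconcileIdentityService(ctx, kubeConfig, &log); err != nil {
	return ctrl.Result{}, err
}
```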
```go
desiredEnableIdentityService := s.scope.GCPManagedControlPlane.Spec.EnableIdentityService
if desiredEnableIdentityService != existingCluster.GetIdentityServiceConfig().GetEnabled() {
	needUpdate = true
	clusterUpdate.DesiredIdentityServiceConfig = &containerpb.IdentityServiceConfig{Enabled: desiredEnableIdentityService}
```
FYI I believe that some (OK, most) fields cannot be updated in "one shot". https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/ClusterUpdate says "Exactly one update can be applied to a cluster with each request, so at most one field can be provided."
I think the easiest way to handle this is probably to build the `UpdateClusterRequest` as we are doing here, but then to break it down into one-field-at-a-time requests when we actually go to call `UpdateCluster`.
(I don't know whether we want to handle this in this PR - or maybe it is handled somewhere else and I missed it - but it is a classic gotcha that I'm sure we'll hit!)
This is an interesting limit I did not know about. We're actually updating the `UpdateClusterRequest` with all of the changes we detect and then calling `UpdateCluster()`. Should we be having issues with GCP rejecting multiple changes at the same time? I don't recall seeing this. Does it mean that only one of the updated fields is applied, and an update of multiple fields needs as many reconciliations?
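To make the one-field-per-request constraint concrete, here is a minimal sketch of the fan-out approach described above: build the full `ClusterUpdate` as the PR already does, then send it one field at a time. This is not the PR's code; `splitClusterUpdate` and `applyClusterUpdates` are hypothetical helpers, and the import paths assume the current `cloud.google.com/go/container` module layout.

```go
package gke // illustrative package name

import (
	"context"
	"fmt"

	container "cloud.google.com/go/container/apiv1"
	"cloud.google.com/go/container/apiv1/containerpb"
	"google.golang.org/protobuf/reflect/protoreflect"
)

// splitClusterUpdate fans a multi-field ClusterUpdate out into one
// single-field ClusterUpdate per populated field, since the GKE API
// accepts at most one field per UpdateCluster request.
func splitClusterUpdate(update *containerpb.ClusterUpdate) []*containerpb.ClusterUpdate {
	var updates []*containerpb.ClusterUpdate
	update.ProtoReflect().Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		single := &containerpb.ClusterUpdate{}
		single.ProtoReflect().Set(fd, v)
		updates = append(updates, single)
		return true // keep iterating over populated fields
	})
	return updates
}

// applyClusterUpdates issues one UpdateCluster call per populated field.
// name is the full resource name, e.g. "projects/p/locations/l/clusters/c".
func applyClusterUpdates(ctx context.Context, c *container.ClusterManagerClient, name string, update *containerpb.ClusterUpdate) error {
	for _, single := range splitClusterUpdate(update) {
		req := &containerpb.UpdateClusterRequest{Name: name, Update: single}
		// In practice each returned Operation should be waited on before
		// sending the next update, as GKE rejects concurrent mutations.
		if _, err := c.UpdateCluster(ctx, req); err != nil {
			return fmt.Errorf("updating cluster %q: %w", name, err)
		}
	}
	return nil
}
```

Protobuf reflection's `Range` only visits populated fields, so the fan-out stays cheap and naturally skips anything the drift detection did not set.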
```go
identityServiceServer, err := s.getIdentityServiceServer(ctx, kubeConfig)
if err != nil {
	err = fmt.Errorf("failed to retrieve identity service: %w", err)
	log.Error(err, "Failed to retrieve identity service server")
```
Nit: I personally think we shouldn't do this; we should rely on the caller logging. But I'm guessing this happens more often than we would like.
(Another thought, likely not for this PR: we should decide whether we should pass the logr in, vs getting it from the ctx.)
I second this: it is cleaner to have the caller log the error and the called method only return it here.
I also agree on the logger being passed as an argument, which we're doing already and which I think is overly convoluted, but I suggest we discuss this in a separate issue and open a stand-alone PR to tidy it up.
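As a sketch of the pattern both reviewers prefer (signatures inferred from the hunks above; the `kubeConfig` type and logger plumbing are assumptions, not the PR's actual code), the callee only wraps and returns, and the caller owns the logging:

```go
// Sketch: reconcileIdentityService as caller, getIdentityServiceServer
// (the PR's method, signature assumed) as callee.
func (s *Service) reconcileIdentityService(ctx context.Context, kubeConfig string, log *logr.Logger) error {
	identityServiceServer, err := s.getIdentityServiceServer(ctx, kubeConfig)
	if err != nil {
		// No log.Error here: return a wrapped error and let the caller
		// log it once, at the level it chooses.
		return fmt.Errorf("failed to retrieve identity service: %w", err)
	}
	log.Info("Retrieved identity service server", "server", identityServiceServer)
	// remainder of reconciliation elided
	return nil
}
```

This avoids the double-logging that tends to creep in when both the callee and every caller report the same failure.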
Some comments but nothing blocking IMO. /lgtm
What type of PR is this?
/kind feature
What this PR does / why we need it:
Following #1366, the goal of this PR is to expose the server used by the identity service, enabling easier authentication without prior cluster access or secret manipulation, and to support updating the identity service config.
Doc: https://cloud.google.com/kubernetes-engine/docs/how-to/oidc
example:
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
TODOs:
Release note: