multiproject: centralize informers; harden manager; add docs #2972
base: master
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: panslava

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from 69aec65 to 438d992
Force-pushed from 72a34fc to 2eb739c
Replace factory-based informer creation with a centralized InformerSet that:
- Creates base SharedIndexInformers directly without factories
- Manages lifecycle and synchronization in one place
- Provides filtered views for ProviderConfig-specific controllers
- Reduces boilerplate and improves maintainability

Using informers directly without factories simplifies the logic and eliminates potential mistakes from unnecessary factory usage, such as accidentally creating duplicate informers or incorrect factory scoping.
- Add manager_test.go covering:
  - Start adds NEG cleanup finalizer and is idempotent
  - Start failure and GCE client creation failure roll back finalizer
  - Stop closes controller channel and removes finalizer
  - Concurrent starts for same ProviderConfig are single-shot
- Improve manager.go:
  - Introduce test seam via package var startNEGController
  - Ensure finalizer is added before start; roll back on any startup error
  - Delete finalizer using latest ProviderConfig from API to avoid stale updates
  - Wrap errors with %w and add GoDoc comments
  - Rename exported ControllerSet to unexported controllerSet
- No expected behavior change in the happy path; robustness and testability improved.
Force-pushed from 2eb739c to 3f6e8d2
Force-pushed from 3f6e8d2 to 291e886
// Start optional informers
startInformer(i.SvcNeg, stopCh)
startInformer(i.Network, stopCh)
startInformer(i.GkeNetworkParams, stopCh)
startInformer(i.NodeTopology, stopCh)
Since we are replicating the nil-client check multiple times, can we consolidate the list of informers that need to be started during initialization?
}

// hasSyncedFuncs returns a list of HasSynced functions for all non-nil informers.
func (i *InformerSet) hasSyncedFuncs() []func() bool {
Can we consolidate these checks, perhaps at initialization? That way we save the set of functions and don't have to re-calculate it every time.
if err != nil {
	t.Fatalf("Start() returned error: %v", err)
}
if ok := cache.WaitForCacheSync(stop, inf.CombinedHasSynced()); !ok {
Can you expand this to show that it is waiting for multiple informers?
t.Fatalf("FilterByProviderConfig returned nil (after start)")
}
if !filteredAfter.started {
	t.Fatalf("expected filtered InformerSet to have started=true after base Start")
Should we check all the informers?
pcLatest, err := pccm.providerConfigClient.CloudV1().ProviderConfigs().Get(context.Background(), pc.Name, metav1.GetOptions{})
if err != nil {
pcCopy := pc.DeepCopy()
pcCopy.Finalizers = append(pcCopy.Finalizers, finalizer.ProviderConfigNEGCleanupFinalizer)
We should only add this finalizer if it doesn't already exist.
Is this to cover the case where our cache hasn't updated after we put it on?
// TestStart_AddsFinalizer_AndIsIdempotent verifies that starting controllers
// for a ProviderConfig adds the NEG cleanup finalizer and that repeated starts
// are idempotent (the underlying controller is launched only once).
func TestStart_AddsFinalizer_AndIsIdempotent(t *testing.T) {
Can you add a test case for starting a controller for a PC that already has a finalizer (in the case of a controller restart)?
// TestStart_FailureCleansFinalizer verifies that when starting controllers
// fails, the added finalizer is rolled back and removed from the object.
func TestStart_FailureCleansFinalizer(t *testing.T) {
What about the case where the PC already had a finalizer to begin with?
testCases := []struct {
	name          string
	closeProvider bool
Add a comment explaining what this bool dictates in the test.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign @swetharepakula