fix(deps): update all non-major gomod dependencies #925
ℹ Artifact update notice
In order to perform the update(s) described in the table above, Renovate also ran update commands for the following files:
go.mod
libs/common/go.mod
libs/decaying-lru/go.mod
libs/hwauthz/go.mod
libs/hwdb/go.mod
libs/hwes/go.mod
libs/hwlocale/go.mod
libs/hwtesting/go.mod
libs/hwutil/go.mod
libs/telemetry/go.mod
services/property-svc/go.mod
services/tasks-svc/go.mod
services/updates-svc/go.mod
services/user-svc/go.mod
spicedb/migrations/go.mod
This PR contains the following updates:
v1.4.0 -> v1.5.0, v0.1.0 -> v0.4.0, v1.6.0 -> v1.13.0, v1.2.0 -> v1.7.0, v2.2.1+incompatible -> v2.4.0+incompatible, v1.14.4 -> v1.16.3, v1.11.0 -> v1.13.0, v10.23.0 -> v10.28.0, v4.18.1 -> v4.19.0, v1.0.1 -> v1.1.0, v2.2.0 -> v2.3.3, v5.7.2 -> v5.7.6, v2.4.1 -> v2.6.0, v4.3.0 -> v4.9.0, v1.20.5 -> v1.23.2, v9.7.0 -> v9.17.0, v1.33.0 -> v1.34.0, v0.3.2 -> v0.3.4, v1.10.0 -> v1.11.1, v0.35.0 -> v0.40.0, v0.34.0 -> v0.40.0, v0.34.0 -> v0.40.0, v0.58.0 -> v0.63.0, v1.33.0 -> v1.38.0, v1.33.0 -> v1.38.0, v1.33.0 -> v1.38.0, v1.33.0 -> v1.38.0, v1.33.0 -> v1.38.0, v1.33.0 -> v1.38.0, v0.33.0 -> v0.47.0, v0.25.0 -> v0.33.0, v0.21.0 -> v0.31.0, v1.69.2 -> v1.77.0, v1.36.1 -> v1.36.10
Warning
Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
BurntSushi/toml (github.com/BurntSushi/toml)
v1.5.0Compare Source
Mostly some small bugfixes, with a few small new features:
Add Position.Col, to mark the column an error occurred (#410)
Print more detailed errors in the tomlv CLI
Ensure ParseError.Message is always set (#411)
Allow custom string types as map keys (#414)
Mark meta keys as decoded when using Unmarshaler interface (#426)
Fix encoding when nested inline table ends with map (#438)
Fix encoding of several layers of embedded structs (#430)
Fix ErrorWithPosition panic when there is no newline in the TOML document (#433)
Mariscal6/testcontainers-spicedb-go (github.com/Mariscal6/testcontainers-spicedb-go)
v0.4.0Compare Source
What's Changed
Full Changelog: Mariscal6/testcontainers-spicedb-go@v0.3.0...v0.4.0
v0.3.0Compare Source
What's Changed
Full Changelog: Mariscal6/testcontainers-spicedb-go@v0.2.0...v0.3.0
v0.2.0Compare Source
What's Changed
New Contributors
Full Changelog: Mariscal6/testcontainers-spicedb-go@v0.1.0...v0.2.0
alecthomas/kong (github.com/alecthomas/kong)
v1.13.0Compare Source
v1.12.1Compare Source
v1.12.0Compare Source
v1.11.0Compare Source
v1.10.0Compare Source
v1.9.0Compare Source
v1.8.1Compare Source
v1.8.0Compare Source
v1.7.0Compare Source
v1.6.1Compare Source
authzed/authzed-go (github.com/authzed/authzed-go)
v1.7.0Compare Source
What's Changed
New Contributors
Full Changelog: authzed/authzed-go@v1.6.0...v1.7.0
v1.6.0Compare Source
Highlights
Bring in v1.45.4 backwards-compatible changes for SpiceDB
What's Changed
d27fc02 by @josephschorr in #350
Full Changelog: authzed/authzed-go@v1.5.0...v1.6.0
v1.5.0Compare Source
What's New
What's Changed
Full Changelog: authzed/authzed-go@v1.4.1...v1.5.0
v1.4.1Compare Source
What's Changed
New Contributors
Full Changelog: authzed/authzed-go@v1.4.0...v1.4.1
v1.4.0Compare Source
Highlights
What's Changed
New Contributors
Full Changelog: authzed/authzed-go@v1.3.0...v1.4.0
v1.3.0Compare Source
What's Changed
Full Changelog: authzed/authzed-go@v1.2.1...v1.3.0
v1.2.1Compare Source
What's Changed
Full Changelog: authzed/authzed-go@v1.2.0...v1.2.1
coreos/go-oidc (github.com/coreos/go-oidc)
v2.4.0+incompatibleCompare Source
v2.3.0+incompatibleCompare Source
dapr/dapr (github.com/dapr/dapr)
v1.16.3: Dapr Runtime v1.16.3Compare Source
Dapr 1.16.3
This update includes bug fixes:
SFTP binding not handling reconnections
Problem
The SFTP binding, introduced in v1.15.0, did not correctly handle reconnections.
If the SFTP connection was closed externally (outside the Dapr sidecar), the sidecar would not attempt to reconnect.
Impact
In scenarios where the SFTP server or network closed the connection, the Dapr sidecar lost connectivity permanently and required a restart to restore SFTP communication.
Root Cause
The SFTP binding maintained a single long-lived connection and did not attempt to recreate it when operations failed due to network or server-side disconnects.
Once the underlying SFTP/SSH session was closed, subsequent binding operations continued to use the stale connection instead of establishing a new one, leaving the binding in a permanently broken state until the sidecar was restarted.
Solution
A new reconnection mechanism was added to the SFTP binding (PR).
When an SFTP action fails due to a connection issue, the binding now attempts to reconnect to the server and restore connectivity automatically, avoiding the need to restart the sidecar.
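A minimal sketch of the reconnect-on-failure pattern described above, assuming hypothetical client and dial helpers (not the actual Dapr binding code):

```go
package sftpbinding

// Sketch of a reconnect-on-failure wrapper. The names sftpClient, dial, and
// isConnErr are hypothetical stand-ins, not the actual Dapr binding API.
import (
	"fmt"
	"sync"
)

type sftpClient interface {
	Upload(path string, data []byte) error
	Close() error
}

type binding struct {
	mu     sync.Mutex
	client sftpClient
	dial   func() (sftpClient, error) // re-establishes the SSH/SFTP session
}

// withReconnect runs op once and, if it fails with a connection error,
// rebuilds the client and retries once instead of staying broken.
func (b *binding) withReconnect(op func(c sftpClient) error) error {
	b.mu.Lock()
	defer b.mu.Unlock()

	if err := op(b.client); err == nil || !isConnErr(err) {
		return err
	}
	// Connection was closed externally: drop the stale session and reconnect.
	_ = b.client.Close()
	c, err := b.dial()
	if err != nil {
		return fmt.Errorf("sftp reconnect failed: %w", err)
	}
	b.client = c
	return op(b.client)
}

// isConnErr reports whether err looks like a broken-connection error
// (placeholder; real code would inspect the SSH/SFTP error types).
func isConnErr(err error) bool { return err != nil }
```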
v1.16.2: Dapr Runtime v1.16.2Compare Source
Dapr 1.16.2
This update includes bug fixes:
HTTP API default CORS behavior
Problem
The 1.16.0 release introduced a change to the default CORS behavior of the Dapr HTTP API: CORS headers were now added to all HTTP responses by default, and this new behavior could not be disabled.
Impact
This caused problems in scenarios where CORS is handled outside of the Dapr sidecar, because the Dapr Sidecar always added CORS headers.
Solution
Revert part of the behavior introduced in this PR and change the default value of the allowed-origins flag to an empty string, disabling the CORS filter by default.
Scheduler External etcd with multiple client endpoints
Problem
Using Scheduler in non-embed mode with multiple etcd client endpoints was not working.
Impact
It was not possible to use multiple etcd endpoints for high availability with an external etcd database for scheduler.
Root Cause
The Scheduler etcd client endpoints CLI flag was typed as a string array, rather than a string slice, causing the given value to be parsed as a single string rather than a slice of strings.
Solution
Changed the type of the etcd client endpoints CLI flag to be a string slice.
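A small illustration of the difference, assuming the flag is defined with github.com/spf13/pflag; the flag name used here is illustrative, not necessarily the exact Scheduler flag:

```go
package main

// Shows why the flag type matters: StringArray keeps a comma-separated value
// as one string, while StringSlice splits it into multiple endpoints.
import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	args := []string{"--etcd-client-endpoints=http://etcd-0:2379,http://etcd-1:2379"}

	// StringArray: the whole value stays a single element.
	arrFS := pflag.NewFlagSet("array", pflag.ContinueOnError)
	arr := arrFS.StringArray("etcd-client-endpoints", nil, "etcd endpoints")
	_ = arrFS.Parse(args)
	fmt.Println(len(*arr), *arr) // 1 [http://etcd-0:2379,http://etcd-1:2379]

	// StringSlice: commas split the value into one endpoint per element.
	slcFS := pflag.NewFlagSet("slice", pflag.ContinueOnError)
	slc := slcFS.StringSlice("etcd-client-endpoints", nil, "etcd endpoints")
	_ = slcFS.Parse(args)
	fmt.Println(len(*slc), *slc) // 2 [http://etcd-0:2379 http://etcd-1:2379]
}
```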
Placement not cleaning internal state after host that had actors disconnects
Problem
An actor host that had actors was not properly cleaned up from placement after the sidecar was scaled down and the placement stream was closed.
Impact
This results in the placement server iterating over namespaces that no longer exist for every tick of the disseminate ticker.
Root Cause
The function requiresUpdateInPlacementTables should not set isActorHost to false once it has been set to true, because once a host has actors the placement server keeps internal state for it, and cleanup logic must be executed once the host disconnects.
Solution
Update the logic in requiresUpdateInPlacementTables.
Blocked Placement dissemination during high churn
Problem
Placement would disseminate the actor table very slowly, or fail to disseminate it at all, in high daprd churn scenarios.
Impact
Actors or workflows would fail to be activated, and existing actors or workflows would fail.
Root Cause
Placement used a "small" (100) queue size which when exhausted would cause a deadlock. Placement would also wait for a fully consumed channel queue before disseminating slowing down the dissemination process.
Solution
Increase the queue size to 10000 and change the dissemination logic to not wait for a fully consumed queue before disseminating.
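A rough sketch of the queueing idea, assuming the dissemination queue is a buffered Go channel; the real Placement code is more involved, and all names here are illustrative:

```go
package main

// Sketch: a small buffered queue blocks producers once it is full, which can
// deadlock if the consumer is itself waiting on those producers. A larger
// buffer plus disseminating on a timer (instead of waiting for the queue to
// drain completely) avoids that.
import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical membership-change queue; 10000 mirrors the new size.
	queue := make(chan string, 10000)

	// Producer: host churn events are enqueued without blocking during bursts
	// because the buffer is large enough.
	go func() {
		for i := 0; i < 500; i++ {
			queue <- fmt.Sprintf("host-%d joined", i)
		}
	}()

	// Consumer: disseminate periodically, folding in whatever is pending,
	// rather than waiting for the queue to be fully consumed first.
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for i := 0; i < 3; i++ {
		<-ticker.C
		n := len(queue)
		for j := 0; j < n; j++ {
			<-queue // fold pending changes into the actor table (elided)
		}
		fmt.Printf("disseminating after folding %d changes\n", n)
	}
}
```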
Blocked Placement dissemination with high Scheduler dataset
Problem
Disseminations would hang for long periods of time when the Scheduler dataset was large.
Impact
Dissemination could take up to hours to complete, causing reminders to not be delivered for a long period of time.
Root Cause
The reminder migration from state store reminders to Scheduler reminders does a full decoded scan of the Scheduler database, which would take a long time if there were many entries. During this time, dissemination would be blocked.
Solution
Limit the maximum time spent doing the migration to 3 seconds.
Expose a new global.reminders.skipMigration="true" Helm chart value which skips the migration entirely.
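A minimal sketch of capping the migration with a 3-second context deadline; migrateReminders here is a hypothetical stand-in for the actual migration routine:

```go
package main

// Bound the reminder migration at 3 seconds with a context deadline so
// dissemination is not blocked indefinitely. migrateReminders is hypothetical.
import (
	"context"
	"errors"
	"log"
	"time"
)

func migrateReminders(ctx context.Context) error {
	// Pretend the full scan of the Scheduler database would take longer than 3s.
	select {
	case <-time.After(10 * time.Second):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	if err := migrateReminders(ctx); errors.Is(err, context.DeadlineExceeded) {
		// Give up on the migration for now instead of blocking dissemination.
		log.Println("reminder migration exceeded 3s budget, skipping for this cycle")
	}
}
```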
Fix panic during actor deactivation
Problem
Daprd could panic during actor deactivation.
Impact
Daprd sidecar would crash, resulting in downtime for the application.
Root Cause
A race between the actor lock's cached-memory release and claiming logic meant a stale lock could be used during deactivation, double-closing it and causing a panic.
Solution
Tie the lock's lifecycle to the actor's lifecycle, ensuring the lock is only released when the actor is fully deactivated, and claimed with the actor itself.
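A simplified sketch of the idea, with hypothetical types: the lock lives on the actor value itself, so it cannot outlive the actor or be double-closed after deactivation.

```go
package main

// Sketch: embedding the lock in the actor ties its lifecycle to the actor's,
// so a stale, separately-cached lock can never be used during deactivation.
// All type and method names here are hypothetical.
import (
	"fmt"
	"sync"
)

type actor struct {
	id          string
	mu          sync.Mutex // lives and dies with the actor value
	deactivated bool
}

func (a *actor) deactivate() {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.deactivated {
		return // idempotent: a second deactivation is a no-op, not a panic
	}
	a.deactivated = true
	fmt.Printf("actor %s deactivated\n", a.id)
}

func main() {
	table := map[string]*actor{"counter||1": {id: "counter||1"}}
	a := table["counter||1"]
	a.deactivate()
	delete(table, a.id) // the actor and its lock are released together
}
```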
OpenTelemetry environment variables support
Problem
OpenTelemetry OTEL_* environment variables were not fully respected, and dapr.io/env annotation parsing broke when values contained =.
Impact
OpenTelemetry resource attributes could not be reliably applied to the Dapr sidecar, degrading trace correlation with application containers, especially on Kubernetes. Configuring OTEL_RESOURCE_ATTRIBUTES via annotations did not work.
Root Cause
The dapr.io/env annotation parsing used = as a hard delimiter, breaking values that include =.
Solution
OTEL_* variables (including OTEL_RESOURCE_ATTRIBUTES) are now honored, and dapr.io/env parsing was fixed to allow values containing =.
Fixing goavro bug due to codec state mutation
Problem
The goavro library had a bug where the codec state was mutated during decoding, causing the decoder to panic.
Impact
The goavro library would panic, causing the application to crash.
Root Cause
The goavro library did not correctly handle the codec state, causing it to panic when the codec state was mutated during decoding.
Solution
Update the goavro library to v2.14.1 to fix the bug, and take a more defensive approach by bringing back the old behavior of always creating a new codec.
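A minimal example of that defensive approach with github.com/linkedin/goavro/v2, creating a fresh codec per decode; the schema and payload are illustrative:

```go
package main

// Decode an Avro payload by building a fresh codec each time, rather than
// reusing a shared codec whose internal state could be mutated during decoding.
import (
	"fmt"
	"log"

	"github.com/linkedin/goavro/v2"
)

const schema = `{"type":"record","name":"Msg","fields":[{"name":"text","type":"string"}]}`

func decode(payload []byte) (map[string]interface{}, error) {
	// A new codec per call avoids sharing mutable codec state across decodes.
	codec, err := goavro.NewCodec(schema)
	if err != nil {
		return nil, err
	}
	native, _, err := codec.NativeFromBinary(payload)
	if err != nil {
		return nil, err
	}
	return native.(map[string]interface{}), nil
}

func main() {
	// Encode a sample record first so we have a valid binary payload.
	codec, err := goavro.NewCodec(schema)
	if err != nil {
		log.Fatal(err)
	}
	payload, err := codec.BinaryFromNative(nil, map[string]interface{}{"text": "hello"})
	if err != nil {
		log.Fatal(err)
	}

	rec, err := decode(payload)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rec["text"]) // hello
}
```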
APP_API_TOKEN not passed in gRPC metadata for app callbacks
Problem
When APP_API_TOKEN was configured, the token was not being passed in gRPC metadata for app callbacks, including pubsub subscription deliveries, bindings, and job callbacks. This meant that applications using the gRPC protocol could not authenticate incoming requests from Dapr when using the app API token security feature.
Impact
Applications that configured APP_API_TOKEN to secure their endpoints could not validate that incoming gRPC requests were from their Dapr sidecar. This broke the app API token authentication feature for gRPC applications.
Root Cause
The gRPC subscription delivery, binding, and job callback code paths were directly calling the app's gRPC client without going through the channel layer abstraction. The channel layer is responsible for injecting the APP_API_TOKEN in the dapr-api-token metadata header, but these direct calls bypassed this mechanism.
Solution
Centralized the APP_API_TOKEN injection logic in a helper function (AddAppTokenToContext) in the gRPC channel layer. Updated all gRPC app callback code paths (pubsub subscriptions, bindings, and job callbacks) to use this helper, ensuring the token is consistently added to the outgoing gRPC context metadata. Added comprehensive integration tests to verify token passing for all callback scenarios in both HTTP and gRPC protocols.
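A hedged sketch of how such a helper could attach the token to outgoing gRPC metadata. AddAppTokenToContext and the dapr-api-token header are named in the release note, but this body is an illustration, not Dapr's actual implementation:

```go
package channel

// Illustrative helper that injects the app API token into outgoing gRPC
// metadata under the dapr-api-token key. This is a sketch, not Dapr's code.
import (
	"context"
	"os"

	"google.golang.org/grpc/metadata"
)

// AddAppTokenToContext returns a context whose outgoing gRPC metadata carries
// the APP_API_TOKEN, so the app can authenticate callbacks from the sidecar.
func AddAppTokenToContext(ctx context.Context) context.Context {
	token := os.Getenv("APP_API_TOKEN")
	if token == "" {
		return ctx // nothing to inject when no token is configured
	}
	return metadata.AppendToOutgoingContext(ctx, "dapr-api-token", token)
}
```

Each callback path (subscription delivery, binding, job callback) would call a helper like this right before invoking the app's gRPC client, so the injection happens in one place.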
Fixed Pulsar OAuth token renewal
Problem
The pulsar pubsub component was not renewing the OAuth token when it expired.
Impact
Applications using the pulsar pubsub component could not receive/publish messages when the OAuth token expired.
Root Cause
There was a bug in the component code that was preventing the OAuth token from being renewed when it expired.
Solution
Fixed the bug in the component code ensuring the OAuth token is renewed when it expires. Also added a test to verify the token renewal functionality. Fixed in dapr/components-contrib#4079
Fix Scheduler connection during non-graceful network interruptions
Problem
A catastrophic failure of the Scheduler connection during a non-graceful network interruption would not cause the Dapr runtime to attempt to reconnect to the Scheduler.
Impact
A true host network interruption (e.g. unplugging the network cable) would cause the Dapr runtime to recover connections to the Scheduler only after roughly 2 hours.
Root Cause
The gRPC KeepAlive parameters were not set correctly, causing the gRPC client to not detect broken connections in a timely manner.
Solution
The server and client KeepAlive parameters are now set to 3 second intervals with a 5 second timeout.
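A small example of client-side gRPC keepalive settings matching the 3-second interval and 5-second timeout described above; the target address is a placeholder, and the server side would set analogous keepalive.ServerParameters:

```go
package main

// Configure gRPC client keepalive so broken connections are detected quickly:
// probe every 3 seconds and give up on a probe after 5 seconds.
import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/keepalive"
)

func main() {
	conn, err := grpc.NewClient(
		"dapr-scheduler-server:50006", // placeholder Scheduler address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                3 * time.Second, // ping after 3s of inactivity
			Timeout:             5 * time.Second, // fail the ping after 5s
			PermitWithoutStream: true,            // keep pinging even with no active RPCs
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```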
Prevent infinite loop when workflow state is corrupted or destroyed
Problem
Dapr workflows could enter an infinite reminder loop when the workflow state in the actor state store is corrupted or destroyed.
Impact
Dapr workflows would enter an infinite loop of reminder calls.
Root Cause
When a workflow reminder is triggered, the workflow state is loaded from the actor state store. If the state is corrupted or destroyed, the workflow would not be able to progress and would keep re-triggering the same reminder indefinitely.
Solution
Do not retry the reminder if the workflow state cannot be loaded, and instead log an error and exit the workflow execution.
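A simplified sketch of that guard, with hypothetical names: if the workflow state cannot be loaded, log the error and stop instead of letting the reminder re-fire forever.

```go
package main

// Sketch: when a workflow reminder fires but the workflow state cannot be
// loaded, log the error and stop instead of rescheduling the reminder, which
// would otherwise loop forever. All names here are hypothetical.
import (
	"errors"
	"fmt"
	"log"
)

type workflowState struct{ InstanceID string }

var errStateCorrupted = errors.New("workflow state corrupted or missing")

// loadWorkflowState stands in for reading the state from the actor state store.
func loadWorkflowState(instanceID string) (*workflowState, error) {
	return nil, errStateCorrupted
}

// onReminder returns false when the reminder must not be retried.
func onReminder(instanceID string) (retry bool) {
	state, err := loadWorkflowState(instanceID)
	if err != nil {
		log.Printf("abandoning workflow %s: %v", instanceID, err)
		return false // do not reschedule; exit the workflow execution
	}
	fmt.Println("resuming workflow", state.InstanceID)
	return true
}

func main() {
	_ = onReminder("order-processing-42")
}
```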
v1.16.1: Dapr Runtime v1.16.1Compare Source
Dapr 1.16.1
This update includes bug fixes:
Actor Initialization Timing Fix
Problem
When running Dapr with an --app-port specified but no application listening on that port (either because there was no server or because server startup was delayed), the actor runtime would initialize immediately, before the app channel was ready. This created a race condition where actors tried to communicate with an application that was not available yet, resulting in repeated error logs.
Impact
This created a poor user experience with confusing error messages when users specified an --app-port but had no application listening on that port.
Root cause
The actor runtime initialization was occurring before the application channel was ready, creating a race condition where actors attempted to communicate with an unavailable application.
Solution
Defer actor runtime initialization until the application channel is ready. The runtime now logs waiting for application to listen on port XXXX messages instead of confusing error logs.
Sidecar Injector Crash with Disabled Scheduler
Problem
The sidecar injector crashes with the error (dapr-scheduler-server StatefulSet not found) when the scheduler is disabled via the Helm chart (global.scheduler.enabled: false).
Impact
The crash prevents the sidecar injector from functioning correctly when the scheduler is disabled, disrupting deployments.
Root cause
A previous change caused the dapr-scheduler-server StatefulSet to be removed when the scheduler was disabled, instead of scaling it to 0 as originally intended. The injector, hardcoded to check for the StatefulSet in the injector.go file, fails when it is not found.
Solution
Revert the behavior to scale the dapr-scheduler-server StatefulSet to 0 when the scheduler is disabled, instead of removing it, as implemented in the Helm chart.
Workflow actor reminders stopped after Application Health check transition
Problem
Application Health checks transitioning from unhealthy to healthy were incorrectly configuring the scheduler clients to stop watching for actor reminder jobs.
Impact
The misconfiguration in the scheduler clients caused workflows to stop executing because reminders no longer fired.
Root cause
On an Application Health change, daprd could trigger an actor update for an empty slice, which caused a scheduler client reconfiguration. Because there were no changes in the actor types, daprd never received a new version of the placement table, which left the scheduler clients misconfigured: when daprd sends an actor type update to the placement server, it wipes out the known actor types in the scheduler client, and since daprd never received an acknowledgement from placement with a new table version, the scheduler client was never updated back with the actor types.
Solution
Prevent any changes to hosted actor types if the input slice is empty.
Configuration
📅 Schedule: Branch creation - Between 06:00 PM and 09:59 PM, only on Friday ( * 18-21 * * 5 ) in timezone Europe/Berlin, Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR was generated by Mend Renovate. View the repository job log.