refactor: simplify FlowALPRebalancerPaidv1 #298

Open

holyfuchs wants to merge 5 commits into main from holyfuchs/rebalancer-refactor
Conversation

@holyfuchs
Member

Removes per-rebalancer RecurringConfig storage and the RebalancerPaid handle resource.
All PositionRebalancer instances now read from a single defaultRecurringConfig set by the admin, eliminating per-position config.
createPaidRebalancer is now fire-and-forget (no return value), and removal goes through Admin.removePaidRebalancer.
Focus on simplicity.
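A minimal sketch of the simplified call pattern described above (the transaction wrapper and the `pid` argument are illustrative assumptions; exact signatures may differ in the contract):

```cadence
// Hypothetical usage sketch based on the PR description, not the actual transactions.
import FlowALPRebalancerPaidv1 from 0xALP

transaction(pid: UInt64) {
    execute {
        // Fire-and-forget: no RebalancerPaid handle resource is returned.
        // The rebalancer reads the admin-set defaultRecurringConfig.
        FlowALPRebalancerPaidv1.createPaidRebalancer(pid: pid)
    }
}

// Removal is admin-only, via the Admin resource:
//   adminRef.removePaidRebalancer(pid: pid)
```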

@holyfuchs holyfuchs requested a review from a team as a code owner March 27, 2026 14:23
@holyfuchs holyfuchs requested a review from a team April 1, 2026 14:01
Member

@jordanschalm jordanschalm left a comment


Haven't finished the review, but I have a general question about the rebalancer design. From past Q&As I thought we needed an architecture where a single supervisor would check position health and only schedule per-position rebalances for positions that need it (i.e. are outside the health bounds). One rebalancer per position, all independently checking health and rebalancing at the same interval, does not seem like it would scale to large numbers of positions.

Mainly want to confirm my understanding; if the plan is to make this change later, that's fine.

/// Rebalancing is done on a best-effort basis (even when force=true). If the position has no sink/source,
/// or either cannot accept/provide sufficient funds for rebalancing, the rebalance will still occur but will
/// not cause the position to reach its target health.
access(all) view fun positionExists(pid: UInt64): Bool {
Member


Documentation should move back to `rebalancePosition`

Member


Noting there is some overlap here with the outstanding closePosition functionality from #208, but since that PR is removing positions from the map upon closing, this approach will work for closed positions too.

access(all) let interval: UInt64
access(all) let priority: FlowTransactionScheduler.Priority
access(all) let executionEffort: UInt64
/// feePaid = estimate.flowFee * estimationMargin
Member


Suggested change
/// feePaid = estimate.flowFee * estimationMargin
/// feePaid = estimate.flowFee * estimationMargin
/// For example, for a 5% margin, set estimationMargin=1.05

access(all) let executionEffort: UInt64
/// feePaid = estimate.flowFee * estimationMargin
access(all) let estimationMargin: UFix64
/// Whether to force rebalance even when the position is already balanced
Member


Suggested change
/// Whether to force rebalance even when the position is already balanced
/// Whether to force rebalance even when the position is within its configured min/max health bounds.


/// Configuration for how often and how the rebalancer runs, and which account pays scheduler fees.
access(all) struct RecurringConfig {
access(all) let interval: UInt64
Member


Suggested change
access(all) let interval: UInt64
/// Period of rebalance transactions, in seconds.
access(all) let interval: UInt64

access(all) let positionID: UInt64
access(all) var lastRebalanceTimestamp: UFix64

access(self) var selfCap: Capability<auth(FlowTransactionScheduler.Execute) &{FlowTransactionScheduler.TransactionHandler}>?
Member


Suggested change
access(self) var selfCap: Capability<auth(FlowTransactionScheduler.Execute) &{FlowTransactionScheduler.TransactionHandler}>?
/// A capability referencing this PositionRebalancer, set by the contract when the PositionRebalancer is created (and stored).
/// This is necessary because in order to schedule the next transaction, we need to pass a persistent reference (capability) to FlowTransactionScheduler.
access(self) var selfCap: Capability<auth(FlowTransactionScheduler.Execute) &{FlowTransactionScheduler.TransactionHandler}>?

self.selfCap = cap
}

access(FlowTransactionScheduler.Execute) fun executeTransaction(id: UInt64, data: AnyStruct?) {
Member


Suggested change
access(FlowTransactionScheduler.Execute) fun executeTransaction(id: UInt64, data: AnyStruct?) {
/// Invoked by FlowTransactionScheduler to execute each requested scheduled transaction.
/// @param id: the scheduled transaction ID
access(FlowTransactionScheduler.Execute) fun executeTransaction(id: UInt64, data: AnyStruct?) {


### Why `fixReschedule()` is necessary

After each run, the rebalancer calls `scheduleNext()` to book the next run with `FlowTransactionScheduler`. That call can **fail** for transient reasons (e.g. `txFunder` has insufficient balance, or the scheduler is busy). When it fails, the rebalancer emits `FailedRecurringSchedule` and does **not** schedule the next execution — leaving it stuck.
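The recovery path described above could look roughly like this (a hypothetical sketch: the helper names `borrowRebalancer` and `hasPendingSchedule` are assumptions, not names from this PR):

```cadence
// Hypothetical sketch of fixReschedule(); the actual implementation may differ.
access(all) fun fixReschedule(pid: UInt64) {
    let rebalancer = self.borrowRebalancer(pid: pid)
        ?? panic("no rebalancer for position ".concat(pid.toString()))
    // Only re-arm a rebalancer whose previous scheduleNext() failed,
    // i.e. one with no pending scheduled transaction.
    assert(!rebalancer.hasPendingSchedule(), message: "already scheduled")
    // Retry booking the next run with FlowTransactionScheduler.
    rebalancer.scheduleNext()
}
```

Because `scheduleNext()` failure only emits `FailedRecurringSchedule` rather than aborting, anyone (or a monitoring job watching for that event) can call this to un-stick the recurring loop once the transient condition clears.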
Member


> or the scheduler is busy

What does busy mean? Does it mean that the scheduler has already filled the allocation of blocks for the requested timestamp?
