
Async Inventory Loading Implementation#4893

Open
marchcat wants to merge 45 commits into develop from pepper/async-inventory

Conversation

@marchcat
Contributor

@marchcat marchcat commented Oct 24, 2025

Summary:

Introduces asynchronous inventory skeleton loading via AIS to accelerate login (up to 12s faster for 100k+ item inventories) by hydrating from cache, fetching essentials on-demand, and throttling UI updates. Adds debug settings for concurrency/timeouts, fixes cache persistence/dehydration, and optimizes UI with lazy folder widget creation (O(n) parent-child mapping via caching).

Testing:

QA-only ForceAsyncInventorySkeleton toggle (enabled by default); monitor login times/cache integrity. No breaking changes.

Additional test:

  1. Log in on machine A. Log out.
  2. Wear an outfit with multiple items of the same type on machine B. Log out.
  3. Log in on machine A. Change outfit order. Log out.
  4. Log in on machine B. Confirm order of items.

Comment thread indra/llui/llpanel.cpp Outdated
Comment thread indra/newview/llstartup.cpp Outdated
Comment thread indra/newview/llstartup.cpp Outdated
marchcat added a commit that referenced this pull request Oct 29, 2025
- store cache keys as std::string to avoid dangling references
- refresh parent entry on rename and skip unnamed views
marchcat and others added 2 commits October 29, 2025 05:13
- store cache keys as std::string to avoid dangling references
- refresh parent entry on rename and skip unnamed views
@github-actions github-actions Bot added the cmake label Oct 29, 2025
Comment thread indra/newview/llinventorymodel.cpp Outdated
Comment thread indra/newview/lllogininstance.cpp
@akleshchev
Contributor

I feel like this bundle of optimizations needs to be split into multiple separate PRs and reviewed separately.
Ex: the LLView name cache looks promising, if incomplete, but it should not be bundled with inventory changes.

Comment thread indra/newview/llinventorypanel.cpp Outdated
marchcat and others added 19 commits April 15, 2026 16:14
* Add channel based streaming priority.

* Lessen the aggressiveness of the discards.
…l fetches, track COF UUID, library root non-essential
@akleshchev akleshchev self-requested a review April 27, 2026 18:43
@marchcat
Contributor Author

Every file in this PR is directly related to async inventory loading. The three files that may look "unrelated" (llview.cpp/h, llpanel.cpp) are explained below.

llview.cpp/h: O(1) child view lookup cache

The existing findChildView() does a linear scan over mChildList. With synchronous loading, all inventory widgets are built in one batch at login, so this cost is paid once. With async loading, widgets are created lazily when folders are expanded (BUILD_ONE_FOLDER mode). This means findChildView() is called repeatedly during normal browsing, not just at startup — the linear scan became a bottleneck. The fix adds an unordered_map mChildNameCache to LLView for O(1) lookups, maintained in addChild(), removeChild() and setName().
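The mechanism described above can be sketched in a simplified, self-contained form. This is not the viewer's actual LLView code: the `View` class below is a minimal stand-in that assumes unique child names (which, as noted later in this thread, the real viewer cannot assume), and it only shows how the name cache stays in sync with `addChild()`, `removeChild()`, and renames.

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Minimal sketch (not the real LLView): a widget tree that keeps an
// unordered_map from child name to pointer, so lookups are O(1) average
// case instead of a linear scan over the child list.
class View {
public:
    explicit View(std::string name) : mName(std::move(name)) {}

    void addChild(View* child) {
        mChildList.push_back(child);
        mChildNameCache[child->mName] = child;   // keep the cache in sync
    }

    void removeChild(View* child) {
        mChildList.remove(child);
        mChildNameCache.erase(child->mName);
    }

    // Refresh the cache entry when a child is renamed.
    void renameChild(View* child, std::string new_name) {
        mChildNameCache.erase(child->mName);
        child->mName = std::move(new_name);
        mChildNameCache[child->mName] = child;
    }

    // O(1) average-case lookup instead of scanning mChildList.
    View* findChildView(const std::string& name) const {
        auto it = mChildNameCache.find(name);
        return it == mChildNameCache.end() ? nullptr : it->second;
    }

    std::string mName;

private:
    std::list<View*> mChildList;
    // std::string keys own their storage, avoiding dangling references
    // to names held by the children themselves.
    std::unordered_map<std::string, View*> mChildNameCache;
};
```

The cache doubles per-child bookkeeping on every add/remove, which only pays off when the same parent is queried by name repeatedly, as in the lazy folder-expansion path described above.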

llpanel.cpp: reduce repeated singleton/accessor overhead in initPanelXML()

initPanelXML() runs for every panel constructed from XML, including every inventory folder widget. With lazy creation, it fires much more often during the session. The changes cache three values that were re-fetched on every call: LLUICtrlFactory::getInstance(), node->getName() and child_registry_t::instance(). Also eliminates a redundant local string copy in favor of using mXMLFilename directly.
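The hoisting pattern described above can be illustrated with a hypothetical hot path. The names below (`Factory`, `buildPanelChildren`) are illustrative only, not the viewer's real types; the point is that accessors which used to be re-evaluated on every use are fetched once into locals and reused.

```cpp
#include <string>

// Illustrative stand-in for a singleton like LLUICtrlFactory.
struct Factory {
    static Factory& getInstance() {
        static Factory instance;  // thread-safe since C++11
        return instance;
    }
    int mWidgetsBuilt = 0;
};

// Hypothetical hot path mirroring the initPanelXML() change: the singleton
// and the node name are cached in locals once per call instead of being
// re-fetched (or re-copied) on every use inside the loop.
int buildPanelChildren(const std::string& node_name, int child_count) {
    Factory& factory = Factory::getInstance();  // fetched once per call
    const std::string& name = node_name;        // reference, no string copy
    int built = 0;
    for (int i = 0; i < child_count; ++i) {
        if (!name.empty()) {                    // reuse the cached name
            ++factory.mWidgetsBuilt;            // reuse the cached singleton
            ++built;
        }
    }
    return built;
}
```

Each individual fetch is cheap, but as the comment above notes, the cost compounds when the function runs once per lazily created widget.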

Why in this PR

All three files address performance issues caused by the lazy-creation approach. Without the cache, expanding a folder with hundreds of children would do O(n^2) string comparisons. The llpanel changes reduce per-widget overhead that compounds when widgets are created on demand. They are not unrelated work.

@akleshchev
Contributor

llview.cpp/h: O(1) child view lookup cache

My problems with this solution:

  • It affects everything, not just inventory. Names aren't unique, and normally we do only one request in postBuild. With this change we keep the cache effectively forever after a first and only getChild request (we do have places that do repeated getChild, but the solution is to stop doing that). At a minimum this should be moved into LLFolderViewItem or whatever class it's used in, not be widespread.
  • Some inventories are already massive, with a huge memory footprint, and we have multiple instances in the background at any moment. If this caches all those names, the footprint will be massive, and the viewer is already heavy on RAM.

@marchcat
Contributor Author

You're right, thanks!

After tracing all 31 findChildView() call sites, none of them operate on LLFolderViewFolder or LLFolderView instances. The inventory code paths use direct mItemMap lookups by UUID, not name-based findChildView. So I removed the mChildNameCache from LLView entirely (d4a557a, in #5746).

The llpanel.cpp changes (caching getInstance/getName/instance locally in initPanelXML) are kept – those are local variable optimizations with zero blast radius beyond the function scope.

@akleshchev
Contributor

akleshchev commented Apr 30, 2026

llpanel.cpp: reduce repeated singleton/accessor overhead in initPanelXML()

I agree that it is a good change, but not specific to async inventory.
In the case of inventory it would be more effective to load the inventory item's XML into memory and keep it there, or to avoid parsing entirely and keep some prototype params in memory, ready to be copied. We need them a lot and often, so it would be justified and much faster.
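The prototype-params idea suggested above could be sketched as follows. This is a hypothetical illustration, not viewer code: `FolderParams` and `PrototypeCache` are made-up types, and the expensive XML parse is simulated by a counter, so the only point shown is parse-once-then-copy.

```cpp
#include <string>
#include <unordered_map>

// Illustrative stand-in for a widget's parsed XML parameter block.
struct FolderParams {
    std::string font;
    int indent = 0;
};

// Hypothetical cache of ready-to-copy prototype params: the XML for a given
// widget template is "parsed" at most once, then each new widget gets a
// cheap copy of the in-memory prototype instead of a fresh parse.
class PrototypeCache {
public:
    FolderParams get(const std::string& xml_name) {
        auto it = mCache.find(xml_name);
        if (it == mCache.end()) {
            ++mParseCount;  // simulates the expensive one-time XML parse
            it = mCache.emplace(xml_name, FolderParams{"SansSerif", 8}).first;
        }
        return it->second;  // cheap copy of the cached prototype
    }
    int mParseCount = 0;

private:
    std::unordered_map<std::string, FolderParams> mCache;
};
```

In exchange for keeping one prototype per template resident in memory, every widget after the first skips parsing entirely, which is the trade-off the comment above argues is justified for inventory.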

@github-actions github-actions Bot added the llxml label May 1, 2026

Successfully merging this pull request may close these issues.

Async Inventory: Improve Login Performance via Asynchronous Loading

5 participants