GET /register returns the same shell script for all hosts sharing the
same registration parameters (org, location, hostgroup, activation keys).
Under bulk registration — 100+ hosts hitting the same capsule simultaneously
— this endpoint is called once per host, each time proxying to Foreman and
waiting for the ERB template to render (~103ms in profiling).
Cache the rendered script in memory (5-minute TTL) keyed on a canonical
form of the request query string. The cache key is computed by parsing the
query string, sorting parameters alphabetically, and rebuilding — so
requests that differ only in parameter order share the same cache entry
and the same Foreman response, regardless of how the client ordered them.
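The canonicalisation step can be sketched with Ruby's stdlib URI helpers — a minimal sketch only; the helper name is illustrative, not the actual method in the PR:

```ruby
require 'uri'

# Parse the query string into [name, value] pairs, sort them
# alphabetically, and rebuild — so parameter order no longer matters.
def canonical_cache_key(query_string)
  URI.encode_www_form(URI.decode_www_form(query_string).sort)
end
```

With this, `canonical_cache_key('location_id=2&organization_id=1')` and `canonical_cache_key('organization_id=1&location_id=2')` produce the same key and therefore hit the same cache entry.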
The implementation uses three clearly separated layers:
- get '/' — handles errors only; delegates to registration_script
- registration_script — owns the cache key and the business logic (what to cache and what to do on a miss); raises ScriptFetchError on non-200 so errors never reach the cache write
- cache(key, &block) — owns the mechanism: per-key double-checked locking via a block abstraction that keeps the caller free of locking concerns
Per-key locking allows concurrent requests for genuinely different keys
(e.g. different activation keys) to fetch from Foreman in parallel, while
serialising only threads competing for the same key. The per-key Mutex is
evicted from KEY_MUTEXES immediately after caching — once a key is hot,
all future requests take the lock-free fast path and KEY_MUTEXES is empty
under steady state.
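The locking layer described above can be sketched as follows. The PR uses Concurrent::Map; here a plain Hash guarded by a registry mutex stands in so the example runs gem-free, and the constant names mirror the description but the details are illustrative:

```ruby
SCRIPT_CACHE   = {}          # key => [value, expires_at]
KEY_MUTEXES    = {}          # key => Mutex, present only while a fetch is in flight
REGISTRY_MUTEX = Mutex.new   # guards the two hashes above
TTL            = 300         # seconds (5 minutes)

def cache(key)
  # Fast path: a fresh entry is served without touching any per-key lock.
  entry = REGISTRY_MUTEX.synchronize { SCRIPT_CACHE[key] }
  return entry[0] if entry && entry[1] > Time.now

  # Slow path: one mutex per key, so distinct keys fetch in parallel
  # while threads competing for the same key are serialised.
  mutex = REGISTRY_MUTEX.synchronize { KEY_MUTEXES[key] ||= Mutex.new }
  begin
    mutex.synchronize do
      # Double-check: another thread may have filled the entry while we waited.
      entry = REGISTRY_MUTEX.synchronize { SCRIPT_CACHE[key] }
      return entry[0] if entry && entry[1] > Time.now

      value = yield  # raises on failure, so the cache write below never runs
      REGISTRY_MUTEX.synchronize { SCRIPT_CACHE[key] = [value, Time.now + TTL] }
      value
    end
  ensure
    # Evict the per-key mutex once the key is cached: future requests take
    # the fast path and KEY_MUTEXES drains to empty under steady state.
    REGISTRY_MUTEX.synchronize { KEY_MUTEXES.delete(key) }
  end
end
```

A second call with the same key returns from the fast path without invoking the block, and KEY_MUTEXES is empty once the fetch completes.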
Only HTTP 200 responses are cached. Non-200 responses raise ScriptFetchError
out of the cache block, which is rescued in get '/' and rendered via
handle_response without poisoning the cache.
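The shape of that miss handler might look like the following — a hedged sketch in which ScriptFetchError's interface and the response hash are assumptions; get '/' and handle_response are not shown:

```ruby
# Error type carrying the upstream response so get '/' can render it.
class ScriptFetchError < StandardError
  attr_reader :response

  def initialize(response)
    @response = response
    super("Foreman returned #{response[:status]}")
  end
end

# Only a 200 body is returned to the cache block; anything else raises
# out of the block before the cache write, so failures are never cached.
def fetch_script(response)
  raise ScriptFetchError, response unless response[:status] == 200
  response[:body]
end
```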
Both KEY_MUTEXES and SCRIPT_CACHE use Concurrent::Map (already a
smart-proxy dependency) for lock-free, thread-safe access on all Ruby
VMs without relying on MRI's GIL.
Tests added:
- Cache hit: Foreman called once for repeated identical requests
- Per-key isolation: different parameter sets cached independently
- Parameter order independence: requests differing only in param order
share the same cache entry
- TTL expiry: expired entries are not served; Foreman is re-called
- Mutex eviction: KEY_MUTEXES is empty after a successful cache write
- Error non-caching: Foreman is called on every request when it errors
- setup clears both SCRIPT_CACHE and KEY_MUTEXES between tests
Makes the script cache added in #39208 observable at debug log level:

registration_script cache=HIT age=42s key_prefix=org_id=1&location_id=...
registration_script cache=MISS key_prefix=org_id=1&location_id=...

The key is truncated to 40 characters to avoid flooding the log while still distinguishing between different parameter sets. Log level is debug so it is silent in production by default and available on demand via the smart-proxy log level setting.

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
smart-proxy uses a singleton logger with a single global :log_level — there is no per-module log level support. The existing "Started GET /" / "Finished GET" middleware already logs at INFO (one line per request); cache HIT/MISS is the same operational granularity and should be visible in production proxy.log without requiring global DEBUG mode.

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
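The log formatting described in these commits can be sketched as a small helper — the method name is hypothetical, and "logger" output is returned as a string rather than wired to smart-proxy's singleton logger:

```ruby
KEY_PREFIX_LEN = 40  # truncate the key to keep log lines short

def cache_log_line(hit:, key:, age: nil)
  prefix = key[0, KEY_PREFIX_LEN]
  if hit
    "registration_script cache=HIT age=#{age.to_i}s key_prefix=#{prefix}"
  else
    "registration_script cache=MISS key_prefix=#{prefix}"
  end
end
```

Long keys yield at most 40 characters after key_prefix=, which still distinguishes different parameter sets without flooding the log.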