spooder is a purpose-built server solution that shifts away from the dependency hell of the Node.js ecosystem, with a focus on stability and performance. To that end:
- It is built using the Bun runtime and not designed to be compatible with Node.js or other runtimes.
- It uses zero dependencies and relies only on code written explicitly for spooder or on APIs provided by the Bun runtime, often implemented in native code.
- It provides streamlined APIs for common server tasks in a minimalistic way, without the overhead of a full-featured web framework.
- It is opinionated in its design to reduce complexity and overhead.
The design goal behind spooder is not to provide a full-featured web server, but to expand the Bun runtime with a set of APIs and utilities that make it easy to develop servers with minimal overhead.
In addition to the core API provided here, you can also find spooderverse, a collection of drop-in modules designed for spooder with minimal overhead and zero dependencies.
Note
If you think spooder is missing a feature, consider opening an issue with your use-case. The goal behind spooder is to provide APIs that are useful for a wide range of use-cases, not to provide bespoke features better suited for userland.
It consists of two components, the CLI and the API.
- The CLI is responsible for keeping the server process running, applying updates in response to source control changes, and automatically raising issues on GitHub via the canary feature.
- The API provides a minimal building-block style API for developing servers, with a focus on simplicity and performance.
```sh
# Install globally for CLI runner usage.
bun add spooder --global

# Install into local package for API usage.
bun add spooder
```

Both the CLI and the API are configured in the same way, by providing a spooder object in your package.json file.
Below is a full map of the available configuration options in their default states. All configuration options are optional.
If there are any issues with the provided configuration, a warning will be printed to the console but will not halt execution. spooder will always fall back to default values where invalid configuration is provided.
Note
Configuration warnings do not raise caution events with the spooder canary functionality.
The CLI component of spooder is a global command-line tool for running server processes.
spooder exposes a simple yet powerful API for developing servers. The API is designed to be minimal to leave control in the hands of the developer and not add overhead for features you may not need.
- API > Cheatsheet
- API > Logging
- API > IPC
- API > HTTP
- API > Error Handling
- API > Workers
- API > Caching
- API > Templating
- API > Cache Busting
- API > Git
- API > Database
- API > Utilities
For convenience, it is recommended that you run this in a screen session.
```sh
screen -S my-website-about-fish.net
cd /var/www/my-website-about-fish.net/
spooder
```

spooder will launch your server by executing the run command provided in the configuration. If this is not defined, an error will be thrown.
```json
{
	"spooder": {
		"run": "bun run my_server.ts"
	}
}
```

While spooder uses a bun run command by default, it is possible to use any command string. For example, if you wanted to launch a server using node instead of bun, you could do the following.
```json
{
	"spooder": {
		"run": "node my_server.js"
	}
}
```

spooder can be started in development mode by providing the --dev flag when starting the server.
```sh
spooder --dev
```

The following differences will be observed when running in development mode:
- If run_dev is configured, it will be used instead of the default run command.
- Update commands defined in spooder.update will not be executed when starting a server.
- If the server crashes and auto_restart is configured, the server will not be restarted; spooder will exit with the same exit code as the server.
- If canary is configured, reports will not be dispatched to GitHub and will instead be printed to the console; this includes crash reports.
It is possible to detect in userland if a server is running in development mode by checking the SPOODER_ENV environment variable.
```ts
if (process.env.SPOODER_ENV === 'dev') {
	// Server is running in development mode.
}
```

You can configure a different command to run when in development mode using the run_dev option:
```json
{
	"spooder": {
		"run": "bun run server.ts",
		"run_dev": "bun run server.ts --inspect"
	}
}
```

Note
SPOODER_ENV should be either dev or prod. If the variable is not defined, the server was not started by the spooder CLI.
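A small helper for reading this convention could look like the following; `spooder_env` is a hypothetical name for illustration, not part of the spooder API:

```typescript
// Hypothetical helper (not part of spooder): normalize SPOODER_ENV.
// Returns null when the variable is unset or unrecognized, i.e. when
// the server was not started by the spooder CLI.
function spooder_env(): 'dev' | 'prod' | null {
	const env = process.env.SPOODER_ENV;
	return env === 'dev' || env === 'prod' ? env : null;
}
```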
Note
This feature is not enabled by default.
In the event that the server process exits, spooder can automatically restart it.
If the server exits with a non-zero exit code, this will be considered an unexpected shutdown. The process will be restarted using an exponential backoff strategy.
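As a rough sketch of how such a strategy grows the delay between attempts (the one-second base delay is an assumption here, not spooder's documented behavior):

```typescript
// Illustrative sketch only: compute a capped exponential backoff delay.
// The 1000 ms base delay is an assumed value for demonstration.
function backoff_delay(attempt: number, backoff_max = 300000): number {
	// attempt 0 → 1s, 1 → 2s, 2 → 4s, ... capped at backoff_max
	return Math.min(1000 * 2 ** attempt, backoff_max);
}
```

With a cap of 300000 ms (5 minutes), the delay stops growing once the doubling exceeds that value.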
```json
{
	"spooder": {
		"auto_restart": {
			"enabled": true,

			// max restarts before giving up
			"max_attempts": -1, // default (unlimited)

			// max delay (ms) between restart attempts
			"backoff_max": 300000, // default 5 min

			// grace period (ms) after which the backoff protocol resets
			"backoff_grace": 30000 // default 30s
		}
	}
}
```

If the server exits with a 0 exit code, this will be considered an intentional shutdown and spooder will execute the update commands before restarting the server.
Tip
An intentional shutdown can be useful for auto-updating in response to events, such as webhooks.
If the server exits with 42 (SPOODER_AUTO_UPDATE), the update commands will not be executed before starting the server. See Auto Update for more information.
Note
This feature is not enabled by default.
When starting or restarting a server process, spooder can automatically update the source code in the working directory. To enable this feature, the necessary update commands can be provided in the configuration as an array of strings.
```json
{
	"spooder": {
		"update": [
			"git reset --hard",
			"git clean -fd",
			"git pull origin main",
			"bun install"
		]
	}
}
```

Each command should be a separate entry in the array and will be executed in sequence. The server process will be started once all commands have resolved.
Important
Chaining commands using && or || operators does not work.
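Since each array entry runs as its own command, split chained commands into separate entries instead. For example, rather than a single `"git pull origin main && bun install"` entry:

```json
{
	"spooder": {
		"update": [
			"git pull origin main",
			"bun install"
		]
	}
}
```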
If a command in the sequence fails, the remaining commands will not be executed; however, the server will still be started. This is preferred over entering a restart loop or failing to start the server at all.
You can combine this with Auto Restart to automatically update your server in response to a webhook by exiting the process.
```ts
server.webhook(process.env.WEBHOOK_SECRET, '/webhook', payload => {
	setImmediate(async () => {
		await server.stop(false);
		process.exit(0);
	});

	return HTTP_STATUS_CODE.OK_200;
});
```

See Instancing for instructions on how to use Auto Update with multiple instances.
In addition to being skipped in dev mode, updates can also be skipped in production mode by passing the --no-update flag.
Note
This feature is not enabled by default.
By default, spooder will start and manage a single process as defined by the run and run_dev configuration properties. In some scenarios, you may want multiple processes for a single codebase, such as variant sub-domains.
This can be configured in spooder using the instances array, with each entry defining a unique instance.
```json
"spooder": {
	"instances": [
		{
			"id": "dev01",
			"run": "bun run --env-file=.env.a index.ts",
			"run_dev": "bun run --env-file=.env.a.dev index.ts --inspect"
		},
		{
			"id": "dev02",
			"run": "bun run --env-file=.env.b index.ts",
			"run_dev": "bun run --env-file=.env.b.dev index.ts --inspect"
		}
	]
}
```

Instances will be managed individually in the same manner as a single process, including auto-restarting and other functionality.
By default, all instances are launched immediately. This behavior can be configured with the instance_stagger_interval configuration property, which defines an interval in milliseconds between instance launches.
This interval affects server start-up, auto-restarting, and crash recovery. No two instances will be launched within that interval, regardless of the reason.
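For example, a two-second stagger between instance launches could be configured as follows (a sketch using the property described above):

```json
{
	"spooder": {
		"instance_stagger_interval": 2000
	}
}
```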
The canary feature functions the same for multiple instances as it would for a single instance with the caveat that the instance object as defined in the configuration is included in the crash report for diagnostics.
This allows you to define custom properties on the instance which will be included as part of the crash report.
```json
{
	"id": "dev01",
	"run": "bun run --env-file=.env.a index.ts",
	"sub_domain": "dev01.spooder.dev" // custom, for diagnostics
}
```

Important
For this reason, you should not include sensitive or confidential credentials in your instance configuration. Credentials should always be handled using environment variables or credential storage.
Combining Auto Restart and Auto Update, when a server process exits with a zero exit code, the update commands will be run as the server restarts. This is suitable for a single-instance setup.
In the event of multiple instances, this does not work. One server instance would receive the webhook and exit, resulting in the update commands being run and that instance being restarted, leaving the other instances still running.
A solution might be to send the webhook to every instance, but then each instance would restart individually, running the update commands unnecessarily and, if run at the same time, causing conflicts. In addition, multiple instances in spooder are intended to operate from a single codebase, which makes sending multiple webhooks a challenge - so don't do this.
The solution is to use IPC to instruct the host process to handle this.
```ts
server.webhook(process.env.WEBHOOK_SECRET, '/webhook', payload => {
	setImmediate(async () => {
		ipc_send(IPC_TARGET.SPOODER, IPC_OP.CMSG_TRIGGER_UPDATE);
	});

	return HTTP_STATUS_CODE.OK_200;
});

ipc_register(IPC_OP.SMSG_UPDATE_READY, async () => {
	await server.stop(false);
	process.exit(EXIT_CODE.SPOODER_AUTO_UPDATE);
});
```

In this scenario, the instance receiving the webhook instructs the host process to apply the updates. Once the update commands have been run, all instances are sent the SMSG_UPDATE_READY event, indicating they can restart.
Exiting with the SPOODER_AUTO_UPDATE exit code instructs spooder that we're exiting as part of this process, and prevents auto-update from running on restart.
Note
This feature is not enabled by default.
canary is a feature in spooder which allows server problems to be raised as issues in your repository on GitHub.
To enable this feature, you will need a GitHub app which has access to your repository and a corresponding private key. If you do not already have those, instructions can be found below.
GitHub App Setup
Create a new GitHub App either on your personal account or on an organization. The app will need the following permissions:
- Issues - Read & Write
- Metadata - Read-only
Once created, install the GitHub App to your account. The app will need to be given access to the repositories you want to use the canary feature with.
In addition to the App ID that is assigned automatically, you will also need to generate a Private Key for the app. This can be done by clicking the Generate a private key button on the app page.
Note
The private keys provided by GitHub are in PKCS#1 format, but only PKCS#8 is supported. You can convert the key file with the following command.

```sh
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in private-key.pem -out private-key-pkcs8.key
```

Each server that intends to use the canary feature will need the private key installed somewhere the server process can access it.
```json
"spooder": {
	"canary": {
		"enabled": true,
		"account": "<GITHUB_ACCOUNT_NAME>",
		"repository": "<GITHUB_REPOSITORY>",
		"labels": ["some-label"]
	}
}
```

Replace <GITHUB_ACCOUNT_NAME> with the account name you have installed the GitHub App to, and <GITHUB_REPOSITORY> with the repository name you want to use for issues.
The repository name must be in the full-name format owner/repo (e.g. facebook/react).
The labels property can be used to provide a list of labels to automatically add to the issue. This property is optional and can be omitted.
The following two environment variables must be defined on the server.
```
SPOODER_CANARY_APP_ID=1234
SPOODER_CANARY_KEY=/home/bond/.ssh/id_007_pcks8.key
```
SPOODER_CANARY_APP_ID is the App ID as shown on the GitHub App page.
SPOODER_CANARY_KEY is the path to the private key file in PKCS#8 format.
Note
Since spooder uses the Bun runtime, you can use the .env.local file in the project root directory to set these environment variables per-project.
Once configured, spooder will automatically raise an issue when the server exits with a non-zero exit code.
In addition, you can manually raise issues using the spooder API by calling caution() or panic(). More information about these functions can be found in the API section.
If canary has not been configured correctly, spooder will only print warnings to the console when it attempts to raise an issue.
Warning
Consider testing the canary feature with the caution() function before relying on it for critical issues.
It is recommended that you harden your server code against unexpected exceptions and use panic() and caution() to raise issues with selected diagnostic information.
In the event that the server does encounter an unexpected exception which causes it to exit with a non-zero exit code, spooder will provide some diagnostic information in the canary report.
Since this issue has been caught externally, spooder has no context of the exception which was raised. Instead, the canary report will contain the output from both stdout and stderr.
```json
{
	"proc_exit_code": 1,
	"console_output": [
		"[2.48ms] \".env.local\"",
		"Test output",
		"Test output",
		"4 | console.warn('Test output');",
		"5 | ",
		"6 | // Create custom error class.",
		"7 | class TestError extends Error {",
		"8 | constructor(message: string) {",
		"9 | super(message);",
		" ^",
		"TestError: Home is [IPv4 address]",
		" at new TestError (/mnt/i/spooder/test.ts:9:2)",
		" at /mnt/i/spooder/test.ts:13:6",
		""
	]
}
```

The proc_exit_code property contains the exit code that the server exited with.
The console_output will contain the last 64 lines of output from stdout and stderr combined. This can be configured by setting the spooder.canary.crash_console_history property to a length of your choice.
```json
{
	"spooder": {
		"canary": {
			"crash_console_history": 128
		}
	}
}
```

This information is subject to sanitization, as described in the CLI > Canary > Sanitization section; however, you should be aware that stack traces may contain sensitive information.
Setting spooder.canary.crash_console_history to 0 will omit the console_output property from the report entirely, which may make it harder to diagnose the problem but will ensure that no sensitive information is leaked.
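The history-trimming behavior described above can be sketched as follows (assumed semantics for illustration, not spooder's exact implementation):

```typescript
// Sketch: keep only the last `n` lines of combined console output.
// A value of 0 (or less) keeps nothing, mirroring the omission behavior
// described above.
function tail_lines(lines: string[], n: number): string[] {
	return n <= 0 ? [] : lines.slice(-n);
}
```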
All reports sent via the canary feature are sanitized to prevent sensitive information from being leaked. This includes:
- Environment variables from .env.local
- IPv4 / IPv6 addresses.
- E-mail addresses.
```
# .env.local
DB_PASSWORD=secret
```

```ts
await panic({
	a: 'foo',
	b: process.env.DB_PASSWORD,
	c: 'Hello user@example.com',
	d: 'Client: 192.168.1.1'
});
```

```json
[
	{
		"a": "foo",
		"b": "[redacted]",
		"c": "Hello [e-mail address]",
		"d": "Client: [IPv4 address]"
	}
]
```

The sanitization behavior can be disabled by setting spooder.canary.sanitize to false in the configuration. This is not recommended as it may leak sensitive information.
```json
{
	"spooder": {
		"canary": {
			"sanitize": false
		}
	}
}
```

Warning
While this sanitization adds a layer of protection against information leaking, it does not catch everything. You should pay special attention to messages and objects provided to the canary to not unintentionally leak sensitive information.
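To illustrate the kind of pattern-based redaction described above, here is a minimal sketch; the regular expressions are deliberately simplistic assumptions, not spooder's actual rules:

```typescript
// Minimal illustrative sketch of pattern-based redaction.
// These patterns are simplified stand-ins and will miss many cases,
// which is exactly the limitation the warning above describes.
function sanitize(input: string): string {
	return input
		.replace(/[\w.+-]+@[\w.-]+\.\w+/g, '[e-mail address]')
		.replace(/\b(?:\d{1,3}\.){3}\d{1,3}\b/g, '[IPv4 address]');
}
```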
In addition to the information provided by the developer, spooder also includes some system information in the canary reports.
```json
{
	"loadavg": [0, 0, 0],
	"memory": {
		"free": 7620907008,
		"total": 8261840896
	},
	"platform": "linux",
	"uptime": 7123,
	"versions": {
		"node": "18.15.0",
		"bun": "0.6.5",
		"webkit": "60d11703a533fd694cd1d6ddda04813eecb5d69f",
		"boringssl": "b275c5ce1c88bc06f5a967026d3c0ce1df2be815",
		"libarchive": "dc321febde83dd0f31158e1be61a7aedda65e7a2",
		"mimalloc": "3c7079967a269027e438a2aac83197076d9fe09d",
		"picohttpparser": "066d2b1e9ab820703db0837a7255d92d30f0c9f5",
		"uwebsockets": "70b1b9fc1341e8b791b42c5447f90505c2abe156",
		"zig": "0.11.0-dev.2571+31738de28",
		"zlib": "885674026394870b7e7a05b7bf1ec5eb7bd8a9c0",
		"tinycc": "2d3ad9e0d32194ad7fd867b66ebe218dcc8cb5cd",
		"lolhtml": "2eed349dcdfa4ff5c19fe7c6e501cfd687601033",
		"ares": "0e7a5dee0fbb04080750cf6eabbe89d8bae87faa",
		"usockets": "fafc241e8664243fc0c51d69684d5d02b9805134",
		"v8": "10.8.168.20-node.8",
		"uv": "1.44.2",
		"napi": "8",
		"modules": "108"
	},
	"bun": {
		"version": "0.6.5",
		"rev": "f02561530fda1ee9396f51c8bc99b38716e38296",
		"memory_usage": {
			"rss": 99672064,
			"heapTotal": 3039232,
			"heapUsed": 2332783,
			"external": 0,
			"arrayBuffers": 0
		},
		"cpu_usage": {
			"user": 50469,
			"system": 0
		}
	}
}
```

```ts
// logging
log(message: string, ...params: any[]);
log_error(message: string, ...params: any[]);
log_create_logger(prefix: string, color: ColorInput);
log_list(input: any[], delimiter = ', ');

// http
http_serve(port: number, hostname?: string): Server;
server.stop(immediate: boolean): Promise<void>;

// cookies
cookies_get(req: Request): Bun.CookieMap;

// routing
server.route(path: string, handler: RequestHandler, method?: HTTP_METHODS);
server.json(path: string, handler: JSONRequestHandler, method?: HTTP_METHODS);
server.throttle(delta: number, handler: JSONRequestHandler | RequestHandler);
server.unroute(path: string);

// fallback handlers
server.handle(status_code: number, handler: RequestHandler);
server.default(handler: DefaultHandler);
server.error(handler: ErrorHandler);
server.on_slow_request(callback: SlowRequestCallback, threshold?: number);
server.allow_slow_request(req: Request);

// http generics
http_apply_range(file: BunFile, request: Request): HandlerReturnType;

// directory serving
server.dir(path: string, dir: string, options?: DirOptions | DirHandler, method?: HTTP_METHODS);

// server-sent events
server.sse(path: string, handler: ServerSentEventHandler);

// webhooks
server.webhook(secret: string, path: string, handler: WebhookHandler, branches?: string | string[]);

// websockets
server.websocket(path: string, handlers: WebsocketHandlers);

// bootstrap
server.bootstrap(options: BootstrapOptions): Promise<void>;

// error handling
ErrorWithMetadata(message: string, metadata: object);
caution(err_message_or_obj: string | object, ...err: object[]): Promise<void>;
panic(err_message_or_obj: string | object, ...err: object[]): Promise<void>;
safe(fn: Callable): Promise<void>;

// worker (main thread)
worker_pool(options: WorkerPoolOptions): Promise<WorkerPool>;
pool.id: string;
pool.send: (peer: string, id: string, data?: WorkerMessageData) => void;
pool.broadcast: (id: string, data?: WorkerMessageData) => void;
pool.on: (event: string, callback: (message: WorkerMessage) => Promise<void> | void) => void;
pool.once: (event: string, callback: (message: WorkerMessage) => Promise<void> | void) => void;
pool.off: (event: string) => void;

type WorkerPoolOptions = {
	id?: string;
	worker: string | string[];
	size?: number;
	auto_restart?: boolean | AutoRestartConfig;
	onWorkerStart?: (pool: WorkerPool, worker_id: string) => void;
	onWorkerStop?: (pool: WorkerPool, worker_id: string, exit_code: number) => void;
};

type AutoRestartConfig = {
	backoff_max?: number; // default: 5 * 60 * 1000 (5 min)
	backoff_grace?: number; // default: 30000 (30 seconds)
	max_attempts?: number; // default: 5, -1 for unlimited
};

// worker (worker thread)
worker_connect(peer_id?: string): WorkerPool;

// templates
type Replacements = Record<string, string | Array<string> | object | object[]> | ReplacerFn | AsyncReplaceFn;
parse_template(template: string, replacements: Replacements, drop_missing?: boolean): Promise<string>;

// cache busting
cache_bust(path: string | string[], format: string): string | string[];
cache_bust_set_hash_length(length: number): void;
cache_bust_set_format(format: string): void;
cache_bust_get_hash_table(): Record<string, string>;

// git
git_get_hashes(length: number): Promise<Record<string, string>>;
git_get_hashes_sync(length: number): Record<string, string>;

// database utilities
db_set_cast<T extends string>(set: string | null): Set<T>;
db_set_serialize<T extends string>(set: Iterable<T> | null): string;
db_exists(db: SQL, table_name: string, value: string | number, column_name = 'id'): Promise<boolean>;

// database schema
type SchemaOptions = {
	schema_table?: string;
	recursive?: boolean;
};
db_get_schema_revision(db: SQL): Promise<number | null>;
db_schema(db: SQL, schema_path: string, options?: SchemaOptions): Promise<boolean>;

// caching
cache_http(options?: CacheOptions);
cache.file(file_path: string): RequestHandler;
cache.request(req: Request, cache_key: string, content_generator: () => string | Promise<string>): Promise<Response>;

// utilities
filesize(bytes: number): string;
BiMap: class BiMap<K, V>;

// ipc
ipc_register(op: number, callback: IPC_Callback);
ipc_send(target: string, op: number, data?: object);

// constants
HTTP_STATUS_TEXT: Record<number, string>;
HTTP_STATUS_CODE: { OK_200: 200, NotFound_404: 404, ... };
EXIT_CODE: Record<string, number>;
EXIT_CODE_NAMES: Record<number, string>;
IPC_TARGET: Record<string, string>;
IPC_OP: Record<number, number>;
```

Print a message to the console using the default logger. Wrapping text segments in curly braces will highlight those segments with colour.
```ts
log('Hello, {world}!');
// > [info] Hello, world!
```

Tagged template literals are also supported and automatically highlight values without the brace syntax.
```ts
const user = 'Fred';
log`Hello ${user}!`;
```

Formatting parameters are supported using standard console logging formatters.
```ts
log('My object: %o', { foo: 'bar' });
// > [info] My object: { foo: 'bar' }
```

| Specifier | Description |
|---|---|
| `%s` | String |
| `%d` | Integer |
| `%i` | Integer (same as `%d`) |
| `%f` | Floating point |
| `%o` | Object (pretty-printed) |
| `%O` | Object (expanded/detailed) |
| `%j` | JSON string |
Print an error message to the console. Wrapping text segments in curly braces will highlight those segments. This works the same as log() except it's red, so you know it's bad.
```ts
log_error('Something went {really} wrong');
// > [error] Something went really wrong
```

Create a log() function with a custom prefix and highlight colour.
```ts
const db_log = log_create_logger('db', 'pink');
db_log('Creating table {users}...');
```

Note
For information about ColorInput, see the Bun Color API.
Utility function that joins an array of items together with each element wrapped in highlighting syntax for logging.
```ts
const fruit = ['apple', 'orange', 'peach'];
log(`Fruit must be one of ${fruit.map(e => `{${e}}`).join(', ')}`);
log(`Fruit must be one of ${log_list(fruit)}`);
```

spooder provides a way to send/receive messages between different instances via IPC. See CLI > Instancing for documentation on instances.
```ts
// listen for a message
ipc_register(0x1, msg => {
	// msg.peer, msg.op, msg.data
	console.log(msg.data.foo); // 42
});

// send a message to dev02
ipc_send('dev02', 0x1, { foo: 42 });

// send a message to all other instances
ipc_send(IPC_TARGET.BROADCAST, 0x1, { foo: 42 });
```

This can also be used to communicate with the host process for certain functionality, such as auto-restarting.
When sending/receiving IPC messages, the message will include an opcode. When communicating with the host process, that will be one of the following:
```ts
IPC_OP.CMSG_TRIGGER_UPDATE = -1;
IPC_OP.SMSG_UPDATE_READY = -2;
IPC_OP.CMSG_REGISTER_LISTENER = -3; // used internally by ipc_register
```

When sending/receiving your own messages, you can define and use your own ID schema. To prevent conflicts with internal opcodes, always use positive values; spooder internal opcodes will always be negative.
Register a listener for IPC events. The callback will receive an object with this structure:
```ts
type IPC_Message = {
	op: number;   // opcode received
	peer: string; // sender
	data?: object // payload data (optional)
};
```

Send an IPC event. The target can either be the ID of another instance (such as the peer ID from an IPC_Message) or one of the following constants.
```ts
IPC_TARGET.SPOODER;   // communicate with the host
IPC_TARGET.BROADCAST; // broadcast to all other instances
```

Bootstrap a server on the specified port (and optional hostname).
```ts
import { http_serve } from 'spooder';

const server = http_serve(8080); // port only
const server = http_serve(3000, '0.0.0.0'); // optional hostname
```

By default, the server responds with:
```http
HTTP/1.1 404 Not Found
Content-Length: 9
Content-Type: text/plain;charset=utf-8

Not Found
```

Stop the server process immediately, terminating all in-flight requests.
```ts
server.stop(true);
```

Stop the server process gracefully, waiting for all in-flight requests to complete.
```ts
server.stop(false);
```

server.stop() returns a promise which, if awaited, resolves once all pending connections have completed.
```ts
await server.stop(false);
// do something now all connections are done
```

Register a handler for a specific path.
```ts
server.route('/test/route', (req, url) => {
	return new Response('Hello, world!', { status: 200 });
});
```

Unregister a specific route.
```ts
server.route('/test/route', () => {});
server.unroute('/test/route');
```

Throttles requests going through the provided handler so that they take a minimum of delta milliseconds. Useful for preventing brute-force of sensitive endpoints.
Important
This is a rudimentary countermeasure for brute-force attacks, not a defence against timing attacks. Always use constant-time/timing-safe comparison functions in sensitive endpoints.
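The throttling described here amounts to enforcing a minimum elapsed time on a handler. A standalone sketch of that idea (assumed mechanics; spooder's internal implementation may differ):

```typescript
// Sketch: run `fn`, then delay the result so at least `delta` ms
// elapse overall, regardless of how quickly `fn` itself completed.
async function with_min_delay<T>(delta: number, fn: () => Promise<T>): Promise<T> {
	const start = Date.now();
	const result = await fn();
	const remaining = delta - (Date.now() - start);
	if (remaining > 0)
		await new Promise(res => setTimeout(res, remaining));
	return result;
}
```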
```ts
server.json('/api/login', server.throttle(1000, (req, url, json) => {
	// this endpoint will always take at least 1000ms to execute
}));

// works with regular routes
server.route('/reset-password', server.throttle(1000, (req, url) => {
	// this route will also take at least 1000ms to execute
}));
```

Register a JSON endpoint with automatic content validation. This method automatically validates that the request has the correct Content-Type: application/json header and that the request body contains a valid JSON object.
```ts
server.json('/api/users', (req, url, json) => {
	// json is automatically parsed and validated as a plain object
	const name = json.name;
	const email = json.email;

	// Process the JSON data
	return { success: true, id: 123 };
});
```

By default, JSON routes are registered as POST endpoints, but this can be customized:
```ts
server.json('/api/data', (req, url, json) => {
	return { received: json };
}, 'PUT');
```

The handler will automatically return 400 Bad Request if:
- The Content-Type header is not application/json
- The request body is not valid JSON
- The JSON is not a plain object (e.g., it's an array, null, or a primitive value)
By default, spooder will register routes defined with server.route() and server.dir() as GET routes, while server.json() routes default to POST. Requests to these routes with other methods will return 405 Method Not Allowed.
Note
spooder does not automatically handle HEAD requests natively.
This can be controlled by providing the method parameter with a string or array defining one or more of the following methods.
GET | HEAD | POST | PUT | DELETE | CONNECT | OPTIONS | TRACE | PATCH
```ts
server.route('/test/route', (req, url) => {
	if (req.method === 'GET') {
		// Handle GET request.
	} else if (req.method === 'POST') {
		// Handle POST request.
	}
}, ['GET', 'POST']);
```

Note
Routes defined with .sse() or .webhook() are always registered as 'GET' and 'POST' respectively and cannot be configured.
spooder does not provide a built-in redirection handler since it's trivial to implement one using Response.redirect, part of the standard Web API.
```ts
server.route('/redirect', () => Response.redirect('/redirected', HTTP_STATUS_CODE.MovedPermanently_301));
```

spooder exposes HTTP_STATUS_TEXT to conveniently access status code text, and HTTP_STATUS_CODE for named status code constants.
```ts
import { HTTP_STATUS_TEXT, HTTP_STATUS_CODE } from 'spooder';

server.default((req, status_code) => {
	// status_code: 404
	// Body: Not Found
	return new Response(HTTP_STATUS_TEXT[status_code], { status: status_code });
});

// Using named constants for better readability
server.route('/api/users', (req, url) => {
	if (!isValidUser(req))
		return HTTP_STATUS_CODE.Unauthorized_401;

	// Process user request
	return HTTP_STATUS_CODE.OK_200;
});
```

RequestHandler is a function that accepts a Request object and a URL object and returns a HandlerReturnType.
HandlerReturnType must be one of the following.
| Type | Description |
|---|---|
| `Response` | https://developer.mozilla.org/en-US/docs/Web/API/Response |
| `Blob` | https://developer.mozilla.org/en-US/docs/Web/API/Blob |
| `BunFile` | https://bun.sh/docs/api/file-io |
| `object` | Will be serialized to JSON. |
| `string` | Will be sent as `text/html`. |
| `number` | Sets the status code and sends the status message as plain text. |
Note
For custom JSON serialization on an object/class, implement the toJSON() method.
HandlerReturnType can also be a promise resolving to any of the above types, which will be awaited before the response is sent.
Note
Returning Bun.file() directly is the most efficient way to serve static files as it uses system calls to stream the file directly to the client without loading into user-space.
Query parameters can be accessed from the searchParams property on the URL object.
```ts
server.route('/test', (req, url) => {
	return new Response(url.searchParams.get('foo'), { status: 200 });
});
```

```http
GET /test?foo=bar HTTP/1.1

HTTP/1.1 200 OK
Content-Length: 3

bar
```

Named parameters can be used in paths by prefixing a path segment with a colon.
Important
Named parameters will overwrite existing query parameters with the same name.
```ts
server.route('/test/:param', (req, url) => {
	return new Response(url.searchParams.get('param'), { status: 200 });
});
```

Wildcards can be used to match any path that starts with a given path.
Note
If you intend to use this for directory serving, you may be better suited looking at the server.dir() function.
```ts
server.route('/test/*', (req, url) => {
	return new Response('Hello, world!', { status: HTTP_STATUS_CODE.OK_200 });
});
```

Important
Routes are FIFO and wildcards are greedy. Wildcards should be registered last to ensure they do not consume more specific routes.
```ts
server.route('/*', () => HTTP_STATUS_CODE.MovedPermanently_301);
server.route('/test', () => HTTP_STATUS_CODE.OK_200);

// Accessing /test returns 301 here, because /* matches /test first.
```

Register a custom handler for a specific status code.
```ts
server.handle(HTTP_STATUS_CODE.InternalServerError_500, (req) => {
	return new Response('Custom Internal Server Error Message', { status: HTTP_STATUS_CODE.InternalServerError_500 });
});
```

Register a handler for all unhandled response codes.
Note
If you return a Response object from here, you must explicitly set the status code.
```ts
server.default((req, status_code) => {
	return new Response(`Custom handler for: ${status_code}`, { status: status_code });
});
```

Register a handler for uncaught errors.
Note
Unlike other handlers, this should only return Response or Promise<Response>.
```ts
server.error((err, req, url) => {
	return new Response('Custom Internal Server Error Message', { status: HTTP_STATUS_CODE.InternalServerError_500 });
});
```

Important
It is highly recommended to use caution() or some form of reporting to notify you when this handler is called, as it means an error went entirely uncaught.
```ts
server.error((err, req, url) => {
	// Notify yourself of the error.
	caution({ err, url });

	// Return a response to the client.
	return new Response('Custom Internal Server Error Message', { status: HTTP_STATUS_CODE.InternalServerError_500 });
});
```

server.on_slow_request can be used to register a callback for requests that take an undesirable amount of time to process.
By default, requests that take longer than 1000ms to process will trigger the callback, but this can be adjusted by providing a custom threshold.
Important
If your canary reports to a public repository, be cautious about directly including the req object in the callback. This can lead to sensitive information being leaked.
server.on_slow_request(async (req, time, url) => {
// avoid `time` in the title to avoid canary spam
// see caution() API for information
await caution('Slow request warning', { req, time });
}, 500);

Note
The callback is not awaited internally, so you can use async/await freely without blocking the server/request.
In some scenarios, mitigation throttling or heavy workloads may cause slow requests intentionally. To prevent these from triggering a caution, requests can be marked as slow.
server.on_slow_request(async (req, time, url) => {
await caution('Slow request warning', { req, time });
}, 500);
server.route('/test', async (req) => {
// this request is marked as slow, therefore won't
// trigger on_slow_request despite taking 5000ms+
server.allow_slow_request(req);
await new Promise(res => setTimeout(res, 5000));
});

Note
This will have no effect if a handler hasn't been registered with on_slow_request.
Serve files from a directory.
server.dir('/content', './public/content');

Important
server.dir registers a wildcard route. Routes are FIFO and wildcards are greedy. Directories should be registered last to ensure they do not consume more specific routes.
server.dir('/', '/files');
server.route('/test', () => 200);
// Route / is equal to /* with server.dir()
// Accessing /test returns 404 here because /files/test does not exist.

You can configure directory behavior using the DirOptions interface:
interface DirOptions {
ignore_hidden?: boolean; // default: true
index_directories?: boolean; // default: false
support_ranges?: boolean; // default: true
}

Options-based configuration:
// Enable directory browsing with HTML listings
server.dir('/files', './public', { index_directories: true });
// Serve hidden files and disable range requests
server.dir('/files', './public', {
ignore_hidden: false,
support_ranges: false
});
// Full configuration
server.dir('/files', './public', {
ignore_hidden: true,
index_directories: true,
support_ranges: true
});

When index_directories is enabled, accessing a directory will return a styled HTML page listing the directory contents with file and folder icons.
For complete control, provide a custom handler function:
server.dir('/static', '/static', (file_path, file, stat, request, url) => {
// ignore hidden files by default, return 404 to prevent file sniffing
if (path.basename(file_path).startsWith('.'))
return HTTP_STATUS_CODE.NotFound_404;
if (stat.isDirectory())
return HTTP_STATUS_CODE.Unauthorized_401;
return http_apply_range(file, request);
});

| Parameter | Type | Reference |
|---|---|---|
| `file_path` | `string` | The path to the file on disk. |
| `file` | `BunFile` | https://bun.sh/docs/api/file-io |
| `stat` | `fs.Stats` | https://nodejs.org/api/fs.html#class-fsstats |
| `request` | `Request` | https://developer.mozilla.org/en-US/docs/Web/API/Request |
| `url` | `URL` | https://developer.mozilla.org/en-US/docs/Web/API/URL |
Asynchronous directory handlers are supported and will be awaited.
server.dir('/static', '/static', async (file_path, file) => {
let file_contents = await file.text();
// do something with file_contents
return file_contents;
});

Note
The directory handler function is only called for files that exist on disk - including directories.
Note
Uncaught ENOENT errors thrown from the directory handler will return a 404 response, other errors will return a 500 response.
http_apply_range parses the Range header for a request and slices the file accordingly. This is used internally by server.dir() and exposed for convenience.
server.route('/test', (req, url) => {
const file = Bun.file('./test.txt');
return http_apply_range(file, req);
});

GET /test HTTP/1.1
Range: bytes=0-5
HTTP/1.1 206 Partial Content
Content-Length: 6
Content-Range: bytes 0-5/6
Content-Type: text/plain;charset=utf-8
Hello,

Setup a server-sent event stream.
server.sse('/sse', (req, url, client) => {
client.message('Hello, client!'); // Unnamed event.
client.event('named_event', 'Hello, client!'); // Named event.
client.message(JSON.stringify({ foo: 'bar' })); // JSON message.
});

client.closed is a promise that resolves when the client closes the connection.
const clients = new Set();
server.sse('/sse', (req, url, client) => {
clients.add(client);
client.closed.then(() => clients.delete(client));
});

Connections can be manually closed with client.close(). This will also resolve the client.closed promise.
server.sse('/sse', (req, url, client) => {
client.message('Hello, client!');
setTimeout(() => {
client.message('Goodbye, client!');
client.close();
}, 5000);
});

server.webhook(secret: string, path: string, handler: WebhookHandler, branches?: string | string[])
Setup a webhook handler.
server.webhook(process.env.WEBHOOK_SECRET, '/webhook', payload => {
// React to the webhook.
return HTTP_STATUS_CODE.OK_200;
});

You can optionally filter webhooks by branch name using the branches parameter:
// Only trigger for main branch
server.webhook(process.env.WEBHOOK_SECRET, '/webhook', payload => {
// This will only fire for pushes to main branch
return HTTP_STATUS_CODE.OK_200;
}, 'main');
// Trigger for multiple branches
server.webhook(process.env.WEBHOOK_SECRET, '/webhook', payload => {
// This will fire for pushes to main or staging branches
return HTTP_STATUS_CODE.OK_200;
}, ['main', 'staging']);When branch filtering is enabled, the webhook handler will only be called for pushes to the specified branches. The branch name is extracted from the payload's ref field (e.g., refs/heads/main becomes main).
A webhook callback will only be called if the following criteria are met by a request:

- Request method is POST (returns 405 otherwise)
- Header X-Hub-Signature-256 is present (returns 400 otherwise)
- Header Content-Type is application/json (returns 401 otherwise)
- Request body is a valid JSON object (returns 500 otherwise)
- HMAC signature of the request body matches the X-Hub-Signature-256 header (returns 401 otherwise)
- If branch filtering is enabled, the push must be to one of the specified branches (returns 200 but is otherwise ignored)
Note
Constant-time comparison is used to prevent timing attacks when comparing the HMAC signature.
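For reference, the signature scheme described above can be reproduced in userland. This is a sketch using node:crypto to illustrate the check, not spooder's actual implementation:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// GitHub signs the raw request body with HMAC-SHA256 of the webhook secret
// and sends the hex digest as: X-Hub-Signature-256: sha256=<digest>
function sign_payload(secret: string, body: string): string {
	return 'sha256=' + createHmac('sha256', secret).update(body).digest('hex');
}

function verify_signature(secret: string, body: string, header: string): boolean {
	const expected = Buffer.from(sign_payload(secret, body));
	const received = Buffer.from(header);

	// timingSafeEqual requires equal-length buffers, so compare lengths first;
	// the constant-time comparison prevents timing attacks
	return expected.length === received.length && timingSafeEqual(expected, received);
}
```

This is also useful for generating valid signatures when testing a webhook endpoint locally.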
Register a route which handles websocket connections.
server.websocket('/path/to/websocket', {
// all of these handlers are OPTIONAL
accept: (req, url) => {
// validates a request before it is upgraded
// returns HTTP 401 if FALSE is returned
// allows you to check headers/authentication
// url parameter contains query parameters from route
// if an OBJECT is returned, the object will
// be accessible on the websocket as ws.data.*
return true;
},
open: (ws) => {
// called when a websocket client connects
},
close: (ws, code, reason) => {
// called when a websocket client disconnects
},
message: (ws, message) => {
// called when a websocket message is received
// message is a string or buffer
},
message_json: (ws, data) => {
// called when a websocket message is received
// payload is parsed as JSON
// if payload cannot be parsed, socket will be
// closed with error 1003: Unsupported Data
// messages are only internally parsed if this
// handler is present
},
drain: (ws) => {
// called when a websocket with backpressure drains
}
});

Important
While it is possible to register multiple routes for websockets, the only handler which is unique per route is accept(). The last handlers provided to any route (with the exception of accept()) will apply to ALL websocket routes. This is a limitation in Bun.
spooder provides a building-block style API with the intention of giving you the blocks to construct a server your way, rather than being shoe-horned into one over-engineered mega-solution which you don't need.
For simpler projects, the scaffolding often looks much the same, similar to the example below.
import { http_serve, cache_http, parse_template, http_apply_range, git_get_hashes } from 'spooder';
import path from 'node:path';
const server = http_serve(80);
const cache = cache_http({
ttl: 5 * 60 * 60 * 1000, // 5 hours
max_size: 5 * 1024 * 1024, // 5 MB
use_canary_reporting: true,
use_etags: true
});
const base_file = await Bun.file('./html/base_template.html').text();
const git_hash_table = await git_get_hashes();
async function default_handler(status_code: number): Promise<Response> {
const error_text = HTTP_STATUS_CODE[status_code] as string;
const error_page = await Bun.file('./html/error.html').text();
const content = await parse_template(error_page, {
title: error_text,
error_code: status_code.toString(),
error_text: error_text
}, true);
return new Response(content, { status: status_code });
}
server.error((err: Error) => {
caution(err?.message ?? err);
return default_handler(HTTP_STATUS_CODE.InternalServerError_500);
});
server.default((req, status_code) => default_handler(status_code));
server.dir('/static', './static', async (file_path, file, stat, request) => {
// ignore hidden files by default, return 404 to prevent file sniffing
if (path.basename(file_path).startsWith('.'))
return HTTP_STATUS_CODE.NotFound_404;
if (stat.isDirectory())
return HTTP_STATUS_CODE.Unauthorized_401;
// serve css/js files directly
const ext = path.extname(file_path);
if (ext === '.css' || ext === '.js') {
const content = await parse_template(await file.text(), {
cache_bust: (file) => `${file}?v=${git_hash_table[file]}`
}, true);
return new Response(content, {
headers: {
'Content-Type': file.type
}
});
}
return http_apply_range(file, request);
});
function add_route(route: string, file: string, title: string) {
server.route(route, async (req) => {
return cache.request(req, route, async () => {
const file_content = await Bun.file(file).text();
const template = await parse_template(base_file, {
title: title,
content: file_content,
asset: (file) => git_hash_table[file]
}, true);
return template;
});
});
}
add_route('/', './html/index.html', 'Homepage');
add_route('/about', './html/about.html', 'About Us');
add_route('/contact', './html/contact.html', 'Contact Us');
add_route('/privacy', './html/privacy.html', 'Privacy Policy');
add_route('/terms', './html/terms.html', 'Terms of Service');

For a project where you are looking for fine control, this may be acceptable, but for bootstrapping simple servers this can be a lot of boilerplate. This is where server.bootstrap comes in.
Bootstrap a server using spooder utilities with a straight-forward options API, cutting out the boilerplate.
const server = http_serve(80);
server.bootstrap({
base: Bun.file('./html/base_template.html'),
drop_missing_subs: false,
cache: {
ttl: 5 * 60 * 60 * 1000, // 5 hours
max_size: 5 * 1024 * 1024, // 5 MB
use_canary_reporting: true,
use_etags: true
},
error: {
use_canary_reporting: true,
error_page: Bun.file('./html/error.html')
},
cache_bust: { // true or options
format: '$file#$hash', // default: $file?v=$hash
hash_length: 20, // default: 7
prefix: 'bust' // default: cache_bust
},
static: {
directory: './static',
route: '/static',
sub_ext: ['.css']
},
global_subs: {
'project_name': 'Some Project'
},
routes: {
'/': {
content: Bun.file('./html/index.html'),
subs: { 'title': 'Homepage' }
},
'/about': {
content: Bun.file('./html/about.html'),
subs: { 'title': 'About Us' }
},
'/contact': {
content: Bun.file('./html/contact.html'),
subs: { 'title': 'Contact Us' }
},
'/privacy': {
content: Bun.file('./html/privacy.html'),
subs: { 'title': 'Privacy Policy' }
},
'/terms': {
content: Bun.file('./html/terms.html'),
subs: { 'title': 'Terms of Service' }
}
}
});

The BootstrapOptions object accepts the following properties:
Optional base template that wraps all route content. The base template should include {{content}} where the route content will be inserted.
// Base template: base.html
<html>
<head><title>{{title}}</title></head>
<body>{{content}}</body>
</html>
// Usage
server.bootstrap({
base: Bun.file('./templates/base.html'),
routes: {
'/': {
content: '<h1>Welcome</h1>',
subs: { title: 'Home' }
}
}
});

Optional. Defaults to true. If explicitly disabled, template parsing will not drop unknown substitutions.
Note

If you are using a client-side framework that uses the double-brace syntax {{foo}}, such as Vue, you should set this to false to ensure compatibility.
Required. Defines the routes and their content. Each route can have:
- content: The page content (string or BunFile)
- subs?: Template substitutions specific to this route
routes: {
'/about': {
content: Bun.file('./pages/about.html'),
subs: {
title: 'About Us',
description: 'Learn more about our company'
}
}
}

Optional HTTP caching configuration. Can be:

- A CacheOptions object (creates a new cache instance)
- An existing cache instance from cache_http()
- Omitted to disable caching
cache: {
ttl: 5 * 60 * 1000, // 5 minutes
max_size: 10 * 1024 * 1024, // 10 MB
use_etags: true,
use_canary_reporting: true
}

Enables the use of the cache_bust() API inside templates using the {{cache_bust=file}} directive.
<link href="{{cache_bust=static/css/style.css}}">
<script src="{{cache_bust=static/js/app.js}}"></script>
<img src="{{cache_bust=static/images/logo.png}}">Since this uses the cache_bust() API internally, it is effected by the cache_bust_set_hash_length and cache_bust_set_format global functions.
Setting cache_bust to true assumes the normal defaults, however this can be customized by providing an options object.
cache_bust: { // true or options
format: '$file#$hash', // default: $file?v=$hash
hash_length: 20, // default: 7
prefix: 'bust' // default: cache_bust
},

Important

format and hash_length internally call cache_bust_set_format and cache_bust_set_hash_length respectively, so these values will affect cache_bust() globally.
Optional error page configuration:
- error_page: Template for error pages (string or BunFile)
- use_canary_reporting?: Whether to report errors via canary
Error templates receive {{error_code}} and {{error_text}} substitutions.
error: {
error_page: Bun.file('./templates/error.html'),
use_canary_reporting: true
}

Optional static file serving configuration:
- route: URL path prefix for static files
- directory: Local directory containing static files
- sub_ext?: Array of file extensions that should have template substitution applied
static: {
route: '/assets',
directory: './public',
sub_ext: ['.css', '.js'] // These files get template processing
}

Files with extensions in sub_ext will have template substitutions applied before serving. This includes support for functions to generate dynamic content:
// Dynamic CSS with function-based substitutions
static: {
route: '/assets',
directory: './public',
sub_ext: ['.css']
},
global_subs: {
theme_color: () => {
const hour = new Date().getHours();
return hour < 6 || hour > 18 ? '#2d3748' : '#4a5568';
}
}

This allows CSS files to use dynamic substitutions: color: {{theme_color}};
Optional global template substitutions available to all routes, error pages, and static files with sub_ext.
global_subs: {
site_name: 'My Website',
version: '1.0.0',
api_url: 'https://api.example.com',
// Function-based substitutions for dynamic content
current_year: () => new Date().getFullYear().toString(),
build_time: async () => {
// Example: fetch build timestamp from git
const process = Bun.spawn(['git', 'log', '-1', '--format=%ct']);
const output = await Bun.readableStreamToText(process.stdout);
return new Date(parseInt(output.trim()) * 1000).toISOString();
},
user_count: async () => {
// Example: dynamic user count from database
const count = await db.count('SELECT COUNT(*) as count FROM users');
return count.toLocaleString();
}
}

Functions in global_subs and route-specific subs are called during template processing, allowing for dynamic content generation. Both synchronous and asynchronous functions are supported.
- Route content is loaded
- If base is defined, content is wrapped using the {{content}} substitution
- Route-specific subs and global_subs are applied
- Hash substitutions (if enabled) are applied
When called on a request, the cookies_get function will return a Bun.CookieMap containing all of the cookies parsed from the Cookie header on the request.
server.route('/', (req, url) => {
const cookies = cookies_get(req);
return `Hello ${cookies.get('person') ?? 'unknown'}`;
});

The returned Bun.CookieMap is an iterable map with a custom API for reading and setting cookies. The full API is documented in the Bun documentation.
Any changes made to the cookie map (adding, deleting, editing, etc.) will be sent as Set-Cookie headers on the response automatically. Unchanged cookies are not sent.
server.route('/', (req, url) => {
const cookies = cookies_get(req);
cookies.set('test', 'foobar');
return 'Hello, world!';
// the response automatically gets:
// Set-Cookie test=foobar; Path=/; SameSite=Lax
});

The ErrorWithMetadata class allows you to attach metadata to errors, which can be used for debugging purposes when errors are dispatched to the canary.
throw new ErrorWithMetadata('Something went wrong', { foo: 'bar' });

Functions and promises contained in the metadata will be resolved and the return value will be used instead.
throw new ErrorWithMetadata('Something went wrong', { foo: () => 'bar' });

Raise a warning issue on GitHub. This is useful for non-fatal issues which you want to be notified about.
Note
This function is only available if the canary feature is enabled.
try {
// Perform a non-critical action, such as analytics.
// ...
} catch (e) {
// `caution` is async, you can use it without awaiting.
caution(e);
}

Additional data can be provided as objects which will be serialized to JSON and included in the report.
caution(e, { foo: 42 });

A custom error message can be provided as the first parameter.
Note
Avoid including dynamic information in the title that would prevent the issue from being unique.
caution('Custom error', e, { foo: 42 });

Issues raised with caution() are rate-limited. By default, the rate limit is 86400 seconds (24 hours), however this can be configured in the spooder.canary.throttle property.
{
"spooder": {
"canary": {
"throttle": 86400
}
}
}

Issues are considered unique by the err_message parameter, so avoid using dynamic information that would prevent this from being unique.
If you need to provide unique information, you can use the err parameter to provide an object which will be serialized to JSON and included in the issue body.
const some_important_value = Math.random();
// Bad: Do not use dynamic information in err_message.
await caution('Error with number ' + some_important_value);
// Good: Use err parameter to provide dynamic information.
await caution('Error with number', { some_important_value });

This behaves the same as caution() with the difference that once panic() has raised the issue, it will exit the process with a non-zero exit code.
Note
This function is only available if the canary feature is enabled.
This should only be used as an absolute last resort when the server cannot continue to run and will be unable to respond to requests.
try {
// Perform a critical action.
// ...
} catch (e) {
// You should await `panic` since the process will exit.
await panic(e);
}

safe() is a utility function that wraps a "callable" and calls caution() if it throws an error.
Note
This utility is primarily intended to be used to reduce boilerplate for fire-and-forget functions that you want to be notified about if they fail.
safe(async () => {
// This code will run async and any errors will invoke caution().
});

safe() supports both async and sync callables, as well as Promise objects. safe() can also be used with await.
await safe(() => {
return new Promise((resolve, reject) => {
// Do stuff.
});
});

Create a worker pool with an event-based communication system between the main thread and one or more workers. This provides a networked event system on top of the native postMessage API.
// with a single worker (id defaults to 'main')
const pool = await worker_pool({
worker: './worker.ts'
});
// with multiple workers and custom ID
const pool = await worker_pool({
id: 'main',
worker: ['./worker_a.ts', './worker_b.ts']
});
// spawn multiple instances of the same worker
const pool = await worker_pool({
worker: './worker.ts',
size: 5 // spawns 5 instances
});
// with custom response timeout
const pool = await worker_pool({
worker: './worker.ts',
response_timeout: 10000 // 10 seconds (default: 5000ms, use -1 to disable)
});
// with auto-restart enabled (boolean)
const pool = await worker_pool({
worker: './worker.ts',
auto_restart: true // uses default settings
});
// with custom auto-restart configuration
const pool = await worker_pool({
worker: './worker.ts',
auto_restart: {
backoff_max: 5 * 60 * 1000, // 5 min (default)
backoff_grace: 30000, // 30 seconds (default)
max_attempts: 5 // -1 for unlimited (default: 5)
}
});

Connect a worker thread to the worker pool. This should be called from within a worker thread to establish communication with the main thread and other workers.
Parameters:
- peer_id - Optional worker ID (defaults to worker-UUID)
- response_timeout - Optional timeout in milliseconds for request-response patterns (default: 5000ms, use -1 to disable)
// worker thread
const pool = worker_connect('my_worker'); // defaults to worker-UUID, 5000ms timeout
pool.on('test', msg => {
console.log(`Received ${msg.data.foo} from ${msg.peer}`);
});
// with custom timeout
const pool = worker_connect('my_worker', 10000); // 10 second timeout
const pool = worker_connect('my_worker', -1); // no timeout

// main thread
const pool = await worker_pool({
id: 'main',
worker: './worker.ts'
});
pool.send('my_worker', 'test', { foo: 42 });
// worker thread (worker.ts)
const pool = worker_connect('my_worker');
pool.on('test', msg => {
console.log(`Received ${msg.data.foo} from ${msg.peer}`);
// > Received 42 from main
});

// main thread
const pool = await worker_pool({
id: 'main',
worker: ['./worker_a.ts', './worker_b.ts']
});
pool.send('worker_a', 'test', { foo: 42 }); // send to just worker_a
pool.broadcast('test', { foo: 50 } ); // send to all workers
// worker_a.ts
const pool = worker_connect('worker_a');
// send from worker_a to worker_b
pool.send('worker_b', 'test', { foo: 500 });

pool.send(peer: string, id: string, data?: Record<string, any>, expect_response?: boolean): void | Promise<WorkerMessage>
Send a message to a specific peer in the pool, which can be the main host or another worker.
When expect_response is false (default), the function returns void. When true, it returns a Promise<WorkerMessage> that resolves when the peer responds using pool.respond().
// Fire-and-forget (default behavior)
pool.send('main', 'user_update', { user_id: 123, name: 'John' });
pool.send('worker_b', 'simple_event');
// Request-response pattern
const response = await pool.send('worker_b', 'calculate', { value: 42 }, true);
console.log('Result:', response.data);

Note
When using expect_response: true, the promise will reject with a timeout error if no response is received within the configured timeout (default: 5000ms). You can configure this timeout in worker_pool() options or worker_connect() parameters, or disable it entirely by setting it to -1.
Broadcast a message to all peers in the pool.
pool.broadcast('test_event', { foo: 42 });

Register an event handler for messages with the specified event ID. The callback can be synchronous or asynchronous.
pool.on('process_data', async msg => {
// msg.peer
// msg.id
// msg.data
});

Note
There can only be one event handler for a specific event ID. Registering a new handler for an existing event ID will overwrite the previous handler.
Register an event handler for messages with the specified event ID. This is the same as pool.on, except the handler is automatically removed once it is fired.
pool.once('one_time_event', async msg => {
// this will only fire once
});

Unregister an event handler for events with the specified event ID.
pool.off('event_name');

Respond to a message that was sent with expect_response: true. This allows implementing request-response patterns between peers.
pool.on('calculate', msg => {
const result = msg.data.value * 2;
pool.respond(msg, { result });
});
const response = await pool.send('worker_a', 'calculate', { value: 42 }, true);
console.log(response.data.result); // 84

Message Structure:
- message.id - The event ID
- message.peer - The sender's peer ID
- message.data - The message payload
- message.uuid - Unique identifier for this message
- message.response_to - UUID of the message being responded to (only present in responses)
// main.ts
const pool = await worker_pool({
id: 'main',
worker: './worker.ts'
});
const response = await pool.send('worker_a', 'MSG_REQUEST', { value: 42 }, true);
console.log(`Got response ${response.data.value} from ${response.peer}`);
// worker.ts
const pool = worker_connect('worker_a');
pool.on('MSG_REQUEST', msg => {
console.log(`Received request with value: ${msg.data.value}`);
pool.respond(msg, { value: msg.data.value * 2 });
});

Worker pools support lifecycle callbacks to monitor when workers start and stop. Callbacks receive the pool instance, allowing you to communicate with workers immediately.
const pool = await worker_pool({
worker: './worker.ts',
auto_restart: true,
onWorkerStart: async (pool, worker_id) => {
console.log(`Worker ${worker_id} started`);
await pool.send(worker_id, 'init', { config: 'value' }, true);
},
onWorkerStop: (pool, worker_id, exit_code) => {
console.log(`Worker ${worker_id} stopped with exit code ${exit_code}`);
if (exit_code !== 0 && exit_code !== 42) {
console.log(`Worker ${worker_id} crashed`);
}
}
});

- onWorkerStart: (pool: WorkerPool, worker_id: string) => void - Fires when a worker registers with the pool
- onWorkerStop: (pool: WorkerPool, worker_id: string, exit_code: number) => void - Fires when a worker stops
The worker_pool function supports automatic worker restart when workers crash or close unexpectedly. This feature includes an exponential backoff protocol to prevent restart loops.
- auto_restart: boolean | AutoRestartConfig - Enable auto-restart (optional)
  - If true, uses default settings
  - If an object, allows customization of restart behavior
- backoff_max: number - Maximum delay between restart attempts in milliseconds (default: 5 * 60 * 1000 = 5 minutes)
- backoff_grace: number - Time in milliseconds a worker must run successfully before restart attempts are reset (default: 30000 = 30 seconds)
- max_attempts: number - Maximum number of restart attempts before giving up (default: 5, use -1 for unlimited)
- Initial restart delay starts at 100ms
- Each subsequent restart doubles the delay
- Delay is capped at backoff_max
- If a worker runs successfully for backoff_grace milliseconds, the delay and attempt counter reset
- After max_attempts failures, auto-restart stops for that worker
Example:
const pool = await worker_pool({
worker: './worker.ts',
auto_restart: {
backoff_max: 5 * 60 * 1000, // cap at 5 minutes
backoff_grace: 30000, // reset after 30 seconds of successful operation
max_attempts: 5 // give up after 5 failed attempts
}
});

Workers can exit gracefully without triggering an auto-restart by using the WORKER_EXIT_NO_RESTART exit code (42):
// worker thread
import { WORKER_EXIT_NO_RESTART } from 'spooder';
process.exit(WORKER_EXIT_NO_RESTART); // exits without auto-restart

Important
Each worker pipe instance expects to be the sole handler for the worker's message events. Creating multiple pipes for the same worker may result in unexpected behavior.
Initialize a file caching system that stores file contents in memory with configurable TTL, size limits, and ETag support for efficient HTTP caching.
import { cache_http } from 'spooder';
const cache = cache_http({
ttl: 5 * 60 * 1000 // 5 minutes
});
// Use with server routes for static files
server.route('/', cache.file('./index.html'));
// Use with server routes for dynamic content
server.route('/dynamic', async (req) => cache.request(req, 'dynamic-page', () => 'Dynamic Content'));
// Disable caching (useful for development mode)
const devCache = cache_http({ enabled: process.env.SPOODER_ENV !== 'dev' });
server.route('/no-cache', devCache.file('./index.html')); // Always reads from disk

The cache_http() function returns an object with two methods:
Caches static files from the filesystem. This method reads the file from disk and caches its contents with automatic content-type detection.
// Cache a static HTML file
server.route('/', cache.file('./public/index.html'));
// Cache CSS files
server.route('/styles.css', cache.file('./public/styles.css'));

cache.request(req: Request, cache_key: string, content_generator: () => string | Promise<string>): Promise<Response>
Caches dynamic content using a cache key and content generator function. The generator function is called only when the cache is cold (empty or expired). This method directly processes requests and returns responses, making it compatible with any request handler.
// Cache dynamic HTML content
server.route('/user/:id', async (req) => {
return cache.request(req, '/user', async () => {
const userData = await fetchUserData();
return generateUserHTML(userData);
});
});
// Cache API responses
server.route('/api/stats', async (req) => {
return cache.request(req, 'stats', () => {
return JSON.stringify({ users: getUserCount(), posts: getPostCount() });
});
});

| Option | Type | Default | Description |
|---|---|---|---|
| `ttl` | `number` | `18000000` (5 hours) | Time in milliseconds before cached entries expire |
| `max_size` | `number` | `5242880` (5 MB) | Maximum total size of all cached files in bytes |
| `use_etags` | `boolean` | `true` | Generate and use ETag headers for cache validation |
| `headers` | `Record<string, string>` | `{}` | Additional HTTP headers to include in responses |
| `use_canary_reporting` | `boolean` | `false` | Reports faults to canary (see below) |
| `enabled` | `boolean` | `true` | When false, content is generated but not stored |
If use_canary_reporting is enabled, spooder will call caution() in two scenarios:
- The cache has exceeded its maximum capacity and had to purge. If this happens frequently, it is an indication that the maximum capacity should be increased or the use of the cache should be re-evaluated.
- An item cannot enter the cache because its size is larger than the total size of the cache. This is an indication that either something too large is being cached, or the maximum capacity is far too small.
- Files are cached for the specified TTL duration.
- Individual files larger than max_size will not be cached
- When total cache size exceeds max_size, expired entries are removed first
- If still over the limit, least recently used (LRU) entries are evicted
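The eviction order can be illustrated with a simple sketch. The Entry shape here is assumed for the example and is not spooder's internal structure:

```typescript
// Illustration of the eviction order described above: expired entries are
// dropped first, then least recently used entries until the cache fits
// under max_size.
type Entry = { size: number; expires: number; last_used: number };

function evict(cache: Map<string, Entry>, max_size: number, now: number): void {
	// drop expired entries first
	for (const [key, entry] of cache)
		if (entry.expires <= now)
			cache.delete(key);

	const total = () => [...cache.values()].reduce((sum, e) => sum + e.size, 0);

	// then evict least recently used entries until under the limit
	while (total() > max_size) {
		let lru_key: string | null = null;
		let lru_time = Infinity;
		for (const [key, entry] of cache) {
			if (entry.last_used < lru_time) {
				lru_time = entry.last_used;
				lru_key = key;
			}
		}
		if (lru_key === null)
			break;
		cache.delete(lru_key);
	}
}
```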
ETag Support:
- When use_etags is enabled, SHA-256 hashes are generated for file contents
- ETags enable HTTP 304 Not Modified responses for unchanged files
- Clients can send If-None-Match headers for efficient cache validation
Important
The cache uses memory storage and will be lost when the server restarts. It's designed for improving response times of frequently requested files rather than persistent storage.
Note
Files are only cached after the first request. The cache performs lazy loading and does not pre-populate files on initialization.
The internal cache map can be accessed via cache.entries. This is exposed primarily for debugging and diagnostics you may wish to implement. It is not recommended that you directly manage this.
parse_template(template: string, replacements: Replacements, drop_missing: boolean): Promise<string>
Replace placeholders in a template string with values from a replacement object.
const template = `
<html>
<head>
<title>{{title}}</title>
</head>
<body>
<h1>{{title}}</h1>
<p>{{content}}</p>
<p>{{ignored}}</p>
</body>
</html>
`;
const replacements = {
title: 'Hello, world!',
content: 'This is a test.'
};
const html = await parse_template(template, replacements);

<html>
<head>
<title>Hello, world!</title>
</head>
<body>
<h1>Hello, world!</h1>
<p>This is a test.</p>
<p>{{ignored}}</p>
</body>
</html>

By default, placeholders that do not appear in the replacement object will be left as-is. Set drop_missing to true to remove them.
await parse_template(template, replacements, true);

<html>
<head>
<title>Hello, world!</title>
</head>
<body>
<h1>Hello, world!</h1>
<p>This is a test.</p>
<p></p>
</body>
</html>parse_template supports passing a function instead of a replacement object. This function will be called for each placeholder and the return value will be used as the replacement. Both synchronous and asynchronous functions are supported.
```ts
const replacer = (key: string) => {
	switch (key) {
		case 'timestamp': return Date.now().toString();
		case 'random': return Math.random().toString(36).substring(7);
		case 'greeting': return 'Hello, World!';
		default: return undefined;
	}
};

await parse_template('Generated at {{timestamp}}: {{greeting}} (ID: {{random}})', replacer);
// Result: "Generated at 1635789123456: Hello, World! (ID: x7k2p9m)"
```

Custom replacer functions are also supported on a per-key basis, mixed with static string replacement.
```ts
await parse_template('Hello {{foo}}, it is {{now}}', {
	foo: 'world',
	now: () => Date.now()
});
```

`parse_template` supports key/value based substitutions using the `{{key=value}}` syntax. When a function replacer is provided for the key, the value is passed as a parameter to the function.
```ts
await parse_template('Color: {{hex=blue}}', {
	hex: (color) => {
		const colors = { blue: '#0000ff', red: '#ff0000', green: '#00ff00' };
		return colors[color] || color;
	}
});
// Result: "Color: #0000ff"
```

Global replacer functions also support the value parameter:
```ts
await parse_template('Transform: {{upper=hello}} and {{lower=WORLD}}', (key, value) => {
	if (key === 'upper' && value) return value.toUpperCase();
	if (key === 'lower' && value) return value.toLowerCase();
	return 'unknown';
});
// Result: "Transform: HELLO and world"
```

`parse_template` supports conditional rendering with the following syntax.
```html
<t-if test="foo">I love {{foo}}</t-if>
```

Contents inside a `t-if` block will be rendered provided the given value, in this case `foo`, is truthy in the substitution table.
A `t-if` block is only removed if `drop_missing` is true, allowing blocks to persist through multiple passes of a template.
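One plausible reading of these semantics can be sketched with a regex pass. This is a guess at the behavior described above, not spooder's actual implementation: render the contents when the tested key is truthy, and only discard an unresolvable block when `drop_missing` is set.

```typescript
// illustrative sketch of <t-if> handling; semantics are inferred from the prose above
function render_t_if(template: string, vars: Record<string, unknown>, drop_missing = false): string {
	return template.replace(/<t-if test="([^"]+)">([\s\S]*?)<\/t-if>/g, (block, key, contents) => {
		// key known: keep contents when truthy, drop them when falsy
		if (key in vars)
			return vars[key] ? contents : '';

		// key unknown: persist the block for a later pass unless drop_missing
		return drop_missing ? '' : block;
	});
}
```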
`parse_template` supports looping arrays and objects using the `items` and `as` attributes.
```html
<t-for items="items" as="item"><div>{{item.name}}: {{item.value}}</div></t-for>
```

```ts
const template = `
<ul>
	<t-for items="colors" as="color">
	<li class="{{color.type}}">
		{{color.name}}
	</li>
	</t-for>
</ul>
`;

const replacements = {
	colors: [
		{ name: 'red', type: 'warm' },
		{ name: 'blue', type: 'cool' },
		{ name: 'green', type: 'neutral' }
	]
};

const html = await parse_template(template, replacements);
```

```html
<ul>
	<li class="warm">red</li>
	<li class="cool">blue</li>
	<li class="neutral">green</li>
</ul>
```

For simple arrays containing strings, you can iterate directly over the array items:
```ts
const template = `
<ul>
	<t-for items="fruits" as="fruit">
	<li>{{fruit}}</li>
	</t-for>
</ul>
`;

const replacements = {
	fruits: ['apple', 'banana', 'orange']
};

const html = await parse_template(template, replacements);
```

```html
<ul>
	<li>apple</li>
	<li>banana</li>
	<li>orange</li>
</ul>
```

You can access nested object properties using dot notation:
```ts
const data = {
	user: {
		profile: { name: 'John', age: 30 },
		settings: { theme: 'dark' }
	}
};

await parse_template('Hello {{user.profile.name}}, you prefer {{user.settings.theme}} mode!', data);
// Result: "Hello John, you prefer dark mode!"
```

All placeholders inside a `<t-for>` loop are substituted, but only if the loop variable exists.
In the following example, `missing` does not exist, so `test` is not substituted inside the loop, but is still substituted outside the loop.
```html
<div>Hello {{test}}!</div>
<t-for items="missing" as="item">
	<div>Loop {{test}}</div>
</t-for>
```

```ts
await parse_template(..., {
	test: 'world'
});
```

```html
<div>Hello world!</div>
<t-for items="missing" as="item">
	<div>Loop {{test}}</div>
</t-for>
```

Appends a hash-suffix to the provided string, formatted by default as a query parameter, for cache-busting purposes.
```ts
cache_bust('static/my_image.png'); // > static/my_image.png?v=123fea
```

This works on an array of paths as well.
```ts
cache_bust([
	'static/js/script1.js',
	'static/js/script2.js'
]);
// [
//   'static/js/script1.js?v=fffffff',
//   'static/js/script2.js?v=fffffff'
// ]
```

> [!NOTE]
> Internally `cache_bust()` uses `git_get_hashes()` to hash paths, requiring the input `path` to be a valid git path. If the path cannot be resolved in git, an empty hash is substituted.
The default format used by `cache_bust()` is `$file?v=$hash`. This can be customized per-call with the `format` parameter, or globally using `cache_bust_set_format()`.
```ts
cache_bust('dogs.txt'); // > dogs.txt?v=fff
cache_bust('dogs.txt', '$file?hash=$hash'); // > dogs.txt?hash=fff

cache_bust_set_format('$file#$hash');
cache_bust('dogs.txt'); // > dogs.txt#fff
```

The default hash-length used by `cache_bust()` is 7. This can be changed with `cache_bust_set_hash_length()`.
> [!NOTE]
> Hashes are cached once at the specified length, therefore `cache_bust_set_hash_length()` must be called before calling `cache_bust()`; it has no effect if called afterwards.
```ts
cache_bust_set_hash_length(10);
cache_bust('dogs.txt'); // > dogs.txt?v=ffffffffff
```

This function returns the internal hash table used by `cache_bust()`. This is exposed to userland in the event that you wish to use the hashes for other purposes, avoiding the need to call and store `git_get_hashes()` twice.
Retrieve git hashes for all files in the repository. This is useful for implementing cache-busting functionality or creating file integrity checks.
> [!IMPORTANT]
> Internally `git_get_hashes()` uses `git ls-tree -r HEAD`, so the working directory must be a git repository.
```ts
const hashes = await git_get_hashes(7);
// { 'docs/project-logo.png': '754d9ea' }
```

You can specify the hash length (default is 7 characters for short hashes):

```ts
const full_hashes = await git_get_hashes(40);
// { 'docs/project-logo.png': 'd65c52a41a75db43e184d2268c6ea9f9741de63e' }
```

Before v6.0.0, spooder provided a database API for sqlite and mysql while they were not available natively in Bun.
Now that Bun provides native APIs for these, we have dropped our own in favor of them, as this aligns with the mission of minimalism.
You can see the documentation for the Bun SQL API here.
Takes a database SET string and returns a `Set<T>`, where `T` is a provided enum.
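For illustration, such a cast can be sketched as splitting the comma-delimited SET string. This is a hypothetical sketch, not necessarily spooder's exact implementation.

```typescript
// cast a comma-delimited database SET string into a typed Set;
// an empty or null input yields an empty Set
function db_set_cast_sketch<T extends string>(set_string: string | null): Set<T> {
	if (set_string === null || set_string.length === 0)
		return new Set<T>();

	return new Set(set_string.split(',') as T[]);
}
```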
```ts
enum Fruits {
	Apple = 'Apple',
	Banana = 'Banana',
	Lemon = 'Lemon'
}

const [row] = await sql`SELECT * FROM some_table`;
const set = db_set_cast<Fruits>(row.fruits);

if (set.has(Fruits.Apple)) {
	// we have an apple in the set
}
```

Takes an `Iterable<T>` and returns a database SET string. If the set is empty or null, an empty string is returned.
```ts
enum Fruits {
	Apple = 'Apple',
	Banana = 'Banana',
	Lemon = 'Lemon'
}

// edit existing set
const [row] = await sql`SELECT * FROM some_table`;
const fruits = db_set_cast<Fruits>(row.fruits);

if (!fruits.has(Fruits.Lemon))
	fruits.add(Fruits.Lemon);

await sql`UPDATE some_table SET fruits = ${sql(db_set_serialize(fruits))} WHERE id = ${row.id}`;

// new set from iterable
await sql`UPDATE some_table SET fruits = ${sql(db_set_serialize([Fruits.Apple, Fruits.Lemon]))}`;
```

`db_exists(db: SQL, table_name: string, value: string|number, column_name = 'id'): Promise<boolean>`
Returns `true` if a database row exists in the table.

```ts
// checks if a row exists with id 1 in 'table'
const exists = await db_exists(db, 'table', 1);

// checks if a row exists with column 'foo' = 'bar' in 'table'
const exists_foo = await db_exists(db, 'table', 'bar', 'foo');
```

`db_schema` executes all revisioned `.sql` files in a given directory, applying them to the database incrementally.
```ts
const db = new SQL('db:pw@localhost:3306/test');
await db_schema(db, './db/revisions');
```

The above example will recursively search the `./db/revisions` directory for all `.sql` files that begin with a positive numeric identifier.
```
db/revisions/000_invalid.sql // no: 0 is not a valid revision number
db/revisions/001_valid.sql   // yes: revision 1
db/revisions/25-valid.sql    // yes: revision 25
db/revisions/005_not.txt     // no: .sql extension missing
db/revisions/invalid_500.sql // no: must begin with revision number
```

Revisions are applied in numerical order, rather than the file sorting order from the operating system. Invalid files are skipped without throwing an error.
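The selection and ordering rules above can be sketched as follows; this is an illustrative re-implementation, not spooder's internal code.

```typescript
// keep .sql files whose basename starts with a positive number,
// then order them numerically rather than lexically
function order_revisions(files: string[]): string[] {
	const parsed: Array<{ file: string; rev: number }> = [];

	for (const file of files) {
		if (!file.endsWith('.sql'))
			continue;

		const basename = file.split('/').pop() ?? file;
		const match = basename.match(/^(\d+)/);
		if (match === null)
			continue;

		const rev = parseInt(match[1], 10);
		if (rev > 0)
			parsed.push({ file, rev });
	}

	return parsed.sort((a, b) => a.rev - b.rev).map(e => e.file);
}
```

Numeric ordering matters here: a lexical sort would place `25-valid.sql` after `001_valid.sql` only by accident of zero-padding, whereas numeric comparison is always correct.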
By default, schema revision is tracked in a table called `db_schema`. The name of this table can be customized by providing a different `schema_table` option.
```ts
await db_schema(db, './db/revisions', { schema_table: 'alt_table_name' });
```

The revision folder is enumerated recursively by default. This can be disabled by passing `false` to `recursive`, which will only scan the top level of the specified directory.
```ts
await db_schema(db, './db/revisions', { recursive: false });
```

Each revision file is executed within a transaction. In the event of an error, the transaction will be rolled back. Revision files that completed successfully before the error are not rolled back, and subsequent revision files will not be executed.
> [!CAUTION]
> Implicit commits, such as those caused by DDL statements, cannot be rolled back inside a transaction.
>
> It is recommended to include only one implicit-commit query per revision file. With multiple, an error will not roll back previously implicitly committed queries within the revision, leaving your database in a partial state.
>
> See MySQL 8.4 Reference Manual // 15.3.3 Statements That Cause an Implicit Commit for more information.
```ts
type SchemaOptions = {
	schema_table: string;
	recursive: boolean;
};

db_get_schema_revision(db: SQL): Promise<number|null>;
db_schema(db: SQL, schema_path: string, options?: SchemaOptions): Promise<boolean>;
```

Returns a human-readable string representation of a file size in bytes.
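For illustration, a minimal re-implementation matching the outputs shown below; spooder's exact rounding behavior may differ.

```typescript
// convert a byte count into a human-readable size string
function filesize(bytes: number): string {
	if (bytes < 1024)
		return `${bytes} bytes`;

	const units = ['kb', 'mb', 'gb', 'tb'];
	let value = bytes;
	let unit = -1;

	// divide down until the value fits the largest applicable unit
	while (value >= 1024 && unit < units.length - 1) {
		value /= 1024;
		unit++;
	}

	return `${Math.round(value * 10) / 10} ${units[unit]}`;
}
```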
```ts
filesize(512); // > "512 bytes"
filesize(1024); // > "1 kb"
filesize(1048576); // > "1 mb"
filesize(1073741824); // > "1 gb"
filesize(1099511627776); // > "1 tb"
```

A bidirectional map that maintains a two-way relationship between keys and values, allowing efficient lookups in both directions.
```ts
const users = new BiMap<number, string>();

// Set key-value pairs
users.set(1, "Alice");
users.set(2, "Bob");
users.set(3, "Charlie");

// Lookup by key
users.getByKey(1); // > "Alice"

// Lookup by value
users.getByValue("Bob"); // > 2

// Check existence
users.hasKey(1); // > true
users.hasValue("Charlie"); // > true

// Delete by key or value
users.deleteByKey(1); // > true
users.deleteByValue("Bob"); // > true

// Other operations
users.size; // > 1
users.clear();
```

This software is provided as-is with no warranty or guarantee. The authors of this project are not responsible or liable for any problems caused by using this software or any part thereof. Use of this software does not entitle you to any support or assistance from the authors of this project.
The code in this repository is licensed under the ISC license. See the LICENSE file for more information.
```jsonc
{
	"spooder": {
		// see CLI > Usage
		"run": "",
		"run_dev": "",

		// see CLI > Auto Restart
		"auto_restart": {
			"enabled": false,
			"backoff_max": 300000,
			"backoff_grace": 30000,
			"max_attempts": -1
		},

		// see CLI > Auto Update
		"update": [
			"git pull",
			"bun install"
		],

		// see CLI > Canary
		"canary": {
			"enabled": false,
			"account": "",
			"repository": "",
			"labels": [],
			"crash_console_history": 64,
			"throttle": 86400,
			"sanitize": true
		}
	}
}
```