
@vandemjh commented May 19, 2025

Resolves timeout issues on older machines (especially those running on CPU) caused by a hard-coded 300-second timeout constant.

  • Removes the whatwg-fetch import in favor of undici
  • Adds a timeout prop to the request options and passes it to a new Agent
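
The shape of the change can be sketched as follows. The helper name below is illustrative, not the PR's actual code, but `headersTimeout` and `bodyTimeout` are real undici Agent options:

```javascript
// Sketch (assumed shape, not the PR's exact code): map a caller-supplied
// timeout into undici Agent options. undici treats 0 as "disable the
// timeout", which is what slow CPU-only machines need.
function agentOptions(timeoutMs) {
  return {
    headersTimeout: timeoutMs, // time allowed to receive response headers
    bodyTimeout: timeoutMs,    // time allowed between response body chunks
  };
}

// With undici installed, a request can then opt out of the default
// 300 000 ms headers timeout:
//   const { Agent, fetch } = require('undici');
//   const res = await fetch('http://127.0.0.1:11434/api/chat', {
//     method: 'POST',
//     dispatcher: new Agent(agentOptions(0)),
//   });
```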

Resolves #72

Bundle size (kB)
Before: 70.2
After: 71.2

Tests

On a crummy computer (or resource-limited container), run:

require('ollama')
  .default.chat({
    model: 'gemma3:27b',
    messages: [{ role: 'user', content: 'Why is the sky blue?' }],
  })
  .then((i) => console.log(i));

Resulting in:

{
  "model": "gemma3:27b",
  "created_at": "2025-05-19T18:23:45.15035873Z",
  "message": {
    "role": "assistant",
    "content": "The sky is blue because of a phenomenon called **Rayleigh scattering**. Here's a breakdown..."
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 470459553167,
  "load_duration": 6352338645,
  "prompt_eval_count": 15,
  "prompt_eval_duration": 4274237804,
  "eval_count": 500,
  "eval_duration": 459831723963
}

7 minutes, nice!
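
For reference, Ollama reports its duration fields in nanoseconds, so the wall-clock time above falls straight out of total_duration:

```javascript
// Ollama duration fields are in nanoseconds.
const totalDurationNs = 470459553167; // total_duration from the response above
const seconds = totalDurationNs / 1e9; // ≈ 470.5 s
const minutes = seconds / 60;          // ≈ 7.8 min
console.log(minutes.toFixed(1) + ' minutes'); // prints "7.8 minutes"
```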

@BruceMacD (Collaborator) commented:

Hi @vandemjh, thanks for the contribution. One challenge we have here is using a default fetch library that is supported across different JS runtimes (versus Node only). It seems we do need a way to configure this, but using undici may cause issues for people using ollama-js outside a Node environment. I'll test this out to confirm whether that is the case.
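
One hedged way to reconcile the two concerns (the `fetch` option name here is hypothetical, not ollama-js's actual API) is to default to the runtime's global fetch and only involve undici when a caller injects it explicitly:

```javascript
// Sketch of a runtime-agnostic default: browsers, Deno, Bun, and Node 18+
// all expose a global fetch, so undici only enters the picture when a
// Node user opts in by passing it.
function resolveFetch(options = {}) {
  if (typeof options.fetch === 'function') {
    return options.fetch; // caller-supplied implementation (e.g. undici's fetch)
  }
  if (typeof globalThis.fetch === 'function') {
    return globalThis.fetch; // runtime default
  }
  throw new Error('No fetch available; pass one via options.fetch');
}
```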



Linked issue: cause: HeadersTimeoutError: Headers Timeout Error
