
Conversation


@dependabot dependabot bot commented on behalf of github Jan 5, 2025

Updates the requirements on llama-cpp-python to permit the latest version.

Changelog

Sourced from llama-cpp-python's changelog.

[0.3.5]

  • feat: Update llama.cpp to ggml-org/llama.cpp@26a8406
  • fix(ci): Fix release by updating macos runner image to non-deprecated version by @abetlen in afedfc888462f9a6e809dc9455eb3b663764cc3f
  • fix(server): add missing await statements for async exit_stack handling by @gjpower in #1858

[0.3.4]

  • fix(ci): Build wheels for macos 13-15, cuda 12.1-12.4 by @abetlen in ca808028bd16b8327bd84128d48015a4b1304690

[0.3.3]

[0.3.2]

[0.3.1]

[0.3.0]

  • feat: Update llama.cpp to ggml-org/llama.cpp@ea9c32b
  • feat: Enable detokenizing special tokens with special=True by @benniekiss in #1596
  • feat(ci): Speed up CI workflows using uv, add support for CUDA 12.5 wheels by @Smartappli in e529940f45d42ed8aa31334123b8d66bc67b0e78
  • feat: Add loading sharded GGUF files from HuggingFace with Llama.from_pretrained(additional_files=[...]) by @Gnurro in 84c092063e8f222758dd3d60bdb2d1d342ac292e
  • feat: Add option to configure n_ubatch by @abetlen in 6c44a3f36b089239cb6396bb408116aad262c702
  • feat: Update sampling API for llama.cpp. Sampling now uses a sampler chain by @abetlen in f8fcb3ea3424bcfba3a5437626a994771a02324b
  • fix: Don't store scores internally unless logits_all=True. Reduces memory requirements for large context by @abetlen in 29afcfdff5e75d7df4c13bad0122c98661d251ab
  • fix: Fix memory allocation of ndarray by @xu-song in #1704
  • fix: Use system message in og qwen format by @abetlen in 98eb092d3c6e7c142c4ba2faaca6c091718abbb3

[0.2.90]

... (truncated)

Commits
  • 803924b chore: Bump version
  • 801a73a feat: Update llama.cpp
  • afedfc8 fix: add missing await statements for async exit_stack handling (#1858)
  • ea4d86a fix(ci): update macos runner image to non-deprecated version
  • 002f583 chore: Bump version
  • ca80802 fix(ci): hotfix for wheels
  • a9fe0f8 chore: Bump version
  • 61508c2 Add CUDA 12.5 and 12.6 to generated output wheels
  • 5585f8a feat: Update llama.cpp
  • b9b50e5 misc: Update run server command
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Updates the requirements on [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) to permit the latest version.
- [Release notes](https://github.com/abetlen/llama-cpp-python/releases)
- [Changelog](https://github.com/abetlen/llama-cpp-python/blob/main/CHANGELOG.md)
- [Commits](abetlen/llama-cpp-python@v0.2.82...v0.3.5)

---
updated-dependencies:
- dependency-name: llama-cpp-python
  dependency-type: direct:production
...
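For context, the version constraint this PR operates under (taken from the Dependabot branch name, `>=0.2.82` and `<0.3.6`) would correspond to a requirements line like the following hypothetical requirements.txt fragment:

```
llama-cpp-python>=0.2.82,<0.3.6
```

The inclusive lower bound keeps the currently pinned minimum, while the exclusive upper bound permits every release up to, but not including, the next version beyond 0.3.5.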

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels Jan 5, 2025
dependabot bot commented on behalf of github Feb 1, 2025

Superseded by #123.

@dependabot dependabot bot closed this Feb 1, 2025
@dependabot dependabot bot deleted the dependabot/pip/llama-cpp-python-gte-0.2.82-and-lt-0.3.6 branch February 1, 2025 18:53
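The deleted branch name encodes the range Dependabot was tracking: greater than or equal to 0.2.82 and less than 0.3.6. A minimal sketch of that range check in plain Python (names like `parse` and `in_range` are illustrative, not part of Dependabot or pip):

```python
def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '0.3.5' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def in_range(version: str, low: str = "0.2.82", high: str = "0.3.6") -> bool:
    """True when low <= version < high, mirroring '>=0.2.82,<0.3.6'."""
    return parse(low) <= parse(version) < parse(high)

print(in_range("0.3.5"))   # version this PR moves to -> True
print(in_range("0.2.82"))  # lower bound is inclusive -> True
print(in_range("0.3.6"))   # upper bound is exclusive -> False
```

Note this tuple comparison is a simplification: real resolvers follow PEP 440, which also handles pre-releases, post-releases, and local version segments.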