Pull requests: vllm-project/vllm-gaudi

- #761: fix empty buckets issue for enforce eager mode (opened Dec 25, 2025 by yangulei)
- #759: multimodal model embedding fixes (opened Dec 23, 2025 by libinta)
- #755: debug inc (opened Dec 23, 2025 by HolyFalafel) [Draft]
- #750: Release Notes for v0.13.0 (opened Dec 22, 2025 by mhelf-intel) [labels: documentation, skip-gaudi-tests]
- #748: Update lmcache examples (opened Dec 19, 2025 by hsubramony)
- #741: Fix async_scheduling + batched prefill (opened Dec 18, 2025 by tianmu-li)
- #740: Fix async_scheduling + batched prefill (opened Dec 18, 2025 by tianmu-li)
- #736: Added Qwen3 Test (opened Dec 18, 2025 by slokesha)
- #727: WA shared bias in UA (opened Dec 16, 2025 by adobrzyn)
- #722: DP: Fix for torch.compile (opened Dec 16, 2025 by xinyu-intel)
- #714: Add heterogeneous pd docs (opened Dec 13, 2025 by pi314ever) [Draft]
- #713: Create UBI based vLLM docker build instructions (opened Dec 12, 2025 by ghandoura) [labels: documentation, skip-gaudi-tests]
- #711: Add ucx test (opened Dec 12, 2025 by pi314ever) [Draft]
- #707: Fix for Llama4 static quantization (opened Dec 10, 2025 by vidyasiv)
- #705: Unified attn FP8 perf optimizations (opened Dec 9, 2025 by afierka-intel)
- #691: Fix the docker image path (opened Dec 5, 2025 by mhelf-intel) [labels: documentation, skip-gaudi-tests]
- #683: Add support for chunked attention (#597) (opened Dec 4, 2025 by jkaniecki)
- #682: Add support for chunked attention (#597) (opened Dec 4, 2025 by jkaniecki)