vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |

🔥 We have built a vLLM website to help you get started. Please visit vllm.ai to learn more, and vllm.ai/events for upcoming events.


Fork: Adaptive MLFQ Scheduler for LLM Serving

This fork replaces vLLM's default FCFS scheduler with an adaptive multi-level feedback queue (MLFQ) that improves throughput and tail latency under high concurrency.

Problem

Under heavy load, FCFS treats all requests equally regardless of size: long-running requests block short ones, inflating time-to-first-token (TTFT) tail latency and reducing effective throughput.

Approach

| Component | Description |
| --- | --- |
| MLFQ + token demotion | Requests drop priority as they consume tokens, letting short requests jump ahead |
| SJF penalty | Deprioritizes requests with more remaining tokens (configurable prefill/decode weights) |
| LAS penalty | Penalizes based on attained service for workload-agnostic fairness |
| Aging | Boosts starved requests, with dynamic scaling based on queue depth |
| Locality boost | Prioritizes requests with prefix cache hits |
| Output backpressure | Tracks pending output tokens per request; throttles the token budget when consumers lag |
| Adaptive phase switching | Automatically selects the scheduling policy (FCFS / LAS / Aging-SJF) based on real-time load, via EMA-smoothed RPS with hysteresis |
| Low-pressure bypass | Reverts to baseline FCFS under light load to avoid unnecessary scheduling overhead |
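To make the interaction of these components concrete, here is a minimal sketch of a priority score combining SJF, LAS, aging, and locality terms. All names, weights, and the `Request` shape are illustrative assumptions for this sketch, not the fork's actual API (see `vllm/v1/core/sched/scheduler.py` for the real logic):

```python
from dataclasses import dataclass, field
import time

# Illustrative sketch only: field names and weights are hypothetical,
# not the fork's actual data structures or defaults.

@dataclass
class Request:
    req_id: str
    remaining_prefill_tokens: int
    remaining_decode_tokens: int
    attained_tokens: int = 0  # tokens served so far (drives LAS demotion)
    enqueue_time: float = field(default_factory=time.monotonic)
    prefix_cache_hit: bool = False

def priority_score(req: Request, queue_depth: int,
                   w_prefill: float = 1.0, w_decode: float = 0.5,
                   w_las: float = 0.3, aging_rate: float = 0.1,
                   locality_bonus: float = 50.0) -> float:
    """Lower score = scheduled earlier."""
    # SJF penalty: more remaining work pushes a request back.
    sjf = (w_prefill * req.remaining_prefill_tokens
           + w_decode * req.remaining_decode_tokens)
    # LAS penalty: attained service demotes long-running requests,
    # which is the MLFQ token-demotion effect.
    las = w_las * req.attained_tokens
    # Aging: waiting time lowers the score, scaled by queue depth so
    # starvation relief kicks in harder under congestion.
    wait = time.monotonic() - req.enqueue_time
    aging = aging_rate * wait * (1 + queue_depth)
    # Locality boost for prefix-cache hits.
    locality = locality_bonus if req.prefix_cache_hit else 0.0
    return sjf + las - aging - locality

def pick_next(waiting: list[Request]) -> Request:
    depth = len(waiting)
    return min(waiting, key=lambda r: priority_score(r, depth))
```

With this shape, a freshly arrived short request outranks a long one, and a prefix-cache hit breaks ties between otherwise similar requests.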

Key files

  • vllm/v1/core/sched/scheduler.py -- core scheduling logic
  • vllm/config/scheduler.py -- 40+ tunable config knobs
  • vllm/v1/engine/async_llm.py / output_processor.py -- backpressure feedback loop
  • tests/v1/core/test_scheduler_mlfq.py -- unit tests
  • tools/bench_* -- benchmark scripts and visualization
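The adaptive phase switching described under Approach might be sketched as follows. The class name, phase labels, and thresholds are illustrative assumptions, not the fork's actual config knobs; the key ideas are the EMA smoothing of the request rate and the separate enter/exit thresholds (hysteresis):

```python
class PhaseSelector:
    """Sketch of load-adaptive policy selection with hysteresis.
    Thresholds and names are hypothetical, not the fork's defaults."""

    def __init__(self, alpha: float = 0.2,
                 enter_mid: float = 20.0, exit_mid: float = 15.0,
                 enter_high: float = 100.0, exit_high: float = 80.0):
        self.alpha = alpha
        self.enter_mid, self.exit_mid = enter_mid, exit_mid
        self.enter_high, self.exit_high = enter_high, exit_high
        self.ema_rps = 0.0
        self.phase = "FCFS"  # low-pressure bypass: plain FCFS

    def observe(self, rps_sample: float) -> str:
        # EMA smoothing keeps a single burst from flipping the policy.
        self.ema_rps = (self.alpha * rps_sample
                        + (1 - self.alpha) * self.ema_rps)
        # Hysteresis: enter thresholds sit above exit thresholds, so the
        # phase only changes on a sustained load shift, not on noise.
        if self.phase == "FCFS":
            if self.ema_rps > self.enter_mid:
                self.phase = "LAS"
        elif self.phase == "LAS":
            if self.ema_rps > self.enter_high:
                self.phase = "AGING_SJF"
            elif self.ema_rps < self.exit_mid:
                self.phase = "FCFS"
        else:  # AGING_SJF
            if self.ema_rps < self.exit_high:
                self.phase = "LAS"
        return self.phase
```

The gap between each enter/exit pair is what prevents policy flapping when the smoothed RPS hovers near a boundary.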

Results

Benchmarked with a split profile (baseline under low pressure, aging-SJF under high pressure), burstiness=2, mixed-length workloads (random-mix), goodput thresholds (ttft:2000, tpot:200, e2el:30000), and 500 prompts per RPS point.
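For reference, goodput is typically computed as the rate of completed requests that meet every SLO threshold. A minimal sketch, assuming the thresholds above are milliseconds and using illustrative field names (not the benchmark scripts' actual schema):

```python
# Sketch of a goodput calculation: requests per second that satisfy
# all SLO thresholds. Field names and ms units are assumptions.

def goodput(results: list[dict], duration_s: float,
            ttft_ms: float = 2000.0, tpot_ms: float = 200.0,
            e2el_ms: float = 30000.0) -> float:
    """SLO-satisfying completed requests per second."""
    good = sum(
        1 for r in results
        if r["ttft_ms"] <= ttft_ms      # time to first token
        and r["tpot_ms"] <= tpot_ms     # time per output token
        and r["e2el_ms"] <= e2el_ms     # end-to-end latency
    )
    return good / duration_s
```

A request that is fast to first token but violates any one threshold counts toward throughput, yet not toward goodput.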

Qwen2.5-3B-Instruct (RTX 6000 24GB):

All figures are relative to the FCFS baseline (positive = improvement).

| RPS | Throughput | Goodput | Mean TTFT | P95 TTFT | P95 E2EL |
| --- | --- | --- | --- | --- | --- |
| 4 | +6.3% | +22.7% | +25.1% | +27.5% | +28.2% |
| 16 | +1.9% | +4.0% | +1.0% | +8.0% | +2.5% |
| 64 | +7.9% | +9.9% | -0.3% | +3.3% | +3.8% |
| 128 | -2.5% | +3.1% | +17.5% | +58.3% | -3.1% |
| 256 | -3.9% | -2.4% | +9.1% | +40.2% | -3.7% |

Key takeaways:

  • P95 TTFT improves 40--58% under high pressure (RPS 128/256) -- the primary goal
  • Low-pressure bypass ensures no regression at low RPS
  • Trade-off: slight throughput and TPOT decrease at high RPS (SJF prioritizes short requests, deferring long ones)

Qwen3-4B (earlier run, A100-class GPU):

| RPS | Throughput | Goodput | Mean TTFT | P95 TTFT |
| --- | --- | --- | --- | --- |
| 128 | +21.0% | +35.5% | -30.7% | -44.8% |

Full benchmark details: docs/bench_aging_sjf.md


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with CUDA/HIP graph
  • Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
  • Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server
  • Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, Arm CPUs, and TPU. Additionally, support for diverse hardware plugins such as Intel Gaudi, IBM Spyre and Huawei Ascend.
  • Prefix caching support
  • Multi-LoRA support

vLLM seamlessly supports most popular open-source models on Hugging Face, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models here.

Getting Started

Install vLLM with pip or from source:

pip install vllm

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub Issues
  • For discussing with fellow users, please use the vLLM Forum
  • For coordinating contributions and development, please use Slack
  • For security disclosures, please use GitHub's Security Advisories feature
  • For collaborations and partnerships, please contact us at collaboration@vllm.ai

Media Kit
