
serving-llms-vllm

Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching

Category: AI & Machine Learning

Developer: davila7

Updated: Jan 2026

Tags: 2

Description

Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
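The PagedAttention technique named above manages the KV cache in fixed-size blocks, with each sequence holding a table of block IDs instead of one contiguous buffer, so memory is allocated on demand and freed without fragmentation. A minimal, purely illustrative sketch of that block-table idea (the class and method names here are invented for illustration and are not vLLM's API):

```python
# Simplified model of paged KV-cache allocation (illustrative only,
# not vLLM's actual implementation).

BLOCK_SIZE = 16  # tokens stored per KV-cache block


class BlockAllocator:
    """Pool of physical KV-cache blocks."""

    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))

    def alloc(self) -> int:
        if not self.free:
            raise MemoryError("KV cache exhausted")
        return self.free.pop()

    def release(self, blocks):
        self.free.extend(blocks)


class Sequence:
    """One request; maps logical block positions to physical block IDs."""

    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table = []
        self.num_tokens = 0

    def append_token(self):
        # A new physical block is needed only when the last one is full,
        # so memory grows in BLOCK_SIZE steps instead of being
        # preallocated for the maximum sequence length.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.alloc())
        self.num_tokens += 1


allocator = BlockAllocator(num_blocks=8)
seq = Sequence(allocator)
for _ in range(20):  # 20 tokens fit in ceil(20 / 16) = 2 blocks
    seq.append_token()
print(len(seq.block_table))   # 2
print(len(allocator.free))    # 6
```

Continuous batching builds on the same idea: because each sequence's cache is just a block table, finished requests release their blocks immediately and new requests join the running batch without waiting for the whole batch to drain.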

Skill File

SKILL.md

Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
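As a sketch of the deployment the skill describes: vLLM's `vllm serve` command starts an OpenAI-compatible HTTP server, and the quantization and tensor-parallelism features map to CLI flags. The model name below is only an example; check the vLLM documentation for the flags supported by your version.

```shell
# Launch an OpenAI-compatible vLLM server (model name is illustrative).
# --tensor-parallel-size shards the model across 2 GPUs;
# --quantization awq loads AWQ-quantized weights to fit limited GPU memory.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --tensor-parallel-size 2 \
  --quantization awq \
  --gpu-memory-utilization 0.90 \
  --max-model-len 8192

# Query it with any OpenAI-compatible client, e.g. curl:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```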

Tags

Api, Ai

Information

Developer: davila7
Category: AI & Machine Learning
Created: Jan 15, 2026
Updated: Jan 15, 2026
