Description
Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use it to deploy large models (70B, 405B) on consumer GPUs when you need 4× memory reduction with <2% perplexity degradation, or for faster inference (3-4× speedup over FP16). Integrates with transformers and PEFT for QLoRA fine-tuning.
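The sketch below shows one way the workflow described above typically looks, assuming the standard transformers + bitsandbytes 4-bit path (NF4) with PEFT for QLoRA; the checkpoint name, LoRA hyperparameters, and target module names are illustrative placeholders, not values taken from the skill itself.

```python
# Minimal sketch: 4-bit load plus QLoRA adapters. Assumes the bitsandbytes
# NF4 backend; checkpoint and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-3.1-70B"  # hypothetical checkpoint

# Post-training 4-bit quantization at load time: weights are stored in 4 bits
# while matmuls run in bf16, giving roughly the 4x memory reduction noted above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for dequantized matmuls
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard layers across available GPUs
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# QLoRA: freeze the 4-bit base model and train small LoRA adapters on top.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                                  # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # module names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Double quantization trades a little extra dequantization work for additional memory savings by compressing the per-block scaling constants, while bf16 compute keeps accuracy close to the FP16 baseline.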