Description
Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ other methods. Use it when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train under 1% of parameters with minimal accuracy loss, or for multi-adapter serving. PEFT is Hugging Face's official library and integrates with the transformers ecosystem.
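As a rough sketch of the workflow the description refers to, the snippet below attaches a LoRA adapter to a 4-bit-quantized base model (the QLoRA recipe) using the peft and transformers APIs. It assumes the peft, transformers, bitsandbytes, and torch packages plus a CUDA GPU; the model id and target modules are illustrative and vary by architecture.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, TaskType

# Optional 4-bit quantization of the frozen base model (the QLoRA recipe).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative model id
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA configuration: rank, scaling, dropout, and which modules to adapt.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # low-rank dimension
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; architecture-dependent
)

# Wrap the base model; only the LoRA adapter weights are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

After training, calling save_pretrained on the wrapped model writes only the small adapter weights, which is what makes loading several adapters on top of one base model for multi-adapter serving practical.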
You Might Also Like
Add Uint Support: Add unsigned integer (uint) type support to PyTorch operators by updating AT_DISPATCH macros
Docstring: Write docstrings for PyTorch functions and methods following PyTorch conventions
Skill Creator: Guide for creating effective skills
Claude Opus 4.5 Migration: Migrate prompts and code from Claude Sonnet 4
Agent Identifier: This skill should be used when the user asks to "create an agent", "add an agent", "write a subag...
Command Development: This skill should be used when the user asks to "create a slash command", "add a command", "write...