Lightning Fabric 2.5.1rc2

Advanced skills

Use efficient gradient accumulation

Learn how to perform efficient gradient accumulation in distributed settings.

Distribute communication

Learn about communication primitives for distributed operation: gather, reduce, broadcast, and more.

Use multiple models and optimizers

See how flexibly Fabric handles multiple models and optimizers.

Speed up models by compiling them

Use torch.compile to speed up models on modern hardware.

Train models with billions of parameters

Train the largest models with FSDP or tensor parallelism (TP) across multiple GPUs and machines.

Save and load very large models

Save and load very large models efficiently with distributed checkpoints.
