Add flag to turn on activation checkpointing on single GPU #835

@yaoshiang

🚀 The feature, motivation and pitch

This feature would allow developers to fine-tune on smaller GPUs and/or with larger batch sizes, likely leading to higher MFU.

Currently, activation checkpointing only works with FSDP, not on a single GPU.

Perhaps this is not a worthwhile feature, since realistically no one is going to fine-tune on a single GPU anyway, and as a workaround you can simply enable FSDP on a single GPU to get activation checkpointing. I verified that this workaround works: nvidia-smi reports Python using about 45 GB of RAM instead of 77 GB without FSDP, and my time per batch increased from 1.09 to 1.65.

After raising the batch size from 11 to 19 to take advantage of the freed memory, my TPS went from 20_859 to 23_583 and MFU from 50% to 56%. The workaround command and the torchrun wrapper file I used are below:

TOKENIZERS_PARALLELISM=true \
torchrun --nnodes 1 --nproc_per_node 1 \
    finetuning_wrapper.py \
    --model_name meta-llama/Llama-3.2-1B-Instruct \
    --use_peft \
    --peft_method lora \
    --dataset "custom_dataset" \
    --custom_dataset.file "./memorization_dataset.py" \
    --output_dir ./output \
    --num_epochs 2 \
    --batch_size_training 11 \
    --context_length 2048 \
    --lr 1e-3 \
    --enable_fsdp

finetuning_wrapper.py:

"""This is a minimal wrapper so that torchrun has a physical py file to access."""
import fire
import llama_recipes.finetuning


if __name__ == "__main__":
    fire.Fire(llama_recipes.finetuning.main)
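
For what it's worth, below is a rough sketch of what a single-GPU activation checkpointing path could look like, independent of FSDP. This is illustrative only and not llama_recipes code; it assumes a recent PyTorch and a Hugging Face LlamaForCausalLM, and wraps each LlamaDecoderLayer in PyTorch's non-reentrant checkpoint wrapper.

# Sketch only (not llama_recipes code): wrap each decoder layer so its
# activations are recomputed during backward instead of kept in memory.
from functools import partial

import torch
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    CheckpointImpl,
    apply_activation_checkpointing,
    checkpoint_wrapper,
)
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct", torch_dtype=torch.bfloat16
).to("cuda")

apply_activation_checkpointing(
    model,
    checkpoint_wrapper_fn=partial(
        checkpoint_wrapper, checkpoint_impl=CheckpointImpl.NO_REENTRANT
    ),
    check_fn=lambda module: isinstance(module, LlamaDecoderLayer),
)

For Hugging Face models, model.gradient_checkpointing_enable() should have the same effect, so the proposed flag might only need to call one of these when --enable_fsdp is off.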

Alternatives

No response

Additional context

No response
