This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Commit 699ffca

fixed QLoRA CPU issue due to internal API change. (#1503)

Signed-off-by: Ye, Xinyu <[email protected]>
Co-authored-by: VincyZhang <[email protected]>

1 parent 3240713 commit 699ffca

1 file changed: 7 additions, 0 deletions

intel_extension_for_transformers/transformers/llm/finetuning/finetuning.py
```diff
@@ -303,6 +303,13 @@ def finetune(self):
                 bnb_4bit_use_double_quant=finetune_args.double_quant,
                 bnb_4bit_quant_type=finetune_args.quant_type,
             )
+        elif training_args.device.type == "cpu":
+            self.device_map = "cpu"
+        else:
+            raise NotImplementedError(
+                f"Unsupported device type {training_args.device.type}, only support cpu and cuda now."
+            )
+
         if finetune_args.bits not in [4, 8]:
             raise NotImplementedError(
                 f"Unsupported bits {finetune_args.bits}, only support 4 and 8 now."
```
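The change above sets `self.device_map` from the training device type so that QLoRA finetuning works on CPU as well as CUDA. As a rough standalone sketch of that branching (the helper name `select_device_map` and the CUDA branch's return value are illustrative assumptions, not code from this repository, which only shows the `cpu` and error branches here):

```python
def select_device_map(device_type: str, local_rank: int = -1):
    """Pick a device_map for model loading based on the training device.

    Mirrors the commit's branching: "cpu" maps to the string "cpu",
    anything other than cpu/cuda is rejected. The cuda branch below is
    an illustrative assumption (place the whole model on this process's
    GPU), not part of the shown diff.
    """
    if device_type == "cuda":
        # Assumed behavior: map the entire model ("" key) to one GPU.
        return {"": local_rank if local_rank != -1 else 0}
    elif device_type == "cpu":
        return "cpu"
    else:
        raise NotImplementedError(
            f"Unsupported device type {device_type}, only support cpu and cuda now."
        )
```

The resulting value is the kind of argument `transformers` model loaders accept as `device_map`, which is why a plain `"cpu"` string suffices for the CPU path.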

0 commit comments