This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Commit 7914ccc

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
1 parent a08344a commit 7914ccc

File tree

  • intel_extension_for_transformers/transformers/utils

1 file changed: +1 −1 lines changed

intel_extension_for_transformers/transformers/utils/config.py

Lines changed: 1 addition & 1 deletion
@@ -833,7 +833,7 @@ def __init__(
         self.double_quant_group_size = double_quant_group_size
         # "transformer.output_layer" for chatglm series model.
         # "embed_out" for dolly v2 series model.
-        self.llm_int8_skip_modules = kwargs.get("llm_int8_skip_modules",
+        self.llm_int8_skip_modules = kwargs.get("llm_int8_skip_modules",
                                                 ["lm_head", "transformer.output_layer", "embed_out"])
         self.use_ggml = use_ggml
         self.use_quant = use_quant
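
For context, the changed line sets a default list of modules to keep out of INT8 quantization when the caller does not pass llm_int8_skip_modules explicitly. Below is a minimal sketch of that kwargs.get default pattern; the class name and the other constructor arguments are simplified assumptions, not the full config class from the repository — only the llm_int8_skip_modules handling mirrors the diff.

```python
# Minimal sketch of the kwargs.get default pattern shown in the diff.
# QuantizationConfigSketch and its other arguments are assumptions for
# illustration; only the llm_int8_skip_modules handling follows config.py.
class QuantizationConfigSketch:
    def __init__(self, use_ggml=False, use_quant=True, **kwargs):
        # Modules left in higher precision during INT8 quantization:
        #   "lm_head"                  - common LM output head name
        #   "transformer.output_layer" - chatglm series models
        #   "embed_out"                - dolly v2 series models
        self.llm_int8_skip_modules = kwargs.get(
            "llm_int8_skip_modules",
            ["lm_head", "transformer.output_layer", "embed_out"],
        )
        self.use_ggml = use_ggml
        self.use_quant = use_quant


# A caller overrides the default simply by passing the keyword argument.
cfg = QuantizationConfigSketch(llm_int8_skip_modules=["lm_head"])
print(cfg.llm_int8_skip_modules)  # ['lm_head']
```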
