The following code therefore does not extract the tokenizer:
if (
    hasattr(task_config, "metadata")
    and task_config.metadata
    and "tokenizer" in task_config.metadata
):
    tokenizer_value = task_config.metadata.get("tokenizer")
    if isinstance(tokenizer_value, str) and tokenizer_value:
        logger.debug(f"Using custom tokenizer from metadata: {tokenizer_value}")
        model_args.append(ModelArg(name="tokenizer", value=tokenizer_value))
We should instead collect it from the stored_benchmark field, which does have the metadata attribute.
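A hedged sketch of the proposed fix, assuming task_config exposes the stored benchmark (with its metadata dict) on a stored_benchmark attribute, as suggested above. ModelArg, StoredBenchmark, and TaskConfig below are minimal stand-ins for the real llama-stack types, defined only to make the sketch self-contained:

```python
# Sketch only: the dataclasses below are stand-ins for the real types.
import logging
from dataclasses import dataclass, field
from typing import Optional

logger = logging.getLogger(__name__)


@dataclass
class ModelArg:  # stand-in for the real ModelArg type
    name: str
    value: str


@dataclass
class StoredBenchmark:  # stand-in: the stored benchmark carries the metadata
    metadata: dict = field(default_factory=dict)


@dataclass
class TaskConfig:  # stand-in for BenchmarkConfig, which has no `metadata` field
    stored_benchmark: Optional[StoredBenchmark] = None


def extract_tokenizer(task_config: TaskConfig, model_args: list) -> None:
    """Read the tokenizer from stored_benchmark.metadata, not task_config.metadata."""
    benchmark = getattr(task_config, "stored_benchmark", None)
    metadata = getattr(benchmark, "metadata", None) or {}
    tokenizer_value = metadata.get("tokenizer")
    if isinstance(tokenizer_value, str) and tokenizer_value:
        logger.debug("Using custom tokenizer from metadata: %s", tokenizer_value)
        model_args.append(ModelArg(name="tokenizer", value=tokenizer_value))


model_args = []
extract_tokenizer(
    TaskConfig(StoredBenchmark({"tokenizer": "custom/tokenizer"})), model_args
)
no_args = []
extract_tokenizer(TaskConfig(), no_args)  # no stored benchmark -> nothing appended
```

Routing through getattr with defaults means the function degrades to a no-op when the stored benchmark or its metadata is absent, instead of raising AttributeError the way the current task_config.metadata access does.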
BTW: for the same reason, the initial extraction logic in _collect_env_vars, _extract_git_source and _extract_pvc_name is probably destined to fail.
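Those helpers could share one fix by routing all metadata reads through a common accessor; a hedged sketch, assuming the same stored_benchmark attribute as above and using "pvc_name" as a purely illustrative key (the real keys are not shown in this issue):

```python
# Sketch: a shared accessor that _collect_env_vars, _extract_git_source and
# _extract_pvc_name could all use, so metadata is read from stored_benchmark
# rather than from BenchmarkConfig itself.
from typing import Any, Optional


def benchmark_metadata(task_config: Any) -> dict:
    """Return the benchmark metadata dict, or {} when it is absent.

    BenchmarkConfig has no `metadata` field, so go through the
    `stored_benchmark` attribute instead.
    """
    benchmark = getattr(task_config, "stored_benchmark", None)
    return getattr(benchmark, "metadata", None) or {}


def _extract_pvc_name(task_config: Any) -> Optional[str]:
    # Illustrative consumer; the "pvc_name" key is an assumption.
    return benchmark_metadata(task_config).get("pvc_name")


class _NS:  # tiny stand-in for the real config objects
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


demo = _NS(stored_benchmark=_NS(metadata={"pvc_name": "eval-data"}))
pvc = _extract_pvc_name(demo)        # extracted from stored_benchmark.metadata
missing = benchmark_metadata(_NS())  # no stored_benchmark -> {}
```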
dmartinol changed the title on May 21, 2025:
"bug: tokenizer option is not" → "bug: tokenizer field cannot be extracted from BenchmarkConfig"
The tokenizer option is extracted from task_config.metadata, but task_config is of type BenchmarkConfig, which does not have the metadata field: https://github.com/meta-llama/llama-stack/blob/2890243107c74a7a88b82595db49e9540d0a0561/llama_stack/apis/eval/eval.py#L50