
Retry prompt model response template #2008


Open
hovi wants to merge 2 commits into main from retry-prompt-model-response-template

Conversation

hovi (Contributor) commented Jun 17, 2025

Draft for #1993

hyperlint-ai bot (Contributor) commented Jun 17, 2025

PR Change Summary

Updated the prompt model response template to reflect changes in token usage.

  • Adjusted request tokens and total tokens in the usage metrics
  • Modified the model name to reflect the latest version

Modified Files

  • docs/agents.md


hovi force-pushed the retry-prompt-model-response-template branch from 689244b to d138cb5 on June 17, 2025 23:40
@@ -423,6 +423,9 @@ def otel_event(self, _settings: InstrumentationSettings) -> Event:
error_details_ta = pydantic.TypeAdapter(list[pydantic_core.ErrorDetails], config=pydantic.ConfigDict(defer_build=True))


DEFAULT_MODEL_RESPONSE_TEMPLATE = 'Validator response:\n{description}\n\nFix the errors and try again.'
Kludex (Member) commented:

Can we make the default be what it currently is?

Suggested change
DEFAULT_MODEL_RESPONSE_TEMPLATE = 'Validator response:\n{description}\n\nFix the errors and try again.'
DEFAULT_MODEL_RESPONSE_TEMPLATE = '{description}\n\nFix the errors and try again.'

hovi (Contributor, Author) commented:

Hi @Kludex
I believe what @DouweM requested was a "fix" in the sense of changing the string, and this PR is meant to do that rather than focus on making it more flexible/configurable.
I created a separate issue for that: #2009.

Also, after sleeping on it, I noticed that the string was supposed to be added only if tool_name is None, so I need to change this.
And extracting the constant outside of the dataclass isn't necessary; I initially did it thinking I would reference it in tests, but then saw it isn't needed because of the inline-snapshot usage.

Perhaps something simple along these lines, leaving all the flexibility code for the other issue:

        if self.tool_name:
            prefix = ""
        else:
            prefix = "Validator feedback:\n"
        return f'{prefix}{description}\n\nFix the errors and try again.'

What do you guys think?
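For concreteness, here is a standalone sketch of what that branching would produce (the retry_message helper and the feedback text are illustrative only, not part of the PR):

    def retry_message(description: str, tool_name: str | None) -> str:
        """Toy version of the prefix logic above, outside the dataclass."""
        prefix = '' if tool_name else 'Validator feedback:\n'
        return f'{prefix}{description}\n\nFix the errors and try again.'

    # Tool retries keep the old bare message:
    retry_message('Price must be positive.', tool_name='get_price')
    # -> 'Price must be positive.\n\nFix the errors and try again.'

    # Output-validator retries (tool_name is None) gain the new prefix:
    retry_message('Price must be positive.', tool_name=None)
    # -> 'Validator feedback:\nPrice must be positive.\n\nFix the errors and try again.'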

DouweM (Contributor) commented:

@hovi For this PR I was expecting to add the prefix only if isinstance(self.content, str) (because the other branch already has the N validation errors: prefix) and self.tool_name is None.

hovi (Contributor, Author) commented:

Ah! Thanks @DouweM

    def model_response(self) -> str:
        """Return a string message describing why the retry is requested."""
        if isinstance(self.content, str):
            if self.tool_name is None:
                description = f'Validator feedback:\n{self.content}'
            else:
                description = self.content
        else:
            json_errors = error_details_ta.dump_json(self.content, exclude={'__all__': {'ctx'}}, indent=2)
            description = f'{len(self.content)} validation errors: {json_errors.decode()}'
        return f'{description}\n\nFix the errors and try again.'

Is this what you meant?

DouweM (Contributor) commented:

@hovi Yep, that's pretty much what I was thinking, assuming @Kludex agrees. I'd make it Validation feedback, though.

hovi (Contributor, Author) commented:

@Kludex please take a look at this when you get a minute if you can 🙏

DouweM (Contributor) commented:

@hovi As in the example you shared above, can we please only include the new Validation feedback: prefix if there's not already the N validation errors: prefix?

I'd also prefer to NOT have an overridable template on RetryPromptPart for now, and consider the question of how to override hard-coded prompt strings separately.
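Putting those constraints together, the method might end up roughly like the sketch below (a consolidation of the snippet shared earlier with the Validation feedback: wording; it reuses error_details_ta from the hunk above and is not necessarily the final merged code):

    def model_response(self) -> str:
        """Return a string message describing why the retry is requested."""
        if isinstance(self.content, str):
            if self.tool_name is None:
                # Plain-string validator feedback gets the new prefix.
                description = f'Validation feedback:\n{self.content}'
            else:
                # Tool-specific retries keep the bare message, as before.
                description = self.content
        else:
            # Pydantic errors already carry the 'N validation errors:' prefix, so no extra prefix here.
            json_errors = error_details_ta.dump_json(self.content, exclude={'__all__': {'ctx'}}, indent=2)
            description = f'{len(self.content)} validation errors: {json_errors.decode()}'
        return f'{description}\n\nFix the errors and try again.'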

Kludex self-assigned this on Jun 18, 2025
DouweM assigned himself and unassigned Kludex on Jun 25, 2025
pedroallenrevez (Contributor) commented:

Nice addition! May I suggest a way to also specify how to format it?
I use Jinja templating extensively, and it might be useful.
So, just a method like customize_model_template_formatting or something.
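For illustration only, one hypothetical shape that suggestion could take, rendered with jinja2 (nothing like this exists in pydantic-ai today; the function and constant names below are made up):

    from jinja2 import Template

    # Hypothetical: let callers supply a Jinja template for the retry message
    # instead of the hard-coded string.
    DEFAULT_RETRY_TEMPLATE = '{{ description }}\n\nFix the errors and try again.'

    def render_retry_message(description: str, template: str = DEFAULT_RETRY_TEMPLATE) -> str:
        """Render the retry message from a caller-supplied Jinja template."""
        return Template(template).render(description=description)

    render_retry_message('Validation feedback:\nPrice must be positive.')
    # -> 'Validation feedback:\nPrice must be positive.\n\nFix the errors and try again.'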
