AbstractJsonExtractorOutputGuardrail behavior #1417
Comments
Good question. @cescoffier @mariofusco WDYT? |
Hum, I would have used the Quarkus mapper. Why don't we do so? Anyway, yes, I agree. |
Yeah, probably just an oversight. But even so, I am not sure the problem here would be addressed (and TBH, I am still not convinced it should be, because do we honestly want to fail if the LLM returns more fields than we expected?) |
That’s a good point. How strict do we want to be? |
I personally think that what is proposed here should not be done (or if it is, should not be the default) as we have no way of knowing whether an LLM will decide to include additional fields that might be totally irrelevant. |
I understand. But then the purpose of the class might be misleading, because in reality it’s only validating the JSON. And it also seems redundant, since, as documented, the same thing can be done in a simpler way. |
I agree on this; I would at least keep the current behavior as the default. We could make this configurable, or allow plugging in your own (see the sketch below). |
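A minimal sketch of what opting into strictness could look like if the guardrail used the Quarkus-managed mapper (as suggested earlier in the thread) instead of its own instance. The class name is hypothetical and this is not how the project is currently wired:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.quarkus.jackson.ObjectMapperCustomizer;
import jakarta.inject.Singleton;

// Hypothetical customizer: it affects the Quarkus-managed ObjectMapper only,
// so it would reach the guardrail only if the guardrail were changed to
// inject that mapper instead of building its own.
@Singleton
public class StrictJsonCustomizer implements ObjectMapperCustomizer {

    @Override
    public void customize(ObjectMapper mapper) {
        // Reject LLM responses that carry fields the target DTO does not declare.
        mapper.enable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
    }
}
```

Routing through the Quarkus mapper would let each application decide how strict to be, instead of the guardrail imposing a single default.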
The |
When I read the documentation, without having seen the class's source code, my first thought was that it would validate both the JSON and the deserialization. |
It definitely could be made so |
At this point I'm afraid I don't understand what you mean by "validate the deserialization". The guardrail tries to deserialize the JSON into an instance of the target class and fails if it is not able to do so. Isn't this equivalent to validating the deserialization (and actually also performing it)? |
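To make the distinction being debated concrete, here is a small, self-contained Jackson example, independent of the project code; the Person type and the payload are illustrative. With a lenient mapper the unexpected field is silently dropped and deserialization succeeds, while a strict mapper rejects the same payload:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StrictnessDemo {

    // Hypothetical target type; not taken from the project.
    public static class Person {
        public String name;
    }

    public static void main(String[] args) throws Exception {
        // Syntactically valid JSON that carries a field the DTO does not declare.
        String llmOutput = "{\"name\": \"Ada\", \"hobby\": \"chess\"}";

        // Lenient mapper: the extra "hobby" field is silently dropped,
        // so a guardrail based on it would pass.
        ObjectMapper lenient = new ObjectMapper()
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        Person p = lenient.readValue(llmOutput, Person.class);
        System.out.println(p.name); // Ada

        // Strict mapper: the same payload now fails with
        // UnrecognizedPropertyException, the behavior this issue asks about.
        ObjectMapper strict = new ObjectMapper()
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, true);
        strict.readValue(llmOutput, Person.class); // throws
    }
}
```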
The AbstractJsonExtractorOutputGuardrail class injects the JsonGuardrailsUtils class, which uses an ObjectMapper without any specific configuration. As a result, any DTO will always be considered valid as long as the JSON itself is valid.
Is this the expected behavior?
Shouldn't the ObjectMapper be configured with FAIL_ON_UNKNOWN_PROPERTIES set to true?
```java
new ObjectMapper()
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, true)
        .readValue(responseFromLLM.text(), ExampleDto.class);
```
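For concreteness, a hypothetical ExampleDto (the field names are illustrative, not taken from the project) and the effect the stricter configuration above would have:

```java
// Hypothetical DTO; only these fields are declared.
public class ExampleDto {
    public String name;
    public int age;
}

// {"name": "Ada", "age": 36}                  -> accepted with or without the flag
// {"name": "Ada", "age": 36, "mood": "happy"} -> rejected when FAIL_ON_UNKNOWN_PROPERTIES
//                                                is enabled (UnrecognizedPropertyException)
```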