placeholder for hallucination #29426

Draft · wants to merge 2 commits into master

content/en/llm_observability/terms/_index.md (21 additions, 0 deletions)

@@ -173,6 +173,27 @@
| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Input | Evaluated using LLM | Topic relevancy assesses whether each prompt-response pair remains aligned with the intended subject matter of the Large Language Model (LLM) application. For instance, an e-commerce chatbot receiving a question about a pizza recipe would be flagged as irrelevant. |

#### Hallucination

This check identifies instances where the LLM makes a claim that disagrees with the provided input context. TODO: LINK TO SDK DOCS FOR INSTRUMENTATION

TODO: screenshot
{{< img src="llm_observability/evaluations/hallucination_1.png" alt="A Hallucination evaluation detected by an LLM in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Output | Evaluated using LLM | Hallucination flags any output that disagrees with the context provided to the LLM. |
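
The check compares claims in the generated output against the input context captured on the corresponding LLM span, so the application needs to be instrumented with the LLM Observability SDK. The snippet below is a minimal, illustrative Python sketch using `ddtrace.llmobs`; the `call_model` helper and the specific names and values are placeholders, and the decorator and `annotate` arguments shown are assumptions that may differ from the current SDK API.

```python
# Illustrative sketch only: capture the prompt context and the model's answer
# on an LLM span so the Hallucination check has both sides to compare.
# The decorator and annotate parameters are assumptions and may not match
# the current ddtrace LLM Observability SDK exactly.
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import llm

LLMObs.enable(ml_app="docs-assistant")  # "docs-assistant" is a placeholder app name

@llm(model_name="gpt-4o", model_provider="openai")
def answer_with_context(question: str, context: str) -> str:
    messages = [
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ]
    completion = call_model(messages)  # hypothetical helper that calls your model
    # Record the exact input context and the generated answer; the evaluation
    # checks whether claims in the answer contradict or go beyond this context.
    LLMObs.annotate(input_data=messages, output_data=completion)
    return completion
```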

##### Hallucination configuration

Hallucination detection distinguishes between two types of hallucinations, both of which can be configured when the Hallucination check is enabled.

| Configuration Option | Description |
|---|---|
| Contradiction | Claims made in the LLM-generated response that go directly against the provided context |
| Unsupported Claim | Claims made in the LLM-generated response that are not grounded in the context |

Contradictions are always detected, while Unsupported Claims can optionally be included. For sensitive use cases, Datadog recommends including Unsupported Claims.
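
As a concrete illustration of the distinction (an example, not the evaluator's implementation), consider the following context and two responses. With the default configuration, only the contradiction is flagged; enabling Unsupported Claims also flags the second response.

```python
# Illustrative example only -- not how the managed evaluation is implemented.
context = "The X-2 battery charges fully in 45 minutes and weighs 300 g."

# Contradiction: directly conflicts with the context (45 minutes vs. two hours).
contradictory_response = "The X-2 battery takes about two hours to charge."

# Unsupported Claim: plausible, but nothing in the context supports it.
unsupported_response = "The X-2 battery is waterproof up to 10 meters."
```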


#### Failure to Answer

This check identifies instances where the LLM fails to deliver an appropriate response, which may occur due to limitations in the LLM's knowledge or understanding, ambiguity in the user query, or the complexity of the topic.