
[MLOB-2411] Add Distributed Proxy/Gateway Service Guide for LLM Observability #28593


Open: sabrenner wants to merge 10 commits into master.

Conversation

sabrenner (Contributor)

What does this PR do? What is the motivation?

Merge instructions

Merge readiness:

  • Ready for merge

For Datadog employees:
Merge queue is enabled in this repo. Your branch name MUST follow the `<name>/<description>` convention and include the forward slash (`/`), for example `jdoe/add-proxy-guide`. Without this format, your pull request will not pass CI, the GitLab pipeline will not run, and you won't get a branch preview. A branch preview makes it easier for us to check your PR for issues such as broken links.

If your branch doesn't follow this format, rename it or create a new branch and PR.

To have your PR automatically merged after it receives the required reviews, add the following PR comment:

/merge

Additional notes

github-actions bot commented Apr 4, 2025

github-actions bot added the Images label (Images are added/removed with this PR) Apr 11, 2025
github-actions bot added the Guide label (Content impacting a guide) May 20, 2025
sabrenner requested a review from barieom May 20, 2025 21:16
sabrenner marked this pull request as ready for review May 20, 2025 23:47
sabrenner requested a review from a team as a code owner May 20, 2025 23:47
buraizu added the editorial review label (Waiting on a more in-depth review) May 21, 2025
buraizu (Contributor) commented May 21, 2025

Created DOCS-10979 for documentation team review

barieom (Contributor) left a comment

Thanks Sam, this is looking great! Can we also add a section with one paragraph under Advanced Tracing? https://docs.datadoghq.com/llm_observability/setup/sdk/python/#advanced-tracing


## Overview

Like any traditionally application, LLM applications can be implemented across multiple different microservices. With LLM Observability, if one of these services is a LLM proxy or gateway service, you can trace the LLM calls made by individual LLM applications in a complete end-to-end trace.

Suggested change
Like any traditionally application, LLM applications can be implemented across multiple different microservices. With LLM Observability, if one of these services is a LLM proxy or gateway service, you can trace the LLM calls made by individual LLM applications in a complete end-to-end trace.
Like any traditional application, LLM applications can be implemented across multiple different microservices. With LLM Observability, if one of these services is an LLM proxy (or an LLM gateway service), you can trace the LLM calls made by individual LLM applications in a complete end-to-end trace that captures the full request path that jumps across multiple services.
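For context on the pattern under discussion, here is a minimal client-side sketch using the Python SDK's distributed header helpers. The ml_app names, gateway URL, and response shape are assumptions for illustration only:

```python
import requests

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow

# The calling application's ML app name; this is the value that is
# propagated to the proxy/gateway service.
LLMObs.enable(ml_app="chatbot")

@workflow
def ask_gateway(question: str) -> str:
    # Inject LLM Observability trace context into the outgoing request
    # headers so the gateway's spans join this application's trace.
    headers = LLMObs.inject_distributed_headers({})
    resp = requests.post(
        "http://llm-gateway:8080/chat",  # hypothetical gateway endpoint
        json={"question": question},
        headers=headers,
    )
    return resp.json()["answer"]
```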

{{% /tab %}}
{{< /tabs >}}

When making requests to the proxy or gateway service, the LLM Observability SDKs automatically propagate the ML application name from the original LLM application. The propagated ML application name takes precedence over the ML application name specified in the proxy or gateway service.

Suggested change
When making requests to the proxy or gateway service, the LLM Observability SDKs automatically propagate the ML application name from the original LLM application. The propagated ML application name takes precedence over the ML application name specified in the proxy or gateway service.
When making requests to the proxy or gateway service, the LLM Observability SDK automatically propagates the ML application name from the original LLM application. The propagated ML application name takes precedence over the ML application name specified in the proxy or gateway service.
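To make the precedence behavior concrete, here is a sketch of the receiving proxy/gateway side. The Flask app, route, and handler names are assumptions; the key step is activating the upstream headers before any LLM Observability span starts, so the propagated ml_app ("chatbot" in the sketch above) takes precedence over the gateway's local setting:

```python
from flask import Flask, jsonify, request

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow

# The gateway's local ML app name; a name propagated from the caller
# takes precedence over this value.
LLMObs.enable(ml_app="llm-gateway")

app = Flask(__name__)

@workflow
def handle_chat(question: str) -> str:
    # Call the LLM provider here; auto-instrumented provider spans
    # nest under this workflow span.
    return "stubbed answer"

@app.route("/chat", methods=["POST"])
def chat():
    # Attach to the caller's trace before starting any spans, so the
    # gateway's work appears in the end-to-end trace.
    LLMObs.activate_distributed_headers(dict(request.headers))
    return jsonify({"answer": handle_chat(request.json["question"])})
```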

Labels

  • editorial review: Waiting on a more in-depth review
  • Guide: Content impacting a guide
  • Images: Images are added/removed with this PR
3 participants