Commit 278471f

Merge branch 'master' into mtullalizardi-patch-1
2 parents 33dfc32 + 711ab08 commit 278471f


47 files changed (+16041, -281 lines)


content/en/agent/faq/_index.md

Lines changed: 1 addition & 1 deletion
Original file line number | Diff line number | Diff line change
@@ -27,6 +27,6 @@ aliases:
2727
{{< nextlink href="agent/faq/docker-hub" >}}Docker Hub{{< /nextlink >}}
2828
{{< nextlink href="agent/faq/proxy_example_haproxy" >}}Send traffic to Datadog using HAProxy{{< /nextlink >}}
2828
{{< nextlink href="agent/faq/proxy_example_nginx" >}}Send traffic to Datadog using NGINX{{< /nextlink >}}
30-
{{< nextlink href="agent/faq/fips_proxy" >}}Agent FIPS proxy (deprecated){{< /nextlink >}}
30+
{{< nextlink href="agent/faq/fips_proxy" >}}Agent FIPS proxy{{< /nextlink >}}
3131

3232
{{< /whatsnext >}}

content/en/cloud_cost_management/tag_pipelines.md

Lines changed: 16 additions & 6 deletions
@@ -22,13 +22,15 @@ Tag pipelines are applied to Cloud Cost metrics from all providers. Tag pipeline
2222

2323
## Create a ruleset
2424

25+
To create a ruleset, navigate to [**Cloud Cost > Settings > Tag Pipelines**][1].
26+
2527
<div class="alert alert-warning"> You can create up to 100 rules. API-based Reference Tables are not supported. </div>
2628

2729
Before creating individual rules, create a ruleset (a folder for your rules) by clicking **+ New Ruleset**.
2830

2931
Within each ruleset, click **+ Add New Rule** and select a rule type: **Add tag**, **Alias tag keys**, or **Map multiple tags**. These rules execute in a sequential, deterministic order from top to bottom.
3032

31-
{{< img src="cloud_cost/tags_order.png" alt="A list of tag rules on the Tag Pipelines page displaying various categories such as team, account, service, department, business unit, and more" style="width:80%;" >}}
33+
{{< img src="cloud_cost/pipelines-create-ruleset.png" alt="A list of tag rules on the Tag Pipelines page displaying various categories such as team, account, service, department, business unit, and more" style="width:60%;" >}}
3234

3335
You can organize rules and rulesets to ensure the order of execution matches your business logic.
3436

@@ -38,32 +40,40 @@ Add a new tag (key + value) based on the presence of existing tags on your Cloud
3840

3941
For example, you can create a rule to tag all resources with their business unit based on the services those resources are a part of.
4042

41-
{{< img src="cloud_cost/tags_addnew.png" alt="Add new business unit tag to resources with service:processing, service:creditcard, or service:payment-notification." style="width:60%;" >}}
43+
{{< img src="cloud_cost/pipelines-add-tag.png" alt="Add new business unit tag to resources with service:process-agent or service:process-billing." style="width:60%;" >}}
44+
45+
Under the **Additional options** section, you have the following options:
4246

43-
To ensure the rule only applies if the `business_unit` tag doesn't already exist, click the toggle in the **Additional options** section.
47+
- **Only apply if tag `{tag}` doesn't exist** - Ensures the rule applies only if the specified tag (`business-unit` in the example above) doesn't already exist.
48+
- **Apply case-insensitive matching to resource tags** - Enables case-insensitive matching between tags defined in the `To resources with tag(s)` field and tags from the cost data. For example, if the resource tag from the UI is `foo:bar` and the tag from the cost data is `Foo:bar`, the two can be matched.
4449

4550
### Alias tag keys
4651

4752
Map existing tag values to a more standardized tag.
4853

4954
For example, if your organization wants to use the standard `application` tag key, but several teams have a variation of that tag (like `app`, `webapp`, or `apps`), you can alias `apps` to `application`. Each alias tag rule allows you to alias a maximum of 25 tag keys to a new tag.
5055

51-
{{< img src="cloud_cost/tags_alias.png" alt="Add application tag to resources with app, webapp, or apps tag." style="width:60%;" >}}
56+
{{< img src="cloud_cost/pipelines-alias-tag.png" alt="Add application tag to resources with app, webapp, or apps tag." style="width:60%;" >}}
5257

5358
Add the application tag to resources with `app`, `webapp`, or `apps` tags. The rule stops executing for each resource after the first match is found. For example, if a resource already has an `app` tag, the rule no longer attempts to identify a `webapp` or `apps` tag.
5459

5560
To ensure the rule only applies if the `application` tag doesn't already exist, click the toggle in the **Additional options** section.
5661

5762
### Map multiple tags
5863

59-
Use [Reference Tables][2] to add multiple tags to cost data without creating multiple rules. This will map the values from your Reference Table's primary key column to values from cost tags. If found, the pipelines adds the selected Reference Table columns as tags to cost data.
64+
Use [Reference Tables][2] to add multiple tags to cost data without creating multiple rules. This maps the values from your Reference Table's primary key column to values from cost tags. If a match is found, the pipeline adds the selected Reference Table columns as tags to cost data.
6065

6166
For example, if you want to add information about which VPs, organizations, and business_units different AWS and Azure accounts fall under, you can create a table and map the tags.
6267

63-
{{< img src="cloud_cost/tags_mapmultiple.png" alt="Add account metadata like vp, organization, and businessunit using reference tables for tag pipelines" style="width:60%;" >}}
68+
{{< img src="cloud_cost/pipelines-map-multiple-tags.png" alt="Add account metadata like customer_name using reference tables for tag pipelines" style="width:60%;" >}}
6469

6570
Similar to [Alias tag keys](#alias-tag-keys), the rule stops executing for each resource after the first match is found. For example, if an `aws_member_account_id` is found, the rule no longer attempts to find a `subscriptionid`.
6671

72+
Under the **Additional options** section, you have the following options:
73+
74+
- **Only apply if columns don't exist** - Ensures the defined columns are added as tags only if tags with those keys do not already exist in the cost data.
75+
- **Apply case-insensitive matching for primary key values** - Enables case-insensitive matching between the primary key value from the Reference Table and the value of the tag in the cost data whose key matches the primary key. For example, if the primary key value from the UI is `foo:Bar` and the tag from the cost data is `foo:bar`, the two can be matched.
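The mapping above can be pictured with a hypothetical Reference Table row; the account ID, column names, and values below are made up for illustration and reuse the example tags mentioned earlier:

```json
{
  "reference_table_primary_key": "aws_member_account_id",
  "matched_cost_tag": "aws_member_account_id:123456789012",
  "tags_added_from_columns": {
    "vp": "jane-doe",
    "organization": "ecommerce",
    "business_unit": "payments"
  }
}
```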
76+
6777
## Reserved tags
6878

6979
Certain tags such as `env` and `host` are [reserved tags][4], and are part of [Unified Service Tagging][3]. The `host` tag cannot be added in Tag Pipelines.

content/en/dora_metrics/_index.md

Lines changed: 1 addition & 2 deletions
@@ -29,7 +29,6 @@ further_reading:
2929
<div class="alert alert-warning">DORA Metrics is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.</div>
3030
{{< /site-region >}}
3131

32-
<div class="alert alert-warning">DORA Metrics is in Preview.</div>
3332

3433
## Overview
3534

@@ -38,7 +37,7 @@ DevOps Research and Assessment (DORA) metrics are [four key metrics][1] that ind
3837
Deployment frequency
3938
: How often an organization successfully releases to production.
4039

41-
Lead time for changes
40+
Change lead time
4241
: The amount of time it takes a commit to get into production.
4342

4443
Change failure rate

content/en/dora_metrics/data_collected/_index.md

Lines changed: 0 additions & 1 deletion
@@ -19,7 +19,6 @@ further_reading:
1919
<div class="alert alert-warning">DORA Metrics is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.</div>
2020
{{< /site-region >}}
2121

22-
<div class="alert alert-warning">DORA Metrics is in Preview.</div>
2322

2423
## Overview
2524

content/en/dora_metrics/setup/_index.md

Lines changed: 0 additions & 1 deletion
@@ -12,7 +12,6 @@ further_reading:
1212
<div class="alert alert-warning">DORA Metrics is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.</div>
1313
{{< /site-region >}}
1414

15-
<div class="alert alert-warning">DORA Metrics is in Preview.</div>
1615

1716
## Overview
1817

content/en/dora_metrics/setup/deployments.md

Lines changed: 0 additions & 1 deletion
@@ -26,7 +26,6 @@ further_reading:
2626
<div class="alert alert-warning">DORA Metrics is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.</div>
2727
{{< /site-region >}}
2828

29-
<div class="alert alert-warning">DORA Metrics is in Preview.</div>
3029

3130
## Overview
3231

content/en/dora_metrics/setup/failures.md

Lines changed: 1 addition & 2 deletions
@@ -30,11 +30,10 @@ further_reading:
3030
<div class="alert alert-warning">DORA Metrics is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.</div>
3131
{{< /site-region >}}
3232

33-
<div class="alert alert-warning">DORA Metrics is in Preview.</div>
3433

3534
## Overview
3635

37-
Failed deployment events, currently interpreted through failure events, are used to compute [change failure rate](#calculating-change-failure-rate) and [time to restore](#calculating-time-to-restore).
36+
Failure events are used to compute [change failure rate](#calculating-change-failure-rate) and [time to restore](#calculating-time-to-restore).
3837

3938
## Selecting and configuring a failure data source
4039

content/en/integrations/guide/source-code-integration.md

Lines changed: 1 addition & 3 deletions
@@ -225,9 +225,7 @@ If you are using a host, you have two options: using Microsoft SourceLink or con
225225
The Node.js client library version 3.21.0 or later is required.
226226
</br>
227227
</br>
228-
Displaying code links and snippets for TypeScript applications requires your Node application to be run with:
229-
</br>
230-
<a href="https://nodejs.org/dist/v12.22.12/docs/api/cli.html#cli_enable_source_maps"><code>--enable-source-maps</code></a>.
228+
For transpiled Node.js applications (for example, TypeScript), make sure to generate and publish source maps with the deployed application, and to run Node.js with the <a href="https://nodejs.org/docs/latest/api/cli.html#--enable-source-maps"><code>--enable-source-maps</code></a> flag. Otherwise, code links and snippets will not work.
231229
</div>
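The source-map setup described above can be sketched with a minimal `tsconfig.json`; the `outDir` value is illustrative:

```json
{
  "compilerOptions": {
    "outDir": "dist",
    "sourceMap": true
  }
}
```

After compiling, start the application with, for example, `node --enable-source-maps dist/app.js` (path is illustrative) so stack traces map back to the original TypeScript sources.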
232230

233231
#### Containers

content/en/logs/explorer/search.md

Lines changed: 2 additions & 2 deletions
@@ -19,9 +19,9 @@ further_reading:
1919

2020
The [Logs Explorer][5] lets you search and view individual logs as a list. However, the most valuable insights often come from aggregating logs at scale. Using the search feature, you can filter logs and visualize them as timeseries charts, top lists, tree maps, pie charts, or tables to better understand trends, patterns, and outliers across your log data.
2121

22-
## Natural language query
22+
## Natural language queries
2323

24-
<div class="alert alert-info">Natural language queries for Logs is in Preview. To access this feature, request through <a href="https://www.datadoghq.com/product-preview/natural-language-queries-for-logs/">this form</a>.</div>
24+
<div class="alert alert-info"><strong>Built with Llama</strong>, Natural Language Queries (NLQ) for Logs is in Preview. To request access, fill out <a href="https://www.datadoghq.com/product-preview/natural-language-queries-for-logs/">this form</a>.</div>
2525

2626
Use Natural Language Queries (NLQ) to describe what you're looking for in plain English. Datadog automatically translates your request into a structured log query, making it easier to explore logs without needing to write complex syntax. To access this feature, click **Ask** in the search field.
2727

content/en/logs/workspaces/_index.md

Lines changed: 0 additions & 4 deletions
@@ -7,10 +7,6 @@ further_reading:
77
text: "Take enhanced control of your log data with Datadog Log Workspaces"
88
---
99

10-
{{< callout url="https://www.datadoghq.com/private-beta/log-workspaces/" header="Access the Preview!" >}}
11-
Log Workspaces is in Preview.
12-
{{< /callout >}}
13-
1410
## Overview
1511
During an incident investigation, you might need to run complex queries, such as combining attributes from multiple log sources or transforming log data, to analyze your logs. Use Log Workspaces to run queries to:
1612

content/en/monitors/guide/alert_aggregation.md

Lines changed: 2 additions & 2 deletions
@@ -76,7 +76,7 @@ If you are managing your monitors with the API, use the variable `notify_by` to
7676

7777
| Type of Alert | Configuration Example |
7878
|-------------------|----------------------------------------|
79-
| Simple Alert | `"notify_by": [*]` |
79+
| Simple Alert | `"notify_by": ["*"]` |
8080
| Multi Alert | `"notify_by": [<group>]`, for example, `"notify_by": ["topic"]` |
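As a sketch, the setting from the table above inside a monitor definition; the monitor name, query, message, and placement of `notify_by` under `options` reflect the author's reading of the API and should be checked against the linked API documentation:

```json
{
  "name": "Error rate by topic",
  "type": "query alert",
  "query": "avg(last_5m):avg:app.errors{*} by {topic} > 10",
  "message": "High error rate on {{topic.name}}",
  "options": {
    "notify_by": ["topic"]
  }
}
```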
8181

8282
For more information, see the [API documentation][4].
@@ -88,4 +88,4 @@ For more information, see the [API documentation][4].
8888
[1]: /monitors/configuration/?tab=thresholdalert#simple-alert
8989
[2]: /monitors/notify/variables/?tab=is_alert#triggered-variables
9090
[3]: /monitors/configuration/?tab=thresholdalert#multi-alert
91-
[4]: /api/latest/monitors/#create-a-monitor
91+
[4]: /api/latest/monitors/#create-a-monitor

content/en/network_monitoring/cloud_network_monitoring/setup.md

Lines changed: 50 additions & 6 deletions
@@ -90,7 +90,7 @@ Cloud Network Monitoring supports use of the following provisioning systems:
9090

9191
## Setup
9292

93-
Given this tool's focus and strength is in analyzing traffic _between_ network endpoints and mapping network dependencies, it is recommended to install it on a meaningful subset of your infrastructure and a **_minimum of 2 hosts_** to maximize value.
93+
Cloud Network Monitoring is designed to analyze traffic _between_ network endpoints and map network dependencies. Datadog recommends installing CNM on a meaningful subset of your infrastructure and a **_minimum of 2 hosts_** to maximize value.
9494

9595
{{< tabs >}}
9696
{{% tab "Agent (Linux)" %}}
@@ -395,11 +395,10 @@ If you already have the [Agent running with a manifest][4]:
395395
[5]: https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L1519-L1523
396396
{{% /tab %}}
397397
{{% tab "Operator" %}}
398-
<div class="alert alert-warning">The Datadog Operator is Generally Available with the `1.0.0` version, and it reconciles the version `v2alpha1` of the DatadogAgent Custom Resource. </div>
399398
400-
[The Datadog Operator][1] is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options.
399+
[The Datadog Operator][1] simplifies deploying the Datadog Agent on Kubernetes and OpenShift. It provides deployment status, health, and error reporting through its Custom Resource status, while reducing the risk of misconfiguration with higher-level configuration options.
401400
402-
To enable Cloud Network Monitoring in Operator, use the following configuration:
401+
To enable Cloud Network Monitoring on the Datadog Operator, use the following configuration:
403402
404403
```yaml
405404
apiVersion: datadoghq.com/v2alpha1
@@ -412,7 +411,7 @@ spec:
412411
enabled: true
413412
```
414413
415-
[1]: https://github.com/DataDog/datadog-operator
414+
[1]: /containers/datadog_operator
416415
{{% /tab %}}
417416
{{% tab "Docker" %}}
418417
@@ -474,10 +473,55 @@ services:
474473
[1]: https://app.datadoghq.com/organization-settings/api-keys
475474
{{% /tab %}}
476475
{{% tab "ECS" %}}
477-
To set up on Amazon ECS, see the [Amazon ECS][1] documentation page.
476+
To set up CNM on Amazon ECS, see the [Amazon ECS][1] documentation page.
478477
479478
480479
[1]: /agent/amazon_ecs/#network-performance-monitoring-collection-linux-only
480+
{{% /tab %}}
481+
482+
{{% tab "ECS Fargate" %}}
483+
484+
<div class="alert alert-info">ECS Fargate for CNM is in Preview. Reach out to your Datadog representative to sign up.</div>
485+
486+
To enable Cloud Network Monitoring on ECS Fargate, use the following instructions:
487+
488+
**Requires Agent version `7.58` or higher**.
489+
490+
- For a new Fargate deployment, configure the Datadog Agent to monitor Fargate on ECS by enabling [process collection][1] on your Fargate hosts.
491+
492+
- For existing deployments, update your `task.json` file to include the following configuration settings:
493+
494+
```json
495+
{
496+
"containerDefinitions": [
497+
(...)
498+
"environment": [
499+
(...)
500+
{
501+
"name": "DD_SYSTEM_PROBE_NETWORK_ENABLED",
502+
"value": "true"
503+
},
504+
{
505+
"name": "DD_NETWORK_CONFIG_ENABLE_EBPFLESS",
506+
"value": "true"
507+
},
508+
{
509+
"name": "DD_PROCESS_AGENT_ENABLED",
510+
"value": "true"
511+
}
512+
],
513+
"linuxParameters": {
514+
"capabilities": {
515+
"add": [
516+
"SYS_PTRACE"
517+
]
518+
}
519+
},
520+
],
521+
}
522+
```
523+
[1]: /integrations/ecs_fargate/?tab=webui#process-collection
524+
481525
{{% /tab %}}
482526
{{< /tabs >}}
483527

content/en/observability_pipelines/set_up_pipelines/archive_logs/amazon_data_firehose.md

Lines changed: 2 additions & 12 deletions
@@ -407,19 +407,9 @@ For the Datadog Archives destination, follow the instructions for the cloud prov
407407
{{% observability_pipelines/install_worker/docker %}}
408408

409409
{{% /tab %}}
410-
{{% tab "Amazon EKS" %}}
411-
412-
{{% observability_pipelines/install_worker/amazon_eks %}}
413-
414-
{{% /tab %}}
415-
{{% tab "Azure AKS" %}}
416-
417-
{{% observability_pipelines/install_worker/azure_aks %}}
418-
419-
{{% /tab %}}
420-
{{% tab "Google GKE" %}}
410+
{{% tab "Kubernetes" %}}
421411

422-
{{% observability_pipelines/install_worker/google_gke %}}
412+
{{% observability_pipelines/install_worker/kubernetes %}}
423413

424414
{{% /tab %}}
425415
{{% tab "Linux (APT)" %}}

content/en/observability_pipelines/set_up_pipelines/archive_logs/amazon_s3.md

Lines changed: 2 additions & 12 deletions
@@ -406,19 +406,9 @@ For the Datadog Archives destination, follow the instructions for the cloud prov
406406
{{% observability_pipelines/install_worker/docker %}}
407407

408408
{{% /tab %}}
409-
{{% tab "Amazon EKS" %}}
410-
411-
{{% observability_pipelines/install_worker/amazon_eks %}}
412-
413-
{{% /tab %}}
414-
{{% tab "Azure AKS" %}}
415-
416-
{{% observability_pipelines/install_worker/azure_aks %}}
417-
418-
{{% /tab %}}
419-
{{% tab "Google GKE" %}}
409+
{{% tab "Kubernetes" %}}
420410

421-
{{% observability_pipelines/install_worker/google_gke %}}
411+
{{% observability_pipelines/install_worker/kubernetes %}}
422412

423413
{{% /tab %}}
424414
{{% tab "Linux (APT)" %}}

content/en/observability_pipelines/set_up_pipelines/archive_logs/kafka.md

Lines changed: 2 additions & 12 deletions
@@ -410,19 +410,9 @@ For the Datadog Archives destination, follow the instructions for the cloud prov
410410
{{% observability_pipelines/install_worker/docker %}}
411411

412412
{{% /tab %}}
413-
{{% tab "Amazon EKS" %}}
414-
415-
{{% observability_pipelines/install_worker/amazon_eks %}}
416-
417-
{{% /tab %}}
418-
{{% tab "Azure AKS" %}}
419-
420-
{{% observability_pipelines/install_worker/azure_aks %}}
421-
422-
{{% /tab %}}
423-
{{% tab "Google GKE" %}}
413+
{{% tab "Kubernetes" %}}
424414

425-
{{% observability_pipelines/install_worker/google_gke %}}
415+
{{% observability_pipelines/install_worker/kubernetes %}}
426416

427417
{{% /tab %}}
428418
{{% tab "Linux (APT)" %}}

content/en/observability_pipelines/set_up_pipelines/dual_ship_logs/amazon_data_firehose.md

Lines changed: 2 additions & 12 deletions
@@ -360,19 +360,9 @@ For the Datadog Archives destination, follow the instructions for the cloud prov
360360
{{% observability_pipelines/install_worker/docker %}}
361361

362362
{{% /tab %}}
363-
{{% tab "Amazon EKS" %}}
363+
{{% tab "Kubernetes" %}}
364364

365-
{{% observability_pipelines/install_worker/amazon_eks %}}
366-
367-
{{% /tab %}}
368-
{{% tab "Azure AKS" %}}
369-
370-
{{% observability_pipelines/install_worker/azure_aks %}}
371-
372-
{{% /tab %}}
373-
{{% tab "Google GKE" %}}
374-
375-
{{% observability_pipelines/install_worker/google_gke %}}
365+
{{% observability_pipelines/install_worker/kubernetes %}}
376366

377367
{{% /tab %}}
378368
{{% tab "Linux (APT)" %}}

content/en/observability_pipelines/set_up_pipelines/dual_ship_logs/amazon_s3.md

Lines changed: 2 additions & 12 deletions
@@ -358,19 +358,9 @@ For the Datadog Archives destination, follow the instructions for the cloud prov
358358
{{% observability_pipelines/install_worker/docker %}}
359359

360360
{{% /tab %}}
361-
{{% tab "Amazon EKS" %}}
361+
{{% tab "Kubernetes" %}}
362362

363-
{{% observability_pipelines/install_worker/amazon_eks %}}
364-
365-
{{% /tab %}}
366-
{{% tab "Azure AKS" %}}
367-
368-
{{% observability_pipelines/install_worker/azure_aks %}}
369-
370-
{{% /tab %}}
371-
{{% tab "Google GKE" %}}
372-
373-
{{% observability_pipelines/install_worker/google_gke %}}
363+
{{% observability_pipelines/install_worker/kubernetes %}}
374364

375365
{{% /tab %}}
376366
{{% tab "Linux (APT)" %}}

content/en/observability_pipelines/set_up_pipelines/dual_ship_logs/kafka.md

Lines changed: 2 additions & 12 deletions
@@ -362,19 +362,9 @@ For the Datadog Archives destination, follow the instructions for the cloud prov
362362
{{% observability_pipelines/install_worker/docker %}}
363363

364364
{{% /tab %}}
365-
{{% tab "Amazon EKS" %}}
365+
{{% tab "Kubernetes" %}}
366366

367-
{{% observability_pipelines/install_worker/amazon_eks %}}
368-
369-
{{% /tab %}}
370-
{{% tab "Azure AKS" %}}
371-
372-
{{% observability_pipelines/install_worker/azure_aks %}}
373-
374-
{{% /tab %}}
375-
{{% tab "Google GKE" %}}
376-
377-
{{% observability_pipelines/install_worker/google_gke %}}
367+
{{% observability_pipelines/install_worker/kubernetes %}}
378368

379369
{{% /tab %}}
380370
{{% tab "Linux (APT)" %}}

0 commit comments
