Mimir received a series with an invalid label/metric name #3898

Open
sigurdfalk opened this issue Apr 11, 2025 · 11 comments
Labels: bug (Something isn't working), needs triage

Comments


sigurdfalk commented Apr 11, 2025

Component(s)

collector

What happened?

Description

After upgrading the Operator and Collector from v0.117.0 to v0.120.0, we see metrics being dropped by Mimir due to "err-mimir-metric-name-invalid" and "err-mimir-label-invalid". More details about the specific metric and label are in the log output below.

Steps to Reproduce

We install the OTEL Operator using Helm chart v0.84.2 with the following values:

  replicaCount: 2
  manager:
    featureGatesMap:
      operator.observability.prometheus: true
    serviceMonitor:
      enabled: true
    prometheusRule:
      enabled: true
      defaultRules:
        enabled: true
  admissionWebhooks:
    certManager:
      enabled: false

Expected Result

Actual Result

Kubernetes Version

1.30.11

Operator version

0.120.0

Collector version

0.120.0

Environment information

Environment

OS: AzureLinux on AKS
Compiler (if manually compiled): (e.g., "go 14.2")

Log output

From Prometheus logs:

{
  "time": "2025-04-11T11:30:34.140713136Z",
  "level": "ERROR",
  "source": "queue_manager.go:1670",
  "msg": "non-recoverable error",
  "component": "remote",
  "remote_name": "a5eb7f",
  "url": "http://mimir-distributed-gateway.system-mimir.svc/api/v1/push",
  "failedSampleCount": 2000,
  "failedHistogramCount": 0,
  "failedExemplarCount": 0,
  "err": "server returned HTTP status 400 Bad Request: received a series with invalid metric name: 'http.client.duration_bucket' (err-mimir-metric-name-invalid)\n"
}

{
  "time": "2025-04-11T11:18:19.851742758Z",
  "level": "ERROR",
  "source": "queue_manager.go:1670",
  "msg": "non-recoverable error",
  "component": "remote",
  "remote_name": "a5eb7f",
  "url": "http://mimir-distributed-gateway.system-mimir.svc/api/v1/push",
  "failedSampleCount": 2000,
  "failedHistogramCount": 0,
  "failedExemplarCount": 0,
  "err": "server returned HTTP status 400 Bad Request: received a series with an invalid label: 'service.instance.id' series: 'otelcol_processor_batch_batch_send_size_bucket{cluster=\"aks-tools-mgt-test-003\", container=\"otc-container\", endpoint=\"monitoring\", instance=\"10.90.9.137:8889\", job=\"otel-daemonset-collector-monitoring' (err-mimir-label-invalid)\n"
}

Additional context

No response

sigurdfalk added the bug (Something isn't working) and needs triage labels on Apr 11, 2025
@pavolloffay
Member

How is the data pushed to Mimir? Does it scrape the OTEL collector's Prometheus exporter?

This does not seem like an operator issue. It should most likely be opened in https://github.com/open-telemetry/opentelemetry-collector-contrib/ instead.

@sigurdfalk
Author

sigurdfalk commented Apr 11, 2025

@pavolloffay we use Prometheus Operator + ServiceMonitors. The data is pushed to Mimir with Prometheus Remote Write. We set the following config on the OpenTelemetryCollector:

    observability:
      metrics:
        enableMetrics: true

One of the ServiceMonitors is created like the one below:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  creationTimestamp: "2025-02-19T13:13:01Z"
  generation: 1
  labels:
    app: opentelemetry-operator
    app.kubernetes.io/component: opentelemetry-collector
    app.kubernetes.io/instance: system-opentelemetry-operator.otel-daemonset
    app.kubernetes.io/managed-by: opentelemetry-operator
    app.kubernetes.io/name: otel-daemonset-monitoring-collector
    app.kubernetes.io/part-of: opentelemetry
    app.kubernetes.io/version: latest
  name: otel-daemonset-monitoring-collector
  namespace: system-opentelemetry-operator
  ownerReferences:
  - apiVersion: opentelemetry.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: OpenTelemetryCollector
    name: otel-daemonset
    uid: 52c632c5-325c-48bf-b051-852df9d8eac0
  resourceVersion: "167005859"
  uid: 7d0f083f-84c1-4cc7-be71-948e7896905d
spec:
  endpoints:
  - port: monitoring
  namespaceSelector:
    matchNames:
    - system-opentelemetry-operator
  selector:
    matchLabels:
      app.kubernetes.io/component: opentelemetry-collector
      app.kubernetes.io/instance: system-opentelemetry-operator.otel-daemonset
      app.kubernetes.io/managed-by: opentelemetry-operator
      app.kubernetes.io/part-of: opentelemetry
      operator.opentelemetry.io/collector-service-type: monitoring
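
For completeness, the remote write from Prometheus to Mimir is configured on the Prometheus custom resource, roughly like this (a redacted sketch; the url is the same push endpoint that shows up in the error logs above):

spec:
  remoteWrite:
  # push endpoint taken from the Prometheus error logs above
  - url: http://mimir-distributed-gateway.system-mimir.svc/api/v1/push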

Do you still think I should open an issue in the collector repo?

@pavolloffay
Member

Yes, I think it is most likely an issue with the Prometheus exporter. @swiatekm WDYT?

@swiatekm
Contributor

@sigurdfalk it would help a lot if you could post your collector configuration. This is a problem that could conceivably be caused by any of the prometheus-related components.

@sigurdfalk
Author

This is my config (copied after it was applied to k8s, so some noisy/sensitive fields are redacted or removed):

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  labels:
    app: opentelemetry-operator
  name: otel-daemonset
  namespace: system-opentelemetry-operator
spec:
  config:
    exporters:
      otlphttp/loki:
        endpoint: xxx
        headers:
          Authorization: ${GRAFANA_LOKI_BASIC_AUTH}
          X-Scope-OrgID: xxx
    extensions:
      file_storage:
        directory: /var/lib/otelcol
      health_check:
        endpoint: 0.0.0.0:13134
    processors:
      batch: {}
      filter/loki-k8s:
        ...
      k8sattributes:
        ...
      memory_limiter:
        ...
      transform/filelog-receiver:
        ...
    receivers:
      filelog/k8s:
        ...
    service:
      extensions:
      - health_check
      - file_storage
      pipelines:
        logs/loki-k8s:
          exporters:
          - otlphttp/loki
          processors:
          - memory_limiter
          - k8sattributes
          - filter/loki-k8s
          - transform/filelog-receiver
          - batch
          receivers:
          - filelog/k8s
      telemetry:
        logs:
          encoding: json
          level: info
        metrics:
          address: 0.0.0.0:8889
          level: detailed
  configVersions: 3
  daemonSetUpdateStrategy: {}
  deploymentUpdateStrategy: {}
  env:
  - name: GOMEMLIMIT
    value: 13000MiB
  envFrom:
  - secretRef:
      name: collector-grafana-loki-auth
  hostNetwork: true
  ingress:
    route: {}
  ipFamilyPolicy: SingleStack
  managementState: managed
  mode: daemonset
  observability:
    metrics:
      enableMetrics: true
  podDnsConfig: {}
  replicas: 1
  resources: {}
  securityContext:
    runAsGroup: 0
    runAsUser: 0
  serviceAccount: otel-daemonset
  targetAllocator:
    allocationStrategy: consistent-hashing
    filterStrategy: relabel-config
    observability:
      metrics: {}
    prometheusCR:
      scrapeInterval: 30s
    resources: {}
  tolerations:
  - effect: NoSchedule
    operator: Exists
  upgradeStrategy: automatic
  volumeMounts:
  - mountPath: /var/log/pods
    name: varlogpods
    readOnly: true
  - mountPath: /var/lib/otelcol
    name: varlibotelcol
  volumes:
  - hostPath:
      path: /var/log/pods
    name: varlogpods
  - hostPath:
      path: /var/lib/otelcol
      type: DirectoryOrCreate
    name: varlibotelcol

@swiatekm
Contributor

Are you sure that's the right one? It doesn't have any prometheus components in it, and only seems to be shipping logs to Loki.

@sigurdfalk
Author

The issue is not with metrics being sent by an exporter in the collector. It occurs when Prometheus scrapes the metrics exposed by the collector itself on port 8889 and then forwards them to Mimir via remote write. I see this was maybe not intuitive to understand from the description 😅

@swiatekm
Contributor

I think you're suffering from open-telemetry/opentelemetry-collector#12458, but I'm not sure why, given that you're using the deprecated syntax for configuring the Prometheus endpoint.

Can you post the output of the collector's Prometheus endpoint (that is, 0.0.0.0:8889 in your case)?
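
For reference, the non-deprecated way to expose the collector's internal metrics is through metric readers under service::telemetry. A minimal sketch, assuming a recent collector version and reusing the port from your config:

service:
  telemetry:
    metrics:
      level: detailed
      # replaces the deprecated service::telemetry::metrics::address setting
      readers:
      - pull:
          exporter:
            prometheus:
              host: "0.0.0.0"
              port: 8889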

@sigurdfalk
Author

Oh, I was not aware that was deprecated. I will try to update to the new config then. Anyway, this is the output on 8889 that causes the errors:

# HELP http_client_duration Measures the duration of outbound HTTP requests.
# TYPE http_client_duration histogram
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="0"} 0
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="5"} 0
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="10"} 0
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="25"} 0
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="50"} 0
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="75"} 181
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="100"} 199
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="250"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="500"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="750"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="1000"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="2500"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="5000"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="7500"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="10000"} 208
http_client_duration_bucket{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="+Inf"} 208
http_client_duration_sum{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 13682.703895999997
http_client_duration_count{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 208
# HELP http_client_request_size Measures the size of HTTP request messages.
# TYPE http_client_request_size counter
http_client_request_size{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 2.865194e+06
# HELP http_client_response_size Measures the size of HTTP response messages.
# TYPE http_client_response_size counter
http_client_response_size{http_method="POST",http_status_code="204",net_peer_name="loki-gateway.system-loki.svc.cluster.local",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 0
# HELP otelcol_exporter_queue_capacity Fixed capacity of the retry queue (in batches) [alpha]
# TYPE otelcol_exporter_queue_capacity gauge
otelcol_exporter_queue_capacity{data_type="logs",exporter="otlphttp/loki",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 1000
# HELP otelcol_exporter_queue_size Current size of the retry queue (in batches) [alpha]
# TYPE otelcol_exporter_queue_size gauge
otelcol_exporter_queue_size{data_type="logs",exporter="otlphttp/loki",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 1
# HELP otelcol_exporter_send_failed_log_records Number of log records in failed attempts to send to destination. [alpha]
# TYPE otelcol_exporter_send_failed_log_records counter
otelcol_exporter_send_failed_log_records{exporter="otlphttp/loki",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 0
# HELP otelcol_exporter_sent_log_records Number of log record successfully sent to destination. [alpha]
# TYPE otelcol_exporter_sent_log_records counter
otelcol_exporter_sent_log_records{exporter="otlphttp/loki",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5682
# HELP otelcol_fileconsumer_open_files Number of open files
# TYPE otelcol_fileconsumer_open_files gauge
otelcol_fileconsumer_open_files{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 66
# HELP otelcol_fileconsumer_reading_files Number of open files that are being read
# TYPE otelcol_fileconsumer_reading_files gauge
otelcol_fileconsumer_reading_files{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 0
# HELP otelcol_otelsvc_k8s_namespace_added Number of namespace add events received
# TYPE otelcol_otelsvc_k8s_namespace_added counter
otelcol_otelsvc_k8s_namespace_added{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 30
# HELP otelcol_otelsvc_k8s_pod_added Number of pod add events received
# TYPE otelcol_otelsvc_k8s_pod_added counter
otelcol_otelsvc_k8s_pod_added{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 271
# HELP otelcol_otelsvc_k8s_pod_deleted Number of pod delete events received
# TYPE otelcol_otelsvc_k8s_pod_deleted counter
otelcol_otelsvc_k8s_pod_deleted{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5
# HELP otelcol_otelsvc_k8s_pod_table_size Size of table containing pod info
# TYPE otelcol_otelsvc_k8s_pod_table_size gauge
otelcol_otelsvc_k8s_pod_table_size{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 693
# HELP otelcol_otelsvc_k8s_pod_updated Number of pod update events received
# TYPE otelcol_otelsvc_k8s_pod_updated counter
otelcol_otelsvc_k8s_pod_updated{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 43
# HELP otelcol_otelsvc_k8s_replicaset_added Number of ReplicaSet add events received
# TYPE otelcol_otelsvc_k8s_replicaset_added counter
otelcol_otelsvc_k8s_replicaset_added{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 361
# HELP otelcol_otelsvc_k8s_replicaset_updated Number of ReplicaSet update events received
# TYPE otelcol_otelsvc_k8s_replicaset_updated counter
otelcol_otelsvc_k8s_replicaset_updated{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 4
# HELP otelcol_process_cpu_seconds Total CPU user and system time in seconds [alpha]
# TYPE otelcol_process_cpu_seconds counter
otelcol_process_cpu_seconds{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 2.3200000000000003
# HELP otelcol_process_memory_rss Total physical memory (resident set size) [alpha]
# TYPE otelcol_process_memory_rss gauge
otelcol_process_memory_rss{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 2.5444352e+08
# HELP otelcol_process_runtime_heap_alloc_bytes Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc') [alpha]
# TYPE otelcol_process_runtime_heap_alloc_bytes gauge
otelcol_process_runtime_heap_alloc_bytes{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 6.905364e+07
# HELP otelcol_process_runtime_total_alloc_bytes Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc') [alpha]
# TYPE otelcol_process_runtime_total_alloc_bytes counter
otelcol_process_runtime_total_alloc_bytes{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 6.71729696e+08
# HELP otelcol_process_runtime_total_sys_memory_bytes Total bytes of memory obtained from the OS (see 'go doc runtime.MemStats.Sys') [alpha]
# TYPE otelcol_process_runtime_total_sys_memory_bytes gauge
otelcol_process_runtime_total_sys_memory_bytes{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 1.21463064e+08
# HELP otelcol_process_uptime Uptime of the process [alpha]
# TYPE otelcol_process_uptime counter
otelcol_process_uptime{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 42.161035341
# HELP otelcol_processor_accepted_log_records Number of log records successfully pushed into the next component in the pipeline. [deprecated since v0.110.0]
# TYPE otelcol_processor_accepted_log_records counter
otelcol_processor_accepted_log_records{processor="memory_limiter",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
# HELP otelcol_processor_batch_batch_send_size Number of units in the batch
# TYPE otelcol_processor_batch_batch_send_size histogram
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="10"} 0
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="25"} 133
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="50"} 205
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="75"} 207
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="100"} 208
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="250"} 208
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="500"} 208
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="750"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="1000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="2000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="3000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="4000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="5000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="6000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="7000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="8000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="9000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="10000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="20000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="30000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="50000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="100000"} 209
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="+Inf"} 209
otelcol_processor_batch_batch_send_size_sum{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
otelcol_processor_batch_batch_send_size_count{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 209
# HELP otelcol_processor_batch_batch_send_size_bytes Number of bytes in batch that was sent. Only available on detailed level.
# TYPE otelcol_processor_batch_batch_send_size_bytes histogram
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="10"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="25"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="50"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="75"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="100"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="250"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="500"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="750"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="1000"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="2000"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="3000"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="4000"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="5000"} 0
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="6000"} 1
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="7000"} 6
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="8000"} 14
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="9000"} 28
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="10000"} 67
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="20000"} 188
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="30000"} 206
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="50000"} 207
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="100000"} 208
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="200000"} 208
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="300000"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="400000"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="500000"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="600000"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="700000"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="800000"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="900000"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="1e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="2e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="3e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="4e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="5e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="6e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="7e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="8e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="9e+06"} 209
otelcol_processor_batch_batch_send_size_bytes_bucket{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",le="+Inf"} 209
otelcol_processor_batch_batch_send_size_bytes_sum{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 2.873972e+06
otelcol_processor_batch_batch_send_size_bytes_count{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 209
# HELP otelcol_processor_batch_metadata_cardinality Number of distinct metadata value combinations being processed
# TYPE otelcol_processor_batch_metadata_cardinality gauge
otelcol_processor_batch_metadata_cardinality{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 1
# HELP otelcol_processor_batch_timeout_trigger_send Number of times the batch was sent due to a timeout trigger
# TYPE otelcol_processor_batch_timeout_trigger_send counter
otelcol_processor_batch_timeout_trigger_send{processor="batch",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 209
# HELP otelcol_processor_filter_logs_filtered Number of logs dropped by the filter processor
# TYPE otelcol_processor_filter_logs_filtered counter
otelcol_processor_filter_logs_filtered{filter="filter/loki-k8s",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 0
# HELP otelcol_processor_incoming_items Number of items passed to the processor. [alpha]
# TYPE otelcol_processor_incoming_items counter
otelcol_processor_incoming_items{otel_signal="logs",processor="filter/loki-k8s",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
otelcol_processor_incoming_items{otel_signal="logs",processor="k8sattributes",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
otelcol_processor_incoming_items{otel_signal="logs",processor="memory_limiter",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
otelcol_processor_incoming_items{otel_signal="logs",processor="transform/filelog-receiver",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
# HELP otelcol_processor_outgoing_items Number of items emitted from the processor. [alpha]
# TYPE otelcol_processor_outgoing_items counter
otelcol_processor_outgoing_items{otel_signal="logs",processor="filter/loki-k8s",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
otelcol_processor_outgoing_items{otel_signal="logs",processor="k8sattributes",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
otelcol_processor_outgoing_items{otel_signal="logs",processor="memory_limiter",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
otelcol_processor_outgoing_items{otel_signal="logs",processor="transform/filelog-receiver",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 5700
# HELP otelcol_receiver_accepted_log_records Number of log records successfully pushed into the pipeline. [alpha]
# TYPE otelcol_receiver_accepted_log_records counter
otelcol_receiver_accepted_log_records{receiver="filelog/k8s",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",transport=""} 5700
# HELP otelcol_receiver_refused_log_records Number of log records that could not be pushed into the pipeline. [alpha]
# TYPE otelcol_receiver_refused_log_records counter
otelcol_receiver_refused_log_records{receiver="filelog/k8s",service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1",transport=""} 0
# HELP promhttp_metric_handler_errors_total Total number of internal errors encountered by the promhttp metric handler.
# TYPE promhttp_metric_handler_errors_total counter
promhttp_metric_handler_errors_total{cause="encoding"} 0
promhttp_metric_handler_errors_total{cause="gathering"} 0
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_instance_id="5301a2a3-247c-4ee5-a173-1cfc84c1f3ae",service_name="otelcol-contrib",service_version="0.120.1"} 1

@swiatekm
Contributor

That output doesn't have any of the problems Mimir complains about. Are you sure you aren't introducing them in some intermediate processing step?
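
By intermediate processing step I mean anything that can rewrite series between the scrape and Mimir, for example metricRelabelings on the ServiceMonitor endpoint or writeRelabelConfigs on the remote write. A purely hypothetical illustration of where such a rule would live:

endpoints:
- port: monitoring
  metricRelabelings:
  # hypothetical example only: a rule like this rewrites labels before remote write
  - sourceLabels: [service_instance_id]
    targetLabel: instance_id
    action: replace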

@sigurdfalk
Author

Yes, I also thought that was strange and wondered whether we are doing something in between. However, since I can toggle the issue just by switching back and forth between collector versions, I find it hard to explain. I can't really think of anything on our end that would affect this; our setup is pretty standard.
