[Feature Request] Add ARQ event‑stream integration (parity with Celery) #1058

@DavidsonTripService

Description

Motivation

Logfire already offers first-class instrumentation for Celery, automatically converting every task into a span and propagating context across workers. However, many teams using async-first stacks (FastAPI, Starlette, etc.) prefer ARQ as their background task runner because it:

  • Integrates seamlessly with async codebases (it is built on asyncio).
  • Has a minimal footprint (a single Redis dependency, no separate broker).
  • Has strong ecosystem synergy (it is maintained by the same author as Pydantic).

Adding native Logfire support for ARQ would give these teams distributed tracing and structured logging on par with the Celery integration, without requiring a switch to Celery.


Desired Behaviour

  • Automatic Span Creation: Each ARQ job should create a span for the full job lifecycle (enqueue ➜ start ➜ finish).
  • Context Propagation: Spans emitted inside a job should attach to the parent "enqueue" span for better trace visibility.
  • Error Capture: Uncaught exceptions should be recorded as span events with stack-trace context for easier debugging.

For consistency, the public API could mirror the existing Celery helper:

import asyncio

import logfire
from arq import Worker
from arq.connections import RedisSettings

logfire.instrument_arq()  # Proposed helper, analogous to instrument_celery()

async def send_email(ctx, to: str):
    ...

async def main():
    worker = Worker(functions=[send_email], redis_settings=RedisSettings())
    await worker.main()

asyncio.run(main())
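
On the producer side, no per-call changes should be needed once instrument_arq() is in place. A minimal sketch of the expected behaviour (the send_email job name and its arguments are purely illustrative):

from arq import create_pool
from arq.connections import RedisSettings

async def enqueue():
    redis = await create_pool(RedisSettings())
    # With instrument_arq() active, this call would emit a client-side
    # "enqueue" span that the worker-side job span attaches to.
    await redis.enqueue_job('send_email', to='user@example.com')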

Current Work-Arounds

Currently, teams can manually wrap each task with @logfire.instrument(...) (sketched below the list), but this approach:

  • Misses queue-level context (e.g., queue name, retries, ETA).
  • Introduces boilerplate in every codebase, reducing maintainability.
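
For reference, the manual approach today looks roughly like this (a sketch; the task name and span template are illustrative):

import logfire

# Every task needs its own decorator, and the resulting span only covers
# the function body; queue name, retry count and delays are invisible.
@logfire.instrument('run send_email')
async def send_email(ctx, to: str):
    ...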

Additional Context / Pointers

  • Lifecycle Hooks: ARQ exposes worker lifecycle hooks (on_startup, on_shutdown, on_job_start, on_job_end) that can be wrapped, similar to how Logfire hooks Celery signals; there is no enqueue-side hook, so client spans would likely need to wrap ArqRedis.enqueue_job directly.
  • Market Gap: OpenTelemetry lacks an official ARQ instrumentation package, making this an opportunity to fill a significant gap in the Python tracing landscape.

Possible Implementation Sketch

  • Patch arq.connections.ArqRedis.enqueue_job: Capture client-side enqueue spans.
  • Patch arq.worker.Worker: Add span start/finish logic around job execution.
  • New Helper Function: Create logfire.instrument_arq(*, log_name: str = "arq", capture_args: bool = True, …) to match the Celery helper's signature (a rough sketch follows this list).
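
A very rough sketch of what such a helper could look like internally, assuming it monkeypatches ArqRedis.enqueue_job and Worker.run_job (the exact patch points, span names and options are assumptions, and trace-context propagation between the two spans is omitted for brevity):

import functools

import logfire
from arq.connections import ArqRedis
from arq.worker import Worker


def instrument_arq() -> None:
    # Hypothetical helper: wrap ARQ's enqueue and job-execution paths in spans.
    original_enqueue = ArqRedis.enqueue_job
    original_run_job = Worker.run_job

    @functools.wraps(original_enqueue)
    async def enqueue_job(self, function, *args, **kwargs):
        # Client-side span around enqueueing; job arguments could be attached
        # here when a capture_args-style option is enabled.
        with logfire.span('enqueue {function}', function=function):
            return await original_enqueue(self, function, *args, **kwargs)

    @functools.wraps(original_run_job)
    async def run_job(self, job_id, *args, **kwargs):
        # Worker-side span around job execution. ARQ handles task exceptions
        # inside run_job, so error capture may need to hook the job-result
        # handling rather than rely on exceptions escaping this span.
        with logfire.span('run job {job_id}', job_id=job_id):
            return await original_run_job(self, job_id, *args, **kwargs)

    ArqRedis.enqueue_job = enqueue_job
    Worker.run_job = run_job

An alternative route, mirroring how the Celery helper builds on OpenTelemetry's Celery instrumentation, would be to contribute an upstream opentelemetry-instrumentation-arq package that Logfire then wraps.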

Happy to submit a PR or help test a prototype if this proposal is accepted.
Thanks for considering!
