# Introducing Apalis v1.0.0
After a long beta and release candidate cycle, Apalis 1.0.0 is stable. This release represents a significant maturation of the project — the API is cleaner and more composable, the backend trait system has been fully granularised, workflows now support directed acyclic graphs, and observability has been elevated to a first-class concern.
Here is what has changed, what is new, and what you need to know if you are upgrading.
## A New Backend Architecture

The most significant structural change in v1 is the granularisation of the backend trait system. Where previous versions bundled capabilities into a single monolithic `Backend` trait, v1 splits them across focused, composable traits:

- `Backend` — the minimal contract: polling, heartbeating, middleware
- `BackendExt` — serialization: codec, compact representation, encoded polling
- `TaskSink` — task enqueueing: single, bulk, stream, and pre-built task variants
- `Expose` — observability: queue listing, worker listing, task filtering, metrics
- `MakeShared` — connection sharing across multiple workers

This means you can now write generic code that depends only on the capabilities it actually uses — a function that only enqueues tasks can bound on `TaskSink` alone rather than the full `Backend`. See the Backend Trait and Exposing Backends documentation for the full picture.
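To make the idea concrete, here is a self-contained toy sketch of capability-split traits. The trait and method names below (`TaskSink::push`, `Expose::queued`) are invented stand-ins for illustration, not the real apalis signatures:

```rust
// Toy capability traits (NOT the real apalis API).
trait TaskSink {
    fn push(&mut self, task: String);
}

trait Expose {
    fn queued(&self) -> usize;
}

// An in-memory backend that happens to implement both capabilities.
struct MemoryBackend {
    tasks: Vec<String>,
}

impl TaskSink for MemoryBackend {
    fn push(&mut self, task: String) {
        self.tasks.push(task);
    }
}

impl Expose for MemoryBackend {
    fn queued(&self) -> usize {
        self.tasks.len()
    }
}

// Generic code bounds only on the capability it uses: any sink works,
// whether or not it also implements Expose or the full backend contract.
fn enqueue_all<S: TaskSink>(sink: &mut S, tasks: &[&str]) {
    for t in tasks {
        sink.push(t.to_string());
    }
}

fn main() {
    let mut backend = MemoryBackend { tasks: Vec::new() };
    enqueue_all(&mut backend, &["send-email", "resize-image"]);
    println!("{}", backend.queued()); // prints 2
}
```

The payoff is the same as in apalis proper: a helper that only enqueues work compiles against any backend with sink support, and a dashboard-style consumer can require only the observability capability.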
Backend crates have also moved to their own repositories, making versioning and maintenance independent of the core crate.
## Breaking API Changes

Several APIs have been renamed or restructured for clarity. Here is a quick reference:

### `WorkerBuilder` now requires `.backend()`
The backend must now be declared explicitly as the second step in the builder chain, before layers and data:
```rust
// Before
WorkerBuilder::new("tasty-banana")
    .layer(...)
    .backend(sqlite)
    .build(task_fn);

// After
WorkerBuilder::new("tasty-banana")
    .backend(sqlite)
    .layer(...)
    .build(task_fn);
```

### Monitor restarts are now factory-based
`Monitor::register` now accepts a closure that receives the run count, enabling per-restart configuration:
```rust
// Before
Monitor::new()
    .register(WorkerBuilder::new("tasty-banana")...);

// After
Monitor::new()
    .register(|runs: usize| {
        WorkerBuilder::new("tasty-banana")...
    });
```

Use `Monitor::should_restart` to control whether a worker restarts after a given error:
```rust
Monitor::new()
    .register(|_| ...)
    .should_restart(|_ctx, last_err, _run| {
        matches!(last_err, WorkerError::GracefulExit)
    });
```

### Other renames
| Before | After |
|---|---|
| `WorkerContext::id()` | `WorkerContext::name()` |
| `service_fn` | `task_fn` |
| `Pipe::pipe_to_storage` | `PipeExt::pipe_to` |
## Workflows

v1 ships `apalis_workflow` as a separate crate with two workflow models:

- **Sequential workflows** — linear pipelines built with composable combinators: `and_then`, `filter_map`, `fold`, `repeat_until`, `delay_for`, and `delay_with`. Each step receives the output of the previous one and the whole pipeline runs inside a standard worker.
- **DAG workflows** — directed acyclic graph execution where independent nodes run in parallel and dependent nodes wait for their specific inputs. The graph is validated before execution and can be printed in DOT format for visualisation.

Both workflow types are standard `Backend` implementations — they slot into `WorkerBuilder` with no special runner. See Sequential Workflows and DAG Workflows for full documentation.
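The sequential model can be sketched with a toy pipeline type. The `Pipeline` struct and its `and_then` below are invented for this illustration and are not the `apalis_workflow` API:

```rust
// Toy sequential pipeline: each step receives the previous step's output.
struct Pipeline<I, O> {
    run: Box<dyn Fn(I) -> O>,
}

impl<I: 'static, O: 'static> Pipeline<I, O> {
    fn new(f: impl Fn(I) -> O + 'static) -> Self {
        Pipeline { run: Box::new(f) }
    }

    // Chain a step onto the pipeline; its input is the current output type.
    fn and_then<O2: 'static>(self, f: impl Fn(O) -> O2 + 'static) -> Pipeline<I, O2> {
        Pipeline::new(move |input| f((self.run)(input)))
    }

    fn call(&self, input: I) -> O {
        (self.run)(input)
    }
}

fn main() {
    // Types flow through the chain: u32 -> u32 -> u32 -> String.
    let pipeline = Pipeline::new(|n: u32| n * 2)
        .and_then(|n| n + 1)
        .and_then(|n| format!("result: {n}"));
    println!("{}", pipeline.call(20)); // prints "result: 41"
}
```

In apalis_workflow the same chaining idea applies, except each step is an async task handled by a worker and the intermediate outputs are persisted by the backend between steps.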
## Contextual Tracing

Distributed trace context can now be propagated through the queue. Attach a `TracingContext` to a task at enqueue time and the worker will include the upstream `trace_id`, `span_id`, and W3C trace flags in the emitted span — linking background job execution back to the HTTP request or event that triggered it.
```rust
let context = TracingContext::new()
    .with_trace_id(&current_trace_id)
    .with_span_id(&current_span_id)
    .with_trace_flags(1);

let task = Task::builder(my_job).meta(context).build();
```

On the worker side, `ContextualTaskSpan` reads the stored context automatically. See Tracing Integration for full setup instructions.
## OpenTelemetry Metrics Layer

A new `OpenTelemetryMetricsLayer` records task throughput and processing duration using the OTel Messaging semantic conventions. Two instruments are emitted per task:

- `messaging.client.consumed.messages` — a counter tagged by worker, job type, and outcome
- `messaging.process.duration` — a histogram with pre-configured background-job-appropriate buckets
The layer uses the global OTel meter provider, so it works with any exporter — OTLP, Jaeger, Datadog, Honeycomb — without additional configuration. See OpenTelemetry Integration.
## `TaskId` is Now Explicit

`TaskId<T>` no longer defaults to `RandomId`. The type parameter must be declared explicitly to prevent accidental mismatches between the ID type expected by a backend and the one inferred by the compiler. Update any code that relied on the implicit default.
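A minimal sketch of why dropping the default helps. The `TaskId` and `RandomId` types below are simplified stand-ins, not the apalis definitions:

```rust
use std::marker::PhantomData;

// Stand-in for an ID scheme a backend might expect.
#[derive(Debug, PartialEq)]
struct RandomId(u64);

// With no default type parameter, `TaskId` cannot be written without
// naming the ID scheme, so the compiler can never silently infer a
// different one than the backend expects.
struct TaskId<T> {
    _marker: PhantomData<T>,
    raw: String,
}

impl<T> TaskId<T> {
    fn new(raw: &str) -> Self {
        TaskId { _marker: PhantomData, raw: raw.to_string() }
    }
}

fn main() {
    // The parameter must be spelled out at the use site.
    let id: TaskId<RandomId> = TaskId::new("abc123");
    println!("{}", id.raw);
}
```

Had the parameter defaulted, `TaskId::new("abc123")` would compile with the default scheme even when the receiving backend was declared with a different one; making it explicit turns that mismatch into a type error at the call site.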
## `apalis-board` Web Dashboard

`apalis-board` reaches a stable release candidate alongside v1. It provides a real-time web dashboard for managing Apalis backends — queue overviews, task browsing and filtering, worker health monitoring, and live tracing event streaming via server-sent events.

It is built on `apalis-board-api` (Axum and Actix adapters) and `apalis-board-web` (a Leptos frontend bundled with the crate). Full support is available for SQLite, PostgreSQL, and MySQL backends. See Web UI.
## What's Next

v1 is the foundation. Near-term work includes full `Expose` support for the remaining backends (Redis, AMQP, RSMQ, PGMQ), expanded workflow combinators, and further ergonomic improvements to the `Monitor` API.
If you are upgrading from 0.x, the Changelog covers every breaking change with before/after examples. If you are starting fresh, the getting started guide reflects the v1 API throughout.
Thank you to everyone who filed issues, reviewed pull requests, and tested release candidates during the long road to 1.0.
Apalis is an open-source async job queue for Rust. GitHub · crates.io · docs.rs