Launch Week 01
March 24th - 28th, 2025
This week, we're launching one (or more) new features every day. Follow along in our Discord to get the latest updates and join the discussion.
Monday, March 24th - v1 Engine
We're excited to announce a complete rewrite of the Hatchet engine, which unlocks a number of new features and improvements. Most importantly, for self-hosted users, the new engine delivers:
- 5x lower CPU usage and 6x fewer IOPS on the database
- 10x higher throughput (on a single Hatchet instance, we were able to queue up to 10k tasks/second over a 24-hour period -- that's nearly 1 billion tasks/day!)
- 30% lower latency, with average queue time dropping from 30ms to 20ms
Monday, March 24th - Postgres-Only Mode
The Hatchet engine now supports Postgres-only mode, removing the requirement to run RabbitMQ and unlocking additional features for self-hosted instances of Hatchet.
Monday, March 24th - Improved Pricing for all Users
Not only does the new engine bring new features and faster queueing, but it also allows us to offer improved pricing for all users. More importantly, we now offer plans for self-hosted users who would like additional support (this won't impact the community support we offer).
Tuesday, March 25th - Managed Compute
We're thrilled to introduce Hatchet Managed Compute — a cloud runtime built for asynchronous tasks and AI workflows. Some highlights:
- Long-lived workers: run tasks with no timeouts, reusing caches or connections.
- Integrated with your queue: automatically scales your workers up or down based on queue depth.
- Infra-as-code: define machine configuration on a per-function basis (see the sketch below).
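To give a feel for the infra-as-code piece, here's a rough sketch of attaching machine configuration to individual tasks with the Python SDK. The `Compute` class, its import path, and fields like `cpu_kind`, `cpus`, `memory_mb`, and `regions` are paraphrased from the feature description rather than copied from the API reference, so treat the exact names as assumptions:

```python
from hatchet_sdk import Context, EmptyModel, Hatchet
from hatchet_sdk.compute import Compute  # assumed import path for compute configs

hatchet = Hatchet()

# Illustrative machine configs; field names are approximate.
small = Compute(cpu_kind="shared", cpus=1, memory_mb=1024, regions=["ewr"])
large = Compute(cpu_kind="performance", cpus=4, memory_mb=8192, regions=["ewr"])

wf = hatchet.workflow(name="managed-compute-example")

# Lightweight preprocessing runs on the small machine config.
@wf.task(compute=small)
def preprocess(input: EmptyModel, ctx: Context) -> dict:
    return {"ready": True}

# The heavy step gets its own, larger config; Hatchet scales replicas of each
# config up or down based on queue depth.
@wf.task(parents=[preprocess], compute=large)
def heavy_step(input: EmptyModel, ctx: Context) -> dict:
    return {"done": True}
```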
Wednesday, March 26th - Benchmarks and Performance
We've published our Benchmarking Hatchet guide, showing how the new engine handles various throughput levels. See how Hatchet scales up to 2k+ events per second on simple infrastructure and stays stable, with insights into CPU usage, latency, and more.
Thursday, March 27th - v1 SDKs
We've released our new v1 SDKs for Python, TypeScript, and Go. These SDKs bring significant improvements to type support, usability, and workflow composability.
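As a quick taste of the new Python SDK, here's a minimal sketch of a typed standalone task and a worker that runs it. The task and worker names are placeholders, and the decorator parameters follow the general shape of the v1 API rather than quoting it exactly:

```python
from pydantic import BaseModel

from hatchet_sdk import Context, Hatchet

hatchet = Hatchet()

# The v1 SDKs lean on native typing -- in Python, task inputs are Pydantic models.
class GreetInput(BaseModel):
    name: str

@hatchet.task(name="greet", input_validator=GreetInput)
def greet(input: GreetInput, ctx: Context) -> dict:
    return {"message": f"Hello, {input.name}!"}

def main() -> None:
    # Register the task on a worker and start polling for work.
    worker = hatchet.worker("example-worker", workflows=[greet])
    worker.start()

if __name__ == "__main__":
    main()
```

From any client process you could then trigger it with something like `greet.run(GreetInput(name="world"))`, with the input and output typed end to end.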
Thursday, March 27th - Conditional Execution
We've added support to all Hatchet SDKs for conditional execution of tasks. This allows you to define a set of conditions that must be met before a task is executed, making it easier to build complex workflows. In particular, this allows you to easily build:
- Human-in-the-loop workflows - you can easily await external events before continuing a workflow.
- Workflow branching - you can easily build workflows that branch based on the output of a previous task.
- Delayed execution - using our durable sleep condition, you can delay the execution of a DAG for an arbitrary duration.
To explore these features, check out our conditional workflow deep dive.
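As a concrete illustration, here's roughly what a condition-gated task looks like in the Python SDK. The condition classes and the `wait_for` parameter follow the shapes described in the deep dive, but the exact signatures are assumptions worth verifying against the docs:

```python
from datetime import timedelta

from hatchet_sdk import Context, EmptyModel, Hatchet, SleepCondition, UserEventCondition, or_

hatchet = Hatchet()

wf = hatchet.workflow(name="approval-flow")

@wf.task()
def submit_request(input: EmptyModel, ctx: Context) -> dict:
    return {"submitted": True}

# Human-in-the-loop with a fallback: this task runs once a reviewer sends an
# approval event, or after a 24-hour durable sleep elapses, whichever comes first.
@wf.task(
    parents=[submit_request],
    wait_for=[
        or_(
            UserEventCondition(event_key="request:approved"),
            SleepCondition(timedelta(hours=24)),
        )
    ],
)
def finalize(input: EmptyModel, ctx: Context) -> dict:
    return {"finalized": True}
```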
Thursday, March 27th - APIs
We've added the following capabilities to each of our SDKs:
- Bulk cancellations and replays - you can now cancel or replay multiple workflow runs at once, either by providing a list of workflow run IDs or by providing a set of filters to match against. This allows you to easily manage large numbers of workflow runs and quickly respond to failures or other issues.
- REST APIs - each SDK wraps our new REST API endpoints for listing workflows, runs, crons, and schedules. This allows you to easily build internal tooling around Hatchet and integrate with other systems (see the sketch below).
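Here's a rough sketch of what the bulk operations and REST wrappers look like from the Python SDK. The client attributes and method names below (`runs.list`, `runs.bulk_cancel`, `runs.bulk_replay`) are assumptions based on the feature description, not verbatim signatures, so treat this as directional:

```python
from hatchet_sdk import Hatchet

hatchet = Hatchet()

# List recent runs for a workflow through the new REST API wrappers.
# (Method and parameter names here are illustrative assumptions.)
runs = hatchet.runs.list(workflow_ids=["<workflow-id>"])

# Cancel everything that matched in a single call, rather than looping over runs.
hatchet.runs.bulk_cancel(run_ids=[run.metadata.id for run in runs.rows])

# Or replay by filter -- e.g. retry everything that failed -- without enumerating IDs.
hatchet.runs.bulk_replay(filters={"statuses": ["FAILED"]})
```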
Friday, March 28th - Durable Execution
Today we're launching durable tasks: tasks that can recover from failures or interruptions by persisting intermediate results. If the underlying machine that your task is running on gets killed, a durable task gracefully recovers from the point of failure without starting over from the beginning. While we've technically supported this behavior for a while with spawned workflows, we're formalizing the pattern and have built some cool features around it, like durable events and durable sleep.
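Here's a minimal sketch of a durable task in the Python SDK. The durable context methods shown (`aio_sleep_for`, `aio_wait_for`) mirror the durable sleep and durable event features described above, but the exact names and signatures are assumptions worth double-checking against the docs:

```python
from datetime import timedelta

from hatchet_sdk import DurableContext, EmptyModel, Hatchet, UserEventCondition

hatchet = Hatchet()

# Each await below is checkpointed: if the worker dies mid-run, the task resumes
# from the last completed step instead of starting over.
@hatchet.durable_task(name="onboarding-followup")
async def onboarding_followup(input: EmptyModel, ctx: DurableContext) -> dict:
    # Durable sleep: survives worker restarts and deploys.
    await ctx.aio_sleep_for(duration=timedelta(hours=24))

    # Durable event: pause until an external event arrives, however long that takes.
    await ctx.aio_wait_for("user-replied", UserEventCondition(event_key="onboarding:reply"))

    return {"followed_up": True}
```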
Friday, March 28th - Cancel Newest Concurrency Strategy
There are many cases where you'd like to skip running a task if a matching task is already running. You can now accomplish this with CANCEL_NEWEST, a concurrency strategy that cancels the newly enqueued run when an existing run (or set of runs) is already in progress. A sketch of how this looks in the Python SDK is below.
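This sketch keys the concurrency group on a per-user field. The CEL-style expression and option names follow the general shape of the SDK's concurrency settings, so verify them against the docs before copying:

```python
from pydantic import BaseModel

from hatchet_sdk import ConcurrencyExpression, ConcurrencyLimitStrategy, Context, Hatchet

hatchet = Hatchet()

class ReportInput(BaseModel):
    user_id: str

# At most one report run per user: if a run for the same user is already in
# progress, the newly enqueued run is cancelled rather than the existing one.
wf = hatchet.workflow(
    name="generate-report",
    input_validator=ReportInput,
    concurrency=ConcurrencyExpression(
        expression="input.user_id",
        max_runs=1,
        limit_strategy=ConcurrencyLimitStrategy.CANCEL_NEWEST,
    ),
)

@wf.task()
def generate(input: ReportInput, ctx: Context) -> dict:
    return {"user_id": input.user_id}
```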