
Managed infrastructure for everything async

A single platform for orchestrating AI agents, scheduling background tasks, and running mission-critical workflows

Running Billions of Tasks for Scale-Ups and Enterprises
Distill · Ellipsis · Greptile · MIT · Moonhub · Motion · Sweetspot
Hatchet dashboard showing workflow runs, task details, and activity navigation

How it works

There are two components to running Hatchet: the orchestration engine and your workers. Workers run on your own infrastructure, while the Hatchet orchestration engine is available as a managed service or can be self-hosted.

Diagram of Hatchet orchestration engine connecting to customer-managed workers in the cloud

Step 1
Write tasks and workflows as code

Illustration: defining tasks and workflows as code with retries and step dependencies

Tasks are simple functions. They can be composed into workflows to represent more complex logic. Your tasks automatically retry on failure and handle complex dependencies between steps.
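As a stdlib-only sketch of this pattern (not Hatchet's SDK API — the `task` decorator and `run_workflow` names here are illustrative), a retrying task and a two-step workflow might look like:

```python
import functools

def task(retries=3):
    """Illustrative decorator: rerun a failed task, like a retry policy."""
    def wrap(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == retries - 1:
                        raise
        return run
    return wrap

@task(retries=3)
def fetch(url):
    # Stand-in for a real fetch; returns a document record.
    return {"url": url, "body": "..."}

@task(retries=3)
def summarize(doc):
    return f"summary of {doc['url']}"

def run_workflow(url):
    # Steps compose as plain function calls; summarize depends on fetch's output.
    return summarize(fetch(url))
```

In the real SDK the dependency between steps is declared to the orchestrator so it can retry and resume each step independently; here the composition is just a function call.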


Step 2
Invoke your tasks and workflows

Illustration: starting workflows from APIs, schedules, and events

Start workflows from your API, schedule them to run at specific times, or trigger them when events happen. Tasks run immediately or queue up for later.
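Those three trigger styles — immediate, scheduled, and event-driven — can be sketched with a toy in-memory scheduler (purely illustrative; these method names are not Hatchet's API):

```python
import heapq

class Scheduler:
    """Toy trigger surface: run a task now, at a time, or on an event."""
    def __init__(self):
        self._timed = []      # (run_at, seq, fn) min-heap
        self._seq = 0
        self._listeners = {}  # event name -> [handler]

    def run_now(self, fn, *args):
        return fn(*args)                      # immediate invocation

    def run_at(self, when, fn):
        # seq is a tiebreaker so the heap never compares functions
        heapq.heappush(self._timed, (when, self._seq, fn))
        self._seq += 1

    def on_event(self, name, fn):
        self._listeners.setdefault(name, []).append(fn)

    def push_event(self, name, payload):
        return [fn(payload) for fn in self._listeners.get(name, [])]

    def tick(self, now):
        """Run every scheduled task whose time has come."""
        done = []
        while self._timed and self._timed[0][0] <= now:
            _, _, fn = heapq.heappop(self._timed)
            done.append(fn())
        return done
```

In Hatchet the queueing, clock, and event bus live in the orchestration engine rather than in your process, so triggers survive restarts.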


Step 3
Run workers in your own cloud

Illustration: running Hatchet workers on your infrastructure and connecting to the orchestrator

Deploy workers on Kubernetes, Porter, Railway, Render, ECS, or any container platform. They automatically connect to Hatchet and can scale up or down based on workload.
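The worker model itself is simple: each worker pulls jobs from a shared queue, so scaling up means starting more workers against the same queue. A stdlib-only sketch (an in-process stand-in for workers that, with Hatchet, would run as containers on your own infrastructure):

```python
import queue
import threading

def worker(tasks, results, lock):
    """Illustrative worker loop: pull jobs until a None sentinel arrives."""
    while True:
        job = tasks.get()
        if job is None:
            break
        out = job()
        with lock:
            results.append(out)

def run_pool(jobs, n_workers=4):
    """Scale out by pointing more workers at the same queue."""
    tasks, results, lock = queue.Queue(), [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(tasks, results, lock))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for job in jobs:
        tasks.put(job)
    for _ in threads:
        tasks.put(None)   # one sentinel per worker shuts the pool down
    for t in threads:
        t.join()
    return results
```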


Step 4
Monitor and replay

Illustration: monitoring workflows, failures, and metrics in the Hatchet dashboard

See all your workflows in the dashboard, get alerts when tasks fail, and export metrics to your monitoring tools. Full visibility (and control) without extra setup.


Use Case

Ingestion & Indexing Pipelines

Agents and AI workflows need continuous, fast access to your data to construct context on-demand. With Hatchet, your vector databases and knowledge graphs will always be up-to-date.

Illustration of ingestion and indexing pipelines updating vector databases and knowledge bases
Scale with Hatchet

Build resilient pipelines that handle real-world complexity. Automatic retries, intelligent rate limiting, and checkpoint recovery mean your data stays fresh without constant firefighting.

  • Build RAG, document processing, and indexing pipelines with ease

  • Easily replay failed pipelines from the Hatchet UI

  • Update vector databases in real-time with exactly-once semantics

With Hatchet, we've scaled our indexing workflows effortlessly, reducing failed runs by 50% and doubling our user base in just two weeks!

Soohoon Choi, Co-founder, Greptile
Case Study
document-indexing
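A stdlib-only sketch of the indexing pattern described above (not Hatchet's API; the chunking, embedding stub, and store are all illustrative). Exactly-once semantics here come from keying the upsert on `(doc_id, chunk index)`, so replaying a failed pipeline overwrites rather than duplicates:

```python
import hashlib

def chunk(text, size=20):
    """Split a document into fixed-size pieces for embedding."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(piece):
    # Stand-in for a real embedding call: a deterministic fake vector.
    digest = hashlib.sha256(piece.encode()).digest()
    return [b / 255 for b in digest[:4]]

def index_document(doc_id, text, store):
    """Upsert keyed on (doc_id, chunk index): safe to replay after a failure."""
    for i, piece in enumerate(chunk(text)):
        store[(doc_id, i)] = embed(piece)
    return len(store)
```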
Use Case

AI Requests & Agents

AI agents need complex orchestration — managing tool calls, handling timeouts, maintaining conversation state, and enforcing safety constraints. Most teams end up building fragile in-process systems that are difficult to scale and maintain.

Illustration of AI agents with tool orchestration, guardrails, and durable execution
Reliability with Hatchet

Define agents as simple, durable functions with built-in orchestration primitives. Set guardrails, manage state, and handle failures gracefully.

  • Write your agent's tools as simple functions, integrated tightly with your business logic

  • Designed to be long-running with safety and security constraints

  • Built-in eventing for human-in-the-loop signaling and streaming responses

Implementing Hatchet has revolutionized our task management system, enabling us to handle a growing number of background tasks efficiently.

Shaun Berryman, Staff SWE, Moonhub
Case Study
ai-agent
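A stdlib-only sketch of the agent pattern described above (not Hatchet's API — `TOOLS`, `run_agent`, and the step-budget guardrail are all illustrative): tools are plain functions, and the loop enforces a safety constraint on how many steps may execute.

```python
# Tools as simple functions, tied directly to business logic.
TOOLS = {
    "search": lambda q: f"results for {q}",
    "add": lambda a, b: a + b,
}

def run_agent(plan, max_steps=5):
    """Toy agent loop: execute tool calls with a step-budget guardrail."""
    trace = []
    for step, (tool, args) in enumerate(plan):
        if step >= max_steps:
            raise RuntimeError("guardrail: step budget exceeded")
        trace.append(TOOLS[tool](*args))
    return trace
```

In a real agent the plan comes from a model at each turn rather than a fixed list, and the guardrails extend to timeouts, allowed tools, and human-in-the-loop approval.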
Use Case

Massive Parallelization

Processing thousands of documents, enriching large datasets, running agent swarms, or scheduling GPU workloads requires complex coordination. Most solutions either can't scale or become impossibly complex to manage.

Illustration of fan-out parallel work across many Hatchet workers and coordinated steps
Parallelize with Hatchet

Fan out to thousands of workers with a single function call. Built-in fairness algorithms and resource management ensure efficient utilization without manual tuning.

  • Process entire document repositories in parallel

  • Enrich millions of leads without rate limit headaches

  • Schedule GPU jobs with intelligent batching

  • Scrape web data with automatic retry and deduplication

Hatchet enables Aevy to process up to 50,000 documents in under an hour through optimized parallel execution, compared to nearly a week with our previous setup.

Ymir Egilson, CTO, Aevy
Case Study
agent-swarm
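The fan-out pattern above can be sketched with the standard library (illustrative only — with Hatchet the pool would span many machines rather than threads in one process):

```python
from concurrent.futures import ThreadPoolExecutor

def process(doc):
    # Stand-in for real per-document work (parse, enrich, embed, ...).
    return len(doc)

def fan_out(docs, workers=8):
    """Spread independent per-document work across a pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, docs))
```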

Core Principles

Graphic highlighting performance, durability, and code-first SDKs as Hatchet core principles

Performance

Built for low-latency, high-throughput workloads with task start times of less than 20ms. Smart assignment rules handle rate limits, fairness, and priorities without complex configuration.

Durability

Every task invocation is durably logged to a data store. When jobs fail, resume exactly where you left off — no lost work, no duplicate LLM calls, no engineering headaches.
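The resume-from-the-log idea can be sketched in a few lines (illustrative, not Hatchet's implementation — here the "durable log" is just a dict, where the real system persists it to a data store):

```python
def run_with_checkpoints(steps, log):
    """Replay skips any step whose result is already in the durable log,
    so expensive calls (e.g. LLM requests) never run twice."""
    results = []
    for i, step in enumerate(steps):
        if i not in log:
            log[i] = step()   # first run: execute and record
        results.append(log[i])  # replay: recovered from the log
    return results
```

If a run crashes at step 3, rerunning it replays steps 0–2 from the log and executes only step 3 onward.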

Code-First

Hatchet SDKs are language-native so developers can write business logic as versionable, reusable, testable atomic functions.

Stylized illustration of workflow steps and orchestration connecting tasks

Build AI that scales. Consolidate your legacy orchestration into one reliable, scalable, & secure solution.

  • Enterprise-grade security, compliance, and SSO

  • Processing over 100 million tasks/day for AI-first companies

  • Custom deployment options & bring-your-own-cloud available