From FastAPI Background Tasks to Hatchet
How to migrate your FastAPI background tasks to Hatchet for better reliability and scalability.
Matt Kaye, Senior Software Engineer, Hatchet

FastAPI, Python's new favorite web framework, offers a lightweight, easy-to-use background tasks feature for triggering async tasks that can run later, without blocking an incoming request from completing. Of course, this is a killer feature: eventually, every app needs a way to run background tasks.
FastAPI Background Tasks 101
Under the hood, FastAPI background tasks just wrap the Starlette implementation, which simply awaits the background task after sending the response to the client.
This is a handy trick. First, we send the response, and then we run the background task in a non-blocking way afterwards. As FastAPI advertises, this lets you do things like sending emails, processing data, etc. that the client does not need to wait for.
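To make the mechanics concrete, here's a simplified, self-contained sketch of that pattern in plain asyncio. This is an illustration of the idea, not Starlette's actual code, and the names here are made up for the example:

```python
import asyncio


class Response:
    """Minimal stand-in for a response object that carries a background task."""

    def __init__(self, body, background=None):
        self.body = body
        self.background = background

    async def __call__(self, send):
        await send(self.body)           # 1. the response goes out to the client first
        if self.background is not None:
            await self.background()     # 2. only then is the background task awaited


async def main():
    events = []

    async def record(body):
        events.append(body)

    async def background_task():
        events.append("background task ran")

    response = Response("200 OK", background=background_task)
    await response(record)
    return events


print(asyncio.run(main()))  # → ['200 OK', 'background task ran']
```

The client only ever waits for step 1; step 2 happens afterwards, on the same server process.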
Getting to Production
As you start getting ready to move your FastAPI application to a production setting, you might notice some issues with how FastAPI handles background tasks. In no particular order, a handful of the most immediate issues are:
- There's very little observability: since the task is just awaited after the response is sent, it's difficult to know whether background tasks are actually completing or being silently dropped.
- If the server shuts down, you might have background tasks still waiting to be run. If they don't complete before the application is terminated, those tasks will be dropped, leading to data loss.
- There's no way to handle common task queueing issues like concurrency, retries, and so on.
- All of this work is being run on your web server, even though it's done in a non-blocking way. This means that background tasks will still eat up CPU and memory on your server, which can be hard to debug and can lead to performance issues for clients.
These issues, along with others that are likely to come up, are why you might migrate your FastAPI background tasks to a more robust tool like Hatchet as you get ready to ship your app to production.
Migrating to Hatchet
Hatchet's functionality is built to solve exactly these problems, and many more that you'll face as you continue to scale and overcome new obstacles! For the issues above, Hatchet solves them by:
- Providing a fully featured dashboard showing task statuses, runtimes, and more, plus additional tooling like an OpenTelemetry integration to help you monitor your application.
- Reassigning tasks that do not run to completion to a new (running) worker, so you don't need to worry about a worker shutting down and a task being dropped. Workers can safely shut down without data loss.
- Offering concurrency controls, rate limiting, retries, and many more features to help you build background tasks that adhere to your business logic.
- Letting you scale your Hatchet workers independently of your web server.
Porting your tasks from FastAPI background tasks to Hatchet is simple: all you need to do is create Hatchet tasks out of the functions you're passing to `add_task`. For instance:
Would become:
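A rough sketch of the Hatchet side, based on the Hatchet Python SDK. The task name, worker name, and input model here are hypothetical, and the exact decorator and trigger signatures may differ by SDK version, so treat this as a shape rather than a drop-in implementation:

```python
from pydantic import BaseModel
from hatchet_sdk import Hatchet

hatchet = Hatchet()


class EmailInput(BaseModel):
    email: str


@hatchet.task(name="send-welcome-email", input_validator=EmailInput)
def send_welcome_email(input: EmailInput, ctx) -> None:
    # the same email-sending logic as before, now running on a Hatchet worker
    print(f"Sending welcome email to {input.email}")


# In your FastAPI endpoint, the `background_tasks.add_task(...)` call becomes a
# fire-and-forget trigger; the work runs on a worker, not on your web server:
#     await send_welcome_email.aio_run_no_wait(EmailInput(email=email))


def main() -> None:
    # a separate worker process picks up and executes the tasks
    worker = hatchet.worker("email-worker", workflows=[send_welcome_email])
    worker.start()
```

The key difference is that the web server now only enqueues work; execution, retries, and reporting happen on the worker.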
And that's it! When you trigger the Hatchet task (in this case, in "fire and forget" style), it's sent through the Hatchet Engine to your worker, where it executes and reports its result to the dashboard for you to see. And if something goes wrong, you can be notified.
Feature Comparison
Ready to Migrate?
Check out our blog post on Hatchet using modern Python for a thorough introduction to Hatchet.
You can get up and running in just five minutes on Hatchet Cloud. And if you'd like to learn more, you can find us:
Or check out our documentation.