Introducing Cloudflare Containers: Run containers natively on the Workers platform
Apr 11, 2025 at 10:00 pm
It’s almost the end of Developer Week and we haven’t talked about containers: until now. As some of you may know, we’ve been working on a container platform behind the scenes for some time.
In late June, we plan to release Containers in open beta, and today we’ll give you a sneak peek at what makes it unique.
Workers are the simplest way to ship software around the world with little overhead. But sometimes you need to do more. You might want to:
Run user-generated code in any language
Execute a CLI tool that needs a full Linux environment
Use several gigabytes of memory or multiple CPU cores
Port an existing application from AWS, GCP, or Azure without a major rewrite
Cloudflare Containers let you do all of that while being simple, scalable, and global.
Thanks to a deep integration with containers and an architecture built on Durable Objects, your Worker can act as your:
API Gateway: Letting you control routing, authentication, caching, and rate-limiting before requests reach a container
Service Mesh: Creating private connections between containers with a programmable routing layer
Orchestrator: Allowing you to write custom scheduling, scaling, and health checking logic for your containers
Instead of having to deploy new services, write custom Kubernetes operators, or wade through control plane configuration to extend the platform, you just write code.
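For instance, putting an authentication check in front of a container takes only a few lines in the Worker. Here is a minimal sketch of that gateway pattern; the binding name, environment variable, and header check are illustrative, not from the original post:

```ts
// Sketch: Worker as an API gateway in front of a container-backed Durable Object.
// MY_CONTAINER and API_TOKEN are illustrative names, not part of the original post.
interface Env {
  MY_CONTAINER: DurableObjectNamespace;
  API_TOKEN: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Reject unauthenticated requests before they ever reach the container
    if (request.headers.get("Authorization") !== `Bearer ${env.API_TOKEN}`) {
      return new Response("Unauthorized", { status: 401 });
    }
    // Forward everything else to a container instance via its Durable Object
    const id = env.MY_CONTAINER.idFromName("gateway-demo");
    return env.MY_CONTAINER.get(id).fetch(request);
  },
};
```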
Let’s see what it looks like.
Deploying different application types
A stateful workload: executing AI-generated code
First, let’s take a look at a stateful example.
Imagine you are building a platform where end-users can run code generated by an LLM. This code is untrusted, so each user needs their own secure sandbox. Additionally, you want users to be able to run multiple requests in sequence, potentially writing to local files or saving in-memory state.
To do this, you need to create a container on-demand for each user session, then route subsequent requests to that container. Here’s how you can accomplish this:
First, you write some basic Wrangler config, then you route requests to containers via your Worker:
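The post’s inline examples aren’t reproduced here, so the following is a hedged sketch of what they would look like: a wrangler.jsonc declaring the container (the field names follow the Containers beta but should be treated as assumptions), and a Worker that routes each session to its own instance.

```jsonc
// wrangler.jsonc — a minimal sketch; exact container field names are assumptions
{
  "name": "code-executor",
  "main": "src/index.ts",
  "containers": [
    { "class_name": "CodeExecutor", "image": "./Dockerfile" }
  ],
  "durable_objects": {
    "bindings": [{ "name": "CODE_EXECUTOR", "class_name": "CodeExecutor" }]
  }
}
```

```ts
// src/index.ts — sketch of per-session routing; the session-ID parsing is illustrative
import { Container } from "cloudflare:workers";

interface Env {
  CODE_EXECUTOR: DurableObjectNamespace;
}

// Container-backed class: boots on first fetch, sleeps after 1 minute idle
export class CodeExecutor extends Container {
  defaultPort = 8080; // assumed: the port the sandbox server listens on
  sleepAfter = "1m";  // the configurable timeout described below
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // One secure sandbox per user session: a stable ID maps to a unique instance
    const sessionId = new URL(request.url).searchParams.get("session") ?? "anon";
    const instance = env.CODE_EXECUTOR.get(env.CODE_EXECUTOR.idFromName(sessionId));
    return instance.fetch(request);
  },
};
```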
Then, deploy your code with a single command: wrangler deploy. This builds your container image, pushes it to Cloudflare’s registry, readies containers to boot quickly across the globe, and deploys your Worker.
That’s it.
How does it work?
Your Worker creates and starts up containers on-demand. Each time you call env.CODE_EXECUTOR.get(id) with a unique ID, it sends requests to a unique container instance. The container will automatically boot on the first fetch, then put itself to sleep after a configurable timeout, in this case 1 minute. You only pay for the time that the container is actively running.
When you request a new container, we boot one in a Cloudflare location near the incoming request. This means that low-latency workloads are well-served no matter the region. Cloudflare takes care of all the pre-warming and caching so you don’t have to think about it.
This allows each user to run code in their own secure environment.
Stateless and global: FFmpeg everywhere
Stateless and autoscaling applications work equally well on Cloudflare Containers.
Imagine you want to run a container that takes a video file and turns it into an animated GIF using FFmpeg. Unlike the previous example, any container can serve any request, but you still don’t want to send bytes across an ocean and back unnecessarily. So, ideally the app can be deployed everywhere.
To do this, you declare a container in Wrangler config and turn on autoscaling. This specific configuration ensures that one instance is always running and if CPU usage increases beyond 75% of capacity, additional instances are added:
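A hedged sketch of that config might look like the following; the field names under "autoscaling" are assumptions inferred from the described behavior, not confirmed API:

```jsonc
// wrangler.jsonc — sketch of the autoscaling setup described above;
// the "autoscaling" field names are assumptions, not confirmed API
{
  "containers": [
    {
      "class_name": "GifMaker",
      "image": "./Dockerfile",
      "autoscaling": {
        "minimum_instances": 1, // keep one instance running at all times
        "cpu_target": 75        // add instances past 75% CPU utilization
      }
    }
  ]
}
```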
To route requests, you just call env.GIF_MAKER.fetch and requests are automatically sent to the closest container:
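In Worker code, that routing is essentially a one-liner. A sketch (the GIF_MAKER binding comes from the text; the Env shape is assumed):

```ts
// Sketch: stateless routing — any instance can serve any request, so the Worker
// just forwards to the binding and Cloudflare picks the nearest container
interface Env {
  GIF_MAKER: { fetch(request: Request): Promise<Response> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    return env.GIF_MAKER.fetch(request);
  },
};
```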
Going beyond the basics
From the examples above, you can see that getting a basic container service running on Workers just takes a few lines of config and a little Workers code. There’s no need to worry about capacity, artifact registries, regions, or scaling.
For more advanced use, we’ve designed Cloudflare Containers to run on top of Durable Objects and work in tandem with Workers. Let’s take a look at the underlying architecture and see some of the advanced use cases it enables.
Durable Objects as programmable sidecars
Routing to containers is enabled using Durable Objects under the hood. In the examples above, the Container class from cloudflare:workers just wraps a container-enabled Durable Object and provides helper methods for common patterns. In the rest of this post, we’ll look at examples using Durable Objects directly, as this should shed light on the platform’s underlying design.
Each Durable Object acts as a programmable sidecar that proxies requests to the container and manages its lifecycle. This allows you to control and extend your containers in ways that are hard to achieve on other platforms.
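As a hedged illustration of that design, here is what a container-enabled Durable Object might look like when used directly. The ctx.container surface (running, start, getTcpPort) follows the beta’s design but should be read as an assumption:

```ts
import { DurableObject } from "cloudflare:workers";

// Sketch: the Durable Object as a programmable sidecar. The exact ctx.container
// method names (running, start, getTcpPort) are assumptions about the beta API.
export class MyContainer extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    // Lifecycle management: boot the container on demand
    if (!this.ctx.container.running) {
      this.ctx.container.start();
    }
    // Any custom logic (auth, logging, request rewriting) can run here,
    // then the request is proxied to the container on its TCP port
    return this.ctx.container.getTcpPort(8080).fetch(request);
  }
}
```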