Free movie poster hosting using Cloudflare R2 and Workers

Cloudflare provides an absurdly large arsenal of tools and services for free. Kind of. A lot of services are only free up to a certain limit, but luckily I am not hitting those limits, so everything stays free for me.

I want to move away from my root / dedicated / virtual servers that I piled up over the years. For a project I need movie posters. At the moment, they are hosted on one of my servers. A script downloads them from TMDB, renames them and puts them in a directory, where my nginx serves them. Said nginx is already behind Cloudflare, thus caching is already in place.

But I want to get rid of the server. AWS has some sort of free tier for their S3, but I have basically no touch points with AWS in my private projects. There are other free S3-compatible storage providers, but almost all of them have similar or even worse limits, or aren't free at all. Enter Cloudflare R2. Their pricing model fits my needs much better.

  Description           Free                          Paid rates
  Storage               10 GB / month                 $0.015 / GB-month
  Class A operations    1 million requests / month    $4.50 / million requests
  Class B operations    10 million requests / month   $0.36 / million requests

Class A operations are basically writes, e.g. putting new objects into the bucket; Class B operations are reads, e.g. retrieving an object. But the most interesting part: egress bandwidth is free.

So with that amount of free requests and 10 GB of free storage, moving everything over to R2 seems to be a viable option.
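To get a feeling for what exceeding the free tier would actually cost, the table above can be turned into a quick back-of-the-envelope calculation. A sketch: `r2MonthlyCost` is a made-up helper name, and the numbers are just the list prices and free allotments from the table.

```javascript
// Estimate the monthly R2 bill beyond the free tier.
// Free allotments: 10 GB storage, 1M Class A ops, 10M Class B ops.
function r2MonthlyCost({ storedGb, classAMillions, classBMillions }) {
  const storage = Math.max(0, storedGb - 10) * 0.015;        // $/GB-month
  const classA = Math.max(0, classAMillions - 1) * 4.5;      // $/million writes
  const classB = Math.max(0, classBMillions - 10) * 0.36;    // $/million reads
  return storage + classA + classB;
}

// A poster collection that stays inside the free tier costs nothing:
console.log(r2MonthlyCost({ storedGb: 10, classAMillions: 1, classBMillions: 10 })); // 0
```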

The storage thing has been solved, but there's still that "get rid of the server" part. Say hello to Cloudflare Workers. They integrate into basically everything, very similar to AWS Lambda. Luckily, Cloudflare yet again provides a free tier for Workers. In short, I can do around 100k requests per day and my code gets up to 10ms of CPU time.

The latter caught me a bit off guard.

How should I be able to perform a request against the TMDB API, download an image and then put it into the R2 within 10ms?

Well, run time and CPU time are completely different things: time spent waiting on network I/O doesn't count against the CPU budget. So I decided to ignore the limit for now.

So, I thought why not simply combine a worker with R2? The general idea:

  1. a poster URL like tt0816692.jpg is requested on the worker's domain
  2. the worker tries to get tt0816692.jpg from the attached R2 bucket
  3. if it doesn't return an object: perform that TMDB API downloading stuff and put it there
  4. after downloading, finally serve said file.

The configuration is stupidly simple. Set DNS entries. Deploy the code. Attach the R2 bucket to the worker. Basically done. I have yet to configure some limits to make sure that nobody who finds my stuff causes any trouble with my bank account.
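For reference, attaching a bucket to a worker is just a few lines of wrangler.toml. A sketch: the worker name, bucket name, and file paths are placeholders; only the `MY_BUCKET` binding name has to match what the code accesses via `env`.

```toml
name = "posters"                  # placeholder worker name
main = "src/index.js"
compatibility_date = "2022-09-01" # placeholder date

[[r2_buckets]]
binding = "MY_BUCKET"             # exposed to the worker as env.MY_BUCKET
bucket_name = "movie-posters"     # placeholder bucket name
```

The TMDB key shouldn't live in the config; `wrangler secret put TMDB_API_KEY` stores it encrypted and exposes it as `env.TMDB_API_KEY`.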

My worker code looks something like this. Still room for improvements, but works just fine for a first version.

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);
    const imdbid = key.split(".")[0];

    // only accept IMDb-style ids (ch/co/ev/nm/tt plus 7 digits) and .jpg keys
    if (!/^(ev\d{7}\/\d{4}(-\d)?|(ch|co|ev|nm|tt)\d{7})$/.test(imdbid) || !key.endsWith(".jpg")) {
      return new Response(null, { status: 400 });
    }

    if (request.method === "GET") {
      let object = await env.MY_BUCKET.get(key);

      if (object === null) {
        const opts = {
          headers: { "content-type": "application/json;charset=UTF-8" },
        };

        // look the movie up on TMDB by its IMDb id
        const response = await fetch(
          `https://api.themoviedb.org/3/find/${imdbid}?external_source=imdb_id&api_key=${env.TMDB_API_KEY}`,
          opts
        );
        const jsonres = await response.json();
        const poster_path = `https://image.tmdb.org/t/p/original${jsonres.movie_results[0].poster_path}`;

        // download the poster and store it in the bucket
        const poster_res = await fetch(poster_path);
        await env.MY_BUCKET.put(key, poster_res.body);
        object = await env.MY_BUCKET.get(key);
      }

      const headers = new Headers();
      headers.set("etag", object.httpEtag);
      return new Response(object.body, { headers });
    }

    // anything other than GET is not supported
    return new Response(null, { status: 405 });
  },
};
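The key validation at the top of the worker can be exercised in isolation. A small sketch: the id pattern is the one from the worker, anchored so stray surrounding characters are rejected, and `isValidKey` is just a helper name for illustration.

```javascript
// Same check as in the worker: IMDb-style id (ch/co/ev/nm/tt + 7 digits,
// or an ev id with a year suffix) followed by a .jpg extension.
const idPattern = /^(ev\d{7}\/\d{4}(-\d)?|(ch|co|ev|nm|tt)\d{7})$/;

function isValidKey(key) {
  const imdbid = key.split(".")[0];
  return idPattern.test(imdbid) && key.endsWith(".jpg");
}

console.log(isValidKey("tt0816692.jpg")); // true
console.log(isValidKey("evil.jpg"));      // false
console.log(isValidKey("tt0816692.png")); // false, wrong extension
```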

About that 10ms CPU time limit? Well, not an issue. Not at all.

Cloudflare Worker stats

All in all, this thing has been running for over a week without any issues so far. Class B operations are much higher than Class A ones, but I expected that due to the way objects are checked, stored and retrieved from the bucket. I'm happy so far.