I built my own CDN using Cloudflare R2 to serve media assets like .mp4, .glb, and .png: fast, clean, and from my own domain. No more Google Drive hacks or hitting Vercel Blob's storage limits. R2 is S3-compatible, has no egress fees, and works beautifully with CLI workflows. Here's exactly how I set it up.
Why I Didn't Use Google Drive or Vercel Blob
Google Drive
It’s not made for developers.
You can upload files, sure — but try embedding a .glb or .mp4 in a site? You'll get a redirect, a viewer wrapper, or a broken link.
No real direct URLs
No content-type headers
No control
It’s a file sharing tool, not a file serving tool. Same goes for OneDrive.
Vercel Blob
5GB free. 100GB on Pro.
Really good for a dedicated project or other small use cases.
But if you’re building something real — a product, a platform, a portfolio with actual media — you’ll outgrow that in a blink.
Also:
Not S3-compatible
No public folder structure
Locked into their API
You can’t just curl or aws s3 cp your way in. It’s a black box with a nice UI.
Vercel Blob is designed to simplify file storage and retrieval for developers, but it doesn’t expose the Amazon S3 API or protocols. It’s actually built on Cloudflare R2 (which is S3-compatible); Vercel just abstracts away the complexity of managing buckets, permissions, and SDK configuration. That makes it easier for frontend developers, but it limits interoperability with tools or systems that expect direct S3 compatibility.
Yeah, pricing is part of it — but S3 compatibility is about freedom.
Vercel Blob is great if you want to throw a file into a black box and get a URL back. It’s clean, simple, and dev-friendly. But it’s also limited. You can’t plug it into existing infrastructure. You can’t use standard tools. You can’t automate workflows outside their SDK. It’s like building a house with no doors — looks nice, but you’re stuck inside.
I still use Blob whenever I want to ship simple features or ideas fast and efficiently.
S3 compatibility means you’re playing in the open world. You can:
use the AWS CLI, Terraform, or any DevOps tool
migrate to or from AWS, R2, Backblaze, MinIO, whatever
build pipelines, backups, batch uploads, versioning, signed URLs — all with standard tools
Cloudflare R2 gives you that power — and yeah, it’s cheap. No egress fees is a game-changer. But the real win is that it doesn’t lock you in.
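To make “standard tools” concrete, here’s a rough sketch of generating a time-limited signed URL for a private object with the plain AWS CLI pointed at R2. The bucket name, account ID, and r2 profile below are placeholders, not my actual setup:

```shell
# generate a presigned URL for a private object, valid for one hour
# (placeholders: <your-bucket-name>, <your-id>; assumes an "r2" CLI profile)
aws s3 presign s3://<your-bucket-name>/videos/1.mp4 \
  --endpoint-url https://<your-id>.r2.cloudflarestorage.com \
  --region auto --profile r2 \
  --expires-in 3600
```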
I know this is just a basic CDN setup for serving my own files, but it’s a good one. I don’t want to upgrade to Vercel Pro just for 100GB and features I’ll rarely use. I want a precise solution that fits the way I work.
You use Cloudflare because you’re not just building a site — you’re building infrastructure. And infrastructure should be composable, portable, and under your control.
What I Actually Needed
S3-style uploads
Public URLs I can embed anywhere
Custom domain support
No egress fees
And a workflow that lives in my terminal
Best Options for Scalable Blob Storage
I explored several good options before settling on R2. Here’s why I didn’t go with the others:
1. Cloudflare R2
S3-compatible
No egress fees (huge)
Works great with public assets
Custom domain support
$0.015/GB-month for storage
Integrates with Workers if you want edge logic
This is what I chose. And it’s been flawless.
2. Backblaze B2
Backblaze B2 is solid for backups and media storage. The pricing is attractive, and it's also S3-compatible. But it’s not optimized for serving files globally unless paired with something like Cloudflare. That means extra setup and potential latency issues. I wanted something faster and more direct out of the box.
$0.005/GB-month for storage
Low egress fees
Great for backups, media, static files
Can pair with Cloudflare to kill egress costs
3. Supabase Storage
Supabase is a great choice if you’re already deep in their ecosystem. But I wasn’t using their auth or database, and it felt like overkill to pull in their whole stack just for file storage. I needed something more lightweight and focused purely on asset delivery.
Dev-friendly API
Supports public/private buckets, signed URLs
Great if you’re already using Supabase for auth/db
Free tier is solid; paid plans scale well
4. Firebase Storage
Firebase is popular for media-heavy apps and has tight integration with Firebase Auth. But egress costs can skyrocket, and the S3-incompatible setup makes it hard to work with standard tools. It’s too abstracted and too expensive for my use case.
Integrates with Firebase Auth
Signed URLs, access control, CDN-backed
But egress can get expensive fast
My Setup: Cloudflare R2 + Custom Domain
I created a bucket on R2.
But when I tried uploading big files with the AWS CLI, I hit this:
```shell
An error occurred (InternalError) when calling the UploadPart operation
```
Turns out, newer versions of the AWS CLI don’t play nice with R2, apparently because recent releases changed the default upload checksum behavior, which R2 didn’t support at the time.
Fixing the AWS CLI Version Issue
Check your current version:
```shell
aws --version
```

If it’s anything above 2.22.35, you’ll probably hit issues. I downloaded the macOS ARM64 .pkg installer for AWS CLI v2.22.35 from AWS’s official site, installed it, and verified:

```shell
aws --version
# should say: aws-cli/2.22.35 ...
```
Now uploads work perfectly.
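One assumption baked into every command that follows: an AWS CLI profile named r2 holding the R2 credentials. A minimal sketch of setting that up with the access keys from an R2 API token (values are placeholders):

```shell
# create an "r2" profile from an R2 API token (placeholder values)
aws configure set aws_access_key_id <access-key-id> --profile r2
aws configure set aws_secret_access_key <secret-access-key> --profile r2
aws configure set region auto --profile r2
```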
Setting Up Custom Domain with Cloudflare
My domain was on Squarespace.
Cloudflare R2 only allows custom domains if Cloudflare controls your DNS.
So I:
Added montek.dev to Cloudflare
Changed the nameservers in Squarespace to point to Cloudflare
Added a CNAME for assets.montek.dev
Linked it to my R2 bucket
And just like that, I had:
https://assets.montek.dev/images/blogs/cloudflare_cdn.png
Clean. Fast. Mine.
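A quick sanity check that files are actually served directly with proper headers (the thing Google Drive couldn’t do), using an asset I’d already uploaded:

```shell
# confirm the asset is served directly with a real Content-Type
curl -I https://assets.montek.dev/images/blogs/cloudflare_cdn.png
# expect an HTTP 200 and content-type: image/png
```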
CLI Commands I Use
```shell
# upload a file
aws s3 cp ./1.mp4 s3://<your-bucket-name>/videos/1.mp4 \
  --endpoint-url https://<your-id>.r2.cloudflarestorage.com \
  --region auto \
  --profile r2
```
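One flag worth knowing here: .glb files don’t always get a useful Content-Type from extension guessing alone, so you can set it explicitly at upload time (the path and bucket below are placeholders):

```shell
# upload a .glb with an explicit Content-Type so browsers and 3D viewers handle it
aws s3 cp ./model.glb s3://<your-bucket-name>/models/model.glb \
  --endpoint-url https://<your-id>.r2.cloudflarestorage.com \
  --region auto --profile r2 \
  --content-type model/gltf-binary
```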
But I didn’t want to remember all that.
Shell Shortcuts in .zshrc
```shell
# print the public URL for a given key
r2url() {
  echo "https://assets.montek.dev/$1"
}

# upload a local file to a key in the bucket
r2upload() {
  aws s3 cp "$1" s3://<your-bucket-name>/"$2" \
    --endpoint-url https://<your-id>.r2.cloudflarestorage.com \
    --region auto --profile r2
}

# "rename" = copy to the new key, then delete the old one
r2rename() {
  aws s3 cp s3://<your-bucket-name>/"$1" s3://<your-bucket-name>/"$2" \
    --endpoint-url https://<your-id>.r2.cloudflarestorage.com \
    --region auto --profile r2
  aws s3 rm s3://<your-bucket-name>/"$1" \
    --endpoint-url https://<your-id>.r2.cloudflarestorage.com \
    --region auto --profile r2
}

# list every object as a public URL
r2list() {
  aws s3 ls s3://<your-bucket-name>/ --recursive \
    --endpoint-url https://<your-id>.r2.cloudflarestorage.com \
    --region auto --profile r2 \
    | awk '{print $4}' | while read file; do
      echo "https://assets.montek.dev/$file"
    done
}
```
Now I can just do:
```shell
r2url videos/1.mp4
r2rename images/old.png images/new.png
r2list
r2upload image.png images/image.png
```
All muscle memory.
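If I ever want one less step, a tiny wrapper on top of these could upload and print the public URL in one go. Just a sketch built from the two helpers above, not something in my actual .zshrc:

```shell
# hypothetical helper: upload a file, then echo its public URL
r2push() {
  r2upload "$1" "$2" && r2url "$2"
}
```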
The Result
I’ve got a zero-egress, custom-domain, CDN-backed asset pipeline.
I can upload anything, serve it globally, and never worry about bandwidth costs or ugly links.
This is how infrastructure should feel: Fast, clean, and under your control.