Edge Computing for Web Developers: Cloudflare Workers and Deno Deploy

Traditional web applications run on a server in one data center. A user in Tokyo requesting data from a server in Virginia waits for packets to cross the Pacific Ocean — roughly 150 milliseconds of pure network latency before your application code even starts executing. Edge computing eliminates most of that latency by distributing your code to data centers worldwide, running it milliseconds from each user.

This guide covers the major edge computing platforms available to web developers today, with working code examples, practical use cases, and honest assessments of where edge computing fits (and where it does not) in your architecture.

What Is Edge Computing?

Instead of one origin server, your application runs on hundreds of edge nodes distributed across the globe. A request from Tokyo hits a Tokyo edge node. A request from London hits a London node. A request from São Paulo hits a São Paulo node. Each response originates from the closest possible location to the user.

This is different from a CDN serving static files. Traditional CDNs cache pre-built assets — HTML pages, images, CSS bundles — and serve them from edge locations. Edge computing goes further: your actual application code executes at the edge. This means dynamic responses, personalized content, authentication checks, and API logic all run near the user rather than in a distant origin data center.

How Edge Runtimes Differ from Node.js

Edge runtimes are not Node.js running on geographically distributed servers. They use lightweight, V8-based isolates that start in microseconds rather than the milliseconds a full Node.js process needs. This means:

  • Near-zero cold starts — isolates spin up in under 5 milliseconds, compared to the 100-500ms cold starts of serverless functions on AWS Lambda
  • Web Standard APIs — Edge runtimes implement the fetch, Request, Response, URL, TextEncoder, and crypto APIs from the browser specification, not Node.js-specific APIs
  • Constrained execution — CPU time is capped (typically 10-50ms per request) and there is no persistent filesystem. This forces a stateless architecture.
  • Reduced API surface — Many Node.js built-in modules (fs, child_process, net) are unavailable. Not every npm package works at the edge.
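
These constraints cut both ways: because edge runtimes expose browser-standard APIs, code written against them also runs unchanged in modern browsers and in Node 18+. A small illustration, using only the APIs listed above (no edge platform required):

```javascript
// Only Web Standard APIs: URL, Response, Headers, TextEncoder.
// The same code runs in Cloudflare Workers, Deno Deploy, and Node 18+.
const url = new URL('https://example.com/api/items?page=2');
const body = JSON.stringify({ page: Number(url.searchParams.get('page')) });

const response = new Response(body, {
  status: 200,
  headers: { 'Content-Type': 'application/json' },
});

const bytes = new TextEncoder().encode(body); // Uint8Array, not a Node Buffer
```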

Cloudflare Workers

Cloudflare Workers is the most mature edge computing platform, running on Cloudflare’s network of over 300 data centers in more than 100 countries. Workers use the V8 JavaScript engine directly (not Node.js) and support JavaScript, TypeScript, Rust (compiled to WebAssembly), and other languages that compile to Wasm.

A Basic Worker

export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    // API route
    if (url.pathname === '/api/hello') {
      return new Response(
        JSON.stringify({
          message: 'Hello from the edge!',
          location: request.cf?.city || 'unknown',
          timestamp: Date.now(),
        }),
        {
          headers: { 'Content-Type': 'application/json' },
        }
      );
    }

    // Serve static assets from Workers Sites / Assets
    return env.ASSETS.fetch(request);
  },
};

The request.cf object is unique to Cloudflare Workers — it contains geolocation data (city, country, continent, latitude, longitude, timezone) derived from the connecting IP address, without any third-party geolocation API call.

Workers KV: Global Key-Value Storage

Workers KV is an eventually consistent key-value store replicated across every Cloudflare edge location. Reads are fast (single-digit milliseconds) because data is cached at each edge node. Writes propagate globally within 60 seconds.

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    if (request.method === 'GET') {
      const value = await env.MY_KV.get(key);
      if (!value) {
        return new Response('Not found', { status: 404 });
      }
      return new Response(value, {
        headers: { 'Content-Type': 'application/json' },
      });
    }

    if (request.method === 'PUT') {
      const body = await request.text();
      await env.MY_KV.put(key, body, { expirationTtl: 3600 });
      return new Response('Saved', { status: 201 });
    }

    return new Response('Method not allowed', { status: 405 });
  },
};

D1: SQL at the Edge

For applications needing relational data, Cloudflare D1 is a SQLite-based database that runs at the edge. Combined with Workers, it enables full CRUD applications without a traditional database server.
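
As a sketch of the setup (the binding name, database name, and placeholder ID are illustrative), a Worker gets access to D1 through a binding declared in wrangler.toml:

```toml
# wrangler.toml — expose a D1 database to the Worker as env.DB
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "<your-database-id>"
```

With the binding in place, queries run from the handler in D1's prepared-statement style, roughly env.DB.prepare('SELECT * FROM todos WHERE id = ?').bind(id).all().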

Deno Deploy

Deno Deploy runs Deno — the JavaScript and TypeScript runtime created by Node.js author Ryan Dahl — at the edge across 35+ regions. It supports TypeScript natively with zero configuration, uses Web Standard APIs, and deploys directly from a GitHub repository.

A Deno Deploy Application

Deno.serve((req: Request): Response => {
  const url = new URL(req.url);

  if (url.pathname === '/api/time') {
    return Response.json({
      time: new Date().toISOString(),
      region: Deno.env.get('DENO_REGION') || 'local',
    });
  }

  if (url.pathname === '/api/headers') {
    const headers: Record<string, string> = {};
    req.headers.forEach((value, key) => {
      headers[key] = value;
    });
    return Response.json(headers);
  }

  return new Response('Not found', { status: 404 });
});

Deno Deploy’s key advantage is its compatibility with the broader Deno ecosystem, including Deno KV (a globally distributed key-value database built into the runtime), native TypeScript support, and a permissions model that limits what code can access by default.

Deno KV: Built-In Distributed Storage

const kv = await Deno.openKv();

Deno.serve(async (req: Request): Promise<Response> => {
  const url = new URL(req.url);

  if (url.pathname === '/api/visit') {
    const key = ['visits', 'total'];
    const current = await kv.get<number>(key);
    const count = (current.value || 0) + 1;
    await kv.set(key, count);
    return Response.json({ visits: count });
  }

  return new Response('Not found', { status: 404 });
});

Vercel Edge Functions

Vercel Edge Functions run on Cloudflare’s network but integrate tightly with the Vercel deployment platform and frameworks like Next.js. If you already deploy on Vercel, edge functions require minimal configuration — just export a function from an API route with export const runtime = 'edge'.

// app/api/geo/route.ts (Next.js App Router)
export const runtime = 'edge';

export async function GET(request: Request) {
  const { geo } = request as any;

  return Response.json({
    country: geo?.country || 'unknown',
    city: geo?.city || 'unknown',
    region: geo?.region || 'unknown',
  });
}

Vercel Edge Middleware is particularly powerful — it runs before every request reaches your application, enabling authentication, redirects, A/B testing, and feature flags at the network level rather than the application level.
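
As a framework-agnostic sketch of that pattern (in Next.js the equivalent check would live in middleware.ts), unauthenticated requests to a protected path are redirected before they ever reach the application. The cookie name and paths here are illustrative:

```javascript
// Edge middleware sketch: redirect unauthenticated requests to /login.
// A real check would validate the session value, not just its presence.
function middleware(request) {
  const url = new URL(request.url);
  const cookie = request.headers.get('Cookie') || '';

  if (url.pathname.startsWith('/app') && !cookie.includes('session=')) {
    return new Response(null, {
      status: 307, // temporary redirect, method preserved
      headers: { Location: '/login' },
    });
  }
  return null; // fall through to the application
}
```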

Practical Use Cases

Edge computing is not appropriate for every workload. Understanding where it excels helps you make informed architectural decisions.

Geolocation-Based Routing

Edge functions have access to the user’s geographic location from the connecting IP. This enables content localization, currency conversion, regulatory compliance (showing GDPR banners only to EU visitors), and regional pricing without any client-side JavaScript or third-party API call.
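
A minimal sketch of regional branching, assuming the country and continent codes come from the runtime (request.cf.country and request.cf.continent on Cloudflare Workers); the lookup table is illustrative, not exhaustive:

```javascript
// Map the visitor's ISO country code to a display currency, and decide
// from the continent code whether to show a GDPR banner.
const CURRENCY_BY_COUNTRY = {
  US: 'USD', GB: 'GBP', JP: 'JPY', DE: 'EUR', FR: 'EUR', BR: 'BRL',
};

function currencyFor(countryCode) {
  return CURRENCY_BY_COUNTRY[countryCode] || 'USD'; // fallback currency
}

function needsGdprBanner(continentCode) {
  // Rough approximation: the EU continent code stands in for the EEA.
  return continentCode === 'EU';
}
```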

Authentication and Authorization

Verifying a JWT or session token at the edge means unauthorized requests never reach your origin server. This reduces load on your application infrastructure and decreases response time for legitimate requests. Because the check runs at the node nearest the user rather than at the origin, edge auth can shave hundreds of milliseconds from every protected route.
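
The cheap part of that screening can even be done synchronously: decode the token's payload and reject malformed or expired tokens before spending CPU on the signature check (which would use crypto.subtle). A sketch, with illustrative helper names:

```javascript
// Decode a JWT payload (the middle base64url segment) WITHOUT verifying it.
function decodeJwtPayload(token) {
  const parts = token.split('.');
  if (parts.length !== 3) return null;
  const b64 = parts[1].replace(/-/g, '+').replace(/_/g, '/');
  try {
    return JSON.parse(atob(b64));
  } catch {
    return null;
  }
}

// Reject tokens that are malformed or past their exp claim (a NumericDate
// in seconds, per RFC 7519). Signature verification must still happen
// before the token is trusted.
function isTokenUsable(token, nowSeconds = Date.now() / 1000) {
  const claims = decodeJwtPayload(token);
  if (!claims) return false;
  if (typeof claims.exp === 'number' && claims.exp < nowSeconds) return false;
  return true;
}
```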

A/B Testing Without Client-Side Flicker

Traditional A/B testing loads the page, checks which variant the user is in, and then swaps content — causing a visible flicker. Edge functions assign variants before the page is even generated, serving the correct version directly.

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const cookie = request.headers.get('Cookie') || '';

    // Check for existing variant assignment
    let variant = cookie.match(/ab-variant=(A|B)/)?.[1];

    // Assign new variant if none exists
    if (!variant) {
      variant = Math.random() < 0.5 ? 'A' : 'B';
    }

    // Fetch the appropriate page variant from origin
    url.pathname = `/variants/${variant}${url.pathname}`;
    const response = await fetch(url.toString());

    // Clone response and set variant cookie
    const newResponse = new Response(response.body, response);
    newResponse.headers.set(
      'Set-Cookie',
      `ab-variant=${variant}; Path=/; Max-Age=86400`
    );
    return newResponse;
  },
};

API Rate Limiting

Enforcing rate limits at the edge protects your origin server from traffic spikes. Each edge node can track request counts using KV storage or Durable Objects (for strong consistency) and reject requests that exceed the limit before they consume origin resources.
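
A fixed-window counter is enough to sketch the idea. The Map below is per-isolate in-memory state, so in production the counts would live in KV (approximate) or a Durable Object (exact); the window and limit are illustrative:

```javascript
// Fixed-window rate limiter: at most LIMIT requests per client per window.
const WINDOW_MS = 60_000;
const LIMIT = 100;
const counters = new Map(); // clientId -> { windowStart, count }

function allowRequest(clientId, now = Date.now()) {
  const entry = counters.get(clientId);

  // First request, or the previous window has expired: start a new one.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(clientId, { windowStart: now, count: 1 });
    return true;
  }

  entry.count += 1;
  return entry.count <= LIMIT;
}
```

In a Worker, clientId would typically be request.headers.get('CF-Connecting-IP'), and a rejected request would get a 429 response.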

Image Transformation

Edge functions can resize, crop, and convert images on demand: the function receives the image URL and transformation parameters, fetches the original from storage, applies the transformation, and caches the result at the edge. Subsequent requests for the same transformation are served from cache.
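
The front half of that pipeline, extracting the transformation parameters from the request URL, can be sketched as a pure function. The parameter names are illustrative, and the actual resizing would be delegated to an image library or a platform service:

```javascript
// Parse ?width=...&format=... from a request URL like
// https://cdn.example.com/img/photo.jpg?width=400&format=webp
function parseTransform(requestUrl) {
  const url = new URL(requestUrl);
  const width = Number(url.searchParams.get('width')) || null;
  const allowed = new Set(['webp', 'avif', 'jpeg', 'png']);
  const format = url.searchParams.get('format');

  return {
    path: url.pathname, // key for the origin fetch and the edge cache
    width,
    format: allowed.has(format) ? format : 'original',
  };
}
```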

Edge Databases and State Management

The biggest challenge in edge computing is data. Your code runs in 300 locations, but your database usually runs in one. Reading from the edge and writing to a centralized database recreates the latency problem you were trying to solve.

Several solutions address this:

  • Cloudflare D1 — SQLite at the edge with read replicas distributed globally. Writes go to a primary region.
  • Deno KV — Built into the Deno runtime. Globally replicated, eventually consistent reads, strongly consistent reads within a region.
  • Turso (libSQL) — SQLite-compatible database with edge replicas. Each replica is a full SQLite database that syncs with the primary.
  • PlanetScale — MySQL-compatible with read replicas in multiple regions. Not true edge, but reduces read latency significantly.
  • Upstash Redis — Redis-compatible with global replication. Useful for session storage, caching, and rate limiting at the edge.

Limitations and Trade-offs

Edge computing has real constraints that affect what you can build:

  • CPU time limits — Cloudflare Workers allow 10ms CPU time on the free plan, 50ms on paid. Long-running computations need to happen elsewhere.
  • No persistent filesystem — You cannot write to disk. All state goes to KV stores, databases, or external services.
  • Limited npm compatibility — Packages that depend on Node.js built-ins (fs, net, child_process) will not work. Check compatibility before committing to an edge-first architecture.
  • Debugging complexity — Distributed systems are harder to debug than a single server. Logs come from 300 locations. Reproducing region-specific issues requires understanding the edge topology.
  • Eventual consistency — Most edge data stores are eventually consistent. If your application requires strong consistency for every read, edge replicas introduce complexity.
  • Cost at scale — Edge computing pricing is based on request count and CPU time, not server hours. For high-throughput applications, per-request billing can cost more than running a traditional server.

When to Use Edge Computing

Edge computing works best for workloads that are latency-sensitive, read-heavy, and stateless or eventually consistent. Middleware operations (auth, redirects, header manipulation), API responses that vary by geography, and personalization layers are ideal candidates.

It works less well for write-heavy workloads, long-running computations, and applications that need strong transactional consistency across the globe. In those cases, a well-optimized traditional architecture often outperforms an edge-first approach.

Many production architectures use a hybrid model: edge functions handle authentication, caching, and personalization, while an origin server (or serverless functions) handles business logic that needs database transactions or heavy computation. Modern frameworks increasingly support this split architecture out of the box.

Getting Started

The fastest path to edge computing depends on your current setup:

  • Already using Vercel — Add export const runtime = 'edge' to your API routes and middleware. No infrastructure changes needed.
  • Starting fresh — Cloudflare Workers offers the most mature ecosystem (KV, D1, R2 storage, Queues, Durable Objects). The free tier includes 100,000 requests per day.
  • TypeScript-first — Deno Deploy gives you native TypeScript, built-in KV storage, and a deployment model that works from a GitHub push.

Edge computing extends the principles Tim Berners-Lee envisioned for the web — decentralized, fast, and accessible to everyone regardless of location. As edge runtimes mature and edge databases become more capable, the line between edge and origin will continue to blur.

Edge Computing vs Traditional Serverless

The distinction between edge computing and traditional serverless (AWS Lambda, Google Cloud Functions, Azure Functions) comes down to geography and startup time. Traditional serverless functions run in a single region you select at deployment. If you deploy to us-east-1, users in Singapore experience cross-Pacific latency. Edge functions run in every region simultaneously, so the Singapore user hits a Singapore node.

Cold start behavior differs significantly. AWS Lambda cold starts range from 100ms to several seconds depending on the runtime and package size. Edge runtimes use V8 isolates that initialize in under 5ms. For latency-sensitive endpoints like authentication checks or API middleware, this difference is substantial.

However, traditional serverless offers capabilities edge cannot match: longer execution times (up to 15 minutes on Lambda), larger memory allocation (up to 10 GB), access to the full Node.js or Python standard library, and direct VPC connectivity to databases and internal services. The choice is not edge versus serverless — it is which workloads belong at the edge and which belong in a single-region function.

Frequently Asked Questions

Is edge computing the same as serverless?

Edge computing is a subset of serverless computing. All edge functions are serverless (you do not manage servers), but not all serverless functions run at the edge. AWS Lambda runs in a single region by default. Cloudflare Workers run in every region simultaneously. The key distinction is geographic distribution and the resulting latency characteristics.

Can I run a full web application entirely at the edge?

For read-heavy applications with modest data requirements, yes. A blog, documentation site, or marketing page can run entirely at the edge using an edge runtime plus an edge database like D1 or Turso. For applications with complex transactions, heavy writes, or large datasets, a hybrid architecture (edge for reads, origin for writes) is more practical.

How do I debug issues that only happen in specific edge regions?

Cloudflare Workers provides wrangler tail for streaming logs from all edge locations. Deno Deploy offers a similar log stream in the dashboard. For systematic debugging, add structured logging that includes the edge region identifier in every log entry, then filter by region. Some platforms allow you to pin requests to specific regions during development.
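
A sketch of that structured-logging suggestion; the region value would come from the platform (for example request.cf.colo on Workers or the DENO_REGION environment variable on Deno Deploy), and the field names are illustrative:

```javascript
// Emit one JSON log line per event, always tagged with the edge region,
// so a log aggregator can filter or group entries by location.
function logEntry(region, level, message, extra = {}) {
  return JSON.stringify({
    ts: new Date().toISOString(),
    region,
    level,
    message,
    ...extra,
  });
}

// Example: console.log(logEntry('fra01', 'error', 'upstream timeout', { status: 504 }));
```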

What happens when the edge node closest to my user goes down?

Edge platforms handle failover automatically. If a node in Frankfurt is unavailable, requests route to the next closest node (Amsterdam, Paris, or London, depending on the provider's topology). This failover is transparent to your code and typically adds only a few milliseconds of additional latency.