Running Next.js + Supabase in Production for African Users
We run two production applications on Next.js + Supabase serving thousands of users in Ethiopia. This is not a tutorial about setting up the stack — it is a field report about what happens when you deploy this stack for users on the other side of the world from your database.
Supabase Latency From Ethiopia
Supabase's default regions are US East, US West, and a few European options. None of them are close to Ethiopia. Our nearest available region is Frankfurt (eu-central-1), which adds approximately 120-180ms of network latency to every database round trip from an Ethiopian client.
For server-side operations (API calls from our Node.js backend, also hosted in Europe via Render), the latency is minimal — 5-15ms between European data centers. The problem is client-side Supabase queries, which were part of our original architecture for real-time subscriptions.
Our solution was to minimize client-side Supabase calls. We route almost everything through our backend API, which co-locates with Supabase in Europe. The client talks to our API (one round trip across continents), and our API talks to Supabase (fast intra-region call). This adds one hop but reduces total latency compared to a direct client-Supabase call because we can batch multiple database operations into a single API response.
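The batching win is easy to see in code. Below is a minimal sketch with hypothetical function names and a structural stand-in for the real database client: one cross-continent request to the API fans out into several concurrent intra-region queries.

```typescript
// Structural stand-in for the query surface we need (hypothetical names).
interface Db {
  fetchBalance(userId: string): Promise<number>;
  fetchOpenBets(userId: string): Promise<string[]>;
  fetchRecentResults(userId: string): Promise<string[]>;
}

// One API response aggregates what would otherwise be three separate
// client-to-Supabase round trips at 120-180ms each.
async function getDashboard(db: Db, userId: string) {
  // Promise.all keeps the intra-region queries concurrent, so the
  // server-side cost is roughly one 5-15ms round trip, not three.
  const [balance, openBets, recentResults] = await Promise.all([
    db.fetchBalance(userId),
    db.fetchOpenBets(userId),
    db.fetchRecentResults(userId),
  ]);
  return { balance, openBets, recentResults };
}
```

The client pays one expensive hop for the whole payload instead of one per query.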
For real-time features (live game state updates, balance changes), we use Supabase Realtime from the client — the persistent WebSocket connection amortizes the latency cost across many events. The initial connection takes 200-300ms, but subsequent events arrive in near-real-time.
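A subscription might look like the sketch below. The client type is declared structurally so the snippet stands alone without the `@supabase/supabase-js` package; the call shape mirrors the supabase-js v2 channel API, and the table and filter names (`balances`, `user_id`) are illustrative, not our actual schema.

```typescript
// Minimal structural types mirroring the supabase-js v2 channel API.
interface RealtimeChannel {
  on(
    type: "postgres_changes",
    filter: { event: string; schema: string; table: string; filter?: string },
    callback: (payload: { new: Record<string, unknown> }) => void
  ): RealtimeChannel;
  subscribe(): RealtimeChannel;
}
interface RealtimeClient {
  channel(name: string): RealtimeChannel;
}

function subscribeToBalance(
  client: RealtimeClient,
  userId: string,
  onChange: (balance: number) => void
): RealtimeChannel {
  // The WebSocket handshake costs 200-300ms once; every event after
  // that arrives without a fresh cross-continent round trip.
  return client
    .channel(`balance:${userId}`)
    .on(
      "postgres_changes",
      { event: "UPDATE", schema: "public", table: "balances", filter: `user_id=eq.${userId}` },
      (payload) => onChange(payload.new.balance as number)
    )
    .subscribe();
}
```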
Static vs Server Rendering: The Tradeoffs
Next.js gives you the full rendering spectrum: static generation, server-side rendering, and client-side rendering. For African users on slow connections, the choice matters enormously.
Static generation (what we use for marketing pages, game rules, informational content) is the gold standard for performance. The HTML is pre-built, served from a CDN, and loads in under a second even on 3G. We statically generate everything that does not require real-time data.
Server-side rendering would give us dynamic content with good SEO, but it adds a full round trip to our server in Europe. For Ethiopian users, this means 200-400ms just for the server response, before the browser starts rendering. We therefore avoid SSR on latency-critical pages.
Client-side rendering with loading states is our approach for dynamic, personalized content (account dashboards, game interfaces). The page shell loads instantly from static assets, then data populates via API calls. This feels faster because the user sees meaningful content immediately, even before their personal data loads.
The hybrid approach works well: static shell loads fast, personalized data streams in, real-time updates flow through WebSocket. The user perception is of a fast, responsive application even though the data may be 200ms behind.
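Stripped of framework details, the hybrid pattern reduces to: paint immediately with whatever is cached (possibly nothing), then re-render when the cross-continent fetch returns. A minimal framework-free sketch, with illustrative names and the view and fetcher injected:

```typescript
// In the real app the "view" is a React tree and fetchFresh is a call
// to our European backend; these names are illustrative.
interface View {
  render(data: { balance: number | null; stale: boolean }): void;
}

async function loadDashboard(
  view: View,
  cached: { balance: number } | null,
  fetchFresh: () => Promise<{ balance: number }>
): Promise<void> {
  // 1. Paint immediately with whatever we have, flagged as stale.
  view.render({ balance: cached?.balance ?? null, stale: true });
  // 2. Replace it when the ~200ms cross-continent call returns.
  const fresh = await fetchFresh();
  view.render({ balance: fresh.balance, stale: false });
}
```

The user always sees two paints: an instant (possibly stale) one, then the authoritative one.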
PWA Implementation for Offline Capability
We implemented Service Worker caching for our betting platform. The strategy is nuanced:
Cache-first for static assets: CSS, JavaScript, images, and fonts are cached aggressively. Once loaded, they never hit the network again until the service worker updates.
Network-first for API data: We always try to fetch fresh data from the API. If the network fails, we fall back to the last cached response. This means a user can open the app offline and see their last-known balance, last-known odds, and navigate the interface — they just cannot place bets until connectivity returns.
Background sync for actions: If a user attempts an action while offline (though we prevent bet placement offline for obvious reasons), the action is queued and synced when connectivity returns. We use this for non-critical actions like preference changes and notification acknowledgments.
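The two caching strategies reduce to a small routing decision. The sketch below uses plain strings and injected cache/fetch stand-ins so it runs anywhere; a real service worker would use the Cache and Response APIs inside a `fetch` event listener, and the `/api/` prefix is illustrative.

```typescript
// Portable stand-ins for the browser Cache API and fetch.
interface SimpleCache {
  match(url: string): Promise<string | undefined>;
  put(url: string, body: string): Promise<void>;
}
type Fetcher = (url: string) => Promise<string>;

async function respond(url: string, cache: SimpleCache, fetcher: Fetcher): Promise<string> {
  const isApi = new URL(url, "https://example.com").pathname.startsWith("/api/");
  if (!isApi) {
    // Cache-first: a static asset never re-hits the network once cached.
    const hit = await cache.match(url);
    if (hit !== undefined) return hit;
    const body = await fetcher(url);
    await cache.put(url, body);
    return body;
  }
  // Network-first: prefer fresh API data; fall back to the last
  // cached response when the network is down.
  try {
    const body = await fetcher(url);
    await cache.put(url, body);
    return body;
  } catch {
    const stale = await cache.match(url);
    if (stale !== undefined) return stale;
    throw new Error("offline and nothing cached");
  }
}
```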
The PWA install rate surprised us. Over 40% of returning users have installed the PWA to their home screen. For Ethiopian users, this is significant — it saves data on subsequent visits (cached assets) and provides an app-like experience without the friction of app store installation.
Image Optimization for Slow Connections
Images are the biggest performance bottleneck for slow connections. Our optimization strategy:
Aggressive compression: All images are processed through a build-time optimization pipeline. We target file sizes under 50KB for UI elements and under 100KB for promotional images.
WebP with JPEG fallback: We serve WebP to browsers that support it (which is most modern browsers) and fall back to optimized JPEG for older devices.
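The fallback mechanism is the standard HTML `<picture>` element: the browser picks the first `<source>` whose type it supports, otherwise it uses the `<img>`. A minimal sketch of a markup generator, with illustrative asset paths:

```typescript
// Emits a <picture> block for one image; /images/ paths are illustrative.
function pictureMarkup(baseName: string, alt: string): string {
  return [
    "<picture>",
    // WebP-capable browsers stop at this <source>.
    `  <source srcset="/images/${baseName}.webp" type="image/webp">`,
    // Older browsers ignore the <source> and load the JPEG; the
    // native loading="lazy" attribute defers below-the-fold fetches.
    `  <img src="/images/${baseName}.jpg" alt="${alt}" loading="lazy">`,
    "</picture>",
  ].join("\n");
}
```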
LQIP (Low Quality Image Placeholder): Before the full image loads, we show a tiny (under 1KB) blurred preview. This eliminates layout shift and gives the user a visual preview while the full image loads.
Lazy loading: Images below the fold are loaded only when they scroll into view. Combined with intersection observer, this means initial page loads only fetch the images the user can actually see.
SVG for icons and UI elements: Rather than loading icon sprite sheets or font-based icons, we use inline SVGs through Lucide React. These render instantly with zero network requests.
CDN Strategy: Render vs Vercel
We evaluated both Render and Vercel for our deployment needs. The decision was not straightforward.
Vercel has better edge caching and a more sophisticated CDN. Their Edge Network has nodes closer to Africa (though still not in Africa). Static asset delivery is measurably faster from Vercel.
Render gives us more control over our backend infrastructure. We can run persistent Node.js services, manage background workers, and maintain WebSocket connections — all things that are more complex on Vercel's serverless architecture.
Our current setup: marketing site and static assets on Vercel-like static hosting with global CDN. Backend API and real-time services on Render's always-on instances. Database on Supabase in Frankfurt.
This split gives us the best of both worlds: fast static delivery for the majority of page loads, and reliable persistent services for the dynamic backend.
Key Performance Numbers
After six months of optimization, these are our production metrics for Ethiopian users:
- First Contentful Paint: 1.2s on 4G, 2.8s on 3G
- Time to Interactive: 1.8s on 4G, 3.5s on 3G
- API Response Time (p95): 350ms from Ethiopian client
- Lighthouse Score: 92 (Performance), 95 (Accessibility)
- Total JavaScript Bundle: 180KB gzipped
- Service Worker Cache Hit Rate: 78% of asset requests served from cache
These numbers are not world-class by developed-market standards, but they are competitive for the Ethiopian market. The key insight is that absolute numbers matter less than perceived performance — showing content quickly and loading data progressively makes the experience feel fast, even when the underlying network is slow.
The Next.js + Supabase stack is a strong choice for African-market applications. The performance ceiling is high enough, the developer experience is excellent, and the cost is manageable. What matters is understanding the constraints of your users' networks and devices, and designing your architecture to respect them.