Introduction
------------
Long-running backend processes and reactive UIs are a bad combination by default. A user clicks “upload,” your server spins up a pipeline that takes fifteen seconds, and your only options are: make them wait on a blank spinner, poll every second and waste everyone’s time, or actually solve the problem.
This article is about solving the problem. We’ll build a pattern where the backend runs a state machine and the frontend listens over a persistent HTTP connection. No WebSocket complexity, no polling interval math, no Redis required. The UI updates the moment something happens on the server: no manual refreshes, no visible lag.
Some server-side operations can’t be shoved into a normal HTTP request-response cycle. For example, OAuth flows navigate multiple redirects and callbacks. Document pipelines upload, validate, scan, convert, and store files. Verification flows send emails, wait for confirmations, and time out. AI agent workflows need to show progress step by step, or users assume the thing is broken. You can’t just POST and wait 30 seconds. You need the server to push state updates as the process unfolds.
Before diving into the implementation, a few definitions are worth pinning down. Server-Sent Events (SSE) are a unidirectional, real-time communication protocol: the browser opens a long-lived HTTP connection and the server pushes events down it whenever it has something to say. WebSockets are different: bidirectional, full-duplex communication over TCP where both sides can send messages at any time. Polling takes the opposite approach, where the client asks the server “anything new?” on a timer; long polling improves on this slightly by holding the connection open until there’s an answer, but both approaches generate requests even when nothing has changed. A state machine is a process that exists in exactly one of a finite set of states and transitions between them based on events; “pending → validating → scanning → complete” is a state machine. And a race condition is when your code’s output depends on which async operation finishes first, in a way you didn’t account for.
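Two of these definitions are concrete enough to pin down in code. The SSE wire format is plain text (an `event:` name line, a `data:` line, then a blank line), and a state machine is just a transition table. A minimal sketch, with step names chosen to mirror the pipeline later in this article:

```typescript
// The SSE wire format: event name + data + blank line, all plain text
function formatSSE(name: string, data: unknown): string {
  return `event: ${name}\ndata: ${JSON.stringify(data)}\n\n`;
}

// A state machine as a transition table: each state lists its legal successors
type Step = 'pending' | 'validating' | 'scanning' | 'complete' | 'failed';

const transitions: Record<Step, Step[]> = {
  pending: ['validating', 'failed'],
  validating: ['scanning', 'failed'],
  scanning: ['complete', 'failed'],
  complete: [], // terminal states have no successors
  failed: [],
};

function canTransition(from: Step, to: Step): boolean {
  return transitions[from].includes(to);
}
```

Everything the server pushes in this article reduces to that `formatSSE` shape, and every pipeline reduces to a table like `transitions`.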
The client side
---------------
For state synchronization where the server drives the flow and the client reacts, the frontend simply needs to listen. To make this clean and reusable across different components, we can encapsulate the SSE connection logic into a custom React hook.
Here is the implementation of `useEvent`:
import { useState, useEffect, useRef, useCallback } from 'react';

export type EventStreamStatus = 'idle' | 'connecting' | 'connected' | 'error';

export interface UseEventOptions {
  events?: Record<string, (event: MessageEvent<string>) => void>;
  onMessage?: (event: MessageEvent<string>) => void;
  onOpen?: (event: Event) => void;
  onError?: (event: Event) => void;
  autoReconnect?: boolean;
}

export interface UseEventReturn {
  status: EventStreamStatus;
  error: string | null;
  connect: (url: string) => void;
  disconnect: () => void;
}

export function useEvent(options: UseEventOptions = {}): UseEventReturn {
  const {
    events = {},
    onMessage,
    onOpen,
    onError,
    autoReconnect = true
  } = options;

  const eventSourceRef = useRef<EventSource | null>(null);
  const urlRef = useRef<string | null>(null);
  const [status, setStatus] = useState<EventStreamStatus>('idle');
  const [error, setError] = useState<string | null>(null);

  const eventsRef = useRef(events);
  const onMessageRef = useRef(onMessage);
  const onOpenRef = useRef(onOpen);
  const onErrorRef = useRef(onError);

  useEffect(() => {
    eventsRef.current = events;
    onMessageRef.current = onMessage;
    onOpenRef.current = onOpen;
    onErrorRef.current = onError;
  });

  const disconnect = useCallback(() => {
    if (eventSourceRef.current) {
      eventSourceRef.current.close();
      eventSourceRef.current = null;
    }
    urlRef.current = null;
    setStatus('idle');
    setError(null);
  }, []);

  const connect = useCallback((url: string) => {
    disconnect();
    urlRef.current = url;
    setStatus('connecting');
    setError(null);

    const es = new EventSource(url);
    eventSourceRef.current = es;

    es.onopen = (e) => {
      setStatus('connected');
      onOpenRef.current?.(e);
    };
    es.onmessage = (e) => {
      onMessageRef.current?.(e);
    };
    es.onerror = (e) => {
      setStatus('error');
      setError('Connection lost');
      onErrorRef.current?.(e);
      es.close();
    };

    Object.keys(eventsRef.current).forEach((eventName) => {
      es.addEventListener(eventName, (e: MessageEvent) => {
        const handler = eventsRef.current[eventName];
        handler?.(e);
      });
    });
  }, [disconnect]);

  useEffect(() => {
    if (!autoReconnect) return;

    const handleVisibilityChange = () => {
      if (
        document.visibilityState === 'visible' &&
        urlRef.current &&
        (!eventSourceRef.current || eventSourceRef.current.readyState === EventSource.CLOSED)
      ) {
        console.log('[useEvent] Reconnecting after backgrounding');
        connect(urlRef.current);
      }
    };
    const handleOnline = () => {
      if (
        urlRef.current &&
        (!eventSourceRef.current || eventSourceRef.current.readyState === EventSource.CLOSED)
      ) {
        console.log('[useEvent] Reconnecting after network recovery');
        connect(urlRef.current);
      }
    };

    document.addEventListener('visibilitychange', handleVisibilityChange);
    window.addEventListener('online', handleOnline);
    return () => {
      document.removeEventListener('visibilitychange', handleVisibilityChange);
      window.removeEventListener('online', handleOnline);
    };
  }, [connect, autoReconnect]);

  useEffect(() => {
    return () => {
      eventSourceRef.current?.close();
    };
  }, []);

  return { status, error, connect, disconnect };
}
Two design decisions are worth calling out. The hook stores all callback handlers in refs and updates them via an effect rather than putting them in `connect`'s dependency array. If the `events` object went into that array, every parent re-render would recreate the `EventSource` connection; storing handlers in refs decouples the connection lifecycle from React's render cycle entirely. The other decision is the visibility-based reconnect. iOS Safari and Chrome aggressively kill background network connections to save battery, so without the `document.visibilityState` listener, mobile users frequently come back to a dead stream with no idea why their UI stopped updating. The hook re-establishes the connection the moment they tab back in.
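The identity problem those refs avoid is easy to demonstrate outside React: an inline handlers object is a brand-new reference on every render, and React compares dependencies with `Object.is`, not deep equality.

```typescript
// Simulate what each render does: rebuild the handlers object from scratch
const makeHandlers = () => ({
  'processing-step': (data: string) => console.log(data),
});

const firstRender = makeHandlers();
const secondRender = makeHandlers();

// Structurally identical, but different references. A dependency array
// containing this object would re-fire its effect on every render,
// tearing down and recreating the EventSource each time.
console.log(Object.is(firstRender, secondRender)); // false
```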
The server side
---------------
On the backend, the core responsibility is managing active connections, executing the state machine, and ensuring no events are lost before the client connects.
Here’s the timing issue that will bite you if you don’t handle it:
// Race condition scenario:
await fetch('/api/documents/upload', { method: 'POST' }); // Returns documentId
// Backend starts processing immediately and emits events...
// But frontend hasn't called connect() yet!
connect(`/api/documents/${documentId}/stream`);
The backend starts emitting events the moment the process starts. If the frontend hasn’t connected to the SSE stream yet, those events get lost. The fix is an event queue on the backend: if a client isn’t connected when an event fires, queue it, and when the client connects, replay the queue immediately.
To handle connections and queuing, we build `SSEService`: a singleton that lives in memory and tracks all active SSE connections. Because the instance is stored on `global`, it survives Next.js hot reloads in development and gives you consistent state across route handler invocations without needing Redis.
// lib/sse-service.ts
export type SSEEventName = 'processing-step' | 'processing-error';

export type SSEPayload = {
  'processing-step': {
    step: 'validating' | 'scanning' | 'extracting' | 'thumbnail' | 'complete';
    documentId: string;
    progress: number;
  };
  'processing-error': {
    documentId: string;
    message: string;
  };
};

type Controller = ReadableStreamDefaultController<Uint8Array>;

type QueuedSSEEvent = {
  name: string;
  data: unknown;
};

declare global {
  // eslint-disable-next-line no-var
  var __sseService: SSEService | undefined;
}

class SSEService {
  private controllers = new Map<string, Controller>();
  private queues = new Map<string, QueuedSSEEvent[]>();
  private encoder = new TextEncoder();

  private format(name: string, data: unknown): string {
    return `event: ${name}\ndata: ${JSON.stringify(data)}\n\n`;
  }

  private write(controller: Controller, name: string, data: unknown) {
    try {
      controller.enqueue(this.encoder.encode(this.format(name, data)));
    } catch {
      // Stream already closed
    }
  }

  public register(documentId: string): ReadableStream<Uint8Array> {
    const old = this.controllers.get(documentId);
    if (old) {
      try {
        old.close();
      } catch {
        // ignore
      }
      this.controllers.delete(documentId);
    }

    const stream = new ReadableStream<Uint8Array>({
      start: (controller) => {
        this.controllers.set(documentId, controller);
        // Replay all queued events immediately
        const pending = this.queues.get(documentId) ?? [];
        for (const { name, data } of pending) {
          this.write(controller, name, data);
        }
        this.queues.delete(documentId);
        console.log(`[SSEService] Stream opened: ${documentId}`);
      },
      cancel: () => {
        console.log(`[SSEService] Stream cancelled by client: ${documentId}`);
        this.controllers.delete(documentId);
      },
    });
    return stream;
  }

  public emit<Ev extends SSEEventName>(
    documentId: string,
    name: Ev,
    data: SSEPayload[Ev]
  ) {
    const controller = this.controllers.get(documentId);
    if (controller) {
      this.write(controller, name, data); // Client connected, send immediately
    } else {
      // Client not connected yet, queue the event
      const queue = this.queues.get(documentId) ?? [];
      queue.push({ name, data });
      this.queues.set(documentId, queue);
      console.warn(
        `[SSEService] Client not connected for "${documentId}". ` +
        `Queued "${name}" (queue length: ${queue.length})`
      );
    }

    const isTerminal =
      name === 'processing-step' &&
      ((data as SSEPayload['processing-step']).step === 'complete');
    if (isTerminal) {
      setTimeout(() => this.close(documentId), 300);
    }
  }

  public close(documentId: string) {
    const controller = this.controllers.get(documentId);
    if (controller) {
      try {
        controller.close();
      } catch {
        // already closed
      }
      this.controllers.delete(documentId);
    }
    this.queues.delete(documentId);
    console.log(`[SSEService] Session cleaned up: ${documentId}`);
  }

  public get activeSessionCount(): number {
    return this.controllers.size;
  }
}

export const sseService =
  global.__sseService ?? (global.__sseService = new SSEService());
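The queue-then-replay guarantee is the part worth exercising in isolation. A stripped-down version of the same idea (the names here are illustrative, not part of the service above) makes the ordering property explicit: events emitted before a subscriber exists are delivered, in order, the moment one registers.

```typescript
// Minimal sketch of the queue-then-replay pattern behind SSEService
type Listener = (name: string, data: unknown) => void;

class MiniBroker {
  private listeners = new Map<string, Listener>();
  private queues = new Map<string, Array<{ name: string; data: unknown }>>();

  emit(id: string, name: string, data: unknown) {
    const listener = this.listeners.get(id);
    if (listener) {
      listener(name, data); // connected: deliver immediately
    } else {
      const queue = this.queues.get(id) ?? []; // not connected: buffer
      queue.push({ name, data });
      this.queues.set(id, queue);
    }
  }

  register(id: string, listener: Listener) {
    this.listeners.set(id, listener);
    // Replay everything that arrived before the client connected
    for (const { name, data } of this.queues.get(id) ?? []) {
      listener(name, data);
    }
    this.queues.delete(id);
  }
}
```

Emit twice, then register: both buffered events arrive in emission order, and any emit after registration is delivered live. That is exactly the property that closes the race between the POST response and the SSE connect.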
Bringing it together: a document processing pipeline
----------------------------------------------------
The pattern in action looks like this:
┌─────────────┐ ┌─────────────┐
│ Frontend │ │ Backend │
│ │ │ │
│ 1. POST │──────────────────▶ │ Init flow │
│ │◀───────────────────│ Return ID │
│ │ { sessionId } │ │
│ │ │ │
│ 2. SSE │──────────────────▶ │ Connect │
│ Connect │ │ │
│ │ │ │
│ │◀──event: step──────│ State: A │
│ UI Update │ │ │
│ │◀──event: step──────│ State: B │
│ UI Update │ │ │
│ │◀──event: complete──│ State: Done│
│ Disconnect │ │ │
└─────────────┘ └─────────────┘
The user uploads a file, the server returns a document ID, the client connects to an SSE stream, the server validates, scans, extracts text, and generates a thumbnail. The client updates at each step, and the stream closes on completion.
The route handlers kick off the job and expose the SSE stream:
// app/api/documents/upload/route.ts
import { sseService } from '@/lib/sse-service';

export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('file') as File;
  const documentId = crypto.randomUUID();

  await db.document.create({
    data: {
      id: documentId,
      filename: file.name,
      status: 'pending',
      createdAt: new Date(),
    },
  });

  // Deliberately not awaited: the pipeline runs in the background
  // and reports its progress over SSE
  processDocument(documentId, file);

  return Response.json({ documentId });
}
// app/api/documents/[documentId]/stream/route.ts
import { sseService } from '@/lib/sse-service';

export async function GET(
  req: Request,
  { params }: { params: { documentId: string } }
) {
  const { documentId } = params;
  const stream = sseService.register(documentId);

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
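One production wrinkle the route above leaves out: many proxies and load balancers drop HTTP connections that go quiet for too long. The SSE format allows comment lines (any line starting with `:`), which `EventSource` silently ignores, so a periodic heartbeat keeps intermediaries from timing out an otherwise-idle stream. A sketch of a helper the route could wire into the stream's `start`/`cancel` callbacks; the 15-second interval is an arbitrary choice, not something from the service above:

```typescript
// SSE comment lines (leading ':') are ignored by EventSource but count
// as traffic, keeping proxies from closing an idle connection.
const encoder = new TextEncoder();

function heartbeatChunk(): Uint8Array {
  return encoder.encode(': ping\n\n');
}

function startHeartbeat(
  controller: ReadableStreamDefaultController<Uint8Array>,
  intervalMs = 15_000
): () => void {
  const timer = setInterval(() => {
    try {
      controller.enqueue(heartbeatChunk());
    } catch {
      clearInterval(timer); // stream already closed
    }
  }, intervalMs);
  return () => clearInterval(timer); // call this on close/cancel
}
```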
As the background job progresses, it updates state via `SSEService`:
// Background processing pipeline
async function processDocument(documentId: string, file: File) {
  try {
    sseService.emit(documentId, 'processing-step', {
      step: 'validating',
      documentId,
      progress: 10,
    });
    await validateFileFormat(file);

    sseService.emit(documentId, 'processing-step', {
      step: 'scanning',
      documentId,
      progress: 30,
    });
    await scanForViruses(file);

    sseService.emit(documentId, 'processing-step', {
      step: 'extracting',
      documentId,
      progress: 60,
    });
    const text = await extractText(file);

    sseService.emit(documentId, 'processing-step', {
      step: 'thumbnail',
      documentId,
      progress: 80,
    });
    const thumbnailUrl = await generateThumbnail(file);
    const storageUrl = await uploadToCloud(file);

    await db.document.update({
      where: { id: documentId },
      data: {
        status: 'completed',
        text,
        thumbnailUrl,
        storageUrl,
      },
    });

    sseService.emit(documentId, 'processing-step', {
      step: 'complete',
      documentId,
      progress: 100,
    });
  } catch (error) {
    // `error` is `unknown` in modern TypeScript, so narrow before reading .message
    const message = error instanceof Error ? error.message : 'Processing failed';
    sseService.emit(documentId, 'processing-error', { documentId, message });
    await db.document.update({
      where: { id: documentId },
      data: { status: 'failed' },
    });
  }
}
On the client side, a domain hook consumes `useEvent` and manages local React state:
import { useState } from 'react';
import { useEvent } from './useEvent';

type ProcessingStep =
  | 'idle'
  | 'uploading'
  | 'validating'
  | 'scanning'
  | 'extracting'
  | 'thumbnail'
  | 'complete'
  | 'error';

export function useDocumentProcessing() {
  const [step, setStep] = useState<ProcessingStep>('idle');
  const [progress, setProgress] = useState(0);

  const { connect, disconnect, status, error } = useEvent({
    events: {
      'processing-step': (e) => {
        const data = JSON.parse(e.data);
        setStep(data.step);
        if (data.progress) {
          setProgress(data.progress);
        }
        if (data.step === 'complete') {
          disconnect();
        }
      },
      'processing-error': (e) => {
        const data = JSON.parse(e.data);
        setStep('error');
        console.error('Processing error:', data.message);
      },
    },
    onError: () => {
      setStep('error');
    },
  });

  const uploadDocument = async (file: File) => {
    setStep('uploading');
    const formData = new FormData();
    formData.append('file', file);

    const res = await fetch('/api/documents/upload', {
      method: 'POST',
      body: formData,
    });
    const { documentId } = await res.json();

    connect(`/api/documents/${documentId}/stream`);
  };

  return {
    step,
    progress,
    status,
    error,
    uploadDocument,
    disconnect
  };
}
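The handlers above trust that `e.data` is well-formed JSON matching the server's payload types. When you control the server that is usually fine, but a small guard turns a malformed payload into an explicit `null` instead of an exception thrown inside an event handler. A sketch; the `StepPayload` shape mirrors the `processing-step` payload, and the helper name is illustrative:

```typescript
type StepPayload = {
  step: string;
  documentId: string;
  progress: number;
};

// Parse and shape-check an SSE payload; return null instead of throwing
function parseStepPayload(raw: string): StepPayload | null {
  try {
    const value = JSON.parse(raw) as unknown;
    if (
      typeof value === 'object' && value !== null &&
      typeof (value as StepPayload).step === 'string' &&
      typeof (value as StepPayload).documentId === 'string' &&
      typeof (value as StepPayload).progress === 'number'
    ) {
      return value as StepPayload;
    }
    return null; // valid JSON, wrong shape
  } catch {
    return null; // not JSON at all
  }
}
```

Inside the `'processing-step'` handler this becomes `const data = parseStepPayload(e.data); if (!data) return;`, which keeps a bad event from wedging the UI in a half-updated state.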
The UI component renders without any networking logic bleeding into the presentation layer:
function DocumentUploader() {
  const {
    step,
    progress,
    status,
    error,
    uploadDocument
  } = useDocumentProcessing();

  const handleFileSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0];
    if (file) uploadDocument(file);
  };

  if (step === 'idle') {
    return (
      <div>
        <input
          type="file"
          onChange={handleFileSelect}
          accept=".pdf,.doc,.docx"
        />
        <p>Upload a document to begin processing</p>
      </div>
    );
  }

  if (step === 'uploading') {
    return <LoadingSpinner message="Uploading..." />;
  }

  if (['validating', 'scanning', 'extracting', 'thumbnail'].includes(step)) {
    const messages = {
      validating: 'Validating document format...',
      scanning: 'Scanning for viruses...',
      extracting: 'Extracting text content...',
      thumbnail: 'Generating preview...',
    };
    return (
      <div>
        <ProgressBar value={progress} />
        <p>{messages[step as keyof typeof messages]}</p>
      </div>
    );
  }

  if (step === 'complete') {
    return (
      <SuccessMessage>
        Document processed successfully!
      </SuccessMessage>
    );
  }

  if (step === 'error') {
    return (
      <ErrorMessage>
        {error || 'Processing failed'}
        <button onClick={() => window.location.reload()}>
          Try Again
        </button>
      </ErrorMessage>
    );
  }
}
Conclusion
----------
The separation of concerns here is clean in a way that actually matters for maintainability. The backend owns the state machine: validation logic, processing steps, error handling. The frontend owns the UI: progress bars, loading states, animations. The SSE layer is just a wire between them.
Consider a traditional polling approach with a 1-second interval:
// Polling approach
useEffect(() => {
  const interval = setInterval(async () => {
    const res = await fetch(`/api/status/${sessionId}`);
    const { step } = await res.json();
    setStep(step);
  }, 1000);
  return () => clearInterval(interval);
}, [sessionId]);
That’s 60 requests per minute per user, none of them useful when nothing has changed, with a guaranteed 1-second lag on every update. It scales linearly and punishes your infrastructure. The SSE version does none of that:
// SSE approach
useEvent({
  events: {
    'step-update': (e) => {
      const { step } = JSON.parse(e.data);
      setStep(step);
    }
  }
});
Zero requests when nothing is happening. Updates arrive the instant the server emits them. The difference in user experience and server load is immediately noticeable.
SSE doesn’t fit everything. If you need the client to send a continuous stream of data back to the server in the same connection, use WebSockets. That covers multiplayer games, collaborative text editors, anything where both sides are talking at once. SSE is strictly read-only from the client’s perspective.
This in-memory `SSEService` also has a hard scaling boundary: all of its state lives in a single process. One server can comfortably hold thousands of concurrent SSE connections, but the moment you run multiple instances, or deploy to a serverless platform where the upload handler and the stream handler may land on different workers, `emit` and `register` stop sharing memory, and you'll need Redis pub/sub to broadcast events across instances. For everything else, including backend-driven flows like OAuth, job progress, live notifications, and dashboard feeds, SSE is the right default: less infrastructure, less code, easier to debug.