
Data Fetching & Sync

How should your app talk to the server?

The Problem

Every React developer has written this at some point: a useEffect that fires on mount, fetches some data, and puts it in state. It looks simple. It looks complete.

But there's a hidden problem that only surfaces when the user interacts: race conditions. If a component re-fetches based on some input (a search term, a selected user, a tab), each change fires a new request. Requests don't always resolve in the order they were sent: a slow first request can arrive after a fast second one, overwriting the correct result with stale data.

Try it below. Even user IDs trigger a slow 1200ms request; odd IDs are fast (400ms). Click User 2, then immediately User 1. User 1's response arrives first; then User 2's stale response overwrites it.

Live — click User 2 then User 1 quickly to trigger the race condition

Click User 2 → User 1 quickly. User 2 takes 1200ms, User 1 takes 400ms. User 1 arrives first — but User 2's response overwrites it.

Network log
No requests yet
useEffect fetch — race condition (tsx)
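The demo's source isn't shown here, but the failure mode can be reproduced without React. A minimal TypeScript sketch, with illustrative names (fetchUser, state) and the demo's delays scaled down:

```typescript
// Framework-free sketch of the race condition. Even IDs are slow, odd IDs
// are fast, mirroring the demo; state stands in for React useState.
type User = { id: number; name: string };

let state: User | null = null;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function fetchUser(id: number): Promise<User> {
  await sleep(id % 2 === 0 ? 300 : 100); // even = slow, odd = fast
  return { id, name: `User ${id}` };
}

// Naive "effect": whoever resolves last wins, regardless of request order.
function loadUser(id: number): Promise<void> {
  return fetchUser(id).then((user) => {
    state = user; // no staleness check: this is the bug
  });
}

// Click User 2, then immediately User 1:
const race = Promise.all([loadUser(2), loadUser(1)]);
```

After both resolve, state holds User 2 even though User 1 was requested last: last response wins, not last request.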

The Immediate Fix

The fix is a cancellation flag: a boolean scoped to each effect run. When the effect re-runs (because userId changed), the cleanup function from the previous run sets the flag to true. When the stale response arrives, it checks the flag and ignores the result.

This is the same idea behind AbortController, except that instead of cancelling the network request itself (which also saves bandwidth), you're just telling the response handler to discard its result.
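The pattern can be sketched outside React: loadUser below plays the role of one effect run and returns a cleanup function, the way useEffect cleanups work. All names are illustrative stand-ins, with delays scaled down from the demo.

```typescript
// Cancellation-flag sketch: each "effect run" gets its own boolean, and the
// cleanup from the previous run flips it before the next run starts.
type User = { id: number; name: string };

let state: User | null = null;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function fetchUser(id: number): Promise<User> {
  await sleep(id % 2 === 0 ? 300 : 100); // even IDs slow, odd IDs fast
  return { id, name: `User ${id}` };
}

function loadUser(id: number): () => void {
  let cancelled = false; // scoped to this run

  void fetchUser(id).then((user) => {
    if (cancelled) return; // stale response: discard it
    state = user;
  });

  return () => {
    cancelled = true; // cleanup: runs when the input changes
  };
}

// Simulate clicking User 2 then User 1: cleanup for run #1 fires first.
const cleanup = loadUser(2);
cleanup(); // userId changed, so the previous run is flagged stale
loadUser(1);
```

Now User 2's slow response still arrives last, but the flag causes it to be thrown away, and state keeps User 1.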

Live — same test: stale responses are now discarded (watch the network log)

Same test: click User 2 → User 1 quickly. Stale responses are now ignored — the correct user always wins.

Network log
No requests yet
Cancellation flag — stale responses ignored (tsx)

The fix is necessary but not sufficient. Race conditions are just one of many problems with raw useEffect fetching; there's also no caching (every mount re-fetches), no request deduplication (two components fetching the same user means two API calls), no background refresh, and no auto-retry. The cancellation flag fixes the symptom; React Query fixes the whole class of problems.
AbortController vs cancelled flag, and what React Query handles for you →

React Query: Server State as a First-Class Citizen

The core insight behind React Query (and SWR) is that server state is fundamentally different from client state. Client state lives in your app; it's synchronous and always up-to-date. Server state lives remotely; it can change without your knowledge, it needs to be fetched asynchronously, and it can become stale.

React Query treats each piece of server state as a cache entry keyed by a queryKey. Every component that calls useQuery with the same key shares that cache entry, so duplicate components, parallel renders, and concurrent requests all collapse into a single network call.
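A toy sketch of that idea (not React Query's actual internals): a cache keyed by the serialized queryKey, plus an in-flight map so concurrent callers share one request.

```typescript
// Keyed caching + in-flight deduplication, the core of what useQuery does
// with a queryKey. Illustrative only; React Query adds staleness, GC, retries.
type QueryKey = readonly unknown[];

const cache = new Map<string, unknown>();
const inFlight = new Map<string, Promise<unknown>>();

async function query<T>(key: QueryKey, fetchFn: () => Promise<T>): Promise<T> {
  const k = JSON.stringify(key);

  if (cache.has(k)) return cache.get(k) as T;                 // cache hit: no network
  if (inFlight.has(k)) return inFlight.get(k) as Promise<T>;  // dedup: share the request

  const promise = fetchFn().then((data) => {
    cache.set(k, data);
    inFlight.delete(k);
    return data;
  });
  inFlight.set(k, promise);
  return promise;
}

// Two components mounting at once with the same key → one network call.
let networkCalls = 0;
const fetchUser = async (id: string) => {
  networkCalls++;
  return { id, name: `User ${id}` };
};

const result = Promise.all([
  query(["users", "1"], () => fetchUser("1")),
  query(["users", "1"], () => fetchUser("1")),
]);
```

Both callers get the same data, and the network was hit exactly once.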

Live — green dot = cached. Switch users and come back to see instant cache hits.

Switch users, then come back. Previously visited users load instantly from cache. Cache expires after 5 seconds — stale data shows immediately while background refresh runs.

Cache log · 0 network requests total
No navigation yet
React Query — caching, deduplication, background refresh (tsx)

When React Query Isn't the Right Tool: RSC Fetch

React Query solves client-side data fetching. But if the data is needed for the initial render (especially for SEO or performance), fetching on the server is better. React Server Components make this trivial: just await in an async component.

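For reference, a sketch of the shape of an RSC fetch. To keep it runnable outside a React/Next.js toolchain, the component returns a string instead of JSX, and getUser is a hypothetical stub:

```typescript
// Sketch of the RSC pattern: an async component awaits data directly,
// with no useEffect and no client-side loading state.
type User = { id: string; name: string };

async function getUser(id: string): Promise<User> {
  // Real version: const res = await fetch(`https://api.example.com/users/${id}`);
  return { id, name: `User ${id}` }; // stubbed so the sketch runs anywhere
}

// In Next.js this would be an async Server Component returning JSX; the
// await happens on the server before anything is sent to the browser.
async function UserProfile({ userId }: { userId: string }): Promise<string> {
  const user = await getUser(userId);
  return `<h1>${user.name}</h1>`; // JSX in a real component
}

const rendered = UserProfile({ userId: "1" });
```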

Rule of thumb: If the data is needed before the page renders, use RSC fetch. If the data is needed after the user does something (click, type, navigate within an SPA), use React Query. Many apps use both.

The Framework: Four Kinds of Data

Every data fetching decision starts with two questions: When does the data need to be available? And how fast does it change?

Client async: "data needed after user interaction." Tools: React Query or SWR. Examples: search results, user-selected content, filtered lists.

Server state: "data needed for initial render." Tools: RSC fetch, getServerSideProps, or static generation. Examples: blog posts, product pages, dashboard initial load.

Mutations: "writing data, not reading it." Tools: useMutation (React Query) or Server Actions. Examples: form submissions, button clicks, drag and drop.

Real-time: "data changes while the user is watching." Tools: polling (React Query refetchInterval) or WebSocket. Examples: chat, live scores, collaborative cursors, notifications.

Most production apps use all four. A dashboard page might server-fetch the initial data (RSC), use React Query for user-driven filters (client async), Server Actions for form saves (mutations), and polling for live status indicators (real-time). The mistake is applying one tool to all four.

Decision Matrix

A quick reference for choosing the right fetching strategy. In practice, a single page often uses two or three of these.

| Pattern | Caching | Deduplication | Real-time | Use when | Avoid when |
|---|---|---|---|---|---|
| useEffect + fetch | None | No | No | Prototypes, one-off fetches that never change | Any production UI where userId can change |
| Custom useFetch hook | None | No | No | Shared fetch logic, race conditions matter, no library budget | Multiple components need the same data |
| React Query / SWR | Automatic | Yes | Via polling | Most client-side data fetching in production apps | Sub-second real-time updates (WebSocket is better) |
| RSC fetch (Next.js) | Per-request / ISR | Yes (same request) | No | Data needed for initial render, content-heavy pages, SEO | User-specific data that changes frequently on interaction |
| WebSocket | No | N/A | True push | Chat, collaboration, live dashboards, sub-second updates | Data that changes every 30+ seconds (polling is simpler) |

Progressive Complexity

The same feature (fetching a user profile) built five ways. Each step shows exactly what problem the next tool solves and when you actually need to reach for it.

Example 1: useEffect + fetch

Naive

Raw async state in a component

Raw useEffect + useState: the baseline every React developer starts with. Works for simple, one-off fetches with no caching, deduplication, or race condition protection.


Why this works

Works when:
• Single component, single fetch
• userId never changes after mount
• No other component needs the same data
• Simple, throwaway UI

Simple to understand, with zero dependencies.

When this breaks

Switch userId quickly → race condition: the slower first request resolves after the second, overwriting the correct result with stale data. Also: every mount re-fetches, even if the data is seconds old. Two <UserProfile userId="1"> components on the same page make two identical API calls.

Production Patterns

The dashboard that had three fetch strategies

A product analytics dashboard with a sidebar of historical charts (rarely changes), a main metrics panel (changes hourly), and a live activity feed (updates every few seconds).

Historical charts: RSC fetch with revalidate: 3600. No client JS needed; data is stable, SEO matters, fast initial load.
Metrics panel: React Query with staleTime: 5 * 60 * 1000. Users filter by date range interactively; client state drives the queryKey, caching prevents re-fetching the same range twice.
Activity feed: React Query with refetchInterval: 10000. Polling was sufficient; real-time to the second wasn't a requirement.
What I'd do differently: add refetchIntervalInBackground: false earlier; we were polling even when the tab was hidden, which was unnecessary load.
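What refetchIntervalInBackground: false buys you can be sketched without React Query: a poller that skips ticks while the tab is hidden. Everything here is illustrative; documentHidden stands in for document.visibilityState.

```typescript
// Polling that pauses in the background, the behavior the dashboard needed.
let documentHidden = false; // stand-in for document.visibilityState === "hidden"
let fetchCount = 0;

async function fetchActivityFeed(): Promise<void> {
  fetchCount++; // real version would hit the activity endpoint
}

function startPolling(intervalMs: number): () => void {
  const timer = setInterval(() => {
    if (documentHidden) return; // refetchIntervalInBackground: false
    void fetchActivityFeed();
  }, intervalMs);
  return () => clearInterval(timer);
}

// Simulate: poll every 50ms, hide the tab halfway through.
const stop = startPolling(50);
const done = (async () => {
  await new Promise((r) => setTimeout(r, 120)); // ~2 visible ticks fire
  documentHidden = true;
  await new Promise((r) => setTimeout(r, 120)); // these ticks are skipped
  stop();
})();
```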

The search that taught me about staleTime

A people search feature. Users would type a name, get results, click into a profile, press back, and the search results were gone.

Problem: Default staleTime: 0 means all queries are immediately stale. On back-navigation, React Query refetches before rendering the cached results, causing a flash of empty state.
Fix: Set staleTime: 30_000 for search results. The cached result renders instantly on back-navigation, and a background refetch runs silently if data is older than 30 seconds. Users never see an empty list.
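The stale-while-revalidate behavior behind that fix can be sketched in plain TypeScript (illustrative, not React Query's implementation): cached data is returned immediately, and a background refetch fires only when the entry is older than staleTime.

```typescript
// Stale-while-revalidate with a staleTime: the cache always answers
// instantly, so back-navigation never flashes an empty list.
type Entry<T> = { data: T; fetchedAt: number };

const cache = new Map<string, Entry<unknown>>();
let backgroundRefetches = 0;

async function queryWithStaleTime<T>(
  key: string,
  fetchFn: () => Promise<T>,
  staleTime: number,
): Promise<T> {
  const entry = cache.get(key) as Entry<T> | undefined;
  const now = Date.now();

  if (entry) {
    if (now - entry.fetchedAt > staleTime) {
      backgroundRefetches++; // stale: refresh silently in the background
      void fetchFn().then((data) => cache.set(key, { data, fetchedAt: Date.now() }));
    }
    return entry.data; // always return cached data instantly
  }

  const data = await fetchFn();
  cache.set(key, { data, fetchedAt: now });
  return data;
}

const search = (q: string) => async () => [`result for ${q}`];

// First visit fetches; back-navigation within staleTime is a pure cache hit.
const run = (async () => {
  await queryWithStaleTime("search:alice", search("alice"), 30_000);
  return queryWithStaleTime("search:alice", search("alice"), 30_000);
})();
```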

Inheriting a codebase with useEffect everywhere

You've joined a team that fetches everything in useEffect. The instinct is to propose React Query. The right instinct is to audit first: search for useEffect + fetch to measure scope, then find the screens where users complain about stale data; those are the migration entry points, not the files with the messiest code.

Migrate one query at a time. useQuery wraps the same fetch function; the UI doesn't change, only the plumbing does. Old useEffect fetches stay until they become a visible problem; no migration sprint, no feature freeze. The migration is done when new code stops using useEffect for data fetching, not when every old instance has been removed.


A Real Rollout

What it actually looks like to introduce a shared cache layer into a team that already has opinions, with a product that can't stop shipping.

Context

Dashboard-heavy B2B app, eight engineers, three teams each owning different panels. Each team had built their own fetch logic: custom hooks, loading state, error state, cache invalidation after mutations, all manual, all slightly different. The same endpoint was being called independently from four places on the same screen.

The problem

Cache inconsistency. Different panels on the same screen showed different values for the same metric because each team invalidated their local state independently after mutations, or didn't at all. Support tickets blamed “the dashboard showing wrong numbers.” The root cause wasn't the data; it was that four independent caches each had a slightly different view of it. The business framing: every support ticket about stale data required an engineer to investigate and reassure the customer. At scale, that was becoming a real cost.

The call

Proposed React Query as a shared cache layer, not a rewrite, not a migration sprint. Added a single QueryClient at the app root, migrated one panel as a proof of concept, and let other teams adopt at their own pace. Skipped optimistic updates in the first pass, adding them only to the mutations that generated the most support tickets. The call I'd make differently: I should have standardized query key naming conventions earlier. Two teams used different key shapes for the same endpoint, which meant the cache wasn't being shared even after adoption.

How I ran it

The hardest part was getting engineers who'd been managing isLoading state manually for two years to stop. The pitch that landed: “with useQuery, the loading/error/refetch states you wrote manually are now 3 lines, and they're correct.” One team adopted immediately. Another needed to see it survive a production incident first; stale data was auto-refreshed on tab focus, with no code change. After that, adoption was pull, not push. I wrote a shared queryKeys.ts file to standardize key shapes across teams; that's what actually made the shared cache work.
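A queryKeys module like the one described might look like this. The factory names are hypothetical; in a real queryKeys.ts these would be exported.

```typescript
// Shared key factories: the cache is only actually shared if every caller
// builds keys the same way, so key construction lives in one place.
const queryKeys = {
  users: {
    all: ["users"] as const,
    detail: (id: string) => ["users", id] as const,
  },
  metrics: {
    byRange: (from: string, to: string) => ["metrics", { from, to }] as const,
  },
};

// Team A and Team B now produce identical keys for the same endpoint:
const a = queryKeys.users.detail("42");
const b = queryKeys.users.detail("42");
```

With a shared module, invalidation becomes uniform too: queryClient.invalidateQueries({ queryKey: queryKeys.users.all }) touches every user query, no matter which team created it.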

The outcome

Cache inconsistency tickets dropped significantly once panels shared the same query key. Onboarding a new dashboard panel went from “build fetch logic, loading state, error state, invalidation logic” to useQuery(queryKey, fetchFn). We never finished migrating every old useEffect. We didn't need to; the problem was solved at the boundary: new panels used React Query and shared the cache, which is where the inconsistency lived.

Common Mistakes & Hot Takes

Fetching in useEffect in 2025

This is now an anti-pattern for production. Not because useEffect is bad (it's fine) but because it doesn't handle caching, deduplication, or background refresh. You're reimplementing React Query badly. If you need client-side data fetching, use React Query or SWR. They're not heavy dependencies; they solve hard problems so you don't have to.

Putting server state in Zustand/Redux

I've seen teams reach for Zustand to store API responses, then manually invalidate it on mutations. This is React Query's entire job. Redux/Zustand is for client state (UI state, user preferences, form drafts). Server state has different semantics (staleness, revalidation, deduplication) that Zustand doesn't model.

Over-using optimistic updates

Optimistic updates are excellent for low-stakes reversible actions (like, follow, reaction). They feel wrong for high-stakes actions (payment, delete, publish). If the rollback is jarring (imagine a "Delete" button that appears to work and then un-deletes), a loading spinner is the better UX. Not every mutation needs to be optimistic.

Adding WebSockets before you need them

Polling is simpler, reliable, and works everywhere. For most "real-time" requirements, polling every 10–30 seconds is genuinely good enough. I've shipped real-time notification systems on polling that users never noticed weren't true push. Add WebSockets when polling becomes visibly inadequate, usually when you need sub-5-second updates or bidirectional communication.

Building a normalized cache on top of React Query (for GraphQL)

I've seen teams using GraphQL with React Query build a custom normalized entity store (often with Jotai or Zustand) to solve the "same entity in multiple queries" consistency problem. Every query response gets piped into the store; components read from the store instead of React Query directly. This creates two sources of truth: the React Query cache and the store can drift apart, and React Query's staleTime and background refresh work on the cache, not the atoms. You've rebuilt Apollo Client, badly. If cross-query consistency is a real problem, use a GraphQL client that solves it correctly: Apollo Client or URQL →