What is your data/state strategy across views with offline support?

Define a data/state strategy to avoid stale caches, races, and error waterfalls with Suspense.
Build a resilient data/state strategy using React Query + Redux, with offline support, conflict control, and clean Suspense.

Answer

My data/state strategy splits concerns: Redux holds app/UI state (flags, wizard steps, auth), while React Query manages server data with SWR, cache keys, and background revalidation. I prevent race conditions with request dedupe and cancelation tied to query scope; stop stale caches with TTLs, ETags, and invalidation of precise tags; and avoid error waterfalls by placing Suspense and error boundaries per route. Offline support uses persisted queries, mutation queues, and idempotent APIs.

Long Answer

A durable data/state strategy treats the server as the source of truth, the client as a smart cache, and the UI as a controlled façade. I use React Query for remote data and Redux (or Zustand/NgRx/Pinia analogs) for application and UI state. This separation prevents accidental coupling, makes cache lifetimes explicit, and keeps views fast and predictable—even with Suspense and offline support.

1) Split the world: UI vs remote
UI state (modals, feature flags, form draft, route params) lives in Redux. Remote state (lists, entities) lives in React Query. Redux never mirrors server entities; instead, components subscribe to query results. This avoids double sources of truth and stale duplication. In practice, each screen has: a query key, a selector for derived UI bits, and mutations that invalidate specific cache tags.
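A minimal sketch of that split, assuming Redux Toolkit and TanStack Query v5; the `fetchProjects` endpoint and filter shape are illustrative, not a real API:

```tsx
// UI state in Redux Toolkit, server data in TanStack Query (v5 API assumed).
import { createSlice, configureStore, PayloadAction } from "@reduxjs/toolkit";
import { useQuery } from "@tanstack/react-query";
import { useSelector } from "react-redux";

type Filters = { status: "active" | "archived" };

// UI state: filters and modal flags only -- never server entities.
const uiSlice = createSlice({
  name: "ui",
  initialState: { filters: { status: "active" } as Filters, createOpen: false },
  reducers: {
    setFilters: (s, a: PayloadAction<Filters>) => { s.filters = a.payload; },
    toggleCreate: (s) => { s.createOpen = !s.createOpen; },
  },
});
export const store = configureStore({ reducer: { ui: uiSlice.reducer } });
type RootState = ReturnType<typeof store.getState>;

// Assumed fetcher -- swap in your real API client.
async function fetchProjects(orgId: string, filters: Filters) {
  const res = await fetch(`/api/orgs/${orgId}/projects?status=${filters.status}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json() as Promise<{ id: string; name: string }[]>;
}

// The component subscribes to the query; Redux only supplies the UI inputs.
export function ProjectList({ orgId }: { orgId: string }) {
  const filters = useSelector((s: RootState) => s.ui.filters);
  const { data = [], isPending } = useQuery({
    queryKey: ["projects", orgId, filters], // key includes every input
    queryFn: () => fetchProjects(orgId, filters),
  });
  if (isPending) return <p>Loading…</p>;
  return <ul>{data.map((p) => <li key={p.id}>{p.name}</li>)}</ul>;
}
```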

2) Reads with SWR + normalization where needed
React Query gives stale-while-revalidate, request deduping, and cache scoping. I choose cache keys like ['projects', orgId, filters] so invalidations are surgical. For complex trees, I keep the server response shape but normalize critical identity maps in memoized selectors to avoid re-render storms. Preloading via prefetchQuery on route transitions makes Suspense show spinners for milliseconds, not seconds.
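A sketch of the key factory and prefetch idea, again assuming TanStack Query v5; `projectKeys`, `prefetchProjects`, and the endpoint path are names invented for illustration:

```ts
// Surgical keys plus route-level prefetch (TanStack Query v5 API assumed).
import { QueryClient } from "@tanstack/react-query";

const queryClient = new QueryClient({
  defaultOptions: { queries: { staleTime: 30_000 } }, // app-wide SWR window
});

// Key factory keeps invalidation surgical: ["projects"] hits everything,
// ["projects", orgId] hits one org, the full key hits one filtered list.
export const projectKeys = {
  all: ["projects"] as const,
  org: (orgId: string) => ["projects", orgId] as const,
  list: (orgId: string, filters: { status: string }) =>
    ["projects", orgId, filters] as const,
};

// Warm the cache on link hover / route transition so Suspense resolves almost instantly.
export function prefetchProjects(orgId: string, filters: { status: string }) {
  return queryClient.prefetchQuery({
    queryKey: projectKeys.list(orgId, filters),
    queryFn: () =>
      fetch(`/api/orgs/${orgId}/projects?status=${filters.status}`).then((r) => r.json()),
  });
}
```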

3) Writes: optimistic where safe, pessimistic when risky
For additive, low-risk ops (rename item, toggle flag), I use optimistic updates with onMutate/onError/onSettled to roll back on failure. For destructive or audited operations, I go pessimistic (wait for server), sometimes showing a lightweight progress UI. Every mutation carries a clientId so the server can be idempotent; that kills retry duplication during flaky networks and powers offline support.
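A sketch of an optimistic rename with rollback, assuming TanStack Query v5, a flat `['projects', orgId]` list key for brevity, and a server that honors an `Idempotency-Key` header (the header name is an assumption):

```ts
// Optimistic rename with rollback; the mutation carries a clientId so the
// server can deduplicate retries during flaky networks.
import { useMutation, useQueryClient } from "@tanstack/react-query";

type Project = { id: string; name: string };

export function useRenameProject(orgId: string) {
  const qc = useQueryClient();
  return useMutation({
    mutationFn: ({ id, name, clientId }: { id: string; name: string; clientId: string }) =>
      fetch(`/api/projects/${id}`, {
        method: "PATCH",
        headers: { "Content-Type": "application/json", "Idempotency-Key": clientId },
        body: JSON.stringify({ name }),
      }).then((r) => { if (!r.ok) throw new Error(`HTTP ${r.status}`); return r.json(); }),

    onMutate: async ({ id, name }) => {
      const key = ["projects", orgId];
      await qc.cancelQueries({ queryKey: key });        // avoid racing refetches
      const previous = qc.getQueryData<Project[]>(key); // snapshot for rollback
      qc.setQueryData<Project[]>(key, (old = []) =>
        old.map((p) => (p.id === id ? { ...p, name } : p)));
      return { previous, key };
    },
    onError: (_err, _vars, ctx) => {
      if (ctx) qc.setQueryData(ctx.key, ctx.previous);      // roll back on failure
    },
    onSettled: (_data, _err, _vars, ctx) => {
      if (ctx) qc.invalidateQueries({ queryKey: ctx.key }); // converge with server
    },
  });
}
```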

4) Preventing race conditions
Races appear when a slow response lands after a newer one. I avoid them by (a) keying queries by all inputs, (b) canceling in-flight queries on param changes via queryClient.cancelQueries, (c) using AbortController in fetchers, and (d) trusting React Query’s internal dedupe/stale logic. For mutation races, I serialize sensitive flows (enqueue or gate on version checks) and let the server enforce ETag/If-Match or rowVersion guards.
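A small sketch of the cancellation wiring, assuming TanStack Query v5, which passes an `AbortSignal` into the query function; the endpoint path is illustrative:

```ts
// Race control: forward the query's AbortSignal so superseded requests are
// actually cancelled, and drop in-flight work when the org changes.
import { QueryClient, useQuery } from "@tanstack/react-query";

const queryClient = new QueryClient();

export function useProjects(orgId: string, filters: { status: string }) {
  return useQuery({
    queryKey: ["projects", orgId, filters], // every input is part of the key
    queryFn: ({ signal }) =>                // signal aborts when the query is cancelled
      fetch(`/api/orgs/${orgId}/projects?status=${filters.status}`, { signal })
        .then((r) => r.json()),
  });
}

// Call on org switch so slow, stale responses never land in the new view.
export function onOrgSwitch() {
  return queryClient.cancelQueries({ queryKey: ["projects"] });
}
```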

5) Eliminating stale caches
A cache is only useful if it expires smartly. I tune staleTime per dataset (e.g., 30s for dashboards, 0 for critical forms), use background revalidation on focus/reconnect, and invalidate by tags after writes—never “nuke the world.” Conditional GETs with ETag reduce payloads and ensure cache entries reflect reality. Periodic “heartbeat” queries keep hot views warm without flooding the pipe.
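A sketch of how those freshness rules might be expressed, with illustrative numbers and key names:

```ts
// Per-dataset freshness plus surgical invalidation (values are illustrative).
import { QueryClient } from "@tanstack/react-query";

export const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 30_000,          // dashboards tolerate ~30s of staleness
      refetchOnWindowFocus: true, // background revalidate on focus
      refetchOnReconnect: true,   // ...and when connectivity returns
    },
  },
});

// Critical forms opt out of the SWR window entirely.
export const billingQueryOptions = {
  queryKey: ["account", "billing"] as const,
  staleTime: 0,
};

// After a write, invalidate only the affected org's project queries --
// never an unfiltered invalidateQueries() that "nukes the world".
export function afterProjectWrite(orgId: string) {
  return queryClient.invalidateQueries({ queryKey: ["projects", orgId] });
}
```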

6) Blocking error waterfalls
The “waterfall of doom” happens when a parent throws and wipes children. I place Suspense and error boundaries per route/section (shell → page → widget). Data loaders prefetch on navigation; skeletons are local to widgets. If a child query fails, its boundary shows a retry affordance while siblings stay alive. For multi-query pages, I render partial results and surface non-blocking toasts rather than crash the page.
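A sketch of per-widget boundaries, assuming the react-error-boundary package and suspense-enabled queries; the widget components here are placeholders:

```tsx
// Boundaries per widget, not one global one: a failing widget shows a local
// retry while its siblings keep rendering.
import { Suspense, type ReactNode } from "react";
import { ErrorBoundary } from "react-error-boundary";
import { QueryErrorResetBoundary } from "@tanstack/react-query";

function Widget({ title, children }: { title: string; children: ReactNode }) {
  return (
    <QueryErrorResetBoundary>
      {({ reset }) => (
        <ErrorBoundary
          onReset={reset}
          fallbackRender={({ resetErrorBoundary }) => (
            <div>
              {title} failed. <button onClick={resetErrorBoundary}>Retry</button>
            </div>
          )}
        >
          <Suspense fallback={<p>Loading {title}…</p>}>{children}</Suspense>
        </ErrorBoundary>
      )}
    </QueryErrorResetBoundary>
  );
}

// Placeholder widgets -- in the real app these use suspense-enabled queries.
const ProjectListWidget = () => <ul>…</ul>;
const DetailsWidget = () => <section>…</section>;
const ActivityWidget = () => <ol>…</ol>;

export function ProjectPage() {
  return (
    <>
      <Widget title="Projects"><ProjectListWidget /></Widget>
      <Widget title="Details"><DetailsWidget /></Widget>
      <Widget title="Activity"><ActivityWidget /></Widget>
    </>
  );
}
```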

7) Real-time and convergence
For live data, I consume WebSocket/SSE/GraphQL subscriptions and update caches via queryClient.setQueryData. Incoming events are idempotent upserts keyed by id + version; bursts are batched with unstable_batchedUpdates to avoid render thrash. On reconnect, I replay missed mutations and trigger selective invalidations to reconcile.
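A simplified sketch of routing deltas through the cache; the event shape, WebSocket endpoint, and the 100ms time-based coalescing are assumptions standing in for the batching mechanism of your choice:

```ts
// Route real-time deltas into the query cache instead of into components.
import { QueryClient } from "@tanstack/react-query";

type Project = { id: string; version: number; name: string };
type Delta = { id: string; version: number; patch: Partial<Project> };

const queryClient = new QueryClient();
let pending: Delta[] = [];

function applyBatch(orgId: string) {
  const batch = pending;
  pending = [];
  queryClient.setQueryData<Project[]>(["projects", orgId], (old = []) =>
    batch.reduce((list, delta) => {
      const existing = list.find((p) => p.id === delta.id);
      // Idempotent upsert: only apply if the event is newer than what we hold.
      if (existing && existing.version >= delta.version) return list;
      return existing
        ? list.map((p) =>
            p.id === delta.id ? { ...p, ...delta.patch, version: delta.version } : p)
        : [...list, { id: delta.id, name: "", ...delta.patch, version: delta.version }];
    }, old),
  );
}

export function subscribeToProjects(orgId: string) {
  const ws = new WebSocket(`wss://example.com/orgs/${orgId}/events`); // assumed endpoint
  ws.onmessage = (e) => {
    pending.push(JSON.parse(e.data) as Delta);
    // Coalesce bursts: flush at most every 100ms instead of per message.
    if (pending.length === 1) setTimeout(() => applyBatch(orgId), 100);
  };
  return () => ws.close();
}
```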

8) Offline support
I persist the query cache (IndexedDB) and a mutation queue. Mutations carry deterministic clientId and retry policies with exponential backoff. The UI shows per-item pending badges and allows cancel. On reconnect, I replay in order; the server deduplicates by clientId and returns canonical entities so caches converge. Sensitive data is minimized or encrypted at rest.
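A rough sketch of persistence plus a hand-rolled queue, assuming @tanstack/react-query-persist-client and idb-keyval; the queue, header name, and storage keys are illustrative, not a production design (no backoff or encryption shown):

```ts
// Persist the query cache to IndexedDB and replay queued mutations on reconnect.
import { QueryClient } from "@tanstack/react-query";
import { persistQueryClient } from "@tanstack/react-query-persist-client";
import { get, set, del } from "idb-keyval";

export const queryClient = new QueryClient({
  defaultOptions: { queries: { gcTime: 24 * 60 * 60 * 1000 } }, // keep entries long enough to persist
});

// Minimal IndexedDB persister built on idb-keyval.
persistQueryClient({
  queryClient,
  persister: {
    persistClient: (client) => set("rq-cache", client),
    restoreClient: () => get("rq-cache"),
    removeClient: () => del("rq-cache"),
  },
});

// Illustrative offline mutation queue: each entry carries a deterministic
// clientId so the server can deduplicate when the queue is replayed.
type QueuedMutation = { clientId: string; url: string; body: unknown };

export async function enqueueMutation(m: QueuedMutation) {
  const queue = (await get<QueuedMutation[]>("mutation-queue")) ?? [];
  await set("mutation-queue", [...queue, m]);
}

export async function replayQueue() {
  let queue = (await get<QueuedMutation[]>("mutation-queue")) ?? [];
  for (const m of queue) {
    await fetch(m.url, {
      method: "POST",
      headers: { "Content-Type": "application/json", "Idempotency-Key": m.clientId },
      body: JSON.stringify(m.body),
    });
    queue = queue.filter((q) => q.clientId !== m.clientId); // drop once confirmed
    await set("mutation-queue", queue);
  }
}
```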

9) Form drafts and conflicts
Forms write to Redux drafts; submitting triggers a mutation. Conflicts are caught with If-Match. On 409/412, I fetch the latest server version and present a merge UI that highlights field-level differences. After resolving, I retry with an updated precondition header. This avoids silent last-write-wins.
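A sketch of the precondition flow; the endpoint shape and the `openMergeDialog` helper are hypothetical:

```ts
// Precondition-guarded save: If-Match on the way out, merge-and-retry on 409/412.
type Draft = { name: string; description: string };

export async function saveProject(id: string, draft: Draft, etag: string): Promise<unknown> {
  const res = await fetch(`/api/projects/${id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json", "If-Match": etag },
    body: JSON.stringify(draft),
  });

  if (res.status === 412 || res.status === 409) {
    // Someone else wrote first: fetch the latest version and its new ETag,
    // let the user resolve field-level differences, then retry with the
    // updated precondition instead of silently overwriting.
    const latest = await fetch(`/api/projects/${id}`);
    const serverEtag = latest.headers.get("ETag") ?? "";
    const serverVersion = (await latest.json()) as Draft;
    const merged = await openMergeDialog(draft, serverVersion); // assumed UI helper
    return saveProject(id, merged, serverEtag);
  }

  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Assumed helper: shows a field-level diff and resolves with the user's merge.
declare function openMergeDialog(local: Draft, server: Draft): Promise<Draft>;
```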

10) Testing and ops
I unit-test reducers/selectors and query hooks; integration tests assert optimistic rollback and conflict flows. Telemetry logs mutation lifecycle events (queued, sent, confirmed) and cache invalidations. Feature flags gate risky optimizations so I can roll them back fast.
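As one example of the kind of test I mean, a Vitest sketch that exercises the snapshot-and-restore logic behind optimistic rollback, simulated directly against a QueryClient rather than through the hook:

```ts
// Rollback behavior: snapshot before the optimistic write, restore on failure.
import { describe, it, expect } from "vitest";
import { QueryClient } from "@tanstack/react-query";

describe("optimistic rollback", () => {
  it("restores the snapshot when the mutation fails", () => {
    const qc = new QueryClient();
    const key = ["projects", "org-1"];
    qc.setQueryData(key, [{ id: "p1", name: "Old name" }]);

    // Simulate onMutate: snapshot, then apply the optimistic write.
    const previous = qc.getQueryData(key);
    qc.setQueryData(key, [{ id: "p1", name: "New name" }]);

    // Simulate onError: restore the snapshot.
    qc.setQueryData(key, previous);

    expect(qc.getQueryData(key)).toEqual([{ id: "p1", name: "Old name" }]);
  });
});
```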

With this data/state strategy—React Query for data, Redux for UI, careful keys, precise invalidation, and disciplined boundaries—I keep apps snappy, race-free, and resilient across Suspense, real-time, and offline support.

Table

Topic | Tactic | Client Mechanism | Server Assist | Outcome
Ownership split | UI vs remote | Redux for UI, React Query for data | N/A | Single source of truth
Cache freshness | SWR + tags | staleTime, focus/reconnect revalidate, tag invalidation | ETag / Last-Modified | Fresh views, fewer refetches
Race control | Cancel & keying | AbortController, cancelQueries, input-scoped keys | Version checks | Newer wins, no ghosts
Writes | Optimistic/pessimistic | onMutate rollback, gated deletes | Idempotent mutations | Snappy UX, safe commits
Errors | Local boundaries | Section Suspense + error boundaries | Consistent error shape | No error waterfalls
Real-time | Batched upserts | setQueryData, batched updates | Delta events + versions | Smooth live updates
Offline | Persist + queue | IndexedDB cache, retry/backoff, clientId | Dedup by clientId | Reliable offline support
Forms/conflicts | Draft + merge | Redux drafts, precondition headers | If-Match / ETag | Predictable resolves

Common Mistakes

  • Mirroring server entities in both Redux and React Query, creating dueling truths.
  • Global “refetch all” after each write, hammering APIs and still showing stale caches.
  • Over-optimistic deletes that roll back poorly.
  • Query keys that ignore filters, causing race conditions when params change.
  • One giant Suspense at the layout level, turning small hiccups into error waterfalls.
  • Piping WebSocket events straight into components instead of via the cache, producing flicker.
  • No idempotency on mutations, so offline retries duplicate orders.
  • Persisting sensitive caches without encryption or purge on logout.
  • Ignoring ETags/versions, inviting silent last-write-wins.

Sample Answers (Junior / Mid / Senior)

Junior:
“I keep UI state in Redux and server data in React Query. I use SWR so pages feel instant and add optimistic updates with rollback. If a request fails, an error boundary shows retry while the rest of the page stays up.”

Mid:
“My data/state strategy uses precise query keys and cancelation to prevent races. Mutations are optimistic for safe edits, pessimistic for deletes. I invalidate by tag, not globally. Offline uses a mutation queue with clientId so the server deduplicates. Real-time deltas update caches with setQueryData.”

Senior:
“I design APIs for convergence: ETags/If-Match, idempotent mutations, and delta streams. Clients split UI vs remote, use Suspense + local error boundaries, and background revalidate. Events batch into normalized cache upserts. We test rollback/conflict/offline paths, encrypt persisted caches, and log mutation lifecycles to catch regressions before users do.”

Evaluation Criteria

A strong answer defines a data/state strategy that separates UI state (Redux) from server data (React Query) and explains why that kills duplication and stale caches. It must show concrete tools to prevent race conditions: scoped query keys, request cancelation, AbortController, and server-side versions/ETags. Handling of writes should cover optimistic updates with rollback, when to go pessimistic, and idempotent mutations for offline support. For rendering, look for Suspense and localized error boundaries to avoid error waterfalls and allow partial success. Real-time updates should route through the cache with batched upserts. Finally, security/ops: encrypted persistence, purge on logout, telemetry of mutation lifecycle, and tests for rollback/conflict/offline flows. Answers that say “Redux for everything” or “just refetch” miss the bar.

Preparation Tips

Build a tiny demo: React + React Query + Redux. Add a list/detail with SWR, staleTime, and tag-based invalidation. Implement mutations with optimistic update and rollback. Wire AbortController and cancelQueries to prove race prevention on fast filter changes. Add an ETag endpoint and enforce If-Match to simulate conflicts; show a small merge UI. Persist the query cache to IndexedDB, encrypt or purge on logout, and add a queued mutation store with clientId dedupe to demo offline support. Layer Suspense and error boundaries per widget; throw a failing sub-query to show the page doesn’t collapse. Record a 60–90s story connecting these pieces into one coherent strategy.

Real-world Context

A SaaS dashboard cut p95 latency visibly by adopting SWR + prefetch; users saw instant cached screens while React Query refreshed in the background. An e-commerce cart once duplicated orders during outages; adding idempotent mutations with clientId and a queue fixed it, and offline support became a selling point. A collab tool suffered flicker from raw WebSocket handlers; routing deltas through setQueryData with versioned upserts stabilized views. A fintech killed a nasty error-waterfall issue by replacing a global Suspense boundary with per-section boundaries, keeping charts alive when one query failed. Across stacks, the consistent win came from disciplined separation (Redux vs Query), precise invalidation, and race-aware fetch/mutate flows: one cohesive data/state strategy.

Key Takeaways

  • Split UI (Redux) and remote data (React Query); no duplicate truths.
  • Prevent races with scoped keys, cancelation, and server versions/ETags.
  • Kill stale caches via SWR, staleTime, and tag-based invalidation.
  • Avoid error waterfalls with localized Suspense + error boundaries.
  • Support offline with persisted caches, queued idempotent mutations.

Practice Exercise

Scenario:
You’re building a projects dashboard that must work offline, handle rapid filter changes, and stream real-time updates. Users often switch orgs and expect no flicker or duplicate actions.

Tasks:

  1. Ownership split: Put UI bits (filters, modal open, selected project) in Redux; all project/entity data in React Query.
  2. Reads: Create keys ['projects', orgId, filters] with staleTime=30s, refetch on focus/reconnect. Prefetch next page on hover.
  3. Writes: Implement optimistic rename and tag toggle with rollback on error; destructive archive uses pessimistic flow with confirm. Tag-invalidate only affected lists/details.
  4. Races: On org/filter change, call cancelQueries({ queryKey: ['projects'] }) and use AbortController in fetcher.
  5. Conflicts: Server returns ETag; client sends If-Match. On 412, fetch latest, show diff banner, retry with merged fields.
  6. Offline support: Persist cache to IndexedDB; queue mutations with UUID clientId, retry with backoff, purge on logout.
  7. Real-time: Subscribe to a delta stream; batch updates every 100ms and setQueryData upsert by id+version.
  8. Boundaries: Add Suspense + error boundaries per widget (list, details, activity). Failing widget shows retry while others render.
  9. Telemetry/tests: Log mutation lifecycle; test optimistic rollback, conflict merge, offline replay, and rapid filter racing.

Deliverable:
A short demo and readme that proves a fast, race-free data/state strategy with solid offline support, stable Suspense, and zero error waterfalls.
