When should Rust web stacks use WASM, and how do you share logic across targets?
Rust Web Developer
Answer
Use client-side WASM for heavy in-browser compute, offline/edge UX, or reuse of Rust logic (e.g., crypto, parsers) via wasm-bindgen. Choose server-side WASM (WASI/wasmtime, Spin, edge runtimes) for sandboxing, multi-tenant plugins, and portable functions at the edge. Share types and business rules in a core crate with serde derives and #[cfg(target_arch = "wasm32")] adapters. Keep builds fast with workspaces, cargo-chef, sccache, and minimal features, and secure the supply chain with cargo-deny, cargo-audit, and pinned dependencies.
Long Answer
Rust lets you run the same safe, zero-cost abstractions in browsers and on servers. The key is deciding where WASM fits, how to share code across targets, and how to keep builds fast and the supply chain tight.
1) When to choose client-side WASM
Client WASM shines when you need heavy, latency-sensitive compute close to the user without round trips:
- Data processing (parsing, compression, diffing), cryptography, ML inference on small models.
- Rich editors/visualizations where JS would be slow or unsafe.
- Offline/low-connectivity UX with local persistence.
Use wasm-bindgen + web-sys (or frameworks like Yew, Leptos, Sycamore) to bind to the DOM. Prefer SSR + hydration (e.g., Leptos/Next-like patterns) when first-paint matters, sending HTML from the server then hydrating WASM for interactivity.
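As a sketch of this split (the `word_count` helper and `count_words` wrapper are illustrative names, not a real API), shared logic stays pure while a thin wasm-bindgen wrapper is compiled only for the browser target:

```rust
// Pure, platform-free logic: reusable from the server and the browser alike.
pub fn word_count(text: &str) -> usize {
    text.split_whitespace().count()
}

// Thin browser-facing wrapper, compiled only when targeting wasm32,
// so native builds never pull wasm-bindgen into the dependency graph.
#[cfg(target_arch = "wasm32")]
mod web {
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    pub fn count_words(text: &str) -> u32 {
        super::word_count(text) as u32
    }
}
```

Because the wrapper module is gated on `target_arch = "wasm32"`, the same crate compiles cleanly for native tests and for the browser bundle.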
2) When to choose server-side WASM
Server WASM (WASI plus a runtime such as wasmtime or wasmer, or platforms such as Spin, Fastly Compute@Edge, and Cloudflare Workers) is ideal for:
- Sandboxed extensions/plugins: run untrusted tenant code safely with capability-based I/O.
- Portable edge functions: the same module runs across providers.
- Polyglot isolation: drop-in logic updates without rebuilding the host.
- Horizontal elasticity: fast cold starts, tiny footprints.
Avoid it when you need deep OS integration, heavy native libs without WASI shims, or when standard Linux containers are simpler.
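A minimal wasmtime embedding sketch shows the sandboxing model: the inline WAT stands in for a tenant-supplied .wasm file, and a real plugin host would additionally wire WASI capabilities through a wasmtime::Linker rather than leaving the guest import-free.

```rust
// Host side: load an (untrusted) module and call one export.
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = Engine::default();
    // Inline WAT for illustration; production hosts load compiled .wasm bytes.
    let module = Module::new(
        &engine,
        r#"(module
             (func (export "add") (param i32 i32) (result i32)
               local.get 0
               local.get 1
               i32.add))"#,
    )?;
    // The guest receives no imports at all: nothing to escape into.
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    println!("guest add(2, 3) = {}", add.call(&mut store, (2, 3))?);
    Ok(())
}
```

Capability-based I/O means the host decides exactly which files, sockets, or clocks a module may touch; by default it gets none.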
3) Shared types and business logic across crates
Create a Rust workspace with crates:
- core-domain/ – pure business models + rules. No platform I/O. Derive Serialize, Deserialize, Eq, Hash. Keep it #![forbid(unsafe_code)].
- core-types/ – DTOs and API shapes with serde schemas. Optionally expose ts-rs or serde_reflection/schemars to generate TS/JSON Schemas for frontend validation.
- adapters-web/ – client bindings using wasm-bindgen (feature wasm).
- adapters-server/ – HTTP handlers (Axum/Actix), DB repos, queues.
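A workspace manifest tying these crates together might look like this (crate names taken from the list above; versions illustrative):

```toml
[workspace]
members = ["core-domain", "core-types", "adapters-web", "adapters-server"]
resolver = "2"

# Centralize versions so every crate resolves the same serde.
[workspace.dependencies]
serde = { version = "1", features = ["derive"] }
```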
Use conditional compilation:
#[cfg(target_arch = "wasm32")]
use wasm_bindgen::prelude::*;
#[cfg(not(target_arch = "wasm32"))]
use std::time::SystemTime;
Keep business logic in core-domain with deterministic inputs/outputs; platform crates only orchestrate I/O. For binary sizes, gate costly deps behind features:
[features]
default = ["serde"]
server = ["tracing", "sqlx"]
wasm = ["wasm-bindgen", "console_error_panic_hook"]
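Putting the conditional compilation above to work, a cross-target shim keeps the core deterministic: domain code calls `now_millis()` and never touches a platform API directly (the js_sys wiring on the wasm32 side is an illustrative assumption and requires the js-sys crate).

```rust
// Native build: back the shim with the system clock.
#[cfg(not(target_arch = "wasm32"))]
fn now_millis() -> u64 {
    use std::time::{SystemTime, UNIX_EPOCH};
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_millis() as u64)
        .unwrap_or(0)
}

// Browser build: back the shim with the JS Date API (via js-sys).
#[cfg(target_arch = "wasm32")]
fn now_millis() -> u64 {
    js_sys::Date::now() as u64
}
```

Tests for the core crate run natively against the std implementation; the wasm32 variant only exists in browser builds.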
4) Data formats, FFI, and stability
Prefer serde + JSON for portability and debuggability. For perf-critical paths consider bincode/postcard on internal hops, and MessagePack/CBOR on the wire if needed. Between WASM and host, minimize boundary crossings; batch calls and pass ArrayBuffer/Uint8Array blobs. When exposing to JS, design thin wrappers that translate between idiomatic TS types and Rust structs, with the TS side generated by ts-rs to avoid hand-written drift.
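For instance, a single entry point that decodes a packed Uint8Array turns thousands of per-item JS↔WASM calls into one call per batch (the little-endian u32 encoding here is an illustrative assumption, not a standard format):

```rust
/// Sum a batch of little-endian u32 values passed across the boundary
/// as one binary blob. One coarse call replaces many fine-grained ones.
fn sum_u32_batch(buf: &[u8]) -> Option<u64> {
    // Reject malformed batches that are not a whole number of records.
    if buf.len() % 4 != 0 {
        return None;
    }
    Some(
        buf.chunks_exact(4)
            .map(|c| u32::from_le_bytes([c[0], c[1], c[2], c[3]]) as u64)
            .sum(),
    )
}
```

The same batching idea applies to strings and structs: serialize once on the JS side, decode once in Rust.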
5) Build speed and size optimization
- Workspaces + cargo-chef to pre-bake dependency layers in CI.
- sccache or bazel remote cache for parallel builds.
- Minimize features: default-features = false for crates like reqwest/serde_json.
- Release flags: codegen-units=1, lto="thin", panic="abort" (WASM), opt-level=z for client WASM, then run wasm-opt -Oz.
- Split crates so hot-change code is small; keep heavy deps in rarely touched crates.
- Use cargo-nextest to parallelize tests; define a custom release-with-debug cargo profile so profiling builds keep symbols without paying full release link times on every change.
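The release flags above translate into a profile along these lines (a size-oriented sketch for the client bundle; tune per target, and still run wasm-opt -Oz on the output):

```toml
[profile.release]
opt-level = "z"     # favor size over speed for browser bundles
lto = "thin"
codegen-units = 1
panic = "abort"
strip = true        # drop symbols from the shipped artifact
```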
6) Supply-chain hardening
- Pin versions via Cargo.lock and review updates through Renovate/Dependabot PRs.
- cargo-deny: ban yanked/licensing-incompatible crates, enforce allowed-sources (only crates.io), deny wildcard features.
- cargo-audit: fail CI on known CVEs.
- Vendor or mirror crates in critical environments; enable sparse registry; avoid git deps without commit hashes.
- For WASM plugins, sign modules and verify at load; restrict WASI caps (no filesystem/net unless required).
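A starting deny.toml enforcing these policies might look like the following sketch (the allowed licenses are placeholders for your organization's policy):

```toml
[advisories]
yanked = "deny"              # refuse yanked crates

[licenses]
allow = ["MIT", "Apache-2.0"]

[bans]
wildcards = "deny"           # no wildcard version requirements

[sources]
unknown-registry = "deny"    # crates.io only
unknown-git = "deny"
allow-registry = ["https://github.com/rust-lang/crates.io-index"]
```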
7) Observability and errors
Use tracing with tracing-wasm in the browser and tracing-subscriber on server. Emit structured fields (tenant, request_id). Map domain errors to HTTP via thiserror + Axum/Actix responders; use RFC 7807 style JSON for consistency. In WASM, hook console_error_panic_hook in non-prod and symbolicate with source maps.
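As a std-only sketch of the RFC 7807 body shape (`problem_json` is a hypothetical helper; real handlers would build this with serde_json for proper escaping and return it with the application/problem+json content type):

```rust
/// Format a minimal RFC 7807 "problem details" JSON body.
/// Note: no string escaping here; use serde_json in production.
fn problem_json(status: u16, title: &str, detail: &str) -> String {
    format!(
        r#"{{"type":"about:blank","title":"{}","status":{},"detail":"{}"}}"#,
        title, status, detail
    )
}
```

Keeping one error shape across all routes makes client-side handling and log correlation uniform.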
8) Decision matrix (rule of thumb)
- Need CPU-heavy UX offline → client WASM.
- Need safe multi-tenant plugins/edge portability → server WASM.
- Need rapid delivery with deep OS libs → native server + share domain in crates; maybe client WASM merely for small features.
Bottom line: pick WASM where sandboxing, portability, or UX latency demand it; keep domain logic in a pure crate with serde types; partition features; and enforce fast builds and a hardened supply chain.
Common Mistakes
- Forcing all UI into WASM; small JS glue is often simpler.
- Mixing I/O in the core crate, blocking reuse on wasm32.
- Excessive boundary calls (chatty JS↔WASM), causing jank.
- Pulling heavy server-only deps into the client via shared crates.
- No feature gating → bloated binaries and slow compiles.
- Relying on git dependencies without pinning commits or checksums.
- Skipping cargo-deny/cargo-audit; shipping known CVEs.
- Ignoring wasm-opt and LTO, creating multi-MB bundles that ruin UX.
Sample Answers
Junior:
“I would use WASM in the browser for heavy compute and reuse Rust logic with wasm-bindgen. I would keep shared types in a common crate with serde and compile it for both targets. For builds, I would cache dependencies and run cargo-audit.”
Mid:
“I choose client WASM for low-latency parsing and server WASM for sandboxed plugins. Domain types live in core-domain with serde and TS generation via ts-rs. I gate features (wasm vs server) to avoid pulling server deps into the browser. CI uses cargo-chef + sccache, and cargo-deny enforces allowed sources.”
Senior:
“I apply a decision matrix: client WASM for user-perceived latency, server WASM for isolation/edge. The workspace splits core-domain (pure, no I/O), core-types (serde DTOs + schemas), and thin adapters. We optimize size (wasm-opt, LTO), speed (workspaces, nextest), and secure the chain (pinned crates, audit/deny, WASI caps). Observability uses tracing on both sides and RFC-7807 errors server-side.”
Evaluation Criteria
Look for:
- Clear criteria for choosing client vs server WASM.
- A workspace design that shares serde models and pure business logic across targets.
- Understanding of conditional compilation and feature gating to avoid dependency bleed.
- Concrete build acceleration tactics (cargo-chef, sccache, minimal features) and WASM size optimizations.
- Supply-chain controls (cargo-deny, cargo-audit, pinning).
- Awareness of observability and consistent error modeling.
Red flags: “put everything in WASM,” giant bundles, no gating, shared crates that import OS-specific deps, or ignoring dependency risk.
Preparation Tips
- Build a small workspace: core-domain, web-server (Axum), web-client (Yew/Leptos).
- Add serde derives and generate TS types with ts-rs; verify no hand-rolled drift.
- Practice #[cfg(target_arch = "wasm32")] shims and feature flags to separate adapters.
- Measure JS↔WASM crossing costs; batch calls and pass binary blobs.
- Integrate cargo-chef, sccache, nextest; profile compile times.
- Run cargo-audit and cargo-deny; fix or ban risky crates.
- Optimize a WASM bundle with opt-level=z and wasm-opt -Oz; compare sizes.
- Add tracing + tracing-wasm and return RFC-7807 errors on server routes.
Real-world Context
A document viewer moved diffing to client WASM; bandwidth dropped and UX latency improved 3×. A multi-tenant marketplace adopted server WASM plugins via wasmtime; customer code ran safely with minimal blast radius. A fintech split logic into core-domain and used ts-rs to sync API models; integration bugs fell sharply. A team introduced cargo-chef and sccache, cutting CI from 28 to 8 minutes; enabling cargo-deny blocked a transitive yanked crate. Applying wasm-opt reduced bundle size by 45%, improving LCP and conversion.
Key Takeaways
- Use client WASM for latency-sensitive compute and server WASM for sandboxed, portable logic.
- Centralize models/logic in a pure core crate with serde; keep I/O in adapters.
- Control size and speed with feature flags, caching, LTO, and wasm-opt.
- Harden the chain: pin dependencies, cargo-audit, cargo-deny, allowed sources.
- Generate TS/schemas to prevent drift; minimize JS↔WASM boundary calls.
- Instrument with tracing and standardize server errors (RFC 7807).
Practice Exercise
Scenario:
Your product needs a cross-platform diff/merge engine used in both a browser editor and an edge function that validates submissions. You must reuse logic, keep builds fast, and satisfy a security review.
Tasks:
- Create a workspace with core-domain (diff algorithm + serde types), web-client (wasm, Yew/Leptos), and edge-validate (WASI/wasmtime or Spin).
- Add feature flags: wasm enables wasm-bindgen and console_error_panic_hook; server enables tracing and axum. Ensure core-domain has no I/O or target-specific deps.
- Generate TS bindings for DTOs using ts-rs and publish to the frontend.
- Optimize the client build: opt-level=z, lto=fat, panic=abort, and run wasm-opt -Oz. Measure bundle size before/after.
- Speed up CI with cargo-chef + sccache and cargo-nextest; record pipeline time improvements.
- Secure the chain: enable cargo-deny (ban git sources; allow only crates.io), run cargo-audit in CI, and pin all versions.
- Implement server-side WASM validation with restricted WASI caps; sign and verify modules before load.
- Add tracing in both targets and return RFC-7807 errors from the server API. Provide a short report on performance, size, and security outcomes.
Deliverable:
Repo and CI configs demonstrating shared serde types, dual-target logic, fast builds, hardened dependencies, and measurable gains in size, speed, and security.

