How to design scalable ASP.NET Core apps with REST + SignalR?
C# ASP.NET Developer
Answer
A scalable ASP.NET Core app combines REST APIs for CRUD and integrations with SignalR hubs for real-time events. Stateless API servers scale horizontally behind load balancers. SignalR hubs coordinate messages across servers via a Redis backplane or Azure SignalR Service to support thousands of concurrent clients. Shared concerns (auth, logging, caching) live in middleware. For resilience, apply async I/O, rate limiting, and message queues. Monitoring via Application Insights validates latency and connection health.
Long Answer
Designing a scalable ASP.NET Core application that supports both RESTful APIs and real-time communication requires layering concerns around API design, connection management, and infrastructure scaling. The goal is to ensure predictable REST endpoints while handling long-lived SignalR connections gracefully under load.
1) Core architecture
- Build modular REST controllers exposing CRUD and integration endpoints.
- Implement SignalR hubs for bi-directional, low-latency messaging (chat, dashboards, notifications).
- Keep services stateless, offloading state to caches and databases.
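The core architecture above can be sketched in a minimal `Program.cs`. The hub name and route (`NotificationsHub`, `/hubs/notifications`) are illustrative, not prescribed:

```csharp
// Minimal sketch: REST controllers and a SignalR hub side by side in one stateless app.
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();   // REST layer
builder.Services.AddSignalR();       // real-time layer

var app = builder.Build();
app.MapControllers();
app.MapHub<NotificationsHub>("/hubs/notifications");
app.Run();

// A thin hub: no fields, no in-memory state, so it is safe to scale horizontally.
public class NotificationsHub : Hub
{
    public Task Broadcast(string message) =>
        Clients.All.SendAsync("receiveMessage", message);
}
```

Keeping the hub free of instance state is what makes the backplane patterns in the next section possible.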
2) Scalability patterns
- Deploy API + SignalR servers behind reverse proxies / load balancers (NGINX, Azure Front Door, AWS ALB).
- Use horizontal scaling: add nodes when demand spikes.
- For SignalR, coordinate clients across servers using a backplane (Redis, Azure SignalR Service).
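Wiring up a backplane is a one-line change at registration time. A sketch, assuming the `Microsoft.AspNetCore.SignalR.StackExchangeRedis` package (or `Microsoft.Azure.SignalR` for the managed option); `"redis:6379"` is a placeholder connection string:

```csharp
// Option A: Redis backplane -- messages published on one node are
// relayed to hubs on every other node.
builder.Services.AddSignalR()
    .AddStackExchangeRedis("redis:6379");

// Option B: Azure SignalR Service -- connections terminate at the
// managed service, removing connection pressure from app servers.
// builder.Services.AddSignalR().AddAzureSignalR();
```

With either option, `Clients.All.SendAsync(...)` reaches clients regardless of which node they are connected to.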
3) Data and state
- REST APIs: connect to EF Core + SQL Server or Cosmos DB for persistence.
- SignalR hubs: avoid in-memory state; use distributed cache (Redis, SQL) for presence, group membership, and message delivery.
- For high-throughput workloads, push events into queues/streams (Azure Service Bus, Kafka) and broadcast via hubs.
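A common way to keep presence out of process memory is to write it to Redis from the hub's connection lifetime events. A sketch assuming `StackExchange.Redis` is registered in DI; the `presence:` key scheme is illustrative:

```csharp
using Microsoft.AspNetCore.SignalR;
using StackExchange.Redis;

// Presence survives app restarts and is visible from every node,
// unlike an in-memory dictionary.
public class PresenceHub : Hub
{
    private readonly IDatabase _redis;

    public PresenceHub(IConnectionMultiplexer mux) => _redis = mux.GetDatabase();

    public override async Task OnConnectedAsync()
    {
        // One user may have several connections (tabs, devices), so use a set.
        await _redis.SetAddAsync($"presence:{Context.UserIdentifier}", Context.ConnectionId);
        await base.OnConnectedAsync();
    }

    public override async Task OnDisconnectedAsync(Exception? exception)
    {
        await _redis.SetRemoveAsync($"presence:{Context.UserIdentifier}", Context.ConnectionId);
        await base.OnDisconnectedAsync(exception);
    }
}
```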
4) Performance optimizations
- Apply async/await I/O to maximize throughput.
- Cache frequently requested REST responses in MemoryCache or Redis.
- Enable response compression for large payloads.
- Use gRPC for microservice communication if latency-sensitive.
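The first three optimizations combine naturally in a read-through cache around an async query. A sketch, assuming a hypothetical `AppDbContext` with a `Products` set; the connection string, cache key, and TTL are illustrative:

```csharp
// Requires Microsoft.Extensions.Caching.StackExchangeRedis.
using Microsoft.Extensions.Caching.Distributed;

builder.Services.AddResponseCompression();
builder.Services.AddStackExchangeRedisCache(o => o.Configuration = "redis:6379");

var app = builder.Build();
app.UseResponseCompression();

// Read-through caching: serve hot responses from Redis, fall back to EF Core.
app.MapGet("/products/{id}", async (int id, IDistributedCache cache, AppDbContext db) =>
{
    var key = $"product:{id}";
    var cached = await cache.GetStringAsync(key);
    if (cached is not null) return Results.Content(cached, "application/json");

    var product = await db.Products.FindAsync(id);   // async I/O end-to-end
    if (product is null) return Results.NotFound();

    var json = System.Text.Json.JsonSerializer.Serialize(product);
    await cache.SetStringAsync(key, json, new DistributedCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
    });
    return Results.Content(json, "application/json");
});
```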
5) Reliability and fault tolerance
- Protect APIs with retry policies, circuit breakers (Polly).
- Enforce rate limiting + throttling to shield hubs.
- Design fallback flows (e.g., queue messages if client disconnected).
- Use health checks for REST endpoints and connection lifetime events for SignalR.
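Rate limiting and health checks are both built into ASP.NET Core (the rate limiter since .NET 7). A sketch; the policy name, window size, and limits are illustrative values to tune under load testing:

```csharp
using Microsoft.AspNetCore.RateLimiting;
using System.Threading.RateLimiting;

builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
    options.AddFixedWindowLimiter("api", o =>
    {
        o.Window = TimeSpan.FromSeconds(10);
        o.PermitLimit = 100;   // requests per window
        o.QueueLimit = 0;      // reject rather than queue under pressure
    });
});
builder.Services.AddHealthChecks();

var app = builder.Build();
app.UseRateLimiter();
app.MapHealthChecks("/healthz");                   // probed by the load balancer
app.MapControllers().RequireRateLimiting("api");   // shield the REST surface
```

Retry and circuit-breaker policies for outbound calls can be layered on top with Polly via `IHttpClientFactory`.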
6) Security
- Centralize JWT or OAuth 2.0 authentication for both REST and SignalR.
- Apply claims-based authorization in hubs and controllers.
- Encrypt traffic with HTTPS/WSS, enforce CORS policies.
- Validate inputs to prevent injection attacks.
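One bearer scheme can cover both layers. Because WebSockets cannot send an `Authorization` header after the handshake, SignalR clients conventionally pass the token as an `access_token` query parameter, which the bearer handler picks up for hub paths. A sketch; the authority and audience are placeholders:

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://idp.example.com";   // placeholder IdP
        options.Audience = "myapp-api";                  // placeholder audience
        options.Events = new JwtBearerEvents
        {
            OnMessageReceived = context =>
            {
                var token = context.Request.Query["access_token"];
                var path = context.HttpContext.Request.Path;
                if (!string.IsNullOrEmpty(token) && path.StartsWithSegments("/hubs"))
                    context.Token = token;   // handler still validates it normally
                return Task.CompletedTask;
            }
        };
    });
builder.Services.AddAuthorization();
```

Controllers and hubs then share the same `[Authorize]` attribute and claims-based policies.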
7) Deployment considerations
- Containerize with Docker; orchestrate using Kubernetes or Azure Kubernetes Service (AKS).
- CI/CD pipelines automate builds, migrations, and hub scaling.
- For real-time global scale, prefer Azure SignalR Service, which abstracts scaling and connection routing.
8) Observability
- Use Application Insights or ELK to track REST latency, hub connection counts, dropped connections, and error rates.
- Set alerts on p95 response times and SignalR reconnect spikes.
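REST telemetry comes largely for free once Application Insights is registered; hub connection counts need a small custom metric. A sketch assuming the `Microsoft.ApplicationInsights.AspNetCore` package; the metric name is illustrative:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.AspNetCore.SignalR;

builder.Services.AddApplicationInsightsTelemetry();   // REST latency, failures, dependencies

// Emit a per-node gauge of live hub connections from lifetime events.
public class MonitoredHub : Hub
{
    private static int _connections;   // per-node counter, metric only -- not business state
    private readonly TelemetryClient _telemetry;

    public MonitoredHub(TelemetryClient telemetry) => _telemetry = telemetry;

    public override Task OnConnectedAsync()
    {
        _telemetry.GetMetric("signalr_connections")
                  .TrackValue(Interlocked.Increment(ref _connections));
        return base.OnConnectedAsync();
    }

    public override Task OnDisconnectedAsync(Exception? exception)
    {
        _telemetry.GetMetric("signalr_connections")
                  .TrackValue(Interlocked.Decrement(ref _connections));
        return base.OnDisconnectedAsync(exception);
    }
}
```

Alerting on this metric per node is what surfaces the reconnect spikes mentioned above.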
In summary: REST APIs provide robust request/response flows, SignalR enables push-style interactions, and infrastructure patterns (backplane, queues, caching, load balancing) ensure both remain fast and reliable under heavy load.
Common Mistakes
Teams often host REST APIs and SignalR hubs in the same process without considering connection pressure; thousands of long-lived hub connections can starve the API of resources. Omitting a backplane ties each client to a single server, so broadcasts never reach clients connected to other nodes and horizontal scaling silently breaks. Many misuse in-memory dictionaries for tracking connections, losing that state on every restart. Another trap is overloading hubs with business logic instead of pushing heavy work to background services and queues. Security mistakes include skipping token validation on SignalR connections or mixing auth schemes between REST and hubs. Finally, neglecting monitoring—no telemetry on dropped connections or hub latency—leads to silent failures under load.
Sample Answers (Junior / Mid / Senior)
Junior:
“I’d build REST APIs with controllers and add a SignalR hub for chat. I’d keep the servers stateless and use JWT for auth. Scaling would be handled with a load balancer.”
Mid:
“I’d separate REST endpoints for CRUD and use SignalR hubs for push updates. For scale, I’d add a Redis backplane so hubs work across multiple nodes. REST would use async controllers, caching, and EF Core for persistence. Both REST and SignalR would share JWT/OAuth auth. Monitoring via Application Insights.”
Senior:
“I’d architect REST APIs as stateless microservices with EF Core and caching, and SignalR hubs as thin layers integrated with a message bus for broadcast. Connections would scale via Azure SignalR Service, removing hub state from app servers. Auth flows unified (JWT, claims-based). Deployed in Docker/Kubernetes with autoscaling. Observability: p95 REST latency, hub reconnect spikes, and error traces. Fallback logic ensures message delivery even if clients drop.”
Evaluation Criteria
Interviewers expect a clear separation of REST and SignalR responsibilities. Strong answers emphasize stateless REST, backplane/managed SignalR for scale, and infrastructure practices (containers, orchestration, load balancing). Security must be unified: JWT/OAuth across both layers. Observability is critical: metrics for both REST and hub traffic. They’ll probe trade-offs: when to use Redis vs Azure SignalR, how to handle backpressure, and how to avoid mixing business logic into hubs. Weak answers: “just add SignalR to the API” or “scale vertically.” Strong answers: multi-layer architecture with queues, async I/O, distributed cache, CI/CD, and clear monitoring.
Preparation Tips
Practice by building a sample ASP.NET Core app:
- Create REST endpoints with controllers for CRUD.
- Add a SignalR hub for notifications.
- Secure both with JWT bearer tokens.
- Run with Docker Compose + Redis backplane.
- Simulate scale by running multiple app instances; verify clients get broadcasts from any node.
- Add EF Core with SQL Server for REST data and Redis for caching presence.
- Enable Application Insights and measure latency, dropped connections, and request throughput.
- Test autoscaling in Kubernetes or Azure App Service.
Be ready to explain: why state must not stay in-memory, how SignalR differs from REST, why Redis/Azure SignalR matters, and how unified auth + monitoring keep the system reliable at scale.
Real-world Context
A logistics firm built an ASP.NET Core app with REST APIs for fleet data and SignalR dashboards for real-time tracking. Initially, all hub state was in-process; scaling beyond 2 nodes broke connections. Migrating to Azure SignalR Service solved connection routing at scale. Another SaaS used Redis backplane with Dockerized app servers across multiple AZs, ensuring clients could reconnect seamlessly. Security teams enforced JWT validation on hub connections, preventing token spoofing. Observability via Application Insights revealed that p95 REST latency spiked during heavy hub traffic; moving background jobs to Azure Functions stabilized both. These cases highlight that separating state, introducing backplanes, and enforcing shared auth flows make ASP.NET Core apps scale.
Key Takeaways
- REST APIs = stateless controllers; SignalR hubs = real-time layer.
- Scale hubs with Redis or Azure SignalR.
- Use async I/O, caching, and background queues.
- Unify security with JWT/OAuth across both.
- Monitor latency + hub connection health continuously.
Practice Exercise
Scenario: You’re building a collaboration app: REST APIs handle user CRUD and docs, while SignalR pushes real-time comments and presence updates. The client base will scale from 1k to 100k concurrent users.
Tasks:
- Build REST APIs with controllers, async I/O, EF Core + SQL Server. Add caching with Redis.
- Implement SignalR hubs for presence and notifications. Do not store state in memory. Track presence in Redis.
- Deploy 3 app instances behind a load balancer. Add Redis backplane to sync hubs. Validate broadcasts reach all clients.
- Secure APIs and hubs with JWT bearer tokens.
- Instrument Application Insights: log REST latency, dropped SignalR connections, and connection counts per node.
- Add background jobs with Azure Functions for heavy work (e.g., generating reports) and push events into hubs via queues.
- Run load tests simulating 50k SignalR connections. Measure latency and throughput.
- Prepare fallback: if hub overloaded, queue messages and send once reconnected.
Deliverable: Draft a 90-second pitch explaining how your design scales REST + SignalR, avoids in-memory state, secures both layers with unified auth, and validates reliability with telemetry + load tests.

