How do you optimize Laravel database performance and queries?

Tune Laravel with fast Eloquent ORM performance, safe caching, deliberate indexing, and zero N+1 queries.
Learn a blueprint for Laravel performance: eliminate N+1, shape Eloquent queries, apply caching, design indexes, and validate wins objectively.

Answer

I optimize Laravel database interactions by starting with query profiling, then eliminating N+1 via with(), loadMissing(), and carefully scoped eager loads. I replace chatty loops with set-based queries, projections, and chunk() for large reads. I back results with cache (tags, per-tenant keys, stampede control), and design indexes that match filters and sort order. I keep writes lean with transactions and upserts, and verify improvements using Laravel Telescope, query logs, and realistic load tests.

Long Answer

High performance in Laravel is achieved by shaping data access, preventing redundant round trips, and proving every change with measurements. My strategy covers Eloquent ORM performance, caching, indexing, and systematic N+1 prevention.

1) Measure first, change later

I enable query logging per request in development and staging, use Laravel Telescope and the DB::listen hook, and capture slow queries with bindings. I categorize endpoints by read and write intensity and record baseline latency percentiles. This reveals whether bottlenecks are query count, missing indexes, heavy payloads, or serialization.
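As a minimal sketch of that instrumentation, a `DB::listen` hook can flag slow queries with their bindings (the 100 ms threshold here is an arbitrary example, not a recommendation):

```php
use Illuminate\Support\Facades\DB;

// In a service provider's boot() method: log any query slower than 100 ms,
// including its SQL and bindings, so slow statements surface in the logs.
DB::listen(function ($query) {
    if ($query->time > 100) {
        logger()->warning('Slow query detected', [
            'sql'      => $query->sql,
            'bindings' => $query->bindings,
            'time_ms'  => $query->time,
        ]);
    }
});
```

In development, combining this with Telescope's query watcher gives both per-request query lists and a durable log of outliers.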

2) Eliminate N+1 with disciplined loading

I never let the view or API layer trigger implicit lazy loads. For lists I use with() to eager load precisely the relations used in the response. For conditional paths I use loadMissing() to load relations only when they have not already been loaded. I select columns explicitly with select() and relation constraints so eager loads remain small. For nested relations I apply keyed constraints, for example with(['items' => fn ($q) => $q->select('id','order_id','price')]). I avoid global eager loading that bloats unrelated queries, and I assert with automated tests that critical endpoints perform a fixed number of queries.
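A minimal sketch of constrained eager loading, assuming hypothetical Order and customer/items relations:

```php
// Eager load only the columns the response needs. Note that foreign keys
// (customer_id on the parent, order_id on items) must be selected so
// Eloquent can match rows back to their parents.
$orders = Order::query()
    ->select('id', 'customer_id', 'total', 'created_at')
    ->with([
        'customer:id,name',
        'items' => fn ($q) => $q->select('id', 'order_id', 'price'),
    ])
    ->latest()
    ->get();

// On a conditional path, top up a relation only if it is not loaded yet:
$order->loadMissing('payments');
```

Forgetting the foreign key in a constrained select is a classic failure mode: the relation silently comes back empty.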

3) Prefer set-based operations over loops

I replace per-row work with set-based queries: whereIn instead of per-id selects, aggregate functions for counts and sums, and update or upsert for bulk writes. I move heavy filters and sorts into the database layer and project only required columns. For pipelines that must touch many rows, I use chunk() or cursor() to stream without exhausting memory, and I offload long work to queues so requests remain responsive.
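The loop-to-set conversion can be sketched as follows, with Product, Order, and the column names as illustrative assumptions:

```php
// One query instead of N per-id selects:
$users = User::whereIn('id', $ids)->get()->keyBy('id');

// Bulk write: insert-or-update in a single statement keyed on a unique column.
Product::upsert(
    [
        ['sku' => 'A-1', 'price' => 10],
        ['sku' => 'A-2', 'price' => 12],
    ],
    ['sku'],    // unique-by columns
    ['price']   // columns to update on conflict
);

// Stream a large table in fixed-size batches to bound memory.
Order::where('status', 'pending')->chunkById(500, function ($orders) {
    foreach ($orders as $order) {
        // per-row work, ideally dispatched to a queue
    }
});
```

chunkById() is preferred over chunk() when rows are being modified during iteration, since it paginates by primary key rather than offset.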

4) Shape Eloquent to generate efficient SQL

I keep scopes pure and composable, use when() for conditional clauses, and avoid hidden magic that generates Cartesian products. I select the primary key and any foreign keys explicitly so relations can still hydrate while the SELECT list stays small. For pagination I prefer cursor pagination when consistent ordering exists, because deep offset pagination becomes slow. I apply orderBy on indexed fields and avoid random ordering in hot paths.
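A sketch of a composable scope with conditional clauses and cursor pagination; the Order model, store_id column, and request fields are assumptions for illustration:

```php
use Illuminate\Database\Eloquent\Model;

class Order extends Model
{
    // A pure, composable local scope: no side effects, just query shaping.
    public function scopeForStore($query, int $storeId)
    {
        return $query->where('store_id', $storeId);
    }
}

// when() adds the clause only if the value is truthy, keeping the
// controller free of if/else query assembly.
$orders = Order::forStore($storeId)
    ->when($request->query('status'), fn ($q, $status) => $q->where('status', $status))
    ->orderByDesc('created_at')  // indexed column, stable ordering
    ->cursorPaginate(25);        // avoids deep OFFSET scans on large tables
```

Cursor pagination encodes the last seen ordering values into the cursor, so page 1000 costs the same as page 1.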

5) Caching with safety and intent

I use application caching for read heavy responses and expensive aggregates. Keys include version components such as model updated_at or schema version so deployments do not leak stale entries. I use cache tags to invalidate groups, and protect against stampedes with a short lock or double-checked recomputation so only one worker rebuilds on expiry while others serve slightly stale data. For per-user data I vary keys by tenant, user, and locale. I keep time to live short for volatile records and consider rememberForever only for reference tables. I never cache unbounded lists without pagination.
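A minimal sketch of tagged, versioned caching with stampede control via an atomic lock; the key scheme, TTLs, and Order aggregate are illustrative assumptions, and tags require a tag-capable store such as Redis:

```php
use Illuminate\Support\Facades\Cache;

// Versioned, tenant-scoped key so deployments do not serve stale shapes.
$key = "store:{$storeId}:totals:v2";

// Atomic lock for stampede control: on expiry only one worker rebuilds,
// while others block briefly (up to 5 s) instead of all recomputing.
$totals = Cache::lock("lock:{$key}", 10)->block(5, function () use ($key, $storeId) {
    return Cache::tags(["store:{$storeId}"])->remember($key, 300, function () use ($storeId) {
        return Order::where('store_id', $storeId)->sum('total');
    });
});

// After a relevant write, invalidate the whole group:
Cache::tags(["store:{$storeId}"])->flush();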

6) Indexing that mirrors query shapes

Indexes must match the exact filter and sort order. I create composite indexes aligned as equality fields first, then range, then sort. I prune unused and overlapping indexes to keep writes fast and memory healthy. I use migrations to add indexes with algorithms that avoid long table locks when possible. Before shipping, I run EXPLAIN plans for the top queries to confirm that the chosen index is used and that sorting is satisfied without extra work.
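A migration sketch for such a composite index, using a hypothetical orders table filtered by store_id and sorted by created_at:

```php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            // Equality column first, sort column second, so the index
            // serves: WHERE store_id = ? ORDER BY created_at DESC
            $table->index(['store_id', 'created_at'], 'orders_store_created_idx');
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropIndex('orders_store_created_idx');
        });
    }
};
```

Running EXPLAIN on the target query should then show the index being used with no filesort.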

7) Transactions, contention, and write paths

For multi-step writes I wrap operations in transactions and keep the scope minimal. I avoid accidental table scans inside transactions and prefer upsert for deduplicated bulk writes. I apply optimistic concurrency on hot rows by checking updated_at or a version column, and I retry with bounded backoff to reduce lock contention. For counters and tallies I maintain summary tables or cached aggregates updated in a single atomic statement.
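The optimistic-concurrency pattern can be sketched with a version column check inside a retried transaction (the orders table and version column are illustrative assumptions):

```php
use Illuminate\Support\Facades\DB;

// Keep the critical section minimal; retry the whole transaction up to
// 3 times on deadlock.
DB::transaction(function () use ($orderId, $expectedVersion) {
    // Optimistic check: the update only applies if the row has not
    // changed underneath us since we read it.
    $updated = DB::table('orders')
        ->where('id', $orderId)
        ->where('version', $expectedVersion)
        ->update([
            'status'  => 'paid',
            'version' => $expectedVersion + 1,
        ]);

    if ($updated === 0) {
        // Someone else won the race: re-read and retry at the caller,
        // with bounded backoff.
        throw new \RuntimeException('Concurrent modification of order');
    }
}, 3);
```

The single atomic UPDATE doubles as the counter pattern mentioned above: increments and version bumps happen in one statement, never read-modify-write in PHP.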

8) Serialization and payload control

I reduce payload size with Eloquent API Resources and ->makeHidden() or ->only() to send just what clients need. I avoid serializing large relations by default. For massive responses I stream JSON or CSV rather than building large arrays in memory. Smaller payloads mean fewer allocations, less network transfer, and less garbage collection.
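A sketch of an API Resource that serializes only what the client needs; the field names are assumptions:

```php
use Illuminate\Http\Resources\Json\JsonResource;

class OrderResource extends JsonResource
{
    public function toArray($request): array
    {
        return [
            'id'    => $this->id,
            'total' => $this->total,
            // whenLoaded() only serializes the relation if it was eager
            // loaded, so the resource can never trigger a lazy load.
            'customer' => $this->whenLoaded('customer', fn () => [
                'id'   => $this->customer->id,
                'name' => $this->customer->name,
            ]),
        ];
    }
}
```

Pairing whenLoaded() in resources with constrained with() in queries keeps both the query count and the payload shape deterministic.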

9) Queues and asynchronous work

If an endpoint triggers heavy recalculation, report generation, or third party calls, I move that work to queues. I use idempotency keys to prevent duplicates upon retry, and I surface progress through job status endpoints or events. The web request returns quickly and database time remains bounded.
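One way to sketch idempotent queue work is Laravel's unique-job contract, where the unique id acts as the idempotency key (the job name and payload are illustrative assumptions):

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;

class RebuildStoreReport implements ShouldQueue, ShouldBeUnique
{
    use Queueable;

    public function __construct(public int $storeId) {}

    // Duplicate dispatches with the same id collapse into one pending job,
    // so retries and double-clicks cannot run the rebuild twice.
    public function uniqueId(): string
    {
        return "store-report:{$this->storeId}";
    }

    public function handle(): void
    {
        // heavy recalculation runs here, off the request path
    }
}
```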

10) Guardrails and governance

I add automated tests that assert query counts for critical endpoints using query-count assertion helpers, and I run periodic slow query sampling in staging. Performance budgets exist for bundle size on the client and for query time on the server. I document indexes and cache keys so future contributors do not introduce regressions. Every meaningful optimization ships with a before and after measurement and a rollback.
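A sketch of such a guardrail test using a DB::listen counter; the /api/orders endpoint and the budget of 3 queries are assumptions for illustration:

```php
use Illuminate\Support\Facades\DB;
use Tests\TestCase;

class OrderIndexQueryBudgetTest extends TestCase
{
    public function test_order_index_stays_within_its_query_budget(): void
    {
        // Count every query the endpoint executes during the request.
        $count = 0;
        DB::listen(function () use (&$count) {
            $count++;
        });

        $this->getJson('/api/orders')->assertOk();

        // Budget: one query for orders plus one per eager-loaded relation.
        // An N+1 regression scales with row count and blows this assertion.
        $this->assertLessThanOrEqual(3, $count);
    }
}
```

Because the budget is independent of how many rows the factory seeds, the test fails the moment a lazy load sneaks back into the endpoint.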

With this approach, Laravel database interactions become predictable: N+1 disappears, Eloquent emits efficient SQL, indexes serve the exact query shapes, and caching reduces repeated work without risking staleness or stampedes.

Table

| Area | Strategy | Laravel Mechanism | Outcome |
| --- | --- | --- | --- |
| N+1 Prevention | Precise eager loading | with(), loadMissing(), constrained relations | Fewer queries, stable latency |
| Set-Based Work | Replace loops | whereIn, aggregates, update, upsert, chunk() | Less CPU, fewer round trips |
| Query Shaping | Emit efficient SQL | Scopes, projections, cursor pagination, indexed orderBy | Lower I/O, no deep offsets |
| Caching | Safe and targeted | Cache tags, versioned keys, stampede locks | High hit ratio, fresh data |
| Indexing | Match filter and sort | Composite indexes via migrations, EXPLAIN verification | Index-only reads, faster sorts |
| Writes | Keep critical sections small | Transactions, optimistic checks, atomic counters | Lower contention, fewer locks |
| Payloads | Send only what is needed | API Resources, only() and makeHidden() | Smaller responses, faster render |
| Proof | Measure and enforce | Telescope, query logs, load tests, query-count asserts | Verified, repeatable gains |

Common Mistakes

  • Rendering lists that trigger N+1 by accessing relations inside Blade loops.
  • Eager loading entire relation graphs instead of selecting exactly what the response needs.
  • Using deep offset pagination on large tables rather than cursor pagination.
  • Building many overlapping indexes that slow writes and still miss the exact sort.
  • Caching without tags or version keys, causing stale or inconsistent data after updates.
  • Performing per-row queries in jobs rather than batching with whereIn or chunk().
  • Wrapping long, chatty logic inside transactions, increasing lock time and deadlocks.
  • Shipping large unfiltered payloads that waste bandwidth and memory, then blaming the database.

Sample Answers

Junior:
“I prevent N+1 by using with() on the exact relations I need and I constrain selected columns. I switch list endpoints to cursor pagination and verify queries with Telescope. I add cache with tags for expensive results and invalidate by tag on updates. I add proper indexes that match filters and sorting.”

Mid:
“I convert loops into set-based queries, for example whereIn and bulk upsert. I design composite indexes that satisfy both where and orderBy, and I confirm with EXPLAIN. I cache aggregates with versioned keys and a short lock to prevent stampedes. I keep transactions small and stream large responses.”

Senior:
“I run with budgets and proofs. I enforce eager loading policies, assert query counts in tests, and keep Eloquent projections tight. I maintain an index catalog aligned to top query shapes and prune overlaps. Cache hierarchies use tags and version keys. Heavy work moves to queues with idempotency. Every change ships with before and after timings and a safe rollback.”

Evaluation Criteria

A strong answer demonstrates a complete plan for Laravel database performance:

  • Prevent N+1 with targeted eager loading and constrained projections.
  • Replace per-row loops with set-based operations and bulk writes.
  • Align indexes to filters and sort order and verify with EXPLAIN.
  • Use caching with tags, versioned keys, and stampede protection.
  • Keep transactions short, apply optimistic checks, and use cursor pagination.
  • Reduce payload size with Resources and selective fields.
  • Prove improvements with Telescope, query listeners, query-count assertions, and realistic load tests.

Red flags include relying on lazy loading, indiscriminate eager loading, index sprawl, naive caching, and a lack of measurement.

Preparation Tips

  • Instrument a slow endpoint with Telescope and list all queries and bindings.
  • Remove N+1 by adding with() and narrowing columns; write a test that asserts the query count.
  • Replace offset pagination with cursor pagination; compare latency at high page numbers.
  • Design a composite index that satisfies both filter and sort; validate with EXPLAIN.
  • Add a cached aggregate using tags and a short lock to prevent stampedes; simulate concurrent requests.
  • Convert a loop of per-row updates into a single upsert; verify row counts.
  • Stream a large export with cursor() or a response stream to cap memory.
  • Capture before and after timings and document the rollback procedure.

Real-world Context

An online marketplace removed N+1 from its product grid by eager loading prices and inventory with constrained selections; request count fell by more than ninety percent and median latency improved significantly. A subscription service replaced offset pagination with cursor pagination on a time ordered index and stabilized tail latency during heavy browsing. A reporting module moved per-row updates into a single upsert and cut write time drastically. A dashboard cached high-cardinality aggregates with versioned keys and stampede protection; database load during campaigns dropped while freshness remained acceptable. Each change was validated by Telescope traces, EXPLAIN plans, and load test comparisons.

Key Takeaways

  • Eliminate N+1 with targeted eager loading and constrained columns.
  • Favor set-based queries, projections, and bulk writes over loops.
  • Design composite indexes that align with filters and sort order.
  • Add caching with tags, versioned keys, and stampede control.
  • Keep transactions small, paginate with cursors, and verify every win with measurements.

Practice Exercise

Scenario:
Your orders endpoint lists recent orders with customer, totals, and last payment status. It suffers from N+1, slow deep pagination, and periodic spikes during promotions.

Tasks:

  1. Record a baseline with Telescope: query count, total time, and the slowest queries.
  2. Add with(['customer:id,name','payments:id,order_id,status,created_at']) and restrict the parent select() to the columns the API needs. Prove that the query count is constant.
  3. Replace offset pagination with cursor pagination using an indexed created_at descending order.
  4. Create a composite index (store_id, created_at desc) that satisfies both filter and sort; confirm with EXPLAIN.
  5. Introduce a cached aggregate for totals per store with tags store:{id} and a short lock to prevent stampedes; invalidate by tag on relevant updates.
  6. Convert a per-order status reconciliation loop into a single whereIn query and an upsert.
  7. Stream large exports with cursor() to cap memory.
  8. Rerun measurements and a synthetic load test reflecting promotion traffic. Capture before and after query counts, latency percentiles, and database CPU. Prepare a rollback by toggling caching and pagination changes behind configuration flags.

Deliverable:
A concise report and code changes that demonstrate improved Laravel database interactions through Eloquent ORM performance tuning, safe caching, correct indexing, and removal of N+1 issues, all validated by reproducible measurements.
