Full-Stack Assessment: Performance Optimization & Live Debugging
Product engineer (candidate) — a timed take-home using Laravel and Next.js, a live technical session where a subtle bug surfaced, and the work I did afterward to reproduce, fix, and explain it clearly.
I am keeping this company-anonymous: no employer name, no verbatim brief, and no proprietary snippets. What follows is the engineering story: how I approached aggregation performance and UI stability, what broke under concurrency in the live session, and how I closed the loop with a written follow-up.
Aggregation cost, UI churn, and a live race
The exercise required paginated data backed by a third-party REST API. A naive implementation trends toward N+1-style latency when every list row needs its own detail fetch. On the client, infinite scrolling plus image loading causes layout shift if the UI does not reserve space. During the live session, stress on concurrent fetches exposed a bug: responses completed out of order, the merge step assumed stable ordering, and SWR revalidation briefly fought in-flight requests, producing inconsistent rows until a refresh.
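The bug class reduces to merging by completion order instead of request order. A minimal sketch (the row type, field names, and data are illustrative, not from the actual exercise):

```typescript
// Illustrative row type; field names are assumptions, not the real schema.
type Row = { id: number; detail: string };

// Buggy merge: pushes each detail response as it *completes*, silently
// assuming responses arrive in the same order they were requested.
function mergeByCompletion(completionOrder: number[], fetched: Row[]): Row[] {
  const merged: Row[] = [];
  for (const i of completionOrder) {
    merged.push(fetched[i]); // request order is lost here
  }
  return merged;
}

const fetched: Row[] = [
  { id: 0, detail: "alpha" },
  { id: 1, detail: "beta" },
  { id: 2, detail: "gamma" },
];

// Under concurrency, request 2 can finish first:
const rows = mergeByCompletion([2, 0, 1], fetched);
// rows is now [id 2, id 0, id 1]: the UI renders rows out of order
```

Under light load the completion order happens to match the request order, which is exactly why the assumption survived until the live stress test.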
Pooling, skeleton-first UI, then a precise merge fix
On the server I replaced sequential per-row HTTP calls with concurrent requests via Laravel's HTTP client pooling so detail retrieval time tracks wall-clock parallelism instead of the sum of every round trip. On the client I used SWR for caching and deduplication, Intersection Observer to trigger the next page only when the sentinel enters view, and skeleton placeholders so the list height stays stable while data resolves. After the interview I reproduced the failure locally, fixed it by mapping pooled responses to explicit list indices before merge and tightening SWR keys so revalidation cannot clobber in-flight page fetches.
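The fix can be sketched as follows; the types and the SWR key helper are illustrative assumptions, not the exercise's actual code. Each pooled response carries the index it was requested for, the merge writes into that slot so completion order is irrelevant, and the SWR key includes the page number so revalidating one page cannot overwrite another page's in-flight result.

```typescript
type Row = { id: number; detail: string };

// Each pooled response is tagged with the list index it was requested for.
type IndexedResponse = { index: number; row: Row };

// Index-keyed merge: writes each response into its explicit slot, so the
// merged list matches request order no matter when responses complete.
function mergeByIndex(responses: IndexedResponse[]): Row[] {
  const merged: Row[] = new Array(responses.length);
  for (const { index, row } of responses) {
    merged[index] = row;
  }
  return merged;
}

// Page-scoped SWR key (assumed shape): revalidation keyed on page 1 can
// no longer clobber an in-flight fetch keyed on page 2.
const pageKey = (endpoint: string, page: number) =>
  ["list", endpoint, page] as const;

// Responses arriving out of order still land in the right slots:
const rows = mergeByIndex([
  { index: 2, row: { id: 2, detail: "gamma" } },
  { index: 0, row: { id: 0, detail: "alpha" } },
  { index: 1, row: { id: 1, detail: "beta" } },
]);
// rows[0].id === 0, rows[1].id === 1, rows[2].id === 2
```

The design choice is that ordering is carried as explicit data rather than implied by arrival time, which also makes partial failures easy to represent (a missing index is a visible hole, not a silently shifted list).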
Impact
The take-home already showed I could ship a coherent full-stack slice under time pressure. The follow-up showed something rarer: I can own uncertainty, trace a cross-layer bug, and explain the fix in plain language. I also documented where I used AI-assisted tooling (boilerplate, test scaffolding, wording)—never as a substitute for validating behavior against the network tab and server logs.
I sent a short email summarizing what I observed under concurrent fetches, the ordering assumption that made the merge step fragile, and the exact guardrails I added (index-keyed merge + safer SWR keys). I included a minimal repro path and what I would monitor in production (pool timeouts, partial failures, and cache invalidation). The goal was not to re-litigate the interview—it was to leave a clear, respectful paper trail that the panel could skim in two minutes.
Tools (used transparently)
Editor with inline assistance for repetitive typing, HTTP client docs for pool semantics, and browser devtools for waterfall verification. I treated AI like a fast scratchpad: suggestions still had to pass curl, logs, and UI behavior.
Technical architecture
One sequence view: cache-first reads, pooled backend aggregation, the ordering bug class I hit live, and how the UI stays stable while paginating.
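As a back-of-the-envelope model of why the pooled aggregation step pays off (illustrative numbers, sketched in TypeScript rather than Laravel's PHP): sequential per-row calls cost the sum of every round trip, while pooled concurrent calls cost roughly the slowest single round trip, which is the same property `Promise.all` gives you in Node.

```typescript
// Illustrative round-trip latencies for four detail fetches, in ms.
const latenciesMs = [120, 80, 200, 95];

// Sequential per-row calls: total time is the sum of every round trip.
const sequentialMs = latenciesMs.reduce((sum, ms) => sum + ms, 0); // 495

// Pooled concurrent calls: total time tracks the slowest round trip.
const pooledMs = Math.max(...latenciesMs); // 200
```

The gap widens with list length: sequential cost grows linearly with row count, while pooled cost stays pinned near the worst single latency (until connection limits or pool timeouts kick in, which is why those are on the production-monitoring list above).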