
Fix Checkout Latency with Three Competing Strategies

Race 3 parallel Devin sessions against a slow checkout API — each tries a different optimization, then the best approach ships.
Author: Cognition
Category: Devin Optimization
Features: Advanced
1

Define the problem and success criteria

Your checkout API (POST /api/checkout) has a p99 latency of 1.8 seconds. Users are abandoning carts, and your SLA target is 400ms. There are multiple valid ways to fix this: caching, query optimization, async processing, connection pooling. You don't know which will work best until you try them, and trying them sequentially means days of waiting.

Instead, use Advanced Devin to launch 3 sessions in parallel, each exploring a different strategy. After all 3 finish, Advanced Devin compares the results and ships the winner, or combines the best parts of each into a single PR.

To get started, select Advanced from the agent picker on the Devin homepage, then click the Start Batch Sessions tab.
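A reproducible baseline number makes the three sessions comparable. Here is a minimal sketch of how you might measure p99 yourself; the URL and request payload are placeholders for illustration, not part of this guide's product.

```typescript
// Compute the p-th percentile (0-100) from a list of latency samples.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Fire N sequential requests at the checkout endpoint and report p99.
// The URL and body are assumptions -- point them at your own API.
async function benchCheckout(url: string, n = 200): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < n; i++) {
    const start = performance.now();
    await fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ cartId: "bench-cart" }),
    });
    samples.push(performance.now() - start);
  }
  return percentile(samples, 99);
}
```

Run the same measurement before and after each session's change so all three results are judged against identical numbers.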
2

Write a prompt that steers each session toward a different fix

The value of running 3 sessions depends on each one exploring a genuinely different approach. Write your prompt to encourage divergence: suggest specific strategies and define what "best" means so the results are directly comparable.

Tips for a good multi-strategy prompt:
  • Define “best” with ranked criteria. Listing comparison dimensions — latency, error rate, complexity, consistency — prevents Devin from defaulting to raw speed alone.
  • Suggest specific strategies. Options like “caching, query rewriting, async processing” nudge each session toward a different path.
  • Include a benchmark command. Each session needs a reproducible way to measure its own result — npm run bench, k6 run load-test.js, or a simple curl loop.
  • Point to the code. A file path like src/routes/checkout.ts ensures all 3 sessions start from the same place.
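Putting those tips together, a prompt might look like the following. The file path, benchmark command, and numbers come from this example problem; adapt them to your own codebase.

```
Our checkout endpoint (POST /api/checkout, implemented in
src/routes/checkout.ts) has a p99 of 1.8s; the SLA target is 400ms.

Launch 3 sessions, each trying a different strategy: for example
response caching, query rewriting with connection pooling, or async
processing. Measure p99 with `npm run bench` before and after.

Rank results by: (1) p99 under 400ms, (2) error rate unchanged,
(3) implementation complexity, (4) consistency tradeoffs.
```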
3

Compare results and pick the winner

Once all 3 sessions complete, Advanced Devin reviews their work side-by-side against your criteria (strategies used, benchmark numbers, tradeoffs) and either picks the best or synthesizes a combined solution into a final PR.

Here's what that comparison looks like for the checkout latency problem:
Session 1 — Redis response caching
  Strategy:   Cache serialized cart + inventory lookups in Redis with
              30s TTL, bypass DB for repeat requests
  p99:        1.8s -> 320ms  (PASS — 82% reduction)
  Errors:     No change
  Complexity: +1 dependency (ioredis), 2 new files
  Tradeoff:   Stale inventory data for up to 30s; 40MB Redis memory

Session 2 — Query optimization + connection pooling
  Strategy:   Replaced N+1 queries with a single JOIN, added PgBouncer
              connection pool (25 connections)
  p99:        1.8s -> 580ms  (FAIL — still above 400ms)
  Errors:     No change
  Complexity: 0 new dependencies, cleaner queries
  Tradeoff:   None significant — lower DB load overall

Session 3 — Async order processing
  Strategy:   Moved payment processing and email to a background queue
              (BullMQ), return 202 immediately after inventory check
  p99:        1.8s -> 190ms  (PASS — 89% reduction)
  Errors:     No change
  Complexity: +1 dependency (bullmq), 3 new files, webhook handler
  Tradeoff:   Checkout becomes eventually consistent; needs webhook
              for payment confirmation

Verdict: Sessions 1 and 3 both pass the 400ms target. Session 2's
query fixes are valuable but insufficient alone.

Final PR: Combined Session 2's query optimization (no cost, strictly
better) with Session 3's async processing. Payment + email moved to
queue, N+1 queries fixed. Final p99: 150ms. PR #412 opened.
You can review the individual session PRs before Advanced Devin creates the combined one. If you prefer one approach outright, just tell Devin — “go with Session 3’s approach, skip the combination.”
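The shape of the combined winner can be sketched as follows. This is an illustrative outline only: a tiny in-memory queue stands in for BullMQ, and `checkInventory` is a hypothetical placeholder for the optimized single-JOIN query.

```typescript
// In-memory stand-in for a BullMQ queue: jobs run off the request
// path, so the handler can respond before payment/email finish.
type Job = { orderId: string };
const queue: Job[] = [];
const processed: string[] = [];

function enqueue(job: Job): void {
  queue.push(job);
  setTimeout(() => {
    const next = queue.shift();
    if (next) {
      // In the real change this would charge the card and send the
      // confirmation email, then confirm via a payment webhook.
      processed.push(`processed ${next.orderId}`);
    }
  }, 0);
}

// Checkout handler shape: do the fast, consistency-critical work
// (inventory check) inline, defer the slow work, and return 202.
async function handleCheckout(
  cartId: string
): Promise<{ status: number; orderId: string }> {
  const inStock = await checkInventory(cartId); // single JOIN, no N+1
  if (!inStock) return { status: 409, orderId: "" };
  const orderId = `order-${cartId}`;
  enqueue({ orderId }); // payment + email happen in the background
  return { status: 202, orderId }; // accepted, eventually consistent
}

// Placeholder for the optimized inventory query (assumption).
async function checkInventory(_cartId: string): Promise<boolean> {
  return true;
}
```

The 202 response is what makes checkout eventually consistent: the client learns the order was accepted, and the payment webhook later confirms it succeeded.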
4

When to race 3 strategies on a single problem

Good fit — multiple valid approaches exist:
  • Performance bottlenecks where caching, query tuning, and architecture changes could all work
  • Architecture decisions with real tradeoffs (monolith extraction, state management redesign)
  • Algorithm selection for a data-heavy problem (different indexing, ranking, or ML approaches)
Bad fit — the solution is obvious:
  • Bug fixes with a clear root cause
  • Adding a standard CRUD endpoint
  • Updating dependencies or config files
This pattern uses 3x the ACUs of a single session. Reserve it for problems where you'd otherwise spend days trying approaches sequentially. For straightforward tasks, a single Devin session is faster and cheaper.

You can also trigger batch sessions via the API by setting advanced_mode to batch, which is useful for integrating into CI pipelines that automatically race multiple fixes against a performance regression. If you want Devin to run fully autonomously without waiting for your approval on proposals, enable the bypass permissions flag so sessions auto-approve and keep moving.
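As a sketch of what a CI integration might send, the payload below sets advanced_mode to batch as described above. The endpoint URL and the other field names are assumptions for illustration; check the Devin API reference for the exact schema before using this.

```typescript
// Hypothetical request body for launching batch Advanced sessions.
// Only `advanced_mode: "batch"` comes from this guide; the endpoint
// and remaining fields are illustrative -- verify against the docs.
const payload = {
  advanced_mode: "batch",
  prompt:
    "Fix p99 latency of POST /api/checkout (currently 1.8s, target 400ms). " +
    "Try caching, query optimization, and async processing in parallel.",
};

async function triggerBatch(apiKey: string): Promise<Response> {
  // Endpoint is an assumption for illustration.
  return fetch("https://api.devin.ai/v1/sessions", {
    method: "POST",
    headers: {
      authorization: `Bearer ${apiKey}`,
      "content-type": "application/json",
    },
    body: JSON.stringify(payload),
  });
}
```

In a CI pipeline, a performance-regression check would call this after a failed latency gate, then poll the resulting sessions for their PRs.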