High-Level Architecture — End-to-end data flow diagram
25-45 min — Web Architecture Detail — Frontend deep dive (biggest section!)
45-55 min — API Contracts — Endpoints, idempotency, error formats
55-60 min — Wrap-up — Failure scenarios, testing, questions
Phase 1: Requirements Gathering 10 min
This sets the tone for the whole interview. Be structured. Don't rush into drawing.
What are Functional Requirements?
What the system does. These describe the features and behaviors a user can see and interact with. Think of them as the "user story" answers.
Example: "A user can send money to another user" / "The system shows a transaction history" / "The system converts currencies in real-time."
If you removed a functional requirement, the user would notice something missing.
What are Non-Functional Requirements?
How the system behaves. These describe qualities and constraints that users feel but don't directly interact with. They're the "-ilities": scalability, reliability, security, performance.
Example: "The system handles 10,000 requests/second" / "Pages load in under 2 seconds" / "Data is encrypted at rest and in transit" / "99.9% uptime."
If you violated a non-functional requirement, the system would still work but would be slow, insecure, or crash under load.
Functional Requirements
What the system does
Who are the users? (consumer, business, admin?)
What are the core user flows?
What happens on success / failure?
Real-time or eventual consistency?
Multi-currency support?
Notification requirements? (email, push, in-app)
Transaction history / audit trail?
Search and filtering?
Non-Functional Requirements
How the system behaves
Scale: user count, TPS (transactions per second)
Latency: acceptable response times
Availability: uptime SLA (99.9%?)
Security: auth, encryption, PCI compliance
Performance: Core Web Vitals targets
Accessibility: WCAG compliance level
Internationalization: languages, currencies, RTL
Offline support / degraded mode?
Cost efficiency
Developer experience
What does "Real-time vs Eventual Consistency" mean?
Real-time (strong) consistency: When you send money, the balance updates instantly everywhere. If you check on another device 1 millisecond later, you see the new balance. This is slower because all parts of the system must agree before responding.
Eventual consistency: After you send money, there's a brief delay (milliseconds to seconds) before all parts of the system reflect the new state. Faster and more scalable, but you might briefly see stale data.
In a payment system you typically want both: Strong consistency for the transaction itself (don't allow overdraft — the balance must be accurate at the moment of transfer). Eventual consistency is fine for displaying transaction history (ok if it appears 1-2 seconds later on another screen).
This is a great question to ask the interviewer: "For the core transfer operation, do we need strong consistency, or is eventual consistency acceptable?" It shows you understand the trade-off between speed/scalability and data accuracy.
Questions to Always Ask
"Who is the primary user of this system?"
"What's the expected scale? Users / requests per second?"
"Web only or also mobile?"
"What's the most critical user flow to focus on?"
"Are there existing systems we integrate with?"
"What's more important: consistency or availability?"
"Any regulatory/compliance constraints?"
"Do we need real-time updates?"
"What geographies do we serve?"
Sample Requirements Conversation
This is how the first 10 minutes should feel. Read it out loud to practice the rhythm.
Money Transfer
Flight Booking
Streaming Dashboard
Interviewer: "Design a money transfer system for Revolut Business."
You: "Great. Before I start drawing, I'd like to understand the scope. Who's the primary user? A business admin sending bulk payments, or individual employees making one-off transfers?"
Interviewer: "Individual employees sending money to suppliers or other businesses."
You: "Got it. Is this international? Should we handle multi-currency with exchange rates?"
Interviewer: "Yes, international transfers with currency conversion."
You: "Understood. For scale — are we talking thousands of transfers per day, or millions? This affects whether we need message queues and async processing."
Interviewer: "Let's say tens of thousands per day."
You: "OK. Real-time question: after a user submits a transfer, should they see the status update live (pending → completed), or is it fine to poll?"
Interviewer: "Real-time would be ideal."
You: "Perfect. Let me summarize the requirements before we move on:"
Functional: User can create international transfers with currency conversion, view transfer status in real-time, see transaction history with filters, handle notifications on completion/failure.
Non-functional: Tens of thousands of daily transfers, real-time status via WebSocket, strong consistency for the transaction itself, ACID guarantees (no double-charging), PCI compliance, multi-region deployment for global users, sub-second response for API calls.
You: "Does this capture it, or should I adjust the scope?"
Interviewer: "Design a flight and hotel booking system."
You: "Interesting. A few questions to scope this right. Is this within the Revolut app — so users pay with their Revolut balance? Or standalone?"
Interviewer: "Within Revolut. Users pay with their balance."
You: "Search: Do we aggregate from multiple providers (like Skyscanner), or a single provider API?"
Interviewer: "Multiple providers, we aggregate results."
You: "Booking hold: When a user selects a flight, do we need to hold/reserve it for X minutes while they complete payment? Prices and availability change fast."
Interviewer: "Yes, a temporary hold makes sense."
You: "Offline / persistence: Should search results be cached? If the user navigates away and comes back, do we keep their search?"
Interviewer: "Yes, cache recent searches."
You: "OK, let me summarize:"
Functional: Search flights/hotels from multiple providers, filter/sort results, select and hold a booking temporarily, multi-step checkout (passengers → payment → confirm), booking confirmation + e-ticket, booking history.
Non-functional: Aggregated search across providers (latency-sensitive — need parallel requests + timeouts), cached search results (short TTL, prices change), reservation expiry with countdown timer, payment via Revolut balance (idempotency!), high availability (users expect booking to work 24/7).
Interviewer: "Design a real-time analytics dashboard for business transactions."
You: "Got it. Who's the audience? A business owner looking at their own transactions, or an internal Revolut operations team monitoring all transactions?"
Interviewer: "Business owner — their own company's transaction data."
You: "What kind of visualizations? Time series charts (spend over time), aggregate cards (total spent this month), breakdowns (by category/currency)?"
Interviewer: "All of those. Time series is the most important."
You: "How real-time? Should a new transaction appear on the chart within seconds, or is a 5-minute refresh acceptable?"
Interviewer: "Within seconds for the latest data."
You: "Data volume: How far back? Live data + last 30 days? Or years of history?"
Interviewer: "Live data plus last 12 months. Older data can be slower to load."
You: "OK, let me summarize:"
Functional: Real-time time series chart (spend over time), aggregate summary cards, breakdown by category/currency/recipient, date range picker, export to CSV.
Non-functional: Sub-second updates for live data (WebSocket/SSE), historical data loads in < 2s (pre-aggregated), handle large datasets efficiently (data windowing, virtualization), responsive for different screen sizes, graceful degradation when WebSocket disconnects (fall back to polling).
Phase 2: High-Level Architecture 10 min
Draw the full system. Show how data flows from database through backend to the client. The browser has two separate paths: one to the CDN for static assets, one to the API Gateway for data.
Money Transfer
Flight/Hotel Booking
Streaming Dashboard
Drawing Tips
High-Level Architecture — Revolut Business International Money Transfer
Note on Load Balancer vs API Gateway: In this diagram they are merged into one box. Modern API Gateways (Kong, AWS API Gateway, Nginx) include load balancing built in. For simplicity in the interview, you can keep them as one. If asked, mention: "In production at scale, the load balancer and API gateway might be separate layers — the LB distributes traffic across multiple API gateway instances."
Know Why Each Component Exists
CDN — Serves static assets (JS bundles, CSS, images) from edge servers close to the user. Separate path from API calls — browser fetches these directly
API Gateway — Single entry point for all API calls: handles auth, rate limiting, routing, logging. Includes load balancing to distribute across backend instances
Microservices — Separation of concerns, independent scaling & deployment
PostgreSQL — ACID compliance for financial data, strong consistency
Payment-specific additions: Idempotency layer at the API Gateway or service level. Transaction ledger with ACID guarantees. Payment processor integration (internal or Stripe-like).
High-Level Architecture — Flight/Hotel Booking System
Key differences from Money Transfer:
Search Aggregator Service — fans out to multiple provider APIs in parallel, merges results, applies ranking. Has strict timeouts (if provider doesn't respond in 3s, skip it).
Booking Service with reservation hold — creates a temporary lock on the selected flight/room. Expires after X minutes if not paid. Uses a scheduled job or TTL to auto-release.
Search Cache (Redis, short TTL) — search results cached for 2-5 minutes. Prices change fast so longer caching = stale data.
Provider APIs are external — need circuit breakers, timeouts, and fallbacks per provider.
Read-heavy, not write-heavy — dashboard mostly reads data. The writes happen elsewhere (transaction processing). Focus on query performance and caching.
Stream Processor — consumes transaction events from Kafka/SQS, computes real-time aggregations (rolling sums, counts), pushes updates via WebSocket.
Two data paths: Historical data from the analytics DB (pre-aggregated, fast reads). Live data streamed directly via WebSocket from the stream processor.
Analytics DB — time-series optimized (ClickHouse, TimescaleDB, or pre-aggregated tables in Postgres). NOT the same DB as the transactional one.
How to Draw This in 10 Minutes on Excalidraw
Start with the user — draw "Browser" on the left. Everything flows from there.
Two paths out of browser: one to CDN (top, for static files), one to API Gateway (center, for data).
Work top-to-bottom: Client → Gateway → Services → Data stores. Don't jump around.
Max 3-4 services — only draw services relevant to the problem. Not every microservice, just the important ones.
Label every arrow — "HTTPS", "WebSocket", "async events". Unlabeled arrows are confusing.
Use color in Excalidraw — one color for sync requests, another for async/events. Makes it readable.
Talk while drawing — "I'm putting the CDN here because static assets should be served from edge locations..." Don't draw silently.
Don't draw what you won't explain — every box should have a "why". If you can't explain why it's there, remove it.
Phase 3: Frontend Architecture Deep Dive 20 min — BIGGEST SECTION
Zoom into the client box from the high-level diagram. This is where you spend the most time.
Money Transfer
Flight Booking
Dashboard
Frontend Architecture — Revolut Business Web App (Money Transfer)
Frontend Architecture — Flight/Hotel Booking
Key frontend differences for booking:
Search is the heaviest feature — complex filters, sort options, infinite scroll or paginated results. Need debounced inputs and aggressive caching of search results.
Reservation timer — countdown UI component ("Your booking is held for 14:32"). On expiry, redirect back to search. Synced with server-side TTL.
Multi-step wizard state — search → select → passengers → payment → confirm. Need to persist wizard state across steps (URL params or client state). Handle back-button correctly.
Price volatility — price shown in search might differ at checkout. Need a "price changed" confirmation step.
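The debounced search input mentioned above can be sketched as a small helper. This is a minimal, framework-agnostic version; the timer functions are injectable so the logic is deterministic in tests, and in the browser you would just rely on the setTimeout/clearTimeout defaults:

```typescript
type TimerId = unknown;

interface Timers {
  set(cb: () => void, ms: number): TimerId;
  clear(id: TimerId): void;
}

const defaultTimers: Timers = {
  set: (cb, ms) => setTimeout(cb, ms),
  clear: (id) => clearTimeout(id as ReturnType<typeof setTimeout>),
};

// Delay `fn` until `ms` of inactivity: each new call cancels the
// previous pending one, so only the last keystroke triggers a search.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  ms: number,
  timers: Timers = defaultTimers,
): (...args: A) => void {
  let id: TimerId | undefined;
  return (...args) => {
    if (id !== undefined) timers.clear(id);
    id = timers.set(() => fn(...args), ms);
  };
}
```

Usage: `const onType = debounce(runSearch, 300);` — wire `onType` to the input's change event so a search fires only after the user pauses typing.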
Chart rendering is the core — D3/Recharts/Visx for time series. Need efficient re-renders when new data points arrive via WebSocket. Use canvas for very large datasets.
Data windowing — don't load all 12 months into memory. Load visible range + buffer. Fetch more on scroll/zoom.
WebSocket state management — connection status indicator, auto-reconnect with backoff, merge streamed data into cached historical data.
Less write-heavy — mostly reads + filters. State is simpler. But visualization performance is critical.
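The data-windowing idea above can be sketched as two pure functions, assuming the chart works with unix-ms timestamps (the names and shapes are illustrative, not a specific charting library's API):

```typescript
interface Range { from: number; to: number } // unix ms timestamps

// Given the visible time window, compute the range to fetch: the window
// plus a buffer on each side, clamped to the available history. Fetching
// a buffer means small pans/zooms don't trigger a new request.
function windowToFetch(visible: Range, bufferMs: number, history: Range): Range {
  return {
    from: Math.max(history.from, visible.from - bufferMs),
    to: Math.min(history.to, visible.to + bufferMs),
  };
}

// A new fetch is needed only when the visible window escapes what we hold.
function needsFetch(visible: Range, loaded: Range | null): boolean {
  return loaded === null || visible.from < loaded.from || visible.to > loaded.to;
}
```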
State Management
Loading States
Error Handling
Auth Flow
Performance
Dev Experience
Why Separate API Data from UI State?
In a client-side SPA, all state lives on the client. But it's useful to separate it by where it originates:
Remote / API data = data fetched from the backend (account balance, transactions, exchange rates). Needs caching, refresh strategies, and invalidation after mutations
A data-fetching library (e.g. React Query, SWR) helps with: caching, background refetching, cache invalidation, retry, stale-while-revalidate — but you can also build a simpler fetch + cache layer yourself
UI state = state that only exists on the client (is modal open? selected tab, form draft). Managed with component state, context, or a state library (e.g. Zustand, Redux)
Mixing them in one store leads to stale data, unnecessary re-renders, and complex sync logic
Example: account balance = remote data (refetch after transfer). "Is confirm modal open?" = UI state
// Remote data with a caching layer (e.g. React Query, SWR, or custom)
const { data: balance, isLoading } = useQuery({
  queryKey: ['balance', accountId],
  queryFn: () => api.getBalance(accountId),
  staleTime: 30_000, // consider fresh for 30s
  refetchOnWindowFocus: true,
});

// After a successful transfer, invalidate cached data so it refetches
const mutation = useMutation({
  mutationFn: api.createTransfer,
  onSuccess: () => {
    queryClient.invalidateQueries({ queryKey: ['balance'] });
    queryClient.invalidateQueries({ queryKey: ['transactions'] });
  },
});
Loading States — They Specifically Asked About This!
Idempotency-Key: The client generates a UUID before sending the request. If the network fails and the client retries with the same key, the server recognizes it and returns the cached response instead of processing a second payment. New transfer intent = new key. Retry of same transfer = same key.
GET /api/v1/transfers/:id — Get transfer details & status
No request body. Used to poll transfer status after creation.
No Idempotency-Key needed — GET is naturally idempotent (reading data never changes anything).
Minor units for money: amount in cents/pence (10000 = 100.00 EUR) to avoid floating-point rounding bugs
ISO 8601 dates with timezone (2026-04-08T10:00:00Z) — no ambiguity across regions
HTTP status codes used correctly: 201 Created, 400 Bad Request, 401 Unauthorized, 409 Conflict (duplicate idempotency key with different body), 429 Too Many Requests
RESTful resource naming: nouns, plural (/transfers not /transfer, /accounts not /account)
HATEOAS (bonus): include links to related resources in responses for discoverability
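The minor-units convention can be shown with a tiny formatting helper. This is a sketch only (in production you would likely pair integer minor units with `Intl.NumberFormat` for locale-aware output); `minorPerMajor` is 100 for EUR/USD/GBP, but some currencies differ (JPY uses 1):

```typescript
// Money as integer minor units (cents/pence) avoids float rounding bugs:
// 0.1 + 0.2 !== 0.3 in IEEE-754, but 10 + 20 === 30 always.
function formatAmount(minor: number, currency: string, minorPerMajor = 100): string {
  const major = Math.trunc(minor / minorPerMajor);
  const frac = Math.abs(minor % minorPerMajor);
  const digits = String(minorPerMajor).length - 1; // 100 -> 2 decimal places
  return digits === 0
    ? `${major} ${currency}`
    : `${major}.${String(frac).padStart(digits, "0")} ${currency}`;
}
```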
Rate Limiting Headers — What Are They?
These are RESPONSE headers — the server sends them back to you
Rate limiting protects the server from being overwhelmed. The server tracks how many requests each client makes and includes these headers in every response so the client knows where it stands:
X-RateLimit-Limit: 100 — "You're allowed 100 requests per time window"
X-RateLimit-Remaining: 97 — "You have 97 requests left in this window"
X-RateLimit-Reset: 1712570400 — "Your limit resets at this Unix timestamp"
When remaining hits 0: The server responds with 429 Too Many Requests and a Retry-After: 30 header telling the client to wait 30 seconds.
Frontend should: Read these headers, show a user-friendly message if rate limited ("Please wait a moment before trying again"), and implement exponential backoff with jitter for automatic retries.
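Reading these headers on the client can be sketched like this. The `X-RateLimit-*` names are the common convention used above (some APIs use the standardized `RateLimit-*` names instead), and the header keys are assumed to be lowercased, as the Fetch API does:

```typescript
interface RateLimitInfo {
  limit: number;
  remaining: number;
  resetAt: Date;        // when the window resets
  retryAfterMs: number; // 0 while requests are still allowed
}

// Parse rate-limit response headers and decide how long to wait.
function parseRateLimit(headers: Record<string, string>, nowMs: number): RateLimitInfo {
  const limit = Number(headers["x-ratelimit-limit"] ?? 0);
  const remaining = Number(headers["x-ratelimit-remaining"] ?? 0);
  const resetSec = Number(headers["x-ratelimit-reset"] ?? 0); // unix seconds
  const retryAfterMs = remaining > 0 ? 0 : Math.max(0, resetSec * 1000 - nowMs);
  return { limit, remaining, resetAt: new Date(resetSec * 1000), retryAfterMs };
}
```

When `retryAfterMs` is non-zero, show the "please wait" message and schedule the retry.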
Exponential Backoff + Jitter
How to retry failed/rate-limited requests without making things worse
The problem: If 1000 clients all get rate-limited at the same time and all retry after exactly 2 seconds, they'll all hit the server again simultaneously — creating a "thundering herd" that crashes it again.
Exponential backoff: Each retry waits longer than the last: 1s → 2s → 4s → 8s → 16s (doubling each time, with a max cap).
Jitter: Add a random delay on top so not everyone retries at the exact same moment. Instead of all clients retrying at 4s, one retries at 3.2s, another at 4.7s, another at 5.1s — spreading the load.
When to give up: Set a max number of retries (e.g. 3-5). After that, show the user an error: "Transfer couldn't be completed. Please try again later." Don't retry forever.
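The delay calculation can be sketched in a few lines. This uses the "full jitter" variant (a random delay between 0 and the capped exponential value); the random source is injectable so tests are deterministic:

```typescript
// Delay before the nth retry: base * 2^attempt, capped, then jittered.
function backoffDelay(
  attempt: number, // 0 for the first retry
  baseMs = 1000,
  capMs = 16000,
  random: () => number = Math.random,
): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt); // 1s, 2s, 4s, 8s, 16s, 16s...
  return Math.floor(random() * exp); // full jitter: anywhere in [0, exp)
}
```

A retry loop would call this per attempt and give up after 3-5 attempts, surfacing the error to the user.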
Circuit Breaker Pattern
Stop calling a service that's clearly down — give it time to recover
Think of it like an electrical circuit breaker in your house: when there's too much load, it trips and cuts the connection to prevent damage.
Three states:
CLOSED (normal) — Requests flow through normally. The circuit tracks failure count.
OPEN (tripped) — Too many failures detected (e.g. 5 failures in 10 seconds). Requests are immediately rejected without calling the server. Show cached data or a fallback UI. This gives the server time to recover instead of hammering it with more requests.
HALF-OPEN (testing) — After a timeout (e.g. 30 seconds), let ONE request through to test if the service recovered. If it succeeds → back to CLOSED. If it fails → back to OPEN.
On the frontend this means: If the balance API fails 3 times in a row, stop calling it. Show the last cached balance with a note "Balance may be outdated" and a manual "Retry" button. After 30 seconds, try again automatically.
In the interview, mention it when discussing: failure scenarios, graceful degradation, or how the frontend handles a backend outage.
Idempotency — Why POST Needs Special Handling
The problem: POST is NOT idempotent by nature
Idempotent means "doing it twice has the same effect as doing it once."
GET /transfers/123 — Naturally idempotent. Reading data 10 times changes nothing.
PUT /transfers/123 {amount: 100} — Naturally idempotent. Setting a value to 100 twice still gives 100.
DELETE /transfers/123 — Naturally idempotent. Deleting twice? Already gone after the first.
POST /transfers {amount: 100} — NOT idempotent! Sending twice = two transfers = user charged twice!
The fix: The client attaches an Idempotency-Key header (a UUID it generates). The server uses this key to detect retries and return the original response without processing again.
Client generates UUID as Idempotency-Key before sending the request
Server stores key + response in Redis/DB with a TTL (e.g. 24 hours)
Same key arrives again → return stored response, skip processing entirely
Why critical for payments: network timeout → user retries → without idempotency = double charge
New transfer = generate new UUID. Retry of failed/timed-out transfer = reuse same UUID
Frontend implementation: generate the key when user clicks "Send", store it, reuse on retry, clear on success
Idempotency Flow — Preventing Double Charges on Retry
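The client side of this flow can be sketched as a tiny key store: one key per transfer *intent*, reused on retry, discarded on success. The key generator is injectable here so the test is deterministic; in the browser you would default it to `crypto.randomUUID()`:

```typescript
class IdempotencyKeys {
  private keys = new Map<string, string>();

  constructor(private generate: () => string) {}

  // Same intent (e.g. the form submission the user is retrying) => same
  // key, so the server recognises the retry and returns the cached response.
  keyFor(intentId: string): string {
    let key = this.keys.get(intentId);
    if (!key) {
      key = this.generate();
      this.keys.set(intentId, key);
    }
    return key;
  }

  // Clear after confirmed success so the next transfer gets a fresh key.
  complete(intentId: string): void {
    this.keys.delete(intentId);
  }
}
```

Attach the result as the `Idempotency-Key` header on the POST; on a network timeout, retry with `keyFor` returning the same value.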
Phase 5: Failure Scenarios & Testing 5 min
Failure Scenarios
Network failure mid-transfer → Idempotency key prevents double charge. Show "checking status..." then resolve
Backend service down → Circuit breaker pattern, graceful degradation, show cached data where possible
Auth token expired mid-flow → Silent refresh via interceptor, queue the failed request, retry transparently
Database overload → Read replicas for reads, Redis cache absorbs hot paths
Unhandled JavaScript exception → A component throws an error during rendering (e.g. accessing a property on undefined because the API returned unexpected data). Without protection, the entire app goes white. Fix: Wrap each feature module in an Error Boundary — it catches the crash, shows a fallback UI ("Something went wrong in Transfers" + a Retry button), and logs the error to a monitoring tool with session context (but no personal data)
Optimistic locking conflict → Two users edit the same resource at the same time (e.g. two admins updating the same recipient details). Each record has a version number. When you save, you send the version you read. If someone else saved first (version changed), the server rejects your save with 409 Conflict — your UI shows "This was modified by someone else, please refresh." This prevents silent data loss where the last write quietly overwrites someone else's changes
Rate limiting hit → Backoff + retry with jitter, show user-friendly "too many requests" message
Partial data from API → Some fields may be null or missing (API contract changes, partial outage). Render what you have, show placeholders for missing data. Never crash because one field is absent
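The "partial data" scenario above comes down to defensive view-model mapping: never let a missing field crash a render. A sketch, with illustrative field names:

```typescript
// What the API *may* return — every field could be absent.
interface TransferDto {
  id?: string;
  amountMinor?: number;
  currency?: string;
  counterparty?: { name?: string };
}

// What the UI renders — every field is guaranteed present.
interface TransferRow {
  id: string;
  amountLabel: string;  // placeholder when the amount is missing
  counterparty: string;
}

function toRow(dto: TransferDto): TransferRow {
  return {
    id: dto.id ?? "unknown",
    amountLabel:
      dto.amountMinor !== undefined && dto.currency
        ? `${(dto.amountMinor / 100).toFixed(2)} ${dto.currency}`
        : "—",
    counterparty: dto.counterparty?.name ?? "Unknown recipient",
  };
}
```

Doing this in one mapping layer means the rest of the component tree can assume complete data.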
Testing Strategy
Unit tests: Business logic, utils, custom hooks — fast, isolated, run on every commit
Integration tests: Test components with their API calls, but mock at network level (intercept HTTP requests) not at module level — this way you test the real code path, just with fake server responses
E2E tests: Automate critical user journeys (login, transfer money, view history) in a real browser — slow but catches real bugs
Performance testing: Measure page load speed, Web Vitals in CI and production
Load testing: Simulate many concurrent users hitting the API to find breaking points — usually run against staging, not production
Contract testing: Validate that API responses match the expected shape (TypeScript types / schema validation) — catches backend changes that break the frontend
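A contract check at the network boundary can be as simple as a runtime type guard — a hand-rolled version of what schema libraries (e.g. zod) automate. The `Transfer` shape here is illustrative:

```typescript
interface Transfer {
  id: string;
  amountMinor: number;
  currency: string;
  status: "pending" | "completed" | "failed";
}

// Validate the response shape at runtime; TypeScript types alone
// can't catch a backend that starts returning something different.
function isTransfer(value: unknown): value is Transfer {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.amountMinor === "number" &&
    typeof v.currency === "string" &&
    (v.status === "pending" || v.status === "completed" || v.status === "failed")
  );
}
```

Run the guard in the API layer and report failures to monitoring — a spike in contract violations usually means the backend shipped a breaking change.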
Metrics to Measure
Core Web Vitals — Google's 3 metrics for user experience
These are measured in the user's real browser and reported back. Google uses them for search ranking too.
LCP = Largest Contentful Paint — How long until the biggest visible element (hero image, main text block) appears on screen. Measures "when does the page look loaded?" Good: < 2.5s · Needs work: 2.5-4s · Poor: > 4s
INP = Interaction to Next Paint — How long between the user clicking/tapping something and the screen visually responding. Measures "does the page feel responsive?" Good: < 200ms · Needs work: 200-500ms · Poor: > 500ms
CLS = Cumulative Layout Shift — How much the page content jumps around while loading (e.g. an image loads late and pushes text down, or an ad inserts above the button you were about to click). Measures "does the layout stay stable?" Good: < 0.1 · Needs work: 0.1-0.25 · Poor: > 0.25
API Latency Percentiles — p50, p95, p99
Instead of "average response time" (which hides outliers), we use percentiles:
p50 (median) — 50% of requests are faster than this. This is your typical experience.
p95 — 95% of requests are faster than this. Only 1 in 20 is slower. This is what most users experience at worst.
p99 — 99% of requests are faster than this. Only 1 in 100 is slower. This catches the worst-case outliers.
Example: "Our transfer API has p50 = 120ms, p95 = 450ms, p99 = 1200ms" means: most requests complete in 120ms, but 1 in 100 takes over a second — probably worth investigating why.
Why not average? If 99 requests take 100ms and 1 takes 10,000ms, the average is 199ms — looks fine. But p99 = 10,000ms tells you something is badly wrong for some users.
In the interview, say: "We'd monitor p50, p95, and p99 latency — averages hide problems."
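For intuition, the nearest-rank method is one simple way to compute a percentile over a latency sample (real monitoring systems use streaming estimators, since they can't sort every request):

```typescript
// Nearest-rank percentile: sort the sample, take the value at
// rank ceil(p/100 * n). p=100 returns the maximum.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```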
Apdex — Application Performance Index
A single number from 0 to 1 that summarizes user satisfaction based on response time.
You define a threshold T (e.g. T = 500ms). Then:
Satisfied — response < T (under 500ms)
Tolerating — response between T and 4T (500ms - 2s)
Frustrated — response > 4T (over 2s) or errors
Apdex = (Satisfied + Tolerating/2) / Total. Score of 1.0 = everyone happy. Score of 0.5 = half your users are frustrated. Simple way to answer "are users happy with performance?"
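The formula above is a one-liner in code; errored requests count as frustrated (they appear in the total but in neither bucket):

```typescript
// Apdex = (satisfied + tolerating/2) / total, for a threshold T in ms.
function apdex(latenciesMs: number[], thresholdMs: number, errorCount = 0): number {
  const total = latenciesMs.length + errorCount;
  if (total === 0) return 1;
  let satisfied = 0;
  let tolerating = 0;
  for (const ms of latenciesMs) {
    if (ms < thresholdMs) satisfied++;
    else if (ms <= 4 * thresholdMs) tolerating++;
    // otherwise frustrated: counted in total only
  }
  return (satisfied + tolerating / 2) / total;
}
```

For T = 500ms and samples [100, 200, 1000, 3000]: two satisfied, one tolerating, one frustrated, so Apdex = (2 + 0.5) / 4 = 0.625.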
Where and How to Collect Metrics
Frontend performance (Web Vitals) — Collected in the user's browser using the web-vitals library or the browser's Performance API. Reported to your monitoring backend on each page load. Shows real user experience, not lab conditions
API latency & error rates — Collected at the API Gateway or backend service level. Every request is logged with its duration and status code. Aggregated into dashboards (e.g. Grafana, Datadog) with percentile breakdowns
Client-side errors — Caught by a global error handler + Error Boundaries. Sent to an error tracking service (e.g. Sentry) with stack trace, browser info, and session ID (no personal data). Set up alerts for spikes in error rate
Business metrics (transfer success rate, time-to-transfer) — Tracked via analytics events. Instrument key user actions: "transfer_started", "transfer_confirmed", "transfer_completed", "transfer_failed". Analyze the funnel to find drop-off points
What to alert on: Error rate > X%, p99 latency > Y ms, Web Vitals degradation, transfer failure rate spike. Don't alert on everything — only actionable anomalies
Retention: Real-time dashboards for last 24h-7d. Aggregated historical data for weeks/months. Raw logs retained for days (expensive to store long-term)
Error rate: 4xx/5xx percentage of total API responses
Transfer success rate: completed vs failed transfers
Time to transfer: end-to-end user journey timing (click "Send" → see "Completed")
Client-side errors: JS exceptions, unhandled promise rejections per session
Rules & Gotchas
DO NOT say: "We already did something similar at my previous company" — they explicitly said to avoid this. Focus on engineering the solution from scratch.
DO NOT rely on specific technologies. Say "a component library" not just "Material UI". Say "data-fetching with caching" not just "React Query". You can mention specific tools as examples, but don't anchor your design on them.
DO NOT jump between topics. Follow the interview phases in order. Finish requirements before drawing. Finish high-level before zooming in.
DO ask questions whenever in doubt. "Is it okay if I focus on the transfer flow first?" "Should I also consider mobile?" There is no limit on questions.
DO think global. Revolut is in 35+ countries. Mention: i18n, multi-currency, time zones, RTL languages, regional compliance.
RTL Languages (Right-to-Left)
Languages like Arabic, Hebrew, and Farsi are read and written from right to left. This flips the entire UI layout:
What changes in RTL mode:
Text alignment — defaults to right-aligned instead of left
Layout direction — sidebars, nav items, icons all mirror horizontally
CSS — use margin-inline-start / padding-inline-end instead of margin-left / padding-right (logical properties)
Icons with direction — arrows, "back" buttons must flip; but icons like a clock or checkmark do NOT flip
Numbers and dates — digits themselves are still LTR, but surrounding text is RTL
How to mention in interview: "We set dir="rtl" on the HTML element and use CSS logical properties so the entire layout mirrors automatically. We avoid hardcoded left/right in CSS."
DO keep it simple. A working solution beats an overcomplicated one. They said this explicitly.
DO show customer focus. Frame every decision as user benefit: "This gives faster feedback to the user" / "This prevents double-charging the customer."
DO handle loading states everywhere. They specifically flagged this as important in feedback.
Loading States — Beyond Spinners
Showing a spinner for every loading state is a bad UX pattern. It makes the app feel slow and causes layout shift (bad for CLS). Here are better approaches:
1. Skeleton screens (best for initial loads)
Show grey placeholder shapes that match the layout of the content about to appear. The user sees the page "structure" instantly, then content fills in. No layout jump.
Use for: page loads, list items, cards, profile sections
Why: preserves layout, perceived performance is much better than a blank screen + spinner
2. Optimistic UI (best for user actions)
Immediately show the expected result before the server confirms. If the server fails, roll back and show an error.
Use for: like buttons, toggles, sending messages, adding to favourites
Why: the action feels instant — users don't wait for a round-trip
Example: "User taps 'Send money' → we immediately show 'Transfer pending' in the list → server confirms or we roll back"
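The send-money example can be expressed as pure state transitions, which is what an optimistic cache update does under the hood (React Query's `onMutate`/`onError` rollback automates this pattern; the shapes here are illustrative):

```typescript
interface TransferItem {
  id: string;
  status: "pending" | "completed";
  amountMinor: number;
}

// Immediately show the transfer as pending, before the server confirms.
function addOptimistic(list: TransferItem[], draft: TransferItem): TransferItem[] {
  return [{ ...draft, status: "pending" }, ...list];
}

// Server confirmed: flip the pending row to completed.
function confirmTransfer(list: TransferItem[], id: string): TransferItem[] {
  return list.map((t) => (t.id === id ? { ...t, status: "completed" } : t));
}

// Server rejected: roll back by removing the optimistic row (and show an error).
function rollbackTransfer(list: TransferItem[], id: string): TransferItem[] {
  return list.filter((t) => t.id !== id);
}
```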
3. Stale-while-revalidate (best for cached data)
Show the last known data immediately, then fetch fresh data in the background. When the new data arrives, update silently.
Use for: transaction history, account balance (show last known, update when fresh data arrives)
Why: the screen is never empty — there's always something useful to show
4. Progressive / incremental loading
Load and render critical content first, then load secondary content after.
Use for: dashboards (show summary cards first, then charts), long lists (virtualisation / infinite scroll)
Why: user sees actionable content faster, even if the whole page isn't ready
5. Inline loading indicators (when you must show "loading")
Small, contextual indicators right where the action happened — not a full-page overlay.
Use for: submit button → disable + tiny spinner inside the button; pull-to-refresh indicator
Why: user knows exactly what is loading and can still interact with the rest of the page
How to mention in interview: "For loading states, I'd use skeleton screens for initial page loads to avoid layout shift, optimistic updates for user actions like transfers, and stale-while-revalidate for cached API data so the screen is never blank. Full-page spinners are a last resort."
Communication Checklist
Talk through your thought process out loud
State assumptions before making them
Explain trade-offs: "I chose X over Y because..."
Ask "Does this make sense so far?" at transitions
Summarize requirements before moving to architecture
When drawing: narrate what each box does and why
Keep diagrams clean — labels on every box and arrow
Don't overcomplicate — it's better to have a working simple solution
Be mindful of time — don't spend 20 min on requirements