CS Basics: Big-O, Memory, Concurrency & Networking
Surviving senior interview CS theory questions — complexity, memory models, and HTTP fundamentals
Real-world analogy
CS fundamentals are like knowing how engines work when you're a racing driver — you don't need to build the engine, but understanding why a diesel engine behaves differently at altitude helps you race smarter. Similarly, knowing Big-O helps you choose the right data structure, and knowing how TCP works helps you debug mobile network issues.
What is it?
Computer science fundamentals — algorithm complexity, memory management, concurrency models, and networking — are the theoretical foundations that senior Flutter engineers are expected to understand at a practical level during technical interviews.
Real-world relevance
In a field app sync interview, an interviewer asks why you chose a Map over a List for the in-memory cache. Answer: 'O(1) lookup by visitId vs O(n) linear scan — at 10K records, a Map lookup is up to ~10,000x faster than a worst-case list scan.' This is Big-O applied directly to Flutter architecture decisions.
Key points
- Big-O notation fundamentals — Big-O describes worst-case time or space complexity as input size n grows. O(1) constant (HashMap lookup), O(log n) logarithmic (binary search), O(n) linear (list scan), O(n log n) linearithmic (merge sort), O(n²) quadratic (nested loops), O(2^n) exponential (naive recursion). Drop constants and lower terms: O(3n + 5) = O(n).
- Common Dart/Flutter Big-O scenarios — List.add() amortized O(1), List.insert(0, x) O(n) (shifts all elements), Map.containsKey() O(1), Set.contains() O(1), List.contains() O(n). Using a Set for membership testing instead of a List changes an O(n) loop to O(1) — common Flutter interview question.
- Space complexity — Space complexity measures additional memory used relative to input size. A function that creates a copy of a list is O(n) space. Recursive algorithms use O(depth) stack space — deep recursion risks stack overflow. Iterative alternatives use O(1) stack space. Flutter: large widget trees have O(n) memory where n is the number of live widgets.
- Stack vs heap memory — Stack: function call frames, local variables, fixed-size allocations — automatically managed, LIFO, fast allocation. Heap: objects, strings, collections — managed by Dart's garbage collector. In Dart: every value is conceptually an object on the heap; the compiler may keep small values like int, double, and bool in registers or on the stack as an optimization, but semantically they are still objects.
- Dart garbage collection — Dart uses a generational garbage collector: new objects go into 'young generation' (collected frequently, fast). Long-lived objects are promoted to 'old generation' (collected infrequently). Flutter DevTools Memory tab shows GC events. Avoid creating excessive short-lived objects in build() — reduces GC pressure.
- Process vs thread vs isolate — Process: independent memory space, OS-managed. Thread: shares process memory, OS-managed, risk of data races. Isolate (Dart): isolated memory heap, no shared memory, communicates via message passing (SendPort/ReceivePort). Dart's isolate model eliminates data races — you can't corrupt shared state because there is no shared state.
- Concurrency vs parallelism — Concurrency: multiple tasks in progress at the same time (may not run simultaneously) — Dart's event loop handles I/O concurrency on a single thread. Parallelism: multiple tasks running simultaneously on multiple CPU cores — Dart achieves with Isolate.spawn(). async/await in Dart is concurrent (cooperative scheduling), not parallel.
- Dart event loop model — Dart has two queues: microtask queue (higher priority — Future.value, scheduleMicrotask) and event queue (lower priority — I/O, timers, user input). The event loop processes all microtasks before the next event. Understanding this explains why Future.value() callbacks run before Timer(Duration.zero). Heavy sync code on the main isolate blocks both queues.
- TCP/IP basics — IP (Internet Protocol): routes packets across networks by IP address. TCP (Transmission Control Protocol): adds reliability (acknowledgement, retransmission), ordering, and flow control on top of IP. TCP connection: 3-way handshake (SYN → SYN-ACK → ACK). HTTP uses TCP. WebSocket upgrades an HTTP connection to a persistent TCP channel.
- HTTP/HTTPS request lifecycle — 1. DNS resolution (domain → IP). 2. TCP 3-way handshake. 3. TLS handshake (for HTTPS — certificate validation, key exchange, ~1-2 additional round trips). 4. HTTP request sent. 5. Server processes, responds. 6. Connection reused (HTTP/1.1 keep-alive) or closed. HTTP/2 adds multiplexing (multiple requests over one connection). HTTP/3 uses QUIC (UDP-based, faster on lossy networks).
- DNS resolution — DNS translates human-readable domains (api.myapp.com) to IP addresses. Hierarchy: recursive resolver (ISP/Google/Cloudflare) → root servers → TLD servers (.com, .io) → authoritative nameserver for the domain. TTL (Time To Live) controls caching. In mobile apps: DNS lookup adds latency on the first request — the OS resolver and HTTP client typically cache resolved addresses. DNSSEC authenticates responses and helps prevent DNS spoofing.
- REST constraints and HTTP methods — REST (Representational State Transfer) constraints: stateless (server doesn't store client session), cacheable (responses indicate cacheability), uniform interface (resource-based URLs, standard verbs). HTTP semantics: GET (read, idempotent, cacheable), POST (create, not idempotent), PUT (replace, idempotent), PATCH (partial update), DELETE (idempotent). Idempotent: same result regardless of how many times applied.
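The stack-space point above can be sketched in plain Dart: a recursive sum holds one call frame per element (O(n) stack, overflow risk for large inputs), while the iterative version runs in a single frame. The function names `sumRecursive` and `sumIterative` are illustrative, not from any library:

```dart
// Recursive: O(n) stack frames — each call waits on the next one.
int sumRecursive(List<int> xs, [int i = 0]) {
  if (i == xs.length) return 0;
  return xs[i] + sumRecursive(xs, i + 1); // one pending frame per element
}

// Iterative: O(1) stack — a single frame, state lives in locals.
int sumIterative(List<int> xs) {
  var total = 0;
  for (final x in xs) {
    total += x;
  }
  return total;
}
```

Both return the same result, but `sumRecursive` on a list of a few hundred thousand elements can throw a StackOverflowError, while `sumIterative` is safe at any size.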
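One practical way to reduce GC pressure from build() is const constructors: Dart canonicalizes const instances, so repeated "allocations" return the same shared object instead of fresh garbage. A minimal non-Flutter sketch (the `Label` class is hypothetical):

```dart
class Label {
  final String text;
  const Label(this.text);
}

// Two non-const constructions: two distinct heap objects for the GC to track.
bool freshAllocationsAreDistinct() =>
    !identical(Label('hi'), Label('hi'));

// Two const constructions: canonicalized at compile time to one shared
// instance — zero per-call allocation, nothing new for the GC.
bool constInstancesAreShared() =>
    identical(const Label('hi'), const Label('hi'));
```

This is why the Flutter linter pushes `const` widget constructors: a `const Text('Save')` in build() never allocates on rebuild.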
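The idempotency distinction can be made concrete with an in-memory sketch — a fake resource store where put replaces the whole resource and patch merges fields. All names here (`FakeStore`, `put`, `patch`) are illustrative stand-ins, not a real HTTP library:

```dart
// Hypothetical in-memory store standing in for a REST resource collection.
class FakeStore {
  final Map<String, Map<String, Object?>> _resources = {};

  // PUT: full replacement — replaying it any number of times leaves the
  // resource in the same state (idempotent, safe to retry).
  void put(String id, Map<String, Object?> body) {
    _resources[id] = Map.of(body);
  }

  // PATCH: partial update — merges only the provided fields.
  void patch(String id, Map<String, Object?> partial) {
    (_resources[id] ??= {}).addAll(partial);
  }

  Map<String, Object?>? get(String id) => _resources[id];
}
```

Note that a merge-style patch like this one happens to be idempotent, but PATCH in general is not guaranteed to be — an "append to list" patch applied twice appends twice.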
Code example
// CS fundamentals in Dart/Flutter context
import 'dart:isolate';
import 'package:dio/dio.dart';

// Big-O examples in practice
void bigOExamples() {
  final list = List.generate(1000, (i) => i);
  final set = list.toSet();
  final map = {for (final i in list) i: 'value_$i'};

  // O(n) — scans entire list
  final inList = list.contains(999); // avoid for large lists

  // O(1) — hash lookup
  final inSet = set.contains(999); // prefer for membership tests
  final inMap = map.containsKey(999); // prefer for key lookup
  print('$inList $inSet $inMap');

  // O(n) — avoid insert at beginning of large lists
  list.insert(0, -1); // shifts all 1000 elements
  list.add(1001); // O(1) amortized — append to end
}
// Isolate for parallel computation (true parallelism)
Future<List<Visit>> processLargeDatasetInIsolate(List<Map> rawData) async {
  // No shared state — rawData is copied into the new isolate, which runs
  // on its own thread with its own heap.
  return await Isolate.run(() {
    return rawData.map(Visit.fromJson).toList();
  });
}
// Dart event loop — microtask vs event queue
void eventLoopDemo() {
  print('1: sync');
  Future(() => print('3: event queue')); // added to event queue
  Future.microtask(() => print('2: microtask')); // added to microtask queue
  print('1: sync continues');
  // Output order: '1: sync', '1: sync continues', '2: microtask', '3: event queue'
}
// TCP keep-alive and connection reuse with Dio
Dio createOptimizedClient() {
  final dio = Dio();
  // Dio's default adapter (dart:io HttpClient) reuses connections via
  // keep-alive automatically. HTTP/2 multiplexing is NOT on by default —
  // it requires the separate dio_http2_adapter package.
  dio.options.connectTimeout = const Duration(seconds: 5); // TCP + TLS handshake
  dio.options.receiveTimeout = const Duration(seconds: 30);
  return dio;
}
// Memory-efficient list processing — O(1) space with lazy iteration
Iterable<Visit> filterHighPriorityLazy(List<Visit> visits) sync* {
  for (final visit in visits) {
    if (visit.priority == Priority.high) {
      yield visit; // lazy — doesn't create an intermediate list
    }
  }
}
// Compare: O(k) space — materializes all k matching items into a new list
List<Visit> filterHighPriorityEager(List<Visit> visits) =>
    visits.where((v) => v.priority == Priority.high).toList(); // toList() materializes
// O(n log n) sort vs O(n²) naive sort
void sortingComplexity(List<Visit> visits) {
  // Dart's List.sort() uses a dual-pivot quicksort — O(n log n) on average,
  // and it is not guaranteed to be stable.
  visits.sort((a, b) => a.createdAt.compareTo(b.createdAt));
  // Never hand-roll bubble sort — O(n²) — worth saying out loud in interviews
}
Line-by-line walkthrough
- 1. set.contains(999) is O(1) because Dart's HashSet computes a hash of 999 and does a direct bucket lookup — no iteration required regardless of set size.
- 2. list.contains(999) is O(n) because Dart scans from index 0 to the end — at 100K elements, this is 100K comparisons per check.
- 3. Isolate.run() copies rawData into the new isolate's memory space — this has a cost (serialization), but subsequent processing is truly parallel on a separate CPU core.
- 4. The return value is copied back from the isolate to the calling isolate — no shared memory means no thread-safety concerns but data copy overhead.
- 5. eventLoopDemo output order: sync code runs first (no queue), then all microtasks are drained, then events are processed — Future.microtask always beats Future() timer callbacks.
- 6. The sync* generator uses yield for lazy evaluation — filterHighPriorityLazy produces values one at a time without creating an intermediate list, using O(1) additional space.
- 7. filterHighPriorityEager's .toList() materializes all filtered results into a new list — O(k) space where k is the number of matching items.
- 8. dio.options.connectTimeout covers both the TCP handshake and TLS negotiation — on slow mobile networks, TLS adds 1-3 round trips (about 200-600ms) so the timeout should account for this.
Spot the bug
// Checking for duplicate messages in real-time chat
class MessageDeduplicator {
  final List<String> _seenIds = [];

  bool isDuplicate(String messageId) {
    if (_seenIds.contains(messageId)) return true;
    _seenIds.add(messageId);
    return false;
  }
}
Need a hint?
This works correctly but has a performance problem and a memory problem in a busy chat app. What are they?
Show answer
Performance bug: _seenIds.contains() is O(n) — as the list grows with thousands of message IDs, every incoming message checks against the entire list. In a busy channel with 10K messages, each dedup check scans 10K items. Fix: use a Set<String> instead of List<String> — _seenIds.contains() becomes O(1) regardless of set size. Memory bug: the _seenIds list grows unbounded — in a long-running session, it accumulates every message ID ever seen, consuming increasing memory indefinitely. Fix: implement an LRU-style eviction by limiting the set to the last N IDs (e.g., 10,000) and removing the oldest entries when the limit is exceeded, or use a time-based expiry (only keep IDs from the last hour, since duplicates arrive within seconds of the original).
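A sketch of the fixed deduplicator under the constraints above — a Set for O(1) membership plus oldest-first eviction. It relies on the fact that Dart's default Set literal is insertion-ordered (a LinkedHashSet), so `first` is always the oldest entry; the cap of 10,000 and the class name `BoundedDeduplicator` are illustrative choices:

```dart
class BoundedDeduplicator {
  // Dart's {} Set literal is a LinkedHashSet: insertion-ordered,
  // so _seen.first is always the oldest surviving ID.
  final Set<String> _seen = {};
  final int maxEntries;

  BoundedDeduplicator({this.maxEntries = 10000});

  bool isDuplicate(String messageId) {
    if (_seen.contains(messageId)) return true; // O(1) hash lookup
    _seen.add(messageId);
    if (_seen.length > maxEntries) {
      _seen.remove(_seen.first); // evict oldest — memory stays bounded
    }
    return false;
  }
}
```

This keeps both fixes in one place: dedup checks no longer scan a growing list, and a long-running chat session holds at most maxEntries IDs in memory.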
Explain like I'm 5
Big-O is like measuring how long it takes to find a toy. If you have 1 box (O(1)), it's instant. If you search 10 boxes one by one (O(n)), it takes 10 times longer for 10 boxes. Isolates are like two kids doing homework in separate rooms — they don't fight over the same pencil (no shared memory). HTTP is like sending a letter: you look up the address (DNS), walk to the post office (TCP handshake), seal the envelope with wax (HTTPS), send it, and wait for a reply.
Fun fact
Dart's isolate model was deliberately designed after the lessons learned from Java's thread model. Java threads sharing memory led to decades of concurrency bugs, deadlocks, and race conditions. Dart's approach — no shared memory, message-passing only — makes concurrent code dramatically safer, at the cost of needing to copy data between isolates.
Hands-on challenge
Answer these as you would in a senior Flutter interview: (1) You have a List of 50,000 message IDs and need to check if a new incoming message is a duplicate on every WebSocket event. What data structure do you use and why? (2) A Flutter screen feels janky during a data processing operation. How do you move the work off the main thread? Show the code. (3) Explain why Future.microtask() runs before Timer(Duration.zero) in Dart. (4) A mobile user makes an HTTPS request to a cold server. List every network step from URL to response. (5) What is the difference between PUT and PATCH? When would you use each?
More resources
- Big-O complexity cheat sheet (bigocheatsheet.com)
- Dart concurrency documentation (Dart Docs)
- How HTTPS works (howhttps.works)
- Dart event loop explained (Dart Medium)
- Flutter Isolate.run documentation (Dart API Docs)