Lesson 60 of 77 intermediate

CS Basics: Big-O, Memory, Concurrency & Networking

Surviving senior interview CS theory questions — complexity, memory models, and HTTP fundamentals

Open interactive version (quiz + challenge)

Real-world analogy

CS fundamentals are like knowing how engines work when you're a racing driver — you don't need to build the engine, but understanding why a diesel engine behaves differently at altitude helps you race smarter. Similarly, knowing Big-O helps you choose the right data structure, and knowing how TCP works helps you debug mobile network issues.

What is it?

Computer science fundamentals — algorithm complexity, memory management, concurrency models, and networking — are the theoretical foundations that senior Flutter engineers are expected to understand at a practical level during technical interviews.

Real-world relevance

In a field app sync interview, an interviewer asks why you chose a Map over a List for the in-memory cache. Answer: 'O(1) lookup by visitId vs O(n) linear scan — at 10K records, a List miss scans all 10,000 entries, so the Map lookup is up to ~10,000x faster.' This is Big-O applied directly to Flutter architecture decisions.
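The trade-off can be sketched with a tiny benchmark over a hypothetical visit cache (the IDs, payloads, and counts below are illustrative, not from a real app):

```dart
// Hypothetical visit cache: contrasts O(n) List scan with O(1) Map lookup.
bool isCachedList(List<String> cache, String visitId) =>
    cache.contains(visitId); // O(n): linear scan from index 0

bool isCachedMap(Map<String, String> cache, String visitId) =>
    cache.containsKey(visitId); // O(1): hash lookup, independent of size

void benchmarkLookups() {
  const n = 10000; // illustrative record count
  final ids = List.generate(n, (i) => 'visit_$i');
  final map = {for (final id in ids) id: 'payload'};

  final sw = Stopwatch()..start();
  for (var i = 0; i < 500; i++) {
    isCachedList(ids, 'visit_${n - 1}'); // late hit: scans ~n entries each time
  }
  print('List scan:  ${sw.elapsedMicroseconds}us');

  sw
    ..reset()
    ..start();
  for (var i = 0; i < 500; i++) {
    isCachedMap(map, 'visit_${n - 1}'); // hash bucket lookup each time
  }
  print('Map lookup: ${sw.elapsedMicroseconds}us');
}
```

Exact timings vary by device, but the gap widens linearly with the cache size — exactly the point to make in the interview.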

Key points

List.contains is O(n); Set and Map lookups are O(1) — choose collections by access pattern, not habit.

Dart isolates share no memory — data is copied via message passing, trading copy overhead for thread safety.

Synchronous code runs first, then the microtask queue drains completely, then the event queue is processed.

HTTP clients like Dio reuse TCP connections via keep-alive; connect timeouts must cover both the TCP and TLS handshakes.

Code example

// CS fundamentals in Dart/Flutter context
import 'dart:isolate';

import 'package:dio/dio.dart';

// Visit and Priority are app-level models assumed to be defined elsewhere.

// Big-O examples in practice
void bigoExamples() {
  final list = List.generate(1000, (i) => i);
  final set = list.toSet();
  final map = {for (final i in list) i: 'value_$i'};

  // O(n) — scans entire list
  final inList = list.contains(999); // avoid for large lists

  // O(1) — hash lookup
  final inSet = set.contains(999); // prefer for membership tests
  final inMap = map.containsKey(999); // prefer for key lookup

  // O(n) — avoid insert at beginning of large lists
  list.insert(0, -1); // shifts all 1000 elements
  list.add(1001);     // O(1) amortized — append to end
}

// Isolate for parallel computation (true parallelism)
Future<List<Visit>> processLargeDatasetInIsolate(List<Map<String, dynamic>> rawData) async {
  return await Isolate.run(() {
    // This runs on a separate OS thread with its own memory
    return rawData.map(Visit.fromJson).toList();
    // No shared state — data is copied to the isolate
  });
}

// Dart event loop — microtask vs event queue
void eventLoopDemo() {
  print('1: sync');

  Future(() => print('3: event queue')); // added to event queue

  Future.microtask(() => print('2: microtask')); // added to microtask queue

  print('1: sync continues');
  // Output order: '1: sync', '1: sync continues', '2: microtask', '3: event queue'
}

// TCP keep-alive and connection reuse with Dio
Dio createOptimizedClient() {
  final dio = Dio();
  // Dio reuses connections via keep-alive automatically
  // HTTP/2 is enabled when server supports it — multiple requests on one TCP connection
  dio.options.connectTimeout = const Duration(seconds: 5); // TCP + TLS handshake
  dio.options.receiveTimeout = const Duration(seconds: 30);
  return dio;
}

// Memory-efficient list processing — O(1) space with lazy iteration
Iterable<Visit> filterHighPriorityLazy(List<Visit> visits) sync* {
  for (final visit in visits) {
    if (visit.priority == Priority.high) {
      yield visit; // lazy — doesn't create intermediate list
    }
  }
}

// Compare: O(n) space intermediate list
List<Visit> filterHighPriorityEager(List<Visit> visits) =>
    visits.where((v) => v.priority == Priority.high).toList(); // toList() materializes

// O(n log n) average-case sort vs O(n²) naive sort
void sortingComplexity(List<Visit> visits) {
  // Dart's sort() uses insertion sort for small lists and a dual-pivot
  // quicksort for larger ones — O(n log n) on average, and not stable
  visits.sort((a, b) => a.createdAt.compareTo(b.createdAt));

  // Never hand-roll bubble sort — O(n²); call sort() and cite its complexity
}

Line-by-line walkthrough

1. set.contains(999) is O(1) because Dart's HashSet computes the hash of 999 and does a direct bucket lookup — no iteration, regardless of set size.
2. list.contains(999) is O(n) because Dart scans from index 0 to the end — at 100K elements, that is up to 100K comparisons per check.
3. Isolate.run() copies rawData into the new isolate's memory space — this has a cost (serialization), but subsequent processing is truly parallel on a separate CPU core.
4. The return value is copied back to the calling isolate — no shared memory means no thread-safety concerns, but it does mean data-copy overhead.
5. eventLoopDemo output order: sync code runs first (no queue), then all microtasks are drained, then events are processed — Future.microtask always beats Future() timer callbacks.
6. The sync* generator yields values lazily — filterHighPriorityLazy produces one value at a time without building an intermediate list, using O(1) additional space.
7. filterHighPriorityEager's .toList() materializes all filtered results into a new list — O(k) space, where k is the number of matching items.
8. dio.options.connectTimeout covers both the TCP handshake and TLS negotiation — on slow mobile networks, TLS adds 1-3 round trips (roughly 200-600 ms), so the timeout should account for this.

Spot the bug

// Checking for duplicate messages in real-time chat
class MessageDeduplicator {
  final List<String> _seenIds = [];

  bool isDuplicate(String messageId) {
    if (_seenIds.contains(messageId)) return true;
    _seenIds.add(messageId);
    return false;
  }
}
Hint: This works correctly but has a performance problem and a memory problem in a busy chat app. What are they?

Answer
Performance bug: _seenIds.contains() is O(n) — as the list grows to thousands of message IDs, every incoming message is checked against the entire list. In a busy channel with 10K messages, each dedup check scans 10K items. Fix: use a Set<String> instead of List<String> — contains() becomes O(1) regardless of set size.

Memory bug: _seenIds grows unbounded — in a long-running session it accumulates every message ID ever seen, consuming more memory indefinitely. Fix: implement LRU-style eviction by capping the set at the last N IDs (e.g., 10,000) and removing the oldest entries when the limit is exceeded, or use time-based expiry (keep only IDs from the last hour, since duplicates arrive within seconds of the original).
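A minimal sketch of both fixes combined (the cap of 10,000 is illustrative; a real app would tune it, or use time-based expiry instead):

```dart
import 'dart:collection';

// Fixed deduplicator: Set for O(1) lookup, bounded size for O(1)-ish memory.
class MessageDeduplicator {
  MessageDeduplicator({this.maxIds = 10000}); // illustrative cap

  final int maxIds;

  // LinkedHashSet preserves insertion order, so `first` is the oldest ID.
  final LinkedHashSet<String> _seenIds = LinkedHashSet<String>();

  bool isDuplicate(String messageId) {
    if (_seenIds.contains(messageId)) return true; // O(1) hash lookup
    if (_seenIds.length >= maxIds) {
      _seenIds.remove(_seenIds.first); // evict oldest — keeps memory bounded
    }
    _seenIds.add(messageId);
    return false;
  }
}
```

Note the trade-off: once an ID is evicted, a very late duplicate would slip through — acceptable here because chat duplicates arrive within seconds of the original.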

Explain like I'm 5

Big-O is like measuring how long it takes to find a toy. If you have 1 box (O(1)), it's instant. If you search 10 boxes one by one (O(n)), it takes 10 times longer for 10 boxes. Isolates are like two kids doing homework in separate rooms — they don't fight over the same pencil (no shared memory). HTTP is like sending a letter: you look up the address (DNS), walk to the post office (TCP handshake), seal the envelope with wax (HTTPS), send it, and wait for a reply.

Fun fact

Dart's isolate model was deliberately designed after the lessons learned from Java's thread model. Java threads sharing memory led to decades of concurrency bugs, deadlocks, and race conditions. Dart's approach — no shared memory, message-passing only — makes concurrent code dramatically safer, at the cost of needing to copy data between isolates.
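That message-passing model can be sketched with the lower-level SendPort/ReceivePort API (the worker function and payload here are illustrative; Isolate.run, shown earlier, wraps this pattern):

```dart
import 'dart:isolate';

// The parent isolate never shares memory with the worker: the list it
// receives is a copy made when the message crosses the isolate boundary.
Future<List<int>> squaresInWorker(int count) async {
  final port = ReceivePort();
  await Isolate.spawn(_worker, [port.sendPort, count]);
  final result = await port.first as List<int>; // copied back, not shared
  port.close();
  return result;
}

void _worker(List<Object> args) {
  final reply = args[0] as SendPort;
  final count = args[1] as int;
  // send() copies the list into the receiving isolate's heap.
  reply.send(List.generate(count, (i) => i * i));
}
```

Because nothing is shared, there is no lock, no mutex, and no race condition anywhere in this code — the safety Java threads never had by default.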

Hands-on challenge

Answer these as you would in a senior Flutter interview: (1) You have a List of 50,000 message IDs and need to check if a new incoming message is a duplicate on every WebSocket event. What data structure do you use and why? (2) A Flutter screen feels janky during a data processing operation. How do you move the work off the main thread? Show the code. (3) Explain why Future.microtask() runs before Timer(Duration.zero) in Dart. (4) A mobile user makes an HTTPS request to a cold server. List every network step from URL to response. (5) What is the difference between PUT and PATCH? When would you use each?
