Lesson 62 of 83 · Intermediate

CS Basics: Big-O, Memory, Threading & Networking

The computer science fundamentals every senior mobile engineer must own

Open interactive version (quiz + challenge)

Real-world analogy

Big-O is like estimating travel time. Driving 1 block is O(1) — same time always. Driving through every street in a city once is O(n). Comparing every street to every other street is O(n^2). Binary search on a sorted map is O(log n) — like a phone book lookup where you keep halving. Knowing these lets you predict performance before writing a line of code.

What is it?

Computer science fundamentals are the shared vocabulary of all software engineers. Big-O lets you predict and reason about performance. Memory models explain OutOfMemoryErrors and ANRs. Threading models are the foundation for coroutines. Networking fundamentals explain every Retrofit call under the hood. Senior mobile engineers are expected to speak this language fluently in system design and code review discussions.

Real-world relevance

When a senior engineer at FieldBuzz reviewed a sync algorithm, they found it iterated over every local record for each incoming server record to check for duplicates: a classic O(n^2) pattern that caused 30-second freezes on large datasets. The fix was to build a HashSet of local IDs first (O(n)), then check membership for each server record (O(1) each). Total complexity dropped from O(n^2) to O(n), and sync time fell from 30 seconds to under 1 second.
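A sketch of that refactor. The `Record` type and function names here are hypothetical stand-ins, not the actual FieldBuzz models:

```kotlin
data class Record(val id: String, val payload: String)

// O(n * m): for every server record, scan all local records
fun findDuplicatesNaive(server: List<Record>, local: List<Record>): List<Record> =
    server.filter { s -> local.any { it.id == s.id } }

// O(n + m): build a HashSet of local IDs once, then O(1) membership checks
fun findDuplicatesFast(server: List<Record>, local: List<Record>): List<Record> {
    val localIds = local.mapTo(HashSet()) { it.id } // O(m) time and space
    return server.filter { it.id in localIds }      // n lookups, O(1) average each
}
```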


Code example

// Big-O demonstration in Kotlin

// O(1) — HashMap lookup
fun hasElement(map: HashMap<String, Int>, key: String): Boolean {
    return map.containsKey(key) // O(1) average
}

// O(n) — Linear scan
fun findFirst(list: List<Int>, target: Int): Int {
    for (i in list.indices) { // O(n)
        if (list[i] == target) return i
    }
    return -1
}

// O(n^2) — Naive duplicate check
fun hasDuplicatesNaive(list: List<Int>): Boolean {
    for (i in list.indices) {         // O(n)
        for (j in i + 1 until list.size) { // O(n)
            if (list[i] == list[j]) return true // O(n^2) total
        }
    }
    return false
}

// O(n) — Optimized duplicate check with HashSet
fun hasDuplicatesFast(list: List<Int>): Boolean {
    val seen = HashSet<Int>() // O(n) space
    for (item in list) {      // O(n) time
        if (!seen.add(item)) return true // HashSet.add returns false if already present
    }
    return false
}

// O(log n) — Binary search
fun binarySearch(sortedList: List<Int>, target: Int): Int {
    var left = 0
    var right = sortedList.size - 1
    while (left <= right) {
        val mid = left + (right - left) / 2 // avoid integer overflow
        when {
            sortedList[mid] == target -> return mid
            sortedList[mid] < target -> left = mid + 1
            else -> right = mid - 1
        }
    }
    return -1
}

// Space complexity: O(depth) recursion on call stack
fun factorial(n: Int): Long {
    if (n <= 1) return 1
    return n * factorial(n - 1) // O(n) stack frames — risky for large n
}

Line-by-line walkthrough

  1. hasDuplicatesNaive uses two nested loops: for every element i, it checks every element j after it. That is n*(n-1)/2 comparisons, which is O(n^2).
  2. hasDuplicatesFast creates a HashSet first. HashSet.add() returns false if the element was already present, which replaces the inner loop entirely.
  3. The HashSet approach trades O(n) extra space for the set to bring time from O(n^2) to O(n): a classic time-space tradeoff.
  4. binarySearch calculates mid as left + (right - left) / 2 rather than (left + right) / 2, because the latter can overflow Int.MAX_VALUE when left and right are both large.
  5. Each iteration of binary search halves the remaining search space: after k iterations, n/2^k elements remain. Solving 2^k = n gives k = log2(n).
  6. factorial is recursive, so each call adds a frame to the call stack. Deep recursion (tens of thousands of frames) can overflow the thread's stack, which on Android defaults to roughly 1 MB for background Java threads. An iterative version uses O(1) stack space. (Separately, the Long result itself overflows for n > 20.)
  7. HashMap.containsKey is O(1) average because it computes a hash and goes directly to the bucket. Worst case degrades when many keys hash to the same bucket, which is rare with a good hash function.
  8. The binary search returns -1 for not found (following Java convention); in production Kotlin, returning null or a sealed Result type is more idiomatic than sentinel values.
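The iterative version mentioned in point 6 might look like this; note that the Long result overflows past n = 20 regardless of the stack:

```kotlin
// O(n) time, O(1) stack space: a loop instead of recursion
fun factorialIterative(n: Int): Long {
    var result = 1L
    for (i in 2..n) {
        result *= i // silently overflows Long for n > 20
    }
    return result
}
```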

Spot the bug

fun findCommonElements(listA: List<Int>, listB: List<Int>): List<Int> {
    val result = mutableListOf<Int>()
    for (a in listA) {
        for (b in listB) {
            if (a == b) {
                result.add(a)
            }
        }
    }
    return result
}

// Called with listA.size = 10000, listB.size = 10000
Need a hint?
There are two issues: a performance problem that makes this O(n^2) when it could be O(n), and a correctness problem that produces duplicate results.
Show answer
Bug 1 (Performance): The nested loop makes this O(n * m). With 10,000 elements each, that is 100,000,000 iterations and will freeze the UI thread for seconds. Fix: convert listB to a HashSet before the loop, then for each element in listA do an O(1) set membership check. Total complexity drops to O(n + m).

Bug 2 (Correctness): If listA contains [1, 1, 2] and listB contains [1, 2], the current code adds '1' twice to the result, because the outer loop visits both occurrences of '1' in listA and each finds a match in listB. If the intent is to find distinct common elements, convert both to HashSets and use setA.intersect(setB). If duplicates should be counted (multiset intersection), use a frequency map instead.
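A sketch of both fixes, assuming distinct common elements are wanted (the function name is ours, to distinguish it from the buggy original):

```kotlin
// O(n + m) time, O(m) extra space; returns each common element once
fun findDistinctCommonElements(listA: List<Int>, listB: List<Int>): List<Int> {
    val setB = listB.toHashSet()        // O(m) one-time build
    val result = LinkedHashSet<Int>()   // dedupes while preserving listA order
    for (a in listA) {
        if (a in setB) result.add(a)    // O(1) average membership check
    }
    return result.toList()
}
```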

Explain like I'm 5

Big-O is like asking 'how much longer does this take if I give it twice as much work?' If the answer is 'same time' — that is O(1) like a dictionary lookup. 'Twice as long' is O(n) like reading a book. 'Four times as long' is O(n^2) like comparing every page to every other page. O(log n) is like guessing a number between 1 and 1000 in 10 tries by always guessing the middle.

Fun fact

The V8 JavaScript engine (used in Chrome) uses a 'hidden class' optimization that makes property access effectively O(1) even though JS objects are technically dynamic. Android's ART runtime similarly applies aggressive JIT/AOT compilation that can turn plain Kotlin loops into SIMD-vectorized native instructions. The Big-O class does not change, but constant factors can differ by 10-100x depending on the JIT, so theoretical analysis and measured performance can diverge substantially.

Hands-on challenge

You are reviewing a sync feature that, for each of 500 records coming from the server, scans the entire local list of 500 records to find matching IDs and update them. (1) What is the time complexity of the current approach? (2) Rewrite the logic conceptually to achieve O(n) total time. (3) What is the space trade-off of your optimized approach? (4) Explain when the naive O(n^2) approach might actually be acceptable.
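If you want to check your answer to (2) and (3), here is one possible O(n + m) shape. The `SyncRecord` type is hypothetical; the space trade-off is the O(m) index map:

```kotlin
data class SyncRecord(val id: String, var value: String)

// O(n + m): index local records by ID once, then one O(1) lookup per server record
fun applyServerUpdates(server: List<SyncRecord>, local: List<SyncRecord>) {
    val byId = local.associateBy { it.id } // O(m) time and space
    for (s in server) {
        byId[s.id]?.value = s.value        // O(1) average per server record
    }
}
```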
