Linked Lists, Trees, and Tries
Linked lists
A linked list is a sequence of nodes where each node stores a value and a pointer to the next node. Unlike arrays, linked lists don’t require contiguous memory, making insertion and deletion efficient, but access slower (no random access).
Key operations often focus on traversal, reversal, cycle detection, merging, splitting, and pointer manipulation.
Types:
- Singly linked list: each node points to the next node.
- Doubly linked list: each node points to both the next and previous nodes.

Applications: Dynamic memory, queues, stacks, adjacency lists, text editors (undo history).
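The node structure described above can be sketched with a minimal class (the `ListNode` name and `to_list` helper are illustrative, not part of any standard library):

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val    # the stored value
        self.next = next  # pointer to the next node (or None at the tail)

def to_list(head):
    """Traverse the list and collect values — O(n)."""
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

# Build 1 -> 2 -> 3 by linking nodes manually.
head = ListNode(1, ListNode(2, ListNode(3)))
print(to_list(head))  # [1, 2, 3]
```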
Types of linked list patterns
1. Reversal (classic problem)
- Idea: Rearrange next pointers to reverse the linked list.
- When to use: Reversing entire list, reversing in groups (k-group reversal).
```python
def reverse_list(head):
    prev, curr = None, head
    while curr:
        nxt = curr.next   # save the next node before overwriting the pointer
        curr.next = prev
        prev = curr
        curr = nxt
    return prev           # prev is the new head
```

2. Cycle detection (Floyd’s tortoise & hare)
- Idea: Traverse with slow (1 step) and fast (2 steps) pointers. If they meet, cycle exists.
- When to use: Detect loops in linked list, memory corruption checks.
```python
def has_cycle(head):
    slow, fast = head, head
    while fast and fast.next:
        slow = slow.next        # moves 1 step
        fast = fast.next.next   # moves 2 steps
        if slow == fast:
            return True
    return False
```

Time complexity
- Traversals & searches: O(n)
- Insertion/deletion at head: O(1)
- Insertion/deletion at tail (singly): O(n), unless a tail pointer is kept

Space complexity
- O(1) for in-place pointer manipulations
- O(n) for recursion or building a new list
Common pitfalls
- Null pointer errors: Always check None before accessing next.
- Losing references: Save next before overwriting during reversal or deletion.
- Cycle handling: Delete operations must consider possible loops.
- Shallow vs deep copies: Copying lists requires new nodes, not just references.
- Boundary cases: Empty list, single node, list with 2 nodes often cause off-by-one mistakes.
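A quick sanity check of the reversal and cycle-detection templates against the boundary cases listed above, using a hypothetical `ListNode` class:

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse(head):
    prev, curr = None, head
    while curr:
        nxt = curr.next       # save next before overwriting (pitfall #2)
        curr.next = prev
        prev = curr
        curr = nxt
    return prev

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

head = ListNode(1, ListNode(2, ListNode(3)))
assert not has_cycle(head)

rev = reverse(head)
vals = []
while rev:
    vals.append(rev.val)
    rev = rev.next
print(vals)  # [3, 2, 1]

# Boundary cases: empty list and single node both work without special-casing.
assert reverse(None) is None
assert reverse(ListNode(7)).val == 7
```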

Trees
A tree is a hierarchical data structure with nodes connected by edges. Each node contains a value and references to its children. The root is the top-most node, and leaf nodes are nodes without children.
Special types:
- Binary tree: Each node has at most 2 children (left, right).
- Binary search tree (BST): all keys in the left subtree are smaller than the node’s key, and all keys in the right subtree are larger.
- Balanced trees (AVL, Red-Black, B-Trees): Maintain height balance for faster operations.
- Trie: Prefix tree for strings.
Common applications: expression parsing, hierarchical structures (filesystems), search and indexing, priority queues (heaps).
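The traversal and BST patterns below all assume a minimal binary tree node; a sketch (the `TreeNode` name and fields are conventions, not a library type):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val      # the node's stored value
        self.left = left    # left child (or None)
        self.right = right  # right child (or None)

# A tiny tree:   1
#               / \
#              2   3
root = TreeNode(1, TreeNode(2), TreeNode(3))
print(root.left.val, root.right.val)  # 2 3
```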
Types of tree traversal patterns
1. Depth-first search (DFS)
Idea: Traverse as deep as possible before backtracking. Implemented via recursion or stack.

When to use:
- Preorder: cloning/copying tree, expression building.
- Inorder: retrieving sorted order from BST.
- Postorder: deleting tree, expression evaluation.
```python
def dfs(node):
    if not node:
        return
    # preorder: process(node) here
    dfs(node.left)
    # inorder: process(node) here
    dfs(node.right)
    # postorder: process(node) here
```

2. Breadth-first search (BFS / level-order traversal)
- Idea: Traverse level by level using a queue.
- When to use: Shortest path in unweighted trees, level-order problems, computing depth/height.
```python
from collections import deque

def bfs(root):
    if not root:
        return []
    q, result = deque([root]), []
    while q:
        size, level = len(q), []   # snapshot the level size before dequeuing
        for _ in range(size):
            node = q.popleft()
            level.append(node.val)
            if node.left: q.append(node.left)
            if node.right: q.append(node.right)
        result.append(level)
    return result
```

3. Binary search tree (BST) operations
- Idea: Properties of BST allow O(log n) average search/insert/delete.
- When to use: Searching, maintaining ordered sets/maps, dynamic ranges.
Search template:
```python
def search(root, key):
    if not root or root.val == key:
        return root
    if key < root.val:
        return search(root.left, key)
    return search(root.right, key)
```

Insert template:
```python
def insert(root, key):
    if not root:
        return TreeNode(key)
    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root
```

Delete template (trickiest case: node with two children):
```python
def delete(root, key):
    if not root:
        return None
    if key < root.val:
        root.left = delete(root.left, key)
    elif key > root.val:
        root.right = delete(root.right, key)
    else:
        if not root.left: return root.right
        if not root.right: return root.left
        # replace with inorder successor (smallest key in the right subtree)
        successor = root.right
        while successor.left:
            successor = successor.left
        root.val = successor.val
        root.right = delete(root.right, successor.val)
    return root
```

Time complexity
- DFS/BFS traversals: O(n) (must visit all nodes).
- BST average search/insert/delete: O(log n).
- BST worst case (skewed tree): O(n).
- Balanced BSTs guarantee O(log n).
Common pitfalls
- Null root handling: Check before accessing root.left or root.right.
- Infinite recursion: Forgetting to return node in insert/delete.
- Skewed trees: BST can degenerate into a linked list if not balanced.
- Queue misuse in BFS: Not tracking level size leads to wrong nesting.
- Traversal order mistakes: Mixing preorder, inorder, and postorder positions.
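Putting the BST insert and BFS templates together on a small example (a sketch; the function bodies follow the templates above, and `TreeNode` is the usual illustrative node class):

```python
from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def insert(root, key):
    if not root:
        return TreeNode(key)   # returning the new node avoids infinite recursion
    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def level_order(root):
    if not root:
        return []
    q, result = deque([root]), []
    while q:
        level = []
        for _ in range(len(q)):   # track level size for correct nesting
            node = q.popleft()
            level.append(node.val)
            if node.left:  q.append(node.left)
            if node.right: q.append(node.right)
        result.append(level)
    return result

root = None
for key in [5, 3, 8, 1, 4]:
    root = insert(root, key)
print(level_order(root))  # [[5], [3, 8], [1, 4]]
```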

Tries
A Trie (prefix tree) is a tree-like data structure that stores strings by decomposing them into characters. Each node represents a character, and a path from root to a terminal node represents a word.
Common uses: autocomplete systems, spell checkers, IP routing, keyword matching, dictionary lookups, and prefix-based search.
Advantages:
- Efficient prefix queries: O(L), where L = word length.
- Compact representation of common prefixes.
- Useful for problems dealing with dictionaries, string matching, and subsequences.
Types of trie patterns
1. Insert word
Idea: Traverse one character at a time; create nodes if missing. Mark end of word at the last node.
```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_end = False  # marks the end of a complete word

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            if ch not in node.children:
                node.children[ch] = TrieNode()
            node = node.children[ch]
        node.is_end = True   # mark end of word at the last node
```

2. Search word/prefix
- Idea: Traverse nodes following given word/prefix. For search, also confirm is_end = True.
- When to use: Dictionary lookup, prefix validation, input suggestions.
```python
def search(self, word):
    node = self.root
    for ch in word:
        if ch not in node.children:
            return False
        node = node.children[ch]
    return node.is_end

def startsWith(self, prefix):
    node = self.root
    for ch in prefix:
        if ch not in node.children:
            return False
        node = node.children[ch]
    return True
```

Time complexity
- Insert: O(L), where L = word length.
- Search: O(L).
- Prefix queries/autocomplete: O(L + k), where k = number of words returned.
Space complexity: O(N * L) worst-case, where N = number of words (can optimize with compressed tries or suffix trees).
Common pitfalls
- Memory usage: Storing large dictionaries in trie can consume a lot of space.
- Not marking word ends: Forgetting is_end causes incomplete searches.
- Mixing search vs prefix: A full-word search must check the termination flag; a prefix search doesn’t.
- Deletion complexity: Must avoid leaving dangling nodes.
- Character set assumptions: Designing for only lowercase English letters but input may include unicode or symbols.
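Tying the trie patterns together, a self-contained sketch with an autocomplete helper (the `_walk` and `autocomplete` methods are illustrative additions, not part of the templates above):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True

    def _walk(self, s):
        """Follow s character by character; return the final node or None."""
        node = self.root
        for ch in s:
            if ch not in node.children:
                return None
            node = node.children[ch]
        return node

    def search(self, word):
        node = self._walk(word)
        return node is not None and node.is_end  # must check the end flag

    def startsWith(self, prefix):
        return self._walk(prefix) is not None

    def autocomplete(self, prefix):
        """Collect all words under prefix — O(L + k) for k results."""
        node = self._walk(prefix)
        out = []
        def dfs(node, path):
            if node.is_end:
                out.append(path)
            for ch, child in node.children.items():
                dfs(child, path + ch)
        if node:
            dfs(node, prefix)
        return out

t = Trie()
for w in ["car", "card", "care", "cat"]:
    t.insert(w)
print(t.search("car"))                # True
print(t.search("ca"))                 # False (prefix only, not a word)
print(sorted(t.autocomplete("car")))  # ['car', 'card', 'care']
```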
