Data structures are specialized formats for organizing, processing, and storing data efficiently in computer memory. They enable efficient access and modification of data.
<!-- Example: Using an array as a simple data structure -->
<script>
// Declare an array to store numbers
let numbers = [10, 20, 30, 40, 50];
// Access the first element in the array
console.log("First element:", numbers[0]); // Output: 10
// Add a new element at the end
numbers.push(60);
console.log("After push:", numbers); // Output: [10, 20, 30, 40, 50, 60]
</script>
Output:
First element: 10
After push: 10,20,30,40,50,60
An algorithm is a step-by-step set of instructions designed to perform a specific task or solve a particular problem.
<!-- Example: Algorithm to find the maximum number in an array -->
<script>
let arr = [3, 7, 2, 9, 5];
// Initialize max with the first element
let max = arr[0];
// Loop through the array to find the max
for(let i = 1; i < arr.length; i++){
    if(arr[i] > max){
        max = arr[i];
    }
}
console.log("Maximum value:", max); // Output: 9
</script>
Output: Maximum value: 9
Data structures and algorithms are fundamental for efficient programming. They are crucial in areas like databases, networking, AI, graphics, and system design.
<!-- Example: Using a Set data structure to remove duplicates from an array -->
<script>
let nums = [1, 2, 2, 3, 4, 4, 5];
// Create a Set to automatically remove duplicates
let uniqueNums = new Set(nums);
console.log("Unique numbers:", [...uniqueNums]); // Output: [1, 2, 3, 4, 5]
</script>
Output: Unique numbers: 1,2,3,4,5
Time complexity measures the time an algorithm takes relative to input size. Big O notation describes the upper bound on time growth.
<!-- Example: Linear search - O(n) time complexity -->
<script>
let arr = [5, 3, 8, 6, 7];
let target = 6;
function linearSearch(array, target){
    for(let i = 0; i < array.length; i++){
        if(array[i] === target){
            return i; // Found target at index i
        }
    }
    return -1; // Target not found
}
let index = linearSearch(arr, target);
console.log("Index of target:", index); // Output: 3
</script>
Output: Index of target: 3
Space complexity measures the amount of memory an algorithm uses relative to input size.
<!-- Example: Copying an array uses O(n) extra space -->
<script>
function duplicateArray(arr){
    let newArr = [];
    for(let i = 0; i < arr.length; i++){
        newArr.push(arr[i]);
    }
    return newArr;
}
let original = [1, 2, 3];
let copy = duplicateArray(original);
console.log("Original:", original);
console.log("Copy:", copy);
</script>
Output:
Original: 1,2,3
Copy: 1,2,3
Asymptotic notation bounds algorithm performance: Big O gives an upper bound (commonly used for the worst case), Omega a lower bound (the best case), and Theta a tight bound where the upper and lower bounds coincide.
<!-- Example: Best, worst and average case for linear search -->
<script>
let arr = [1, 2, 3, 4, 5];
let targetBest = 1;  // Best case: found immediately
let targetWorst = 6; // Worst case: not found at all
function linearSearch(array, target){
    for(let i = 0; i < array.length; i++){
        if(array[i] === target){
            return i;
        }
    }
    return -1;
}
console.log("Best case index:", linearSearch(arr, targetBest));   // 0
console.log("Worst case index:", linearSearch(arr, targetWorst)); // -1
</script>
Output:
Best case index: 0
Worst case index: -1
Optimization improves algorithm speed or reduces memory use while maintaining correctness.
<!-- Example: Using a Set for faster lookup instead of array includes (O(1) average vs O(n)) -->
<script>
let arr = [1, 2, 3, 4, 5];
let target = 3;
// Using array includes (O(n))
console.log("Using includes:", arr.includes(target)); // true
// Using Set for O(1) average lookup
let set = new Set(arr);
console.log("Using Set:", set.has(target)); // true
</script>
Output:
Using includes: true
Using Set: true
Common algorithm design techniques include divide and conquer, greedy algorithms, dynamic programming, backtracking, and recursion.
<!-- Example: Divide and Conquer - Recursive sum of array elements -->
<script>
function recursiveSum(arr, start, end){
    if(start > end) return 0;
    if(start === end) return arr[start];
    let mid = Math.floor((start + end) / 2);
    return recursiveSum(arr, start, mid) + recursiveSum(arr, mid + 1, end);
}
let nums = [1, 2, 3, 4, 5];
console.log("Sum:", recursiveSum(nums, 0, nums.length - 1)); // Output: 15
</script>
Output: Sum: 15
Recursion is a method where a function calls itself to solve smaller instances of a problem.
<!-- Example: Factorial using recursion -->
<script>
function factorial(n){
    if(n === 0) return 1;        // Base case
    return n * factorial(n - 1); // Recursive call
}
console.log("Factorial of 5:", factorial(5)); // Output: 120
</script>
Output: Factorial of 5: 120
Common data structures include arrays, linked lists, stacks, queues, trees, and graphs.
<!-- Example: Stack implemented using array -->
<script>
class Stack {
    constructor(){
        this.items = [];
    }
    // Add element to top of stack
    push(element){
        this.items.push(element);
    }
    // Remove element from top of stack
    pop(){
        if(this.items.length === 0) return "Stack is empty";
        return this.items.pop();
    }
    // Peek top element without removing
    peek(){
        return this.items[this.items.length - 1];
    }
    // Check if stack is empty
    isEmpty(){
        return this.items.length === 0;
    }
}
let stack = new Stack();
stack.push(10);
stack.push(20);
console.log("Top element:", stack.peek());   // 20
console.log("Popped element:", stack.pop()); // 20
console.log("Is empty?", stack.isEmpty());   // false
</script>
Output:
Top element: 20
Popped element: 20
Is empty? false
An array is a collection of elements of the same data type stored in contiguous memory locations. It allows you to store multiple values in a single variable and access them by index.
// Example: Declaring and initializing an integer array
int numbers[] = {10, 20, 30, 40, 50}; // Declare an array of integers with 5 elements
System.out.println(numbers[0]);       // Access and print the first element (10)
Output: 10
Static arrays have fixed size allocated at compile time. Dynamic arrays can grow or shrink during runtime.
// Static array example in Java (fixed size)
int staticArray[] = new int[3]; // Fixed size 3
staticArray[0] = 1;
staticArray[1] = 2;
staticArray[2] = 3;
// Dynamic array example using ArrayList (resizable)
ArrayList<Integer> dynamicArray = new ArrayList<>(); // Creates dynamic array
dynamicArray.add(1);
dynamicArray.add(2);
dynamicArray.add(3);
Output: No direct output but dynamicArray size can change at runtime.
Basic operations on arrays include inserting, deleting elements, and traversing (iterating) through elements.
// Traversal example: Print all elements of an array
int arr[] = new int[6]; // capacity 6 leaves room for one insertion
arr[0] = 1; arr[1] = 2; arr[2] = 3; arr[3] = 4; arr[4] = 5;
int n = 5; // logical size
for(int i = 0; i < n; i++) {
    System.out.println(arr[i]); // Print each element
}
// Insertion in array (at index 2) by shifting elements
int pos = 2;  // position to insert
int val = 99; // value to insert
for(int i = n - 1; i >= pos; i--) {
    arr[i + 1] = arr[i]; // shift right
}
arr[pos] = val; // insert value
n++;            // increase logical size
// Deletion from array (at index 3) by shifting elements
pos = 3;
for(int i = pos; i < n - 1; i++) {
    arr[i] = arr[i + 1]; // shift left to delete
}
n--; // reduce logical size
Output before insertion:
1
2
3
4
5
Output after insertion of 99 at index 2:
1
2
99
3
4
5
Output after deletion at index 3:
1
2
99
4
5
Multidimensional arrays are arrays of arrays. For example, a 2D array is like a matrix with rows and columns.
// Declare and initialize a 2D array (3x3 matrix)
int matrix[][] = {
    {1, 2, 3},
    {4, 5, 6},
    {7, 8, 9}
};
// Access and print element at row 1, column 2 (value 6)
System.out.println(matrix[1][2]); // Outputs 6
// Traverse and print all elements
for(int i = 0; i < 3; i++) {
    for(int j = 0; j < 3; j++) {
        System.out.print(matrix[i][j] + " ");
    }
    System.out.println();
}
Output:
6
1 2 3
4 5 6
7 8 9
Linear search scans each element until the target is found.
// Linear search for value 30 in array
int arr[] = {10, 20, 30, 40, 50};
int target = 30;
int index = -1;
for(int i = 0; i < arr.length; i++) {
    if(arr[i] == target) {
        index = i; // store index if found
        break;
    }
}
System.out.println("Element found at index: " + index);
Output: Element found at index: 2
Sorting arranges elements in ascending or descending order. Bubble sort is a simple example.
// Bubble sort example
int arr[] = {5, 1, 4, 2, 8};
int n = arr.length;
for(int i = 0; i < n - 1; i++) {
    for(int j = 0; j < n - i - 1; j++) {
        if(arr[j] > arr[j + 1]) {
            int temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
        }
    }
}
// Print sorted array
for(int i = 0; i < n; i++) {
    System.out.print(arr[i] + " ");
}
Output: 1 2 4 5 8
Arrays are used for storing collections, matrices, image data, implementing data structures like stacks, queues, hash tables, and more.
// Example: Using an array as a stack to push and pop elements
int stack[] = new int[5];
int top = -1; // stack pointer
// Push elements
stack[++top] = 10;
stack[++top] = 20;
// Pop element
int popped = stack[top--]; // pops 20
System.out.println("Popped: " + popped);
Output: Popped: 20
Static arrays cannot resize. Dynamic arrays like ArrayList manage memory internally and resize automatically.
// Using ArrayList in Java to automatically resize
ArrayList<String> list = new ArrayList<>();
list.add("apple");
list.add("banana");
list.add("cherry");
System.out.println("Size before removal: " + list.size());
list.remove("banana");
System.out.println("Size after removal: " + list.size());
Output:
Size before removal: 3
Size after removal: 2
Common array problems include finding duplicates, missing elements, maximum/minimum values, and reversing or rotating arrays.
// Example: Find max element in array
int arr[] = {10, 50, 30, 20, 40};
int max = arr[0];
for(int i = 1; i < arr.length; i++) {
    if(arr[i] > max) {
        max = arr[i];
    }
}
System.out.println("Max element is: " + max);
Output: Max element is: 50
Dynamic arrays like Java's ArrayList automatically resize as you add or remove elements.
// Example of adding and iterating an ArrayList
ArrayList<String> fruits = new ArrayList<>();
fruits.add("Apple");
fruits.add("Banana");
fruits.add("Cherry");
for(String fruit : fruits) {
    System.out.println(fruit);
}
Output:
Apple
Banana
Cherry
A linked list is a linear data structure where elements are stored in nodes. Each node contains data and a reference (or pointer) to the next node in the sequence.
<!-- Example: Basic Node structure in JavaScript -->
<script>
class Node {
    constructor(data) {
        this.data = data; // Store node data
        this.next = null; // Pointer to the next node
    }
}
// Create a single node with value 10
let node = new Node(10);
console.log("Node data:", node.data); // Output: 10
console.log("Next node:", node.next); // Output: null
</script>
Output:
Node data: 10
Next node: null
A singly linked list is a chain of nodes where each node points to the next node and the last node points to null.
<!-- Example: Creating a simple singly linked list -->
<script>
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}
// Create nodes
let first = new Node(1);
let second = new Node(2);
let third = new Node(3);
// Link nodes: 1 -> 2 -> 3 -> null
first.next = second;
second.next = third;
// Traverse and print list
let current = first;
while(current !== null) {
    console.log(current.data);
    current = current.next;
}
</script>
Output:
1
2
3
A doubly linked list has nodes with two pointers: one to the next node and one to the previous node, allowing traversal in both directions.
<!-- Example: Node structure for doubly linked list -->
<script>
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
        this.prev = null; // Pointer to previous node
    }
}
// Create nodes
let first = new Node(1);
let second = new Node(2);
let third = new Node(3);
// Link nodes
first.next = second;
second.prev = first;
second.next = third;
third.prev = second;
// Traverse forward
let current = first;
while(current !== null) {
    console.log(current.data);
    current = current.next;
}
</script>
Output:
1
2
3
In a circular linked list, the last node points back to the first node, forming a circle. It can be singly or doubly linked.
<!-- Example: Circular singly linked list -->
<script>
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}
// Create nodes
let first = new Node(1);
let second = new Node(2);
let third = new Node(3);
// Link nodes in circle: 1 -> 2 -> 3 -> 1 ...
first.next = second;
second.next = third;
third.next = first;
// Traverse circular list for 6 steps to show the loop
let current = first;
for(let i = 0; i < 6; i++) {
    console.log(current.data);
    current = current.next;
}
</script>
Output:
1
2
3
1
2
3
Common operations include inserting nodes, deleting nodes, and traversing the list.
<!-- Example: Inserting at the head and deleting a node -->
<script>
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}
class LinkedList {
    constructor() {
        this.head = null;
    }
    // Insert new node at head
    insertAtHead(data) {
        let newNode = new Node(data);
        newNode.next = this.head;
        this.head = newNode;
    }
    // Delete node by value
    deleteNode(key) {
        if(!this.head) return;
        if(this.head.data === key) {
            this.head = this.head.next;
            return;
        }
        let current = this.head;
        while(current.next && current.next.data !== key) {
            current = current.next;
        }
        if(current.next) {
            current.next = current.next.next;
        }
    }
    // Print list
    printList() {
        let current = this.head;
        while(current) {
            console.log(current.data);
            current = current.next;
        }
    }
}
// Usage
let list = new LinkedList();
list.insertAtHead(3);
list.insertAtHead(2);
list.insertAtHead(1);
list.printList();
// Delete node with data 2
list.deleteNode(2);
console.log("After deletion:");
list.printList();
</script>
Output:
1
2
3
After deletion:
1
3
To find a node with a specific value, traverse the list until the node is found or the end is reached.
<!-- Example: Search for a value in singly linked list -->
<script>
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}
class LinkedList {
    constructor() {
        this.head = null;
    }
    insertAtHead(data) {
        let newNode = new Node(data);
        newNode.next = this.head;
        this.head = newNode;
    }
    // Search for node with value key
    search(key) {
        let current = this.head;
        while(current) {
            if(current.data === key) return true;
            current = current.next;
        }
        return false;
    }
}
let list = new LinkedList();
list.insertAtHead(1);
list.insertAtHead(2);
list.insertAtHead(3);
console.log("Search 2:", list.search(2)); // true
console.log("Search 5:", list.search(5)); // false
</script>
Output:
Search 2: true
Search 5: false
Linked lists allow dynamic memory allocation and efficient insertions/deletions but have slower access times compared to arrays.
<!-- Example: Summary of advantages and disadvantages -->
<!--
Advantages:
- Dynamic size
- Easy insertion/deletion
Disadvantages:
- No random access
- Extra memory for pointers
-->
Used in implementing stacks, queues, adjacency lists in graphs, undo functionality in software, and more.
<!-- Example: Using linked list for stack implementation (see earlier stack example) --> <!-- Stacks and queues are often implemented with linked lists to allow dynamic resizing. -->
Linked lists allocate memory dynamically for nodes. Proper management requires handling node creation and deletion carefully to avoid memory leaks.
<!--
In languages like C/C++, manual memory allocation and deallocation
(malloc/free) is required. In JavaScript, garbage collection reclaims
unused nodes automatically.
-->
Common problems include reversing a list, detecting loops, finding the middle node, and merging two lists.
<!-- Example: Reverse a singly linked list -->
<script>
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}
class LinkedList {
    constructor() {
        this.head = null;
    }
    insertAtHead(data) {
        let newNode = new Node(data);
        newNode.next = this.head;
        this.head = newNode;
    }
    // Reverse the linked list
    reverse() {
        let prev = null;
        let current = this.head;
        while(current) {
            let nextNode = current.next;
            current.next = prev;
            prev = current;
            current = nextNode;
        }
        this.head = prev;
    }
    printList() {
        let current = this.head;
        while(current) {
            console.log(current.data);
            current = current.next;
        }
    }
}
let list = new LinkedList();
list.insertAtHead(1);
list.insertAtHead(2);
list.insertAtHead(3);
console.log("Original list:");
list.printList();
list.reverse();
console.log("Reversed list:");
list.printList();
</script>
Output:
Original list:
3
2
1
Reversed list:
1
2
3
A stack is a linear data structure which follows the Last In First Out (LIFO) principle. Elements can be inserted or removed only from the top of the stack.
<!-- Example of stack concept with array representation -->
<!-- The last element inserted is the first to be removed -->
Stack: [10, 20, 30]  <!-- 30 is on the top of the stack -->
Pop operation removes 30
Stack after pop: [10, 20]
Basic operations on a stack include push (insert at the top), pop (remove from the top), peek (view the top element without removing it), and isEmpty (check whether the stack is empty).
// JavaScript example for stack operations using an array
let stack = [];                         // Initialize empty stack array
stack.push(10);                         // Push 10 onto the stack
stack.push(20);                         // Push 20 onto the stack
let top = stack[stack.length - 1];      // Peek top element (20)
let popped = stack.pop();               // Pop top element (20)
console.log("Top element:", top);       // Output: Top element: 20
console.log("Popped element:", popped); // Output: Popped element: 20
console.log("Stack now:", stack);       // Output: Stack now: [10]
Output:
Top element: 20
Popped element: 20
Stack now: [10]
A stack can be implemented using an array where insertion and deletion happen at the end (top) of the array.
// Stack implementation using array in JavaScript
class Stack {
    constructor() {
        this.items = []; // Array to hold stack elements
    }
    push(element) {
        this.items.push(element); // Add element to the top
    }
    pop() {
        if(this.isEmpty())       // Check if stack is empty
            return "Stack is empty";
        return this.items.pop(); // Remove and return top element
    }
    peek() {
        if(this.isEmpty()) return "Stack is empty";
        return this.items[this.items.length - 1]; // Return top element without removing
    }
    isEmpty() {
        return this.items.length === 0; // Check if stack is empty
    }
}
let stack = new Stack(); // Create new stack
stack.push(5);           // Push 5
stack.push(8);           // Push 8
console.log(stack.peek()); // Output: 8
console.log(stack.pop());  // Output: 8
console.log(stack.peek()); // Output: 5
Output:
8
8
5
A stack can also be implemented using linked lists, where the top of the stack is the head of the list.
// Stack implementation using linked list in JavaScript
class Node {
    constructor(data) {
        this.data = data; // Node data
        this.next = null; // Pointer to next node
    }
}
class Stack {
    constructor() {
        this.top = null; // Top points to head node
    }
    push(data) {
        let newNode = new Node(data); // Create new node
        newNode.next = this.top;      // Link new node to current top
        this.top = newNode;           // Update top to new node
    }
    pop() {
        if(this.top == null)        // Check if stack is empty
            return "Stack is empty";
        let popped = this.top.data; // Store top data
        this.top = this.top.next;   // Move top to next node
        return popped;              // Return popped data
    }
    peek() {
        if(this.top == null) return "Stack is empty";
        return this.top.data; // Return top data
    }
}
let stack = new Stack(); // Create new stack
stack.push(1);           // Push 1
stack.push(2);           // Push 2
console.log(stack.peek()); // Output: 2
console.log(stack.pop());  // Output: 2
console.log(stack.peek()); // Output: 1
Output:
2
2
1
Stacks are widely used in expression evaluation, function call management, undo mechanisms, and backtracking.
// Simple postfix expression evaluation using a stack
function evaluatePostfix(expression) {
    let stack = [];
    for(let char of expression) {
        if(!isNaN(char)) {
            stack.push(Number(char)); // Push operands to stack
        } else {
            let val2 = stack.pop();   // Pop two operands
            let val1 = stack.pop();
            switch(char) {
                case '+': stack.push(val1 + val2); break;
                case '-': stack.push(val1 - val2); break;
                case '*': stack.push(val1 * val2); break;
                case '/': stack.push(val1 / val2); break;
            }
        }
    }
    return stack.pop(); // Final result
}
console.log(evaluatePostfix("231*+9-")); // Output: -4
Output:
-4
Stacks can be used to check for balanced parentheses in an expression.
// Balanced parentheses checker
function isBalanced(str) {
    let stack = [];
    for(let char of str) {
        if(char === '(') {
            stack.push(char);      // Push opening bracket
        } else if(char === ')') {
            if(stack.length === 0) // No matching opening bracket
                return false;
            stack.pop();           // Pop matching opening bracket
        }
    }
    return stack.length === 0; // True if all matched
}
console.log(isBalanced("((()))")); // Output: true
console.log(isBalanced("(()"));    // Output: false
Output:
true
false
Infix, prefix, and postfix are three ways to write expressions: infix places operators between operands (a + b), prefix before them (+ a b), and postfix after them (a b +).
// Example: converting infix to postfix using a stack
function precedence(op) {
    if(op === '+' || op === '-') return 1;
    if(op === '*' || op === '/') return 2;
    return 0;
}
function infixToPostfix(exp) {
    let stack = [];
    let result = "";
    for(let char of exp) {
        if(/[a-zA-Z0-9]/.test(char)) {
            result += char; // Add operand to result
        } else {
            while(stack.length && precedence(stack[stack.length - 1]) >= precedence(char)) {
                result += stack.pop(); // Pop higher precedence operators
            }
            stack.push(char); // Push current operator
        }
    }
    while(stack.length) {
        result += stack.pop(); // Pop remaining operators
    }
    return result;
}
console.log(infixToPostfix("a+b*c")); // Output: abc*+
Output:
abc*+
Stacks are useful in problems like implementing undo/redo, tracking browser history, and backtracking.
// Undo example using a stack
let undoStack = [];
function addAction(action) {
    undoStack.push(action); // Add action to stack
}
function undo() {
    if(undoStack.length === 0) {
        return "Nothing to undo";
    }
    return undoStack.pop(); // Undo last action
}
addAction("Type A");
addAction("Type B");
console.log(undo()); // Output: Type B
console.log(undo()); // Output: Type A
console.log(undo()); // Output: Nothing to undo
Output:
Type B
Type A
Nothing to undo
Function calls are managed using a call stack which keeps track of active functions and their return addresses.
<!-- Simple example explanation -->
Function main() calls function A:
- main is pushed to the stack
- A is pushed to the stack
- A returns and is popped
- main continues and finally returns
The call stack tracks the order of execution and each function's local variables.
Comparison between Stack and Queue:
Feature | Stack | Queue |
---|---|---|
Order | LIFO (Last In First Out) | FIFO (First In First Out) |
Insertion | Top | Rear/End |
Deletion | Top | Front/Beginning |
Use cases | Undo, Backtracking, Expression evaluation | Scheduling, Buffers |
A queue is a linear data structure that follows the FIFO (First In First Out) principle, where elements are added at the rear and removed from the front.
<!-- Example: Simple queue concept -->
<script>
let queue = [];
// Enqueue elements
queue.push(1);
queue.push(2);
queue.push(3);
console.log("Queue:", queue); // Output: [1, 2, 3]
// Dequeue element
let removed = queue.shift();
console.log("Removed:", removed);           // Output: 1
console.log("Queue after dequeue:", queue); // Output: [2, 3]
</script>
Output:
Queue: [1,2,3]
Removed: 1
Queue after dequeue: [2,3]
Key operations: Enqueue adds to rear, Dequeue removes from front, Peek views the front without removing.
<!-- Example: Queue operations with array -->
<script>
let queue = [];
// Enqueue function
function enqueue(item) {
    queue.push(item);
}
// Dequeue function
function dequeue() {
    if(queue.length === 0) return null;
    return queue.shift();
}
// Peek function
function peek() {
    if(queue.length === 0) return null;
    return queue[0];
}
enqueue(10);
enqueue(20);
enqueue(30);
console.log("Front item:", peek());  // 10
console.log("Dequeued:", dequeue()); // 10
console.log("Queue now:", queue);    // [20, 30]
</script>
Output:
Front item: 10
Dequeued: 10
Queue now: [20,30]
Queues can be implemented using JavaScript arrays with push (enqueue) and shift (dequeue); note that shift is O(n) because it reindexes the remaining elements.
<!-- Example: Simple queue class with array -->
<script>
class Queue {
    constructor() {
        this.items = [];
    }
    enqueue(element) {
        this.items.push(element);
    }
    dequeue() {
        if(this.isEmpty()) return "Queue is empty";
        return this.items.shift();
    }
    isEmpty() {
        return this.items.length === 0;
    }
    front() {
        if(this.isEmpty()) return "Queue is empty";
        return this.items[0];
    }
    printQueue() {
        console.log(this.items.join(", "));
    }
}
let queue = new Queue();
queue.enqueue(1);
queue.enqueue(2);
queue.enqueue(3);
queue.printQueue();                        // 1, 2, 3
console.log("Dequeued:", queue.dequeue()); // 1
queue.printQueue();                        // 2, 3
</script>
Output:
1, 2, 3
Dequeued: 1
2, 3
Queues can also be implemented using linked lists for efficient insertion and deletion.
<!-- Example: Queue implemented with linked list nodes -->
<script>
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}
class Queue {
    constructor() {
        this.front = null;
        this.rear = null;
    }
    enqueue(data) {
        let newNode = new Node(data);
        if(this.rear) {
            this.rear.next = newNode;
        }
        this.rear = newNode;
        if(!this.front) {
            this.front = newNode;
        }
    }
    dequeue() {
        if(!this.front) return null;
        let removedData = this.front.data;
        this.front = this.front.next;
        if(!this.front) {
            this.rear = null;
        }
        return removedData;
    }
    peek() {
        return this.front ? this.front.data : null;
    }
    printQueue() {
        let current = this.front;
        let items = [];
        while(current) {
            items.push(current.data);
            current = current.next;
        }
        console.log(items.join(", "));
    }
}
let queue = new Queue();
queue.enqueue(10);
queue.enqueue(20);
queue.enqueue(30);
queue.printQueue();                        // 10, 20, 30
console.log("Dequeued:", queue.dequeue()); // 10
queue.printQueue();                        // 20, 30
</script>
Output:
10, 20, 30
Dequeued: 10
20, 30
A circular queue treats the queue as a circle to efficiently reuse empty spaces.
<!-- Example: Circular queue using array (fixed size) -->
<script>
class CircularQueue {
    constructor(size) {
        this.size = size;
        this.queue = new Array(size);
        this.front = -1;
        this.rear = -1;
    }
    isFull() {
        return (this.rear + 1) % this.size === this.front;
    }
    isEmpty() {
        return this.front === -1;
    }
    enqueue(data) {
        if(this.isFull()) {
            console.log("Queue is full");
            return;
        }
        if(this.isEmpty()) {
            this.front = 0;
        }
        this.rear = (this.rear + 1) % this.size;
        this.queue[this.rear] = data;
    }
    dequeue() {
        if(this.isEmpty()) {
            console.log("Queue is empty");
            return null;
        }
        let data = this.queue[this.front];
        if(this.front === this.rear) {
            this.front = -1;
            this.rear = -1;
        } else {
            this.front = (this.front + 1) % this.size;
        }
        return data;
    }
    printQueue() {
        if(this.isEmpty()) {
            console.log("Queue is empty");
            return;
        }
        let i = this.front;
        let items = [];
        while(true) {
            items.push(this.queue[i]);
            if(i === this.rear) break;
            i = (i + 1) % this.size;
        }
        console.log(items.join(", "));
    }
}
let cq = new CircularQueue(3);
cq.enqueue(1);
cq.enqueue(2);
cq.enqueue(3); // Queue is full on next enqueue
cq.printQueue();                        // 1, 2, 3
console.log("Dequeued:", cq.dequeue()); // 1
cq.enqueue(4);
cq.printQueue();                        // 2, 3, 4
</script>
Output:
1, 2, 3
Dequeued: 1
2, 3, 4
A priority queue dequeues elements based on priority rather than order.
<!-- Example: Simple priority queue using array -->
<script>
class PriorityQueue {
    constructor() {
        this.items = [];
    }
    enqueue(element, priority) {
        let queueElement = {element, priority};
        let added = false;
        for(let i = 0; i < this.items.length; i++) {
            if(queueElement.priority < this.items[i].priority) {
                this.items.splice(i, 0, queueElement);
                added = true;
                break;
            }
        }
        if(!added) {
            this.items.push(queueElement);
        }
    }
    dequeue() {
        if(this.isEmpty()) return null;
        return this.items.shift().element;
    }
    isEmpty() {
        return this.items.length === 0;
    }
    printQueue() {
        let result = this.items.map(item => `${item.element}(${item.priority})`);
        console.log(result.join(", "));
    }
}
let pq = new PriorityQueue();
pq.enqueue("Low priority", 3);
pq.enqueue("High priority", 1);
pq.enqueue("Medium priority", 2);
pq.printQueue();                        // High priority(1), Medium priority(2), Low priority(3)
console.log("Dequeued:", pq.dequeue()); // High priority
pq.printQueue();                        // Medium priority(2), Low priority(3)
</script>
Output:
High priority(1), Medium priority(2), Low priority(3)
Dequeued: High priority
Medium priority(2), Low priority(3)
A deque allows insertion and removal from both front and rear ends.
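A minimal array-based sketch of a deque (the class and method names `addFront`/`removeRear` are illustrative, not a standard API; real deques often use a doubly linked list or ring buffer for O(1) operations at both ends):

```javascript
// Simple deque backed by an array (illustrative sketch)
class Deque {
    constructor() { this.items = []; }
    addFront(element) { this.items.unshift(element); } // insert at front
    addRear(element) { this.items.push(element); }     // insert at rear
    removeFront() { return this.items.length ? this.items.shift() : null; }
    removeRear() { return this.items.length ? this.items.pop() : null; }
    printDeque() { console.log(this.items.join(", ")); }
}

let dq = new Deque();
dq.addRear(1);
dq.addRear(2);
dq.addFront(0);
dq.printDeque();                                      // 0, 1, 2
console.log("Removed from front:", dq.removeFront()); // 0
console.log("Removed from rear:", dq.removeRear());   // 2
dq.printDeque();                                      // 1
```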
Output:
0, 1, 2
Removed from front: 0
Removed from rear: 2
1
Queues are widely used in scheduling, buffering, printing tasks, and asynchronous data handling.
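A task-scheduling sketch using an array processed in FIFO order (the function names here are illustrative):

```javascript
// Task scheduler sketch: tasks are processed in arrival order (FIFO)
let tasks = [];

function addTask(task) {
    tasks.push(task); // enqueue at the rear
    console.log("Task added:", task);
}

function processTask() {
    if (tasks.length === 0) {
        console.log("No tasks to process");
        return null;
    }
    let task = tasks.shift(); // dequeue from the front
    console.log("Processing task:", task);
    return task;
}

addTask("Task 1");
addTask("Task 2");
processTask(); // Processing task: Task 1
processTask(); // Processing task: Task 2
processTask(); // No tasks to process
```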
Output:
Task added: Task 1
Task added: Task 2
Processing task: Task 1
Processing task: Task 2
No tasks to process
Common problems: Implementing queue with limited size, reversing a queue, and using two queues to implement stacks.
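Reversing a queue can be sketched by draining it into a stack and popping everything back out:

```javascript
// Reverse a queue using a stack: dequeue everything onto the stack,
// then pop it back; LIFO order undoes FIFO order
function reverseQueue(queue) {
    let stack = [];
    while (queue.length) stack.push(queue.shift()); // drain queue into stack
    let reversed = [];
    while (stack.length) reversed.push(stack.pop()); // pop in reversed order
    return reversed;
}

let queue = [1, 2, 3, 4];
console.log("Original queue:", JSON.stringify(queue));
console.log("Reversed queue:", JSON.stringify(reverseQueue(queue)));
```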
Output:
Original queue: [1,2,3,4]
Reversed queue: [4,3,2,1]
Examples include print spooling, CPU task scheduling, call centers, and web server request handling.
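A print-spooler sketch built the same way (the file names are made up for illustration):

```javascript
// Print spooler sketch: jobs print in submission order (FIFO)
let printJobs = [];

function addPrintJob(file) {
    printJobs.push(file); // enqueue the job
    console.log("Added print job:", file);
}

function printNext() {
    if (printJobs.length === 0) {
        console.log("No print jobs");
        return null;
    }
    let job = printJobs.shift(); // dequeue the oldest job
    console.log("Printing:", job);
    return job;
}

addPrintJob("Document1.pdf");
addPrintJob("Photo.png");
printNext(); // Printing: Document1.pdf
printNext(); // Printing: Photo.png
printNext(); // No print jobs
```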
Output:
Added print job: Document1.pdf
Added print job: Photo.png
Printing: Document1.pdf
Printing: Photo.png
No print jobs
A tree is a hierarchical data structure made of nodes connected by edges, with one root node and potentially many levels of child nodes.
<!-- Simple tree node structure in JavaScript -->
function TreeNode(value) { // Node constructor
    this.value = value;    // Value stored in the node
    this.children = [];    // Array to hold child nodes
}
The root is the top node. Leaf nodes have no children. Height is the length of the longest path from the root to a leaf. Depth is the distance from the root to a given node.
<!-- Example terminology explanation -->
let root = new TreeNode(1); // Root node
let leaf = new TreeNode(2); // Leaf node if no children
root.children.push(leaf);   // Adding leaf as child of root
In a binary tree, each node has at most two children, called the left and right child.
<!-- Binary Tree Node structure -->
function BinaryTreeNode(value) {
    this.value = value; // Node value
    this.left = null;   // Left child
    this.right = null;  // Right child
}
BSTs are binary trees where every value in a node's left subtree is less than the node's value and every value in its right subtree is greater.
<!-- Insert function in BST -->
function insert(root, value) {
    if (root == null) {
        return new BinaryTreeNode(value); // Create new node if empty
    }
    if (value < root.value) {
        root.left = insert(root.left, value);   // Insert into left subtree
    } else {
        root.right = insert(root.right, value); // Insert into right subtree
    }
    return root;
}
Tree traversals are systematic ways to visit every node in a tree:
<!-- Inorder traversal (Left, Root, Right) -->
function inorder(node) {
    if (node == null) return;
    inorder(node.left);
    console.log(node.value);
    inorder(node.right);
}
<!-- Preorder traversal (Root, Left, Right) -->
function preorder(node) {
    if (node == null) return;
    console.log(node.value);
    preorder(node.left);
    preorder(node.right);
}
<!-- Postorder traversal (Left, Right, Root) -->
function postorder(node) {
    if (node == null) return;
    postorder(node.left);
    postorder(node.right);
    console.log(node.value);
}
Trees that maintain height balance to keep operations efficient. AVL trees rotate nodes to balance. Red-Black trees use color rules.
<!-- Example: Conceptual balancing in AVL -->
<!-- Full rotation logic omitted for brevity -->
function rotateRight(y) {
    let x = y.left;
    let T2 = x.right;
    x.right = y;
    y.left = T2;
    return x; // New root after rotation
}
Binary heaps are complete binary trees used to implement priority queues.
<!-- Min-Heap insert example -->
let heap = [];
function insertHeap(value) {
    heap.push(value); // Add at end
    let i = heap.length - 1;
    while (i > 0) {
        let parent = Math.floor((i - 1) / 2);
        if (heap[parent] <= heap[i]) break;
        [heap[parent], heap[i]] = [heap[i], heap[parent]]; // Swap
        i = parent;
    }
}
A trie (prefix tree) is a tree where each node represents a character and each root-to-node path spells a prefix, enabling efficient word lookup.
<!-- Trie node structure -->
function TrieNode() {
    this.children = {}; // Map of char to TrieNode
    this.isEndOfWord = false;
}
function insertWord(root, word) {
    let node = root;
    for (let ch of word) {
        if (!node.children[ch]) {
            node.children[ch] = new TrieNode();
        }
        node = node.children[ch];
    }
    node.isEndOfWord = true;
}
Trees are used in file systems, databases, expression parsing, and network routing.
<!-- Example: Directory structure using TreeNodes -->
let rootDir = new TreeNode("root");
rootDir.children.push(new TreeNode("Documents"));
rootDir.children.push(new TreeNode("Pictures"));
Practice problems: find height, check if balanced, lowest common ancestor, etc.
<!-- Example: Find height of a binary tree -->
function height(node) {
  if (node == null) return 0;
  return 1 + Math.max(height(node.left), height(node.right));
}
Graphs are data structures consisting of nodes (vertices) connected by edges, used to model pairwise relationships.
Graphs can be directed, where edges have direction, or undirected, where edges have no direction.
Graphs can be represented using adjacency matrices or adjacency lists, each with different space and access characteristics.
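As an illustration, here is a minimal sketch in Python of both representations for the same small undirected graph (the variable names are ours, chosen for the example):

```python
# Hypothetical 4-node undirected graph given as an edge list
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency matrix: O(n^2) space, O(1) edge lookup
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = 1
    matrix[v][u] = 1  # mirror the entry because the graph is undirected

# Adjacency list: O(n + e) space, O(degree) edge lookup
adj = {i: [] for i in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[0][1])  # 1 (edge exists)
print(adj[2])        # [0, 1, 3]
```

The matrix favors dense graphs and constant-time edge tests; the list favors sparse graphs and fast iteration over neighbors.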
Depth-First Search (DFS) and Breadth-First Search (BFS) are fundamental algorithms to explore nodes in a graph.
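A minimal BFS sketch in Python over an adjacency list (the graph and function name are illustrative; DFS follows the same shape with a stack or recursion instead of a queue):

```python
from collections import deque

def bfs(adj, start):
    # Visit nodes level by level using a FIFO queue
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adj[node]:
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)
    return visited

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs(adj, 0))  # [0, 1, 2, 3]
```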
Cycle detection algorithms help identify loops within graphs, critical for applications like deadlock detection.
Topological sorting orders nodes linearly in directed acyclic graphs (DAGs) based on dependencies.
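One standard way to do this is Kahn's algorithm, sketched below in Python (the example DAG is ours): repeatedly remove nodes that have no remaining incoming edges.

```python
from collections import deque

def topo_sort(adj, n):
    # Count incoming edges for every node
    indegree = [0] * n
    for u in adj:
        for v in adj[u]:
            indegree[v] += 1
    # Start with all nodes that have no prerequisites
    queue = deque(i for i in range(n) if indegree[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order  # shorter than n if the graph contains a cycle

# Edges point from prerequisite to dependent task
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(topo_sort(adj, 4))  # [0, 1, 2, 3]
```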
Dijkstra’s and Bellman-Ford algorithms compute shortest paths in graphs with varying constraints on weights.
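For example, Dijkstra's algorithm (which requires non-negative weights) can be sketched in Python with a binary heap; the toy graph below is ours:

```python
import heapq

def dijkstra(adj, start):
    # adj maps node -> list of (neighbor, weight); weights must be non-negative
    dist = {node: float('inf') for node in adj}
    dist[start] = 0
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry, already found a shorter path
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
print(dijkstra(adj, 'A'))  # {'A': 0, 'B': 1, 'C': 3}
```

Bellman-Ford trades speed for the ability to handle negative edge weights and detect negative cycles.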
Prim’s and Kruskal’s algorithms find minimum spanning trees that connect all nodes with minimum total edge weight.
Graphs are widely used in social networks, routing, scheduling, and many optimization problems.
These include problems like network flow, graph coloring, and Hamiltonian paths, which require more complex algorithms.
Hashing is a technique to map data to fixed-size values for fast lookup and retrieval.
<!-- Simple hash function example in JavaScript -->
function simpleHash(key, size) {
  let hash = 0; // Initialize hash value
  for (let i = 0; i < key.length; i++) {
    hash += key.charCodeAt(i); // Sum char codes
  }
  return hash % size; // Modulo to fit table size
}
Functions that convert input data to a fixed-size hash code.
<!-- DJB2 hash function example -->
function djb2Hash(str, size) {
  let hash = 5381; // Starting seed
  for (let i = 0; i < str.length; i++) {
    hash = ((hash << 5) + hash) + str.charCodeAt(i); // hash * 33 + c
  }
  return (hash >>> 0) % size; // Coerce to unsigned 32-bit so the index is never negative
}
Collisions occur when two keys map to the same index; chaining uses linked lists, open addressing probes for free slots.
<!-- Example of chaining in a hash table -->
class HashTable {
  constructor(size) {
    this.size = size;
    this.buckets = Array(size).fill(null).map(() => []);
  }
  insert(key, value) {
    let index = simpleHash(key, this.size);
    this.buckets[index].push([key, value]); // Add key-value pair to bucket list
  }
}
A structure that stores key-value pairs for efficient access using hashing.
<!-- Basic hash table with insert and get -->
class HashTable {
  constructor(size) {
    this.size = size;
    this.buckets = Array(size).fill(null).map(() => []);
  }
  insert(key, value) {
    let index = simpleHash(key, this.size);
    this.buckets[index].push([key, value]);
  }
  get(key) {
    let index = simpleHash(key, this.size);
    for (let [k, v] of this.buckets[index]) {
      if (k === key) return v; // Return value if found
    }
    return null; // Not found
  }
}
Load factor measures fullness; rehashing resizes and redistributes entries to maintain performance.
<!-- Load factor calculation example -->
function loadFactor(hashTable) {
  let itemCount = 0;
  for (let bucket of hashTable.buckets) {
    itemCount += bucket.length; // Count all items in buckets
  }
  return itemCount / hashTable.size; // Ratio of items to bucket count
}
Used in databases, caches, cryptography, and data indexing.
<!-- Example: Using a hash-based cache -->
let cache = {};
function cacheResult(key, compute) {
  if (key in cache) return cache[key]; // Return cached value if present (works even for falsy results)
  let result = compute();
  cache[key] = result; // Store result in cache
  return result;
}
Secure hashing produces fixed-length output to ensure data integrity and security.
<!-- Cryptographic hash (SHA-256) usage with Node's crypto module -->
const crypto = require('crypto');
function sha256Hash(data) {
  return crypto.createHash('sha256').update(data).digest('hex'); // Generate secure hash
}
Hashing technique with no collisions for a static set of keys.
<!-- Conceptual perfect hashing example --> <!-- Perfect hash functions depend on fixed known keys, usually generated offline -->
Common problems include collisions, clustering, and the cost of resizing.
<!-- Problem: Collision causes degraded performance --> <!-- Solution: Use better hash functions and collision resolution methods -->
Good hash functions distribute keys uniformly to minimize collisions.
<!-- Simple tips for hash functions --> <!-- 1. Use prime numbers in computation --> <!-- 2. Avoid patterns in keys --> <!-- 3. Use bitwise operations for mixing bits -->
Sorting arranges data in a specific order (ascending or descending), facilitating efficient searching and data processing.
Bubble Sort repeatedly swaps adjacent elements if they are in the wrong order, gradually moving the largest unsorted element to the end.
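The description above can be sketched in Python as follows (the early-exit flag is a common optimization for nearly sorted input):

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i+1 elements are in place at the end
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # no swaps means the array is already sorted
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```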
Selection Sort selects the smallest (or largest) element from the unsorted part and swaps it with the first unsorted element.
Insertion Sort builds the sorted array one element at a time by inserting elements into their correct position.
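A minimal Insertion Sort sketch in Python, matching the description above:

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements larger than key one position to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # insert key into its correct position
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```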
Basic sorting algorithms generally have O(n²) time complexity in the average and worst cases.
A sorting algorithm is stable if it preserves the relative order of equal elements.
Basic sorts often sort in-place requiring O(1) extra space, but some need additional memory.
Basic sorts are suitable for small datasets or nearly sorted data due to their simplicity.
Real-world applications use efficient sorts like mergesort or quicksort but understanding basics is essential.
Challenges include sorting large datasets, sorting with constraints, or sorting linked lists.
Divide and conquer algorithm that divides the array into halves, sorts, and merges them.
<!-- Merge Sort implementation in JavaScript -->
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // Base case: single element is sorted
  const mid = Math.floor(arr.length / 2); // Find middle index
  const left = mergeSort(arr.slice(0, mid)); // Sort left half recursively
  const right = mergeSort(arr.slice(mid)); // Sort right half recursively
  return merge(left, right); // Merge sorted halves
}

function merge(left, right) {
  let result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i]);
      i++;
    } else {
      result.push(right[j]);
      j++;
    }
  }
  return result.concat(left.slice(i)).concat(right.slice(j)); // Append remaining elements
}
Sorts by selecting a pivot and partitioning the array into elements less or greater than the pivot.
<!-- Quick Sort implementation in JavaScript -->
function quickSort(arr) {
  if (arr.length <= 1) return arr; // Base case
  const pivot = arr[arr.length - 1]; // Choose pivot element
  const left = [];
  const right = [];
  for (let i = 0; i < arr.length - 1; i++) {
    if (arr[i] <= pivot) left.push(arr[i]);
    else right.push(arr[i]);
  }
  return quickSort(left).concat(pivot, quickSort(right)); // Recursively sort partitions
}
Builds a max heap and extracts the maximum repeatedly to sort the array.
<!-- Heap Sort implementation in JavaScript -->
function heapSort(arr) {
  let n = arr.length;
  function heapify(i, size) {
    let largest = i;
    let left = 2 * i + 1;
    let right = 2 * i + 2;
    if (left < size && arr[left] > arr[largest]) largest = left;
    if (right < size && arr[right] > arr[largest]) largest = right;
    if (largest !== i) {
      [arr[i], arr[largest]] = [arr[largest], arr[i]]; // Swap
      heapify(largest, size); // Recursively heapify affected subtree
    }
  }
  // Build max heap
  for (let i = Math.floor(n / 2) - 1; i >= 0; i--) {
    heapify(i, n);
  }
  // Extract elements from heap
  for (let i = n - 1; i > 0; i--) {
    [arr[0], arr[i]] = [arr[i], arr[0]]; // Move current max to end
    heapify(0, i); // Heapify reduced heap
  }
  return arr;
}
Non-comparative sorting that counts occurrences of each value.
<!-- Counting Sort implementation in JavaScript -->
function countingSort(arr, maxValue) {
  let count = new Array(maxValue + 1).fill(0);
  let output = new Array(arr.length);
  for (let num of arr) {
    count[num]++; // Count each element
  }
  for (let i = 1; i <= maxValue; i++) {
    count[i] += count[i - 1]; // Accumulate counts
  }
  for (let i = arr.length - 1; i >= 0; i--) {
    output[count[arr[i]] - 1] = arr[i]; // Place elements in output (stable)
    count[arr[i]]--;
  }
  return output;
}
Sorts integers by processing individual digits, using a stable sort at each digit.
<!-- Radix Sort implementation in JavaScript -->
function radixSort(arr) {
  let maxNum = Math.max(...arr);
  let digit = 1;
  while (Math.floor(maxNum / digit) > 0) {
    arr = countingSortByDigit(arr, digit);
    digit *= 10;
  }
  return arr;
}

function countingSortByDigit(arr, digit) {
  let count = new Array(10).fill(0);
  let output = new Array(arr.length);
  for (let num of arr) {
    let digitNum = Math.floor(num / digit) % 10;
    count[digitNum]++;
  }
  for (let i = 1; i < 10; i++) {
    count[i] += count[i - 1];
  }
  for (let i = arr.length - 1; i >= 0; i--) {
    let digitNum = Math.floor(arr[i] / digit) % 10;
    output[count[digitNum] - 1] = arr[i];
    count[digitNum]--;
  }
  return output;
}
Distributes elements into buckets, sorts each bucket, then concatenates.
<!-- Bucket Sort implementation in JavaScript -->
function bucketSort(arr, bucketSize = 5) {
  if (arr.length === 0) return arr;
  let minValue = Math.min(...arr);
  let maxValue = Math.max(...arr);
  let bucketCount = Math.floor((maxValue - minValue) / bucketSize) + 1;
  let buckets = Array.from({ length: bucketCount }, () => []);
  for (let num of arr) {
    buckets[Math.floor((num - minValue) / bucketSize)].push(num);
  }
  return buckets.reduce((sortedArr, bucket) => {
    return sortedArr.concat(bucket.sort((a, b) => a - b));
  }, []);
}
Merge Sort and Quick Sort are divide-and-conquer algorithms, Heap Sort relies on the binary heap, and Counting Sort and Radix Sort are non-comparative.
<!-- Summary --> <!-- Merge Sort: O(n log n), stable --> <!-- Quick Sort: O(n log n) average, unstable --> <!-- Heap Sort: O(n log n), unstable --> <!-- Counting Sort: O(n + k), stable, integer keys --> <!-- Radix Sort: O(d*(n + k)), stable, digit-based -->
Techniques for sorting data too large to fit into memory using disk storage.
<!-- Conceptual: External merge sort uses disk --> <!-- Read chunks, sort in memory, write back, merge chunks on disk -->
Approaches include external sorting, parallel sorting, and distributed algorithms.
<!-- Example: MapReduce sorting for large data --> <!-- Divide data across nodes, sort locally, merge globally -->
Handling duplicates, stability, memory usage, and performance under constraints.
<!-- Challenge examples --> <!-- Stability: keeping relative order of equal elements --> <!-- Memory constraints: in-place vs out-of-place sorting -->
Linear Search checks each element sequentially until the target is found or the list ends.
Binary Search efficiently finds an element in a sorted array by repeatedly dividing the search interval in half.
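A minimal iterative sketch in Python (O(log n) because each comparison halves the remaining interval):

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1   # target can only be in the right half
        else:
            high = mid - 1  # target can only be in the left half
    return -1  # not found

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # 3
```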
This involves searching in a sorted array that has been rotated, requiring modified binary search logic.
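One common approach, sketched here in Python: at each step, at least one half of the array is sorted, so check which half it is and whether the target lies inside it.

```python
def search_rotated(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        if arr[low] <= arr[mid]:  # left half is sorted
            if arr[low] <= target < arr[mid]:
                high = mid - 1
            else:
                low = mid + 1
        else:                     # right half is sorted
            if arr[mid] < target <= arr[high]:
                low = mid + 1
            else:
                high = mid - 1
    return -1

print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))  # 4
```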
Searching in matrices with sorted rows and columns often uses binary search strategies tailored to 2D data.
Exponential Search finds range boundaries for binary search by increasing search intervals exponentially.
Fibonacci Search uses Fibonacci numbers to divide the search space, helpful in systems where access time varies.
Interpolation Search estimates the position of the target based on the value, assuming uniform distribution.
Ternary Search divides the array into three parts to locate the target in a sorted array, useful for unimodal functions.
Searching algorithms are key in database querying, information retrieval, and many optimization problems.
Common problems include searching with constraints, in rotated arrays, or multi-dimensional data.
Recursion is a programming technique where a function calls itself to solve smaller instances of a problem.
<!-- Recursive function to calculate the factorial of a number -->
def factorial(n):
    # Base case: factorial of 0 or 1 is 1
    if n == 0 or n == 1:
        return 1
    # Recursive case: n * factorial of (n-1)
    return n * factorial(n - 1)

print(factorial(5))  # Output: 120
Designing recursive functions requires base cases to stop recursion and recursive calls to break problems down.
<!-- Recursive function to compute the sum of the first n natural numbers -->
def recursive_sum(n):
    # Base case: if n is 0, the sum is 0
    if n == 0:
        return 0
    # Recursive call: n plus the sum of the first n-1 numbers
    return n + recursive_sum(n - 1)

print(recursive_sum(10))  # Output: 55
Tail recursion is a special form where the recursive call is the last operation, enabling optimization by some compilers (CPython, notably, does not perform tail-call optimization).
<!-- Tail-recursive factorial function with an accumulator -->
def tail_factorial(n, accumulator=1):
    # Base case: when n is 0, return the accumulated result
    if n == 0:
        return accumulator
    # Tail-recursive call with updated accumulator
    return tail_factorial(n - 1, accumulator * n)

print(tail_factorial(5))  # Output: 120
Backtracking systematically searches for solutions by trying possibilities and abandoning invalid paths.
<!-- Backtracking template example to find subsets that sum to a target -->
def backtrack(nums, target, start, path, result):
    # If target is met, add current path to result
    if target == 0:
        result.append(path[:])
        return
    # Explore further elements
    for i in range(start, len(nums)):
        if nums[i] > target:
            continue
        path.append(nums[i])
        backtrack(nums, target - nums[i], i + 1, path, result)
        path.pop()  # Backtrack by removing the last element

result = []
backtrack([1, 2, 3, 4], 5, 0, [], result)
print(result)  # Output: [[1, 4], [2, 3]]
Maze problems use backtracking to find paths from start to finish by exploring and retreating when stuck.
<!-- Simple maze solver using backtracking (0 = open cell, 1 = wall) -->
def solve_maze(maze, x, y, path):
    if x < 0 or y < 0 or x >= len(maze) or y >= len(maze[0]) or maze[x][y] == 1:
        return False
    if (x, y) in path:
        return False
    path.append((x, y))
    if (x, y) == (len(maze) - 1, len(maze[0]) - 1):
        return True
    if (solve_maze(maze, x + 1, y, path) or solve_maze(maze, x, y + 1, path)
            or solve_maze(maze, x - 1, y, path) or solve_maze(maze, x, y - 1, path)):
        return True
    path.pop()
    return False

maze = [
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0]
]
path = []
solve_maze(maze, 0, 0, path)
print(path)  # Output: path coordinates from start to finish
This classic problem places N queens on an NxN chessboard so that no two queens attack each other, solved via backtracking.
<!-- N-Queens solver using backtracking -->
def is_safe(board, row, col, n):
    for i in range(col):
        if board[row][i] == 1:
            return False
    for i, j in zip(range(row, -1, -1), range(col, -1, -1)):
        if board[i][j] == 1:
            return False
    for i, j in zip(range(row, n), range(col, -1, -1)):
        if board[i][j] == 1:
            return False
    return True

def solve_nqueens(board, col, n, solutions):
    if col == n:
        solutions.append([row[:] for row in board])
        return
    for i in range(n):
        if is_safe(board, i, col, n):
            board[i][col] = 1
            solve_nqueens(board, col + 1, n, solutions)
            board[i][col] = 0  # Backtrack

board = [[0] * 4 for _ in range(4)]
solutions = []
solve_nqueens(board, 0, 4, solutions)
print(f"Number of solutions: {len(solutions)}")  # Output: 2
Backtracking algorithms fill empty Sudoku cells ensuring the rules are respected until a solution is found.
<!-- Sudoku solver using backtracking -->
def is_valid(board, row, col, num):
    for x in range(9):
        if board[row][x] == num or board[x][col] == num:
            return False
    start_row, start_col = 3 * (row // 3), 3 * (col // 3)
    for i in range(start_row, start_row + 3):
        for j in range(start_col, start_col + 3):
            if board[i][j] == num:
                return False
    return True

def solve_sudoku(board):
    for row in range(9):
        for col in range(9):
            if board[row][col] == 0:
                for num in range(1, 10):
                    if is_valid(board, row, col, num):
                        board[row][col] = num
                        if solve_sudoku(board):
                            return True
                        board[row][col] = 0  # Backtrack
                return False
    return True

sudoku_board = [
    [5,3,0,0,7,0,0,0,0],
    [6,0,0,1,9,5,0,0,0],
    [0,9,8,0,0,0,0,6,0],
    [8,0,0,0,6,0,0,0,3],
    [4,0,0,8,0,3,0,0,1],
    [7,0,0,0,2,0,0,0,6],
    [0,6,0,0,0,0,2,8,0],
    [0,0,0,4,1,9,0,0,5],
    [0,0,0,0,8,0,0,7,9]
]
solve_sudoku(sudoku_board)
print(sudoku_board)  # Output: solved sudoku board
The subset sum problem determines if there is a subset of numbers that adds up to a target value using recursion.
<!-- Recursive subset sum problem solver -->
def is_subset_sum(nums, n, target):
    if target == 0:
        return True
    if n == 0 and target != 0:
        return False
    if nums[n - 1] > target:
        return is_subset_sum(nums, n - 1, target)
    return (is_subset_sum(nums, n - 1, target)
            or is_subset_sum(nums, n - 1, target - nums[n - 1]))

nums = [3, 34, 4, 12, 5, 2]
target = 9
print(is_subset_sum(nums, len(nums), target))  # Output: True
Recursion helps generate all permutations or combinations of a given set of elements.
# Generate all permutations recursively
def permute(nums, path, result):
    if not nums:
        result.append(path)
        return
    for i in range(len(nums)):
        permute(nums[:i] + nums[i+1:], path + [nums[i]], result)

result = []
permute([1, 2, 3], [], result)
print(result)  # Output: all permutations of [1, 2, 3]
Techniques like memoization and dynamic programming optimize recursive algorithms by avoiding repeated work.
# Memoized Fibonacci: caches results to avoid repeated work
def fib_memo(n, memo={}):
    if n in memo:
        return memo[n]
    if n == 0 or n == 1:
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(10))  # Output: 55
Dynamic Programming is a method for solving complex problems by breaking them into simpler overlapping subproblems, storing their results to avoid redundant work.
Top-down approach that uses recursion and caching (memo) to store previously computed results.
<!-- Memoized Fibonacci function in JavaScript -->
function fibMemo(n, memo = {}) {
  if (n <= 1) return n; // Base case
  if (memo[n]) return memo[n]; // Return cached result if it exists
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo); // Store result in cache
  return memo[n];
}
Bottom-up approach that iteratively fills a table to solve subproblems.
<!-- Tabulated Fibonacci function in JavaScript -->
function fibTab(n) {
  let dp = [0, 1]; // Initialize base values
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2]; // Compute current using the previous two
  }
  return dp[n];
}
Using memoization or tabulation to compute Fibonacci numbers efficiently instead of naive recursion.
Optimize the total value of items in a knapsack without exceeding capacity.
<!-- 0/1 Knapsack Problem using DP -->
function knapsack(weights, values, capacity) {
  const n = weights.length;
  let dp = Array(n + 1).fill().map(() => Array(capacity + 1).fill(0));
  for (let i = 1; i <= n; i++) {
    for (let w = 0; w <= capacity; w++) {
      if (weights[i - 1] <= w) {
        // Take item i-1 or skip it, whichever is better
        dp[i][w] = Math.max(dp[i - 1][w], dp[i - 1][w - weights[i - 1]] + values[i - 1]);
      } else {
        dp[i][w] = dp[i - 1][w];
      }
    }
  }
  return dp[n][capacity];
}
Find the longest subsequence present in both sequences.
<!-- Longest Common Subsequence using DP -->
function lcs(str1, str2) {
  const m = str1.length, n = str2.length;
  let dp = Array(m + 1).fill().map(() => Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (str1[i - 1] === str2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1;
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
      }
    }
  }
  return dp[m][n];
}
Count the number of ways to make change for a given amount using given coin denominations.
<!-- Coin Change (number of ways) using DP -->
function coinChange(coins, amount) {
  let dp = Array(amount + 1).fill(0);
  dp[0] = 1; // Base case: one way to make 0
  for (let coin of coins) {
    for (let x = coin; x <= amount; x++) {
      dp[x] += dp[x - coin];
    }
  }
  return dp[amount];
}
Dynamic programming techniques applied to string problems like edit distance, subsequences, and pattern matching.
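As an illustration, edit distance (Levenshtein distance) sketched in Python: dp[i][j] is the minimum number of single-character insertions, deletions, and replacements needed to turn the first i characters of s1 into the first j characters of s2.

```python
def edit_distance(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of s1[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # characters match, no edit needed
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete from s1
                                   dp[i][j - 1],      # insert into s1
                                   dp[i - 1][j - 1])  # replace
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```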
Using DP to solve optimization problems on tree structures, e.g., finding max weight independent set.
Resource allocation, sequence alignment in bioinformatics, financial modeling, and more.
Greedy algorithms build up a solution piece by piece, always choosing the next piece that offers the most immediate benefit.
<!-- Simple greedy example: coin change using the largest coin first -->
def coin_change(coins, amount):
    coins.sort(reverse=True)  # Sort coins descending
    count = 0
    for coin in coins:
        while amount >= coin:
            amount -= coin
            count += 1
    return count

print(coin_change([25, 10, 5, 1], 63))  # Output: 6 coins (2 quarters, 1 dime, 0 nickels, 3 pennies)
A problem exhibits the greedy choice property if a globally optimal solution can be arrived at by making locally optimal choices.
<!-- Example showing the greedy choice: selecting the max number of activities -->
activities = [(1,4), (3,5), (0,6), (5,7), (3,8), (5,9), (6,10), (8,11), (8,12), (2,13), (12,14)]
activities.sort(key=lambda x: x[1])  # Sort by finish time
selected = []
last_finish = 0
for start, finish in activities:
    if start >= last_finish:
        selected.append((start, finish))
        last_finish = finish
print(selected)  # Output: list of max non-overlapping activities
This problem selects the maximum number of activities that don't overlap, solved efficiently with a greedy approach.
<!-- Activity Selection Problem function -->
def activity_selection(activities):
    activities.sort(key=lambda x: x[1])  # Sort by finish time
    last_finish = 0
    selected = []
    for start, finish in activities:
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

acts = [(1,4), (3,5), (0,6), (5,7), (3,8)]
print(activity_selection(acts))  # Output: [(1, 4), (5, 7)]
Huffman coding builds an optimal prefix code used for lossless data compression, using a greedy strategy to combine lowest frequency nodes.
<!-- Simplified Huffman coding outline (conceptual) -->
# Use a priority queue to combine the two lowest-frequency nodes repeatedly until one tree remains
# The resulting tree is used to assign variable-length binary codes
# (For a full implementation, external libraries or detailed code are needed)
In the fractional knapsack problem, items can be broken into smaller parts, and a greedy approach based on value/weight ratio yields optimal solutions.
<!-- Fractional Knapsack greedy approach -->
def fractional_knapsack(weights, values, capacity):
    ratio = [(v / w, w, v) for w, v in zip(weights, values)]
    ratio.sort(key=lambda x: x[0], reverse=True)  # Best value/weight ratio first
    total_value = 0
    for r, w, v in ratio:
        if capacity == 0:
            break
        take = min(w, capacity)
        total_value += take * r
        capacity -= take
    return total_value

print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))  # Output: 240.0
Both Prim’s and Kruskal’s algorithms use greedy strategies to build a minimum spanning tree by choosing minimum weight edges.
<!-- Kruskal's algorithm outline -->
# Sort edges by weight
# Add edges if they don't form cycles (using a union-find data structure)
# Repeat until all vertices are connected
# (A full implementation requires graph and union-find details)
Scheduling jobs to maximize profit or minimize lateness is often solved with greedy algorithms by sorting and assigning jobs strategically.
<!-- Greedy job scheduling for maximizing profit of jobs done before their deadlines -->
# Each job is (id, deadline, profit)
def job_scheduling(jobs):
    jobs.sort(key=lambda x: x[2], reverse=True)  # Sort by profit descending
    n = len(jobs)
    result = [False] * n
    job_order = [None] * n
    for job in jobs:
        # Place the job in the latest free slot before its deadline
        for slot in range(min(n, job[1]) - 1, -1, -1):
            if not result[slot]:
                result[slot] = True
                job_order[slot] = job[0]
                break
    return [j for j in job_order if j]

jobs = [('a', 2, 100), ('b', 1, 19), ('c', 2, 27), ('d', 1, 25), ('e', 3, 15)]
print(job_scheduling(jobs))  # Output: ['c', 'a', 'e']
Greedy algorithms make local optimum choices and do not always guarantee global optima; dynamic programming solves problems by combining solutions to subproblems, often guaranteeing global optima.
<!-- Example: Coin change problem -->
# Greedy fails for coins = [1, 3, 4], amount = 6 (greedy picks 4+1+1 = 3 coins)
# Dynamic programming finds 2 coins (3+3)
# (The DP solution requires a separate implementation)
Greedy algorithms may fail for problems where local optimum does not lead to global optimum, requiring other approaches like DP or backtracking.
Common practice problems include coin change, activity selection, job scheduling, Huffman coding, and minimum spanning trees.
Practice implementing these algorithms to understand their greedy choices and limitations.
Strings are sequences of characters with common operations like concatenation, substring extraction, and length checking.
<!-- String concatenation and basic operations in JavaScript -->
let str1 = "Hello";
let str2 = "World";
let combined = str1 + " " + str2; // Concatenate with a space
let length = combined.length; // Get length of string
let sub = combined.substring(0, 5); // Extract substring "Hello"
Techniques to find occurrences of a pattern string inside a larger text string efficiently.
Simple approach checking for pattern at every position of text.
<!-- Naive Pattern Search in JavaScript -->
function naiveSearch(text, pattern) {
  for (let i = 0; i <= text.length - pattern.length; i++) {
    let j;
    for (j = 0; j < pattern.length; j++) {
      if (text[i + j] !== pattern[j]) break; // Mismatch breaks inner loop
    }
    if (j === pattern.length) return i; // Pattern found at index i
  }
  return -1; // Pattern not found
}
Uses hashing to find pattern matches efficiently.
<!-- Simplified Rabin-Karp Algorithm -->
function rabinKarp(text, pattern) {
  const prime = 101;
  const m = pattern.length, n = text.length;
  let patternHash = 0, textHash = 0, h = 1;
  for (let i = 0; i < m - 1; i++) h = (h * 256) % prime; // h = pow(256, m-1) % prime
  for (let i = 0; i < m; i++) {
    patternHash = (256 * patternHash + pattern.charCodeAt(i)) % prime;
    textHash = (256 * textHash + text.charCodeAt(i)) % prime;
  }
  for (let i = 0; i <= n - m; i++) {
    if (patternHash === textHash) {
      // Hashes match: verify character by character to rule out a collision
      let match = true;
      for (let j = 0; j < m; j++) {
        if (text[i + j] !== pattern[j]) { match = false; break; }
      }
      if (match) return i; // Pattern found
    }
    if (i < n - m) {
      // Roll the hash: drop the leading char, add the next one
      textHash = (256 * (textHash - text.charCodeAt(i) * h) + text.charCodeAt(i + m)) % prime;
      if (textHash < 0) textHash += prime; // Make sure hash is positive
    }
  }
  return -1;
}
Knuth-Morris-Pratt algorithm improves pattern search by avoiding unnecessary re-checks using prefix table.
<!-- KMP Algorithm in JavaScript -->
function computeLPS(pattern) {
  let lps = Array(pattern.length).fill(0);
  let len = 0, i = 1;
  while (i < pattern.length) {
    if (pattern[i] === pattern[len]) {
      len++;
      lps[i] = len;
      i++;
    } else {
      if (len !== 0) {
        len = lps[len - 1];
      } else {
        lps[i] = 0;
        i++;
      }
    }
  }
  return lps;
}

function kmpSearch(text, pattern) {
  let lps = computeLPS(pattern);
  let i = 0, j = 0;
  while (i < text.length) {
    if (pattern[j] === text[i]) {
      i++;
      j++;
    }
    if (j === pattern.length) return i - j; // Pattern found at index i - j
    else if (i < text.length && pattern[j] !== text[i]) {
      if (j !== 0) j = lps[j - 1];
      else i++;
    }
  }
  return -1;
}
Computes Z-array to perform linear time pattern matching.
<!-- Z-Algorithm computation -->
function calculateZArray(s) {
  let n = s.length;
  let Z = Array(n).fill(0);
  let L = 0, R = 0;
  for (let i = 1; i < n; i++) {
    if (i <= R) Z[i] = Math.min(R - i + 1, Z[i - L]); // Reuse the previous Z-box
    while (i + Z[i] < n && s[Z[i]] === s[i + Z[i]]) Z[i]++;
    if (i + Z[i] - 1 > R) {
      L = i;
      R = i + Z[i] - 1;
    }
  }
  return Z;
}
Tree data structure for storing strings to allow fast prefix searches.
<!-- Basic Trie node and insert method -->
class TrieNode {
  constructor() {
    this.children = {}; // Children nodes
    this.isEndOfWord = false; // Flag for end of word
  }
}

class Trie {
  constructor() {
    this.root = new TrieNode();
  }
  insert(word) {
    let node = this.root;
    for (let char of word) {
      if (!node.children[char]) {
        node.children[char] = new TrieNode();
      }
      node = node.children[char];
    }
    node.isEndOfWord = true;
  }
}
Data structures to allow efficient substring queries and pattern matching on strings.
Algorithms like Run Length Encoding (RLE) and Huffman coding to reduce string size.
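A minimal RLE encoder sketch in Python (the char+count output format is one common convention; it only compresses when runs are long):

```python
def rle_encode(s):
    # Collapse runs of identical characters into char+count pairs
    if not s:
        return ""
    result = []
    count = 1
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            count += 1
        else:
            result.append(s[i - 1] + str(count))
            count = 1
    result.append(s[-1] + str(count))  # flush the final run
    return "".join(result)

print(rle_encode("aaabccddd"))  # a3b1c2d3
```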
Used in search engines, DNA sequencing, data compression, and text editing software.
Bits are the basic units of data in computing. Bitwise operators operate directly on bits of integers.
# Bitwise AND (&)
a = 5  # (0101)
b = 3  # (0011)
print(a & b)  # Output: 1 (0001)
# Bitwise OR (|)
print(a | b)  # Output: 7 (0111)

# Bitwise XOR (^)
print(a ^ b)  # Output: 6 (0110)
# Left shift (<<)
print(a << 1)  # Output: 10 (1010)

# Right shift (>>)
print(a >> 1)  # Output: 2 (0010)
def is_power_of_two(n):
    # A power of two has exactly one set bit, so n & (n-1) clears it to 0
    return n & (n - 1) == 0 and n != 0

print(is_power_of_two(8))   # True
print(is_power_of_two(10))  # False
def count_set_bits(n):
    count = 0
    while n:
        count += n & 1  # Check the lowest bit
        n >>= 1         # Shift right to examine the next bit
    return count

print(count_set_bits(13))  # Output: 3 (1101 has three 1s)
Bit masks help manipulate specific bits within an integer.
FLAG_A = 1 << 0  # 0001
FLAG_B = 1 << 1  # 0010

flags = 0
flags |= FLAG_A   # Set FLAG_A
print(flags)      # Output: 1
flags |= FLAG_B   # Set FLAG_B
print(flags)      # Output: 3
flags &= ~FLAG_A  # Clear FLAG_A
print(flags)      # Output: 2
Bit operations can speed up calculations, reduce memory usage, and efficiently represent states.
Examples include finding unique elements, swapping numbers without temp variables, and parity checking.
# Swap two numbers using XOR
x, y = 5, 7
x = x ^ y
y = x ^ y
x = x ^ y
print(x, y)  # Output: 7 5
Bitwise operations are essential for encryption, hashing, error detection, and packet processing.
Includes tricks like Brian Kernighan’s algorithm to count set bits efficiently.
def count_set_bits_kernighan(n):
    count = 0
    while n:
        n &= (n - 1)  # Drop the lowest set bit
        count += 1
    return count

print(count_set_bits_kernighan(13))  # Output: 3
Number theory deals with properties and relationships of integers, including divisibility and primes.
GCD is the largest number dividing two integers; LCM is the smallest common multiple.
<!-- Compute GCD using the Euclidean Algorithm in JavaScript -->
function gcd(a, b) {
  while (b !== 0) {
    let temp = b;
    b = a % b; // Remainder of a divided by b
    a = temp; // Swap a and b
  }
  return a; // GCD result
}

<!-- Compute LCM using GCD -->
function lcm(a, b) {
  return (a * b) / gcd(a, b); // LCM formula
}
Operations on numbers under modulo, useful in cryptography and hashing.
<!-- Modular addition example -->
function modAdd(a, b, m) {
  return (a % m + b % m) % m; // Add under modulo m
}
Efficient way to compute large powers modulo a number using exponentiation by squaring.
<!-- Fast modular exponentiation (exponentiation by squaring) -->
function modExp(base, exponent, mod) {
  let result = 1;
  base = base % mod;
  while (exponent > 0) {
    if (exponent % 2 === 1) result = (result * base) % mod; // Multiply when exponent is odd
    base = (base * base) % mod; // Square the base
    exponent = Math.floor(exponent / 2); // Halve the exponent
  }
  return result;
}
Finding all primes up to a limit efficiently using the sieve method.
<!-- Sieve of Eratosthenes in JavaScript -->
function sieve(n) {
  let prime = Array(n + 1).fill(true);
  prime[0] = false;
  prime[1] = false;
  for (let p = 2; p * p <= n; p++) {
    if (prime[p]) {
      for (let i = p * p; i <= n; i += p) prime[i] = false; // Mark multiples as not prime
    }
  }
  let primes = [];
  for (let i = 2; i <= n; i++) {
    if (prime[i]) primes.push(i);
  }
  return primes; // Return list of primes up to n
}
Study of counting, arrangement, and combination of objects.
Permutations count ordered arrangements; combinations count selections without order.
<!-- Compute factorial recursively -->
function factorial(n) {
  if (n === 0 || n === 1) return 1;
  return n * factorial(n - 1);
}

<!-- Compute nCr (combinations) -->
function nCr(n, r) {
  return factorial(n) / (factorial(r) * factorial(n - r));
}
Basics of probability including events, outcomes, and calculating chances.
If n items are put into m containers, with n > m, then at least one container must contain more than one item.
Using mathematical reasoning and these concepts to solve algorithmic problems efficiently.
Segment Trees are tree data structures used for storing information about intervals or segments. They allow querying and updating ranges efficiently.
# Simple segment tree for range sum query
def build_segment_tree(arr, segtree, left, right, pos):
    if left == right:
        segtree[pos] = arr[left]
        return
    mid = (left + right) // 2
    build_segment_tree(arr, segtree, left, mid, 2 * pos + 1)
    build_segment_tree(arr, segtree, mid + 1, right, 2 * pos + 2)
    segtree[pos] = segtree[2 * pos + 1] + segtree[2 * pos + 2]

arr = [1, 3, 5, 7, 9, 11]
n = len(arr)
segtree = [0] * (4 * n)
build_segment_tree(arr, segtree, 0, n - 1, 0)
print(segtree)  # Segment tree array representation
Fenwick Trees provide efficient methods for cumulative frequency tables.
def fenw_update(bit, index, val):
    while index < len(bit):
        bit[index] += val
        index += index & (-index)

def fenw_sum(bit, index):
    result = 0
    while index > 0:
        result += bit[index]
        index -= index & (-index)
    return result

arr = [1, 2, 3, 4, 5]
bit = [0] * (len(arr) + 1)
for i, val in enumerate(arr, 1):
    fenw_update(bit, i, val)
print(fenw_sum(bit, 3))  # Sum of first 3 elements: 6
Data structure to keep track of disjoint sets and support union and find operations efficiently.
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # Path compression
        return self.parent[x]

    def union(self, x, y):
        px, py = self.find(x), self.find(y)
        if px != py:
            self.parent[py] = px

dsu = DSU(5)
dsu.union(0, 1)
print(dsu.find(1))  # Output: 0
Probabilistic data structures that allow fast search within an ordered sequence.
(Skip lists are complex; here’s a conceptual placeholder.)
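Since the original leaves this as a placeholder, here is a minimal skip-list sketch with insert and search; the node layout, `MAX_LEVEL`, and the 0.5 promotion probability are illustrative choices, not from the original text:

```python
import random

class SkipNode:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * (level + 1)  # One forward pointer per level

class SkipList:
    MAX_LEVEL = 8

    def __init__(self, seed=0):
        self.head = SkipNode(None, self.MAX_LEVEL)  # Sentinel head node
        self.level = 0
        self.rng = random.Random(seed)

    def _random_level(self):
        lvl = 0
        while self.rng.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1  # Promote the node with probability 1/2 per level
        return lvl

    def insert(self, value):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
            update[i] = node  # Last node before the insert point on level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(value, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, value):
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.value == value

sl = SkipList()
for v in [3, 7, 1, 9, 5]:
    sl.insert(v)
print(sl.search(7), sl.search(4))  # True False
```

The higher levels act as an "express lane": searches skip over long stretches of the bottom list, giving expected O(log n) lookups.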
Self-adjusting binary search trees with the recently accessed element moved to the root.
(Due to complexity, code is omitted for brevity.)
Balanced tree data structures used extensively in databases and file systems.
(Conceptual explanation due to complexity.)
Space-partitioning data structure for organizing points in a k-dimensional space.
(Basic concept only.)
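To go one step beyond the basic concept, here is a small 2-D k-d tree sketch with nearest-neighbor search; the point set and helper names are illustrative:

```python
class KDNode:
    def __init__(self, point, left=None, right=None):
        self.point = point
        self.left = left
        self.right = right

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2  # Alternate between x and y for 2-D points
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2  # Median becomes the splitting node
    return KDNode(points[mid],
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    if node is None:
        return best
    def dist2(p):
        return (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    if best is None or dist2(node.point) < dist2(best):
        best = node.point
    axis = depth % 2
    near, far = ((node.left, node.right)
                 if target[axis] < node.point[axis]
                 else (node.right, node.left))
    best = nearest(near, target, depth + 1, best)
    # Only search the far side if the splitting plane is closer than the best hit
    if (target[axis] - node.point[axis]) ** 2 < dist2(best):
        best = nearest(far, target, depth + 1, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))  # (8, 1)
```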
Probabilistic data structures to test whether an element is in a set with some false positive rate.
from hashlib import sha256

class BloomFilter:
    def __init__(self, size=1000):
        self.size = size
        self.bit_array = [0] * size

    def _hashes(self, item):
        h1 = int(sha256(item.encode()).hexdigest(), 16)
        h2 = int(sha256((item + 'salt').encode()).hexdigest(), 16)
        return [h1 % self.size, h2 % self.size]

    def add(self, item):
        for h in self._hashes(item):
            self.bit_array[h] = 1

    def check(self, item):
        return all(self.bit_array[h] for h in self._hashes(item))

bf = BloomFilter()
bf.add("apple")
print(bf.check("apple"))   # True
print(bf.check("banana"))  # Probably False
Data structures that preserve previous versions of themselves when modified.
(Conceptual explanation due to complexity.)
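One classic minimal example is a persistent stack: every push returns a new version that shares structure with the old one, so earlier versions remain readable. The class name and API below are illustrative:

```python
class PersistentStack:
    """Immutable stack; push/pop return new versions, sharing structure."""
    __slots__ = ("value", "tail")

    def __init__(self, value=None, tail=None):
        self.value = value
        self.tail = tail

    def push(self, value):
        return PersistentStack(value, self)  # New version points at the old one

    def pop(self):
        return self.value, self.tail  # Old versions are never mutated

EMPTY = PersistentStack()
v1 = EMPTY.push(1).push(2)   # version 1: [2, 1]
v2 = v1.push(3)              # version 2: [3, 2, 1]
top, rest = v2.pop()
print(top)          # 3
print(v1.pop()[0])  # 2 -- version 1 is unchanged
```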
Examples include range queries, dynamic connectivity, and more complex problem solving involving these structures.
Parallel computing is a type of computation where many calculations or processes are carried out simultaneously to increase speed.
# Python example: Basic parallel execution using multiprocessing
import multiprocessing

def worker(num):
    print(f'Worker: {num}')

if __name__ == '__main__':
    processes = []
    for i in range(4):
        p = multiprocessing.Process(target=worker, args=(i,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
Parallel algorithms split tasks into subtasks that can be solved concurrently.
# Example: Parallel map (squaring numbers) using a multiprocessing pool
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == '__main__':
    with Pool(4) as p:
        result = p.map(square, [1, 2, 3, 4])
    print(result)  # [1, 4, 9, 16]
MapReduce is a programming model for processing large data sets with a distributed algorithm.
# MapReduce style example in Python
from collections import defaultdict

data = ['apple', 'banana', 'apple', 'orange', 'banana', 'apple']

# Map step: count each fruit as 1
mapped = [(fruit, 1) for fruit in data]

# Reduce step: sum counts for each fruit
counts = defaultdict(int)
for fruit, count in mapped:
    counts[fruit] += count

print(dict(counts))  # {'apple': 3, 'banana': 2, 'orange': 1}
Multithreading allows concurrent execution of multiple threads within a single process.
import threading

def print_numbers():
    for i in range(5):
        print(i)

t1 = threading.Thread(target=print_numbers)
t1.start()
t1.join()
Locks prevent race conditions by ensuring that only one thread accesses a resource at a time.
import threading

counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    with lock:
        temp = counter
        temp += 1
        counter = temp

threads = [threading.Thread(target=safe_increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # Should reliably print 100
Data structures designed to be safely accessed and modified by multiple threads concurrently.
# Python's queue module provides thread-safe queues
import queue
import threading

q = queue.Queue()

def producer():
    for i in range(5):
        q.put(i)

def consumer():
    while not q.empty():
        item = q.get()
        print(f'Consumed {item}')
        q.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t1.join()   # Producer finishes before the consumer starts
t2.start()
t2.join()
Deadlocks occur when two or more threads wait indefinitely for locks. Race conditions happen when threads access shared data unsafely.
# Deadlock example (conceptual, avoid running)
import threading

lock1 = threading.Lock()
lock2 = threading.Lock()

def thread1():
    with lock1:
        with lock2:
            print("Thread 1")

def thread2():
    with lock2:
        with lock1:
            print("Thread 2")

# Threads can deadlock by waiting on each other's locks
Techniques for performing searching and sorting in parallel to speed up processing.
# Example: Parallel sorting using multiprocessing
# Note: pool workers are daemonic and cannot spawn pools of their own,
# so the parallel split is one level deep rather than fully recursive
import multiprocessing

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]

def parallel_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    with multiprocessing.Pool(2) as pool:
        left, right = pool.map(sorted, [arr[:mid], arr[mid:]])  # Sort halves in parallel
    return merge(left, right)

if __name__ == '__main__':
    arr = [5, 3, 8, 6, 2, 7, 4, 1]
    sorted_arr = parallel_sort(arr)
    print(sorted_arr)  # [1, 2, 3, 4, 5, 6, 7, 8]
Using Graphics Processing Units to accelerate computation-intensive tasks via parallelism.
(Example requires specialized libraries, so conceptual explanation only.)
Used in scientific simulations, big data processing, real-time systems, machine learning, and more to improve performance and efficiency.
Divide and Conquer splits a problem into smaller subproblems, solves them independently, and combines their solutions.
// Merge Sort Implementation: Divide and Conquer
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // Base case: array is sorted
  const mid = Math.floor(arr.length / 2); // Find middle index
  const left = mergeSort(arr.slice(0, mid)); // Sort left half
  const right = mergeSort(arr.slice(mid)); // Sort right half
  return merge(left, right); // Merge sorted halves
}

function merge(left, right) {
  let result = [], i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i]);
      i++;
    } else {
      result.push(right[j]);
      j++;
    }
  }
  return result.concat(left.slice(i)).concat(right.slice(j)); // Append remaining elements
}
Dynamic Programming solves problems by storing solutions of overlapping subproblems.
// Fibonacci with Memoization
function fib(n, memo = {}) {
  if (n <= 1) return n;
  if (memo[n]) return memo[n]; // Return cached value if available
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo); // Compute and store
  return memo[n];
}
The greedy approach builds up a solution piece by piece choosing the best option at each step.
// Coin Change (Greedy) Example
// Note: greedy gives the optimal answer only for canonical coin systems
// (e.g. 1, 5, 10, 25); for arbitrary denominations use dynamic programming
function minCoins(coins, amount) {
  coins.sort((a, b) => b - a); // Sort coins descending
  let count = 0;
  for (let coin of coins) {
    while (amount >= coin) {
      amount -= coin;
      count++;
    }
  }
  return count;
}
Backtracking tries all possibilities and abandons solutions that fail early.
// Solve N-Queens problem recursively
function solveNQueens(n) {
  const board = Array(n).fill().map(() => Array(n).fill('.'));
  const res = [];

  function isSafe(row, col) {
    for (let i = 0; i < row; i++) {
      if (board[i][col] === 'Q') return false; // Same column
    }
    for (let i = row - 1, j = col - 1; i >= 0 && j >= 0; i--, j--) {
      if (board[i][j] === 'Q') return false; // Upper-left diagonal
    }
    for (let i = row - 1, j = col + 1; i >= 0 && j < n; i--, j++) {
      if (board[i][j] === 'Q') return false; // Upper-right diagonal
    }
    return true;
  }

  function backtrack(row) {
    if (row === n) {
      res.push(board.map(r => r.join(''))); // Save solution
      return;
    }
    for (let col = 0; col < n; col++) {
      if (isSafe(row, col)) {
        board[row][col] = 'Q';
        backtrack(row + 1);
        board[row][col] = '.'; // Undo move
      }
    }
  }

  backtrack(0);
  return res;
}
Algorithms that use random numbers to make decisions, often for efficiency or simplicity.
// Randomized Quick Select to find kth smallest element
function randomizedSelect(arr, k) {
  if (arr.length === 1) return arr[0];
  let pivot = arr[Math.floor(Math.random() * arr.length)]; // Random pivot
  let lows = arr.filter(x => x < pivot);
  let highs = arr.filter(x => x > pivot);
  let pivots = arr.filter(x => x === pivot);
  if (k <= lows.length) return randomizedSelect(lows, k);
  else if (k <= lows.length + pivots.length) return pivot;
  else return randomizedSelect(highs, k - lows.length - pivots.length);
}
Algorithms that find near-optimal solutions efficiently when exact solutions are too costly.
// Approximate solution to Vertex Cover problem
function vertexCoverApprox(graph) {
  let cover = new Set();
  let edges = new Set(graph.edges);
  while (edges.size > 0) {
    let [u, v] = edges.values().next().value; // Pick any edge
    cover.add(u);
    cover.add(v);
    for (let edge of Array.from(edges)) {
      if (edge.includes(u) || edge.includes(v)) edges.delete(edge); // Remove covered edges
    }
  }
  return Array.from(cover);
}
Algorithms that process input piece-by-piece without knowing future input.
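The idea can be sketched with a one-pass routine that maintains only O(1) state per element, never looking ahead (a hypothetical running-statistics example):

```python
def running_stats(stream):
    """Process each element exactly once, keeping O(1) state (count, min, mean)."""
    count, minimum, mean = 0, float("inf"), 0.0
    for x in stream:
        count += 1
        minimum = min(minimum, x)
        mean += (x - mean) / count  # Incremental mean update
        print(f"seen={count} min={minimum} mean={mean:.2f}")
    return count, minimum, mean

result = running_stats([4, 1, 7, 2])
print(result)  # (4, 1, 3.5)
```

Because the function commits to an answer after each element, it models online processing: decisions are made without knowing future input.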
Techniques and best practices to solve algorithm problems quickly and correctly in contests.
Common problems asked in technical interviews to test problem solving and coding skills.
Developing a mindset to design, analyze, and implement algorithms effectively.
Efficient data organization directly impacts AI model performance and scalability.
# Example: Using a dictionary (hash map) for fast feature lookup in AI
features = {'age': 25, 'income': 50000, 'education': 'Bachelor'}
print(features['income'])  # 50000
Many ML algorithms rely on classical algorithms for training and inference.
# Example: Simple gradient descent to minimize a quadratic function
def gradient_descent(x, learning_rate=0.1, epochs=10):
    for _ in range(epochs):
        grad = 2 * x  # derivative of x^2
        x = x - learning_rate * grad
        print(f'x: {x}')

gradient_descent(10)
Graphs are used for knowledge representation, social networks, and neural network structures.
# Simple graph using an adjacency list for neural network layer connections
graph = {
    'input': ['hidden1', 'hidden2'],
    'hidden1': ['output'],
    'hidden2': ['output'],
    'output': []
}
print(graph)
Tree structures help in classification and regression tasks in AI.
# Pseudo-example of a decision tree node
class TreeNode:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature = feature
        self.threshold = threshold
        self.left = left
        self.right = right
        self.label = label

# Example: A simple decision node
root = TreeNode(feature='age', threshold=30, label=None)
Hash tables enable fast retrieval of features or cached computations.
# Example: Cache results of an expensive function using a dictionary
cache = {}

def expensive_computation(x):
    if x in cache:
        return cache[x]
    result = x * x  # Simulate computation
    cache[x] = result
    return result

print(expensive_computation(4))
print(expensive_computation(4))  # Uses cached result
Used in best-first search algorithms like A* and scheduling tasks.
import heapq

pq = []
heapq.heappush(pq, (1, 'task1'))
heapq.heappush(pq, (3, 'task3'))
heapq.heappush(pq, (2, 'task2'))

while pq:
    priority, task = heapq.heappop(pq)
    print(f'Processing {task} with priority {priority}')
Algorithms for pattern matching, tokenization, and substring search in text.
text = "hello world"
print(text.find("world"))  # 6, index where 'world' starts
tokens = text.split()      # ['hello', 'world']
print(tokens)
Used in sequence alignment, resource allocation, and reinforcement learning.
# Fibonacci with memoization (dynamic programming)
memo = {0: 0, 1: 1}

def fib(n):
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(10))  # 55
Used for approximate solutions in clustering and feature selection.
# Greedy example: select items with max value density under a weight constraint
items = [(2, 10), (3, 20), (1, 15)]  # (weight, value)
items.sort(key=lambda x: x[1] / x[0], reverse=True)  # Highest value per weight first

capacity = 3
total_value = 0
for w, v in items:
    if w <= capacity:
        capacity -= w
        total_value += v
print(total_value)
Bit tricks optimize hardware-level neural network computations and data encoding.
# Check if a number is a power of two
def is_power_of_two(n):
    return (n & (n - 1)) == 0 and n != 0

print(is_power_of_two(8))   # True
print(is_power_of_two(10))  # False
Using arrays, linked lists, and trees for cleaning and organizing datasets.
Quadtrees, k-d trees for image segmentation and path planning.
Representing relational data and learning on graph structures.
Impact of algorithms and data structures on bias mitigation.
Handling large-scale continuous data inputs efficiently.
Distributed data structures and parallelization for faster model training.
Bloom filters and count-min sketches for approximate queries in large datasets.
Algorithmic strategies driving model convergence.
Compact representations for AI on IoT and mobile platforms.
Emerging algorithms and data structures for quantum-enhanced AI.
Utilizing shortest paths, centrality, and community detection in AI systems.
# Example: Dijkstra's shortest path in Python
import heapq

def dijkstra(graph, start):
    pq = [(0, start)]
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist[node]:
            continue  # Stale queue entry
        for neighbor, weight in graph[node]:
            new_cost = cost + weight
            if new_cost < dist[neighbor]:
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 5)],
    'C': [('D', 1)],
    'D': []
}
print(dijkstra(graph, 'A'))
Implementing decision trees with pruning and feature importance.
# Simple decision tree split example
def split_data(data, feature, threshold):
    left = [d for d in data if d[feature] <= threshold]
    right = [d for d in data if d[feature] > threshold]
    return left, right

data = [{'x': 2}, {'x': 5}, {'x': 1}]
left, right = split_data(data, 'x', 3)
print(left, right)
Mapping large categorical features into compact hash spaces.
def feature_hashing(feature, size=10):
    return hash(feature) % size

print(feature_hashing("color_red"))
Fast prefix search and autocomplete implementations.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.end = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.end = True

    def search(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

t = Trie()
t.insert("hello")
print(t.search("hel"))  # True
Applying DP to optimize sequence predictions in AI.
# DP for Fibonacci numbers (memoization)
memo = {0: 0, 1: 1}

def fib(n):
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(7))  # 13
Probabilistic data structure for membership queries.
# Simplified stand-in using an exact set (not a true Bloom filter:
# a real one uses a bit array with multiple hashes and allows false
# positives; see the BloomFilter class shown earlier)
class BloomFilter:
    def __init__(self):
        self.data = set()

    def add(self, item):
        self.data.add(item)

    def check(self, item):
        return item in self.data

bf = BloomFilter()
bf.add("apple")
print(bf.check("apple"))   # True
print(bf.check("banana"))  # False
Storing and processing graph-structured data for AI.
# Represent a graph as an adjacency dict for GNN input
graph = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1]
}
print(graph)
Using heaps to prioritize nodes in search algorithms.
import heapq

pq = []
heapq.heappush(pq, (2, 'B'))
heapq.heappush(pq, (1, 'A'))
print(heapq.heappop(pq))  # (1, 'A')
Disjoint set for grouping related data points.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

uf = UnionFind(5)
uf.union(0, 1)
print(uf.find(1) == uf.find(0))  # True
Compactly representing boolean features.
flags = 0b0000
flags |= 0b0010  # Set 2nd bit
print(bin(flags))
Using dictionaries for feature storage.
features = {"height": 170, "weight": 65}
print(features["height"])
Efficiently processing sequences and streams.
data = [1, 2, 3, 4, 5]
window_size = 3
for i in range(len(data) - window_size + 1):
    print(data[i:i + window_size])  # Each sliding window of size 3
Handling dynamic queries in AI data streams.
# Placeholder for segment tree build/query example print("Segment Tree example placeholder")
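To make the placeholder concrete, here is a range-sum query paired with the build function from the Segment Trees section (same `2*pos+1` / `2*pos+2` layout); the query bounds are illustrative:

```python
def build_segment_tree(arr, segtree, left, right, pos):
    if left == right:
        segtree[pos] = arr[left]
        return
    mid = (left + right) // 2
    build_segment_tree(arr, segtree, left, mid, 2 * pos + 1)
    build_segment_tree(arr, segtree, mid + 1, right, 2 * pos + 2)
    segtree[pos] = segtree[2 * pos + 1] + segtree[2 * pos + 2]

def range_sum(segtree, qlo, qhi, left, right, pos):
    if qlo <= left and right <= qhi:   # Node fully inside the query
        return segtree[pos]
    if right < qlo or qhi < left:      # Node fully outside the query
        return 0
    mid = (left + right) // 2          # Partial overlap: recurse into children
    return (range_sum(segtree, qlo, qhi, left, mid, 2 * pos + 1) +
            range_sum(segtree, qlo, qhi, mid + 1, right, 2 * pos + 2))

arr = [1, 3, 5, 7, 9, 11]
segtree = [0] * (4 * len(arr))
build_segment_tree(arr, segtree, 0, len(arr) - 1, 0)
print(range_sum(segtree, 1, 3, 0, len(arr) - 1, 0))  # 3 + 5 + 7 = 15
```

Each query touches O(log n) nodes, which is what makes segment trees suited to dynamic range queries over streams.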
Efficient nearest neighbor searches in high dimensions.
# Simplified KD-tree node
class KDNode:
    def __init__(self, point):
        self.point = point
        self.left = None
        self.right = None

print("KD-Tree node created")
Improving AI computation speed by caching results.
cache = {}

def compute(x):
    if x in cache:
        return cache[x]
    result = x * x
    cache[x] = result
    return result

print(compute(4))
Distributing computations for faster training.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor() as executor:
    results = list(executor.map(square, [1, 2, 3]))
print(results)
Checking word prefixes quickly.
# Reusing trie insert and search from before print("Spell check example placeholder")
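A self-contained version of the prefix check, using a nested-dict trie as a lighter alternative to the TrieNode class shown earlier (the word list is illustrative):

```python
def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})  # Descend, creating nodes as needed
        node["$"] = True  # End-of-word marker
    return root

def has_prefix(trie, prefix):
    node = trie
    for ch in prefix:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = build_trie(["hello", "help", "world"])
print(has_prefix(trie, "hel"))  # True
print(has_prefix(trie, "hez"))  # False
```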
Key for neural network forward passes.
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(np.dot(a, b))
Modifying graph structures on the fly.
graph = {'A': ['B'], 'B': []}
graph['B'].append('A')  # Add an edge at runtime
print(graph)
Ordering tasks with dependencies.
def topo_sort(graph):
    visited = set()
    order = []

    def dfs(node):
        if node in visited:
            return
        visited.add(node)
        for neighbor in graph.get(node, []):
            dfs(neighbor)
        order.append(node)  # Post-order: node after its dependencies

    for node in graph:
        dfs(node)
    return order[::-1]

print(topo_sort({'A': ['B'], 'B': ['C'], 'C': []}))  # ['A', 'B', 'C']
Processing data streams with limited memory.
def moving_average(stream, n):
    window = []
    for x in stream:
        window.append(x)
        if len(window) > n:
            window.pop(0)  # Keep only the last n elements
        print(sum(window) / len(window))

moving_average([1, 2, 3, 4, 5], 3)
Efficient storage of large sparse matrices in ML.
from scipy.sparse import csr_matrix
import numpy as np

data = np.array([1, 2, 3])
row_ind = np.array([0, 1, 2])
col_ind = np.array([0, 2, 1])
sparse_mat = csr_matrix((data, (row_ind, col_ind)), shape=(3, 3))
print(sparse_mat)
Custom heuristics to guide AI search algorithms.
# Heuristic example: Manhattan distance for grids
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

print(manhattan((1, 2), (3, 4)))  # 4
Maintaining versions of data structures efficiently.
# Example: immutable list (using tuple)
lst_v1 = (1, 2, 3)
lst_v2 = lst_v1 + (4,)  # New version; the old version is untouched
print(lst_v1, lst_v2)
Transforming categorical features for ML models.
features = ["red", "blue", "green"]
hashed = [hash(f) % 10 for f in features]
print(hashed)
Using distributed hash tables and queues in AI pipelines.
print("Distributed queues example placeholder")
Fast similarity search in high-dimensional data.
print("ANN placeholder for k-d trees or locality-sensitive hashing")
Converting graph data into vector representations.
print("Graph embeddings placeholder for node2vec, DeepWalk")
Algorithms optimized for memory hierarchy without tuning.
print("Cache-oblivious matrix multiply placeholder")
Maintaining balanced search trees for efficient queries.
print("AVL or Red-Black tree placeholder")
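Filling in the placeholder, here is a compact AVL insertion sketch covering the four rebalancing cases; the helper names and the sample key sequence are illustrative:

```python
class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def balance(node):
    return height(node.left) - height(node.right)

def update(node):
    node.height = 1 + max(height(node.left), height(node.right))

def rotate_right(y):
    x = y.left
    y.left = x.right
    x.right = y
    update(y)
    update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    update(x)
    update(y)
    return y

def insert(node, key):
    if node is None:
        return AVLNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    b = balance(node)
    if b > 1 and key < node.left.key:    # Left-Left
        return rotate_right(node)
    if b < -1 and key > node.right.key:  # Right-Right
        return rotate_left(node)
    if b > 1:                            # Left-Right
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                           # Right-Left
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

root = None
for k in [10, 20, 30, 40, 50, 25]:
    root = insert(root, k)
print(root.key)  # 30 -- the tree rebalances itself as keys arrive
```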
Using Monte Carlo Tree Search for AI decision making.
print("MCTS placeholder with tree traversal")
Structures supporting interpretability in AI.
print("Decision tree visualization placeholder")
Protecting data integrity with cryptographic hashes.
import hashlib

print(hashlib.sha256(b"AI data").hexdigest())
Managing distributed model updates efficiently.
print("Federated averaging placeholder")
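A toy sketch of the averaging step: each client's weight vector is combined element-wise into a global model (the two hard-coded clients are hypothetical):

```python
def federated_average(client_weights):
    """Average model parameters element-wise across clients."""
    n = len(client_weights)
    return [sum(w[i] for w in client_weights) / n
            for i in range(len(client_weights[0]))]

clients = [
    [0.2, 0.4, 0.6],   # client A's weights
    [0.4, 0.6, 0.8],   # client B's weights
]
print(federated_average(clients))  # approximately [0.3, 0.5, 0.7]
```

Real federated averaging also weights each client by its local dataset size; this sketch assumes equal weighting.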
Fast prefix lookups for autocomplete and tokenization.
print("Trie autocomplete placeholder")
Modeling and optimizing AI task scheduling.
print("Queue simulation placeholder")
Using randomness to optimize or approximate solutions.
import random

print(random.sample(range(100), 5))
Task prioritization for resource allocation.
print("Priority queue example reused")
Dividing graphs for parallel AI computations.
print("Graph partitioning placeholder")
Reducing storage and transmission costs for AI data.
import zlib

text = b"AI data compression example"
compressed = zlib.compress(text)
print(compressed)
Optimizing data structures for faster CPU cache access.
print("Array of structs vs struct of arrays explanation placeholder")
Generating synthetic data efficiently.
print("Data augmentation placeholder for image rotation")
Using parallelism to speed up graph analytics.
print("Parallel BFS placeholder")
Distributing AI workloads evenly across nodes.
print("Load balancing hash ring placeholder")
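Expanding the placeholder, here is a consistent-hash ring with virtual nodes, the standard trick for spreading load evenly (the replica count, node names, and use of MD5 are illustrative choices):

```python
import bisect
import hashlib

class HashRing:
    """Consistent hashing: each node owns arcs of the hash circle."""
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []                 # Sorted list of (hash, node)
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):  # Virtual nodes smooth the distribution
            self.ring.append((self._hash(f"{node}:{i}"), node))
        self.ring.sort()

    def get(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h,))  # First point clockwise of h
        if idx == len(self.ring):
            idx = 0                           # Wrap around the circle
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.get("user:42"))  # A stable node assignment for this key
```

Because only the arcs adjacent to a node move when it joins or leaves, most keys keep their assignment, which is what makes this useful for distributing AI workloads.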
Modeling complex relationships beyond simple graphs.
print("Hypergraph placeholder")
Ensuring data consistency despite failures.
print("Checkpointing and recovery placeholder")
Indexing data for fast retrieval in AI systems.
print("Inverted index example placeholder")
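As a concrete version of the placeholder, a minimal inverted index mapping tokens to document ids, with set intersection for multi-term queries (the three sample documents are made up):

```python
from collections import defaultdict

docs = {
    0: "the quick brown fox",
    1: "the lazy dog",
    2: "quick brown dogs are lazy",
}

# Build the inverted index: token -> set of document ids
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.split():
        index[token].add(doc_id)

print(sorted(index["quick"]))                # [0, 2]
print(sorted(index["lazy"] & index["dog"]))  # [1]
```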
Adapting data structures to multi-level memory systems.
print("Multi-level caching placeholder")
Making decisions based on data arriving over time.
print("Online learning placeholder")
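To sketch the placeholder, here is a toy online perceptron that updates its weights one example at a time; the data stream, learning rate, and number of passes are all illustrative:

```python
def perceptron_step(w, b, x, y, lr=0.1):
    """Single online update: adjust weights only on a misclassified example."""
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
    if pred != y:
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        b = b + lr * y
    return w, b

w, b = [0.0, 0.0], 0.0
stream = [([1, 1], 1), ([2, 2], 1), ([-1, -1], -1), ([-2, -1], -1)]
for epoch in range(5):         # Revisit the stream a few times
    for x, y in stream:
        w, b = perceptron_step(w, b, x, y)
print(w, b)
```

Each example is seen, used for an update, and discarded, which is the defining property of online learning.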
Emerging quantum approaches for data representation.
print("Quantum superposition placeholder")
1. Programming Languages Used in DSA
Data Structures and Algorithms are concepts you can implement in many programming languages. Popular choices include C, C++, Java, Python, and JavaScript.
2. Conceptual Language / Terminology
This includes terms like nodes, pointers, arrays, recursion, Big O notation, dynamic programming, graph traversal, and more.
3. Pseudocode - A Language for Algorithms
Pseudocode is a plain-language way to describe algorithms without syntax rules, helping to explain logic clearly before coding.
Example: Pseudocode for Bubble Sort
# Bubble Sort Pseudocode
for i from 0 to length(array) - 1:          # Outer loop for each pass
    for j from 0 to length(array) - i - 1:  # Inner loop comparing adjacent elements
        if array[j] > array[j+1]:           # Current element greater than the next
            swap array[j] and array[j+1]    # Swap the two elements
Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order.
# Bubble Sort in Python
def bubble_sort(arr):
    n = len(arr)  # Get the length of the array
    for i in range(n):  # Loop over the entire array
        for j in range(0, n - i - 1):  # Last i elements are already sorted
            if arr[j] > arr[j + 1]:  # Compare adjacent elements
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # Swap if the previous is greater

# Example usage
arr = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(arr)
print("Sorted array is:", arr)  # Output the sorted array
Output:
Sorted array is: [11, 12, 22, 25, 34, 64, 90]
Bubble Sort is one of the simplest sorting algorithms used to arrange elements in a list (like numbers) in ascending or descending order.
If you have the list [5, 3, 8, 4], one pass compares each adjacent pair and swaps any that are out of order: [5, 3, 8, 4] → [3, 5, 8, 4] → [3, 5, 4, 8].
Repeat the process until no swaps are needed and the list becomes [3, 4, 5, 8].