Study Notes
Overview
Welcome to the study of Algorithms, a cornerstone of OCR's A-Level Computer Science (H446) and a fundamental part of computational thinking. This topic, specification reference 2.1, is not just about memorising steps; it's about understanding the logic, efficiency, and trade-offs involved in solving problems programmatically. In your Component 02 exam, you will be expected to analyse, design, and evaluate algorithms, often using the official OCR Pseudocode. Questions will test your ability to trace algorithm execution, compare their performance characteristics using Big O notation, and apply them to practical scenarios. A solid grasp of this topic is crucial as it links directly to data structures and forms the basis for more advanced problem-solving, making it a frequent and high-value area for assessment.
Key Concepts
Concept 1: Algorithm Efficiency and Big O Notation
Algorithm efficiency is a measure of how many resources (primarily time and memory) an algorithm consumes in relation to the size of its input (n). We use Big O notation to provide an upper bound on this, describing the worst-case scenario. This allows us to compare algorithms in a standardised way, ignoring machine speed and focusing on computational complexity. For the exam, you must be able to identify and compare the following complexities.
- O(1) - Constant Time: The algorithm takes the same number of steps regardless of input size. A classic example is accessing an element in an array using its index.
- O(log n) - Logarithmic Time: The algorithm's time complexity grows logarithmically with the input size. With each step, the problem size is reduced by a constant factor (usually halved). Binary Search is the prime example.
- O(n) - Linear Time: The time taken is directly proportional to the input size. If you double the input, you double the time. Linear Search is a perfect illustration.
- O(n log n) - Log-Linear Time: This is the hallmark of efficient sorting algorithms. It sits between linear and quadratic time. Merge Sort and Quick Sort (on average) fall into this category.
- O(n^2) - Polynomial (Quadratic) Time: The time taken is proportional to the square of the input size. These algorithms become very slow as 'n' increases. Inefficient sorting algorithms like Bubble Sort and Insertion Sort have this complexity in their average and worst cases.
- O(2^n) - Exponential Time: The time taken doubles with each additional element in the input. These algorithms are only feasible for very small 'n'. Brute-force solutions to complex problems often exhibit this complexity.
Examiner's Tip: When asked to calculate Big O, always drop constants and lower-order terms. An algorithm with a complexity of 4n^2 + 10n + 150 is simply O(n^2), because as 'n' becomes very large, the n^2 term dominates.
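To make the tip concrete, here is a minimal Python sketch (the function name is invented for this example) that evaluates the tip's step count of 4n^2 + 10n + 150 for growing n. Doubling the input roughly quadruples the count, the signature of O(n^2).

```python
# A minimal sketch showing why constants and lower-order terms vanish:
# a routine costing 4n^2 + 10n + 150 steps behaves like n^2 for large n.

def step_count(n):
    return 4 * n**2 + 10 * n + 150        # total steps from the tip's example

for n in (10, 100, 1000, 10000):
    print(n, step_count(n), step_count(n) / n**2)
# As n grows, the steps-per-n^2 ratio settles at 4: the n^2 term dominates,
# so after dropping constants the complexity is simply O(n^2).
```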
Concept 2: Standard Sorting Algorithms
Sorting is a fundamental operation in computer science. You need to know the mechanics, time complexity, space complexity, and stability of four key algorithms.
- Bubble Sort: The simplest sorting algorithm. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order; passes are repeated until a full pass makes no swaps. It is easy to implement and stable, but at O(n^2) average and worst case it is highly inefficient for large lists (see the sketch after this list).
- Insertion Sort: Builds the final sorted array one item at a time, inserting each element into its correct position within the sorted part of the array. It is stable and, while O(n^2) on average, it approaches O(n) on small or nearly-sorted lists.
- Merge Sort: A 'Divide and Conquer' algorithm. It divides the unsorted list into n sub-lists, each containing one element, then repeatedly merges sub-lists to produce new sorted sub-lists until only one remains. It is stable and guarantees O(n log n) in all cases, but requires O(n) additional memory.
- Quick Sort: Another 'Divide and Conquer' algorithm. It picks an element as a pivot, partitions the array around that pivot, and sorts each partition recursively. It is generally faster in practice than Merge Sort, but is typically not stable and degrades to a worst case of O(n^2) if pivot selection is poor (both divide-and-conquer sorts are also sketched below).
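A minimal Python sketch of the two quadratic sorts described above, assuming plain lists of comparable values. OCR exam answers would normally use the official pseudocode; the function and variable names here are my own.

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs; O(n^2) time, O(1) extra space."""
    items = list(items)                       # work on a copy
    n = len(items)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        n -= 1                                # largest value has bubbled to the end
    return items

def insertion_sort(items):
    """Insert each element into the sorted prefix; fast on nearly-sorted input."""
    items = list(items)
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:  # shift larger elements right
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items

print(bubble_sort([5, 1, 4, 2, 8]))           # [1, 2, 4, 5, 8]
print(insertion_sort([5, 1, 4, 2, 8]))        # [1, 2, 4, 5, 8]
```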
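And a sketch of the two divide-and-conquer sorts. Note that this Quick Sort builds new lists while partitioning for clarity; a typical exam treatment partitions in place.

```python
def merge_sort(items):
    """Split in half, sort each half, merge; O(n log n) time, O(n) extra space."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # take the smaller front element
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains

def quick_sort(items):
    """Partition around a pivot; O(n log n) on average, O(n^2) with poor pivots."""
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]            # middle element as pivot
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
print(quick_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```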
Concept 3: Standard Searching Algorithms
Searching for data is another core task. The choice of algorithm depends on whether the data is sorted.
- Linear Search: The most basic search. It sequentially checks each element of the list until a match is found or the whole list has been searched. It can be used on any list, sorted or unsorted. Its time complexity is O(n).
- Binary Search: A highly efficient search algorithm that only works on sorted lists. It repeatedly divides the search interval in half: if the search key is less than the item in the middle of the interval, the interval narrows to the lower half; otherwise it narrows to the upper half. This continues until the value is found or the interval is empty. Its time complexity is O(log n). Both searches are sketched in code below.
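A minimal Python sketch of both searches, assuming an ascending sorted list for binary_search; the names are illustrative rather than taken from the specification.

```python
def linear_search(items, target):
    """Check each element in turn; O(n). Works on unsorted data."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1                                 # not found

def binary_search(items, target):
    """Repeatedly halve the search interval; O(log n). Requires sorted data."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif target < items[mid]:
            high = mid - 1                    # narrow to the lower half
        else:
            low = mid + 1                     # narrow to the upper half
    return -1                                 # interval empty: not found

data = [2, 5, 8, 12, 16, 23, 38]
print(linear_search(data, 23))                # 5
print(binary_search(data, 23))                # 5
```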
Concept 4: Pathfinding Algorithms
Pathfinding algorithms are used to find the shortest or optimal route between two points in a graph.
- Dijkstra's Algorithm: Finds the shortest path from a starting node to every other node in a weighted graph. It works by visiting nodes starting from the start node and calculating the distance to each of their neighbours, greedily selecting the unvisited node with the smallest known distance at each step (a sketch follows this list). Important: Dijkstra's does not work correctly if there are negative edge weights.
- A* Search Algorithm: An extension of Dijkstra's that is optimised for a single destination. It uses a heuristic function to estimate the remaining cost from a node to the goal, allowing it to prioritise paths that appear to lead closer to the destination. This makes it more efficient than Dijkstra's in many practical scenarios, such as video games and route planning.
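A minimal Python sketch of Dijkstra's algorithm using a priority queue. The graph representation (a dict mapping each node to a list of (neighbour, weight) pairs) is an assumption for illustration; A* would differ chiefly in ordering the queue by distance-so-far plus the heuristic estimate to the goal.

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest known distance from start to every reachable node."""
    distances = {start: 0}
    visited = set()
    queue = [(0, start)]                      # (distance so far, node)
    while queue:
        dist, node = heapq.heappop(queue)     # greedy: smallest known distance
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            new_dist = dist + weight          # assumes all weights >= 0
            if new_dist < distances.get(neighbour, float("inf")):
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances

graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(graph, "A"))                   # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```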
Mathematical/Scientific Relationships
- Big O Notation Dominance: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n)
- Binary Search Steps: The maximum number of comparisons in a binary search is floor(log2(n)) + 1.
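A quick check of the comparison bound in Python, since each comparison halves the remaining interval:

```python
import math

# A list of 1000 items needs at most floor(log2(1000)) + 1 = 9 + 1 = 10
# comparisons; even a million items need only 20.
for n in (10, 100, 1000, 1_000_000):
    print(n, math.floor(math.log2(n)) + 1)   # 4, 7, 10, 20
```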
Practical Applications
- Sorting: Used everywhere from ordering files by name or date, to ranking search engine results, to processing financial transactions.
- Searching: Underpins database queries, web searches, and looking up information in any large dataset.
- Pathfinding: Powers GPS navigation (Google Maps, Waze), AI in video games (enemy movement), and network routing (finding the most efficient path for data packets on the internet).