IB SL

Computational Thinking and Problem-Solving

Computational Thinking

Computational thinking is a structured approach to solving problems in a way that can be understood and executed by a computer. The IB syllabus identifies four key thinking approaches.

Four Computational Thinking Approaches

Thinking abstractly — identifying and extracting the essential features of a problem while ignoring irrelevant detail. For example, a map application abstracts road geometry but omits the colour of buildings.

Thinking ahead — planning what outputs are needed before designing the process; identifying pre-conditions, post-conditions, and the order in which steps must happen. For example, before writing any steps of a sorting algorithm, the designer fixes the post-condition that the output must be in ascending order.

Thinking procedurally — breaking a complex task into a sequence of clearly ordered steps that, when followed, produce the desired result. For example, writing a recipe as numbered steps.

Thinking concurrently — identifying parts of a problem that can be done simultaneously (in parallel) rather than sequentially. For example, a search engine crawls multiple web pages at the same time rather than one after another.

IB Paper 1 often asks students to “identify the thinking approach” in a scenario and justify it. The key is to match the specific action described to the correct approach: abstracting = ignoring detail; thinking ahead = planning outputs first; procedural = sequential steps; concurrent = parallelism. One approach, one justification.


Flowcharts

A flowchart is a visual representation of an algorithm, using standardised symbols to show the sequence of steps, decisions, and loops. Flowcharts help communicate an algorithm’s logic before writing code or pseudocode.

Standard Flowchart Symbols

Symbol shape | Name | Purpose
Oval (rounded rectangle) | Terminal | Start or End of the algorithm
Rectangle | Process | An action or computation (e.g., total ← total + arr[i])
Diamond | Decision | A yes/no (true/false) question that branches the flow
Parallelogram | Input/Output | Reading data (input) or displaying a result (output)
Arrow | Flow line | Shows the direction of execution

Reading a Flowchart

Execution flows from the Start terminal downward (or in the direction of arrows). At a diamond (decision), the flow splits: one path for the “Yes” (true) branch and one for the “No” (false) branch. Loops are created by having a flow line loop back to an earlier point.

Worked Example — Flowchart for finding the maximum of two numbers:

Start → Input A, B → Decision: A > B? → Yes: Output A → End; No: Output B → End

Written as pseudocode for clarity:

input A
input B
if A > B then
    output A
else
    output B
end if

The diamond corresponds to if A > B then, with the Yes branch going left to output A and the No branch going right to output B.

IB Paper 1 may ask you to draw a flowchart or to trace through a given flowchart and state the output. Always follow the flow line direction carefully. Common mistakes: using the wrong shape (rectangle for a decision instead of a diamond); forgetting to label the Yes/No branches of the diamond; omitting the terminal (Start/End) ovals.


Problem Decomposition and Modular Design

Problem decomposition means breaking a large, complex problem into smaller, more manageable sub-problems. Each sub-problem can then be solved independently and the solutions combined.

Modular design applies decomposition to programming: the solution is built as a collection of modules (procedures, functions, methods), each responsible for one specific task.

Benefits of modular design:

  • Each module can be written, tested, and debugged independently (unit testing)
  • Modules can be reused across different programs
  • Large programs can be developed by teams working on different modules simultaneously
  • Easier to maintain — changing one module does not require rewriting others, provided the interface (inputs and outputs) stays the same
  • Makes programs easier to read and understand

Modular design key benefits: reusability across programs, easier maintenance, independent testability, readability, and parallel development by teams.


IB Pseudocode Conventions

IB Computer Science uses a standardised pseudocode notation in all exams. You must use this exact syntax in Paper 1 and Paper 2 answers — markers will not accept general-purpose code or other pseudocode styles.

Variables and Assignment

count ← 0
name ← "Alice"
total ← total + 5

The ← symbol means “is assigned the value of”. The right-hand side is evaluated first, then the result is stored in the variable on the left.

Input and Output

input name
output "Hello, ", name
output score

Conditionals

if score >= 50 then
    output "Pass"
else
    output "Fail"
end if

Multi-branch:

if grade = "A" then
    output "Excellent"
else if grade = "B" then
    output "Good"
else
    output "Acceptable"
end if

Loops

Count-controlled loop (from/to):

loop count from 1 to 10
    output count
end loop

Condition-controlled loop:

loop while answer ≠ "quit"
    input answer
end loop

Arrays

IB pseudocode uses 1-based indexing — the first element is arr[1], and an array of N elements runs from arr[1] to arr[N]. A 1D array named scores with 5 elements:

scores[1] ← 85
scores[5] ← 72
output scores[2]
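One practical point when translating IB pseudocode into a real language: Python lists are 0-based, so pseudocode index k maps to Python index k − 1. A minimal illustrative sketch (variable names match the example above):

```python
# IB pseudocode arrays are 1-based; Python lists are 0-based,
# so pseudocode scores[k] corresponds to Python scores[k - 1].
scores = [0] * 5          # a 5-element array, all initialised to 0

scores[1 - 1] = 85        # pseudocode: scores[1] <- 85
scores[5 - 1] = 72        # pseudocode: scores[5] <- 72
print(scores[2 - 1])      # pseudocode: output scores[2]  -> prints 0
```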

Methods and Procedures

METHOD greet(name)
    output "Hello, ", name
END METHOD

METHOD add(a, b)
    return a + b
END METHOD

Call a method: greet("Alice") or result ← add(3, 5)

The most common pseudocode errors in IB exams are: (1) using = for assignment instead of ←; (2) writing while loops without loop and end loop; (3) using print or console.log instead of output; (4) forgetting end if to close a conditional. These are penalised in Paper 1. Practise writing IB pseudocode exactly.


Algorithm Design — Searching

A searching algorithm locates a specific item within a collection of data. The two algorithms required for IB SL are linear search and binary search.

Linear Search

Examines each element of the array in sequence from the first to the last until the target is found or all elements have been checked.

Pre-condition: None — the array does not need to be sorted.

Pseudocode:

METHOD linearSearch(arr, target)
    found ← false
    index ← 1
    loop while index <= length(arr) and found = false
        if arr[index] = target then
            found ← true
        else
            index ← index + 1
        end if
    end loop
    if found = true then
        return index
    else
        return -1
    end if
END METHOD
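The pseudocode above translates directly to Python; a sketch (the function name and the 1-based return convention follow the notes, not any standard library):

```python
def linear_search(arr, target):
    """Return the 1-based index of target in arr, or -1 if absent.

    Direct translation of the linearSearch pseudocode, keeping the
    1-based index convention used in IB pseudocode.
    """
    for i, value in enumerate(arr, start=1):  # i runs 1 .. len(arr)
        if value == target:
            return i                          # found: stop immediately
    return -1                                 # checked every element
```

For example, searching for 7 in [3, 9, 7, 1, 5] returns 3, matching the trace table below.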

Time complexity (concept): In the worst case, every element is examined — n comparisons for an array of n elements. This is called linear or O(n) complexity.

Trace table — linear search for target = 7 in [3, 9, 7, 1, 5]:

Step | index | arr[index] | found | Action
1 | 1 | 3 | false | 3 ≠ 7, index ← 2
2 | 2 | 9 | false | 9 ≠ 7, index ← 3
3 | 3 | 7 | false | 7 = 7, found ← true
End | 3 | 7 | true | Return 3

Binary Search

Repeatedly halves the search space by comparing the target to the middle element. Requires the array to be sorted beforehand.

Pre-condition: Array must be sorted in ascending order.

Pseudocode:

METHOD binarySearch(arr, target)
    low ← 1
    high ← length(arr)
    found ← false
    loop while low <= high and found = false
        mid ← (low + high) div 2
        if arr[mid] = target then
            found ← true
        else if arr[mid] < target then
            low ← mid + 1
        else
            high ← mid - 1
        end if
    end loop
    if found = true then
        return mid
    else
        return -1
    end if
END METHOD
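As a sketch, the same algorithm in Python (function name illustrative); note that pseudocode `div` becomes Python's integer-division operator `//`:

```python
def binary_search(arr, target):
    """Return the 1-based index of target in arr, or -1 if absent.

    Translation of the binarySearch pseudocode.
    Pre-condition: arr is sorted in ascending order.
    """
    low, high = 1, len(arr)
    while low <= high:
        mid = (low + high) // 2        # pseudocode: (low + high) div 2
        if arr[mid - 1] == target:     # arr[mid] in 1-based pseudocode
            return mid
        elif arr[mid - 1] < target:
            low = mid + 1              # discard the lower half
        else:
            high = mid - 1             # discard the upper half
    return -1
```

Searching for 14 in [2, 5, 8, 12, 14, 23, 38] returns 5, matching the trace table below.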

Time complexity (concept): Each step halves the remaining elements. For n elements, at most log₂ n comparisons are needed — much faster than linear search for large arrays.

Trace table — binary search for target = 14 in [2, 5, 8, 12, 14, 23, 38] (indices 1–7):

Step | low | high | mid | arr[mid] | Action
1 | 1 | 7 | 4 | 12 | 12 < 14, low ← 5
2 | 5 | 7 | 6 | 23 | 23 > 14, high ← 5
3 | 5 | 5 | 5 | 14 | 14 = 14, found ← true
End | – | – | 5 | – | Return 5

A common error is applying binary search to an unsorted array and claiming it will still work — it will not. Always state the pre-condition: “the array must be sorted in ascending order”. A second common error is writing mid = (low + high) / 2 — use div for integer division in IB pseudocode to avoid fractional indices.


Algorithm Design — Sorting

A sorting algorithm rearranges the elements of an array into a specified order (typically ascending). Two algorithms are required for IB SL.

Bubble Sort

Repeatedly compares adjacent pairs of elements and swaps them if they are in the wrong order. After each full pass, the largest unsorted element “bubbles” to its correct position at the end.

Pseudocode:

METHOD bubbleSort(arr)
    n ← length(arr)
    loop i from 1 to n - 1
        loop j from 1 to n - i
            if arr[j] > arr[j + 1] then
                temp ← arr[j]
                arr[j] ← arr[j + 1]
                arr[j + 1] ← temp
            end if
        end loop
    end loop
END METHOD
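A Python sketch of the same algorithm (function name illustrative; Python's `range` and 0-based indexing replace the 1-based pseudocode loops):

```python
def bubble_sort(arr):
    """Sort arr in place, ascending; translation of the bubbleSort
    pseudocode. Each pass bubbles the largest unsorted element to
    the end of the unsorted region."""
    n = len(arr)
    for i in range(1, n):             # passes 1 .. n-1
        for j in range(0, n - i):     # compare adjacent pairs
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap
    return arr
```

For example, `bubble_sort([5, 3, 8, 1])` produces [1, 3, 5, 8], as in the trace below.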

Trace table — bubble sort on [5, 3, 8, 1]:

Pass 1 (i = 1):

j | Comparison | Action | Array state
1 | 5 > 3? Yes | Swap | [3, 5, 8, 1]
2 | 5 > 8? No | No swap | [3, 5, 8, 1]
3 | 8 > 1? Yes | Swap | [3, 5, 1, 8]

Pass 2 (i = 2):

j | Comparison | Action | Array state
1 | 3 > 5? No | No swap | [3, 5, 1, 8]
2 | 5 > 1? Yes | Swap | [3, 1, 5, 8]

Pass 3 (i = 3):

j | Comparison | Action | Array state
1 | 3 > 1? Yes | Swap | [1, 3, 5, 8]

Sorted result: [1, 3, 5, 8]

Selection Sort

Finds the minimum element in the unsorted portion of the array and swaps it into the next sorted position. Repeats until the entire array is sorted.

Concept (pseudocode):

METHOD selectionSort(arr)
    n ← length(arr)
    loop i from 1 to n - 1
        minIndex ← i
        loop j from i + 1 to n
            if arr[j] < arr[minIndex] then
                minIndex ← j
            end if
        end loop
        if minIndex ≠ i then
            temp ← arr[i]
            arr[i] ← arr[minIndex]
            arr[minIndex] ← temp
        end if
    end loop
END METHOD
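A Python sketch of the same algorithm (function name illustrative):

```python
def selection_sort(arr):
    """Sort arr in place, ascending; translation of the selectionSort
    pseudocode. Each pass finds the minimum of the unsorted portion
    and swaps it into the next sorted position."""
    n = len(arr)
    for i in range(n - 1):            # sorted boundary moves right each pass
        min_index = i
        for j in range(i + 1, n):     # scan the unsorted portion
            if arr[j] < arr[min_index]:
                min_index = j
        if min_index != i:            # at most one swap per pass
            arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr
```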

Key difference from bubble sort: Selection sort makes at most n − 1 swaps (one per pass); bubble sort may make many more swaps. However, both make O(n²) comparisons.

IB Paper 1 often gives a partially-sorted array and asks you to show the state after one or two passes of a named algorithm. For bubble sort: show the state after each full pass. For selection sort: show which element was placed in position after each pass. State the algorithm name explicitly before beginning your trace.


Trace Tables

A trace table is a tool for manually simulating the execution of an algorithm, tracking the value of each variable at each step. Trace tables are used in IB exams to demonstrate understanding of algorithm execution.

How to Construct a Trace Table

  1. Identify all variables used in the algorithm
  2. Create a column for each variable (and one for any output)
  3. Execute the algorithm one line at a time, recording each change
  4. Record only the values that change at each step; leave unchanged cells blank

Trace table — find the maximum value in [4, 7, 2, 9, 5]:

METHOD findMax(arr)
    max ← arr[1]
    loop i from 2 to length(arr)
        if arr[i] > max then
            max ← arr[i]
        end if
    end loop
    return max
END METHOD

i | arr[i] | max | Condition | Action
– | – | 4 | – | Initialise max ← arr[1]
2 | 7 | 4 | 7 > 4? Yes | max ← 7
3 | 2 | 7 | 2 > 7? No | –
4 | 9 | 7 | 9 > 7? Yes | max ← 9
5 | 5 | 9 | 5 > 9? No | –
End | – | 9 | – | Return 9
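As a cross-check of the trace, the findMax pseudocode translates directly to Python (function name illustrative):

```python
def find_max(arr):
    """Return the largest element; translation of the findMax pseudocode."""
    maximum = arr[0]          # pseudocode: max <- arr[1]
    for value in arr[1:]:     # pseudocode: loop i from 2 to length(arr)
        if value > maximum:
            maximum = value
    return maximum
```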

Collection Types

The IB syllabus requires understanding of four abstract data structures and their basic operations.

Arrays

An array is an ordered, fixed-size collection of elements of the same type, accessed by index.

  • Access any element directly by index: O(1)
  • Inserting or deleting in the middle is slow (requires shifting elements)
  • IB arrays are 1-based: arr[1] is the first element, arr[N] is the last in an N-element array

2D Arrays

A 2D array (two-dimensional array) is an array of arrays — a grid of elements arranged in rows and columns. It is used to represent matrices, game boards, tables, and image pixel grids.

In IB pseudocode, a 2D array is accessed using two indices: grid[row][col]. Both indices are 1-based.

grid[1][1] ← 5
grid[2][3] ← 9
output grid[1][1]

Traversing a 2D array (reading every element):

METHOD print2DArray(grid, rows, cols)
    loop i from 1 to rows
        loop j from 1 to cols
            output grid[i][j]
        end loop
    end loop
END METHOD

Worked Example — Find the sum of all elements in a 3x3 grid:

METHOD sumGrid(grid)
    total ← 0
    loop i from 1 to 3
        loop j from 1 to 3
            total ← total + grid[i][j]
        end loop
    end loop
    return total
END METHOD

For grid = [[1,2,3],[4,5,6],[7,8,9]]: total = 1+2+3+4+5+6+7+8+9 = 45.
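A Python sketch of the nested-loop traversal (function name illustrative; iterating rows directly replaces the 1-based index loops, and the sketch works for any rectangular grid, not just 3×3):

```python
def sum_grid(grid):
    """Sum every element of a 2D grid with nested loops, as in the
    sumGrid pseudocode."""
    total = 0                 # accumulator, initialised before the loops
    for row in grid:          # outer loop: each row
        for value in row:     # inner loop: each column within the row
            total += value
    return total
```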

2D array questions in IB exams often ask you to trace a nested loop. Track both loop variables (i and j) simultaneously in your trace table — add a column for each. The inner loop runs completely for each iteration of the outer loop.

Stacks

A stack is a Last-In, First-Out (LIFO) collection. Think of a stack of plates — you add and remove only from the top.

Operation | Description
push(item) | Add item to the top of the stack
pop() | Remove and return the top item
peek() | View the top item without removing it
isEmpty() | Returns true if the stack has no items

Real-world uses: undo/redo in editors, call stack in program execution, browser back button history.
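A minimal sketch of the four stack operations using a plain Python list, where the end of the list plays the role of the top:

```python
stack = []
stack.append("state1")    # push: add to the top
stack.append("state2")    # push
top = stack[-1]           # peek: view the top without removing -> "state2"
undone = stack.pop()      # pop: remove and return the top -> "state2" (LIFO)
empty = len(stack) == 0   # isEmpty -> False ("state1" remains)
```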

Queues

A queue is a First-In, First-Out (FIFO) collection. Think of a line at a checkout — the first person to join is the first to be served.

Operation | Description
enqueue(item) | Add item to the back of the queue
dequeue() | Remove and return the item at the front
isEmpty() | Returns true if the queue has no items

Real-world uses: print job queues, CPU scheduling, network packet queues.
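A minimal queue sketch using Python's `collections.deque`, which supports O(1) operations at both ends — add at the back, remove from the front:

```python
from collections import deque

queue = deque()
queue.append("job1")        # enqueue: add at the back
queue.append("job2")        # enqueue
served = queue.popleft()    # dequeue: remove from the front -> "job1" (FIFO)
empty = len(queue) == 0     # isEmpty -> False ("job2" remains)
```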

Linked Lists

A linked list is a dynamic collection of nodes, where each node stores a data value and a pointer (reference) to the next node. Unlike arrays, linked lists do not require contiguous memory.

  • Advantage: Inserting or deleting a node requires only updating pointers — no shifting of elements
  • Disadvantage: Accessing element n requires traversing from the head — O(n) access time
  • The last node’s pointer is null (or NIL), indicating the end of the list
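A minimal node-and-traversal sketch (class and variable names illustrative), showing how each node holds a value plus a reference to the next node, with None marking the end:

```python
class Node:
    """One node of a singly linked list: a data value plus a pointer
    (reference) to the next node; None plays the role of null/NIL."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

# Build the list 3 -> 7 -> 9 and traverse it from the head.
head = Node(3, Node(7, Node(9)))

values = []
current = head
while current is not None:   # follow the pointers until the null end
    values.append(current.data)
    current = current.next
```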

Data structure — access pattern:

  • Array — fast random access by index; fixed size
  • 2D Array — grid structure; nested loops to traverse; row-column indexing
  • Stack — LIFO; add/remove from top only
  • Queue — FIFO; add at back, remove from front
  • Linked list — dynamic size; fast insert/delete; slow random access

HL Abstract Data Structures

Binary Trees

A binary tree is a hierarchical data structure where each node has at most two children, referred to as the left child and right child. The topmost node is the root; nodes with no children are leaves.

Binary Search Tree (BST): A special binary tree where, for every node:

  • All values in the left subtree are less than the node’s value
  • All values in the right subtree are greater than the node’s value

This property enables efficient searching, insertion, and deletion.

        8
       / \
      3   10
     / \    \
    1   6    14
       / \   /
      4   7 13

Three traversal orders for binary trees:

Traversal | Order | Visits
In-order | Left → Root → Right | Produces a sorted sequence for a BST
Pre-order | Root → Left → Right | Useful for copying or serialising the tree
Post-order | Left → Right → Root | Useful for deleting the tree or evaluating expression trees

Worked Example — In-order traversal of the BST above:

In-order visits: 1, 3, 4, 6, 7, 8, 10, 13, 14

Note this produces the values in ascending sorted order — confirming the BST property.

Searching in a BST:

  1. Start at the root
  2. If the target equals the current node, found
  3. If the target is less, go left; if greater, go right
  4. Repeat until found or reach a null pointer (not present)
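The insertion, search, and in-order traversal rules above can be sketched in Python (class and function names illustrative; inserting the values in the order 8, 3, 10, 1, 6, 14, 4, 7, 13 reproduces the example tree):

```python
class TreeNode:
    """A BST node: values in the left subtree are smaller, values
    in the right subtree are larger."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def bst_insert(root, value):
    """Insert by following the same comparisons as search until a
    null position is reached (assumes no duplicate values)."""
    if root is None:
        return TreeNode(value)        # null position found: insert here
    if value < root.value:
        root.left = bst_insert(root.left, value)
    else:
        root.right = bst_insert(root.right, value)
    return root

def bst_search(root, target):
    """Return True if target is in the tree: compare, then go left
    or right, until found or a null pointer is reached."""
    while root is not None:
        if target == root.value:
            return True
        root = root.left if target < root.value else root.right
    return False

def in_order(node, out):
    """Left -> Root -> Right; for a BST this yields ascending order."""
    if node is not None:
        in_order(node.left, out)
        out.append(node.value)
        in_order(node.right, out)

root = None
for v in [8, 3, 10, 1, 6, 14, 4, 7, 13]:
    root = bst_insert(root, v)
```

An in-order traversal of this tree yields 1, 3, 4, 6, 7, 8, 10, 13, 14 — the sorted order, confirming the BST property.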

Average time complexity: O(log n) for a balanced BST — each step halves the remaining search space, just like binary search on an array.

HL exams may ask you to insert values into a BST and draw the resulting tree, or to perform a named traversal and list the output order. For insertion: follow the same comparison rules as searching until you reach a null position, then insert the new node there. The order of insertion determines the tree’s shape.


HL Recursion

Recursion is a programming technique where a function calls itself as part of its own definition. A recursive function must have:

  1. Base case — a condition under which the function returns a result without calling itself again (prevents infinite recursion)
  2. Recursive case — the function calls itself with a simpler or smaller version of the problem

How Recursion Works

Each recursive call creates a new stack frame on the call stack, storing the local variables and the return address for that call. When the base case is reached, the stack unwinds — each frame returns its result to the caller below it.

Worked Example — Recursive factorial:

METHOD factorial(n)
    if n = 0 then
        return 1
    else
        return n * factorial(n - 1)
    end if
END METHOD

Trace for factorial(4):

  • factorial(4) calls factorial(3)
  • factorial(3) calls factorial(2)
  • factorial(2) calls factorial(1)
  • factorial(1) calls factorial(0)
  • factorial(0) returns 1 (base case)
  • factorial(1) returns 1 * 1 = 1
  • factorial(2) returns 2 * 1 = 2
  • factorial(3) returns 3 * 2 = 6
  • factorial(4) returns 4 * 6 = 24
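The same function in Python, with the base case and recursive case marked (a direct translation of the pseudocode above):

```python
def factorial(n):
    """Recursive factorial: factorial(0) = 1; factorial(n) = n * factorial(n-1)."""
    if n == 0:
        return 1                       # base case: stops the recursion
    return n * factorial(n - 1)        # recursive case: smaller problem
```

Calling `factorial(4)` builds four stack frames before the base case, then unwinds to 24 exactly as in the trace above.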

Recursion vs iteration:

Attribute | Recursion | Iteration
Mechanism | Function calls itself | Loop repeats
Memory | Each call uses stack space; risk of stack overflow | Fixed memory use
Readability | Elegant for tree/graph problems | Often simpler for linear problems
Use cases | Tree traversal, divide-and-conquer, factorials | Array processing, simple loops

Any recursive algorithm can be rewritten iteratively and vice versa. IB HL questions sometimes ask you to “convert a recursive algorithm to an iterative one” or to “identify the base case and recursive case” in given code. Always identify both explicitly.


HL Big-O Notation

Big-O notation describes the time complexity of an algorithm — how the number of operations grows as the input size n increases. It focuses on the dominant term as n becomes large, ignoring constants and lower-order terms.

Common Complexity Classes

Big-O | Name | Example | Growth
O(1) | Constant | Array access by index | Does not grow with n
O(log n) | Logarithmic | Binary search, BST search | Grows very slowly
O(n) | Linear | Linear search, finding max | Proportional to n
O(n log n) | Linearithmic | Merge sort, quicksort (average) | Slightly worse than linear
O(n²) | Quadratic | Bubble sort, selection sort | Grows rapidly
O(2ⁿ) | Exponential | Recursive subset generation | Extremely rapid growth

Analysing Algorithm Complexity

  • Single loop over n elements: O(n)
  • Nested loop (loop inside a loop), each running n times: O(n²)
  • Halving the search space each step: O(log n)
  • Fixed number of steps regardless of input: O(1)

Worked Example — Determine the complexity of bubble sort:

Bubble sort uses two nested loops. The outer loop runs n − 1 times. For each iteration of the outer loop, the inner loop runs up to n − 1 times. Total comparisons in the worst case: approximately n(n − 1)/2.

Dominant term: n². Therefore bubble sort is O(n²).
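The n(n − 1)/2 figure can be verified empirically by instrumenting bubble sort to count its comparisons (a sketch; the function name is illustrative). For n = 10 the count is 10 × 9 / 2 = 45:

```python
def bubble_sort_comparisons(arr):
    """Sort arr and return how many comparisons bubble sort made,
    to check the n(n-1)/2 worst-case figure."""
    n = len(arr)
    comparisons = 0
    for i in range(1, n):
        for j in range(0, n - i):
            comparisons += 1          # count every adjacent comparison
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return comparisons

n = 10
count = bubble_sort_comparisons(list(range(n, 0, -1)))  # worst case: reversed
# count == n * (n - 1) // 2 == 45
```

Note that this version always makes the full n(n − 1)/2 comparisons regardless of input order, which is exactly what the pseudocode in the sorting section does.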

Search and sort complexity to memorise:

  • Linear search: O(n)
  • Binary search: O(log n)
  • Bubble sort: O(n²)
  • Selection sort: O(n²)
  • Binary tree search (balanced): O(log n)

When comparing algorithms in HL, always state the Big-O class and explain what it means in context. For example: “Binary search is O(log n), meaning that for an array of 1,000,000 elements, at most 20 comparisons are needed, whereas linear search would require up to 1,000,000 comparisons in the worst case.”


Standard Algorithms

Beyond searching and sorting, the IB syllabus requires students to be able to write or trace algorithms for these common tasks.

Finding Minimum and Maximum

METHOD findMin(arr)
    min ← arr[1]
    loop i from 2 to length(arr)
        if arr[i] < min then
            min ← arr[i]
        end if
    end loop
    return min
END METHOD

The maximum algorithm is identical but uses > and a variable named max.

Counting Occurrences

METHOD countOccurrences(arr, target)
    count ← 0
    loop i from 1 to length(arr)
        if arr[i] = target then
            count ← count + 1
        end if
    end loop
    return count
END METHOD
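A Python sketch of the counting pattern (function name illustrative), showing the accumulator initialised before the loop — the very thing the inserted-bug questions below tend to break:

```python
def count_occurrences(arr, target):
    """Count how many elements equal target; translation of the
    countOccurrences pseudocode."""
    count = 0                 # initialise the accumulator before the loop
    for value in arr:         # examine every element
        if value == target:
            count += 1
    return count
```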

In Paper 1, these standard algorithms are sometimes presented with minor errors and you are asked to identify the bug. Common inserted bugs include: off-by-one errors in loop bounds (from 1 to length(arr) + 1 instead of length(arr)), using > where >= is needed, or failing to initialise the accumulator variable before the loop.

Video Resources

Watch: IB CS Topic 4 — Pseudocode & Problem-Solving

IB Computer Science · IB-specific · Exam-style pseudocode questions solved step-by-step: arrays, collections, loops, and Paper 1 mark-scheme technique


Practice Questions

Q1 — A programmer writes a procedure that compresses image files and a separate procedure that uploads them to a server, then calls them in sequence. Identify the computational thinking approach demonstrated and justify your answer. [2 marks]

Model answer:

Thinking procedurally (1 mark) — the programmer has broken the overall task (compress and upload) into a sequence of clearly ordered steps (first compress, then upload), each represented as a separate procedure called in order (1 mark).

Alternative credit: thinking ahead (1 mark) — the programmer identified the required outputs (compressed file; uploaded file) before designing the steps, and the order of steps reflects their dependency (must compress before uploading) (1 mark).

Q2 — Write IB pseudocode for a procedure that accepts an array of integers and outputs the sum of all elements. [4 marks]

Model answer:

METHOD sumArray(arr)
    total ← 0
    loop i from 1 to length(arr)
        total ← total + arr[i]
    end loop
    output total
END METHOD

Award: 1 mark for correct METHOD/END METHOD structure; 1 mark for initialising total ← 0 before the loop; 1 mark for correct loop from 1 to length(arr); 1 mark for accumulating total ← total + arr[i] inside the loop.

Q3 — Trace the binary search algorithm on the sorted array [1, 4, 6, 10, 15, 21, 30] searching for target = 10. Show the trace table. [4 marks]

Model answer:

Array indices: 1=1, 2=4, 3=6, 4=10, 5=15, 6=21, 7=30

Step | low | high | mid | arr[mid] | Action
1 | 1 | 7 | 4 | 10 | 10 = 10, found ← true
End | – | – | 4 | – | Return 4

Award: 1 mark for correct initial low = 1, high = 7; 1 mark for mid = 4 correctly computed; 1 mark for identifying arr[4] = 10 = target; 1 mark for returning index 4 (or stating “found at index 4”).

Q4 — Show the state of the array [6, 2, 9, 4] after each pass of a bubble sort. [4 marks]

Model answer:

Start: [6, 2, 9, 4]

Pass 1: Compare 6,2 → swap; compare 6,9 → no swap; compare 9,4 → swap → [2, 6, 4, 9] (1 mark)

Pass 2: Compare 2,6 → no swap; compare 6,4 → swap → [2, 4, 6, 9] (1 mark)

Pass 3: Compare 2,4 → no swap → [2, 4, 6, 9] (1 mark — must show pass occurred even with no swaps)

Sorted: [2, 4, 6, 9] (1 mark for correct final result)

Q5 — State two differences between a stack and a queue. [4 marks]

Model answer:

Difference 1: A stack uses LIFO (Last-In, First-Out) order — the most recently added item is the first to be removed (1 mark). A queue uses FIFO (First-In, First-Out) order — the item that has been waiting longest is removed first (1 mark).

Difference 2: In a stack, items are added and removed from the same end (the top) using push and pop (1 mark). In a queue, items are added at the back (enqueue) and removed from the front (dequeue) — different ends are used (1 mark).

Q6 — Write IB pseudocode for a linear search that returns true if a target value is found in an array, and false otherwise. [4 marks]

Model answer:

METHOD linearSearch(arr, target)
    found ← false
    loop i from 1 to length(arr)
        if arr[i] = target then
            found ← true
        end if
    end loop
    return found
END METHOD

Award: 1 mark for METHOD/END METHOD with correct parameters; 1 mark for initialising found ← false; 1 mark for correct loop bounds 1 to length(arr); 1 mark for setting found ← true on match and returning found.