The Sliding Puzzle

Solving the 15-puzzle using A* and Iterative Deepening A* (IDA*) algorithms

Overview

The 15-puzzle is a sliding puzzle that consists of 15 numbered square tiles placed in random order in a 4-by-4 frame. The goal is to rearrange the tiles into the correct order. Other n-puzzles exist, such as the smaller 8-puzzle or the larger 24-puzzle. The n-puzzle is a classical problem that asks whether a specific puzzle is solvable and, if it is, how many sliding moves it takes to rearrange the tiles. For larger sizes of the n-puzzle, finding the shortest solution is NP-hard.

Solvable Conditions

Different n-puzzles have different criteria for solvability. A 15-puzzle is solvable if and only if one of the following holds:

  • The blank is on an even row counting from the bottom (second-last, fourth-last, etc.), and the number of inversions is odd.
  • The blank is on an odd row counting from the bottom (last, third-last, fifth-last, etc.), and the number of inversions is even.

Number of inversions: if the tiles are written out in a single row instead of being spread across 4 rows, a pair of tiles (a, b) forms an inversion if a appears before b but a > b.
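The conditions above translate directly into code. The following is a minimal sketch of such a check, assuming the board is given as a flat array of 16 ints with 0 for the blank (the class and method names are mine, not part of the webpage's implementation):

```java
// Hypothetical sketch of the 15-puzzle solvability test described above.
// Board representation: flat array of 16 ints, 0 = blank.
public class Solvability {
    // Counts pairs (a, b) where a appears before b but a > b (blank excluded).
    static int inversions(int[] tiles) {
        int inv = 0;
        for (int i = 0; i < tiles.length; i++) {
            for (int j = i + 1; j < tiles.length; j++) {
                if (tiles[i] != 0 && tiles[j] != 0 && tiles[i] > tiles[j]) {
                    inv++;
                }
            }
        }
        return inv;
    }

    static boolean isSolvable(int[] tiles) {
        int blankIndex = 0;
        for (int i = 0; i < tiles.length; i++) {
            if (tiles[i] == 0) blankIndex = i;
        }
        // Row of the blank counting from the bottom, 1-based (last row = 1).
        int blankRowFromBottom = 4 - blankIndex / 4;
        int inv = inversions(tiles);
        if (blankRowFromBottom % 2 == 0) {
            return inv % 2 == 1; // blank on even row: inversions must be odd
        } else {
            return inv % 2 == 0; // blank on odd row: inversions must be even
        }
    }

    public static void main(String[] args) {
        // The solved board: zero inversions, blank on the last (odd) row.
        int[] solved = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,0};
        System.out.println(isSolvable(solved)); // true
        // Swapping two adjacent tiles flips the inversion parity.
        int[] swapped = {2,1,3,4,5,6,7,8,9,10,11,12,13,14,15,0};
        System.out.println(isSolvable(swapped)); // false
    }
}
```

Note that swapping any two tiles of a solvable board changes the inversion count by an odd amount, which is why the swapped board above becomes unsolvable.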

Shortest Solution

This interactive webpage applies two methods to solve the 15-puzzle: A* and Iterative Deepening A* (IDA*). You can play around with the puzzle on the left or click the shuffle button to generate 10 random moves. The other two buttons will try to solve the puzzle using A* and IDA* under a 1-minute threshold: if the solving time exceeds 1 minute, the program stops and returns no result. This limitation exists because the solving engine runs on the client side (i.e. in your favorite web browser), so the 1-minute threshold ensures your browser does not hang and you can still browse other pages while the solver is running.

Another thing to note is that the total runtime of both algorithms is subject to the activity of your web browser; it might run slower if there are more active tabs or interactions in the browser. The runtime is also slowed by the fact that the code was implemented asynchronously so that it would not block the main thread (or else you wouldn't be able to click on other links or tabs). As a result, a lot of tweaks to the original code, which includes a complex recursion and some loops, were needed to make it work in the browser.

Detailed explanations with the original Java implementations and performance analysis are included in the next tabs.

A* is an informed search algorithm, or best-first search. It uses a heuristic function to estimate the cost of the cheapest path and to determine the next promising direction. If the heuristic function is admissible, which means it never overestimates the actual cost to reach the goal, A* is guaranteed to return the shortest path from start to goal. Mathematically, A* selects the path that minimizes

f(n) = g(n) + h(n)

where n is the next node on the path, g(n) is the cost of the path from start to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to goal.
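For the 15-puzzle, the h(n) used throughout this page is the Manhattan distance, computed by the `manhattan()` method called in the code below. Here is a minimal sketch of that heuristic, assuming the board is stored as a flat array of 16 ints with 0 for the blank (this is my illustrative representation, not necessarily the original `Board` class's):

```java
// Hedged sketch of the Manhattan-distance heuristic h(n).
// Assumes tile value v belongs at flat index v - 1 in the goal layout.
public class Manhattan {
    static int manhattan(int[] tiles) {
        int dist = 0;
        for (int i = 0; i < tiles.length; i++) {
            int v = tiles[i];
            if (v == 0) continue;      // the blank does not count
            int goal = v - 1;          // goal index of tile v
            dist += Math.abs(i / 4 - goal / 4)    // row distance
                  + Math.abs(i % 4 - goal % 4);   // column distance
        }
        return dist;
    }

    public static void main(String[] args) {
        int[] solved = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,0};
        System.out.println(manhattan(solved)); // 0
        // Sliding tile 12 down into the blank displaces it by one square.
        int[] oneMove = {1,2,3,4,5,6,7,8,9,10,11,0,13,14,15,12};
        System.out.println(manhattan(oneMove)); // 1
    }
}
```

Manhattan distance is admissible because every slide moves exactly one tile exactly one square, so no tile can reach its goal in fewer moves than its distance.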

Complexity

Space and time complexities of A* depend on the heuristic function. If the heuristic gives no useful information (e.g. when it is identically zero), A* behaves just like BFS. A good heuristic prunes away many nodes that are guaranteed not to yield an optimal solution. The most obvious example of such a node is one that is exactly the previous node, i.e. a move straight backward.

In the worst case, the time and space complexities of A* are exponential, just like BFS. With a good heuristic function, its space and time complexity can become polynomial.

Optimizations

In my Java implementation, I used a min-heap priority queue to keep track of the promising candidates, so that the candidate with the lowest heuristic score is always evaluated next. Candidates with the same heuristic score are sorted in descending order by the number of moves they have already made. This secondary prioritization improves performance on more complex problems in which many nodes share the same heuristic score. Besides that, I implemented a HashMap (i.e. a dictionary) to keep track of the nodes that have already been evaluated, so that identical nodes are not put into the priority queue unless they have a lower score. This reduces the runtime significantly because duplicate nodes are cut off. Furthermore, because a HashMap can look up an item in O(1) time, it is much faster than the priority queue, where checking whether an item is in the queue takes O(n).
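The two-level ordering described above can be expressed in a `compareTo` method. The following is a simplified sketch of such a `State` ordering; the field names (`moves`, `estimate`, `cost`) are my assumptions and may differ from the original class:

```java
import java.util.PriorityQueue;

// Hypothetical sketch of the State ordering described above:
// primary key f = g + h ascending, ties broken by moves descending.
public class State implements Comparable<State> {
    final int moves;    // g(n): moves made so far
    final int estimate; // h(n): Manhattan distance of the board
    final int cost;     // f(n) = g(n) + h(n)

    State(int moves, int estimate) {
        this.moves = moves;
        this.estimate = estimate;
        this.cost = moves + estimate;
    }

    @Override
    public int compareTo(State other) {
        if (this.cost != other.cost) {
            return Integer.compare(this.cost, other.cost); // lower f first
        }
        // Tie-break: prefer the state that has already made more moves,
        // i.e. the one whose heuristic estimate to the goal is smaller.
        return Integer.compare(other.moves, this.moves);
    }

    public static void main(String[] args) {
        PriorityQueue<State> open = new PriorityQueue<>();
        open.add(new State(2, 8)); // f = 10
        open.add(new State(6, 4)); // f = 10, deeper: wins the tie
        open.add(new State(1, 8)); // f = 9: polled first overall
        System.out.println(open.poll().cost);  // 9
        System.out.println(open.poll().moves); // 6
    }
}
```

Preferring the deeper state on ties tends to drive the search toward the goal instead of re-expanding shallow nodes with identical f-scores.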

Java Implementation

private void solveAStar() {
    // open: min-heap of candidates ordered by f = moves + Manhattan distance.
    PriorityQueue<State> open = new PriorityQueue<>();
    // closed: best known cost per visited state, for O(1) duplicate checks.
    HashMap<State, Integer> closed = new HashMap<>();
    // solutionState initially holds the start state.
    open.add(this.solutionState);
    closed.put(this.solutionState, this.solutionState.cost);
    while (!open.isEmpty()) {
        State q = open.poll();
        if (q.board.manhattan() == 0) {
            // Manhattan distance 0 means the board is solved: stop search.
            this.solutionState = q;
            this.minMoves = q.moves;
            break;
        }
        PriorityQueue<Board> neighbors = q.board.neighbors();
        while (!neighbors.isEmpty()) {
            State state = new State(neighbors.poll(), q.moves + 1, q);

            if (!closed.containsKey(state)) {
                // Unseen state: enqueue it and record its cost.
                open.add(state);
                closed.put(state, state.cost);
            } else if (closed.get(state) > state.cost) {
                // Seen before, but this path is cheaper: re-enqueue it.
                open.add(state);
                closed.replace(state, state.cost);
            }
        }
    }
}

Iterative Deepening A* can be viewed as a combination of BFS and DFS. It performs a DFS and cuts off a branch when that branch's cost f(n) exceeds a threshold, which is initially the heuristic cost of the root. If it cannot find the goal, the threshold is increased strategically to the minimum cost that exceeded the current threshold. Similar to A*, IDA* is guaranteed to find the optimal solution if the heuristic function is admissible, because the threshold will then never overestimate the optimal cost.

Complexity

In contrast to A*, IDA* only remembers the nodes on its current path. As a result, IDA* requires only a linear amount of memory, O(d), where d is the maximum depth of the search tree. Regarding time complexity, IDA* works similarly to a brute-force tree search, with a smaller constant factor. Because IDA* does not utilize dynamic programming (i.e. it does not save intermediate results), it ends up checking the same nodes many times.

Optimizations

My implementation of IDA* also takes advantage of the optimizations already discussed for A*.

Java Implementation

private void solveIDAStar() {
    // Initial threshold: the heuristic cost of the start state.
    int bound = solutionState.cost;
    Stack<State> path = new Stack<>();
    // pathRef mirrors the stack for O(1) membership checks.
    HashSet<State> pathRef = new HashSet<>();
    path.push(solutionState);
    pathRef.add(solutionState);
    // Deepen the threshold until the search reaches the goal.
    while (solutionState.board.manhattan() != 0) {
        bound = searchIDAStar(path, pathRef, solutionState.cost, bound);
    }
    minMoves = solutionState.moves;
}

private int searchIDAStar(Stack<State> path, HashSet<State> pathRef, int f, int bound) {
    State currState = path.lastElement();
    if (f > bound) {
        // Cost exceeds the threshold: cut off and report f, so the caller
        // can pick the minimum cost exceeding the current bound.
        return f;
    }
    if (currState.board.manhattan() == 0) {
        solutionState = currState;
        return -Math.abs(f); // negative value signals FOUND SOLUTION
    }
    int min = Integer.MAX_VALUE;
    PriorityQueue<Board> neighbors = currState.board.neighbors();
    while (!neighbors.isEmpty()) {
        State state = new State(neighbors.poll(), currState.moves + 1, currState);
        // Skip states already on the current path to avoid cycles.
        if (!pathRef.contains(state)) {
            path.push(state);
            pathRef.add(state);
            int t = searchIDAStar(path, pathRef, state.cost, bound);
            if (t < 0) {
                return -Math.abs(t); // propagate FOUND SOLUTION upward
            }
            if (t < min) {
                min = t; // track the smallest cost that exceeded the bound
            }
            path.pop();
            pathRef.remove(state);
        }
    }
    return min;
}

In most cases, A* is faster than IDA* because it always expands the most promising candidate during its search, while IDA* checks many candidates repeatedly. However, A* uses more memory than IDA* because it needs to store all the promising candidates for future evaluation. IDA*, on the other hand, takes at most O(d) space, where d is the maximum depth of the search tree.

The table below shows the performance of A* and IDA*. Note that A* performs better on easy puzzles but is unable to finish the harder ones because of its huge memory usage.

#  | Puzzle                                                           | Estimated moves | Actual moves | A* time               | IDA* time
1  | {{2, 7, 4, 3}, {1, 12, 8, 6}, {0, 14, 15, 9}, {13, 5, 11, 10}}   | 24 | 40 | 413ms                 | 587ms
2  | {{4, 6, 3, 0}, {1, 2, 7, 12}, {8, 5, 9, 14}, {10, 11, 13, 15}}   | 25 | 43 | 2s 617ms              | 1s 667ms
3  | {{2, 0, 12, 5}, {13, 7, 1, 3}, {14, 11, 8, 10}, {4, 6, 9, 15}}   | 35 | 43 | 16ms                  | 32ms
4  | {{1, 4, 9, 3}, {6, 8, 11, 15}, {12, 2, 14, 10}, {13, 0, 7, 5}}   | 30 | 46 | 1s 321ms              | 5s 683ms
5  | {{6, 8, 0, 4}, {13, 9, 10, 14}, {15, 2, 12, 5}, {1, 3, 7, 11}}   | 36 | 50 | 3s 26ms               | 4s 732ms
6  | {{2, 3, 1, 9}, {5, 4, 7, 11}, {10, 0, 14, 15}, {12, 8, 6, 13}}   | 33 | 51 | Memory Limit Exceeded | 1m 49s 198ms
7  | {{9, 15, 12, 7}, {6, 1, 3, 4}, {5, 13, 0, 14}, {8, 11, 10, 2}}   | 36 | 54 | Memory Limit Exceeded | 2m 43s 456ms
8  | {{2, 14, 10, 7}, {11, 3, 0, 13}, {5, 9, 8, 6}, {12, 4, 15, 1}}   | 41 | 55 | 33s 504ms             | 1m 23s 90ms
9  | {{1, 5, 0, 13}, {10, 3, 2, 11}, {15, 12, 14, 7}, {8, 4, 6, 9}}   | 42 | 56 | Memory Limit Exceeded | 58s 211ms
10 | {{15, 1, 10, 13}, {11, 7, 5, 6}, {14, 3, 0, 12}, {4, 2, 8, 9}}   | 44 | 62 | Memory Limit Exceeded | 5m 34s 698ms