Writing Tree traversal for Backtracking

Backtracking as orderly enumeration

Backtracking searches all possible solutions in the solution space in a systematic (orderly) way. It provides an orderly enumeration for a general combination problem with constraints, stated in the following format:

From a set S, choose a combination C.

Let this combination C satisfy g(C);
OR
Let this combination C, satisfying g(C), achieve the best (max/min) value of a target function F(C).
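
For instance, a small 0/1-knapsack instance (a made-up one, just for illustration) fits this format directly: S is the set of items, g(C) is the capacity constraint, and F(C) is the total value to maximize.

# A hypothetical 0/1-knapsack instance phrased in the format above.
items = [("a", 2, 3), ("b", 3, 4), ("c", 4, 5)]   # (name, weight, value)
capacity = 5

def g(C):  # the constraint the chosen combination must satisfy
    return sum(weight for _, weight, _ in C) <= capacity

def F(C):  # the target function to maximize
    return sum(value for _, _, value in C)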

To make the enumeration orderly, we can first construct the complete solution space from an orderly selection:

Let S = {s_1, s_2, s_3, ..., s_N}

Compose a combination C by:

Choosing c_1 from S -> choosing c_2 from S - {c_1} -> choosing c_3 from S - {c_1, c_2} -> ...
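
As a minimal sketch of this orderly selection (my own illustration, using the usual convention that each next element is only chosen from the elements after the previous one, so that no combination is enumerated twice):

# Enumerate every combination of S by the orderly selection described above.
def enumerate_combinations(S):
    S = sorted(S)
    results = []

    def choose(start, C):
        results.append(list(C))          # every partial selection is itself a combination
        for i in range(start, len(S)):   # choose the next element in order
            C.append(S[i])
            choose(i + 1, C)
            C.pop()                      # backtrack: undo the choice of S[i]

    choose(0, [])
    return results

print(enumerate_combinations({1, 2, 3}))
# [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]]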

This is useful when the solution space is only implicitly given through \(S\), as in subset-sum problems (e.g., Knapsack). But sometimes \(S\) itself is not given explicitly, while the backtracking structure is already explicit:

  • Board games (e.g., crosswords, the N-queens problem), where the nodes are board positions and the edges are transitions from a cell to its adjacent cells.
  • Multi-player turn-based games (e.g., min-max decision-making), where the nodes are game states and the edges are the choices/strategies taken by the player at the source node.
  • Sequential conditional selection (e.g., resource allocation problems such as PE151), where the nodes are the current states of the resources and the edges are the decisions to make at each step.

The DFS traversal structure of backtracking

A backtracking structure is a path traversal in a tree/graph whose nodes are the elements of \(S\). Every candidate solution is a full path. For a graph representation, the fewer edges in the graph, the lower the complexity of the traversal toward the final solution. The extreme trimmed-down form of a graph is a tree, where each leaf node marks a full path from the root, and each such path should be considered and checked.

So even though backtracking is a search method that considers the entire search space, it can be more efficient than brute-force/full enumeration because we can prune the subtree of a node once we decide that no further exploration from that partial solution is necessary.
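
As a small illustration of pruning (a subset-sum formulation of my own, assuming all numbers are positive), the search below abandons a whole subtree as soon as the partial sum overshoots the target:

def has_subset_sum(nums, target):
    nums = sorted(nums)

    def search(start, remaining):
        if remaining == 0:
            return True                  # a full path (combination) hit the target
        for i in range(start, len(nums)):
            if nums[i] > remaining:      # prune: this branch and all later ones overshoot
                return False
            if search(i + 1, remaining - nums[i]):
                return True
        return False

    return search(0, target)

print(has_subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (e.g. 4 + 5)
print(has_subset_sum([3, 34, 4, 12, 5, 2], 30))  # False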

Temporally Independent DFS search paths

Suppose we have already been given a tree of nodes and edges representing a problem. Backtracking is usually done by using depth-first search (DFS) as a subroutine.

From a node n, each search branch is a path down from that node. The search branches are temporally independent, as long as they are not explored in parallel. So it is often convenient to exploit this temporal locality by assigning temporary values to the visited edges and nodes. For example, the crossword game requires that a word be composed of non-overlapping cells on the board.

[Figure: Tree traversal and backtracking, temporally independent search paths]
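
Here is a minimal sketch of this marking pattern on a word-search-style grid (my own example, not the exact crossword setup): a cell is temporarily overwritten while it lies on the current path and restored when the search backtracks, so the cells of one candidate word never overlap.

def exists(board, word):
    rows, cols = len(board), len(board[0])

    def dfs(r, c, i):
        if i == len(word):
            return True
        if not (0 <= r < rows and 0 <= c < cols) or board[r][c] != word[i]:
            return False
        board[r][c] = "#"                # mark: this cell is in use by the current path
        found = any(dfs(r + dr, c + dc, i + 1)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        board[r][c] = word[i]            # unmark when backtracking out of this cell
        return found

    return any(dfs(r, c, 0) for r in range(rows) for c in range(cols))

board = [list("ABCE"), list("SFCS"), list("ADEE")]
print(exists(board, "ABCCED"))   # True
print(exists(board, "ABCB"))     # False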

Object/Procedure-oriented DFS

The search starting from node n is done by calling f(n). It is often confusing what this function is actually doing. Basically, there are two ways to phrase it (a sketch contrasting the two follows below):

  • The “object-oriented” way: We continue the search from n down the different branches, and return whether any of these searches from n succeeds.

  • The “procedure-oriented” way: We walk from n and explore all the paths from this n, with possible early stops.
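
As a rough illustration of the two phrasings (the Node class and all the helper names here are my own), consider the question “does any root-to-leaf path sum to a given target?”:

# A toy search tree: each Node has a value and a list of neighbours (children).
class Node:
    def __init__(self, val, neighbours=None):
        self.val = val
        self.neighbours = neighbours or []

# "Object-oriented" phrasing: found(n, ...) answers a question about the tree
# rooted at n by aggregating the answers from its sub-trees.
def found(n, remaining):
    remaining -= n.val
    if not n.neighbours:                 # a leaf closes a full path
        return remaining == 0
    return any(found(nbr, remaining) for nbr in n.neighbours)

# "Procedure-oriented" phrasing: walk every path from n, collecting the
# successful ones as a side effect.
def walk(n, path, remaining, results):
    path.append(n.val)
    remaining -= n.val
    if not n.neighbours and remaining == 0:
        results.append(list(path))
    for nbr in n.neighbours:
        walk(nbr, path, remaining, results)
    path.pop()                           # undo the temporary change (backtrack)

root = Node(1, [Node(2, [Node(4)]), Node(3, [Node(4), Node(5)])])
print(found(root, 7))        # True  (the path 1 -> 2 -> 4)
results = []
walk(root, [], 7, results)
print(results)               # [[1, 2, 4]]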

The “object-oriented” way better describes the structure of the “root-tree/sub-trees and aggregation” search, which leads to the following locality feature.

Important Locality feature of DFS f(n)

[Figure: Tree traversal and backtracking, aggregation over sub-trees]

By writing f(n, ...) specifically for the tree rooted at node n, we simultaneously imply an important feature of f(n, ...):

  • f(n, ...) does not know anything about the subtrees, beyond the fact that their root nodes sit at the other end of the edges from n.

That is:

  • f(n, ...) does not access the properties of the subtree root nodes nbr.

General Writing Style of f(n, …)

Incorporating the above rule leads to the general writing style for f(n, ...):

def f(n, state, params):
  if not continue_search(n):  # no more branches from this node: the node is None, some capacity limit is reached, the state meets the termination condition, etc.
    return ...

  res = []
  for nbr in n.neighbours:
    if is_valid(nbr):
      new_state = state_change(nbr, state)   # state change based on the choice of nbr
      new_params = param_change(params)      # usually state-unaware, e.g. level-based
      res.append(f(nbr, new_state, new_params))

  return aggregate(res)

The aggregate(res) above is a general way to get the solution for node n from its subtrees. But if we only need to find one valid solution (“any”) instead of returning all applicable ones, we can add an early break/return whenever some f(nbr, ...) succeeds:

def f(n, ...):
  ...
  for nbr in n.neighbours:
    if is_valid(nbr):
      ...
      if f(nbr, ...):
        return True

  return False

To remind ourselves at all times of the locality of f(n, ...), we can name it after the target function, like “found”, “matched”, “valid”, or after the return type, like “max length” or “min cost”.

The end condition for f(n, …)

Different problems may require different writing styles for the end-condition section of f(n, ...).

For the word matching problem, the end condition is:

def match(n, matched_word, remain_to_match, ...):
  if len(remain_to_match) == 0:
    return matched_word

For the N-queens problem, the end condition is:

def valid(pos, current_placements, ...):
  if len(current_placements) == N:
    return True
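
Putting this end condition together with the early-return (“any”) style from above, a full N-queens search could look like the following sketch (the placements-as-columns encoding and the helper names are my own):

def solve_n_queens(N):
    placements = []                      # placements[r] is the column of the queen in row r

    def safe(col):
        row = len(placements)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(placements))

    def place():
        if len(placements) == N:         # end condition: all N queens are placed
            return True
        for col in range(N):
            if safe(col):
                placements.append(col)   # temporary state change for this branch
                if place():              # early return on the first full solution
                    return True
                placements.pop()         # backtrack
        return False

    return placements if place() else None

print(solve_n_queens(4))   # [1, 3, 0, 2], one valid placement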

For a binary tree max-depth problem, the end condition is:

def maxDepth(root):
  if root is None:
    return 0

Note for the binary tree:

  • The end condition for f(n) when searching a binary tree is “when n is None”, but NOT “when n is a leaf node.”
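
To see why, here is a complete maxDepth sketch assuming a minimal TreeNode class of my own: recursing into a missing child cleanly returns 0 through the “root is None” condition, so a node with only one child needs no special handling.

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def maxDepth(root):
    if root is None:
        return 0
    return 1 + max(maxDepth(root.left), maxDepth(root.right))

# Node 3 has only a left child; its None right child simply contributes 0.
root = TreeNode(1, TreeNode(2), TreeNode(3, left=TreeNode(4)))
print(maxDepth(root))   # 3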