Depth-Limited Search

Depth-first search will never find a goal if it starts down a path of infinite length. In general, then, depth-first search is not guaranteed to find a solution, so it is not complete.

This problem is eliminated by limiting the depth of the search to some value $l$. However, this introduces another way for depth-first search to miss the goal: if the goal lies deeper than $l$, it will not be found.

How would you make an intelligent guess for $l$ for a given search problem?

Its time complexity is $O(b^l)$ and its space complexity is $O(bl)$. What would the space complexity be of the backtracking version of this search?

Regular depth-first search is a special case, for which $l=\infty$.

Iterative-Deepening Search

If a depth-limited depth-first search limited to depth $l$ does not find the goal, try again with the limit set to $l+1$. Continue until the goal is found.

Make depth-limited depth-first search complete by repeatedly applying it with greater values for the depth limit $l$.

This feels like breadth-first search, in that each level is fully explored before the search extends to the next level. But, unlike breadth-first search, after one level is fully explored, all nodes already expanded are thrown away and the search starts over with a clear memory.

Seems very wasteful! Is it really? How many nodes are generated at the final level $d$?

$$O(b^d)$$


How many nodes are expanded in the tree on your way to the final level, down to depth $d-1$?

$$b + b^2 + \cdots + b^{d-1} = O(b^{d-1})$$

How much of a waste is it to throw away those $O(b^{d-1})$ nodes? Say $b=10$ and $d=5$. We are throwing away on the order of $10^4 = 10,000$ nodes, regenerating them, and then generating $b^d = 10^5 = 100,000$ new nodes. Regenerating those 10,000 nodes seems trivial compared to making the 100,000 new ones.
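As a quick sanity check of this arithmetic, we can count the nodes directly. The names `interior` and `final` below are just illustrative, not from the textbook.

```python
# Node counts for the b = 10, d = 5 example.
b, d = 10, 5

# Nodes at depths 1 through d-1, the ones iterative deepening throws away
# and regenerates: b + b^2 + ... + b^(d-1).
interior = sum(b**k for k in range(1, d))

# Nodes generated at the final depth d.
final = b**d

print(interior)          # 11110, dominated by b^(d-1) = 10000
print(final)             # 100000
print(final / interior)  # about 9: the last level dwarfs all earlier levels combined
```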

Our textbook authors say:

"In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known."

Watch this short video by Richard Korf, one of the developers of iterative deepening.

Recursive definition

Let's discuss the recursive implementation in Figure 3.17 (with Figure 3.18). Rather than explicitly storing expanded nodes in a Python dictionary named expanded, we can rely on the local variables implicitly stored in the function call stack.

First, define the recursive depth-limited search function that generates the children of a state and calls itself recursively on each of the child states. Let's define it as a mix of Python and English. Let take_action_f be a function that generates one new state given a current state and a valid action from that state. Also let actions_f be a function that returns a list of valid actions from a given state. We will see examples of these in the next lecture notes.

In [ ]:
def depth_limited_search(state, goal_state, actions_f, take_action_f, depth_limit):
    # If we have reached the goal, exit, returning an empty solution path.
    If state == goal_state, then
        return []
    # If we have reached the depth limit, return the string 'cutoff'.
    If depth_limit is 0, then
        Return the string 'cutoff' to signal that the depth limit was reached
    cutoff_occurred = False
    # For each possible action from state ...
    For each action in actions_f(state):
        # Apply the action to the current state to get a next state, named child_state
        child_state = take_action_f(state, action)
        # Recursively call this function to continue the search starting from the child_state.
        # Decrease by one the depth_limit for this search.
        result = depth_limited_search(child_state, goal_state, actions_f, take_action_f, depth_limit - 1)
        # If result was 'cutoff', just note that this happened.
        If result is 'cutoff', then
            cutoff_occurred = True
        # If result was not 'failure', search succeeded, so add child_state to front of solution path and
        # return that path.
        else if result is not 'failure' then
            Add child_state to front of partial solution path, in result, returned by depth_limited_search
            return result
    # We reach here only if cutoff or failure occurred.  Return whichever occurred.
    If cutoff_occurred, then
        return 'cutoff'
    else
        return 'failure'
In [ ]:
def iterative_deepening_search(start_state, goal_state, actions_f, take_action_f, max_depth):
    # Conduct multiple searches, starting with smallest depth, then increasing it by 1 each time.
    for depth in range(max_depth):
        # Conduct search from startState
        result = depth_limited_search(start_state, goal_state, actions_f, take_action_f, depth)
        # If result was failure, return 'failure'.
        if result == 'failure':
            return 'failure'
        # Otherwise, if result was not cutoff, it succeeded, so add start_state to solution path and return it.
        if result is not 'cutoff', then
            Add start_state to front of solution path, in result, returned by depth_limited_search       
            return result
    # If we reach here, no solution found within the max_depth limit.
    return 'cutoff'
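The pseudocode above can be turned into runnable Python almost line for line. Here is one way it might look; the three-state chain at the bottom, with its successors dictionary and lambda helpers, is a hypothetical example problem invented just to exercise the functions.

```python
def depth_limited_search(state, goal_state, actions_f, take_action_f, depth_limit):
    if state == goal_state:
        return []                    # goal reached: empty remaining solution path
    if depth_limit == 0:
        return 'cutoff'              # signal that the depth limit was reached
    cutoff_occurred = False
    for action in actions_f(state):
        child_state = take_action_f(state, action)
        result = depth_limited_search(child_state, goal_state,
                                      actions_f, take_action_f, depth_limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result != 'failure':
            return [child_state] + result   # prepend child to the solution path
    return 'cutoff' if cutoff_occurred else 'failure'

def iterative_deepening_search(start_state, goal_state, actions_f, take_action_f, max_depth):
    for depth in range(max_depth):
        result = depth_limited_search(start_state, goal_state,
                                      actions_f, take_action_f, depth)
        if result == 'failure':
            return 'failure'
        if result != 'cutoff':
            return [start_state] + result
    return 'cutoff'

# A tiny hypothetical example: states 'a' -> 'b' -> 'c' in a chain,
# where each action is simply the child state it leads to.
successors = {'a': ['b'], 'b': ['c'], 'c': []}
actions_f = lambda state: successors[state]
take_action_f = lambda state, action: action

print(iterative_deepening_search('a', 'c', actions_f, take_action_f, 5))
# ['a', 'b', 'c']
```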

Bidirectional Search

If, and this is a big if, every action in a search problem has a known inverse action that allows the search to run backwards, then one $O(b^d)$ search can be replaced by two $O(b^{d/2})$ searches: search forward from the start state and backwards from the goal state, either alternately or simultaneously in parallel, stopping when the two search frontiers meet. This also assumes there is a single goal state, or at most a small, known set of goal states.
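To make the idea concrete, here is a minimal sketch of bidirectional breadth-first search on an undirected graph, where every edge is its own inverse, so the same neighbors function serves both directions. The function name, the graph, and the helper structure are assumptions for illustration, not from the textbook.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    # neighbors(state) returns the states reachable in one step; since we
    # assume actions are reversible, it works for both search directions.
    if start == goal:
        return [start]
    parents_f = {start: None}   # how each state was first reached, forward
    parents_b = {goal: None}    # how each state was first reached, backward
    frontier_f = deque([start])
    frontier_b = deque([goal])

    def expand(frontier, parents, other_parents):
        # Expand one full level of one frontier; return a meeting state if found.
        for _ in range(len(frontier)):
            state = frontier.popleft()
            for child in neighbors(state):
                if child not in parents:
                    parents[child] = state
                    if child in other_parents:
                        return child          # the two frontiers intersect here
                    frontier.append(child)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # Reconstruct the path: start -> meet, then meet -> goal.
            path, s = [], meet
            while s is not None:
                path.append(s)
                s = parents_f[s]
            path.reverse()
            s = parents_b[meet]
            while s is not None:
                path.append(s)
                s = parents_b[s]
            return path
    return 'failure'

# A hypothetical chain graph a - b - c - d - e.
graph = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c', 'e'], 'e': ['d']}
print(bidirectional_search('a', 'e', lambda s: graph[s]))
# ['a', 'b', 'c', 'd', 'e']
```

Notice that each side only searches about half the depth before the frontiers meet, which is where the $O(b^{d/2})$ savings comes from.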

Uninformed Search Summary

This table is from page 91 of our textbook. Here $b$ is the branching factor, $d$ is the depth of the shallowest solution, $m$ is the maximum depth of the search tree, and $l$ is the depth limit.

| Criterion | Breadth-First | Depth-First | Depth-Limited | Iterative-Deepening | Bidirectional |
|-----------|---------------|-------------|---------------|---------------------|---------------|
| Complete? | Yes | No | No | Yes | Yes |
| Optimal? | Yes | No | No | Yes | Yes |
| Time | $O(b^d)$ | $O(b^m)$ | $O(b^l)$ | $O(b^d)$ | $O(b^{d/2})$ |
| Space | $O(b^d)$ | $O(bm)$ | $O(bl)$ | $O(bd)$ | $O(b^{d/2})$ |