#!/usr/bin/env python
# coding: utf-8

# # Assignment 2: Iterative-Deepening Search
#
# *Type your name here.*
#
# ## Overview
#
# Implement the iterative-deepening search algorithm as discussed in our lecture notes and as shown in Figures 3.17 and 3.18 in our textbook. Apply it to the 8-puzzle and a second puzzle of your choice.
#
# ## Required Code
#
# In this Jupyter notebook, implement the following functions:
#
# * `iterative_deepening_search(start_state, goal_state, actions_f, take_action_f, max_depth)`
# * `depth_limited_search(start_state, goal_state, actions_f, take_action_f, depth_limit)`
#
# `depth_limited_search` is called by `iterative_deepening_search` with `depth_limit`s of $0, 1, \ldots,$ `max_depth`. Both must return either the solution path as a list of states, or one of the strings `'cutoff'` or `'failure'`. `'failure'` signifies that all states were searched and the goal was not found.
#
# Each receives the arguments
#
# * the starting state,
# * the goal state,
# * a function `actions_f` that is given a state and returns a list of valid actions from that state,
# * a function `take_action_f` that is given a state and an action and returns the new state that results from applying the action to the state,
# * either a `depth_limit` for `depth_limited_search`, or `max_depth` for `iterative_deepening_search`.
#
# Use your solution to solve the 8-puzzle.
# Implement the state of the puzzle as a list of integers. 0 represents the empty position.
#
# Required functions for the 8-puzzle are the following.
#
# * `find_blank_8p(state)`: return the row and column index for the location of the blank (the 0 value).
# * `actions_f_8p(state)`: return a list of up to four valid actions that can be applied in `state`. Return them in the order `left`, `right`, `up`, `down`, though only if each one is a valid action.
# * `take_action_f_8p(state, action)`: return the state that results from applying `action` in `state`.
# * `print_state_8p(state)`: prints the state as a 3 x 3 table, as shown in the lecture notes, or a bit fancier with, for example, '-' and '|' characters to separate tiles. This function is useful to call when debugging your search algorithms.
# * `print_path_8p(start_state, goal_state, path)`: print a solution path in a readable form by calling `print_state_8p`.
#
# Also, implement a second search problem of your choice. Apply your `iterative_deepening_search` function to it.
#
# Here are some example results.

# In[2]:

start_state = [1, 0, 3, 4, 2, 5, 6, 7, 8]

# In[3]:

print_state_8p(start_state)

# In[4]:

find_blank_8p(start_state)

# In[5]:

actions_f_8p(start_state)

# In[6]:

take_action_f_8p(start_state, 'down')

# In[7]:

print_state_8p(take_action_f_8p(start_state, 'down'))

# In[8]:

goal_state = take_action_f_8p(start_state, 'down')

# In[9]:

new_state = take_action_f_8p(start_state, 'down')

# In[10]:

new_state == goal_state

# In[11]:

start_state

# In[12]:

path = depth_limited_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 3)
path

# Notice that the `depth_limited_search` result is missing the start state. The start state is inserted by `iterative_deepening_search`.
#
# But when we use `iterative_deepening_search` to do the same search, it finds a shorter path!

# In[13]:

path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 3)
path

# Also notice that the successor states are lists, not tuples. This is okay, because the search functions for this assignment do not make use of Python dictionaries.

# In[14]:

start_state = [4, 7, 2, 1, 6, 5, 0, 3, 8]
path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 3)
path

# In[15]:

start_state = [4, 7, 2, 1, 6, 5, 0, 3, 8]
path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 5)
path

# Hmm... maybe we can't reach the goal state from this state. We need a way to randomly generate a valid start state.
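# One possible shape for the three 8-puzzle state functions described above is sketched below. This is a minimal illustration under the list-of-nine-integers representation required by the assignment, not the only correct implementation; the blank's grid position is recovered from its list index, and moves swap the blank with a neighboring tile.

```python
def find_blank_8p(state):
    # Return (row, column) of the 0 value in the 3 x 3 grid.
    i = state.index(0)
    return i // 3, i % 3

def actions_f_8p(state):
    # Valid moves of the blank, in the required left, right, up, down order.
    row, col = find_blank_8p(state)
    actions = []
    if col > 0:
        actions.append('left')
    if col < 2:
        actions.append('right')
    if row > 0:
        actions.append('up')
    if row < 2:
        actions.append('down')
    return actions

def take_action_f_8p(state, action):
    # Return a new state with the blank moved; the argument is not modified.
    row, col = find_blank_8p(state)
    d_row, d_col = {'left': (0, -1), 'right': (0, 1),
                    'up': (-1, 0), 'down': (1, 0)}[action]
    new_state = state.copy()
    i = row * 3 + col
    j = (row + d_row) * 3 + (col + d_col)
    new_state[i], new_state[j] = new_state[j], new_state[i]
    return new_state
```

# On the example state `[1, 0, 3, 4, 2, 5, 6, 7, 8]`, the blank is at (0, 1), the valid actions are `['left', 'right', 'down']`, and applying `'down'` yields `[1, 2, 3, 4, 0, 5, 6, 7, 8]`, matching the example cells below.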
# In[1]:

import random

# In[5]:

random.choice(['left', 'right', 'down', 'up'])

# In[18]:

def random_start_state(goal_state, actions_f, take_action_f, n_steps):
    state = goal_state
    for i in range(n_steps):
        state = take_action_f(state, random.choice(actions_f(state)))
    return state

# In[19]:

goal_state = [1, 2, 3, 4, 0, 5, 6, 7, 8]
random_start_state(goal_state, actions_f_8p, take_action_f_8p, 10)

# In[20]:

start_state = random_start_state(goal_state, actions_f_8p, take_action_f_8p, 50)
start_state

# In[21]:

path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 20)
path

# Let's print out the state sequence in a readable form.

# In[22]:

for p in path:
    print_state_8p(p)
    print()

# Here is one way to format the search problem and solution in a readable form.

# In[23]:

print_path_8p(start_state, goal_state, path)

# ## Grading and Check-in
#
# Download [A2grader.tar](A2grader.tar) and extract A2grader.py from it before running the next code cell.

# In[2]:

get_ipython().run_line_magic('run', '-i A2grader.py')

# Check in your notebook for Assignment 2 on our [Canvas site](https://colostate.instructure.com/courses/109411).

# ## Extra Credit
#
# For extra credit, apply your solution to the grid example in Assignment 1 with the addition of at least one horizontal and at least one vertical barrier, all at least three positions long. Demonstrate the solutions found for four different pairs of start and goal states.
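# To make the control flow of the two required search functions concrete, here is a minimal sketch of how they could fit together under the conventions stated above: `depth_limited_search` returns the path *without* the start state (or `'cutoff'`/`'failure'`), and `iterative_deepening_search` tries depth limits 0 through `max_depth` and prepends the start state on success. This is one possible shape, not the required solution; your implementation may differ.

```python
def depth_limited_search(state, goal_state, actions_f, take_action_f, depth_limit):
    # Recursive depth-limited search; the returned path omits the start state.
    if state == goal_state:
        return []
    if depth_limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for action in actions_f(state):
        child = take_action_f(state, action)
        result = depth_limited_search(child, goal_state, actions_f,
                                      take_action_f, depth_limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result != 'failure':
            result.insert(0, child)   # build the path on the way back up
            return result
    return 'cutoff' if cutoff_occurred else 'failure'

def iterative_deepening_search(start_state, goal_state, actions_f, take_action_f, max_depth):
    # Try depth limits 0, 1, ..., max_depth; prepend the start state on success.
    for depth in range(max_depth + 1):
        result = depth_limited_search(start_state, goal_state, actions_f,
                                      take_action_f, depth)
        if result == 'failure':
            return 'failure'
        if result != 'cutoff':
            result.insert(0, start_state)
            return result
    return 'cutoff'
```

# Because the depth limit grows one step at a time, the first limit at which the goal is found gives a shortest path, which is why `iterative_deepening_search` can find a shorter path than a single deep `depth_limited_search` call.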