#!/usr/bin/env python
# coding: utf-8
#
#
# # A3: A\*, IDS, and Effective Branching Factor
# For this assignment, implement the Recursive Best-First Search
# version of the A\* algorithm given in class. Name this function `Astar_search`. Also include in this notebook your `iterative_deepening_search` function.
# Define a new function named `effective_branching_factor` that returns an estimate of the effective
# branching factor for a search algorithm applied to a search problem.
#
# So, the required functions are
#
# - `Astar_search(start_state, actions_f, take_action_f, goal_test_f, h_f)`
# - `iterative_deepening_search(start_state, goal_state, actions_f, take_action_f, max_depth)`
# - `effective_branching_factor(n_nodes, depth, precision=0.01)`, returns the effective branching factor, given the number of nodes expanded and depth reached during a search.
#
# Apply `iterative_deepening_search` and `Astar_search` to several eight-tile sliding puzzle
# problems. For this you must include your implementations of the following functions from Assignment 2, renamed here without the `_f` suffix, just for simplicity.
#
# * `actions_8p(state)`: returns a list of up to four valid actions that can be applied in `state`. With each action include a step cost of 1. For example, if all four actions are possible from this state, return `[('left', 1), ('right', 1), ('up', 1), ('down', 1)]`.
# * `take_action_8p(state, action)`: returns the state that results from applying `action` in `state` and the cost of that one step,
#
# plus the following function for the eight-tile puzzle:
#
# * `goal_test_8p(state, goal)`
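# One possible sketch of these eight-puzzle functions, assuming (as in the example
# table below) that a state is a list of nine integers in row-major order with 0
# marking the blank; `find_blank_8p` is a hypothetical helper, not a required function:

```python
def find_blank_8p(state):
    # Index of the blank (0), converted to (row, column) on the 3x3 grid.
    i = state.index(0)
    return i // 3, i % 3

def actions_8p(state):
    # Each valid move of the blank has a step cost of 1.
    r, c = find_blank_8p(state)
    actions = []
    if c > 0: actions.append(('left', 1))
    if c < 2: actions.append(('right', 1))
    if r > 0: actions.append(('up', 1))
    if r < 2: actions.append(('down', 1))
    return actions

def take_action_8p(state, action):
    # Returns (new_state, step_cost) without modifying the given state.
    direction, cost = action
    r, c = find_blank_8p(state)
    dr, dc = {'left': (0, -1), 'right': (0, 1),
              'up': (-1, 0), 'down': (1, 0)}[direction]
    i, j = r * 3 + c, (r + dr) * 3 + (c + dc)
    new_state = state.copy()
    new_state[i], new_state[j] = new_state[j], new_state[i]
    return new_state, cost

def goal_test_8p(state, goal):
    return state == goal
```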
#
# Compare their results by displaying the solution path depth, the number of
# nodes generated, and the effective branching factor, and discuss the results.
# Do this by defining the following function, which prints the table shown in the example below.
#
# - `run_experiment(goal_state_1, goal_state_2, goal_state_3, [h1, h2, h3])`
#
# Define this function so it takes any number of $h$ functions in the list that is the fourth argument.
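# `run_experiment` itself must run all four searches, but its table layout alone
# can be sketched like this; `format_experiment_table` is a hypothetical helper,
# not one of the required functions:

```python
def format_experiment_table(goal_states, rows):
    # rows: list of (algorithm_name, triples), where triples holds one
    # (depth, nodes, ebf) tuple per goal state.
    lines = [' ' * 10 + ''.join(f'{str(g):>30}' for g in goal_states)]
    header = f'{"Algorithm":<10}'
    header += f'{"Depth":>8}{"Nodes":>12}{"EBF":>10}' * len(goal_states)
    lines.append(header)
    for name, triples in rows:
        line = f'{name:<10}'
        for depth, nodes, ebf in triples:
            line += f'{depth:>8}{nodes:>12}{ebf:>10.3f}'
        lines.append(line)
    return '\n'.join(lines)
```

# `run_experiment` could build such rows from its searches and print the result,
# or use a `pandas.DataFrame` instead, as suggested below.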
# ## Heuristic Functions
#
# For `Astar_search` use the following two heuristic functions, plus one more of your own design, for a total of three heuristic functions.
#
# * `h1_8p(state, goal)`: $h(state, goal) = 0$, for all states $state$ and all goal states $goal$,
# * `h2_8p(state, goal)`: $h(state, goal) = m$, where $m$ is the Manhattan distance of the blank from its position in $goal$,
# * `h3_8p(state, goal)`: $h(state, goal) = ?$, that you define. It must be admissible, and not constant for all states.
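# A possible sketch of `h2_8p`, assuming a state is a list of nine integers in
# row-major order with 0 marking the blank:

```python
def h2_8p(state, goal):
    # Manhattan distance of the blank from its goal position:
    # |row difference| + |column difference| on the 3x3 grid.
    i, j = state.index(0), goal.index(0)
    return abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
```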
# ## Comparison
# Apply all four algorithms (`iterative_deepening_search` plus `Astar_search` with the three heuristic
# functions) to three eight-tile puzzle problems with start state
#
# $$
# \begin{array}{ccc}
# 1 & 2 & 3\\
# 4 & 0 & 5\\
# 6 & 7 & 8
# \end{array}
# $$
#
# and these three goal states.
#
# $$
# \begin{array}{ccccccccccc}
# 1 & 2 & 3 & ~~~~ & 1 & 2 & 3 & ~~~~ & 1 & 0 & 3\\
# 4 & 0 & 5 & & 4 & 5 & 8 & & 4 & 5 & 8\\
# 6 & 7 & 8 & & 6 & 0 & 7 & & 2 & 6 & 7
# \end{array}
# $$
# Print a well-formatted table like the following. Try to match this
# format. If you have time, you might consider learning a bit about the `DataFrame` class in the `pandas` package. When displayed in Jupyter notebooks, `pandas.DataFrame` objects are nicely formatted as HTML.
#
#                [1, 2, 3, 4, 0, 5, 6, 7, 8]    [1, 2, 3, 4, 5, 8, 6, 0, 7]    [1, 0, 3, 4, 5, 8, 2, 6, 7]
#     Algorithm    Depth   Nodes    EBF           Depth   Nodes    EBF           Depth   Nodes    EBF
#     IDS              0       0  0.000               3      43  3.086              11  225850  2.954
#     A*h1             0       0  0.000               3     116  4.488              11  643246  3.263
#     A*h2             0       0  0.000               3      51  3.297              11  100046  2.733
#
# Of course you will have one more line for `h3`.
# First, here is some example output for the `effective_branching_factor` function. During execution, these examples show debugging output: the low and high values passed into a recursive helper function.
# In[2]:
effective_branching_factor(10, 3)
# The smallest argument values should be a depth of 0, and 1 node.
# In[3]:
effective_branching_factor(1, 0)
# In[4]:
effective_branching_factor(2, 1)
# In[5]:
effective_branching_factor(2, 1, precision=0.000001)
# In[6]:
effective_branching_factor(200000, 5)
# In[7]:
effective_branching_factor(200000, 50)
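# One possible sketch of `effective_branching_factor`, assuming the convention that
# `n_nodes` counts every node in a uniform tree of the given depth, so that
# n_nodes = 1 + b + b^2 + ... + b^depth. This version uses an iterative bisection
# on b; the class version may use a recursive helper and a different counting
# convention, so treat this only as a sketch.

```python
def effective_branching_factor(n_nodes, depth, precision=0.01):
    # Bisection search for the b whose uniform tree of this depth has
    # n_nodes nodes, under the counting convention assumed above.

    def n_for(b, depth):
        # Nodes in a uniform tree: 1 + b + b^2 + ... + b^depth.
        if abs(b - 1.0) < 1e-12:
            return depth + 1
        return (b ** (depth + 1) - 1) / (b - 1)

    # The root is bracketed by [0, n_nodes] for depth >= 1; depth 0 is
    # degenerate (the node count is 1 for every b), a matter of convention.
    lo, hi = 0.0, float(n_nodes)
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if n_for(mid, depth) < n_nodes:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```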
# Here is a simple example using our usual simple graph search.
# In[1]:
def actions_simple(state):
    # Successor graph: each state maps to a list of child states.
    succs = {'a': ['b', 'c'], 'b': ['a'], 'c': ['h'], 'h': ['i'],
             'i': ['j', 'k', 'l'], 'k': ['z']}
    # Each action is (successor state, step cost of 1).
    return [(s, 1) for s in succs.get(state, [])]

def take_action_simple(state, action):
    # An action is already (new state, cost), so applying it just returns it.
    return action

def goal_test_simple(state, goal):
    return state == goal

def h_simple(state, goal):
    # Constant heuristic of 1 for every state.
    return 1
# In[2]:
actions = actions_simple('a')
actions
# In[3]:
take_action_simple('a', actions[0])
# In[11]:
goal_test_simple('a', 'a')
# In[12]:
h_simple('a', 'z')
# In[13]:
iterative_deepening_search('a', 'z', actions_simple, take_action_simple, 10)
# In[14]:
Astar_search('a', actions_simple, take_action_simple,
             lambda s: goal_test_simple(s, 'z'),
             lambda s: h_simple(s, 'z'))
# ## Grading
# Download [A3grader.tar](http://www.cs.colostate.edu/~anderson/cs440/notebooks/A3grader.tar) and extract A3grader.py from it.
# In[2]:
get_ipython().run_line_magic('run', '-i A3grader.py')
# ## Extra Credit
# Add another column for each result (from running `run_experiment`) that is the
# number of seconds each search required. You may measure the run time of some
# code like this:
#
#     import time
#
#     start_time = time.time()
#
#     < do some python stuff >
#
#     end_time = time.time()
#
#     print('This took', end_time - start_time, 'seconds.')
#