In this lab exercise we will look at how to work with data stored in a tabular form and perform exploratory data analysis on it. We will be using the Python Data Analysis Library (aka Pandas) to do this.
At the end of the exercise, remember to fill out the response form here.
The two main data structures that Pandas supports are Series and DataFrames. A Series is a one-dimensional data structure that can hold values of any NumPy data type. A DataFrame, on the other hand, is a two-dimensional data structure that resembles a database table or an Excel spreadsheet. In this lab we will primarily be using DataFrames and will look at operations that we can perform on them.
Before you start, download this file of world cup data into the same directory as this notebook (e.g. /home/datascience/lab3) and untar it.
Also download this restaurant dataset from here. Put the data file, "restaurants.csv", in the same directory as this notebook.
from pylab import *
%matplotlib inline
import pandas as pd
df = pd.DataFrame( { 'a' : [1, 2, 3, 4], 'b': [ 'w', 'x', 'y', 'z'] })
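For comparison, a Series can be constructed directly from a Python list (a minimal sketch; the values and the name used here are arbitrary):
s = pd.Series([10, 20, 30], name='example')
s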
If you need clarification about the Pandas API, you can type the function name followed by ? to get inline help. For example, to get help with the above call, run:
pd.DataFrame?
If you want to see the same information in a browser, look up the function in the API documentation.
The simplest way to see what is in a DataFrame is to print it to the console. For example, to see the DataFrame we created above, you can just type df and see something like
df
This shows that we have two columns, 'a' and 'b', and four rows in our DataFrame.
However, large DataFrames cannot be conveniently printed to the console, so Pandas provides higher-level commands to inspect their contents. To get information on the schema of a DataFrame, we can use the info function.
df.info()
To see the first few rows you can use head, and to see the last few rows you can use tail. This is similar to the UNIX command-line tools of the same names (remember Lab 1?).
df.head(2)
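Similarly, to see the last two rows you can run
df.tail(2)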
To print any range of rows from the DataFrame you can use array-like indexing of row ids. As you might have noticed, rows are numbered from 0 in Pandas, so to get the middle two rows we can use the range 1:3
df[1:3]
Finally, Pandas also has a useful function, describe, that summarizes the contents of the numerical columns in a DataFrame. For example, in df we can see the mean, standard deviation, etc. for column a by running describe.
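For example, the following summarizes column 'a' (the non-numeric column 'b' is omitted from the summary):
df.describe()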
Having worked our way through the basics, let's now see how we can use Pandas for data analysis. For this part of the lab we will reuse the World Cup soccer logs from Lab 1. However, this time the input data has been sampled and formatted as a CSV file that you will load first.
log_df = pd.read_csv("wc_day6_1_sample.csv",
names=['ClientID', 'Date', 'Time', 'URL', 'ResponseCode', 'Size'],
na_values=['-'])
The names argument tells Pandas what the column names are in our file and na_values indicates what character is used for missing values in our dataset. Use the commands from the previous section to explore the dataset and its summary statistics.
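For instance, you might start with something along these lines (just a suggestion; any of the inspection commands shown earlier will do):
log_df.info()
log_df.head()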
TODO:
How many rows are present in log_df ?
What are the URLs between rows 85 and 87 inclusive ?
Next we will look at operators in Pandas that allow us to perform SQL-like queries on the dataset.
A SQL statement typically selects a subset of rows from a table that match given criteria. This is known as the Selection operator in Relational Algebra. Similarly, we can perform selections in Pandas using boolean indexing.
Boolean indexing refers to a technique where you can use a list of boolean values to filter a DataFrame. For example, let's say we only want entries from '01/May/1998'. To do this we can create a boolean list like
is_may1st = log_df['Date'] == '01/May/1998'
is_may1st.head(2)
Now we can filter our DataFrame by passing it the boolean list.
may1_df = log_df[is_may1st]
may1_df.head()
Or we can do this directly by passing the boolean expression to the DataFrame
may1_df = log_df[log_df['Date'] == '01/May/1998']
may1_df.head()
While selection is used for filtering rows, projection is the relational algebra operator used to select columns. To do this with Pandas we just need to pass in a list of the columns we wish to select. For example, to keep only the 'URL' and 'ResponseCode' columns we would run
url_codes = log_df[['URL', 'ResponseCode']]
url_codes.head(5)
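Selection and projection can also be combined in a single expression. As a small illustrative sketch (the response code 404 is just an example value, not something the lab asks for), the following keeps those two columns only for requests that returned 404:
not_found = log_df[log_df['ResponseCode'] == 404][['URL', 'ResponseCode']]
not_found.head()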
Pandas also allows you to group the DataFrame by values in any column. For example to group requests by 'ResponseCode' you can run
grouped = log_df.groupby('ResponseCode')
grouped
As you can see from the output above, grouped is not a DataFrame but an object of type DataFrameGroupBy. This just means that it contains a number of groups and each group is in turn a DataFrame. To see this try
grouped.ngroups
grouped.groups.keys()
grouped.get_group(200).head()
You can also group by multiple columns by passing a list of column names. For example, to group by both date and response code you can run
multi_grouped = log_df.groupby(['ResponseCode', 'Date'])
Pandas also has useful commands to print various statistics about elements in each group.
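For example, here are a few of the summaries you can compute on a grouped object (a sketch; this assumes the Size column is numeric, which it should be after read_csv):
grouped.size()               # number of requests per response code
grouped['Size'].mean()       # average response size per response code
multi_grouped.size().head()  # request counts per (response code, date) pair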
To plot a Series or a DataFrame you can just call plot() on the object; for a histogram, just call hist().
rand_df = pd.DataFrame({'a' : randn(100)})
rand_df.plot()
rand_df.hist()
A join is a way to connect rows in two different data tables based on some criteria. Suppose the university has a database for student records with two tables in it: Students and Grades.
import pandas as pd
Students = pd.DataFrame({'student_id': [1, 2], 'name': ['Alice', 'Bob']})
Students
Grades = pd.DataFrame({'student_id': [1, 1, 2, 2], 'class_id': [1, 2, 1, 3], 'grade': ['A', 'C', 'B', 'B']})
Grades
Let's say we want to know all of Bob's grades. Then, we can look up Bob's student ID in the Students table, and with the ID, look up his grades in the Grades table. Joins naturally express this process: when two tables share a common type of column (student ID in this case), we can join the tables together to get a complete view.
In Pandas, we can use the merge function to perform a join. Pass the two tables to join as the first two arguments, and set the on parameter to the name of the column to join on.
Student_Grades = pd.merge(Students, Grades, on='student_id')
Student_Grades
TODO
Classes = pd.DataFrame({'class_id': [1, 2, 3], 'title': ['Math', 'English', 'Spanish']})
Now let's load the restaurant data that we will be analyzing:
resto = pd.read_csv('restaurants.csv')
resto.info()
resto[:10]
The restaurant data has four columns. id is a unique ID field (unique for each row), name is the name of the restaurant, and city is where it is located. The fourth column, cluster, is a "gold standard" column. If two records have the same cluster, that means they are both about the same restaurant.
The type of join we made above between Students and Grades, where we link records with equal values in a common column, is called an equijoin. Equijoins may join on more than one column, too (all of the values have to match).
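As a quick illustration of a multi-column equijoin (the enrollments table below is made up for this example and is not part of the lab data), on simply takes a list of column names:
enrollments = pd.DataFrame({'student_id': [1, 1, 2], 'class_id': [1, 2, 3], 'semester': ['Fall', 'Fall', 'Spring']})
pd.merge(Grades, enrollments, on=['student_id', 'class_id'])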
Let's use an equijoin to find pairs of duplicate restaurant records. We join the data to itself, on the cluster column.
Note: a join between a table and itself is called a self-join.
The result ("clusters" below) has a lot of extra records in it. For example, since we're joining a table to itself, every record matches itself. We can filter on IDs to get rid of these extra join results. Note that when Pandas joins two tables that have columns with the same name, it appends "_x" and "_y" to the names to distinguish them.
clusters = pd.merge(resto, resto, on='cluster')
clusters = clusters[clusters.id_x != clusters.id_y]
clusters[:10]
TODO
Filter clusters so that we only keep one instance of each matching pair (HINT: use the IDs again).
TODO: How many rows are there in your filtered table?
Do this section if you have time. There are no lab responses for it however.
Sometimes an equijoin isn't good enough.
Say you want to match up records that are almost equal in a column. Or records where some function of a column is equal. Or maybe you don't care about equality: maybe "less than" or "greater than or equal to" is what you want. These cases call for a more general join than an equijoin.
We are going to make one of these joins between the restaurants data and itself. Specifically, we want to match up pairs of records whose restaurant names are almost the same. We call this a fuzzy join.
To do a fuzzy join in Pandas we need to go about it in a few steps: first form the Cartesian product of the table with itself, then compute a similarity score for each pair of records, and finally keep only the pairs whose score is good enough.
SQL Aside: In SQL, all joins are supported in about the same way as equijoins are. Essentially, you write a boolean expression over columns from the joined tables, and whenever that expression is true, you join the records together. This is very similar to writing an if statement in Python or Java.
Let's do an example to get the hang of it.
We're going to be using a string-similarity Python library to compute "edit distance".
To test that it works, the following should run OK:
import Levenshtein as L
We use a "dummy" column to compute the Cartesian product of the data with itself. dummy takes the same value for every record, so we can do an equijoin and get back all pairs.
resto['dummy'] = 0
prod = pd.merge(resto, resto, on='dummy')
# Clean up
del prod['dummy']
del resto['dummy']
# Show that prod is the size of "resto" squared:
print(len(prod), len(resto)**2)
prod[:10]
In the homework assignment, we used a string similarity metric called cosine similarity which measured how many "tokens" two strings shared in common. Now, we're going to use an alternative measure of string similarity called edit-distance. Edit-distance counts the number of simple changes you have to make to a string to turn it into another string.
Import the edit distance library:
import Levenshtein as L
L.distance('Hello, World!', 'Hallo, World!')
Next, we add a computed column, named distance, that measures the edit distance between the names of two restaurants:
# This takes a minute or two to run
prod['distance'] = prod.apply(lambda r: L.distance(r['name_x'], r['name_y']), axis=1)
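Before picking a threshold, it can help to eyeball a few of the computed distances (an optional sanity check):
prod[['name_x', 'name_y', 'distance']].head()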
Now we complete the join by filtering out pairs of records that aren't similar enough for our liking. As in the first homework assignment, we can only figure out how similar is "similar enough" by trying out some different options. Let's try maximum edit-distances from 0 to 10 and compute precision and recall.
%matplotlib inline
import pylab
def accuracy(max_distance):
    # Keep only pairs whose name edit-distance is below the threshold
    similar = prod[prod.distance < max_distance]
    # A pair is correct if both records belong to the same gold-standard cluster
    correct = float(sum(similar.cluster_x == similar.cluster_y))
    precision = correct / len(similar)
    recall = correct / len(clusters)
    return (precision, recall)

thresholds = range(1, 11)
p = []
r = []
for t in thresholds:
    acc = accuracy(t)
    p.append(acc[0])
    r.append(acc[1])
pylab.plot(thresholds, p)
pylab.plot(thresholds, r)
pylab.legend(['precision', 'recall'], loc='upper left')
1. Another common way to visualize the tradeoff between precision and recall is to plot them directly against each other. Create a scatterplot with precision on one axis and recall on the other. Where are the "good" points on the plot, and where are the "bad" ones?
Finally, remember to fill out the response form here !