#!/usr/bin/env python # coding: utf-8 # [A while back I claimed](http://www.gregreda.com/2013/01/23/translating-sql-to-pandas-part1/) I was going to write a couple of posts on translating [pandas](http://pandas.pydata.org) to SQL. I never followed up. However, the other week a couple of coworkers expressed their interest in learning a bit more about it - this seemed like a good reason to revisit the topic. # # What follows is a fairly thorough introduction to the library. I chose to break it into three parts as I felt it was too long and daunting as one. # # - [Part 1: Intro to pandas data structures](http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/), covers the basics of the library's two main data structures - Series and DataFrames. # # - [Part 2: Working with DataFrames](http://www.gregreda.com/2013/10/26/working-with-pandas-dataframes/), dives a bit deeper into the functionality of DataFrames. It shows how to inspect, select, filter, merge, combine, and group your data. # # - [Part 3: Using pandas with the MovieLens dataset](http://www.gregreda.com/2013/10/26/using-pandas-on-the-movielens-dataset/), applies the learnings of the first two parts in order to answer a few basic analysis questions about the MovieLens ratings data. # # If you'd like to follow along, you can find the necessary CSV files [here](https://github.com/gjreda/gregreda.com/tree/master/content/notebooks/data) and the MovieLens dataset [here](http://files.grouplens.org/datasets/movielens/ml-100k.zip). # # My goal for this tutorial is to teach the basics of pandas by comparing and contrasting its syntax with SQL. Since all of my coworkers are familiar with SQL, I feel this is the best way to provide a context that can be easily understood by the intended audience. # # If you're interested in learning more about the library, pandas author [Wes McKinney](https://twitter.com/wesmckinn) has written [Python for Data Analysis](http://www.amazon.com/gp/product/1449319793/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=1449319793&linkCode=as2&tag=gjreda-20&linkId=MCGW4C4NOBRVV5OC), which covers it in much greater detail. # ### What is it? # [pandas](http://pandas.pydata.org/) is an open source [Python](http://www.python.org/) library for data analysis. Python has always been great for prepping and munging data, but it's never been great for analysis - you'd usually end up using [R](http://www.r-project.org/) or loading it into a database and using SQL (or worse, Excel). pandas makes Python great for analysis. # ## Data Structures # pandas introduces two new data structures to Python - [Series](http://pandas.pydata.org/pandas-docs/dev/dsintro.html#series) and [DataFrame](http://pandas.pydata.org/pandas-docs/dev/dsintro.html#dataframe), both of which are built on top of [NumPy](http://www.numpy.org/) (this means it's fast). # In[1]: import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.set_option('max_columns', 50) get_ipython().run_line_magic('matplotlib', 'inline') # ### Series # A Series is a one-dimensional object similar to an array, list, or column in a table. It will assign a labeled index to each item in the Series. By default, each item will receive an index label from 0 to N, where N is the length of the Series minus one. # In[2]: # create a Series with an arbitrary list s = pd.Series([7, 'Heisenberg', 3.14, -1789710578, 'Happy Eating!']) s # Alternatively, you can specify an index to use when creating the Series. 
# In[3]: s = pd.Series([7, 'Heisenberg', 3.14, -1789710578, 'Happy Eating!'], index=['A', 'Z', 'C', 'Y', 'E']) s # The Series constructor can convert a dictionary as well, using the keys of the dictionary as its index. # In[4]: d = {'Chicago': 1000, 'New York': 1300, 'Portland': 900, 'San Francisco': 1100, 'Austin': 450, 'Boston': None} cities = pd.Series(d) cities # You can use the index to select specific items from the Series ... # In[5]: cities['Chicago'] # In[6]: cities[['Chicago', 'Portland', 'San Francisco']] # Or you can use boolean indexing for selection. # In[7]: cities[cities < 1000] # That last one might be a little weird, so let's make it more clear - `cities < 1000` returns a Series of True/False values, which we then pass to our Series `cities`, returning the corresponding True items. # In[8]: less_than_1000 = cities < 1000 print(less_than_1000) print('\n') print(cities[less_than_1000]) # You can also change the values in a Series on the fly. # In[9]: # changing based on the index print('Old value:', cities['Chicago']) cities['Chicago'] = 1400 print('New value:', cities['Chicago']) # In[10]: # changing values using boolean logic print(cities[cities < 1000]) print('\n') cities[cities < 1000] = 750 print(cities[cities < 1000]) # What if you aren't sure whether an item is in the Series? You can check using idiomatic Python. # In[11]: print('Seattle' in cities) print('San Francisco' in cities) # Mathematical operations can be done using scalars and functions. # In[12]: # divide city values by 3 cities / 3 # In[13]: # square city values np.square(cities) # You can add two Series together, which returns a union of the two Series with the addition occurring on the shared index values. Values on either Series that did not have a shared index will produce a NULL/NaN (not a number). # In[14]: print(cities[['Chicago', 'New York', 'Portland']]) print('\n') print(cities[['Austin', 'New York']]) print('\n') print(cities[['Chicago', 'New York', 'Portland']] + cities[['Austin', 'New York']]) # Notice that because Austin, Chicago, and Portland were not found in both Series, they were returned with NULL/NaN values. # # NULL checking can be performed with `isnull` and `notnull`. # In[15]: # returns a boolean series indicating which values aren't NULL cities.notnull() # In[16]: # use boolean logic to grab the NULL cities print(cities.isnull()) print('\n') print(cities[cities.isnull()]) # ## DataFrame # # A DataFrame is a tabular data structure comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can also think of a DataFrame as a group of Series objects that share an index (the column names). # # For the rest of the tutorial, we'll be primarily working with DataFrames. # ### Reading Data # # To create a DataFrame out of common Python data structures, we can pass a dictionary of lists to the DataFrame constructor. # # Using the `columns` parameter allows us to tell the constructor how we'd like the columns ordered. By default, the DataFrame constructor will order the columns alphabetically (though this isn't the case when reading from a file - more on that next).
# In[17]: data = {'year': [2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012], 'team': ['Bears', 'Bears', 'Bears', 'Packers', 'Packers', 'Lions', 'Lions', 'Lions'], 'wins': [11, 8, 10, 15, 11, 6, 10, 4], 'losses': [5, 8, 6, 1, 5, 10, 6, 12]} football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'losses']) football # Much more often, you'll have a dataset you want to read into a DataFrame. Let's go through several common ways of doing so. # **CSV** # # Reading a CSV is as simple as calling the *read_csv* function. By default, the *read_csv* function expects the column separator to be a comma, but you can change that using the `sep` parameter. # In[18]: get_ipython().run_line_magic('cd', '~/Dropbox/tutorials/pandas/') # In[19]: # Source: baseball-reference.com/players/r/riverma01.shtml get_ipython().system('head -n 5 mariano-rivera.csv') # In[20]: from_csv = pd.read_csv('mariano-rivera.csv') from_csv.head() # Our file had headers, which the function inferred upon reading in the file. Had we wanted to be more explicit, we could have passed `header=None` to the function along with a list of column names to use: # In[21]: # Source: pro-football-reference.com/players/M/MannPe00/touchdowns/passing/2012/ get_ipython().system('head -n 5 peyton-passing-TDs-2012.csv') # In[22]: cols = ['num', 'game', 'date', 'team', 'home_away', 'opponent', 'result', 'quarter', 'distance', 'receiver', 'score_before', 'score_after'] no_headers = pd.read_csv('peyton-passing-TDs-2012.csv', sep=',', header=None, names=cols) no_headers.head() # pandas' various *reader* functions have many parameters allowing you to do things like skipping lines of the file, parsing dates, or specifying how to handle NA/NULL datapoints. # # There's also a set of *writer* functions for writing to a variety of formats (CSVs, HTML tables, JSON). They function exactly as you'd expect and are typically called `to_format`: # # ```python # my_dataframe.to_csv('path_to_file.csv') # ``` # # [Take a look at the IO documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) to familiarize yourself with file reading/writing functionality. # **Excel** # # Know who hates [VBA](http://en.wikipedia.org/wiki/Visual_Basic_for_Applications)? Me. I bet you do, too. Thankfully, pandas allows you to read and write Excel files, so you can easily read from Excel, write your code in Python, and then write back out to Excel - no need for VBA. # # Reading Excel files requires the [xlrd](https://pypi.python.org/pypi/xlrd) library. You can install it via [pip](http://www.pip-installer.org/en/latest/) (*pip install xlrd*). # # Let's first write a DataFrame to Excel. # In[23]: # this is the DataFrame we created from a dictionary earlier football.head() # In[24]: # since our index on the football DataFrame is meaningless, let's not write it football.to_excel('football.xlsx', index=False) # In[25]: get_ipython().system('ls -l *.xlsx') # In[26]: # delete the DataFrame del football # In[27]: # read from Excel football = pd.read_excel('football.xlsx', 'Sheet1') football # **Database** # # pandas also has some support for reading/writing DataFrames directly from/to a database [[docs](http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries)]. You'll typically just need to pass a connection object or sqlalchemy engine to the `read_sql` or `to_sql` functions within the `pandas.io` module. # # Note that `to_sql` executes as a series of INSERT INTO statements and thus trades speed for simplicity. 
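# For example, a minimal sketch of the round trip might look like this (assuming a reasonably recent pandas; the in-memory SQLite database and the `football` table name are just for illustration):
#
# ```python
# import sqlite3
#
# conn = sqlite3.connect(':memory:')               # throwaway in-memory database
# football.to_sql('football', conn, index=False)   # issues the INSERT INTO statements for us
# pd.read_sql("SELECT * FROM football WHERE wins > 10;", conn)
# ```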
If you're writing a large DataFrame to a database, it might be quicker to write the DataFrame to CSV and load that directly using the database's file import arguments. # In[28]: from pandas.io import sql import sqlite3 conn = sqlite3.connect('/Users/gjreda/Dropbox/gregreda.com/_code/towed') query = "SELECT * FROM towed WHERE make = 'FORD';" results = sql.read_sql(query, con=conn) results.head() # **Clipboard** # # While the results of a query can be read directly into a DataFrame, I prefer to read the results directly from the clipboard. I'm often tweaking queries in my SQL client ([Sequel Pro](http://www.sequelpro.com/)), so I would rather see the results *before* I read them into pandas. Once I'm confident I have the data I want, then I'll read it into a DataFrame. # # This works just as well with any type of delimited data you've copied to your clipboard. The function does a good job of inferring the delimiter, but you can also use the `sep` parameter to be explicit. # # [Hank Aaron](http://www.baseball-reference.com/players/a/aaronha01.shtml) # # ![hank-aaron-stats-screenshot](http://i.imgur.com/xiySJ2e.png) # In[29]: hank = pd.read_clipboard() hank.head() # **URL** # # With `read_table`, we can also read directly from a URL. # # Let's use the [best sandwiches data](https://raw.github.com/gjreda/best-sandwiches/master/data/best-sandwiches-geocode.tsv) that I [wrote about scraping](http://www.gregreda.com/2013/05/06/more-web-scraping-with-python/) a while back. # In[30]: url = 'https://raw.github.com/gjreda/best-sandwiches/master/data/best-sandwiches-geocode.tsv' # fetch the text from the URL and read it into a DataFrame from_url = pd.read_table(url, sep='\t') from_url.head(3) # **Google Analytics** # # pandas also has some integration with the Google Analytics API, though there is some setup required. I won't be covering it, but you can read more about it [here](http://blog.yhathq.com/posts/pandas-google-analytics.html) and [here](http://quantabee.wordpress.com/2012/12/17/google-analytics-pandas/). # ## Working with DataFrames # Now that we can get data into DataFrames, we can finally start working with them. pandas has an abundance of functionality, far too much for me to cover in this introduction. I'd encourage anyone interested in diving deeper into the library to check out its [excellent documentation](http://pandas.pydata.org/pandas-docs/stable/). Or just use Google - there are a lot of Stack Overflow questions and blog posts covering specifics of the library. # # We'll be using the [MovieLens](http://www.grouplens.org/node/73) dataset in many examples going forward. The dataset contains 100,000 ratings made by 943 users on 1,682 movies. # In[31]: # pass in column names for each CSV u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code'] users = pd.read_csv('ml-100k/u.user', sep='|', names=u_cols, encoding='latin-1') r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp'] ratings = pd.read_csv('ml-100k/u.data', sep='\t', names=r_cols, encoding='latin-1') # the movies file contains columns indicating the movie's genres # let's only load the first five columns of the file with usecols m_cols = ['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url'] movies = pd.read_csv('ml-100k/u.item', sep='|', names=m_cols, usecols=range(5), encoding='latin-1') # ### Inspection # # pandas has a variety of functions for getting basic information about your DataFrame, the most basic of which is using the `info` method.
# In[32]: movies.info() # The output tells us a few things about our DataFrame. # # 1. It's obviously an instance of a DataFrame. # 2. Each row was assigned an index of 0 to N-1, where N is the number of rows in the DataFrame. pandas will do this by default if an index is not specified. Don't worry, this can be changed later. # 3. There are 1,682 rows (every row must have an index). # 4. Our dataset has five total columns, one of which isn't populated at all (video_release_date) and two that are missing some values (release_date and imdb_url). # 5. The datatypes of each column, though not necessarily in the same order as the columns listed above. Use the `dtypes` attribute to get the datatype of each column. # 6. An approximate amount of RAM used to hold the DataFrame. See the `.memory_usage` method. # In[33]: movies.dtypes # DataFrames also have a `describe` method, which is great for seeing basic statistics about the dataset's numeric columns. Be careful though, since this will return information on **all** columns of a numeric datatype. # In[34]: users.describe() # Notice *user_id* was included since it's numeric. Since this is an ID value, the stats for it don't really matter. # # We can quickly see the average age of our users is just above 34 years old, with the youngest being 7 and the oldest being 73. The median age is 31, with the youngest quartile of users being 25 or younger, and the oldest quartile being at least 43. # You've probably noticed that I've used the `head` method regularly throughout this post - by default, `head` displays the first five records of the dataset, while `tail` displays the last five. # In[35]: movies.head() # In[36]: movies.tail(3) # Alternatively, Python's regular [slicing](http://docs.python.org/release/2.3.5/whatsnew/section-slices.html) syntax works as well. # In[37]: movies[20:22] # ### Selecting # # You can think of a DataFrame as a group of Series that share an index (in this case the column headers). This makes it easy to select specific columns. # # Selecting a single column from the DataFrame will return a Series object. # In[38]: users['occupation'].head() # To select multiple columns, simply pass a list of column names to the DataFrame, the output of which will be a DataFrame. # In[39]: print(users[['age', 'zip_code']].head()) print('\n') # can also store in a variable to use later columns_you_want = ['occupation', 'sex'] print(users[columns_you_want].head()) # Row selection can be done in multiple ways, but using an individual index or boolean indexing is typically easiest. # In[40]: # users older than 25 print(users[users.age > 25].head(3)) print('\n') # users aged 40 AND male print(users[(users.age == 40) & (users.sex == 'M')].head(3)) print('\n') # users younger than 30 OR female print(users[(users.sex == 'F') | (users.age < 30)].head(3)) # Since our index is kind of meaningless right now, let's set it to the `user_id` using the `set_index` method. By default, `set_index` returns a new DataFrame, so you'll have to specify if you'd like the changes to occur in place. # # This has confused me in the past, so look carefully at the code and output below. # In[41]: print(users.set_index('user_id').head()) print('\n') print(users.head()) print("\n^^^ I didn't actually change the DataFrame. ^^^\n") with_new_index = users.set_index('user_id') print(with_new_index.head()) print("\n^^^ set_index actually returns a new DataFrame. ^^^\n") # If you want to modify your existing DataFrame, use the `inplace` parameter.
Most DataFrame methods return a new DataFrame, while offering an `inplace` parameter. Note that the `inplace` version might not actually be any more efficient (in terms of speed or memory usage) than the regular version. # In[42]: users.set_index('user_id', inplace=True) users.head() # Notice that we've lost the default pandas 0-based index and moved the user_id into its place. We can select rows *by position* using the `iloc` method. # In[43]: print(users.iloc[99]) print('\n') print(users.iloc[[1, 50, 300]]) # And we can select rows *by label* with the `loc` method. # In[44]: print(users.loc[100]) print('\n') print(users.loc[[2, 51, 301]]) # If we realize later that we liked the old pandas default index, we can just `reset_index`. The same rules for `inplace` apply. # In[45]: users.reset_index(inplace=True) users.head() # The simplified rules of indexing are # # - Use `loc` for label-based indexing # - Use `iloc` for positional indexing # # I've found that I can usually get by with boolean indexing, `loc` and `iloc`, but pandas has a whole host of [other ways to do selection](http://pandas.pydata.org/pandas-docs/stable/indexing.html). # ### Joining # # Throughout an analysis, we'll often need to merge/join datasets as data is typically stored in a [relational](http://en.wikipedia.org/wiki/Relational_database) manner. # # Our MovieLens data is a good example of this - a rating requires both a user and a movie, and the datasets are linked together by a key - in this case, the user_id and movie_id. It's possible for a user to be associated with zero or many ratings and movies. Likewise, a movie can be rated zero or many times, by a number of different users. # # Like SQL's JOIN clause, `pandas.merge` allows two DataFrames to be joined on one or more keys. The function provides a series of parameters `(on, left_on, right_on, left_index, right_index)` allowing you to specify the columns or indexes on which to join. # # By default, `pandas.merge` operates as an *inner join*, which can be changed using the `how` parameter. # # From the function's docstring: # # > how : {'left', 'right', 'outer', 'inner'}, default 'inner' # # > * left: use only keys from left frame (SQL: left outer join) # # > * right: use only keys from right frame (SQL: right outer join) # # > * outer: use union of keys from both frames (SQL: full outer join) # # > * inner: use intersection of keys from both frames (SQL: inner join) # # Below are some examples of what each looks like. # In[46]: left_frame = pd.DataFrame({'key': range(5), 'left_value': ['a', 'b', 'c', 'd', 'e']}) right_frame = pd.DataFrame({'key': range(2, 7), 'right_value': ['f', 'g', 'h', 'i', 'j']}) print(left_frame) print('\n') print(right_frame) # **inner join (default)** # In[47]: pd.merge(left_frame, right_frame, on='key', how='inner') # We lose values from both frames since certain keys do not match up. The SQL equivalent is: # # ``` # SELECT left_frame.key, left_frame.left_value, right_frame.right_value # FROM left_frame # INNER JOIN right_frame # ON left_frame.key = right_frame.key; # ``` # # Had our *key* columns not been named the same, we could have used the *left_on* and *right_on* parameters to specify which fields to join from each frame. # ```python # pd.merge(left_frame, right_frame, left_on='left_key', right_on='right_key') # ``` # Alternatively, if our keys were indexes, we could use the `left_index` or `right_index` parameters, which accept a True/False value.
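# If both frames were indexed by their keys instead, a sketch of the equivalent index-on-index join might look like this (assuming we set `key` as the index on each frame first):
#
# ```python
# pd.merge(left_frame.set_index('key'), right_frame.set_index('key'),
#          left_index=True, right_index=True)
# ```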
You can mix and match columns and indexes like so: # ```python # pd.merge(left_frame, right_frame, left_on='key', right_index=True) # ``` # **left outer join** # In[48]: pd.merge(left_frame, right_frame, on='key', how='left') # We keep everything from the left frame, pulling in the value from the right frame where the keys match up. The right_value is NULL (NaN) where the keys do not match. # # SQL Equivalent: # # SELECT left_frame.key, left_frame.left_value, right_frame.right_value # FROM left_frame # LEFT JOIN right_frame # ON left_frame.key = right_frame.key; # **right outer join** # In[49]: pd.merge(left_frame, right_frame, on='key', how='right') # This time we've kept everything from the right frame with the left_value being NULL where the right frame's key did not find a match. # # SQL Equivalent: # # SELECT right_frame.key, left_frame.left_value, right_frame.right_value # FROM left_frame # RIGHT JOIN right_frame # ON left_frame.key = right_frame.key; # **full outer join** # In[50]: pd.merge(left_frame, right_frame, on='key', how='outer') # We've kept everything from both frames, regardless of whether or not there was a match on both sides. Where there was not a match, the values corresponding to that key are NULL. # # SQL Equivalent (though some databases don't allow FULL JOINs (e.g. MySQL)): # # SELECT IFNULL(left_frame.key, right_frame.key) key # , left_frame.left_value, right_frame.right_value # FROM left_frame # FULL OUTER JOIN right_frame # ON left_frame.key = right_frame.key; # ### Combining # # pandas also provides a way to combine DataFrames along an axis - `pandas.concat`. While the function is equivalent to SQL's UNION ALL clause, there's a lot more that can be done with it. # # `pandas.concat` takes a list of Series or DataFrames and returns a Series or DataFrame of the concatenated objects. Note that because the function takes a list, you can combine many objects at once. # In[51]: pd.concat([left_frame, right_frame]) # By default, the function will vertically append the objects to one another, combining columns with the same name. We can see above that values not matching up will be NULL. # # Additionally, objects can be concatenated side-by-side using the function's *axis* parameter. # In[52]: pd.concat([left_frame, right_frame], axis=1) # `pandas.concat` can be used in a variety of ways; however, I've typically only used it to combine Series/DataFrames into one unified object. The [documentation](http://pandas.pydata.org/pandas-docs/stable/merging.html#concatenating-objects) has some examples on the ways it can be used. # ### Grouping # # Grouping in pandas took some time for me to grasp, but it's pretty awesome once it clicks. # # pandas' `groupby` method draws largely from the [split-apply-combine strategy for data analysis](http://www.jstatsoft.org/v40/i01/paper). If you're not familiar with this methodology, I highly suggest you read up on it. It does a great job of illustrating how to properly think through a data problem, which I feel is more important than any technical skill a data analyst/scientist can possess. # # When approaching a data analysis problem, you'll often break it apart into manageable pieces, perform some operations on each of the pieces, and then put everything back together again (this is the gist of the split-apply-combine strategy). pandas `groupby` is great for these problems (R users should check out the [plyr](http://plyr.had.co.nz/) and [dplyr](https://github.com/hadley/dplyr) packages).
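# As a minimal sketch of that workflow on a tiny made-up DataFrame (not the salary data used below), the split, apply, and combine steps collapse into a single expression:
#
# ```python
# toy = pd.DataFrame({'team': ['A', 'A', 'B', 'B'],
#                     'points': [10, 14, 8, 12]})
# toy.groupby('team').points.mean()   # split by team, apply mean, combine into one Series
# ```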
# # If you've ever used SQL's GROUP BY or an Excel Pivot Table, you've thought with this mindset, probably without realizing it. # # Assume we have a DataFrame and want to get the average for each group - visually, the split-apply-combine method looks like this: # # ![Source: Gratuitously borrowed from [Hadley Wickham's Data Science in R slides](http://courses.had.co.nz/12-oscon/)](http://i.imgur.com/yjNkiwL.png) # The City of Chicago is kind enough to publish all city employee salaries to its open data portal. Let's go through some basic `groupby` examples using this data. # In[53]: get_ipython().system('head -n 3 city-of-chicago-salaries.csv') # Since the data contains a dollar sign for each salary, Python will treat the field as a series of strings. We can use the `converters` parameter to change this when reading in the file. # # >converters : dict, optional # # >* Dict of functions for converting values in certain columns. Keys can either be integers or column labels # In[54]: headers = ['name', 'title', 'department', 'salary'] chicago = pd.read_csv('city-of-chicago-salaries.csv', header=0, names=headers, converters={'salary': lambda x: float(x.replace('$', ''))}) chicago.head() # pandas `groupby` returns a DataFrameGroupBy object which has a variety of methods, many of which are similar to standard SQL aggregate functions. # In[55]: by_dept = chicago.groupby('department') by_dept # Calling `count` returns the total number of NOT NULL values within each column. If we were interested in the total number of records in each group, we could use `size`. # In[56]: print(by_dept.count().head()) # NOT NULL records within each column print('\n') print(by_dept.size().tail()) # total records for each department # Summation can be done via `sum`, averaging by `mean`, etc. (if it's a SQL function, chances are it exists in pandas). Oh, and there's median too, something not available in most databases. # In[57]: print(by_dept.sum()[20:25]) # total salaries of each department print('\n') print(by_dept.mean()[20:25]) # average salary of each department print('\n') print(by_dept.median()[20:25]) # take that, RDBMS! # Operations can also be done on an individual Series within a grouped object. Say we were curious about the five departments with the most distinct titles - the pandas equivalent to: # # SELECT department, COUNT(DISTINCT title) # FROM chicago # GROUP BY department # ORDER BY 2 DESC # LIMIT 5; # # pandas is a lot less verbose here ... # In[58]: by_dept.title.nunique().sort_values(ascending=False)[:5] # ### split-apply-combine # # The real power of `groupby` comes from its split-apply-combine ability. # # What if we wanted to see the highest paid employee within each department? Given our current dataset, we'd have to do something like this in SQL: # # SELECT * # FROM chicago c # INNER JOIN ( # SELECT department, max(salary) max_salary # FROM chicago # GROUP BY department # ) m # ON c.department = m.department # AND c.salary = m.max_salary; # # This would give you the highest paid person in each department, but it would return multiple rows if there were many equally highly paid people within a department. # # Alternatively, you could alter the table, add a column, and then write an update statement to populate that column. However, that's not always an option. # # _Note: This would be a lot easier in PostgreSQL, T-SQL, and possibly Oracle due to the existence of partition/window/analytic functions. I've chosen to use MySQL syntax throughout this tutorial because of its popularity.
Unfortunately, MySQL doesn't have similar functions._ # Using `groupby` we can define a function (which we'll call `ranker`) that will label each record from 1 to N, where N is the number of employees within the department. We can then call `apply` to, well, _apply_ that function to each group (in this case, each department). # In[59]: def ranker(df): """Assigns a rank to each employee based on salary, with 1 being the highest paid. Assumes the data is DESC sorted.""" df['dept_rank'] = np.arange(len(df)) + 1 return df # In[60]: chicago.sort_values('salary', ascending=False, inplace=True) chicago = chicago.groupby('department').apply(ranker) print(chicago[chicago.dept_rank == 1].head(7)) # In[61]: chicago[chicago.department == "LAW"][:5] # We can now see where each employee ranks within their department based on salary. # ## Using pandas on the MovieLens dataset # To show pandas in a more "applied" sense, let's use it to answer some questions about the [MovieLens](http://www.grouplens.org/datasets/movielens/) dataset. Recall that we've already read our data into DataFrames and merged them. # In[62]: # pass in column names for each CSV u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code'] users = pd.read_csv('ml-100k/u.user', sep='|', names=u_cols, encoding='latin-1') r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp'] ratings = pd.read_csv('ml-100k/u.data', sep='\t', names=r_cols, encoding='latin-1') # the movies file contains columns indicating the movie's genres # let's only load the first five columns of the file with usecols m_cols = ['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url'] movies = pd.read_csv('ml-100k/u.item', sep='|', names=m_cols, usecols=range(5), encoding='latin-1') # create one merged DataFrame movie_ratings = pd.merge(movies, ratings) lens = pd.merge(movie_ratings, users) # **What are the 25 most rated movies?** # In[63]: most_rated = lens.groupby('title').size().sort_values(ascending=False)[:25] most_rated # There's a lot going on in the code above, but it's very idiomatic. We're splitting the DataFrame into groups by movie title and applying the `size` method to get the count of records in each group. Then we order our results in descending order and limit the output to the top 25 using Python's slicing syntax. # # In SQL, this would be equivalent to: # # SELECT title, count(1) # FROM lens # GROUP BY title # ORDER BY 2 DESC # LIMIT 25; # # Alternatively, pandas has a nifty `value_counts` method - yes, this is simpler - the goal above was to show a basic `groupby` example. # In[64]: lens.title.value_counts()[:25] # **Which movies are most highly rated?** # In[65]: movie_stats = lens.groupby('title').agg({'rating': [np.size, np.mean]}) movie_stats.head() # We can use the `agg` method to pass a dictionary specifying the columns to aggregate (as keys) and a list of functions we'd like to apply. # # Let's sort the resulting DataFrame so that we can see which movies have the highest average score. # In[66]: # sort by rating average movie_stats.sort_values([('rating', 'mean')], ascending=False).head() # Note that we use the `sort_values` method here (older versions of pandas used `sort` for DataFrames and `order` for Series). Additionally, because our columns are now a [MultiIndex](http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex), we need to pass in a tuple specifying how to sort. # # The above movies are rated so rarely that we can't count them as quality films.
Let's only look at movies that have been rated at least 100 times. # In[67]: atleast_100 = movie_stats['rating']['size'] >= 100 movie_stats[atleast_100].sort_values([('rating', 'mean')], ascending=False)[:15] # Those results look realistic. Notice that we used boolean indexing to filter our `movie_stats` frame. # # We broke this question down into many parts, so here's the Python needed to get the 15 movies with the highest average rating, requiring that they had at least 100 ratings: # # ```python # movie_stats = lens.groupby('title').agg({'rating': [np.size, np.mean]}) # atleast_100 = movie_stats['rating']['size'] >= 100 # movie_stats[atleast_100].sort_values([('rating', 'mean')], ascending=False)[:15] # ``` # # The SQL equivalent would be: # # SELECT title, COUNT(1) size, AVG(rating) mean # FROM lens # GROUP BY title # HAVING COUNT(1) >= 100 # ORDER BY 3 DESC # LIMIT 15; # **Limiting our population going forward** # # Going forward, let's only look at the 50 most rated movies. Let's make a Series of movies that meet this threshold so we can use it for filtering later. # In[68]: most_50 = lens.groupby('movie_id').size().sort_values(ascending=False)[:50] # The SQL to match this would be: # # CREATE TABLE most_50 AS ( # SELECT movie_id, COUNT(1) # FROM lens # GROUP BY movie_id # ORDER BY 2 DESC # LIMIT 50 # ); # # This table would then allow us to use EXISTS, IN, or JOIN whenever we wanted to filter our results. Here's an example using EXISTS: # # SELECT * # FROM lens # WHERE EXISTS (SELECT 1 FROM most_50 WHERE lens.movie_id = most_50.movie_id); # **Which movies are most controversial amongst different ages?** # Let's look at how these movies are viewed across different age groups. First, let's look at how age is distributed amongst our users. # In[69]: users.age.plot.hist(bins=30) plt.title("Distribution of users' ages") plt.ylabel('count of users') plt.xlabel('age'); # pandas' integration with [matplotlib](http://matplotlib.org/index.html) makes basic graphing of Series/DataFrames trivial. In this case, just call `plot.hist` on the column to produce a histogram. We can also use [matplotlib.pyplot](http://matplotlib.org/users/pyplot_tutorial.html) to customize our graph a bit (always label your axes). # **Binning our users** # # I don't think it'd be very useful to compare individual ages - let's bin our users into age groups using `pandas.cut`. # In[70]: labels = ['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70-79'] lens['age_group'] = pd.cut(lens.age, range(0, 81, 10), right=False, labels=labels) lens[['age', 'age_group']].drop_duplicates()[:10] # `pandas.cut` allows you to bin numeric data. In the above lines, we first created labels to name our bins, then split our users into eight bins of ten years (0-9, 10-19, 20-29, etc.). Our use of `right=False` told the function that we wanted the bins to be *exclusive* of the max age in the bin (e.g. a 30 year old user gets the 30s label). # # Now we can compare ratings across age groups. # In[71]: lens.groupby('age_group').agg({'rating': [np.size, np.mean]}) # Young users seem a bit more critical than other age groups. Let's look at how the 50 most rated movies are viewed across each age group. We can use the `most_50` Series we created earlier for filtering. # In[72]: lens.set_index('movie_id', inplace=True) # In[73]: by_age = lens.loc[most_50.index].groupby(['title', 'age_group']) by_age.rating.mean().head(15) # Notice that both the title and age group are indexes here, with the average rating value being a Series.
This is going to produce a really long list of values. # # Wouldn't it be nice to see the data as a table? Each title as a row, each age group as a column, and the average rating in each cell. # # Behold! The magic of `unstack`! # In[80]: by_age.rating.mean().unstack(1).fillna(0)[10:20] # `unstack`, well, unstacks the specified level of a [MultiIndex](http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex) (by default, `groupby` turns the grouped field into an index - since we grouped by two fields, it became a MultiIndex). We unstacked the second index (remember that Python uses 0-based indexes), and then filled in NULL values with 0. # # If we had used: # ```python # by_age.rating.mean().unstack(0).fillna(0) # ``` # We would have had our age groups as rows and movie titles as columns. # **Which movies do men and women most disagree on?** # # EDIT: *I realized after writing this question that Wes McKinney basically went through the exact same question in his book. It's a good, yet simple example of pivot_table, so I'm going to leave it here. Seriously though, [go buy the book](http://www.amazon.com/gp/product/1449319793/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=1449319793&linkCode=as2&tag=gjreda-20&linkId=MCGW4C4NOBRVV5OC).* # # Think about how you'd have to do this in SQL for a second. You'd have to use a combination of IF/CASE statements with aggregate functions in order to pivot your dataset. Your query would look something like this: # # SELECT title, AVG(IF(sex = 'F', rating, NULL)), AVG(IF(sex = 'M', rating, NULL)) # FROM lens # GROUP BY title; # # Imagine how annoying it'd be if you had to do this on more than two columns. # # DataFrames have a *pivot_table* method that makes these kinds of operations much easier (and less verbose). # In[75]: lens.reset_index('movie_id', inplace=True) # In[76]: pivoted = lens.pivot_table(index=['movie_id', 'title'], columns=['sex'], values='rating', fill_value=0) pivoted.head() # In[77]: pivoted['diff'] = pivoted.M - pivoted.F pivoted.head() # In[78]: pivoted.reset_index('movie_id', inplace=True) # In[79]: disagreements = pivoted[pivoted.movie_id.isin(most_50.index)]['diff'] disagreements.sort_values().plot(kind='barh', figsize=[9, 15]) plt.title('Male vs. Female Avg. Ratings\n(Difference > 0 = Favored by Men)') plt.ylabel('Title') plt.xlabel('Average Rating Difference'); # Of course men like Terminator more than women. Independence Day though? Really? # ### Additional Resources: # # * [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) # * [Introduction to pandas](http://nbviewer.ipython.org/urls/gist.github.com/fonnesbeck/5850375/raw/c18cfcd9580d382cb6d14e4708aab33a0916ff3e/1.+Introduction+to+Pandas.ipynb) by [Chris Fonnesbeck](https://twitter.com/fonnesbeck) # * [pandas videos from PyCon](http://pyvideo.org/search?models=videos.video&q=pandas) # * [pandas and Python top 10](http://manishamde.github.io/blog/2013/03/07/pandas-and-python-top-10/) # * [pandasql](http://blog.yhathq.com/posts/pandasql-sql-for-pandas-dataframes.html) # * [Practical pandas by Tom Augspurger (one of the pandas developers)](http://tomaugspurger.github.io/categories/pandas.html) # * [Video](https://www.youtube.com/watch?v=otCriSKVV_8) from Tom's pandas tutorial at PyData Seattle 2015