Grouping and aggregating data
The apply function
import pandas as pd
import numpy as np
import pylab as pl
df = pd.read_csv("./data/credit-data-trainingset.csv")
df.head()
 | serious_dlqin2yrs | revolving_utilization_of_unsecured_lines | age | number_of_time30-59_days_past_due_not_worse | debt_ratio | monthly_income | number_of_open_credit_lines_and_loans | number_of_times90_days_late | number_real_estate_loans_or_lines | number_of_time60-89_days_past_due_not_worse | number_of_dependents
---|---|---|---|---|---|---|---|---|---|---|---
0 | 1 | 0.766127 | 45 | 2 | 0.802982 | 9120 | 13 | 0 | 6 | 0 | 2
1 | 0 | 0.957151 | 40 | 0 | 0.121876 | 2600 | 4 | 0 | 0 | 0 | 1
2 | 0 | 0.658180 | 38 | 1 | 0.085113 | 3042 | 2 | 1 | 0 | 0 | 0
3 | 0 | 0.907239 | 49 | 1 | 0.024926 | 63588 | 7 | 0 | 1 | 0 | 0
4 | 0 | 0.213179 | 74 | 0 | 0.375607 | 3500 | 3 | 0 | 1 | 0 | 1
"Applies" or operates on a column in your data frame with a given function. This is analagous to an Excel formula.
df.monthly_income.apply(np.log)
0          9.118225
1          7.863267
2          8.020270
3         11.060180
4          8.160518
5          8.160518
6         10.072555
7          7.824046
8          8.779711
9          9.429797
10             -inf
11         9.338030
12         9.082507
13         8.095599
14         5.808142
...
112400     9.752665
112401     8.329658
112402     7.170120
112403     9.176370
112404     8.832442
112405     7.955074
112406     8.081784
112407     7.899895
112408     8.081784
112409     8.702344
112410     9.528794
112411     8.702344
112412     7.675546
112413     8.235095
112414     9.210340
Name: monthly_income, Length: 112415, dtype: float64
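Note the -inf at index 10: that row has a monthly_income of 0, and the log of zero is negative infinity. A minimal sketch of the problem, using made-up incomes:

```python
import numpy as np

incomes = np.array([9120.0, 0.0, 2600.0])
with np.errstate(divide="ignore"):  # silence the divide-by-zero warning
    logged = np.log(incomes)
print(logged[1])  # -inf: log(0) is negative infinity
```

This is why the next cell shifts every value by 1 before taking the log.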
A lambda function is an anonymous function. Think of it as a shorthand way to define a quick function that you only need once.
add_10 = lambda x: x + 10
plus = lambda x, y: x + y
print(add_10(9))
print(plus(10, 20))
19
30
df.monthly_income.apply(lambda x: np.log(x + 1))
0          9.118335
1          7.863651
2          8.020599
3         11.060196
4          8.160804
5          8.160804
6         10.072597
7          7.824446
8          8.779865
9          9.429877
10         0.000000
11         9.338118
12         9.082621
13         8.095904
14         5.811141
...
112400     9.752723
112401     8.329899
112402     7.170888
112403     9.176473
112404     8.832588
112405     7.955425
112406     8.082093
112407     7.900266
112408     8.082093
112409     8.702510
112410     9.528867
112411     8.702510
112412     7.676010
112413     8.235361
112414     9.210440
Name: monthly_income, Length: 112415, dtype: float64
# NumPy actually has log(x + 1) built in: np.log1p
help(np.log1p)
Help on ufunc object:

log1p = class ufunc(__builtin__.object)
 |  Functions that operate element by element on whole arrays.
 |
 |  To see the documentation for a specific ufunc, use np.info().  For
 |  example, np.info(np.sin).  Because ufuncs are written in C
 |  (for speed) and linked into Python with NumPy's ufunc facility,
 |  Python's help() function finds this page whenever help() is called
 |  on a ufunc.
 |
 |  A detailed explanation of ufuncs can be found in the "ufuncs.rst"
 |  file in the NumPy reference guide.
 |  ...
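np.log1p computes log(x + 1) directly, and agrees with the lambda version above. A quick check on a few sample values:

```python
import numpy as np

x = np.array([0.0, 1.0, 9120.0])
# log1p(x) == log(x + 1), but log1p is also more accurate for tiny x
assert np.allclose(np.log1p(x), np.log(x + 1))
print(np.log1p(0.0))  # 0.0: a zero income no longer maps to -inf
```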
If you can't do it in a one-liner lambda function, don't worry. pandas also lets you apply your own custom functions. You can use custom functions when applying to a Series and also when operating on chunks of data frames in a groupby.
def inverse(x):
    return 1 / (x + 1)
df.monthly_income.apply(inverse)
0         0.000110
1         0.000384
2         0.000329
3         0.000016
4         0.000286
5         0.000286
6         0.000042
7         0.000400
8         0.000154
9         0.000080
10        1.000000
11        0.000088
12        0.000114
13        0.000305
14        0.002994
...
112400    0.000058
112401    0.000241
112402    0.000769
112403    0.000103
112404    0.000146
112405    0.000351
112406    0.000309
112407    0.000371
112408    0.000309
112409    0.000166
112410    0.000073
112411    0.000166
112412    0.000464
112413    0.000265
112414    0.000100
Name: monthly_income, Length: 112415, dtype: float64
Write a function cap_value(x, cap) that will set x to the cap if x > cap. Then apply it to debt_ratio with a cap of 5.

def cap_value(x, cap):
"""
x - a value
cap - threshold value for x; if x > cap, then x is set to cap
Examples:
cap_value(1000, 10)
10
cap_value(10, 100)
10
"""
# your code here
return None
print(cap_value(1000, 10) == 10)
print(cap_value(10, 100) == 10)
print(df.debt_ratio.apply(lambda x: cap_value(x, 5.0)).mean())  # should be close to 1.28
False
False
nan
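One possible solution to the exercise (a sketch; the built-in min does the capping in one line):

```python
def cap_value(x, cap):
    """Cap x at cap: return x unchanged unless it exceeds cap."""
    return min(x, cap)

print(cap_value(1000, 10))  # 10
print(cap_value(10, 100))   # 10
```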
Split, Apply, Combine is a data munging methodology similar in spirit to SQL's GROUP BY. The idea is that you split your data into chunks, operate on each chunk, and then combine the results back into a single table. groupby in pandas works exactly the same way, but since we're using Python and not SQL, we have a lot more flexibility in the types of operations we can perform in the apply step.
subset = df[['serious_dlqin2yrs', 'age', 'monthly_income']]
subset.groupby("serious_dlqin2yrs")
<pandas.core.groupby.DataFrameGroupBy object at 0x103c5b550>
pandas aggregates whatever is returned by the operation for each group:
subset.groupby("serious_dlqin2yrs").mean()
 | age | monthly_income
---|---|---
serious_dlqin2yrs | |
0 | 52.734998 | 6424.391821
1 | 45.966445 | 5457.963915
What's really going on here? You can see below that when you groupby a certain variable (or variables), you're literally splitting the data into chunks based on each possible value of that variable.
for name, group in subset.groupby("serious_dlqin2yrs"):
print "splitting by: ", name
print group.mean()
print "*"*80
splitting by: 0
serious_dlqin2yrs       0.000000
age                    52.734998
monthly_income       6424.391821
dtype: float64
********************************************************************************
splitting by: 1
serious_dlqin2yrs       1.000000
age                    45.966445
monthly_income       5457.963915
dtype: float64
********************************************************************************
subset.groupby("serious_dlqin2yrs").agg([np.min, np.mean, np.median, np.max])
 | age | | | | monthly_income | | |
---|---|---|---|---|---|---|---|---
 | amin | mean | median | amax | amin | mean | median | amax
serious_dlqin2yrs | | | | | | | |
0 | 0 | 52.734998 | 52 | 109 | 0 | 6424.391821 | 5250 | 3008750
1 | 21 | 45.966445 | 46 | 99 | 0 | 5457.963915 | 4333 | 250000
pandas also lets you use custom apply functions

def age_x_income(frame):
    x = frame.age * frame.monthly_income
    return np.mean(x)
subset.groupby("serious_dlqin2yrs").apply(age_x_income)
serious_dlqin2yrs
0    343570.264001
1    260374.521838
dtype: float64
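The same pattern on a tiny, self-contained frame (hypothetical data, just to show the mechanics of groupby plus a custom apply function):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "group": ["a", "a", "b"],
    "age": [30, 40, 50],
    "monthly_income": [1000, 3000, 2000],
})

def age_x_income(frame):
    # frame is the chunk of rows belonging to one group value
    return np.mean(frame.age * frame.monthly_income)

# one result per group: a -> 75000.0, b -> 100000.0
print(toy.groupby("group").apply(age_x_income))
```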
You can also merge data frames together, much like a SQL join. Here we join a table of estimated US population by age onto our data.

pop = pd.read_csv("./data/uspop.csv")
pop
<class 'pandas.core.frame.DataFrame'>
Int64Index: 78 entries, 0 to 77
Data columns (total 2 columns):
age        78 non-null values
est_pop    78 non-null values
dtypes: float64(1), int64(1)
cols = ['age', 'monthly_income', 'serious_dlqin2yrs']
result = pd.merge(df[cols], pop, how='left', on='age')
result
<class 'pandas.core.frame.DataFrame'>
Int64Index: 116157 entries, 0 to 116156
Data columns (total 4 columns):
age                  116157 non-null values
monthly_income       116157 non-null values
serious_dlqin2yrs    116157 non-null values
est_pop              114660 non-null values
dtypes: float64(2), int64(2)
len(result) > len(df)
True
The merged result has more rows than df because pop contains duplicate ages, and a left join produces one row per matching pair:

pd.value_counts(pop.age).head()
29    2
59    2
36    1
30    1
31    1
dtype: int64
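Duplicate join keys are exactly why the result grew. A minimal sketch with made-up tables:

```python
import pandas as pd

# hypothetical toy tables: the right side has a duplicated key (age 29)
left = pd.DataFrame({"age": [29, 30]})
right = pd.DataFrame({"age": [29, 29, 30], "est_pop": [100, 200, 300]})

merged = pd.merge(left, right, how="left", on="age")
print(len(merged))  # 3: age 29 matched twice, so that left row was duplicated
```

Dropping the duplicate ages before merging, as the next cell does, avoids this.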
# keep only the first row for each age
pop = pop[~pop.age.duplicated()]
cols = ['age', 'monthly_income', 'serious_dlqin2yrs']
joined = pd.merge(df[cols], pop, how='left', on='age')
pop.tail()
 | age | est_pop
---|---|---
73 | 81 | 5672.346939
74 | 82 | 5656.795918
75 | 83 | 5641.244898
76 | 84 | 5625.693878
77 | 85 | 4957.000000
# fill ages with no match in pop using the est_pop value for age 85
joined.est_pop = joined.est_pop.fillna(4957.0)
joined.est_pop.describe()
count    112415.000000
mean      17879.408997
std        4874.574536
min        4957.000000
25%       16892.244898
50%       19467.306122
75%       21510.408163
max       21988.020408
dtype: float64
pandasql

Training wheels for pandas. We developed and open sourced pandasql to help people coming from other languages ease into the pandas syntax. It lets you query pandas data frames as if they were SQL tables.
from pandasql import sqldf
pysqldf = lambda q: sqldf(q, globals())
query = """
select
serious_dlqin2yrs
, sum(1) as total
from
df
group by
serious_dlqin2yrs;
"""
pysqldf(query)
 | serious_dlqin2yrs | total
---|---|---
0 | 0 | 104905
1 | 1 | 7510
query = """
select
age
, avg(serious_dlqin2yrs) as pct_delinquent
from
df
group by
age
order by
age;
"""
pysqldf(query)
<class 'pandas.core.frame.DataFrame'>
Int64Index: 86 entries, 0 to 85
Data columns (total 2 columns):
age               86 non-null values
pct_delinquent    86 non-null values
dtypes: float64(2)
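The same query in pure pandas is a groupby plus mean. Sketched here on a hypothetical miniature version of df, just to show the equivalence:

```python
import pandas as pd

# toy stand-in for df
df = pd.DataFrame({
    "age": [30, 30, 40],
    "serious_dlqin2yrs": [1, 0, 0],
})

# avg(serious_dlqin2yrs) grouped by age == fraction delinquent per age
pct = df.groupby("age").serious_dlqin2yrs.mean().reset_index()
pct.columns = ["age", "pct_delinquent"]
print(pct)  # age 30 -> 0.5, age 40 -> 0.0
```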
To recap, we covered:

- apply to make custom data transformations
- groupby and aggregate operations using pandas
- pandas and pandasql to merge data frames together