Pyxplorer - interactive data set exploration

The goal of pyxplorer is to provide a simple tool for interactive profiling of datasets that are accessible via a SQL-like interface. The only requirement for running data profiling is that you can provide a Python DBAPI-like interface to your data source and that the data source understands simplistic SQL queries.
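
Any DBAPI connection object should work. As a minimal sketch (illustrative only; host, credentials, and the schema name "sales" are placeholders, not part of pyxplorer):

import MySQLdb
import pyxplorer as pxp

# Placeholder connection details; any DBAPI-like connection is used the same way.
conn = MySQLdb.connect(host="localhost", user="analyst", passwd="secret", db="sales")
data = pxp.Database("sales", conn)  # same interface as the Impala example below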

I built this piece of software while trying to get a better understanding of the data distribution in a dataset of several hundred million records. Depending on the size of the dataset and the query engine, the response time can range from seconds (Impala) to minutes (Hive) or even hours (MySQL).

The typical use case is to use pyxplorer interactively from an iPython Notebook or iPython shell to incrementally extract information about your data.

$> pip install pyxplorer impyla

Questions, Ideas, Comments:

https://github.com/grundprinzip/pyxplorer

Example using Impala

Basically, pyxplorer works with any DBAPI-like interface, but to show the advantages of running high-performance data analysis on large amounts of data, we will use Impala to query our data.

In [2]:
from impala.dbapi import connect
conn = connect(host='diufpc57', port=21050)

Database Operations

Imagine that you are given access to a huge Hive/Impala database on your very own Hadoop cluster and you're asked to profile the data to get a better understanding before doing more specific data science later on. Based on this connection, we can now instantiate a new explorer object.

In [4]:
import pyxplorer as pxp
data = pxp.Database("tpcc3", conn)
data
Out[4]:
Name         Size
customerp    30000000
districtp    10000
historyp     30000000
itemp        100000
new_orderp   9000000
oorderp      30000000
order_linep  299991280
stockp       100000000
warehousep   1000

Rows: 9 / Columns: 2

This simple code gives you access to all the tables in this database. Let's further investigate how many tables and columns exist in the database.

In [12]:
len(data)
Out[12]:
9

The above is the idiomatic Python way, but sometimes it might not be as easy to grasp what is meant, so you can also use:

In [13]:
data.num_tables()
Out[13]:
9

Get the total number of columns:

In [7]:
sum([len(x.columns()) for x in data.tables()])
Out[7]:
92

Or we can directly use the num_columns() method on the database object:

In [9]:
data.num_columns()
Out[9]:
92

It seems like we have a better understanding of the dataset, but how many tuples are we talking about?

In [15]:
data.num_tuples()
Out[15]:
499102280

Single Table Operations

Using the above operations we can perform simple operations on all tables at once (see the sketch below), but let's also have a closer look at single-table operations to extract more information from an individual table.
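
As a quick sketch building on the API shown above (it assumes the data object created earlier), we could report the row count of every table in one pass:

# Assumes the `data` object from above; prints each table with its row count.
for t in data.tables():
    print("%s: %d" % (t, t.size()))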

In this example, we want to investigate the order_line table.

In [14]:
tab = data['order_linep']
tab
Out[14]:
<Table: "tpcc3.order_linep">

Let's start by doing some basic inspection of the table, like extracting the number of rows and the number of columns

In [17]:
tab.size()
Out[17]:
299991280
In [18]:
len(tab.columns())
Out[18]:
10
In [19]:
tab.columns()
Out[19]:
[ol_w_id,
 ol_d_id,
 ol_o_id,
 ol_number,
 ol_i_id,
 ol_delivery_d,
 ol_amount,
 ol_supply_w_id,
 ol_quantity,
 ol_dist_info]

Columns are special objects that can be easily and interactively inspected in IPython notebooks. The default information per column is the min and max value, the most frequent and least frequent value, and the total number of distinct values. Based on these measures we derive two additional indicators for the column:

$uniqueness = \frac{distinct}{rows}$

$constancy = \frac{count_{mf}}{rows}$
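
To make these two measures concrete, here is how they work out for the ol_w_id column inspected below (plain Python; the numbers are taken from the output that follows):

# Numbers taken from the ol_w_id output below.
distinct = 1000          # distinct warehouse ids
rows     = 299991280     # rows in order_linep
mf_count = 301593        # occurrences of the most frequent value (109)

uniqueness = distinct / float(rows)   # ~3.33e-06
constancy  = mf_count / float(rows)   # ~1.01e-03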

In [20]:
tab['ol_w_id']
Out[20]:
Name              Value
Min               1
Max               1000
#Distinct Values  1000
Most Frequent     109 (301593)
Least Frequent    212 (298395)
Top 10 MF         109, 676, 117, 460, 19, 877, 165, 764, 340, 689
Top 10 LF         212, 405, 52, 284, 304, 769, 727, 665, 90, 163
Uniqueness        3.33343022504e-06
Constancy         0.00100533922186

Rows: 9 / Columns: 2

It's possible to access a column either using subscript notation or directly as an attribute of the table object:

In [21]:
tab.ol_w_id
Out[21]:
Name              Value
Min               1
Max               1000
#Distinct Values  1000
Most Frequent     109 (301593)
Least Frequent    212 (298395)
Top 10 MF         109, 676, 117, 460, 19, 877, 165, 764, 340, 689
Top 10 LF         212, 405, 52, 284, 304, 769, 727, 665, 90, 163
Uniqueness        3.33343022504e-06
Constancy         0.00100533922186

Rows: 9 / Columns: 2

Based on this information we can further deduce what role this column might play in the overall schema. For example, based on the uniqueness we can say that the column is not suitable to uniquely identify a row. In addition, based on the constancy of the most frequent value and the small spread between the most and least frequent value, we can deduce that the data is almost uniformly distributed.
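
A rough heuristic along these lines might look as follows (illustrative only, not part of pyxplorer; the thresholds are arbitrary):

# Illustrative heuristic, not part of pyxplorer; thresholds are arbitrary.
def classify(uniqueness, constancy):
    if uniqueness > 0.99:
        return "candidate key"         # nearly every value is distinct
    if constancy > 0.9:
        return "near-constant column"  # a single value dominates
    return "regular attribute"

print(classify(3.33e-06, 1.01e-03))    # ol_w_id -> "regular attribute"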

Distinct Values and Distinct Value Distribution

One important feature is to look at the distinct values and their distribution.

In [22]:
tab.ol_w_id.dcount()
Out[22]:
1000
In [24]:
dist = tab.ol_w_id.distribution(limit=10000)
In [25]:
%matplotlib inline
dist.fraction.hist()
Out[25]:
<matplotlib.axes.AxesSubplot at 0x10c550ad0>

The above example validates our assumption of a uniform distribution, as the histogram spreads roughly from $0.0009$ to $0.001006$.

In addition to performing such an analysis on the column level, we can also look at the distributions from a higher level.

In [7]:
data.dcounts().fraction.hist()
Out[7]:
<matplotlib.axes.AxesSubplot at 0x109958510>
In [102]:
len(data.dcounts())
Out[102]:
98
In [9]:
data.dcounts().head(10)
Out[9]:
table column distinct size fraction
0 customerp c_w_id 1000 30000000 3.333333e-05
1 customerp c_d_id 10 30000000 3.333333e-07
2 customerp c_id 3000 30000000 1.000000e-04
3 customerp c_discount 5000 30000000 1.666667e-04
4 customerp c_credit 2 30000000 6.666667e-08
5 customerp c_last 1000 30000000 3.333333e-05
6 customerp c_first 29999284 30000000 9.999761e-01
7 customerp c_credit_lim 1 30000000 3.333333e-08
8 customerp c_balance 1 30000000 3.333333e-08
9 customerp c_ytd_payment 1 30000000 3.333333e-08

Here we see that dcounts() returns a Pandas data frame with 98 rows.

Data Loading (Beta)

While in most cases the data to profile is already available as a Hive, Impala, or even MySQL table, it can also happen that we simply have a directory full of CSV files that all follow the same format and want to perform the same analysis.

To facilitate this process, pyxplorer provides a simple interface to generate an external table based on a sample from the CSV files in a given directory.

The idea is that a directory in HDFS already contains a set of files that all follow the same specification and the user now wants to perform some data profiling. We further assume that the user has access to Hive / Impala to run queries on the data.

In [27]:
from pyxplorer.loader import Loader
loader = Loader("/user/martin/test1", name_node="diufpc56", hive_server="diufpc301")

The loader will check the first file it finds in the directory and extract the first few hundred lines. It will then try to guess the correct separator and build an external table on top of the directory. The rest of the workflow follows the normal exploration steps described above.

In [28]:
loader.load()
Out[28]:
('default', 'pyxplorer_data')
In [33]:
pyxdb = pxp.Database("default", conn)
pyxdb
Out[33]:
Name            Size
pyxplorer_data  7

Rows: 1 / Columns: 2

In [37]:
tab = pyxdb["pyxplorer_data"]
tab
Out[37]:
<Table: "default.pyxplorer_data">
In [38]:
tab.columns()
Out[38]:
[col_0, col_1, col_2, col_3, col_4, col_5, col_6]
In [39]:
tab.col_0
Out[39]:
Name              Value
Min               100
Max               distance
#Distinct Values  7
Most Frequent     distance (1)
Least Frequent    distance (1)
Top 10 MF         100, 129, 148, distance, 192, 113, 168
Top 10 LF         100, 129, 148, distance, 192, 113, 168
Uniqueness        1.0
Constancy         0.142857142857

Rows: 9 / Columns: 2