In discrete time survival analysis, we estimate the probability that an event occurs in a particular time period (the hazard probability), as well as the probability that a case has not yet experienced that event after a given number of periods (the survival probability).
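As a toy illustration (with made-up counts), the hazard for each period is the number of events divided by the number still at risk, and survival is the running product of (1 - hazard):

```python
# Hypothetical life table: 100 editors at risk, dropouts counted per period
at_risk = 100
dropouts = [20, 10, 5]  # events in periods 1, 2, 3 (invented numbers)

survival = 1.0
for n_events in dropouts:
    hazard = n_events / float(at_risk)  # P(event in period | survived so far)
    survival *= (1.0 - hazard)          # cumulative survival probability
    at_risk -= n_events

# survival is now ~0.65: the 65 of 100 who remain after three periods
```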
In this example, I follow the approach used by Singer and Willett in Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence Chapter 10 on "Fitting Discrete Time Hazard Models," but with an example from Wikipedia.
None of the learning examples of discrete time survival analysis I have seen look at online behaviour, so I have created an example using Wikipedia data from the Wikipedia Participation Challenge posted to Kaggle in 2011 (See on Kaggle here). To make this clearer, and to reduce the overhead, I fit a modified version of the validation dataset of edits from 21,105 users, rather than the much larger training dataset.
This page does not offer a proposed solution to the challenge. Contributors submitted 1029 entries using a wide variety of methods; the top contributors are listed here, with further discussion here. Rather, this example sets out to use the Wikipedia data to illustrate an introductory, regression-based approach to hazard/survival analysis.
This example makes extensive use of the Lifelines library, which offers features that go well beyond the methods in Singer & Willett's introductory chapter on discrete time hazard models.
In this analysis, we are testing a question-motivated hypothesis rather than attempting to accurately predict/model unobserved users (which was the focus of the Wikipedia challenge). Our hypothesis is that, on average in the population of Wikipedians, there is a relationship between reversion rate and the probability of ceasing contributions in a given period, such that a higher rate of edit reversion is associated with a higher probability of ending contributions in that period. By implication, the same hypothesis can be expressed in terms of survival: Wikipedians whose contributions are reverted at lower rates have a higher probability of continuing to make contributions, on average in the population.
Note that this is not a causal analysis -- if we find support for this hypothesis, we cannot claim that reversions cause people to leave Wikipedia. It is also plausible that people who make often-reverted edits aren't very interested in Wikipedia or aren't very good writers, and leave because they lose interest.
To establish causality, we would have to use other methods, such as the methods used by Aaron Halfaker, Aniket Kittur, and John Riedl in their 2011 WikiSym paper Don't Bite the Newbies: How reverts affect the quantity and quality of Wikipedia work, which used a matched comparison method to test the causal relationship between reversions and contribution to Wikipedia.
%matplotlib inline
import codecs
import csv
import json
import math
import re
import sys
from collections import Counter, defaultdict

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from dateutil import parser
from scipy import stats
from statsmodels.graphics.gofplots import qqplot
import statsmodels.api as sm
import statsmodels.formula.api as smf

## FILES TO IMPORT FOR LIFE TABLE ANALYSIS
import lifelines
kmf = lifelines.KaplanMeierFitter()
#naf = lifelines.NelsonAalenFitter()
cols = None
user_edits = defaultdict(list)

# Data was provided under the Creative Commons Attribution-Share-Alike 3.0 license
# https://creativecommons.org/licenses/by-sa/3.0/us/
with open("wikipedia-validation.tsv") as tsv:
    for line in csv.reader(tsv, dialect="excel-tab"):
        row = {}
        if cols is None:
            cols = line
        else:
            for i in range(len(cols)):
                row[cols[i]] = line[i]
            user_edits[row['user_id']].append(row)
Here, we generate a table of each user's earliest month, last month, and number of months active. We also identify whether the account is "censored" -- i.e., active in the last recorded month. Censoring allows us to account for the fact that our data collection ended before we could find out when these accounts ceased contributing to Wikipedia (many of them are likely still editing to this day). The lifelines documentation has a good explanation of censoring.
def diff_month(d1, d2):
    return (d1.year - d2.year) * 12 + d1.month - d2.month

# This method generates a row with details we're interested in
# that then becomes the dataframe for life table analysis
def generate_user_row(u):
    r = {"user_id": u[0]['user_id'],
         "edits": len(u),
         "reversions": len([x for x in u if int(x['reverted']) == 1]),
         "namespaces": len(set([x['namespace'] for x in u]))}
    sorted_dates = sorted([x['timestamp'] for x in u])
    r['earliest_edit'] = parser.parse(sorted_dates[0])
    #r['earliest_month'] = int(r['earliest_edit'].strftime("%Y%m"))
    r['last_edit'] = parser.parse(sorted_dates[-1])
    r['last_month'] = int(r['last_edit'].strftime("%Y%m"))
    r['edit_days'] = (r['last_edit'] - r['earliest_edit']).days
    r['edit_months'] = diff_month(r['last_edit'], r['earliest_edit'])
    return r

def flip(val):
    return {0: 1, 1: 0, None: None}[val]
user_rows = []
for user in user_edits.keys():
    # remove an outlier for pedagogical purposes
    if user_edits[user][0]['user_id'] == '526223':
        continue
    else:
        user_rows.append(generate_user_row(user_edits[user]))

udf = pd.DataFrame(user_rows)

# GENERATE A LIST OF CENSORED RECORDS
# WHO CONTRIBUTED IN THE FINAL RECORDED MONTH
maxmonth = udf.last_month.max()

def censored(last_month):
    if last_month == maxmonth:
        return 1
    return 0

udf['censored'] = udf.last_month.map(censored)
udf['dropout'] = udf.censored.map(flip)
Here we use the lifelines library to plot the Kaplan-Meier survival curve from the life tables. Lifelines has a great number of features that are left out of this example, along with especially well-written, clear, helpful documentation. The Lifelines documentation reads more like a tutorial; I highly recommend it.
f, ax = plt.subplots(1, 1, figsize=(10, 6))
# fit the Kaplan-Meier estimator on day durations and the dropout indicator
kmf.fit(udf.edit_days, udf.dropout)
kmf.plot(ax=ax)
plt.suptitle("Sample Survival Probabilities for Continuing to Edit Wikipedia After First Edit",
             fontsize="20")
plt.title("From the Kaggle Wikipedia Participation Challenge Validation Set (n = %(n)d users)" % {"n": len(user_rows)},
          fontsize="14")
ax.set_xlabel("Day periods since first edit", fontsize="18")
ax.set_ylabel("Probability of Continuing to Edit", fontsize="18")
plt.show()
To make our example clearer, and to use the Singer & Willett method of regression on individual periods as a first attempt at fit, we focus on month periods in this analysis.
f, ax = plt.subplots(1, 1, figsize=(10, 6))
# fit the Kaplan-Meier estimator on month durations and the dropout indicator
kmf.fit(udf.edit_months, udf.dropout)
kmf.plot(ax=ax)
plt.suptitle("Sample Survival Probabilities for Continuing to Edit Wikipedia After First Edit",
             fontsize="20")
plt.title("From the Kaggle Wikipedia Participation Challenge Validation Set (n = %(n)d users)" % {"n": len(user_rows)},
          fontsize="14")
ax.set_xlabel("Month periods since first edit", fontsize="18")
ax.set_ylabel("Probability of Continuing to Edit", fontsize="18")
plt.show()
In this section, I generate a variety of possible covariates for edit count and reversion count. Since the purpose of this analysis is to clearly demonstrate a technique rather than optimize the fit, I settle on "reversion rate," which is the number of reversions divided by the total number of edits over the entire observed period.
If we were aiming to win the Kaggle competition, we would want to develop a better method for modeling people with a reversion rate of 0 compared to people with nonzero reversion rates, as illustrated in the histograms. This is discussed in a blog post by Adam Hyland on R Bloggers.
# GENERATE COVARIATES
def rrate(row):
    return float(row.reversions) / float(row.edits)

udf['log_reversions'] = udf.reversions.map(math.log1p)
udf['log_edits'] = udf.edits.map(math.log1p)
udf['reversion_rate'] = udf.apply(rrate, axis=1)
udf['log_reversion_rate'] = udf.reversion_rate.map(lambda x: math.log1p(x * 100.))
plt.xlabel("log(reversions per user)", fontsize="12")
plt.ylabel("log(edits per user)", fontsize="12")
plt.scatter(udf.log_reversions, udf.log_edits)
plt.show()
plt.hist(udf.reversion_rate)
plt.title("reversion rate")
plt.show()
plt.hist(udf.log_reversion_rate)
plt.title("log transformed reversion rate")
plt.show()
To do regression analysis on a life table, we convert it to a person-period dataset, where each row represents one month-long period between the month of a person's first edit and the month of their last edit.
This approach doesn't include any information at all about the activity of that user within a period, only whether they had not yet left Wikipedia for good, as defined by the month of their last observed edit. This is a HUGE assumption and is probably wrong for a lot of Wikipedia users, who might easily have just taken the last month off.
Notice also that covariates like reversion_rate get replicated across all of a user's period rows. The reversion_rate for each month period row is not the reversion rate in that month; it is that user's total reversion rate at the end of the observation period.
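To make the conversion concrete, here is a hand-rolled sketch of the idea, using two hypothetical users "a" and "b" and assuming the minimum period is 0: one row per person per month period, with EVENT = 1 only in the final period of a non-censored user.

```python
import pandas as pd

# Hypothetical users: "a" drops out after 3 months, "b" is censored at month 1
toy = pd.DataFrame([{"user_id": "a", "edit_months": 3, "dropout": 1},
                    {"user_id": "b", "edit_months": 1, "dropout": 0}])

rows = []
for r in toy.to_dict("records"):
    for period in range(r["edit_months"] + 1):  # periods 0 .. edit_months
        rows.append({"user_id": r["user_id"],
                     "PERIOD": period,
                     "EVENT": int(period == r["edit_months"] and r["dropout"] == 1)})
pp = pd.DataFrame(rows)
# "a" contributes 4 rows (EVENT=1 only in PERIOD 3);
# censored "b" contributes 2 rows, all EVENT=0
```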
def gen_person_period(dataframe, period, event):
    records = dataframe.to_dict("records")
    min_period = int(dataframe[period].min())
    person_periods = []
    for r in records:
        for i in range(int(r[period]) - (min_period - 1)):
            ri = r.copy()
            ri['PERIOD'] = int(i) + min_period
            ri['EVENT'] = int((ri['PERIOD'] == int(ri[period])) and r[event] == 1)
            person_periods.append(ri)
    return pd.DataFrame(person_periods)

ppdf = gen_person_period(udf, "edit_months", "dropout")
In this example, we use logistic regression to generate the hazard probabilities for the cohort of those who are still active N months after their first edit. This is equivalent to the hazard probabilities generated from the life tables, but since we are using the machinery of regression, we are able to make claims about the population, not just the sample (a limitation of the Kaplan-Meier curves above).
result = smf.glm(formula="EVENT ~ C(PERIOD)",
                 data=ppdf,
                 family=sm.families.Binomial()).fit()
print(result.summary())
                 Generalized Linear Model Regression Results
==============================================================================
Dep. Variable:                  EVENT   No. Observations:               106446
Model:                            GLM   Df Residuals:                   106381
Model Family:                Binomial   Df Model:                           64
Link Function:                  logit   Scale:                             1.0
Method:                          IRLS   Log-Likelihood:                -36967.
Date:                Sat, 04 Apr 2015   Deviance:                       73933.
Time:                        12:22:33   Pearson chi2:                 1.06e+05
No. Iterations:                    23
===================================================================================
                      coef    std err          z      P>|z|  [95.0% Conf. Int.]
-----------------------------------------------------------------------------------
Intercept           0.6063      0.014     42.092      0.000     0.578     0.635
C(PERIOD)[T.1]     -2.5230      0.038    -67.013      0.000    -2.597    -2.449
C(PERIOD)[T.2]     -3.1379      0.050    -62.948      0.000    -3.236    -3.040
C(PERIOD)[T.3]     -3.4424      0.058    -58.902      0.000    -3.557    -3.328
...    (period dummies T.4 through T.64 omitted for space; the sparsely
        populated late periods, e.g. T.56 at -25.1724 with std err 5.86e+04,
        have degenerate estimates and enormous standard errors)
===================================================================================
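The coefficients are on the logit scale, so to read them as hazard probabilities we pass them through the logistic function. A quick sketch, using the intercept and the first period dummy from the summary above:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

intercept = 0.6063  # baseline (first period) log-odds from the summary
period_1 = -2.5230  # C(PERIOD)[T.1] coefficient

h0 = logistic(intercept)             # hazard in the first month period, ~0.65
h1 = logistic(intercept + period_1)  # hazard in the second month period, ~0.13
```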
ppdf['predvals1'] = result.predict()

yvals = []
xvals = list(set(ppdf.PERIOD.tolist()))
for i in xvals:
    yvals.append(result.predict(exog={"PERIOD": [i]}))

f, ax = plt.subplots(1, 1, figsize=(10, 6))
plt.scatter(ppdf.PERIOD, ppdf.predvals1, marker=".")
plt.plot(xvals, yvals, color="r")
plt.title("Fitted Population Hazard Probabilities", fontsize="18")
By iterating through the fitted hazard probabilities, we are able to calculate the population survival probabilities. This is the probability that a person who contributes to Wikipedia will continue editing beyond the Nth month period, on average in the population. You can also read it as the proportion of people who "survive" through that period. In this particular model, fewer than 40% of editors continue past the first month period.
periods = list(set(ppdf.PERIOD.tolist()))
f, ax = plt.subplots(1, 1, figsize=(10, 6))
survival = 1.0
yvals = [1.0]
xvals = [0]
for p in periods:
    hazard = result.predict(exog={"PERIOD": [p]})
    survival = survival - survival * hazard
    yvals.append(survival)
    xvals.append(p + 1)

line, = ax.plot(xvals, yvals)
plt.suptitle("Fitted Survival Probability of Continuing to Edit Wikipedia beyond a Month Period since First Edit",
             fontsize="18")
plt.show()
In Singer & Willett's example, they are able to fit a more parsimonious model with period as a continuous predictor -- taking those per-period dummy variables and turning them into a single PERIOD predictor. Judging from the variation in the hazard probabilities as plotted above, I think we're going to lose a lot of fit, but let's give it a try anyway.
Since we stored the predicted values predvals1 from our previous model, we will be able to compare the fitted probabilities in this new, more parsimonious model to the conditional probabilities per month from the previous model, with each month period as a dummy variable.
result = smf.glm(formula="EVENT ~ PERIOD",
                 data=ppdf,
                 family=sm.families.Binomial()).fit()
print(result.summary())
yvals = []
xvals = list(set(ppdf.PERIOD.tolist()))
for i in xvals:
    yvals.append(result.predict(exog={"PERIOD": [i]}))

f, ax = plt.subplots(1, 1, figsize=(10, 6))
plt.scatter(ppdf.PERIOD, ppdf.predvals1, marker=".")
plt.plot(xvals, yvals, color="r")
plt.title("Fitted Model of Hazard Probabilities Against Conditional Hazard Probabilities", fontsize="18")
                 Generalized Linear Model Regression Results
==============================================================================
Dep. Variable:                  EVENT   No. Observations:               106446
Model:                            GLM   Df Residuals:                   106444
Model Family:                Binomial   Df Model:                            1
Link Function:                  logit   Scale:                             1.0
Method:                          IRLS   Log-Likelihood:                -47273.
Date:                Sat, 04 Apr 2015   Deviance:                       94546.
Time:                        12:22:34   Pearson chi2:                 2.49e+05
No. Iterations:                     8
==============================================================================
                 coef    std err          z      P>|z|  [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept     -0.6574      0.011    -61.729      0.000    -0.678    -0.637
PERIOD        -0.1350      0.002    -80.611      0.000    -0.138    -0.132
==============================================================================
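How much fit did we lose? A quick sketch of one way to check, using the deviances reported in the two model summaries: a likelihood-ratio test between the nested models, where dropping 63 parameters costs us a deviance increase of about 20,613 -- far beyond what chance would allow.

```python
from scipy import stats

# Deviances reported in the two GLM summaries
deviance_dummy = 73933.0   # EVENT ~ C(PERIOD), 64 model df
deviance_linear = 94546.0  # EVENT ~ PERIOD, 1 model df

lr_stat = deviance_linear - deviance_dummy  # 20613: cost of the simpler model
df_diff = 64 - 1
p_value = stats.chi2.sf(lr_stat, df_diff)   # effectively zero
```

The p-value is effectively zero, so by this test the per-period dummy model fits substantially better, as the plot suggests.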
We finally test our hypothesis -- that there is a relationship between hazard probability and reversion rate -- in the following model, keeping each period as a dummy variable.
# GENERATE FITTED MODEL
result = smf.glm(formula="EVENT ~ reversion_rate + C(PERIOD)",
                 data=ppdf,
                 family=sm.families.Binomial()).fit()
print(result.summary())
                 Generalized Linear Model Regression Results
==============================================================================
Dep. Variable:                  EVENT   No. Observations:               106446
Model:                            GLM   Df Residuals:                   106380
Model Family:                Binomial   Df Model:                           65
Link Function:                  logit   Scale:                             1.0
Method:                          IRLS   Log-Likelihood:                -36952.
Date:                Sat, 04 Apr 2015   Deviance:                       73904.
Time:                        12:23:03   Pearson chi2:                 1.07e+05
No. Iterations:                    23
===================================================================================
                      coef    std err          z      P>|z|  [95.0% Conf. Int.]
-----------------------------------------------------------------------------------
Intercept           0.5895      0.015     40.043      0.000     0.561     0.618
C(PERIOD)[T.1]     -2.5202      0.038    -66.928      0.000    -2.594    -2.446
C(PERIOD)[T.2]     -3.1353      0.050    -62.889      0.000    -3.233    -3.038
...    (period dummies T.3 through T.64 omitted for space; as before, the
        sparsely populated late periods have degenerate estimates)
reversion_rate      0.4928      0.093      5.280      0.000     0.310     0.676
===================================================================================
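The reversion_rate coefficient is also on the log-odds scale; exponentiating it gives an odds ratio, which is one way to read off the effect size. A sketch using the 0.4928 estimate from the summary above:

```python
import math

coef = 0.4928                # reversion_rate coefficient from the fitted model
odds_ratio = math.exp(coef)  # ~1.64

# Moving from a reversion rate of 0 to 1 multiplies the odds of ceasing
# contributions in any given month period by about 1.64, holding period fixed.
```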
To visually illustrate this relationship, I have picked three prototypical periods and plot the relationship between hazard probability and reversion rate at each of those periods.
periods = [0, 1, 11]
rates = np.linspace(0, 1, 100)
flatui = ["#9b59b6", "#3498db", "#e74c3c", "#34495e", "#2ecc71"]
f, ax = plt.subplots(1, 1, figsize=(6, 6))
lines = []
labels = []
i = 0
for p in periods:
    xvals = rates
    yvals = []
    for x in xvals:
        yvals.append(result.predict(exog={"PERIOD": [p], "reversion_rate": x}))
    line, = ax.plot(xvals, yvals, color=sns.color_palette(flatui)[i])
    i += 1
    lines.append(line)
    labels.append("Month Period %(n)d" % {"n": p + 1})

plt.suptitle("Fitted Hazard Probability of Ceasing to Edit Wikipedia",
             fontsize="18")
plt.title("by lifetime Reversion Rate, for different month periods", fontsize="16")
plt.xlabel("Reversion Rate", fontsize="14")
plt.ylabel("Hazard Probability of Ceasing to Edit in a Given Month Period")
plt.legend(lines, labels, fontsize="12", loc=2)
plt.show()
In this example, we pick three prototypical reversion rates and plot the fitted survival probabilities at each reversion rate. As you can see, editors with higher reversion rates have lower survival probabilities by month.
rates = [udf.reversion_rate.mean(), 0.75, 0.90]
periods = list(set(ppdf.PERIOD.tolist()))
f, ax = plt.subplots(1, 1, figsize=(10, 6))
lines = []
labels = []
for r in rates:
    survival = 1.0
    yvals = [1.0]
    xvals = [0]
    for p in periods:
        hazard = result.predict(exog={"PERIOD": [p], "reversion_rate": r})
        survival = survival - survival * hazard
        yvals.append(survival)
        xvals.append(p + 1)
    line, = ax.plot(xvals, yvals)
    lines.append(line)
    labels.append("Reversion Rate: %(r).02f" % {"r": r})

plt.suptitle("Fitted Survival Probability of Continuing to Edit Wikipedia",
             fontsize="18")
plt.title("by Month Period, for different Reversion Rates", fontsize="16")
plt.legend(lines, labels, fontsize="12", loc=4)
plt.xlabel("Month Period since First Edit", fontsize="14")
plt.ylabel("Fitted Survival Probability", fontsize="12")
plt.show()