First, do some initialization and set the logging level to `debug` so we can see the progress of the computation.

In [1]:

```
%matplotlib inline
import pandas as pd
from universal import tools
from universal import algos
import logging
# we would like to see algos progress
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
import matplotlib
# increase the size of graphs
matplotlib.rcParams['figure.figsize'] = (12, 8)
```

Let's try to replicate the results of B. Li and S. Hoi from their article On-Line Portfolio Selection with Moving Average Reversion. They claim superior performance on several datasets using their OLMAR algorithm. These datasets are available in the `data/` directory in `.pkl` format. They contain relative prices (starting at 1.0) and artificial tickers. We can start with NYSE stocks from the period 1/1/1985 - 30/6/2010.
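As an aside, "relative prices starting at 1" simply means each series is divided by its first value. A minimal pandas sketch with made-up numbers (not the actual dataset):

```python
import pandas as pd

# made-up absolute prices for two artificial tickers
prices = pd.DataFrame({'A': [50.0, 51.0, 49.5],
                       'B': [20.0, 20.4, 21.0]})

# normalize so every series starts at 1.0, like the datasets in data/
relative = prices / prices.iloc[0]
print(relative)
```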

In [2]:

```
# load data using tools module
data = tools.dataset('nyse_o')
# plot first three of them as example
data.iloc[:,:3].plot()
```

Out[2]:

Now we need an implementation of the OLMAR algorithm. Fortunately, it is already implemented in the `algos` module, so all we have to do is load it and set its parameters. The authors recommend a lookback window $w = 5$ and threshold $\epsilon = 10$ (these are the default parameters anyway). Then we just call the `run` method on our data to get results for analysis.

In [3]:

```
# set algo parameters
algo = algos.OLMAR(window=5, eps=10)
# run
result = algo.run(data)
```

Ok, let's see some results. First, print basic summary metrics and plot the portfolio equity together with a UCRP (uniform constant rebalanced portfolio) benchmark.
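For reference, a uniform CRP rebalances back to equal weights every period, so its wealth is the running product of the equally-weighted mean of the price relatives. A small self-contained numpy sketch with toy numbers (not the NYSE data):

```python
import numpy as np

# toy price relatives (rows = days, columns = assets)
x = np.array([[1.02, 0.99],
              [0.98, 1.03],
              [1.01, 1.00]])

m = x.shape[1]
b = np.full(m, 1.0 / m)      # uniform weights, restored by rebalancing each day
wealth = np.cumprod(x @ b)   # daily portfolio return is b . x_t
print(wealth)
```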

In [4]:

```
print(result.summary())
result.plot(weights=False, assets=False, ucrp=True, logy=True)
```

Out[4]:

That seems really impressive; in fact, it looks too good to be true. Let's see how individual stocks contribute to the portfolio equity, disabling the legend to keep the graph clean.

In [5]:

```
result.plot_decomposition(legend=False, logy=True)
```

Out[5]:

As you can see, almost all wealth comes from a single stock (don't forget the scale is logarithmic!). So if we used just 5 of all these stocks, we would get almost the same equity as if we used all of them. To stress-test the strategy, we can remove that stock and rerun the algorithm.
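Picking the top contributors out of an equity decomposition is a one-liner in pandas; here is a toy sketch (the column names and numbers are made up, not taken from the actual result):

```python
import pandas as pd

# toy per-asset equity decomposition (rows = days, columns = assets)
equity = pd.DataFrame({'A': [1.0, 1.1, 5.0],
                       'B': [1.0, 1.0, 1.2],
                       'C': [1.0, 0.9, 0.8],
                       'D': [1.0, 1.2, 2.0]})

# names of the 2 assets with the highest final equity, best first
top = equity.iloc[-1].nlargest(2).index.tolist()
print(top)  # -> ['A', 'D']
```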

In [6]:

```
# find the name of the most profitable asset
most_profitable = result.equity_decomposed.iloc[-1].idxmax()
# rerun the algorithm on data without it
result_without = algo.run(data.drop(columns=[most_profitable]))
# and print results
print(result_without.summary())
result_without.plot(weights=False, assets=False, ucrp=True, logy=True)
```

Out[6]:

We lost about 7 orders of magnitude of wealth, but the results are more realistic now. Let's move on and add fees of 0.1% per transaction (we pay \$1 for every \$1000 of stocks bought or sold).
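To make the fee concrete: with a proportional fee of 0.1%, every \$1,000 of turnover costs \$1, and the drag compounds with every rebalance. A tiny back-of-the-envelope sketch (assuming, for illustration only, that the whole portfolio turns over once per trading day):

```python
fee = 0.001                # 0.1% proportional fee
turnover = 1000.0          # dollars bought or sold in one rebalance
print(fee * turnover)      # -> 1.0, i.e. $1 per $1000 traded

# if the strategy turned over its entire portfolio every one of the
# ~252 trading days in a year, fees alone would compound to a ~22% drag
drag = 1 - (1 - fee) ** 252
print(drag)
```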

In [7]:

```
result_without.fee = 0.001
print(result_without.summary())
result_without.plot(weights=False, assets=False, ucrp=True, logy=True)
```

Out[7]:

Results still hold, although the Sharpe ratio decreased from 3.14 to 1.56 and the annualized return from 466% to 109%. Now some of you trained in quantitative finance might ask: "_Isn't there some survivorship bias?_" Yes, there is. In fact, a huge one, considering that we have almost 25 years of data and a mean-reversion type of strategy.

Let's see whether the algo works on recent data, too. First, download closing prices of several (randomly chosen) stocks from Yahoo.

In [8]:

```
# pandas.io.data was removed from pandas; use the pandas-datareader package instead
from pandas_datareader.data import DataReader
from datetime import datetime
# load data from Yahoo
yahoo_data = DataReader(['MSFT', 'IBM', 'AAPL', 'GOOG'], 'yahoo', start=datetime(2005, 1, 1))['Adj Close']
# plot normalized prices of these stocks
(yahoo_data / yahoo_data.iloc[0,:]).plot()
```

Out[8]:

Instead of using fixed parameters, we will test several `window` parameters with the function `run_combination`. It works the same as `run`, except that it is a classmethod and accepts lists for combinations of values. `run_combination` returns a list of results which can be used similarly to a single `result`.

In [9]:

```
list_result = algos.OLMAR.run_combination(yahoo_data, window=[3,5,10,15], eps=10)
print(list_result.summary())
list_result.plot()
```

Out[9]:

Since we don't know the best parameters in hindsight, we will invest equal money in each of them at the beginning and let them run. This is called a *buy and hold* strategy. The portfolio equities in `list_result` can be regarded as stock prices and used as input for a new algo (*buy and hold* in this case). This way you can chain algorithms however you like, for example OLMAR on OLMAR, etc.

To compare it with individual assets or a uniform constant rebalanced portfolio, use the parameters `assets` and `ucrp`.
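As a side note on how *buy and hold* combines the curves: since each equity series starts at 1 and capital is split equally at the start, the combined equity is just the average of the individual equity curves. A small numpy sketch with made-up curves (not the actual `list_result` values):

```python
import numpy as np

# made-up equity curves for three parameter settings, each starting at 1
equities = np.array([[1.00, 1.00, 1.00],
                     [1.10, 0.95, 1.02],
                     [1.30, 0.90, 1.05]])

# equal initial capital in each, never rebalanced -> simple average
bah = equities.mean(axis=1)
print(bah)
```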

In [10]:

```
# run buy and hold on OLMAR results and show its equity together with original assets
algos.BAH().run(list_result).plot(assets=True, weights=False, ucrp=True)
```

Out[10]:

Ok, that was enough for a start. There are plenty of other algorithms in the `algos` module, collected from research papers on online portfolio selection, including the famous Universal Portfolio by Thomas Cover.

The entire package is actually pretty simple. Algorithms are subclasses of the base `Algo` class, and methods for reporting, plotting and analysis are built on top of this class. I will illustrate it with this mean-reversion strategy:

- use logarithm of price
- calculate the difference $\delta_i$ between the current (log) price of the $i$-th stock and its $n$-day moving average
- if $\delta_i > 0$, assign zero portfolio weight $w_i = 0$ for $i$-th stock
- if $\delta_i < 0$, assign weight $w_i = -\delta_i$ for $i$-th stock
- normalize all weights so that $\sum w_i = 1$

The idea is that badly performing stocks will revert to their mean and have higher returns than stocks above their mean. Here is the complete code; the comments should be self-explanatory.

In [11]:

```
from universal.algo import Algo
import numpy as np

class MeanReversion(Algo):
    # use logarithm of prices
    PRICE_TYPE = 'log'

    def __init__(self, n):
        # length of moving average
        self.n = n
        # step function will be called after min_history days
        super(MeanReversion, self).__init__(min_history=n)

    def init_weights(self, m):
        # start with zero weights
        return np.zeros(m)

    def step(self, x, last_b, history):
        # calculate moving average
        ma = history.iloc[-self.n:].mean()
        # weight stocks below their moving average proportionally to the gap
        delta = x - ma
        w = np.maximum(-delta, 0.)
        # normalize so that weights sum to 1
        return w / sum(w)
```

That's all. Now let's try it on the NYSE data.

In [12]:

```
mr = MeanReversion(n=20)
result = mr.run(data)
print(result.summary())
result.plot(assets=False, logy=True, weights=False, ucrp=True)
```

Out[12]:

Not bad, considering how simple the strategy is. The next step could be performance optimization. To profile your strategy, you can use the function `profile` in `universal.tools`, which profiles the code using the fantastic line_profiler. After identifying the most critical parts of the code, you have two options: either optimize your `step` function (using tools such as weave, numba, theano or cython), or override the `weights` method if your code can be vectorized easily (beware of forward bias!).
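To illustrate the vectorized route (this is a sketch of the idea, not the library's actual `weights` API), the MeanReversion weights for all days can be computed at once with pandas. The rolling mean below uses only data up to and including the current day, matching what `step` sees; in a backtest you would still apply each day's weights to the *next* day's returns to avoid forward bias.

```python
import numpy as np
import pandas as pd

def mean_reversion_weights(prices: pd.DataFrame, n: int) -> pd.DataFrame:
    """Vectorized version of the MeanReversion step above (a sketch)."""
    logp = np.log(prices)
    # moving average over the last n days, including today -- the same
    # window the step() method uses, so no future data leaks in
    ma = logp.rolling(n).mean()
    delta = logp - ma
    w = (-delta).clip(lower=0.0)
    # normalize each day's weights to sum to 1 (leave all-zero days at 0)
    s = w.sum(axis=1)
    return w.div(s.where(s > 0), axis=0).fillna(0.0)

# toy prices: A dips below its average, then recovers; B does the opposite
prices = pd.DataFrame({'A': [1.0, 0.9, 0.8, 1.1],
                       'B': [1.0, 1.1, 1.2, 1.0]})
w = mean_reversion_weights(prices, n=2)
print(w)
```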