This tutorial is deprecated

Please use the official documentation.

A short introduction

In [25]:
import maabara as ma

Let's start

In [26]:
# a basic symbolic error propagation

ekin = ma.uncertainty.Sheet()
ekin.set_name("E_{kin}")                    # name shown on the left-hand side of the output
ekin.set_equation("Rational(1,2)*m*V**2")   # equation as a sympy expression string
ekin.set_value('m', 5, 0.1)                 # value with deviation
ekin.set_value('V', 1, tex='\\nu_{A}')      # exact value, rendered with a custom tex symbol

ekin.print_result()
$$E_{kin}=\frac{m}{2} \cdot \nu_{A}^{2}$$

$$E_{kin}=2.50 \pm 0.05$$

$$\sigma_{E_{kin}}=\frac{\sigma_{m}}{2} \cdot \nu_{A}^{2}$$

Out[26]:
2.5+/-0.05
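
The numeric result above prints like a ufloat from the uncertainties package (which is imported later in this notebook). As a cross-check, not part of maabara itself, the same propagation can be done by hand with that package:

from uncertainties import ufloat

m = ufloat(5, 0.1)            # 5 +/- 0.1, as above
V = ufloat(1, 0)              # exact value
print(0.5 * m * V**2)         # 2.5+/-0.05, matching the result above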

Or, even shorter, everything in a single constructor call

In [27]:
einstein = ma.uncertainty.Sheet("m*c**2", "E", [("m", 20, 0.1, False), ("c", 10, 0.5, False)])
einstein.p()  # prints equation, result and error formula, like print_result() above
$$E=c^{2} \cdot m$$

$$E=\left(2.00 \pm 0.20\right) \times 10^{3}$$

$$\sigma_{E}=c \cdot \sqrt{c^{2} \cdot \sigma_{m}^{2} + 4 \cdot m^{2} \cdot \sigma_{c}^{2}}$$

Out[27]:
2000.0+/-200.24984394500788
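
The printed error formula can also be checked numerically. Plugging in m = 20, σ_m = 0.1, c = 10 and σ_c = 0.5 reproduces the deviation shown in the output (a plain-Python check, nothing maabara-specific):

from math import sqrt

m, sm = 20, 0.1
c, sc = 10, 0.5
sigma_E = c * sqrt(c**2 * sm**2 + 4 * m**2 * sc**2)   # 10 * sqrt(1 + 400) = 200.249..., as above
print(sigma_E)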

And a more complex example

In [28]:
phi = ma.uncertainty.Sheet()
phi.set_equation("atan(w*L/(Ro+Ra+Rl))")
# set_value(symbol, value, deviation, tex label)
phi.set_value("w", 1272, 4, "\\omega_R")
phi.set_value("L", 0.36606, 0.00004, "L")
phi.set_value("Ro", 9.9, 0.05, "R_\\Omega")
phi.set_value("Ra", 10.5, 1, "R_A")
phi.set_value("Rl", 65.4, 0.1, "R_L")
phi.print_result()
$$\operatorname{atan}{\left (\frac{L \cdot \omega_R}{R_A + R_L + R_\Omega} \right )}$$

$$1.3886 \pm 0.0022$$

$$\frac{1}{L^{2} \cdot \omega_R^{2} + \left(R_A + R_L + R_\Omega\right)^{2}} \cdot \sqrt{L^{2} \cdot \sigma_{R_A}^{2} \cdot \omega_R^{2} + L^{2} \cdot \sigma_{R_L}^{2} \cdot \omega_R^{2} + L^{2} \cdot \sigma_{R_\Omega}^{2} \cdot \omega_R^{2} + L^{2} \cdot \sigma_{\omega_R}^{2} \cdot \left(R_A + R_L + R_\Omega\right)^{2} + \sigma_{L}^{2} \cdot \omega_R^{2} \cdot \left(R_A + R_L + R_\Omega\right)^{2}}$$

Out[28]:
1.3885732588100712+/-0.0021639705877990025
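
The σ formula above is standard Gaussian error propagation: the partial derivatives of the equation, combined in quadrature. The sketch below derives the same expression directly with sympy; the deviation symbol names are chosen here purely for illustration, this is not how maabara does it internally:

import sympy as sp

w, L, Ro, Ra, Rl = sp.symbols('w L Ro Ra Rl', positive=True)
sw, sL, sRo, sRa, sRl = sp.symbols('sigma_w sigma_L sigma_Ro sigma_Ra sigma_Rl', positive=True)

f = sp.atan(w * L / (Ro + Ra + Rl))

# Gaussian propagation: sigma_f**2 = sum over inputs of (df/dx_i * sigma_i)**2
pairs = [(w, sw), (L, sL), (Ro, sRo), (Ra, sRa), (Rl, sRl)]
sigma_f = sp.sqrt(sum(sp.diff(f, x)**2 * s**2 for x, s in pairs))
print(sp.simplify(sigma_f))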

With multiple values

In [29]:
import numpy as np

multi = ma.uncertainty.Sheet("x**3")

# numpy array with columns: x, x_err
data = np.array([[1, 2],
                 [2, 0.1]])

results_multi = multi.batch(data, 'x|x%')
results_multi
Out[29]:
array([[ 1. ,  6. ],
       [ 8. ,  1.2]])
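
The two result columns are the propagated value and its deviation for each input row (1³ = 1 with σ = 6, 2³ = 8 with σ = 1.2). As a cross-check, the same numbers can be reproduced element-wise with the unumpy module of the uncertainties package; this is independent of maabara:

from uncertainties import unumpy

x = unumpy.uarray([1, 2], [2, 0.1])     # values and deviations from the data array
y = x**3
print(unumpy.nominal_values(y))          # [1. 8.]
print(unumpy.std_devs(y))                # [6.  1.2]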

Export to a LaTeX table

In [30]:
tbl = ma.latex.Table()
tbl.add_column(results_multi, '$0,$1', 'Results')  # data, cell format, column heading
tbl.set_caption('My table')
print(tbl.export())
# export() can also write straight to a file when a filename is passed
\begin{table} 
\centering
\begin{tabular}[!htb]{|l|}
 \hline
Results\\
\hline
$1 \pm 6$\\
\hline
$8,0 \pm 1,2$\\

 \hline
\end{tabular}
\caption{My table}\end{table}
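
As the comment in the cell above says, export() can also save the table to a file when it is given a filename. A minimal sketch, with the filename chosen here only as an example:

tbl.export('my_table.tex')   # writes the same LaTeX code to my_table.tex

The written file can then be pulled into a LaTeX document with \input{my_table.tex}.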

Weighted average and comparison with a literature value

In [31]:
wa_multi = ma.data.weighted_average(results_multi)  # returns (weighted mean, deviation)
wa_multi
Out[31]:
(7.7307692307692308, 1.1766968108291043)
In [32]:
ma.data.literature_value(7.5, wa_multi[0], wa_multi[1], mode='print:Demo')  # compare with a literature value of 7.5
$3(16) \%$ deviation from the Demo literature value
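
The printed figure means the weighted average deviates from the literature value 7.5 by about 3 %, with the parenthesised 16 % giving the relative uncertainty of that comparison. The weighted average itself agrees with the standard inverse-variance weighting, which can be reproduced with plain numpy as a cross-check:

import numpy as np

values = results_multi[:, 0]        # 1 and 8
deviations = results_multi[:, 1]    # 6 and 1.2
weights = 1.0 / deviations**2

mean = np.sum(weights * values) / np.sum(weights)   # 7.7308..., as above
error = np.sqrt(1.0 / np.sum(weights))              # 1.1767..., as above
print(mean, error)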