Joint Work with Christopher Ferrie and D. G. Cory
Slides, references and source code are available at https://www.cgranade.com/research/arb/. $\renewcommand{\vec}[1]{\boldsymbol{#1}}$ $\newcommand{\ket}[1]{\left|#1\right\rangle}$ $\newcommand{\dd}{\mathrm{d}}$ $\newcommand{\expect}{\mathbb{E}}$ $\newcommand{\matr}[1]{\mathbf{#1}}$ $\newcommand{\T}{\mathrm{T}}$
Fully characterizing large quantum systems is very difficult.
For some applications, fidelity alone can be useful. Ex:
Fidelity isn't the full story, though (Puzzuoli et al, PRA 89 022306), so some care is needed.
where $F$ is the fidelity of $\Lambda$.
Knill et al, PRA 77 012307 (2008). Magesan, Gambetta and Emerson, PRA 85 042311 (2012). Wood, in preparation.
Here, $\Lambda$ is the average channel $$ \Lambda = \expect_C [\Lambda_C], $$ taken over the implementation of each Clifford gate $C$.
Knill et al, PRA 77 012307 (2008). Magesan, Gambetta and Emerson, PRA 85 042311 (2012).
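For concreteness, the zeroth-order model of the cited works takes the survival probability after a sequence of $m$ random Clifford gates to decay exponentially. A sketch of that standard form, in the notation used for the estimates later in this section (whether the original slide wrote it exactly this way is an assumption): $$ \Pr(\text{survival} \mid p, A_0, B_0; m) = A_0 p^m + B_0, \qquad F = p + \frac{1 - p}{d}, $$ where $d$ is the dimension of the system.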
For example, to measure the fidelity of $S_C$:
Magesan et al, PRL 109 080505 (2012).
$\tilde{p} = 0.99994$, $p_{\text{ref}} = 0.99999$
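As a rough worked example (assuming a single qubit, $d = 2$, and the interleaved estimator of Magesan et al; the arithmetic below is illustrative, not taken from the original slide): $$ r_{S_C} \approx \frac{(d - 1)\left(1 - \tilde{p}/p_{\text{ref}}\right)}{d} = \frac{1 - 0.99994/0.99999}{2} \approx 2.5 \times 10^{-5}, $$ so the fidelity of $S_C$ is roughly $1 - r_{S_C} \approx 0.999975$.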
The Cramer-Rao bound limits the mean squared error matrix $\matr{E}(\vec{x}) := \expect_{D|\vec{x}}[(\hat{\vec{x}} - \vec{x})(\hat{\vec{x}} - \vec{x})^\T]$ of any unbiased estimator $\hat{\vec{x}}$: $$ \matr{E}(\vec{x}) \ge \matr{I}^{-1}(\vec{x}), $$ where $$ \matr{I}(\vec{x}) := \expect_{D | \vec{x}} [\nabla_{\vec{x}} \log\Pr(D | \vec{x}) \cdot \nabla_{\vec{x}}^\T \log\Pr(D | \vec{x}) ] $$ is the Fisher information at $\vec{x}$.
Ferrie and Granade, QIP 12 611 (2012).
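As a minimal sketch of what this information looks like for randomized benchmarking, assuming the zeroth-order model sketched above and a single binary survival measurement per sequence (illustrative only, not the original implementation):

import numpy as np

def rb_fisher_information(p, A, B, m):
    # Fisher information matrix for (p, A, B) from one binary survival
    # measurement at sequence length m, under q = A * p**m + B.
    q = A * p**m + B                       # survival probability
    grad = np.array([A * m * p**(m - 1),   # dq/dp
                     p**m,                 # dq/dA
                     1.0])                 # dq/dB
    # Fisher information of a Bernoulli outcome, via the chain rule.
    return np.outer(grad, grad) / (q * (1 - q))

# Example: information carried by a single sequence of length 100.
info = rb_fisher_information(p=0.998, A=0.3, B=0.5, m=100)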
In practice, we often have prior information. Demanding unbiased estimators is too strong.
Let's take a Bayesian approach instead. After observing a datum $d$ taken from a sequence of length $m$: $$ \Pr(\vec{x} | d; m) = \frac{\Pr(d | \vec{x}; m)}{\Pr(d | m)} \Pr(\vec{x}). $$
We can implement this on a computer using sequential Monte Carlo (SMC). For example, to incorporate a uniform prior:
from qinfer.smc import SMCUpdater
from qinfer.rb import RandomizedBenchmarkingModel
from qinfer.distributions import UniformDistribution

# Uniform prior over the model parameters (p, A_0, B_0).
prior = UniformDistribution([[0.9, 1], [0.4, 0.5], [0.5, 0.6]])
# Sequential Monte Carlo updater with 10,000 particles.
updater = SMCUpdater(RandomizedBenchmarkingModel(), 10000, prior)
# As data arrives:
# updater.update(datum, experiment)
Granade et al, NJP 14 103013 (2012).
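Once data have been incorporated, the updater's posterior supplies both the estimate and its uncertainty. A short sketch using the est_mean and est_covariance_mtx methods of QInfer's SMCUpdater; the experiment-parameter fields are an assumption and should be checked against the model's expparams_dtype:

import numpy as np

model = RandomizedBenchmarkingModel()
# Experiment parameters are a structured array whose fields are defined
# by the model (field names assumed, not taken from the original slides).
experiment = np.empty((1,), dtype=model.expparams_dtype)
# ... set the sequence-length field of `experiment`, then:
# updater.update(datum, experiment)

x_hat = updater.est_mean()            # posterior mean estimate of the parameters
cov = updater.est_covariance_mtx()    # posterior covariance (error bars)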
With prior information, we need the Bayesian Cramer-Rao Bound, $$ \expect_{\vec{x}} [\matr{E}(\vec{x})] \ge \matr{J}^{-1}, $$ where $$ \matr{J} := \expect_{\vec{x}} [\matr{I}(\vec{x})] $$ is the Bayesian information matrix.
The BCRB can likewise be computed using SMC.
from qinfer.smc import SMCUpdaterBCRB

# Same model, particle count and prior as before.
updater = SMCUpdaterBCRB(RandomizedBenchmarkingModel(), 10000, prior)
# As data arrives, the Bayesian information matrix J is given by:
# updater.current_bim
# The BCRB itself is the inverse of this matrix.
Ferrie and Granade, QIP 12 611 (2012). Granade et al, NJP 14 103013 (2012).
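Since the bound is the inverse of the Bayesian information matrix, it can be compared directly against the achieved posterior covariance; a sketch assuming the updater defined above:

import numpy as np

bcrb = np.linalg.inv(updater.current_bim)     # Bayesian Cramer-Rao bound
posterior_cov = updater.est_covariance_mtx()  # achieved posterior covariance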
The SMC-accelerated algorithm outperforms least-squares fitting, especially with small amounts of data. This advantage persists as the maximum sequence length is varied.
To show that SMC acceleration is experimentally useful, we use a prior that is approximately 7 standard deviations away from the correct values for a cumulant-simulated gateset.
The data was simulated using the methods of Puzzuoli et al, PRA 89 022306.
Even with such a poor prior, SMC performs quite well.
$$\begin{array}{l|cccc} & \tilde{p} & p_{\text{ref}} & A_0 & B_0 \\ \hline \text{True} & 0.9983 & 0.9957 & 0.3185 & 0.5012 \\ \text{SMC Estimate} & 0.9940 & 0.9968 & 0.3071 & 0.5134 \\ \text{LSF Estimate} & 0.9947 & 0.9972 & 0.3369 & 0.4820 \\ \hline \text{SMC Error} & 0.0043 & 0.0011 & 0.0113 & 0.0122 \\ \text{LSF Error} & 0.0036 & 0.0015 & 0.0184 & 0.0192 \end{array}$$

Because of the poor prior, SMC does not outperform least-squares fitting for $\tilde{p}$ in this case, but it does very well for $p_{\text{ref}}$, $A_0$ and $B_0$, lending credibility to the estimate.
We have developed a flexible and easy-to-use Python library, QInfer, for implementing SMC-based applications.
iframe("http://python-qinfer.readthedocs.org/en/latest/")
Full reference information is available on Zotero: https://www.zotero.org/cgranade/items/collectionKey/2NQVPRK9