The data were collected on a human volunteer by Kangrong using Adam's blipped-caipi muxarcepi PSD, and reconstructed using Kangrong's GRAPPA recon code. Only a single slice was acquired, because there is still a bug with multi-slice acquisitions. The mux factor was 3, so the recon yields three slices. Other parameters: TR=2s, TE=minimum full (43ms), FOV=24cm, matrix=128x128, and ARC=1 (i.e., no in-plane acceleration, hence the long TE and pronounced EPI distortions).
Define a simple image montage function (and a 4d variant), plus a function to remove the small-n bias from a standard deviation (see this notebook).
import numpy as np

def montage(vol, ncols=None):
    """Returns a 2d image montage given a 3d volume."""
    ncols = ncols if ncols else int(np.ceil(np.sqrt(vol.shape[2])))
    rows = np.array_split(vol, int(np.ceil(float(vol.shape[2]) / ncols)), axis=2)
    # Zero-pad the last row so it is the same size as the others.
    rows[-1] = np.dstack((rows[-1], np.zeros(rows[-1].shape[0:2] + (rows[0].shape[2] - rows[-1].shape[2],))))
    im = np.vstack([np.squeeze(np.hstack(np.dsplit(row, ncols))) for row in rows])
    return im

def montage_4d(vol):
    return np.hstack([montage(vol[..., i], 1) for i in range(vol.shape[3])])
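As a quick sanity check (not part of the original notebook), tiling a small synthetic volume shows the expected montage geometry. The function is repeated inside the snippet so it runs on its own:

```python
import numpy as np

def montage(vol, ncols=None):
    """Tile the slices of a 3d volume into a 2d image (same definition as above)."""
    ncols = ncols if ncols else int(np.ceil(np.sqrt(vol.shape[2])))
    rows = np.array_split(vol, int(np.ceil(float(vol.shape[2]) / ncols)), axis=2)
    rows[-1] = np.dstack((rows[-1], np.zeros(rows[-1].shape[0:2] + (rows[0].shape[2] - rows[-1].shape[2],))))
    im = np.vstack([np.squeeze(np.hstack(np.dsplit(row, ncols))) for row in rows])
    return im

# Six 4x4 slices tiled in 3 columns should give 2 rows of tiles: an 8x12 image.
vol = np.arange(4 * 4 * 6, dtype=float).reshape(4, 4, 6)
im = montage(vol, ncols=3)
print(im.shape)  # (8, 12)
```

The top-left tile is slice 0, with slices filling in left-to-right, then top-to-bottom.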
from scipy.special import gamma

def std_unbiased(std, n):
    # E[s] = c4(n) * sigma, where c4 = sqrt(2/(n-1)) * gamma(n/2) / gamma((n-1)/2),
    # so add the (first-order) bias back onto the sample standard deviation.
    bias = std * (1 - np.sqrt(2.0 / (n - 1)) * (gamma(n / 2.0) / gamma((n - 1) / 2.0)))
    return std + bias
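A quick simulation (a sketch, not from the original notebook) shows the correction at work. One caveat: the textbook c4 factor is derived for the ddof=1 sample standard deviation, which is what the demo below uses:

```python
import numpy as np
from scipy.special import gamma

def std_unbiased(std, n):
    # Same correction as above: E[s] = c4(n) * sigma with
    # c4(n) = sqrt(2/(n-1)) * gamma(n/2) / gamma((n-1)/2).
    bias = std * (1 - np.sqrt(2.0 / (n - 1)) * (gamma(n / 2.0) / gamma((n - 1) / 2.0)))
    return std + bias

rng = np.random.RandomState(0)
n = 5
# Many small samples of size n from a unit-variance normal distribution.
s = rng.standard_normal((100000, n)).std(axis=1, ddof=1)
raw_mean = s.mean()                         # biased low: roughly 0.94 for n=5
corrected_mean = std_unbiased(s, n).mean()  # much closer to the true sigma of 1
print("raw: %.3f  corrected: %.3f" % (raw_mean, corrected_mean))
```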
Note that the data order is [freq,phase,slice,time].
import os
import urllib
import scipy
import scipy.io

data_file = 'blip_snr_test_cni4004.mat'
data_url = 'http://cninb.stanford.edu/pub/blip_snr_test_cni4004.mat'

if not os.path.isfile(data_file):
    print "Fetching data, this may take a moment..."
    urllib.urlretrieve(data_url, data_file)
    if os.path.isfile(data_file):
        print 'Data found.'
    else:
        raise IOError('Data not found. Maybe the download failed?')
else:
    print 'Data already present, nothing to download.'
m = scipy.io.loadmat('blip_snr_test_cni4004.mat')
blipon = m.get('blipon')
blipoff = m.get('blipoff')
blipon_muxed = m.get('blipon_muxed')
blipoff_muxed = m.get('blipoff_muxed')
Data already present, nothing to download.
Let's have a look at the muxed data (the result of a simple recon) to see how the acquired data actually look.
figsize = (16,8)
fig = figure(figsize=figsize)
fig.add_subplot(111).set_title('blip off (top), blip on (bottom)')
imshow(np.vstack((montage_4d(blipoff_muxed[...,0:10]), montage_4d(blipon_muxed[...,0:10]))),cmap='gray')
Note that the first six frames are calibration frames that use phase cycling to make the slices separable via a simple FFT. These six acquisitions are needed to reconstruct two fully k-space-sampled images, so the de-muxed data below will have only two frames reconstructed from the six calibration frames. The effect of the CAIPIRINHA phase blips is obvious in the remaining four muxed frames: the slices overlap less and are more easily distinguished visually (and will hopefully be more easily deconvolved!).
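To see why phase cycling makes the summed slices separable, here is a toy sketch on synthetic data (the specific phase increments are an assumption for illustration, not taken from the PSD): with mux=3, slice z is tagged with a phase of exp(2*pi*i*z*t/mux) on calibration frame t, so a DFT along the frame axis pulls the slices apart.

```python
import numpy as np

mux = 3
rng = np.random.RandomState(1)
# Three synthetic complex slice images that the scanner sums into each muxed frame.
slices = rng.standard_normal((mux, 8, 8)) + 1j * rng.standard_normal((mux, 8, 8))

# Calibration frame t tags slice z with phase exp(2*pi*i*z*t/mux) before summing.
frames = np.array([sum(slices[z] * np.exp(2j * np.pi * z * t / mux) for z in range(mux))
                   for t in range(mux)])

# A DFT along the frame axis undoes the phase cycling and separates the slices.
recovered = np.fft.fft(frames, axis=0) / mux
print(np.allclose(recovered, slices))  # True
```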
Now we'll look at the result of de-muxing the slices (here only the center slice is shown). Note that the first two frames are part of the calibration sequence and have been phase-cycled in addition to being multiplexed. But since they have been deconvolved, they look very similar to the eight 'regular' muxed frames that follow.
fig = figure(figsize=figsize)
fig.add_subplot(211).set_title('blip off')
imshow(montage_4d(blipoff[...,0:10]),cmap='gray')
fig.add_subplot(212).set_title('blip on')
imshow(montage_4d(blipon[...,0:10]),cmap='gray')
Discard the first two calibration frames, and a few more to ensure we've reached steady state. We'll also compute the mean across time.
blipon = blipon[:,:,:,5:]
blipoff = blipoff[:,:,:,5:]
blipoff_mean = blipoff.mean(axis=3)
blipon_mean = blipon.mean(axis=3)
Let's have a look at the images to make sure they look good.
fig = figure(figsize=figsize)
fig.add_subplot(111).set_title('blip off (top), blip on (bottom)')
imshow(np.vstack((montage(blipoff_mean, 3), montage(blipon_mean, 3))),cmap='gray')
colorbar()
blipoff_sigma = std_unbiased(blipoff.std(axis=3), blipoff.shape[3])
blipon_sigma = std_unbiased(blipon.std(axis=3), blipon.shape[3])
blipoff_snr = blipoff_mean / blipoff_sigma
blipon_snr = blipon_mean / blipon_sigma
blipoff_snrdb = 20 * np.log10(blipoff_snr)
blipon_snrdb = 20 * np.log10(blipon_snr)
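As a reminder, SNR here is an amplitude ratio, hence the factor of 20 (rather than 10) in the decibel conversion:

```python
import numpy as np

snr = np.array([1.0, 10.0, 100.0])
snr_db = 20 * np.log10(snr)   # amplitude ratio -> dB: 10x in SNR is 20 dB
print(snr_db)
```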
fig = figure(figsize=figsize)
fig.add_subplot(111).set_title('blip off (top), blip on (bottom) SNR (dB)')
imshow(np.vstack((montage(blipoff_snrdb,3), montage(blipon_snrdb,3))), cmap='hot', vmax=50)
colorbar()
-c:8: RuntimeWarning: invalid value encountered in log10
(The warning is expected: background voxels outside the head have essentially zero signal, so the log produces NaNs there. Those voxels are masked out below.)
To compute the median SNR across the brain, we need an image mask.
from scipy import ndimage as img

# Threshold both mean images, then clean up the mask with a slight smoothing
# followed by a grayscale erosion/dilation (an opening) to remove small specks.
mask = np.logical_and(blipoff_mean > 200, blipon_mean > 200)
mask = img.gaussian_filter(mask.astype(float), 0.2)
mask = img.grey_erosion(mask, size=(4, 4, 4))
mask = img.grey_dilation(mask, size=(4, 4, 4))
mask = mask > 0.5
blipoff_snrdb[~mask] = 0
blipon_snrdb[~mask] = 0
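The erosion/dilation pair in the masking step acts as a grayscale opening: it removes isolated bright specks while (approximately) preserving the extent of large connected regions. A small synthetic sketch (not the real data):

```python
import numpy as np
from scipy import ndimage as img

# Synthetic "brain": a bright block plus a single-voxel noise speck.
vol = np.zeros((12, 12, 12))
vol[3:9, 3:9, 3:9] = 1.0   # the object we want to keep
vol[0, 0, 0] = 1.0         # an isolated speck that should be removed

mask = vol > 0.5
mask = img.gaussian_filter(mask.astype(float), 0.2)
mask = img.grey_erosion(mask, size=(4, 4, 4))   # erosion deletes the speck
mask = img.grey_dilation(mask, size=(4, 4, 4))  # dilation restores the block's extent
mask = mask > 0.5
print("speck kept: %s, object kept: %s" % (mask[0, 0, 0], mask[5, 5, 5]))
```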
fig = figure(figsize=figsize)
fig.add_subplot(111).set_title('blip off (top), blip on (bottom) SNR (dB), masked')
im = np.vstack((montage(blipoff_snrdb,3), montage(blipon_snrdb,3)))
imshow(im, cmap='hot')
colorbar(orientation='vertical')
blipon_avgsnr = np.median(blipon_snr[mask])
blipoff_avgsnr = np.median(blipoff_snr[mask])
print "blipoff SNR: %f, blipon SNR: %f (%+0.1f%%)." % (blipoff_avgsnr, blipon_avgsnr, 100. * (blipon_avgsnr-blipoff_avgsnr)/blipoff_avgsnr)
blipoff SNR: 12.471388, blipon SNR: 22.760284 (+82.5%).
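As a quick sanity check on the reported numbers, the percent improvement works out to:

```python
# The two median SNR values printed above.
blipoff_avgsnr = 12.471388
blipon_avgsnr = 22.760284
pct = 100.0 * (blipon_avgsnr - blipoff_avgsnr) / blipoff_avgsnr
print("%+0.1f%%" % pct)  # +82.5%
```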