Aligning Partial Volume fMRI in AFNI

One of the benefits of high-resolution fMRI (e.g. 7T+) is the ability to see smaller structures and even discern layers of the human cortex (examples: motor cortex and DLPFC). One of the frustrating things about doing sub-1mm resolution is that you’re often constrained to partial volume coverage, which means that you generally have something that looks like this:

Our “Slab” Partial Volume EPI

And if you haven’t already tried it, registration of these partial volumes is often more difficult than when your functional images (EPI, etc.) cover the entire brain. This is true of all software packages (FSL, AFNI, and yes, even ANTs).

Right out of the box your two images might look something like this:

Default view of images off the scanner

Now we can clearly see that the default AFNI view with no transformations shows the slab in the wrong position. On the left I have just the axial slice, followed by the axial slice with the overlay showing significant mis-alignment, caused primarily by the slab sitting too low in the brain. If you were to attempt to register this slab to the whole-brain anatomical using a standard process like align_epi_anat.py, you’d get something like this:

Default alignment that works fine on whole-brain images

The top row of images was made using a pretty standard call that works fine on whole-brain EPI images:

align_epi_anat.py \
-anat anat_ss.nii.gz \
-anat_has_skull no \
-epi epi1.nii.gz \
-epi_base 0 \
-epi2anat \
-big_move \
-suffix _altest \
-epi_strip 3dAutomask

And the second row of images was made with one slight modification: adding the “-partial_axial” flag. This sometimes works if you have sufficient coverage AND the images aren’t terribly far off, and it helps if the slab’s header carries correct scanner information about its orientation and location relative to the anatomical images.
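
For reference, here’s a sketch of that second attempt. It’s the same call as above with the partial-coverage flag added; the _altest2 suffix is just a placeholder so you don’t overwrite the first result:

# same call as before, plus -partial_axial (the _altest2 suffix is a placeholder)
align_epi_anat.py \
-anat anat_ss.nii.gz \
-anat_has_skull no \
-epi epi1.nii.gz \
-epi_base 0 \
-epi2anat \
-big_move \
-partial_axial \
-suffix _altest2 \
-epi_strip 3dAutomask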

It turns out that one thing that can be really helpful in getting automated registrations working with partial volumes is to get the base and source as close together as possible. Yes, that’s an obvious statement!

In AFNI, the way we do that is by first moving the images into the correct location. Since the slab data here is cut off at the top and bottom, I’m also going to add some zero padding around the images to help with the process.

3dZeropad -S 10 -I 20 -prefix epi1_rs.nii.gz epi1.nii.gz
3drefit -dzorigin 20 epi1_rs.nii.gz

Here I add 10 empty “zero” slices above (superior to) the EPI and 20 slices below (inferior to) it. In the second statement, I simply move the origin of the image 20 mm up, which is like a light-year in brain space! Now my images look closer, but they’re still not aligned.
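
If you want to sanity-check that shift, 3dinfo can print the origins of both datasets side by side (a quick check; the file names match the commands above):

# print the xyz origins of the original and the padded/shifted EPI
3dinfo -o3 epi1.nii.gz epi1_rs.nii.gz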

New image overlap without registration

At this point, we can attempt some alignment using any of your favorite alignment programs. Since I like AFNI, I’ll use the venerable 3dAllineate with some options that are fairly similar to what I would use on other imaging data. You could also use align_epi_anat.py, or even options within afni_proc.py.

3dAllineate \
-base anat_ss.nii.gz \
-input epi1_rs.nii.gz \
-prefix epi1_rs_al.nii.gz \
-1Dmatrix_save epi1_rs_al_12.1D \
-cost lpc \
-final wsinc5 \
-source_automask -autoweight \
-twopass \
-verb

Final Successful Alignment

Now you can see that the slab is correctly aligned with the skull-stripped anatomical. The two axial slices are there just to show the result with the overlay on and off. You can also see that this is a far cry from the original locations and the registrations shown earlier in this post.
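
One nice side effect of saving the affine with -1Dmatrix_save is that you can apply the exact same transform to other data without re-estimating it, for example a second EPI that you padded and shifted the same way. A minimal sketch (epi2_rs.nii.gz is a hypothetical second dataset):

# apply the saved transform to another padded/shifted EPI (epi2_rs.nii.gz is hypothetical)
3dAllineate \
-master anat_ss.nii.gz \
-input epi2_rs.nii.gz \
-1Dmatrix_apply epi1_rs_al_12.1D \
-final wsinc5 \
-prefix epi2_rs_al.nii.gz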

Happy Registering! This post made possible by the amount of time it takes to register 300 brains to a new template and do deformation metrics on them! Yes, paper coming soon.

Rendering AFNI/SUMA Over the Network

This is mostly a quick note to remind myself how to do this, since I spent about 30 minutes today “remembering” (synonymous with googling) how I got this to work last time! That said: Once upon a time, many of us would connect to a remote (usually linux) server and run graphical programs over the network. That usually meant connecting to a box like so:

ssh -X -Y myuser@myserverbox.mydomain.com

Where the -X and -Y flags denote SSH forwarding for X11.  Side note: usually you also had to enable this on the server in /etc/sshd_config under the very readable setting “X11Forwarding”.  In addition to running the programs remotely, you also had the option of choosing between Direct Rendering and Indirect Rendering.  Direct Rendering means the application talks to the graphics hardware directly (which is why it only works locally), whereas Indirect Rendering sends the OpenGL commands through the X server, which over a forwarded connection means they travel across the network to your machine to be drawn.  Not surprisingly, Direct Rendering is usually faster.  For a quick refresher on the general terms, check out this awesome post.
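
For completeness, the server-side setting is a single line in that config file (on many systems the file lives at /etc/ssh/sshd_config, and sshd needs a restart after you change it):

# in the server's sshd_config
X11Forwarding yes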

Either way, we could then do things like run a remote X11 application and have it pop open on our computers:

xclock

xclock rendering

With the advent of things like VNC and NoMachine, where we can actually view the remote computer’s desktop and interact with things, many of us transitioned away from SSH X11 Forwarding.  Nevertheless, some users persist and occasionally point out that things don’t always work the way they used to, particularly if you love Apple Macs as much as I do.  Recent changes in the XQuartz project may have broken some of this functionality as of 2.7.9.  But never fear: with the release of 2.7.10 (and now 2.7.11) you can re-enable this functionality by running this command in the terminal:

defaults write org.macosforge.xquartz.X11 enable_iglx -bool true

While I’m not sure it’s necessary, you can also then export a variable to “force” indirect rendering:

export LIBGL_ALWAYS_INDIRECT=1

And finally, as you could have guessed, AFNI and SUMA can both be used with X11 Forwarding!  SUMA in particular seems to appreciate indirect graphics, but as always, your mileage may vary.  If you find that this is too slow for you, I’d recommend mounting the remote server using FUSE (Mac version here) and SSHFS.  You can then run AFNI/SUMA locally while accessing remote data.  Obviously you can also use Samba (SMB) or other networking tricks too.
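
For the SSHFS route, the basic pattern is to mount the remote data directory somewhere local and then run AFNI/SUMA on the mounted files.  A minimal sketch, reusing the server name from earlier (the remote path and mount point are placeholders):

# mount the remote data directory locally (paths are placeholders)
mkdir -p ~/remote_data
sshfs myuser@myserverbox.mydomain.com:/path/to/data ~/remote_data

# run AFNI locally on the mounted data
cd ~/remote_data
afni

# unmount when finished (use fusermount -u on Linux)
umount ~/remote_data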

The better skullstrip in AFNI

Most users of imaging software use the skull-stripping program included in their distribution.  If you’re an AFNI user, you probably use 3dSkullStrip; if you use FSL, then it’s BET; and of course Freesurfer has its own as part of the first recon-all stage.  A few years ago, I advocated for using Optibet and even wrote an entirely AFNI-based version of the pipeline.

If you want to get all of the benefits of an Optibet-like pipeline as well as do your whole-brain normalization via a combination of linear and nonlinear warps, look no further than the “recently” introduced @SSwarper script in the AFNI distribution.  This program takes your original T1 anatomical image, does a robust skull stripping, and normalizes it to the MNI152 template via both a linear and a nonlinear warp.  It even generates helpful images for double-checking the output, like the one below!

@SSwarper \
-base MNI152_2009_template_SSW.nii.gz \
-input anatomical_scan.nii.gz \
-subid Sub123

So check it out; it’ll also make your afni_proc.py commands even more efficient, because you can feed the output straight into that superscript.  This is especially helpful if you tend to run afni_proc.py several times with different options: you won’t have to rerun the normalization step over and over again (which can cost up to about an hour each time)!
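
As a sketch of what that hand-off looks like: the dataset names below follow @SSwarper’s default anatSS/anatQQ naming for the -subid used above, and the EPI input and processing blocks are just placeholders, so adjust to your own study:

# feed @SSwarper output to afni_proc.py (EPI name and blocks are placeholders)
afni_proc.py \
-subj_id Sub123 \
-blocks tshift align tlrc volreg blur mask scale regress \
-copy_anat anatSS.Sub123.nii \
-anat_has_skull no \
-dsets epi_run1.nii.gz \
-tlrc_base MNI152_2009_template_SSW.nii.gz \
-tlrc_NL_warp \
-tlrc_NL_warped_dsets \
anatQQ.Sub123.nii \
anatQQ.Sub123.aff12.1D \
anatQQ.Sub123_WARP.nii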

DTI Statistics with TRACULA data (Part 1)

If you’re reading this post, either you’re a loyal fan (yay!) or you’ve run Freesurfer (possibly in parallel), processed your diffusion data (hopefully in TORTOISE), run Tracula (also possibly in parallel), and are now wondering where to go from here.  I’m glad you’re here; that’s probably me talking to myself, because I read my own blog to remind myself how I did things in past analyses!

So you’ve got your Freesurfer folder and your Tracula folder.  The first step is to export all of your data into text files (one per tract is easiest) for all of your subjects.  I tend to use a quick bash script to do this, relying on the Freesurfer/Tracula tool tractstats2table to do the hard work (change Subject* to match the prefix of your subject folders):

#!/bin/bash

cd /path/to/tracula/folder
mkdir output

for aTract in fmajor_PP_avg33_mni_bbr \
fminor_PP_avg33_mni_bbr lh.atr_PP_avg33_mni_bbr \
lh.cab_PP_avg33_mni_bbr lh.ccg_PP_avg33_mni_bbr \
lh.cst_AS_avg33_mni_bbr lh.ilf_AS_avg33_mni_bbr \
lh.slfp_PP_avg33_mni_bbr lh.slft_PP_avg33_mni_bbr \
lh.unc_AS_avg33_mni_bbr rh.atr_PP_avg33_mni_bbr \
rh.cab_PP_avg33_mni_bbr rh.ccg_PP_avg33_mni_bbr \
rh.cst_AS_avg33_mni_bbr rh.ilf_AS_avg33_mni_bbr \
rh.slfp_PP_avg33_mni_bbr rh.slft_PP_avg33_mni_bbr \
rh.unc_AS_avg33_mni_bbr
do
        echo $aTract
        tractstats2table --inputs Subject*/dpath/${aTract}/pathstats.overall.txt \
        --overall -t output/${aTract}.txt -v
done

Now that you’ve exported all the numbers to text files, the next step is often to combine all of those outputs into a single file for data analysis.  You might have already had the great idea of using “cat” to combine the files via the command line, only to be disappointed that it’s not that easy: the header of the first column (the one with your subject numbers) is the tract name itself, and there’s no dedicated column for the tract name (at least not in Freesurfer 6.0).
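
You can see the problem quickly by peeking at a couple of the headers (file names here come from the tract list above):

# the first column header is the tract name, so these files won't line up under a simple cat
head -1 output/fmajor_PP_avg33_mni_bbr.txt
head -1 output/lh.unc_AS_avg33_mni_bbr.txt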

So here’s some handy R code:

library(tidyverse) #if you don't have tidyverse, you should. 
setwd('/path/to/tracula/folder/output') #folder with text files from bash script
allFiles <- list.files(pattern='.txt') #get a list of all the text files

#functions make the world go 'round:
#this one reads in each file, makes a column with the tract name
#and renames the first column to "Filename"
readDTI <- function(x) {
 tmp <- read.table(x, header=T)
 tmp$tract <- names(tmp)[1] #get the tract name and make it a column
 names(tmp)[1] <- 'Filename' #now make the first column a useful name
 tmp$Filename <- as.character(tmp$Filename) #useful if you want to manipulate the header later
 return(tmp)
}

#this will run your function on all the text files you found with list.files()
alltracts <- lapply(allFiles, readDTI) #lapply returns a list

#this will return a single data.frame object with all the tracts
tracula_data <- do.call("rbind", alltracts)

#this writes the whole thing to a file on your computer
write.table(tracula_data, "MyTraculaData.dat", sep="\t", row.names=F, col.names=T, quote=F)

And just like that, you have a single tab-delimited text file with all your data for the 18 tracts (8 per hemisphere plus the forceps major and minor) for all of your subjects, ready for reading into your favorite stats program or sending to co-authors.
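
If you want a quick sanity check in R before sending the file off, something like this works; note that FA_Avg is an assumed column name from pathstats.overall.txt, so swap in whichever measures your Freesurfer version actually writes:

# quick checks on the combined file (FA_Avg is an assumed column name)
tracula_data <- read.table("MyTraculaData.dat", header=TRUE, sep="\t")
table(tracula_data$tract) #one row per subject for each of the 18 tracts
aggregate(FA_Avg ~ tract, data=tracula_data, FUN=mean) #mean FA by tract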

Preprocessing Diffusion Weighted Images with TORTOISE 3 (Part 1)

A while back, I posted about how to use TORTOISE 2.0/2.5 for preprocessing Diffusion Weighted Images (DWIs), creating tensors, and then doing blip-up blip-down correction.  All of those steps relied on using a graphical user interface (GUI) through a program called IDL to tell TORTOISE what you wanted it to do.  This was often a tedious process, as it required someone to click buttons for every single subject, even when all subjects had been collected in the same manner.  Enter TORTOISE 3.0, a new suite of command line programs for doing what TORTOISE 2.0/2.5 did, but faster and more scriptable!

First things first: go download TORTOISE 3.0 (it’s now split into DIFF_PREP and DIFF_CALC downloads).  The next thing you’ll need to do is add TORTOISE to your path.  I personally like to put all of my MRI-related applications in one folder, so for me to add TORTOISE to my path (on my Mac), I would do the following:

echo 'export PATH=$PATH:/Applications/mri/TORTOISE_V3.0.0/DIFFPREPV3/bin/bin' >> ~/.bash_profile

Also useful at this point is to copy the default registration settings to your home folder:

mkdir ~/DIFF_PREP_WORK
cp /Applications/mri/TORTOISE_V3.0.0/DIFFPREPV3/example_registration_settings.dmc ~/DIFF_PREP_WORK/MySettings.dmc

You’ll then need to close the shell and open a new one, or you can “source ~/.bash_profile” to load the changes into your current shell.  Now you’re all set.  Let’s look at some data!  If you’re not already organizing your data according to the BIDS spec, I would highly suggest you start!  It makes your file structures regular and makes it considerably easier for new people to start working on your project!  For now, let’s assume that my data is in some kind of BIDS-like format as follows:

sub_001
     anat
           sub_001_T1.nii.gz
           sub_001_T2.nii.gz
     dwi
           sub_001_dwi.nii.gz
           sub_001_dwi.bval
           sub_001_dwi.bvec

The first thing we need to do is skull-strip our T1 and T2 images, and it generally helps to put them into some type of axialized orientation (something approximating AC-PC alignment is good).  We can do this easily in AFNI; start by skull-stripping both the T1 and T2 images:

3dSkullStrip -input anat/sub_001_T1.nii.gz -prefix anat/T1_ss -orig_vol
3dSkullStrip -input anat/sub_001_T2.nii.gz -prefix anat/T2_ss -orig_vol

Next bring your T1 into “AC-PC” alignment using the handy @auto_tlrc script, copying the result into a file we’ll affectionately call “sub_001_T1_ss_acpc.nii”:

cd anat/
@auto_tlrc \
-base MNI152_1mm_uni+tlrc. \
-input T1_ss+orig \
-no_ss \
-init_xform CENTER \
-rigid_equiv
3dcopy T1_ss.Xat.rigid+tlrc sub_001_T1_ss_acpc.nii

Next we’ll align our T2 to our already aligned T1 and rename the output to “sub_001_T2_ss_acpc.nii”:

align_epi_anat.py \
-anat sub_001_T1_ss_acpc.nii \
-epi T2_ss+orig. \
-epi2anat \
-anat_has_skull no \
-epi_strip None \
-suffix _al2acpc \
-rigid_body \
-epi_base 0 \
-giant_move
3dcopy T2_ss_al2acpc+orig. sub_001_T2_ss_acpc.nii
cd ../

You probably feel like you’ve already had your typing workout for the day!  But that’s just getting the files set up the way that we need them to be!  Now we’re ready to actually use TORTOISE!  The initial processing in TORTOISE has two steps: 1) importing the NIFTI files, and 2) running DIFFPREP.  One random bit: TORTOISE doesn’t seem to (currently) play well with gzipped files, so we expand them first:

cd dwi
gunzip sub_001_dwi.nii.gz
ImportNIFTI -i sub_001_dwi.nii \
-p vertical \
-b sub_001_dwi.bval \
-v sub_001_dwi.bvec

Let’s just review the options: -i is the input file, -p is the phase-encoding direction (AP would be vertical, LR would be horizontal), -b is the b-values file, and -v is the b-vectors file.

Now that we have the data imported, you’ll notice that there’s a folder in the current directory (sub_001/dwi) named sub_001_dwi_proc.  In this folder, we’ll actually execute the DIFFPREP step (it copies some temporary files into the folder that you execute it from, and I like to keep my file directories as clean as possible!):

cd sub_001_dwi_proc
DIFFPREP -i sub_001_dwi.list \
-o sub_001_DMC \
-s ../anat/sub_001_T2_ss_acpc.nii \
--reg_settings MySettings.dmc

Again let’s review the options: the -i is the input list, the -o is the output name (here I like to end with DMC to keep with the naming convention of TORTOISE 2.0/2.5), the -s is my structural file (this is a T2 image), and the --reg_settings is your registration settings file, which lives in your home directory’s DIFF_PREP_WORK folder.  You could of course have called yours something different and be a rebel.

While the differences aren’t always astounding from just looking at the images, I feel like I have to put some brain imaging in every post, so here you go!

TORTOISE3_demo

From here you can use AFNI’s FATCAT programs (part 3 is particularly relevant) to fit your tensors and do ROI-to-ROI analyses.  You can also use Freesurfer (easily run in parallel) and TRACULA (also able to be run in parallel).  Stay tuned for next time, when I’ll run through the next step, DR_BUDDI, for those of you with blip-up and blip-down data.  We’ll then tie in the full AFNI pipeline for more analysis possibilities.