DTI Statistics with TRACULA data (Part 1)

If you’re reading this post, either you’re a loyal fan (yay!) or you’ve run Freesurfer (possibly in parallel), processed your diffusion data (hopefully in TORTOISE), run it through Tracula (also possibly in parallel), and you’re now wondering where to go from here.  I’m glad you’re here!  That’s probably me talking to myself, because I read my own blog to remind myself how to do things from past analyses!

So you’ve got your Freesurfer folder and your Tracula folder.  The first step is to export all of your data into text files (one per tract is easiest) for all of your subjects.  I tend to use a quick bash script for this, relying on the Freesurfer/Tracula tool tractstats2table to do the hard work (change Subject* to match the prefix of your subject folders):

#!/bin/bash

cd /path/to/tracula/folder
mkdir output #the per-tract text files will land here

#loop over the 18 default Tracula pathways (9 per hemisphere)
for aTract in fmajor_PP_avg33_mni_bbr \
fminor_PP_avg33_mni_bbr lh.atr_PP_avg33_mni_bbr \
lh.cab_PP_avg33_mni_bbr lh.ccg_PP_avg33_mni_bbr \
lh.cst_AS_avg33_mni_bbr lh.ilf_AS_avg33_mni_bbr \
lh.slfp_PP_avg33_mni_bbr lh.slft_PP_avg33_mni_bbr \
lh.unc_AS_avg33_mni_bbr rh.atr_PP_avg33_mni_bbr \
rh.cab_PP_avg33_mni_bbr rh.ccg_PP_avg33_mni_bbr \
rh.cst_AS_avg33_mni_bbr rh.ilf_AS_avg33_mni_bbr \
rh.slfp_PP_avg33_mni_bbr rh.slft_PP_avg33_mni_bbr \
rh.unc_AS_avg33_mni_bbr
do
        echo $aTract
        tractstats2table --inputs Subject*/dpath/${aTract}/pathstats.overall.txt \
        --overall -t output/${aTract}.txt -v
done

Now that you’ve exported all the numbers to text files, the next step is to combine those outputs into a single file for data analysis.  You might have already had the great idea of using “cat” to combine the files via the command line, but you’d be disappointed: it’s not that easy, because the header of the first column (the one with your subject numbers) is the tract name, and there’s no dedicated column for the tract name (at least not in Freesurfer 6.0).
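
To see why, peek at the top of one of those files.  The first line looks something like this (columns abbreviated here, and your exact column names may vary by version):

fmajor_PP_avg33_mni_bbr Count Volume Len_Min Len_Max Len_Avg ... FA_Avg ...

Every row underneath starts with a subject identifier, so once you cat a few of these together, nothing in the rows themselves tells you which tract they came from.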

So here’s some handy R code:

library(tidyverse) #if you don't have tidyverse, you should. 
setwd('/path/to/tracula/folder/output') #folder with text files from bash script
allFiles <- list.files(pattern='\\.txt$') #get a list of all the text files

#functions make the world go 'round:
#this one reads in each file, makes a column with the tract name
#and renames the first column to "Filename"
readDTI <- function(x) {
 tmp <- read.table(x, header=T)
 tmp$tract <- names(tmp)[1] #get the tract name and make it a column
 names(tmp)[1] <- 'Filename' #now make the first column a useful name
 tmp$Filename <- as.character(tmp$Filename) #useful if you want to manipulate the header later
 return(tmp)
}

#this will run your function on all the text files you found with list.files()
alltracts <- lapply(allFiles, readDTI) #lapply returns a list

#this will return a single data.frame object with all the tracts
tracula_data <- do.call("rbind", alltracts)

#this writes the whole thing to a file on your computer
write.table(tracula_data, "MyTraculaData.dat", sep="\t", row.names=F, col.names=T, quote=F)

And just like that, you have a single tab-delimited text file with all your data for 18 tracts (9 per hemisphere) for all of your subjects, ready for reading into your favorite stats program, or sending to co-authors.
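
As a bonus, here’s a minimal sketch of reshaping that file into the wide, one-row-per-subject format most stats programs like.  I’m assuming you want the FA_Avg column; swap in whichever pathstats measure you care about (on older tidyverse versions, spread() does the same job as pivot_wider()):

library(tidyverse)
tracula_data <- read.table("MyTraculaData.dat", header=T, sep="\t")

#one row per subject, one FA_Avg column per tract
fa_wide <- tracula_data %>%
 select(Filename, tract, FA_Avg) %>%
 pivot_wider(names_from=tract, values_from=FA_Avg)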

Preprocessing Diffusion Weighted Images with TORTOISE 3 (Part 1)

A while back, I posted about how to use TORTOISE 2.0/2.5 for preprocessing Diffusion Weighted Images (DWIs), creating tensors, and then doing blip-up blip-down correction.  All of those steps relied on a graphical user interface (GUI) running through a program called IDL to tell TORTOISE what you wanted it to do.  This was often a tedious process, as it required someone to click buttons for every single subject, even when all subjects had been collected in the same manner.  Enter TORTOISE 3.0, a new suite of command line programs for doing what TORTOISE 2.0/2.5 did, but faster and more scriptable!

First things first: go download TORTOISE 3.0 (it’s now split into DIFF_PREP and DIFF_CALC downloads).  The next thing you’ll need to do is add TORTOISE to your path.  I personally like to put all of my MRI-related applications in one folder, so for me to add TORTOISE to my path (on my Mac), I would do the following (note the single quotes, which keep $PATH from being expanded by echo right now instead of when the profile is sourced):

echo "export PATH=$PATH:/Applications/mri/TORTOISE_V3.0.0/DIFFPREPV3/bin/bin" >> ~/.bash_profile

Also useful at this point is to copy the default registration settings into the DIFF_PREP_WORK folder in your home directory (that’s where DIFFPREP will look for them later):

mkdir ~/DIFF_PREP_WORK
cp /Applications/mri/TORTOISE_V3.0.0/DIFFPREPV3/example_registration_settings.dmc ~/DIFF_PREP_WORK/MySettings.dmc

You’ll then need to close the shell and open a new one, or you can “source ~/.bash_profile” to load the changes into your current shell.  Now you’re all set.  Let’s look at some data!  If you’re not already organizing your data according to the BIDS spec, I highly suggest you start!  It makes your file structure regular and makes it considerably easier for new people to start working on your project!  For now, let’s assume that my data are in some kind of BIDS-like format as follows:

sub_001
     anat
           sub_001_T1.nii.gz
           sub_001_T2.nii.gz
     dwi
           sub_001_dwi.nii.gz
           sub_001_dwi.bval
           sub_001_dwi.bvec

The first thing we need to do is skull-strip our T1 and T2 images, and it generally helps if you put them into some type of axialized orientation (something approximating AC-PC alignment would be good).  We can do this easily in AFNI.  First, skull-strip both the T1 and T2 images:

3dSkullStrip -input anat/sub_001_T1.nii.gz -prefix anat/T1_ss -orig_vol
3dSkullStrip -input anat/sub_001_T2.nii.gz -prefix anat/T2_ss -orig_vol

Next, bring your T1 into “AC-PC” alignment using the handy @auto_tlrc script, copying the result into a file we’ll affectionately call “sub_001_T1_ss_acpc.nii”:

cd anat/
@auto_tlrc \
-base MNI152_1mm_uni+tlrc. \
-input T1_ss+orig \
-no_ss \
-init_xform CENTER \
-rigid_equiv
3dcopy T1_ss.Xat.rigid+tlrc sub_001_T1_ss_acpc.nii

Next we’ll align our T2 to our already set-up T1 and rename the output to “sub_001_T2_ss_acpc.nii”:

align_epi_anat.py \
-anat sub_001_T1_ss_acpc.nii \
-epi T2_ss+orig. \
-epi2anat \
-anat_has_skull no \
-epi_strip None \
-suffix _al2acpc \
-rigid_body \
-epi_base 0 \
-giant_move
3dcopy T2_ss_al2acpc+orig. sub_001_T2_ss_acpc.nii
cd ../

You probably feel like you’ve already had your typing workout for the day!  But that was just getting the files set up the way we need them to be!  Now we’re ready to actually use TORTOISE!  The initial processing in TORTOISE has two steps: 1) importing the NIFTI files, and 2) running DIFFPREP.  One random bit: TORTOISE doesn’t (currently) seem to play well with gzipped files, so we expand those first:

cd dwi
gunzip sub_001_dwi.nii.gz
ImportNIFTI -i sub_001_dwi.nii \
-p vertical \
-b sub_001_dwi.bval \
-v sub_001_dwi.bvec

Let’s review the options: -i is the input file, -p is the phase encoding direction (AP would be vertical, LR would be horizontal), -b is the b-values file, and -v is the b-vectors file.

Now that we have the data imported, you’ll notice that there’s a new folder in the current directory (sub_001/dwi) named sub_001_dwi_proc.  We’ll actually execute the DIFFPREP step from inside this folder (it copies some temporary files into the folder that you execute it from, and I like to keep my file directories as clean as possible!):

cd sub_001_dwi_proc
DIFFPREP -i sub_001_dwi.list \
-o sub_001_DMC \
-s ../anat/sub_001_T2_ss_acpc.nii \
--reg_settings MySettings.dmc

Again, let’s review the options: -i is the input list created by the import step, -o is the output name (here I like to end with DMC to keep with the naming convention of TORTOISE 2.0/2.5), -s is my structural file (this is a T2 image), and --reg_settings is the registration settings file located in your home directory’s DIFF_PREP_WORK folder.  You could of course have called yours something different and be a rebel.

While the differences aren’t always astounding from just looking at the images, I feel like I have to put some brain imaging in every post, so here you go!

[Image: TORTOISE3_demo]

From here you can use AFNI’s FATCAT programs (part 3 of that series is particularly relevant) to fit your tensors and do ROI-to-ROI analyses.  You can also use Freesurfer (easily run in parallel) and TRACULA (also able to be run in parallel).  Stay tuned: next time I’ll run through the next step, DR_BUDDI, for those of you with blip-up and blip-down data.  We’ll then tie in the full AFNI pipeline for more analysis possibilities.

How to install AFNI on macOS Sierra

Follow these instructions for Mac OS X 10.11 El Capitan.  That’s it!  It’s great when stuff continues to just work!  Well… mostly.  Depending on how you upgraded, you might have to run this command in the terminal to allow you to run applications from unsigned developers:

sudo spctl --master-disable

But that should really be it.

Performing brain-behavior correlations with 3dttest++

A while ago, I wrote a post on doing brain-behavior correlations with different AFNI programs.  At the time I mentioned that you can do correlations with 3dttest++, but you’d first need to standardize (read: z-score) both the brain and behavior values.  The general logic is that an ANCOVA with one group and one covariate is just a simple regression, and standardizing the inputs turns the regression (ANCOVA) b-value into a standardized beta, which in a simple regression is the same as the correlation.

So let’s say I have MRI data (of a single voxel for simplicity) from 10 people:

0.15214194
0.53165484
0.75160697
0.73402534
0.15539711
0.41414913
0.09563909
0.56122166
0.32040320
0.55696615

and their corresponding covariate value (some score):

subject score
Sub1 33
Sub2 9
Sub3 35
Sub4 25
Sub5 20
Sub6 8
Sub7 21
Sub8 16
Sub9 2
Sub10 18

The basic correlation of these values is 0.1150181.
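
(You can verify that in R; mri and score here are just the two columns of numbers shown above:)

mri <- c(0.15214194, 0.53165484, 0.75160697, 0.73402534, 0.15539711,
         0.41414913, 0.09563909, 0.56122166, 0.32040320, 0.55696615)
score <- c(33, 9, 35, 25, 20, 8, 21, 16, 2, 18)
cor(mri, score) #0.1150181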

If I run these data through 3dttest++:

3dttest++ -prefix corr_demo \
-setA allSubs \
Sub1 mri_data.1D[0] \
Sub2 mri_data.1D[1] \
Sub3 mri_data.1D[2] \
Sub4 mri_data.1D[3] \
Sub5 mri_data.1D[4] \
Sub6 mri_data.1D[5] \
Sub7 mri_data.1D[6] \
Sub8 mri_data.1D[7] \
Sub9 mri_data.1D[8] \
Sub10 mri_data.1D[9] \
-covariates covariates_t.1D

I get the following output:
<AFNI_3D_dataset
self_idcode = "AFN_lE6-AT8YoUXP4TUZqIw65g"
ni_type = "4*float"
ni_dimen = "1,1,1"
ni_delta = "1,1,1"
ni_origin = "0,0,0"
ni_axes = "R-L,A-P,I-S"
ni_stat = "none;Ttest(8);none;Ttest(8)"
>
0.427321 5.35561 0.00259738 0.327494
</AFNI_3D_dataset>

Which tells me that the “mean” value is 0.427321, the same as the mean we’d compute in R.  The t-stat on that mean is 5.356 (note that a plain one-sample t-test in R would give 5.64 on 9 df; here the mean is tested within the covariate model, which costs a degree of freedom).  The covariate estimate is 0.00260, again the same as the slope from R’s lm(mri ~ score).  And the t-stat for that b-value is 0.327 (same as R’s).  We can then convert the t-value of this covariate to an R-squared and thus into an r-value (correlation):

r² = t² / (t² + df)

3dcalc -a T-stat -expr '(ispositive(a)-isnegative(a))*sqrt(a*a/(a*a+DF))' -prefix corr

(Replace T-stat with your t-statistic sub-brick and DF with your degrees of freedom; 8 in this example.)

Which gives us a correlation value of 0.1150181.  Congrats, they match!  
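
As a quick sanity check, here’s that conversion done by hand in R, using the covariate t-value from the output above and its 8 degrees of freedom:

t_val <- 0.327494
df <- 8
sign(t_val) * sqrt(t_val^2 / (t_val^2 + df)) #0.1150181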

If you were to do things the old-school way, you could standardize both your MRI data and the covariate data.  Your MRI data would look like this:
-1.14909
0.4356788
1.354154
1.280737
-1.135497
-0.05500114
-1.385034
0.5591438
-0.4464651
0.5413737

and your covariate file would look like this:

subject score
Sub1 1.348483
Sub2 -0.9147055
Sub3 1.537082
Sub4 0.5940871
Sub5 0.1225894
Sub6 -1.009005
Sub7 0.2168889
Sub8 -0.2546087
Sub9 -1.574802
Sub10 -0.06600967

and your AFNI output from 3dttest++ would look like this:

<AFNI_3D_dataset
self_idcode = "AFN_q0gjrxxu0ENLzuuMuFhqVw"
ni_type = "4*float"
ni_dimen = "1,1,1"
ni_delta = "1,1,1"
ni_origin = "0,0,0"
ni_axes = "R-L,A-P,I-S"
ni_stat = "none;Ttest(8);none;Ttest(8)"
>
1.11759e-08 3.35426e-08 0.115018 0.327494
</AFNI_3D_dataset>

As you can see, the covariate value (the third number) is now 0.115018, which is your correlation, and the “mean” is now essentially zero, just as you’d expect after z-scoring the data.
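
For reference, those standardized values are just z-scores; a quick way to generate them in R (reusing the mri and score vectors from earlier) would be:

z_mri <- as.vector(scale(mri)) #(x - mean(x)) / sd(x)
z_score <- as.vector(scale(score))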

HowTo: 3D printing your brain

[Image: IMG_1370]

It’s been about five months since I last updated, and for that I’m somewhat apologetic.  Things have been busy and I’ve been trying to figure out topics for the next series of posts.  So rest assured, there will be more analysis guides in the future.  BUT today is all about how to 3D print a human brain.  Before I begin, as with everything in MRI: there are many ways to actually accomplish this.  This is just the way that I managed to print it.  And of course if you don’t feel like going through the steps yourself, you can always pay someone to do it for you!

  1. Find a 3D printer.  It turns out this isn’t terribly hard!
  2. Get an MRI.  Preferably a T1 suitable for processing in Freesurfer.
  3. Process the brain in Freesurfer.  I’ve posted a few tutorials on rapid processing of brains in Freesurfer, and, relatedly, Tracula.
    1. recon-all -s YourBrain -i /path/to/file/YourBrain.nii.gz -all
  4. Wait a really long time (actually only about 5 hours on a 4 GHz Retina iMac)
  5. Optionally (but recommended) hand-correct the segmentations to fix artifacts
  6. Convert each hemisphere to an STL file
    1. mris_convert $SUBJECTS_DIR/YourBrain/surf/lh.pial ~/Desktop/lh.YourBrain.stl
    2. mris_convert $SUBJECTS_DIR/YourBrain/surf/rh.pial ~/Desktop/rh.YourBrain.stl
  7. Review your mesh; you can do this in SolidWorks
  8. Load each STL file into whatever software came with your 3D printer (SolidWorks & Meshlab in my case)
  9. Send the job to your printer and prepare to wait a long time (~13 hours here) per hemisphere.

[Image: IMG_1354]