Performing brain-behavior correlations with 3dttest++

A while ago, I wrote a post on doing brain-behavior correlations with different AFNI programs.  At the time I mentioned that you can do correlations with 3dttest++, but you’d first need to standardize (read: z-score) both the brain and behavior values.  The general logic is that if you’re doing a simple regression (which is what an ANCOVA with one group and one covariate amounts to), standardizing the inputs turns your regression (ANCOVA) b-value into a beta, which is the same as a correlation.
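To see why this works, here’s a minimal sketch in stdlib-only Python. The data and variable names are made up purely for illustration: the OLS slope computed on z-scored inputs comes out equal to the Pearson correlation of the raw values.

```python
# Illustration (hypothetical data): standardizing both variables turns
# the simple-regression slope into the correlation coefficient.
from statistics import mean, stdev

def zscore(v):
    m, s = mean(v), stdev(v)
    return [(x - m) / s for x in v]

def slope(x, y):
    # OLS slope for the simple regression y ~ x
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

def pearson(x, y):
    # r = slope * SD(x) / SD(y)
    return slope(x, y) * stdev(x) / stdev(y)

brain = [0.2, 0.5, 0.1, 0.9, 0.4, 0.7]  # made-up voxel values
score = [33, 9, 35, 25, 20, 8]          # made-up behavior scores

beta = slope(zscore(score), zscore(brain))  # standardized slope
r = pearson(score, brain)                   # plain correlation
assert abs(beta - r) < 1e-9                 # they agree
```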

So let’s say I have MRI data (of a single voxel for simplicity) from 10 people:


and their corresponding covariate value (some score):

subject score
Sub1 33
Sub2 9
Sub3 35
Sub4 25
Sub5 20
Sub6 8
Sub7 21
Sub8 16
Sub9 2
Sub10 18

The basic correlation of these values is 0.1150181.

If I run these data through 3dttest++:

3dttest++ -prefix corr_demo \
-setA allSubs \
Sub1 "mri_data.1D[0]" \
Sub2 "mri_data.1D[1]" \
Sub3 "mri_data.1D[2]" \
Sub4 "mri_data.1D[3]" \
Sub5 "mri_data.1D[4]" \
Sub6 "mri_data.1D[5]" \
Sub7 "mri_data.1D[6]" \
Sub8 "mri_data.1D[7]" \
Sub9 "mri_data.1D[8]" \
Sub10 "mri_data.1D[9]" \
-covariates covariates_t.1D

I get the following output:
self_idcode = "AFN_lE6-AT8YoUXP4TUZqIw65g"
ni_type = "4*float"
ni_dimen = "1,1,1"
ni_delta = "1,1,1"
ni_origin = "0,0,0"
ni_axes = "R-L,A-P,I-S"
ni_stat = "none;Ttest(8);none;Ttest(8)"
0.427321 5.35561 0.00259738 0.327494

Which tells me that the “mean” value (without covariates) is 0.427, which is the same output as we would see in R.  The t-stat on that mean is 5.3556.  The covariate estimate is 0.00259, again the same as R’s lm(a~b) function.  And the t-stat for the b-value is 0.327 (same as R’s).  We can then convert the t-value of this covariate to an R-squared and thus into an r-value (correlation).

R² = t² / (t² + DF)

3dcalc -a T-stat -expr '(ispositive(a)-isnegative(a))*sqrt(a*a/(a*a+DF))' -prefix corr

Which gives us a correlation value of 0.1150181.  Congrats, they match!  
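As a quick sanity check on that conversion, here is a stdlib-Python sketch of the same formula, applied to the covariate t-stat (0.327494) with DF = 8 from the output above:

```python
import math

def t_to_r(t, df):
    # sign(t) * sqrt(t^2 / (t^2 + DF)), the same formula as the 3dcalc call
    return math.copysign(math.sqrt(t * t / (t * t + df)), t)

r = t_to_r(0.327494, 8)  # covariate t-stat and DF from the output above
# r comes out around 0.11502, matching the correlation computed earlier
```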

If you were to do things the old-school way and standardize both your MRI data and the covariate data, your MRI data would look like this:

and your covariate file would look like this:

subject score
Sub1 1.348483
Sub2 -0.9147055
Sub3 1.537082
Sub4 0.5940871
Sub5 0.1225894
Sub6 -1.009005
Sub7 0.2168889
Sub8 -0.2546087
Sub9 -1.574802
Sub10 -0.06600967
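Those standardized values are just z-scores of the original scores; here is a quick stdlib-Python check (using the sample SD, n − 1, as R’s scale() does):

```python
from statistics import mean, stdev

scores = [33, 9, 35, 25, 20, 8, 21, 16, 2, 18]  # original covariate values
m, s = mean(scores), stdev(scores)              # stdev() divides by n - 1
z = [(x - m) / s for x in scores]
# z[0] reproduces Sub1's value in the table above, ~1.348483
```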

and your AFNI output from 3dttest++ would look like this:


self_idcode = "AFN_q0gjrxxu0ENLzuuMuFhqVw"
ni_type = "4*float"
ni_dimen = "1,1,1"
ni_delta = "1,1,1"
ni_origin = "0,0,0"
ni_axes = "R-L,A-P,I-S"
ni_stat = "none;Ttest(8);none;Ttest(8)"
1.11759e-08 3.35426e-08 0.115018 0.327494

As you can see, the “mean” value for the covariate is now equal to 0.115018, which is your correlation value.

HowTo: 3D printing your brain


It’s been about five months since I last updated, and for that I’m somewhat apologetic.  Things have been busy and I’ve been trying to figure out a series of topics to post on for the next series.  So rest assured, there will be more analysis guides in the future.  BUT today is all about how to 3D print a human brain.  Before I begin, as with everything in MRI: There are many ways to actually accomplish this.  This is just the way that I managed to print it.  And of course if you don’t feel like going through the steps yourself, you can always pay someone to do it for you!

  1. Find a 3D printer.  It turns out this isn’t terribly hard!
  2. Get an MRI.  Preferably a T1 suitable for processing in Freesurfer.
  3. Process the brain in Freesurfer.  I’ve posted a few tutorials on rapid processing of brains in Freesurfer, and the related Tracula.
    1. recon-all -s YourBrain -i /path/to/file/YourBrain.nii.gz -all
  4. Wait a really long time (actually only about 5 hours on a 4 GHz Retina iMac)
  5. Optionally (but recommended) hand-correct the segmentations to fix artifacts
  6. Convert each hemisphere to an STL file
    1. mris_convert $SUBJECTS_DIR/YourBrain/surf/lh.pial ~/Desktop/lh.YourBrain.stl
    2. mris_convert $SUBJECTS_DIR/YourBrain/surf/rh.pial ~/Desktop/rh.YourBrain.stl
  7. Review your mesh; you can do this in SolidWorks
  8. Load each STL file into whatever software came with your 3D printer (SolidWorks & MeshLab in my case)
  9. Send the job to your printer and prepare to wait a long time (~13 hours here) per hemisphere.



Installing AFNI on Mac OS X 10.11 “El Capitan”

We’ve all been there: Apple releases a new Operating System and when you install it, you find out that your favorite programs don’t work on launch day or require some special install instructions.  Well, if you’ve installed AFNI onto a Mac running 10.11, you may notice that some of the Python programs don’t fully work.  This isn’t AFNI’s fault; it’s actually a “subtle” change in the OS (System Integrity Protection) that prevents interpreters (like Python) from accessing certain shell variables (like DYLD_*).

So how do you actually go about installing AFNI on a system running 10.11.x?  Well you have (like always) plenty of options, but my preferred way to do it uses *surprise* homebrew.

1. Install Xcode via the Mac App Store
2. Install Homebrew
3. Install GCC via Homebrew

brew install gcc --with-all-languages --without-multilib

4. Install PyQt (for access to AFNI’s Python GUI programs):

brew install pyqt

5. Link your libgomp.1.dylib to the correct location for AFNI to find it.  Note that you’ll want to look for this file and not just copy the command below:

ln -s /usr/local/Cellar/gcc/5.2.0/lib/gcc/5/libgomp.1.dylib /usr/local/lib/libgomp.1.dylib

6. Install glib

brew install glib

7. Download AFNI’s 10.7 binaries and move to ~/abin
8. Setup your path

echo 'export PATH=$PATH:~/abin' >> ~/.bash_profile

9. Test your setup: afni_system_check.py -check_all

10. Rejoice.

Installing PyMVPA on Mac OS X

These instructions work on 10.10 (Yosemite) and 10.11 (El Capitan).  If things change in the future, I’ll try to update these instructions!

Multi-voxel Pattern Analysis (MVPA) is hot right now.  Its users are the cool kids at conferences.  And if you want to join that crowd of researchers, you have a growing number of solutions for performing MVPA without having to write your own code.  The list that I’ll start out with today includes three MATLAB toolboxes: 1) The Decoding Toolbox, 2) PRoNTo, and 3) the long-since-updated Princeton MVPA Toolbox.  And of course the non-MATLAB possibility featured in today’s post is PyMVPA.
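If you’re wondering what these toolboxes actually do, here’s a toy conceptual sketch in plain Python (not the PyMVPA API; the data and labels are made up): the pattern of activity across many voxels is the unit of analysis, and a classifier (here a bare-bones nearest-centroid rule) learns to tell conditions apart.

```python
# Toy illustration only (not PyMVPA): classify a trial's condition from
# its multi-voxel pattern using a nearest-centroid rule.
def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, centroids):
    # return the label whose class centroid is closest (squared Euclidean)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# made-up 4-voxel patterns from two hypothetical conditions
faces  = [[1.0, 0.9, 0.1, 0.2], [0.9, 1.1, 0.2, 0.1]]
houses = [[0.1, 0.2, 1.0, 0.9], [0.2, 0.1, 0.9, 1.1]]
cents = {"face": centroid(faces), "house": centroid(houses)}

print(classify([0.95, 1.0, 0.15, 0.15], cents))  # prints "face"
```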

Now as you may have already realized, the first three require MATLAB.  If you don’t have MATLAB, you could try to find instructions to run these toolboxes in Octave.  But that may be more tech stuff that you don’t want to deal with!  Python has the advantage of being Open Source, Free, and relatively close to MATLAB in many of its syntactic qualities.  So today I’ll detail the installation of PyMVPA and in a later post, I’ll talk about some of the other MVPA solutions.

The way I see it, at this very moment, you have about four options for installing PyMVPA on your Mac.  The first is to install the NeuroDebian Virtual Machine, which runs in VirtualBox (a free virtualization program).  If you go this route, you’re almost guaranteed a smooth path to having the software installed.  Of course you’ll have to fight against the slowness of any virtual machine and may be limited by how much hard drive space and RAM your computer has.

The second solution is to install MacPorts and use it to install all of the necessary components for you.  This is fairly straightforward (and seems to be the approach recommended by the maintainers of PyMVPA).

sudo port install py25-pymvpa +scipy +nibabel \
 +hcluster +libsvm +matplotlib +pywavelet

However, I will say that not everyone likes installing MacPorts.  So that brings me to the third solution: install something like Enthought, a ready-made Python environment with a number of dependencies (Numpy, Scipy) already installed for you.  The good news is that there is a free version of this toolkit and it really is smooth to install.  After the installation you’ll just have to grab the source code and follow the install-from-source instructions.

And finally, we reach the fourth option: installing PyMVPA onto your computer by satisfying the dependencies yourself!  Here I recommend, if you don’t already have it, installing Homebrew!  You’ll also need to grab a copy of Xcode (via the App Store).  The rest of the instructions suggest that you set up a Virtual Environment with the necessary dependencies, so that you don’t have to globally install packages on your computer.  This has the advantage of keeping everything mostly contained, so that you could run different versions of any package without breaking your PyMVPA installation!  Also note that the install directory is in the Shared Users area, which is handy because multiple users can share the same environment.  The following should be run in a terminal.  Some steps have a description and the command after the colon (:).
1) Install Homebrew:
ruby -e "$(curl -fsSL"
2) mkdir /Users/Shared/PyMVPA; cd  /Users/Shared/
3) sudo easy_install pip
4) pip install virtualenv
5) virtualenv PyMVPA
6) cd PyMVPA
7) . bin/activate
8) pip install numpy
9) pip install scipy
10) pip install nibabel
11) pip install ipython'[all]'
12) pip install scikit-learn
13) pip install matplotlib
14) brew install swig
15) Grab the PyMVPA source (and place into your virtualenv folder):
git clone git://
16) Install PyMVPA:
make 3rd
python setup.py build_ext --with-libsvm
python setup.py install --with-libsvm
17) Download tutorial data at:
18) Unzip and Place data in /Users/Shared/PyMVPA/Tutorial_Data
And give it a shot!

Helpful fMRI QA Tools in AFNI

Apologies for the lack of updates lately!  It’s been… busy.

I’ve written in the past about automatically making “snapshots” in AFNI (here) and even doing that without having AFNI take over your entire screen using Xvfb (here).  These are one way of performing Quality Assurance (QA) on your data: actually LOOKING at the activation each individual has for different conditions, without having to open AFNI, select the condition’s Coefficient and Tstat, and adjust the slider.

But there’s considerably more QA that you might want to do!  First and foremost, you may have already discovered that if you use afni_proc.py, some helpful scripts for looking at single-subject data are created for you.  These are:

  1. @ss_review_basic – which will print out a variety of data information, such as your thresholds for motion and outlier censoring, as well as the number of TRs censored, average motion, and even a breakdown of your censoring by conditions.
  2. @ss_review_driver – this script will walk you through some important QA on your data, starting by printing out the information in @ss_review_basic!  Then it will walk you through motion and outlier censoring plots, checking EPI to Anatomical registration, regression matrix errors/warnings, and finally display the peak activation of your overall F-map.  This should be inside of the head.

Now I won’t lie, running these scripts is good, but sometimes you just want all of the data in one place fast.  Well, you could pay someone to transcribe all of the information from @ss_review_basic into a table.  Or you can use a very helpful program called gen_ss_review_table.py.  And of course that’s the topic of today’s post!

Recall that afni_proc.py places all of your results into a single “results” folder.  Within that folder is where we find our @ss_review_basic and @ss_review_driver scripts.  When the scripts are called, they automatically create an out.ss_review version of the output of those scripts.  And it is these files that you want to call on. For example:

gen_ss_review_table.py \
-infiles Subject*/Subject*.results/out.ss_review* \
-tablefile ss_review_stats.txt

Would generate a table of all of my subjects, including easy-to-summarize data on motion thresholds, censored TRs, and even blur estimates for computing your inputs to 3dClustSim!  An example is shown below; other columns are not visible because I cropped the screenshot for easy viewing.