Installing AFNI on Mac OS X 10.11 “El Capitan”

We’ve all been there: Apple releases a new Operating System, and when you install it, you find out that your favorite programs don’t work on launch day or require some special install instructions.  Well if you’ve installed AFNI onto a Mac running 10.11, you may notice that some of the Python programs don’t fully work.  This isn’t AFNI’s fault; it’s actually a “subtle” change in the OS (System Integrity Protection, or SIP) that prevents interpreters (like Python) from inheriting certain shell variables (the DYLD_* family).
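
You can see the change for yourself on a stock 10.11 system: SIP strips DYLD_* variables from the environment of protected binaries like /usr/bin/python, so the variable silently disappears:

DYLD_LIBRARY_PATH=/tmp /usr/bin/python -c 'import os; print(os.environ.get("DYLD_LIBRARY_PATH"))'
# prints "None" on 10.11; on 10.10 and earlier it prints "/tmp"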

So how do you actually go about installing AFNI on a system running 10.11.x?  Well you have (like always) plenty of options, but my preferred way to do it uses, *surprise*, Homebrew.

1. Install Xcode via the Mac App Store
2. Install Homebrew
3. Install GCC via Homebrew

brew install gcc --with-all-languages --without-multilib

4. Install PyQt (for access to uber_subject.py and uber_ttest.py)

brew install pyqt

5. Link libgomp.1.dylib to the location where AFNI expects to find it.  Note that you’ll want to locate this file yourself rather than just copying the command below, since your gcc version will differ:

ln -s /usr/local/Cellar/gcc/5.2.0/lib/gcc/5/libgomp.1.dylib /usr/local/lib/libgomp.1.dylib
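
If you’re not sure where Homebrew put the library, find will track it down:

find /usr/local/Cellar/gcc -name "libgomp.1.dylib"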

6. Install glib

brew install glib

7. Download AFNI’s 10.7 binaries and move to ~/abin
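
One way to grab the binaries is AFNI’s own installer script, which installs into ~/abin by default.  The exact package name below is my assumption, so check the AFNI website for the current one:

curl -O https://afni.nimh.nih.gov/pub/dist/bin/misc/@update.afni.binaries
tcsh @update.afni.binaries -defaults -package macosx_10.7_Intel_64
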
8. Setup your path

echo "export PATH=$PATH:~/abin" >> ~/.bash_profile

9. Test your setup

afni_system_check.py -check_all

10. Rejoice.

Installing PyMVPA on Mac OS X

These instructions work on 10.10 (Yosemite) and 10.11 (El Capitan).  If things change in the future, I’ll try to update these instructions!

Multi-voxel Pattern Analysis (MVPA) is hot right now.  Its users are the cool kids at conferences.  And if you want to join that crowd of researchers, you have a growing number of solutions for performing MVPA without having to write your own.  The list I’ll start out with today includes three MATLAB toolboxes: 1) The Decoding Toolbox, 2) PRoNTo, and 3) the Princeton MVPA Toolbox (which hasn’t been updated in quite some time).  And of course the non-MATLAB possibility featured in today’s post is PyMVPA.

Now as you may have already realized, the first three require MATLAB.  If you don’t have MATLAB, you could try to find instructions for running these toolboxes in Octave.  But that may be more tech wrangling than you want to deal with!  Python has the advantage of being open source, free, and relatively close to MATLAB in much of its syntax.  So today I’ll detail the installation of PyMVPA, and in a later post I’ll talk about some of the other MVPA solutions.

The way I see it, at this very moment, you have about four options for installing PyMVPA on your Mac.  The first is to install the NeuroDebian Virtual Machine, which runs in VirtualBox (a free virtualization program).  If you go this route, you’re almost guaranteed a smooth path to having the software installed.  Of course you’ll have to fight against the slowness of any virtual machine, and you may be limited by how much hard drive space and RAM your computer has.

The second solution is to install MacPorts and use it to install all of the necessary components for you.  This is fairly straightforward (and seems to be the way recommended by the maintainers of PyMVPA):

sudo port install py25-pymvpa +scipy +nibabel \
 +hcluster +libsvm +matplotlib +pywavelet

However, I will say that not everyone likes installing MacPorts.  That brings me to the third solution: install something like Enthought, a ready-made Python environment with a number of dependencies (NumPy, SciPy) already installed for you.  The good news is that there is a free version of this toolkit, and it really is smooth to install.  After the installation you’ll just have to grab the source code and follow the install-from-source instructions.

And finally, we reach the fourth option: install PyMVPA onto your computer by satisfying the dependencies yourself!  Here I recommend having Homebrew installed, if you don’t already!  You’ll also need to grab a copy of Xcode (via the App Store).  The rest of the instructions set up a Virtual Environment with the necessary dependencies, so that you don’t have to install packages globally on your computer.  This has the advantage of keeping everything mostly contained, so that you could run different versions of any package without breaking your PyMVPA installation!  Also note that the install directory is in the Shared Users area, which is handy because multiple users can share the same environment.  The following should be run in a terminal.  Some steps have a description with the command after the colon (:).
1) Install Homebrew:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
2) mkdir /Users/Shared/PyMVPA; cd /Users/Shared/
3) sudo easy_install pip
4) pip install virtualenv
5) virtualenv PyMVPA
6) cd PyMVPA
7) . bin/activate
8) pip install numpy
9) pip install scipy
10) pip install nibabel
11) pip install 'ipython[all]'
12) pip install scikit-learn
13) pip install matplotlib
14) brew install swig
15) Grab the PyMVPA source (and place into your virtualenv folder):
git clone git://github.com/PyMVPA/PyMVPA.git
16) Install PyMVPA:
cd PyMVPA
make 3rd
python setup.py build_ext --with-libsvm
python setup.py install --with-libsvm
17) Download tutorial data at: http://data.pymvpa.org/datasets/haxby2001/
18) Unzip and place the data in /Users/Shared/PyMVPA/Tutorial_Data
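
Before diving in, a quick sanity check from inside the virtualenv that everything imports cleanly (PyMVPA 2.x installs as the mvpa2 module) might look something like:

python -c "import mvpa2; print(mvpa2.__version__)"
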
And give it a shot!

Helpful fMRI QA Tools in AFNI

Apologies for the lack of updates lately!  It’s been… busy.

I’ve written in the past about automatically making “snapshots” in AFNI (here) and even doing that without having AFNI take over your entire screen by using Xvfb (here).  These are one way of performing Quality Assurance (QA) on your data: actually LOOKING at the activation each individual has for different conditions, without having to open AFNI, select the condition’s Coefficient and Tstat, and adjust the slider.

But there’s considerably more QA that you might want to do!  First and foremost, you may have already discovered that if you use afni_proc.py, some helpful scripts for looking at single-subject data are created for you.  These are:

  1. @ss_review_basic – which will print out a variety of information about your data, such as your thresholds for motion and outlier censoring, as well as the number of TRs censored, average motion, and even a breakdown of your censoring by condition.
  2. @ss_review_driver – this script will walk you through some important QA on your data, starting by printing out the information from @ss_review_basic!  Then it will walk you through motion and outlier censoring plots, checking EPI-to-Anatomical registration, regression matrix errors/warnings, and finally display the peak activation of your overall F-map.  This should be inside of the head.
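
Both are tcsh scripts that live in the subject’s results directory, so running them looks something like this (using the same naming convention as the table example further below):

cd Subject001/Subject001.results
tcsh @ss_review_basic
tcsh @ss_review_driver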

Now I won’t lie: running these scripts is good, but sometimes you just want all of the data in one place, fast.  Well, you could pay someone to transcribe all of the information from @ss_review_basic into a table.  Or you can use a very helpful program called gen_ss_review_table.py.  And of course that’s the topic of today’s post!

Recall that afni_proc.py places all of your results into a single “results” folder.  Within that folder is where we find our @ss_review_basic and @ss_review_driver scripts.  When the scripts are called, they automatically save an out.ss_review version of their output.  And it is these files that you want to feed to gen_ss_review_table.py.  For example:

gen_ss_review_table.py  \
-infiles Subject*/Subject*.results/out.ss_review* \
-tablefile ss_review_stats.txt

This would generate a table for all of my subjects, including easy-to-summarize data on motion thresholds, censored TRs, and even the blur estimates you need as inputs to 3dClustSim!  An example is shown below; some columns are cut off because I took a screenshot for easy viewing purposes.

[Screenshot: example output table from gen_ss_review_table.py]


Parallelizing Tracula

By popular request, or rather the sheer number of hits on the Parallelizing Freesurfer post, today we turn our attention to Tracula, the Freesurfer-integrated solution for doing diffusion tensor imaging (DTI) tractography!  Tracula has a lot of nice features; one of my favorites is the ability to estimate 18 or so tracts in the brain by constraining the tractography to the underlying anatomy.  According to the website, it uses probabilistic tractography (FSL under the hood) with anatomical priors.  You probably care about more details than this, and you probably care enough to head over to the Tracula main page and read about it there!

Of course if you haven’t already run the full Freesurfer recon-all process on your structural data, you might wish to start doing that now…  Done?  Great!
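
For reference, the usual recon-all one-liner per subject looks like this (assuming a T1 volume named T1.nii in the current directory):

recon-all -s Subject001 -i T1.nii -all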

As with just about any kind of MRI processing, Tracula can be slow.  As of Freesurfer version 5.3, I’m seeing times around 18 hours per subject on big chunky Mac Pros.  Add in another 20 hours to run Freesurfer on each dataset going into Tracula, and putting your data through the full pipeline can seem like quite an endeavor!  But never fear, it turns out it’s fairly easy to automate Tracula (documented using their config files) and even parallelize it.

Now we all come from different approaches.  The Tracula documentation is set up for DICOM files as the inputs, but you can really give it anything that Freesurfer’s mri_convert will accept.  So NIFTI is a perfectly valid file format to give it, and it might make your life easier since dcm2nii (part of MRIcron) will automatically spit out the relevant bvec and bval files that you need!
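
A conversion call might look something like the following; the exact flags vary a bit across dcm2nii versions, and the paths here are placeholders, so treat this as a sketch:

dcm2nii -o /path/to/output /path/to/dicom_folder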

Tracula can do considerable amounts of preprocessing of your diffusion data!  But since Tracula accepts NIFTI files, we can (if we so desire) do quite a bit of the preprocessing (e.g. eddy current correction, blip-up/blip-down distortion correction) outside of Tracula.  I’ve posted before about using TORTOISE to do these things (part 1, part 2, part 3) as well as using DTIprep (tutorial, automating), along with a general review of other DTI-related articles I’ve posted.

In the example here I’ve used my DTIprep script from before to do a quick run-through of my diffusion run.  I can then copy the three output files (dwi_QCed.nii, dwi_QCed.bvec, and dwi_QCed.bval) to whatever directory I wish to use as my input directory in Tracula.  So at this point my file tree might look something like this:

-Subject001
--dwi_QCed.nii
--dwi_QCed.bvec
--dwi_QCed.bval
-Subject002
--dwi_QCed.nii
--dwi_QCed.bvec
--dwi_QCed.bval
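
For example, staging one subject’s files into the inputs directory that the config file below points at:

mkdir -p /data/freesurfer/tracula/inputs/Subject001
cp dwi_QCed.nii dwi_QCed.bvec dwi_QCed.bval /data/freesurfer/tracula/inputs/Subject001/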

I usually put my Tracula files in an “inputs” directory, and this is separate from (or sometimes nested within) my Freesurfer SUBJECTS_DIR.  In this example I nested it within my Freesurfer folder.  The next step is to set up your Tracula config file; there are great resources (read: examples) available on the official wiki.  But for what I need, you might end up with something like what is shown below.  Notice that because I’ve already preprocessed my data with DTIprep, I’m turning off some of the functionality in the pipeline, since we probably don’t need to eddy_correct again:

#!/bin/tcsh
setenv SUBJECTS_DIR /data/freesurfer
set dtroot = /data/freesurfer/tracula
set subjlist = ( Subject001 )
set dcmroot = /data/freesurfer/tracula/inputs
set dcmlist = ( Subject001/dwi_QCed.nii )
set bvecfile = /data/freesurfer/tracula/inputs/Subject001/dwi_QCed.bvec
set bvalfile = /data/freesurfer/tracula/inputs/Subject001/dwi_QCed.bval
#don't do these, handled by DTIprep (or TORTOISE)
set doeddy = 0
set dorotbvecs = 0
#register via Freesurfer bbregister
set doregflt = 0
set doregbbr = 1
#put in MNI space
set doregmni = 1
set doregcvs = 0
set ncpts = (6 6 5 5 5 5 7 5 5 5 5 5 4 4 5 5 5 5)

Now this config file is great if you just want to run a single participant’s data through the pipeline.  The config file allows you to specify more than one participant, but for our automation process here let’s leave it at one because we’re going to use the power of UNIX to make life easier!  With this config file, you could execute Tracula with one command:

trac-all -prep -c NameOfConfigFile.txt

If we wanted to run this config file on another participant, we could go through and change the subject number and run it again, and so on and so forth.  But that seems really tedious!  So instead let’s use sed to change the subject name and then execute the next participant:

cp NameOfConfigFile.txt NameOfConfigFile_Subject002.txt
sed -i '' "s/Subject001/Subject002/g" NameOfConfigFile_Subject002.txt
trac-all -prep -c NameOfConfigFile_Subject002.txt

Now take it one step further and run this through a loop, making your script smart enough to not re-run on participants who already have output (though I think Tracula may be smart enough to avoid this on its own):

#output directory for Tracula results (dtroot in the config file)
tracdir=/data/freesurfer/tracula

for aSub in Subject???
do
	#check to see if they already exist!
	if [ ! -e $tracdir/$aSub ]; then
		echo "Starting process for $aSub"
		cp Tracula_Config.txt $aSub.config.txt
		sed -i '' "s/Subject001/${aSub}/g" $aSub.config.txt
		trac-all -prep -c $aSub.config.txt
	fi
done

Now your computer will sequentially run Tracula on each participant.  If you wanted to get even crazier and parallelize the process (hence the name of the post), you could spit each Tracula command into a file and then use GNU parallel to run the jobs (here up to 8 at a time) simultaneously:

echo "trac-all -prep -c $aSub.config.txt" > all_jobs.txt
parallel -j 8 < all_jobs.txt

And there you have it.  You can now easily automate Tracula and even parallelize it, without having to deal with config files over and over again!  To maximize your time, you might want to add lines for the -bedp, -path, and -stat stages as well, as sketched below.
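
Following the same pattern, here is one way to queue the later stages; note that -path depends on -bedp having finished for a given subject, hence the && chaining:

for aSub in Subject???
do
	echo "trac-all -bedp -c $aSub.config.txt && trac-all -path -c $aSub.config.txt" >> stage2_jobs.txt
done
parallel -j 8 < stage2_jobs.txt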

UCLA NeuroImaging Training Program

Just a quick reminder that the annual UCLA NeuroImaging Training Program is currently running.  You can download all of the materials (slides, video, etc.) from previous years (like 2014) on their website.  This year’s materials will be available after the workshop.  I also highly recommend, if you have some free time, checking out the live feed that runs while the course is going on over the next two weeks!

Besides learning a few things, you never know when you might see one of your favorite NeuroImaging bloggers, like Andy from Andy’s Brain Blog!
