The mysterious flipped brain

I previously reviewed a series of DICOM to NIFTI converters.  The entire purpose of this post is to stress how important it is to check the orientation of the images coming out of any DICOM to NIFTI converter.  Today's example illustrates an incorrect flip in dcm2nii, but I want to emphasize that this happens with other converters from time to time.

I use dcm2nii for DICOM to NIFTI conversion in a lot of studies; it's fast and gives consistent names to the NIFTI files it outputs.  The documentation gives plenty of disclaimers about checking the left/right flip and orientation of the data, and I would put the error rate at around 1 in 200.  But when it does give unexpected results, it's pretty striking!  As you can see in the image below, the brain is upside down, despite the header reporting an RAI orientation.

upside_down


If I take the same DICOM files and convert them via Dimon (part of AFNI), I get the correct orientation.

rightside_up

Visualizations are thanks to the Mac Quicklook plugin; you can grab a copy at NITRC.  If you are worried about other types of flips, you can tape a vitamin E tablet to your participant's head while they are in the scanner.  The tablet is visible in both EPIs and anatomical data.  Just make sure that you always put it on the same side!
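If you want to script a quick orientation sanity check across converters, the axis codes can be read straight from the sform rows of a NIFTI-1 header.  Below is a minimal pure-Python sketch of that idea; it assumes an uncompressed, little-endian .nii file with a valid sform, and in practice a library like nibabel does this far more robustly via nib.aff2axcodes.

```python
import struct

def axis_codes(header_bytes):
    """Derive (x, y, z) orientation letters from a NIFTI-1 header's sform rows.

    Minimal sketch: assumes little-endian byte order and a valid sform.
    """
    # srow_x, srow_y, srow_z: three rows of 4 floats at offsets 280, 296, 312
    rows = [struct.unpack_from("<4f", header_bytes, off) for off in (280, 296, 312)]
    pos, neg = "RAS", "LPI"
    codes = []
    for j in range(3):                                # each voxel axis (a column)
        col = [rows[i][j] for i in range(3)]
        i = max(range(3), key=lambda k: abs(col[k]))  # dominant world axis
        codes.append(pos[i] if col[i] >= 0 else neg[i])
    return tuple(codes)

# Usage sketch (gzipped .nii.gz files would need gzip.open instead):
# with open("brain.nii", "rb") as f:
#     print(axis_codes(f.read(348)))
```

Run this on the output of two different converters for the same DICOM series; if one of them flipped the data, the letter for that axis will differ between the two files.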

UPDATE:

It’s worth checking your version of dcm2nii.  The newest version (June 2013) fixes all but one of my flipped brains!

Quickly Creating Masks in AFNI

Often when creating a mask for use with 3dROIstats, 3dmaskave, or 3dmaskdump, we create the mask at a higher resolution than our functional runs, as detailed here.  One reason the mask is created at a higher resolution is that we base anatomical masks on either 1) the high-resolution anatomical or 2) a high-resolution atlas, which may be 1x1x1 mm, while our functional data may be 3x3x3 mm.  To reduce the grid size to match the functional data, we use either 3dresample or 3dfractionize.  3dfractionize has the benefit of setting a clip level and of being able to warp the mask from one space to another (tlrc –> orig / orig –> tlrc).  But it turns out that you don’t necessarily need to make creating masks a two-step process, particularly if you are basing the mask on an atlas!  For example:

3dresample -master Subject1+tlrc. \
-inset CA_N27_ML:left:middle_frontal_gyrus \
-prefix L_MFG

Here I’ve used 3dresample to create a mask of the left middle frontal gyrus (MFG) based on the Colin27 Macro Label Atlas.  In this one step I have also supplied 3dresample with a single subject from my dataset (Subject1+tlrc) as the master, so the atlas ROI is resampled to the correct grid size and space.  For more information on masking in AFNI, check out these posts (here and here).

Using afni_proc.py for fMRI analysis

I receive a lot of questions about how to set up basic analyses in AFNI.  Previously I detailed using uber_subject.py, the AFNI graphical user interface to afni_proc.py, which really does all of the hard work under the hood.  Today I’m going to briefly review the common options in afni_proc.py and why I use them.  You can use the script generated by uber_subject.py as a nice starting point, or Example 6 from the afni_proc.py help.

So let’s start off with my file structure.  Often when I convert files from DICOM to NIFTI, all of the NIFTI files end up in one directory.  I like to organize them around their purpose so that when I write scripts, things get easier.  First of all, I put all of the anatomical images into a folder called “anat”, and each experiment has several EPI files that go into a folder called func_something, where the something is an easy way of describing what the experiment is.  In this case, I’m processing data from a fast localizer, which has two conditions, one for print and one for speech.  The third necessary folder is a stim_times folder, where I keep all of my stimulus timing files.  In this case I only have two conditions, but in other experiments where I have 9 conditions, things can get cluttered very quickly.  Putting that together, one particular subject might look like this:

Subject001
--anat
----Sag3dMPRAGE.nii.gz
--func_fastloc
----ep2dbold_001.nii.gz
----ep2dbold_002.nii.gz
--stim_times
----condition_01.txt
----condition_02.txt

Once I have my folders organized in this manner, I can run the following script within each folder (or loop through subjects using a shell script).  This tells afni_proc.py where the functional and structural images are; a breakdown of what each step does follows below.

afni_proc.py -subj_id ${aSub} \
-script afni_fastloc_${aSub}.tcsh \
-out_dir ${aSub}.fastloc \
-dsets func_fastloc/*ep2dbold*.nii.gz \
-blocks tshift align tlrc volreg blur mask scale regress \
-copy_anat anat/*Sag3D*.nii.gz \
-anat_has_skull yes \
-tcat_remove_first_trs 6 \
-tshift_opts_ts -tpattern alt+z2 \
-tlrc_opts_at -init_xform AUTO_CENTER \
-align_opts_aea -giant_move \
-volreg_align_e2a \
-volreg_align_to first \
-volreg_tlrc_warp \
-blur_size 8 \
-regress_stim_times stim_times/condition_??.txt \
-regress_stim_labels c1 c2 \
-regress_local_times \
-regress_basis 'GAM' \
-regress_reml_exec \
-regress_est_blur_epits \
-regress_est_blur_errts \
-regress_censor_outliers 0.1 \
-regress_censor_motion 0.3 \
-regress_opts_3dD \
-num_glt 3 \
-gltsym 'SYM: +c1' -glt_label 1 'print' \
-gltsym 'SYM: +c2' -glt_label 2 'speech' \
-gltsym 'SYM: +c1 -c2' -glt_label 3 'print-speech' \
-jobs 8 \
-bash -execute

So what does it all mean?  That’s the question that comes up fairly often.  I find it helpful to go a few lines at a time in understanding what you’re asking afni_proc.py to do.  Also, after you run the script, you will see that afni_proc.py simply generates a LONG tcsh file that runs the analyses on each subject.  You can always view what that script is doing as well.

The first three lines (shown below), tell afni_proc.py the subject name (useful for naming files), the name of the script to create (useful if you want to run multiple iterations of the script), and the output directory to save all of the data to (also useful if you run the script with different permutations).

afni_proc.py -subj_id ${aSub} \
-script afni_fastloc_${aSub}.tcsh \
-out_dir ${aSub}.fastloc \

Next we tell afni_proc.py which “blocks” of analysis code we wish to run.  You can always change or reorder these if you wish.  Here we are asking the script to perform sinc interpolation (a.k.a. time shifting, slice timing correction); next it will align the EPI and anatomical images (co-registration), create a normalized image in standard space (you can select which template to use!), perform motion correction (realignment), apply a spatial smoothing kernel, create a mask (but not actually apply it in the single-subject analysis unless you tell it to), scale each voxel time series to a mean of 100 (so betas are interpretable as percent signal change), and run the regressions using 3dDeconvolve.  One thing to note: some people wonder why we place co-registration and normalization before realignment/motion correction.  It’s because AFNI will take all three of these spatial transforms and concatenate them into a single transform, which means it only has to interpolate your data once.

-blocks tshift align tlrc volreg blur mask scale regress \

The next three lines tell afni_proc.py about our data: where the anatomical image is, that it has a skull (or that it doesn’t), and that we want to remove the first 6 TRs of functional data that were acquired before the scanner reached steady state.  Some people keep these images and use them to get a better co-registration with the anatomical scan.  My warning is that if you perform temporal smoothing, these images can further throw off your results.

-copy_anat anat/*Sag3D*.nii.gz \
-anat_has_skull yes \
-tcat_remove_first_trs 6 \

Now we are down to the meat of the script.  Next we tell the script that we acquired our data in ascending interleaved order, starting with slice 1 instead of slice 0.  Your data may differ from this, but if you acquire on a Siemens scanner with an even number of slices, this seems to be the more common pattern.  If your scanner puts this information into the DICOM files, and your NIFTI converter correctly encodes it, you could leave out this option and afni_proc.py would plug in the correct info for you.  I don’t recommend doing that, as it assumes a lot of programs are all playing nicely together.

-tshift_opts_ts -tpattern alt+z2 \
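To see concretely what alt+z2 implies, here is a small illustration (my own sketch, not AFNI code) of the slice acquisition order it describes: odd-numbered slices first, then even, both ascending.

```python
def slice_order_alt_z2(n_slices):
    """Interleaved ascending acquisition starting at slice 1 (AFNI's alt+z2).

    Odd-indexed slices are acquired first, then the even-indexed slices.
    """
    return list(range(1, n_slices, 2)) + list(range(0, n_slices, 2))

print(slice_order_alt_z2(6))  # [1, 3, 5, 0, 2, 4]
```

Compare this against your scanner's documented slice order before trusting any -tpattern value.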

As is often the case, our anatomical image may start out very far from the EPI images in terms of center and orientation.  The following command tells afni_proc.py to pass align_epi_anat.py an option that widens the search parameters.  I recommend -giant_move because it gives me much better fits than omitting it.

-align_opts_aea -giant_move \

Next, we will perform the normalization.  If you wish to change the template you normalize to, add the -tlrc_base option with the name of the new template.  The init_xform tells afni_proc.py to align the centers of the subject anatomical and the template image if they are far off.

-tlrc_opts_at -init_xform AUTO_CENTER \

If you wish to perform a non-linear warp instead of an affine 12-dof warp, replace the command above with the one below.

-tlrc_NL_warp \

The next three lines all have to do with realignment / motion correction.  Remember that the three spatial transforms are concatenated together.  Here we tell afni_proc.py that we want the transform to go from the EPI to the anatomical data, that motion correction should realign to the FIRST image in the series (other options are third and last), and that the warp to standard space should actually be concatenated in.  If you omit this last option, your final data and statistics will be in subject space!

-volreg_align_e2a \
-volreg_align_to first \
-volreg_tlrc_warp \

Next we set our spatial smoothing kernel size.  If you want to set a FINAL smoothness value, use the -blur_to_fwhm option as well as the line below!

-blur_size 8   \

Next we need to specify where our stimulus timing files are and the condition names.  Here I tend to label them c1 and c2, because I later add GLTs to specify the names of the conditions.  This is just a style choice for most people doing BLOCK or HRF analysis.  But if you are using TENT/CSPLIN, you end up with a series of regressors for each STEP of these functions, which can be confusing.  You can reduce the confusion by using the short condition labels here and the actual names in GLTs, which can be used to average over all or part of the window.  The final option says that our stimulus timing files use local times, meaning that they are relative to the start of each EPI run.  The alternative is global times, which are relative to the start of the first imaging run.  I recommend local times; there’s more flexibility if you move things around.

-regress_stim_times stim_times/condition_??.txt \
-regress_stim_labels c1 c2 \
-regress_local_times \

Next we specify the basis function to use.  Common options are usually GAM for a gamma function and TENT for making no assumptions about the shape of the HRF and instead estimating it from the data.  It’s worth reading the full help of 3dDeconvolve, as there are many other options (SPM with temporal derivative for example) and you should make decisions based on what fits your need or what you’re trying to replicate.

-regress_basis 'GAM' \

The next option runs the data through 3dREMLfit in addition to the normal 3dDeconvolve.  The advantage is that 3dREMLfit includes a correction for temporal autocorrelation via an ARMA(1,1) model.  The disadvantage is that it takes a while to run, but modern computers with multiple CPU cores should be able to accomplish the analyses in only a little more time than 3dDeconvolve alone.

-regress_reml_exec \

The next two lines tell afni_proc.py to estimate the smoothness of the data.  The first estimates the smoothness of the input EPIs; the second estimates the smoothness of the error time series file.  These values should be roughly comparable, with the errts estimates slightly smaller.  Take the values from the blur_est.1D files and use them to calculate the inputs to 3dClustSim, as shown in my previous post.

-regress_est_blur_epits \
-regress_est_blur_errts \

Next we specify the levels of censoring the script should use.  The first line censors TRs where more than 10% of brain voxels are outliers; the second censors TRs where the motion between successive volumes (the Euclidean norm of the six motion-parameter derivatives) exceeds 0.3, which works out to roughly 0.3 mm of translation or 0.3 degrees of rotation.

-regress_censor_outliers 0.1 \
-regress_censor_motion 0.3 \

At this point, almost everything else in the script is directly passed through to 3dDeconvolve or 3dREMLfit.  It begins with the line below.

-regress_opts_3dD \

The first options we pass say that we have 3 contrasts (or GLTs): one for print, one for speech, and one for the subtraction print-speech.

-num_glt 3 \
-gltsym 'SYM: +c1' -glt_label 1 'print' \
-gltsym 'SYM: +c2' -glt_label 2 'speech' \
-gltsym 'SYM: +c1 -c2' -glt_label 3 'print-speech' \

To speed up processing we include a flag for using multiple processors.  This flag only applies to 3dDeconvolve; 3dREMLfit, 3dAllineate, and the other OpenMP-enabled programs will instead use the number of threads specified by the environment variable OMP_NUM_THREADS, which you can set in bash with “export OMP_NUM_THREADS=8”.

-jobs 8 \

Finally we tell afni_proc.py to output the command used to execute the script in bash syntax (my preference), which is really just cosmetic here since the next option executes the script immediately.

-bash \
-execute

That wraps up a general overview of using afni_proc.py and what each of the options does.  There are plenty of other options that one can set, but these are an excellent starting point.  I use most of these settings in 90% of the analyses I run.

Simulating fMRI Designs

I could say a lot about properly simulating fMRI experiments.  Basically, it’s important to measure the efficiency of your design before running the experiment (see here).  If you wanted to perform the calculations yourself, MATLAB/Python/Octave are all options and the process is “fairly simple”:

XtX = design_matrix' * design_matrix;                % X'X
contrast_variance = contrast' * inv(XtX) * contrast; % smaller = more efficient
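Since Python is one of the options mentioned, here is the same calculation as a dependency-free sketch for a toy two-regressor design (the 2x2 inverse is hand-rolled to keep it self-contained; with NumPy you would simply use np.linalg.inv):

```python
def contrast_variance(X, c):
    """Return c' (X'X)^-1 c for a two-column design matrix X (rows = TRs)."""
    # Build X'X for a two-column design
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X)
    det = a * d - b * b
    inv = [[d / det, -b / det], [-b / det, a / det]]   # 2x2 matrix inverse
    # Quadratic form c' (X'X)^-1 c
    return sum(c[i] * inv[i][j] * c[j] for i in range(2) for j in range(2))

# Toy orthogonal design: each regressor "on" during different TRs
X = [[1, 0], [1, 0], [0, 1], [0, 1]]
print(contrast_variance(X, [1, -1]))  # lower values = more efficient design
```

Lower contrast variance means a more efficient design for that contrast, which is exactly the quantity the iterative search below tries to minimize.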

But if 1) you don’t feel like writing your own code, or 2) you want to test several designs to find the most efficient one, then you’re in luck.  Today I’m mostly going to write about adjusting stimulus timing files to find the most “efficient” design possible.  When simulating designs, I usually use one (or both) of two software packages: AFNI and Optseq2.

AFNI: Within AFNI, there are a few tools you will make use of: make_random_timing.py, 3dDeconvolve, and 3dTstat.  The first, make_random_timing.py, automatically generates random stimulus timing files that fit a number of criteria (e.g. number of stimulus conditions, number of runs, number of repetitions in each run, etc.).  These timings then get passed to 3dDeconvolve with the “-nodata” option to set up the matrices for analysis.  These tools replace the previous RSFgen style of estimating designs detailed by the AFNI group.  For instance, if I wanted to simulate an experiment with 6 runs (each 155 seconds), 6 conditions, and each condition presented 5 times per run, I might use a command as follows.

make_random_timing.py -run_time 155 -stim_dur 2 -num_reps 5 \
-num_stim 6 -num_runs 6 -prefix bil-afni_cond -save_3dd_cmd myDeconvolve.txt

This will generate a series of stimulus timing files all with the prefix bil-afni_cond (e.g. bil-afni_cond_01.1D) and a file called myDeconvolve.txt with the following in it:

3dDeconvolve \
 -nodata 930 1.000 \
 -polort 2 \
 -concat '1D: 0 155 310 465 620 775' \
 -num_stimts 6 \
 -stim_times 1 bil-afni_cond_01.1D 'BLOCK(2,1)' \
 -stim_times 2 bil-afni_cond_02.1D 'BLOCK(2,1)' \
 -stim_times 3 bil-afni_cond_03.1D 'BLOCK(2,1)' \
 -stim_times 4 bil-afni_cond_04.1D 'BLOCK(2,1)' \
 -stim_times 5 bil-afni_cond_05.1D 'BLOCK(2,1)' \
 -stim_times 6 bil-afni_cond_06.1D 'BLOCK(2,1)' \
 -x1D X.xmat.1D -xjpeg disp.jpg

# compute the sum of non-baseline regressors
3dTstat -sum -prefix sum_ideal.1D X.xmat.1D'[18..$]'

# consider plotting the SUM below non-polort regressors
# command: 1dplot -xlabel Time -ynames SUM - sum_ideal.1D X.xmat.1D'[18..$]'

At this point, you can run the file myDeconvolve.txt (e.g. sh myDeconvolve.txt > myoutput.txt), which will fit the matrices via 3dDeconvolve and compute the sum of the ideal regressor waveforms.  If you choose to plot this, it will look something like:

demo_ideal

To get the relevant values, you need to look in myoutput.txt.  You can use awk, Python, or any number of other tools to extract these values, sum them, and save that sum as the “value to beat” for successive iterations.  The goal is to find the smallest summed value across all of your conditions.  Essentially you are searching for the simulated design whose design matrix leaves the smallest amount of unexplained variance.

Stimulus: Stim#1 
 h[ 0] norm. std. dev. = 0.1182
Stimulus: Stim#2 
 h[ 0] norm. std. dev. = 0.1148
Stimulus: Stim#3 
 h[ 0] norm. std. dev. = 0.1163
Stimulus: Stim#4 
 h[ 0] norm. std. dev. = 0.1135
Stimulus: Stim#5 
 h[ 0] norm. std. dev. = 0.1202
Stimulus: Stim#6 
 h[ 0] norm. std. dev. = 0.1193
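Extracting and summing those values is easy to script.  Here is a sketch using Python's re module against output formatted like the listing above (awk would work just as well):

```python
import re

def sum_norm_std_dev(text):
    """Sum every 'norm. std. dev.' value in 3dDeconvolve -nodata output."""
    values = [float(v)
              for v in re.findall(r"norm\. std\. dev\. =\s*([\d.]+)", text)]
    return sum(values)

# Run once per simulated design; keep the design with the smallest sum.
```

Wrap this in a loop over candidate timing files and track the minimum, and you have the whole search procedure described below.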

Take that process and loop it a few thousand times, keeping the design with the smallest std. deviations.  Also worth noting: I didn’t show it here, but you can add GLTs (contrasts) to your model design and measure the standard deviation for those as well, which is also informative (sometimes more informative) for picking an ideal design.  If you are looking for guidance, I just discovered that the AFNI group has an updated script that you can modify for your purposes.

In addition to modeling the efficiency of the evoked responses, adding GLTs (contrasts) to the 3dDeconvolve command generated above can be particularly useful if you have several stimulus classes going into one contrast of interest.  The same process applies: take the efficiency value and use it as the score to beat as you repeat the process across permutations.

Optseq2: Optseq2 provides similar functionality, albeit with slightly different options.  The decision is yours, and it is often helpful to measure the efficiency of optseq2’s output in AFNI via 3dDeconvolve.  If you’re determined to use optseq2, you can check out the usage examples here.

Adjusting MRI Smoothness for Multi-Scanner Comparisons

Typically we smooth (a.k.a. spatially filter) our fMRI data using a fixed kernel size.  And as we know, the size of the smoothing kernel makes some difference in the final results (see below).  This shows a group analysis map; the results are more striking on single-subject maps.

smoothing

The common misconception is that you are smoothing your data to whatever level you set the kernel size to (if you use AFNI’s 3dmerge or SPM’s Smooth tools), when in fact you are adding smoothness to the data.  Data coming from different scanners, depending on a host of parameters (TE, TR, FA, Field Map), may have different inherent smoothness.  If you are doing cross-scanner comparisons, or just want similar levels of smoothness across your participants, you’re in luck, because there are of course AFNI tools to help you.
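The reason applying a fixed kernel "adds" smoothness is that Gaussian FWHMs combine approximately in quadrature.  A quick sketch of the arithmetic (an approximation, since real fMRI noise is only roughly Gaussian):

```python
import math

def final_fwhm(inherent, applied):
    """Approximate resulting smoothness after applying a Gaussian kernel (mm)."""
    return math.sqrt(inherent ** 2 + applied ** 2)

def kernel_to_reach(target, inherent):
    """Kernel needed to bring data of a given inherent smoothness to a target."""
    return math.sqrt(target ** 2 - inherent ** 2)

# Two scanners with different inherent smoothness, same 6 mm kernel applied:
print(final_fwhm(3.0, 6.0))   # ~6.7 mm
print(final_fwhm(5.0, 6.0))   # ~7.8 mm
```

So the same 6 mm kernel leaves the two datasets about a millimeter apart in final smoothness, which is exactly the gap the tools below are designed to close.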

1. 3dFWHMx will allow you to estimate the smoothness of your data.  In a previous post on Group Analyses, I talked about how to use this in conjunction with 3dClustSim to estimate significant cluster size based on the smoothness of the errts dataset.

2.  3dBlurToFWHM goes one step further: instead of applying a smoothing kernel and then having to estimate your smoothness, this tool iteratively smooths the image until it reaches a final FWHM smoothness value.  Perfect for performing cross-scanner comparisons.

What’s better, you can use these tools on datasets not processed in AFNI, thanks to all AFNI tools working on NIFTI-formatted images (as well as the AFNI-style HEAD/BRIK).  Of course, if you use AFNI, you can add this to your afni_proc.py processing stream with the “-blur_to_fwhm” option.