So after last time, you've successfully run a group-level analysis that fits your scientific question, and now you want to know what will survive statistical scrutiny. Well, the good news is that you have a few options! Whichever one you choose, you should start by re-running your group-level analysis with a mask to remove "activity" outside of the brain (yes, it still happens). Since every correction below needs that mask, let's start there.

**Generating Masks for Group Level Analysis**

To generate a group-level mask, you have a few options. If you're using afni_proc.py (which I highly recommend) and you warp your data to standard space early on (e.g. -volreg_tlrc_warp), each subject gets a file called mask_group+tlrc.HEAD, which is that subject's brain mask after being warped to standard space. You can collect these from each subject and average them via 3dMean. The average is also handy for eyeballing your registrations as a whole!
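As a sketch (the subject directory names and the 0.7 cutoff below are placeholders, not prescriptions):

```shell
# Average each subject's warped brain mask into one dataset.
3dMean -prefix mask_mean subj*/mask_group+tlrc.HEAD

# The average is fractional (0 to 1): each voxel holds the fraction of
# subjects whose mask covers it. Binarize by keeping voxels present in
# at least ~70% of subjects (the 0.7 cutoff is a judgment call).
3dcalc -a mask_mean+tlrc -expr 'step(a-0.7)' -prefix mask_group_70
```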

If you didn't use afni_proc.py, you can take each subject's standardized anatomical (without skull) and run 3dAutomask on it (the default settings tend to work). You can then combine all of the resulting masks with 3dMean to create your group mask. Note that the reverse order gives similar results: run 3dMean on all of your subject anatomicals in standard space, then run 3dAutomask on that average.
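A rough sketch of the first route, with placeholder subject names and file paths:

```shell
# Make a brain mask from each subject's skull-stripped, standardized
# anatomical (default 3dAutomask settings usually suffice).
for subj in subj01 subj02 subj03; do
    3dAutomask -prefix automask_${subj} ${subj}/anat_ns+tlrc
done

# Average the individual masks, then binarize the result
# (the 0.5 cutoff here is a judgment call).
3dMean -prefix automask_mean automask_subj*+tlrc.HEAD
3dcalc -a automask_mean+tlrc -expr 'step(a-0.5)' -prefix group_mask
```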

Your third option is to take the standard-space anatomical template (e.g. TT_N27 or MNI152) and mask that for use with your group analysis. The warning here is that registrations are usually not perfect, so you could be masking out important bits of data from some subjects.
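This one is a single command; use whichever template your data were actually warped to (TT_N27 below is just an example):

```shell
# Mask the standard-space template itself.
3dAutomask -prefix template_mask TT_N27+tlrc
```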

Whichever option you chose, it's possible that your mask is on a different grid (different dimensions) from your statistical datasets. You can verify this by running 3dinfo -prefix -d3 -n4 *.HEAD in a folder containing all of your stat files and your mask. If your mask turns out to be on a different grid than your stats datasets, you can use either 3dresample or 3dfractionize to resample your mask to match your stats datasets.
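For example (dataset names are placeholders; -rmode NN keeps the mask binary under nearest-neighbor resampling):

```shell
# Compare grids: prefix, voxel dimensions, and matrix size.
3dinfo -prefix -d3 -n4 *.HEAD

# If the grids differ, resample the mask onto the stats grid.
3dresample -master group_stats+tlrc -rmode NN \
           -inset mask_group+tlrc -prefix mask_resamp
```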

**Statistical Corrections**

Now that you have a group mask, you can re-run your analysis using whatever group program you used before (e.g. 3dttest++, 3dANOVAx) with the mask, and you will automatically get a "free" FDR correction for each of your comparisons that no longer takes into account all the voxels far outside of the brain. This lets you see in the viewer an FDR "q-value" alongside each t-value and p-value that you are used to looking at.
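For example, with 3dttest++ (the prefix and sub-brick label below are placeholders for your own):

```shell
# Re-running the group test inside the mask restricts the FDR
# calculation to in-brain voxels.
3dttest++ -mask mask_group+tlrc \
          -prefix ttest_masked \
          -setA subj*/stats+tlrc'[taskA#0_Coef]'
```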

But ultimately, FDR has plenty of things that people don't like about it, the first being that it can make statistical results disappear even when you have large "blobs" of activation in your group-level analysis. The major alternative in AFNI is cluster-wise thresholding, where only clusters of contiguous voxels above a certain size count as surviving statistical scrutiny. In AFNI this is implemented through 3dClustSim.

Before running 3dClustSim, you need to estimate the blur (smoothness) of each of your datasets. If you used afni_proc.py (or uber_subject.py), you can grab the file blur_est.${subject_number}.1D for each subject, average the (x, y, z) values labeled "errts blur estimates", and get your overall group-level blur to input into 3dClustSim. If you didn't use afni_proc.py, you can run 3dFWHMx on each of your errts datasets with your subject-level mask and average those numbers.
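As a sketch, assuming each subject's blur_est file lives under a per-subject directory and contains a line of three values tagged "errts blur estimates" (as afni_proc.py writes it):

```shell
# Average the three "errts blur estimates" values across subjects.
awk '/errts blur estimates/ {x+=$1; y+=$2; z+=$3; n++}
     END {printf "%.2f %.2f %.2f\n", x/n, y/n, z/n}' subj*/blur_est.*.1D
```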

Finally, now that you have a group-level mask and your average blur, you can run 3dClustSim to generate the minimum cluster size needed for a given p-value to survive correction. Substitute your actual average blur for the 9 9 9 in the example below. This is a highly contrived example; typical cluster sizes are much larger than those shown below (in part because your datasets are likely at a higher resolution).

```
3dClustSim -mask mask+tlrc -fwhmxyz 9 9 9 -niml -prefix CStemp
```

```
# 3dClustSim -fwhmxyz 9 9 9
# Grid: 64x64x32 3.50x3.50x3.50 mm^3 (131072 voxels)
#
# CLUSTER SIZE THRESHOLD(pthr,alpha) in Voxels
# -NN 1  | alpha = Prob(Cluster >= given size)
#  pthr  |  0.100  0.050  0.020  0.010
# ------ | ------ ------ ------ ------
 0.020000  112.3  125.0  142.6  154.9
 0.010000   69.5   77.9   89.0   96.7
 0.005000   47.5   53.3   62.1   68.1
 0.002000   31.7   35.9   41.3   45.8
 0.001000   24.2   27.9   32.4   36.0
 0.000500   19.0   21.9   25.7   29.0
 0.000200   14.1   16.5   19.8   22.3
 0.000100   11.4   13.4   16.3   18.4
```

The table shows a p-value on the left and, to the right, the cluster size needed to reach each alpha level. So if you want an alpha of 0.01 at a threshold of p=0.02, you need a cluster of at least 154.9 (i.e. 155) voxels. You can also embed this information into the output of your group analysis program, so that when you view the results in AFNI you can see the alpha associated with each cluster at a given t-value threshold. The most amazing part is that the AFNI viewer will update your clusters on the fly.

```
3drefit -atrstring AFNI_CLUSTSIM_NN1 file:CStemp.NN1.niml \
        -atrstring AFNI_CLUSTSIM_MASK file:CStemp.mask    \
        statistics_dataset+tlrc
```

## Max Shim

/ October 30, 2013

The command used was: 3dClustSim -dxyz 9 9 9

is it not 3dClustSim -fwhmxyz 9 9 9 ?

## pete

/ November 1, 2013

You are correct, thanks for catching my typo! It should be -fwhmxyz 9 9 9. I have updated the post to reflect it.