Creating AFNI images via command line and Xvfb

Quite a while ago, I wrote a post about making automated snapshots of MRI activation with AFNI.  One of the things I always appreciated about FSL was that they provided a series of ready-made images to show off where activation was in the brain for a given analysis (at least using FEAT).

So when I started using AFNI, I wanted a quick way to show the activation for various contrasts without having to open AFNI for each subject, set the sub-brik and threshold, and take a screenshot.  Fortunately, AFNI comes with a tool called plugout_drive, which allows you to “remote control” the AFNI user interface by setting its various elements.  An important note: the drive commands can be issued either from the separate plugout_drive program or through the command used to launch AFNI.  In this example I pass the commands as part of opening AFNI to keep things shorter.
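For reference, the standalone program accepts the same commands; a minimal sketch of driving an already-running AFNI session from outside (the specific commands here are just illustrative):

plugout_drive -com 'OPEN_WINDOW A.axialimage' \
              -com 'SET_THRESHNEW A .001 *p' \
              -quit

This requires AFNI to be running and listening for plugouts, which is why the launch-time approach below is more convenient for fully scripted use.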

The plugout_drive works really well for making the entire process hands-free, but when you are creating automated snapshots for, say, 100 subjects, things get clunky for two reasons: 1) it takes a while; 2) AFNI opening and closing will grab the screen focus away from you, making the computer mostly unusable while the process is taking place.  Well, we can solve at least one of those problems by using Xvfb (the X Virtual Framebuffer).  Xvfb allows you to launch X windows (of which AFNI is one) without actually displaying the contents on your screen.

The basic setup works like this: you launch Xvfb with the settings for the screen dimensions (here 1024×768 with 24-bit depth), then call AFNI via the DISPLAY that Xvfb is responsible for.  AFNI will then open in this virtual framebuffer and do its thing.  First it will create a 6×6 montage of the axial view, skipping three slices between each image.  Next it closes the sagittal window and sets the underlay to the anat_final+orig dataset that uber_subject.py or afni_proc.py automatically generates.  It then turns off the functional autorange and sets the range to an arbitrary value of 10.  Next it loads the stats dataset as the overlay and centers the image in the window (the center coordinates are retrieved from 3dCM, center of mass).  Then it loads the Coefficient and T-stat sub-briks for the condition of interest (Print first, in this case), sets the p-value to 0.001, and saves a screenshot.  We do the same thing for the Speech condition: set the sub-briks and threshold, and save a screenshot.

# Get the center of mass of the anatomical scan, used later to center the view.
centerC=`3dCM anat_final.${aDir}+orig`

# Start a virtual X server on display :1 with a 1024x768, 24-bit screen.
`which Xvfb` :1 -screen 0 1024x768x24 &

# Drive AFNI on the virtual display; each -com sets up or saves a view.
DISPLAY=:1 afni -com "OPEN_WINDOW A.axialimage mont=6x6:3 geom=600x600+800+600" \
-com "CLOSE_WINDOW A.sagittalimage" \
-com "SWITCH_UNDERLAY anat_final.${aDir}+orig" \
-com "SET_FUNC_AUTORANGE A.-" \
-com "SET_FUNC_RANGE A.10" \
-com "SWITCH_OVERLAY stats.${aDir}+orig" \
-com "SET_DICOM_XYZ A ${centerC}" \
-com "SET_SUBBRICKS A -1 13 14" \
-com "SET_THRESHNEW A .001 *p" \
-com "SAVE_JPEG A.axialimage ../snapshots/${aDir}_print.jpg" \
-com "SET_SUBBRICKS A -1 16 17" \
-com "SET_THRESHNEW A .001 *p" \
-com "SAVE_JPEG A.axialimage ../snapshots/${aDir}_speech.jpg" \
-com "QUIT"
# Tear down the virtual display now that the snapshots are saved.
killall Xvfb

At the end we kill Xvfb.  There are cleaner ways to do this (for example, saving its PID and killing only that process), but I usually don’t bother.  You can then of course take this setup and nest it in a loop to go through each subject.  In this case, we cd into each results directory created by afni_proc.py and then run the code above.  If you want to modify the code to use the separate plugout_drive program, you can see the previous blog post and merge it with this one.
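The subject loop can be sketched like this (the *.results directory naming is an assumption; adjust the glob to however afni_proc.py named your output directories):

```shell
# Minimal sketch of looping over per-subject results directories.
for d in *.results; do
    aDir=${d%.results}      # subject ID, as used by the snapshot commands
    (
        cd "$d" || exit
        echo "snapshotting ${aDir}"   # replace this echo with the Xvfb/afni block
    )
done
```

Running the body in a subshell means each iteration returns to the parent directory automatically, so no trailing cd .. is needed.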
