Main processing loop
The title for the final grouped-network connectome file depends on the group names. The resulting file for this example is ‘parkinsons-controls.cff’. The following code implements the format a-b-c-...x.cff for an arbitrary number of groups.
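As a minimal sketch, assuming a hypothetical two-group group_list (the real dictionary is built in an earlier section of this example, and the subject IDs below are invented), the loop further down produces the ‘parkinsons-controls’ title:

# Hypothetical group_list; the subject IDs are placeholders.
group_list = {'parkinsons': ['pat01', 'pat02'],
              'controls': ['con01', 'con02']}
# With dictionary insertion order preserved (Python 3.7+), the title loop below
# yields title == 'parkinsons-controls', and therefore 'parkinsons-controls.cff'.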
Warning
The ‘info’ dictionary below is used to define the input files. In this case, the diffusion weighted image contains the string ‘dwi’. The same applies to the b-values and b-vector files, and this must be changed to fit your naming scheme.
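For example, if your diffusion-weighted images were stored under the stem ‘dti’ rather than ‘dwi’ (a hypothetical naming scheme), only the corresponding entry changes; these values are substituted into the data-grabbing template arguments inside the first-level workflow (see the workflow source referenced below):

# Hypothetical adaptation for data named <subject_id>/dti.* instead of <subject_id>/dwi.*
info = dict(dwi=[['subject_id', 'dti']],
            bvecs=[['subject_id', 'bvecs']],
            bvals=[['subject_id', 'bvals']])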
The call to create_group_connectivity_pipeline in the code below creates the first-level processing workflow from the group and subject information.
See also
- nipype/workflows/dmri/mrtrix/group_connectivity.py
- nipype/workflows/dmri/camino/connectivity_mapping.py
- dMRI: Connectivity - Camino, CMTK, FreeSurfer
The purpose of the second-level workflow is simple: it merges each subject’s CFF file into one, so that there is a single file containing all of the networks for each group. This can be useful for performing Network Brain Statistics using the NBS plugin in ConnectomeViewer.
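As a rough sketch of how one of the merged group files could be inspected afterwards, assuming the cfflib package used by ConnectomeViewer (the file name below is hypothetical, and the accessor names should be checked against the cfflib documentation):

import cfflib

# Load one merged group file (hypothetical name).
grouped = cfflib.load('parkinsons.cff')

# Each bundled connectome network corresponds to one subject in the group.
for network in grouped.get_connectome_network():
    print(network.name)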
title = ''
for idx, group_id in enumerate(group_list.keys()):
    title += group_id
    if not idx == len(group_list.keys()) - 1:
        title += '-'
    info = dict(dwi=[['subject_id', 'dwi']],
                bvecs=[['subject_id', 'bvecs']],
                bvals=[['subject_id', 'bvals']])
    l1pipeline = create_group_connectivity_pipeline(group_list, group_id, data_dir, subjects_dir, output_dir, info)
    # Here we define the parcellation scheme used for the connectivity mapping
    parcellation_scheme = 'NativeFreesurfer'
    cmp_config = cmp.configuration.PipelineConfiguration()
    cmp_config.parcellation_scheme = parcellation_scheme
    l1pipeline.inputs.connectivity.inputnode.resolution_network_file = cmp_config._get_lausanne_parcellation(parcellation_scheme)['freesurferaparc']['node_information_graphml']
    l1pipeline.run()
    l1pipeline.write_graph(format='eps', graph2use='flat')
    # The second-level pipeline is created here
    l2pipeline = create_merge_networks_by_group_workflow(group_list, group_id, data_dir, subjects_dir, output_dir)
    l2pipeline.run()
    l2pipeline.write_graph(format='eps', graph2use='flat')
Now that the for loop is complete, there are two grouped CFF files, each containing the networks for the appropriate subjects. It is also convenient to have every subject in a single CFF file, so that is what the third-level pipeline does.
l3pipeline = create_merge_group_networks_workflow(group_list, data_dir, subjects_dir, output_dir, title)
l3pipeline.run()
l3pipeline.write_graph(format='eps', graph2use='flat')
The fourth and final workflow averages the networks within each group and saves them in another CFF file.
l4pipeline = create_average_networks_by_group_workflow(group_list, data_dir, subjects_dir, output_dir, title)
l4pipeline.run()
l4pipeline.write_graph(format='eps', graph2use='flat')
Example source code
You can download the full source code of this example. This same script is also included in the Nipype source distribution under the examples directory.