ElsjePienaarGroup/TB-in-vitro

Overview

This repository contains the code to run the agent-based model associated with the paper "In silico agent-based modeling approach to characterize multiple in vitro tuberculosis infection models". The model was written in Repast Simphony (https://repast.github.io/). We have included all model files, the Slurm scripts we used to run job arrays on the HPC, and a MATLAB script to process the output.

Code setup

The model runs with Repast Simphony version 2.8 and Java 11. The archived release of Repast Simphony can be downloaded from https://github.com/Repast/repast.simphony/releases. Next, follow the instructions at https://repast.github.io/requirements.html to make sure you are using the correct Java version; Java 11 with HotSpot is available from https://adoptopenjdk.net/. In Repast, go to Eclipse -> Preferences -> Java -> Installed JREs -> Add..., select the AdoptOpenJDK 11 installation, check it, then Apply and Close. Clone this repository, then in Repast use File -> Open Projects from File System. The code can be run locally through the GUI that opens when you press the green play button.

How to run model on HPC

To prepare the model, select 'Configure and launch batch runs' in the GUI. Generate the batch parameter file, then select 'Create model archive for batch runs'. This creates complete_model.jar in the output folder, a jar file containing everything needed to run the model on the cluster. Upload this jar to the cluster along with the submission script and wrapper: the Slurm submission script is repast.slurm.array, which calls the wrapper repastwrapper_slurm.array.sh. Update the job name, array size, and account in these files. Unzip the jar, run 'chmod +x *.sh', and, if desired, replace unrolledParamFile.txt with a new parameter file. Submit the job array with 'sbatch repast.slurm.array'. After the model finishes running, navigate to the folder it was run in and run './outputcombiner.sh'. The combined data will be written to /combined_data/file-sink/.
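The key mechanic behind the job array is that each array task runs one line of the unrolled parameter file. The sketch below illustrates that selection step only; the parameter names (randomSeed, growthRate) and the line format are hypothetical stand-ins, not the actual format Repast writes to unrolledParamFile.txt, and the real wrapper repastwrapper_slurm.array.sh may do this differently:

```shell
# Create a tiny stand-in parameter file (one model run per line).
# The actual file is generated by Repast's batch configuration GUI.
cat > unrolledParamFile.txt <<'EOF'
randomSeed 1, growthRate 0.05
randomSeed 2, growthRate 0.10
randomSeed 3, growthRate 0.20
EOF

# On the cluster, Slurm sets SLURM_ARRAY_TASK_ID for each array task;
# here we set it by hand to demonstrate the selection step locally.
SLURM_ARRAY_TASK_ID=2

# Pull out the single parameter line for this task.
params=$(sed -n "${SLURM_ARRAY_TASK_ID}p" unrolledParamFile.txt)
echo "task ${SLURM_ARRAY_TASK_ID} runs with: ${params}"
```

This is why the array size in repast.slurm.array must match the number of lines in the parameter file: an array index past the last line would select nothing.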

Loading data into MATLAB

To load the data, use 'BatchProcessing_2022_08_25_noncontinuousruns.m'. Update fileSinkPath and fileSinkDate to match the names and locations of the combined data, and update dataTitle, which becomes the name of the saved .mat file; then run the script. The .mat files we used for our figures, along with the code for those figures, can be found on Zenodo (https://doi.org/10.5281/zenodo.8179392 and https://doi.org/10.5281/zenodo.7716685).

Useful links: