This repository contains the code used to generate the numerical results in the following paper:
"Mean-Field Control based Approximation of Multi-Agent Reinforcement Learning in Presence of a Shared Global State", Transactions on Machine Learning Research, May, 2023.
The paper can be cited as:
@article{mondal2023mean,
  title={Mean-Field Control based Approximation of Multi-Agent Reinforcement Learning in Presence of a Non-decomposable Shared Global State},
  author={Mondal, Washim Uddin and Aggarwal, Vaneet and Ukkusuri, Satish V},
  journal={arXiv preprint arXiv:2301.06889},
  year={2023}
}
Various parameters used in the experiments can be found in the Scripts/Parameters.py file.
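As a purely hypothetical illustration (the actual contents of Scripts/Parameters.py are not reproduced here), a parameter file of this kind usually collects experiment-wide constants in one place:
# Hypothetical sketch only; these names and values are illustrative, not the repository's actual parameters.
GAMMA = 0.99          # discount factor
LEARNING_RATE = 1e-3  # optimizer step size
HIDDEN_DIM = 64       # width of the policy/value networks
NUM_EPISODES = 1000   # training episodes per seed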
The following package versions were used:
python 3.8.12
pytorch 1.10.1
numpy 1.21.2
matplotlib 3.5.0
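One way to install the Python packages listed above (note that pytorch is distributed on PyPI as torch):
pip install torch==1.10.1 numpy==1.21.2 matplotlib==3.5.0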
Generated results will be stored in the Results folder (created on the fly). Some pre-generated results are available for display in the Display folder. Specifically, Fig. 1 depicts the error as a function of N (the number of agents).
To run the experiment, execute:
python3 Main.py
The progress of the experiment is logged in Results/progress.log
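While the experiment is running, the log can be followed from another terminal, for example:
tail -f Results/progress.log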
The available command-line options are:
--train : train from scratch; if omitted, a pre-trained model is used
--minN : minimum value of N
--numN : number of N values
--divN : difference between two consecutive N values
--maxSeed : number of random seeds
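For example, a full run that trains from scratch might look like the following (the numeric values are illustrative only, not necessarily the settings used in the paper):
python3 Main.py --train --minN 5 --numN 10 --divN 5 --maxSeed 5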