Introduction

This repository contains the code used to generate the numerical results reported in the following paper:

"Mean-Field Control based Approximation of Multi-Agent Reinforcement Learning in Presence of a Shared Global State", Transactions on Machine Learning Research, May, 2023.

[arXiv] [TMLR]

@article{mondal2023mean,
  title={Mean-Field Control based Approximation of Multi-Agent Reinforcement Learning in Presence of a Non-decomposable Shared Global State},
  author={Mondal, Washim Uddin and Aggarwal, Vaneet and Ukkusuri, Satish V},
  journal={arXiv preprint arXiv:2301.06889},
  year={2023}
}

Parameters

The various parameters used in the experiments can be found in the Scripts/Parameters.py file.
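
A quick way to inspect these settings without opening the file is sketched below. This is an illustrative snippet, not part of the repository: it assumes it is run from the repository root (so that the Scripts folder is importable) and makes no assumption about the specific parameter names defined in the file.

# Illustrative sketch (not part of the repository): print the settings
# defined in Scripts/Parameters.py. Run from the repository root; no
# specific parameter names are assumed.
from Scripts import Parameters

for name in sorted(dir(Parameters)):
    if not name.startswith("_"):
        print(name, "=", repr(getattr(Parameters, name)))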

Software and Packages

python 3.8.12
pytorch 1.10.1
numpy 1.21.2
matplotlib 3.5.0
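
If the environment needs to be set up from scratch, the versions listed above can be installed with pip, for example as below (the PyTorch package is published on PyPI as torch; adapt the command if a CUDA-specific build is required):

pip install torch==1.10.1 numpy==1.21.2 matplotlib==3.5.0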

Results

Generated results are stored in the Results folder (created on the fly if it does not exist). Some pre-generated results are available in the Display folder. Specifically, Fig. 1 depicts the error as a function of N (the number of agents).

Run Experiments

python3 Main.py

The progress of the experiment is logged in Results/progress.log.
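
While an experiment is running, the log can be followed live with a standard utility, for example:

tail -f Results/progress.log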

Command Line Options

The available command-line options are listed below; an example invocation follows the list:

--train   : train the model from scratch; if omitted, a pre-trained model is used
--minN    : minimum value of N
--numN    : number of N values
--divN    : difference between two consecutive N values
--maxSeed : number of random seeds
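
An example invocation combining these options is given below; the numeric values are purely illustrative (not the settings used in the paper), and --train is assumed to be a boolean switch:

python3 Main.py --train --minN 10 --numN 5 --divN 10 --maxSeed 5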