MTRL
Multi Task RL Algorithms
Introduction
MTRL is a library of multi-task reinforcement learning algorithms. It has two main components:
Components and agents that implement the multi-task RL algorithms.
Experiment setups that enable training/evaluation on different setups.
Together, these two components enable use of MTRL across different environments and setups.
List of publications & submissions using MTRL (please create a pull request to add missing entries)
License
MTRL is licensed under the MIT License.
Citing MTRL
If you use MTRL in your research, please use the following BibTeX entry:
@misc{Sodhani2021MTRL,
  author = {Shagun Sodhani and Amy Zhang},
  title = {MTRL - Multi Task RL Algorithms},
  howpublished = {Github},
  year = {2021},
  url = {https://github.com/facebookresearch/mtrl}
}
Setup
Clone the repository:
git clone git@github.com:facebookresearch/mtrl.git
Install dependencies:
pip install -r requirements/dev.txt
Usage
MTRL supports many different multi-task RL algorithms as described here.
MTRL supports multi-task environments via MTEnv. These environments include MetaWorld and multi-task variants of the DMControl Suite.
Refer to the tutorial (https://mtrl.readthedocs.io/en/latest/pages/tutorials/overview.html) to get started with MTRL.
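As a rough sketch, experiments in repositories like this one are typically launched from a single entry point with config overrides on the command line. The script name and config group names below (main.py, setup, env, agent, setup.seed) are illustrative assumptions, not the library's confirmed interface; consult the tutorial for the exact names and values:

```shell
# Hypothetical training invocation; the entry point and config keys
# are assumptions -- check the MTRL tutorial for the actual interface.
PYTHONPATH=. python3 -u main.py \
    setup=metaworld \
    env=metaworld-mt10 \
    agent=state_sac \
    setup.seed=1
```

Overriding options on the command line this way makes it easy to sweep over seeds, environments, and agents without editing config files.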
Documentation
The full documentation is hosted at https://mtrl.readthedocs.io.
Contributing to MTRL
There are several ways to contribute to MTRL.
Use MTRL in your research.
Contribute a new algorithm. The currently supported algorithms are listed here, and we look forward to adding more.
Check out the beginner-friendly issues on GitHub and contribute to fixing those issues.
Check out additional details here.
Community
Ask questions in the chat or in GitHub issues.