palaestrAI is a universal framework for multi-agent artificial intelligence, including, but not limited to, deep reinforcement learning. It supports cooperative, adversarial, and any other form of interaction between agents in diverse environments. palaestrAI is a loosely coupled, distributed system that lets you use any kind of algorithm in concert: think A3C agents playing against Q-learning counterparts that cooperate with ACKTR-based agents.
palaestrAI's mosaik bridge makes integration with any environment and with existing co-simulation setups easy. palaestrAI provides an infrastructure for reliable and reproducible experimentation through experiment studies, efficient experiment-space sampling methods, and serialization of all experiment instances with a fitting store backend.
palaestrAI has many use cases: training agents in any given environment, e.g., for the analysis of complex cyber-physical systems, resilient operation of critical infrastructures, market analysis and market design, ICT system attack analysis and hardening, and even building blocks of Artificial General Intelligence.
The palaestrAI ecosystem forms the reference implementation of the Adversarial Resilience Learning methodology.
palaestrAI is the core framework, providing the infrastructure for experimentation, bridging to environments, the store backend, and agent interaction. If you plan to supply your own environment, palaestrAI offers the necessary interfaces.
https://gitlab.com/arl2/palaestrai
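To illustrate what an environment interface can look like, here is a minimal, hypothetical sketch. All names (`Environment`, `start`, `update`, the sensor/setpoint dictionaries) are illustrative assumptions for this example, not palaestrAI's actual API; consult the palaestrai repository for the real interface classes.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Tuple


class Environment(ABC):
    """Hypothetical environment interface; names are illustrative only."""

    @abstractmethod
    def start(self) -> Dict[str, Any]:
        """Initialize the environment and return the first sensor readings."""

    @abstractmethod
    def update(self, setpoints: Dict[str, float]) -> Tuple[Dict[str, Any], bool]:
        """Apply actuator setpoints; return new sensor readings and a done flag."""


class CounterEnvironment(Environment):
    """Toy environment: a counter an agent pushes toward a target value."""

    def __init__(self, target: int = 3):
        self._target = target
        self._state = 0

    def start(self) -> Dict[str, Any]:
        self._state = 0
        return {"counter": self._state}

    def update(self, setpoints: Dict[str, float]) -> Tuple[Dict[str, Any], bool]:
        # Advance the counter by the requested increment and check termination.
        self._state += int(setpoints.get("increment", 0))
        done = self._state >= self._target
        return {"counter": self._state}, done
```

The split into `start` and `update` mirrors the usual sense/act loop of agent frameworks: the framework reads sensors, the agent proposes setpoints, and the environment advances one step.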
hARL bundles the deep reinforcement learning agent code. It interfaces with palaestrAI and with the environments made accessible through the palaestrAI core framework. If you intend to introduce your own agent to any environment, hARL offers a simple interface to do so.
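An agent plugging into such a framework typically only needs to map sensor readings to actuator setpoints and consume rewards. The sketch below is a hypothetical, toy-level illustration of that contract; `Agent`, `act`, and `learn` are assumed names for this example and not hARL's actual classes.

```python
from abc import ABC, abstractmethod
from typing import Dict


class Agent(ABC):
    """Hypothetical agent interface; names are illustrative only."""

    @abstractmethod
    def act(self, sensors: Dict[str, float]) -> Dict[str, float]:
        """Map sensor readings to actuator setpoints."""

    @abstractmethod
    def learn(self, reward: float) -> None:
        """Incorporate the reward for the last action."""


class BangBangAgent(Agent):
    """Toy agent: keeps incrementing a counter until it reaches a threshold."""

    def __init__(self, threshold: float = 3.0):
        self._threshold = threshold
        self.total_reward = 0.0

    def act(self, sensors: Dict[str, float]) -> Dict[str, float]:
        increment = 1 if sensors.get("counter", 0) < self._threshold else 0
        return {"increment": increment}

    def learn(self, reward: float) -> None:
        self.total_reward += reward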
arsenAI brings the full force of proper Design of Experiments into the framework. It connects agents and environments, generating thousands of reproducible experiments from a single experiment study description that you design.
https://gitlab.com/arl2/arsenai
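To give a feel for how one study description can expand into many runs, here is a minimal full-factorial sketch: every combination of parameter levels is crossed with a set of seeds, so each run is reproducible. The function name, the study dictionary shape, and the parameter names are assumptions for this illustration; arsenAI's actual sampling methods are considerably richer.

```python
import itertools
from typing import Any, Dict, Iterator, List


def expand_study(
    study: Dict[str, List[Any]], seeds: range
) -> Iterator[Dict[str, Any]]:
    """Expand a study (parameter name -> list of levels) into the full
    factorial set of experiment runs, one per combination and seed.
    Illustrative sketch only, not arsenAI's real API."""
    names = sorted(study)  # fixed order keeps the expansion deterministic
    for seed in seeds:
        for values in itertools.product(*(study[n] for n in names)):
            run = dict(zip(names, values))
            run["seed"] = seed
            yield run


# Hypothetical study: 2 learning rates x 2 agents x 3 seeds = 12 runs.
study = {"learning_rate": [1e-3, 1e-4], "agent": ["ppo", "sac"]}
runs = list(expand_study(study, seeds=range(3)))
```

Serializing each `run` dictionary (together with its seed) is what makes every experiment instance individually re-runnable.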