Third-Party Environments #

The Farama Foundation maintains a number of other projects, most of which use Gymnasium. Topics include: 3D navigation (Miniworld), and many more.

This page contains environments which are not maintained by the Farama Foundation and, as such, cannot be guaranteed to function as intended.

If you'd like to contribute an environment, please reach out on Discord.

highway-env: Autonomous driving and tactical decision-making tasks #

An environment for behavioral planning in autonomous driving, with an emphasis on high-level perception and decision-making rather than low-level sensing and control.

sumo-rl: Reinforcement Learning using SUMO traffic simulator #

Gymnasium wrapper for various environments in the SUMO traffic simulator. Supports both single- and multi-agent settings (using PettingZoo).

panda-gym: Robotics environments using the PyBullet physics engine #

PyBullet-based simulations of a robotic arm moving objects.

tmrl #

tmrl is a distributed framework for training Deep Reinforcement Learning AIs in real-time applications. It is demonstrated on the TrackMania 2020 video game.

Gym-jiminy #

Gym-jiminy presents an extension of the initial Gym for robotics using Jiminy, an extremely fast and lightweight simulator for poly-articulated systems that uses Pinocchio for physics evaluation and Meshcat for web-based 3D rendering.

Safety-Gymnasium: Ensuring safety in real-world RL scenarios #

Highly scalable and customizable Safe Reinforcement Learning library.

stable-retro: Classic retro games, a maintained version of OpenAI Retro #

Supported fork of gym-retro: turn classic video games into Gymnasium environments.

flappy-bird-gymnasium: A Flappy Bird environment for Gymnasium #

A simple environment for single-agent reinforcement learning algorithms on a clone of Flappy Bird, the hugely popular arcade-style mobile game. Both state and pixel observation environments are available.

gym-saturation: Environments used to prove theorems #

An environment for guiding automated theorem provers based on saturation algorithms (e.g. …).

matrix-mdp: Easily create discrete MDPs #

An environment to easily implement discrete MDPs as gym environments. Turn a set of matrices (P_0(s), P(s'| s, a) and R(s', s, a)) into a gym environment that represents the discrete MDP ruled by these dynamics.

There are a large number of third-party environments using various versions of Gym. Many of these can be adapted to work with gymnasium (see Compatibility with Gym), but are not guaranteed to be fully functional.

Video Game environments #

gym-derk: GPU accelerated MOBA environment #

A 3v3 MOBA environment where you train creatures to fight each other.

SlimeVolleyGym: A simple environment for Slime Volleyball game #

A simple environment for benchmarking single- and multi-agent reinforcement learning algorithms on a clone of the Slime Volleyball game.

Unity ML Agents: Environments for Unity game engine #

Gym (and PettingZoo) wrappers for arbitrary and premade environments with the Unity game engine.

Open 3D Engine #

Uses The Open 3D Engine for AI simulations and can interoperate with the Gym.
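The matrix-mdp idea described on this page — turning an initial distribution P_0(s), transition matrix P(s'| s, a), and reward matrix R(s', s, a) into a Gym-style environment — can be sketched in a few lines. This is an illustrative stand-in written with the standard library only, not the matrix-mdp package's actual API; all names here (`MatrixMDP`, `terminal_states`) are hypothetical.

```python
import random

class MatrixMDP:
    """Minimal Gym-style environment built from explicit MDP matrices.

    P0[s]        - initial state distribution P_0(s)
    P[s][a][s2]  - transition probability P(s2 | s, a)
    R[s][a][s2]  - reward for the transition (s, a) -> s2
    (Illustrative names, not the matrix-mdp package API.)
    """

    def __init__(self, P0, P, R, terminal_states=(), seed=None):
        self.P0, self.P, self.R = P0, P, R
        self.terminal_states = set(terminal_states)
        self.rng = random.Random(seed)
        self.state = None

    def reset(self):
        # Sample the initial state from P_0.
        states = range(len(self.P0))
        self.state = self.rng.choices(states, weights=self.P0)[0]
        return self.state

    def step(self, action):
        # Sample the next state from P(. | s, a), look up the reward.
        s = self.state
        next_states = range(len(self.P[s][action]))
        s2 = self.rng.choices(next_states, weights=self.P[s][action])[0]
        reward = self.R[s][action][s2]
        terminated = s2 in self.terminal_states
        self.state = s2
        return s2, reward, terminated, {}

# Two-state MDP: action 0 stays put, action 1 moves to absorbing state 1.
P0 = [1.0, 0.0]
P = [[[1.0, 0.0], [0.0, 1.0]],   # from state 0
     [[0.0, 1.0], [0.0, 1.0]]]   # from state 1 (absorbing)
R = [[[0.0, 0.0], [0.0, 1.0]],   # reward 1 for reaching state 1
     [[0.0, 0.0], [0.0, 0.0]]]

env = MatrixMDP(P0, P, R, terminal_states={1}, seed=0)
s = env.reset()               # always state 0 under this P0
s2, r, done, _ = env.step(1)  # deterministic move to state 1
```

Because the dynamics are fully specified by the three matrices, the same class covers any finite discrete MDP; only the matrix contents change.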
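The note that many Gym-based environments "can be adapted to work with gymnasium" comes down to a small API shim: Gymnasium's `reset` returns an `(obs, info)` pair and `step` returns five values, with the legacy `done` flag split into `terminated`/`truncated`. The sketch below illustrates that difference with a toy legacy environment; it is not Gymnasium's real compatibility layer, and both class names are hypothetical.

```python
class OldStyleEnv:
    """Stand-in for a legacy Gym env: reset() -> obs, step() -> 4-tuple."""

    def reset(self):
        return 0

    def step(self, action):
        obs = action
        reward = 1.0
        done = obs >= 3          # legacy single end-of-episode flag
        return obs, reward, done, {}

class GymToGymnasium:
    """Illustrative shim from the old Gym API to the Gymnasium API:
    reset() gains an info dict, and `done` is split into
    `terminated`/`truncated` (here everything maps to `terminated`)."""

    def __init__(self, env):
        self.env = env

    def reset(self, seed=None):
        return self.env.reset(), {}

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward, done, False, info

env = GymToGymnasium(OldStyleEnv())
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(3)
```

Gymnasium ships its own compatibility tooling for this conversion (see Compatibility with Gym); the hand-rolled wrapper above only shows why such a layer is needed.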