🚗 **AI and TM enthusiasts:** `tmrl` enables you to train AIs in TrackMania with minimal effort. Tutorial for you guys here, video of a pre-trained AI here, and beginner introduction to the SAC algorithm here.

🚀 **ML developers / roboticists:** `tmrl` is a python library designed to facilitate the implementation of ad-hoc RL pipelines for industrial applications, most notably real-time control. Minimal example here, full tutorial here, and documentation here.

👌 **ML developers who are TM enthusiasts with no interest in learning this huge thing:** `tmrl` provides an easy-to-use Gymnasium environment for TrackMania. Fast-track for you guys here.

🌎 **Everyone:** `tmrl` hosts the TrackMania Roborace League, a vision-based AI competition in which participants design real-time self-racing AIs for the TrackMania 2020 video game.

## The TMRL project

### Introduction

`tmrl` is a python framework designed to help you train Artificial Intelligences (AIs) through deep Reinforcement Learning (RL) in real-time applications (robots, video games, high-frequency control...). As a fun and safe robot proxy for vision-based autonomous driving, `tmrl` features a readily implemented example pipeline for the TrackMania 2020 racing video game.

_Note: in the context of RL, an AI is called a policy._

### User features (TrackMania example pipeline)

- **Training algorithms:** `tmrl` comes with a readily implemented example pipeline that lets you easily train policies in TrackMania 2020 with state-of-the-art deep Reinforcement Learning algorithms such as Soft Actor-Critic (SAC) and Randomized Ensembled Double Q-Learning (REDQ). These algorithms store collected samples in a large dataset called a replay memory. In parallel, these samples are used to train an artificial neural network (policy) that maps observations (images, speed...) to relevant actions (gas, brake, steering angle...).
- **Analog control from screenshots:** the `tmrl` example pipeline trains policies that are able to drive from raw screenshots captured in real time.
For beginners, we also provide simpler rangefinder ("LIDAR") observations, which are less potent but easier to learn from. The example pipeline controls the game via a virtual gamepad, which enables analog actions.
- **Models:** to process LIDAR measurements, the example `tmrl` pipeline uses a Multi-Layer Perceptron (MLP). To process raw camera images (screenshots), it uses a Convolutional Neural Network (CNN). These models learn the physics of the game from histories of observations equally spaced in time.

### Developer features (real-world applications in Python)

- **Python library:** `tmrl` is a complete framework designed to help you successfully implement ad-hoc RL pipelines for real-world applications. It features secure remote training and fine-grained customizability, and it is fully compatible with real-time environments (e.g., robots). It is based on a single-server / multiple-clients architecture, which enables collecting samples locally from one to arbitrarily many workers and training remotely on a High-Performance Computing cluster. A complete tutorial for doing this with your specific application is provided here.
- **TrackMania Gymnasium environment:** `tmrl` comes with a Gymnasium environment for TrackMania 2020, based on rtgym. Once the library is installed, it is easy to use this environment in your own training framework. More information here.
- **External libraries:** `tmrl` gave birth to some sub-projects of more general interest, which were cut out and packaged as standalone python libraries. In particular, rtgym enables implementing Gymnasium environments for real-time applications, vgamepad enables emulating virtual game controllers, and tlspyo enables transferring python objects over the Internet in a secure fashion.

### TMRL in the media

- In the French show Underscore_ (2022-06-08), we used a vision-based (LIDAR) policy to play against the TrackMania world champions.
Spoiler: our policy lost by far (expectedly 😄); the superhuman target was set to about 32s on the tmrl-test track, while the trained policy had a mean performance of about 45.5s. The Gymnasium environment that we used for the show is available here.
- In 2023, we were invited to Ubisoft Montreal to give a talk describing how video games could serve as visual simulators for vision-based autonomous driving in the near future.

## Installation

Detailed instructions for installation are provided at this link.

## Getting started
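To give a feel for what interacting with the TrackMania environment looks like, here is a minimal sketch of the standard Gymnasium interaction loop. The `DummyTrackManiaEnv` class below is a hypothetical stand-in (not part of `tmrl`) so the snippet runs anywhere; with the library installed, the real TrackMania environment object would take its place, with observations such as speed and screenshots and analog actions for gas, brake, and steering.

```python
import numpy as np

class DummyTrackManiaEnv:
    """Hypothetical stand-in mimicking a Gymnasium-style environment:
    observations are a (speed, image) tuple, actions are
    [gas, brake, steering] floats in [-1, 1]."""

    def reset(self):
        obs = (np.zeros(1, dtype=np.float32), np.zeros((64, 64), dtype=np.float32))
        return obs, {}  # observation, info

    def step(self, action):
        obs = (np.zeros(1, dtype=np.float32), np.zeros((64, 64), dtype=np.float32))
        reward, terminated, truncated, info = 0.0, False, False, {}
        return obs, reward, terminated, truncated, info

env = DummyTrackManiaEnv()
obs, info = env.reset()
for _ in range(10):
    action = np.array([1.0, 0.0, 0.0], dtype=np.float32)  # full gas, no brake, straight
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```

In a real pipeline, the hard-coded `action` would instead come from the trained policy evaluated on the current observation.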
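The replay-memory mechanism mentioned under "Training algorithms" can be sketched in a few lines. This is a simplified illustration, not `tmrl`'s actual implementation: class and parameter names are invented for the example. Collected `(observation, action, reward, next observation, done)` samples accumulate in a large bounded buffer, and the trainer repeatedly draws random batches from it.

```python
import random
from collections import deque

class ReplayMemory:
    """Illustrative replay memory: a bounded FIFO buffer of transitions."""

    def __init__(self, capacity=1_000_000):
        # deque evicts the oldest samples automatically once full
        self.buffer = deque(maxlen=capacity)

    def append(self, obs, act, rew, next_obs, done):
        self.buffer.append((obs, act, rew, next_obs, done))

    def sample(self, batch_size=256):
        # uniform random batches break the temporal correlation
        # between consecutive samples
        return random.sample(self.buffer, batch_size)

memory = ReplayMemory(capacity=1000)
for i in range(500):
    memory.append(obs=i, act=0.0, rew=1.0, next_obs=i + 1, done=False)
batch = memory.sample(batch_size=32)
```

Off-policy algorithms such as SAC and REDQ rely on this decoupling: workers keep appending fresh samples while the trainer consumes random batches, which is what makes the single-server / multiple-clients architecture described above possible.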