Random Network Distillation (RND)
RND is an exploration bonus for RL methods that is easy to implement and enables significant progress in some hard-exploration Atari games such as Montezuma's Revenge. We use Proximal Policy Optimization (PPO) as the base RL method, as in the original paper's implementation.
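The core mechanism behind the bonus can be sketched in a few lines of NumPy. This is a toy linear-network version (the actual implementation uses convolutional networks on pixels): a fixed, randomly initialized *target* network is never trained, while a *predictor* network is trained to match its output; the predictor's error on an observation serves as the intrinsic reward, large for novel observations and shrinking as observations become familiar.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, feat_dim = 8, 16

# Fixed, randomly initialized target network (never trained).
W_target = rng.normal(size=(obs_dim, feat_dim))
# Predictor network, trained to match the target's output.
W_pred = np.zeros((obs_dim, feat_dim))

def intrinsic_reward(obs):
    # Prediction error is the exploration bonus: high for novel
    # observations, low once the predictor has learned them.
    err = obs @ W_pred - obs @ W_target
    return (err ** 2).mean(axis=-1)

def update_predictor(obs, lr=1e-2):
    # One gradient step on the MSE between predictor and target features.
    global W_pred
    err = obs @ W_pred - obs @ W_target
    grad = obs.T @ err / len(obs)
    W_pred -= lr * grad

obs = rng.normal(size=(32, obs_dim))
before = intrinsic_reward(obs).mean()
for _ in range(200):
    update_predictor(obs)
after = intrinsic_reward(obs).mean()
# Repeatedly seen (familiar) observations yield a smaller bonus.
assert after < before
```

All names here are illustrative; the point is only the predictor-vs-fixed-target structure.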
Our single-file implementation of RND:
- Uses the blazing fast Envpool vectorized environment.
- For playing Atari games. It uses convolutional layers and common Atari-based pre-processing techniques.
- Works with Atari's pixel `Box` observation space of shape `(210, 160, 3)`.
- Works with the `Discrete` action space.
ppo_rnd_envpool.py does not work on Windows and macOS. See envpool's built wheels here: https://pypi.org/project/envpool/#files
```
poetry install -E envpool
python cleanrl/ppo_rnd_envpool.py --help
python cleanrl/ppo_rnd_envpool.py --env-id MontezumaRevenge-v5
```
Explanation of the logged metrics
See the related PPO docs for the metrics shared with the base implementation.
Below are the additional metrics for RND:
- `charts/episode_curiosity_reward`: episodic intrinsic (curiosity) rewards.
- `losses/fwd_loss`: the prediction error between the predictor network and the target network; it can also be viewed as a proxy for the curiosity reward in that batch.
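A minimal sketch of how such a forward loss can be computed per batch, assuming the common RND setting where only a random proportion of experience (0.25 by default in the original implementation) contributes to the predictor update so the bonus stays informative longer; the function name and shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_loss(pred_feats, target_feats, update_proportion=0.25):
    # Per-sample MSE between the trainable predictor's features and the
    # frozen random target network's features.
    per_sample = ((np.asarray(pred_feats) - np.asarray(target_feats)) ** 2).mean(axis=-1)
    # Only a random fraction of the batch contributes to the update,
    # slowing down the predictor relative to the policy.
    mask = rng.random(len(per_sample)) < update_proportion
    # Guard against an empty mask on small batches.
    return (per_sample * mask).sum() / max(mask.sum(), 1)

feats = np.ones((64, 8))
loss = forward_loss(feats, np.zeros((64, 8)))
```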
ppo_rnd_envpool.py uses a customized RecordEpisodeStatistics to work with envpool but otherwise shares the implementation details of ppo_atari.py (see related docs). Additionally, it has the following details:
- We initialize the observation normalization parameters by stepping a random agent in the environment for `args.num_steps * args.num_iterations_obs_norm_init` steps. `args.num_iterations_obs_norm_init=50` comes from the original implementation.
- We use sticky actions from envpool to facilitate exploration, as done in the original implementation.
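The normalization warm-up above can be sketched with a standard running mean/std tracker (Chan's parallel-variance update, as commonly used for RND observation normalization); the class and variable names are illustrative, not the exact ones in the file:

```python
import numpy as np

class RunningMeanStd:
    """Tracks a running mean and variance over batches of observations."""
    def __init__(self, shape):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = 1e-4  # avoids division by zero on the first update

    def update(self, batch):
        # Merge batch statistics into the running statistics
        # (parallel-variance / Chan's algorithm).
        b_mean, b_var, b_count = batch.mean(0), batch.var(0), len(batch)
        delta = b_mean - self.mean
        tot = self.count + b_count
        m2 = self.var * self.count + b_var * b_count \
            + delta ** 2 * self.count * b_count / tot
        self.mean = self.mean + delta * b_count / tot
        self.var = m2 / tot
        self.count = tot

rng = np.random.default_rng(0)
obs_rms = RunningMeanStd(shape=(4,))
# Warm up the statistics with a random policy before training starts,
# standing in for args.num_steps * args.num_iterations_obs_norm_init env steps.
for _ in range(50):
    random_obs = rng.normal(loc=3.0, scale=2.0, size=(128, 4))
    obs_rms.update(random_obs)

# Observations fed to the RND networks are then normalized and clipped.
normalized = np.clip((random_obs - obs_rms.mean) / np.sqrt(obs_rms.var + 1e-8), -5, 5)
```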
To run benchmark experiments, see benchmark/rnd.sh.
Below are the average episodic returns for `ppo_rnd_envpool.py`. To ensure the quality of the implementation, we compared the results against (Burda et al., 2019, Figure 7).

| Environment | `ppo_rnd_envpool.py` | (Burda et al., 2019, Figure 7) at 2000M steps |
|---|---|---|
| MontezumaRevengeNoFrameSkip-v4 | 7100 (1 seed) | 8152 (3 seeds) |
Note that MontezumaRevengeNoFrameSkip-v4 has the same setting as MontezumaRevenge-v5. Our benchmark uses only one seed due to limited compute resources and the extremely long run time (~250 hours).
Tracked experiments and gameplay videos:
Burda, Yuri, et al. "Exploration by random network distillation." Seventh International Conference on Learning Representations. 2019. ↩