Papers Read on AI

August 13, 2022  

Can Wikipedia Help Offline Reinforcement Learning?

Fine-tuning reinforcement learning (RL) models has been challenging because of a lack of large-scale off-the-shelf datasets and high variance in transferability across environments. Recent work has tackled offline RL from the perspective of sequence modeling, with improved results following the introduction of the Transformer architecture. However, when the model is trained from scratch, it suffers from slow convergence. In this paper, we take advantage of this formulation of reinforcement learning as sequence modeling and investigate the transferability of sequence models pre-trained on other domains (vision, language) when fine-tuned on offline RL tasks (control, games). To this end, we also propose techniques to improve transfer between these domains.
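The "RL as sequence modeling" formulation the abstract builds on (in the style of Decision Transformer) can be sketched as follows. This is an illustrative assumption of the input layout, not the paper's code: a trajectory is flattened into an interleaved sequence of (return-to-go, state, action) tokens, which a pre-trained sequence model can then be fine-tuned on.

```python
# Hedged sketch: flattening an offline RL trajectory into a token
# sequence, the Decision Transformer-style input format. Names and the
# token layout are illustrative, not taken from the paper.

def returns_to_go(rewards):
    """Suffix sums of the reward stream: R_t = sum of rewards from step t on."""
    rtg = []
    total = 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return list(reversed(rtg))

def flatten_trajectory(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples into one sequence,
    ready to be tokenized and fed to a pre-trained sequence model."""
    rtg = returns_to_go(rewards)
    seq = []
    for g, s, a in zip(rtg, states, actions):
        seq.extend([("rtg", g), ("state", s), ("action", a)])
    return seq

# Example: a 3-step trajectory with rewards 1, 0, 2.
seq = flatten_trajectory(states=[0, 1, 2], actions=[1, 0, 1],
                         rewards=[1.0, 0.0, 2.0])
```

At inference time, such a model is conditioned on a desired return-to-go and the current state, and the next action token is decoded autoregressively; fine-tuning a language- or vision-pre-trained Transformer on these sequences is the transfer setting the paper studies.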

2022: Machel Reid, Yutaro Yamada, S. Gu

https://arxiv.org/pdf/2201.12122v1.pdf