RL trick: RBS (Replay Buffer Spiking)

    Tech · 2022-07-11


    From the 2017 paper "Efficient Dialogue Policy Learning with BBQ-Networks" (arXiv:1608.05081v3).

    RBS = replay buffer spiking, i.e., spiking the replay buffer with a few experiences. RBS is a simple RL trick: pre-fill the experience replay buffer with a small set of transitions harvested from a naive, but occasionally successful, rule-based agent. The paper shows that spiking the replay buffer with experiences from just a few successful episodes can make Q-learning feasible when it might otherwise fail.

    In RL there are many sources of uncertainty, including uncertainty over the model parameters and uncertainty over unseen parts of the environment. Even with uncertainty resolved, learning still struggles with reward sparsity. The research community has proposed many techniques to speed up learning in such settings, for example leveraging prior knowledge via reward shaping or imitation learning; RBS belongs to this family. Fortunately, in the paper's dialogue setting it is easy to produce a few successful dialogues manually. Although these manual dialogues do not follow an optimal policy, they contain some successful movie bookings, so they indicate the existence of the large (+40) reward signal. Pre-filling the replay buffer with these experiences dramatically improves performance.
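    The mechanism is easy to sketch. Below is a minimal, self-contained illustration (not the paper's code; the buffer class, the toy rule-based episode generator, and all names are assumptions for illustration): a handful of warm-start episodes, some ending in the big +40 reward, are pushed into the buffer before any Q-learning update, so early minibatches already contain evidence that a large reward exists.

    ```python
    import random
    from collections import deque

    class ReplayBuffer:
        """FIFO experience replay buffer (illustrative sketch)."""
        def __init__(self, capacity=10_000):
            self.storage = deque(maxlen=capacity)

        def push(self, state, action, reward, next_state, done):
            self.storage.append((state, action, reward, next_state, done))

        def sample(self, batch_size):
            # Uniform sampling, as in standard DQN experience replay.
            return random.sample(self.storage, min(batch_size, len(self.storage)))

        def __len__(self):
            return len(self.storage)

    def rule_based_episode(success):
        """Stand-in for the naive rule-based dialogue agent: a short fixed
        trajectory whose terminal reward is +40 when the booking succeeds
        (the per-turn penalty of -1 is a made-up value for this sketch)."""
        transitions = []
        for t in range(3):
            terminal = (t == 2)
            reward = 40.0 if (terminal and success) else -1.0
            transitions.append((f"s{t}", f"a{t}", reward, f"s{t+1}", terminal))
        return transitions

    def spike_replay_buffer(buffer, n_warm_start=5):
        """RBS: pre-fill the buffer with a few warm-start episodes, only
        some of which succeed, before the learner takes its first step."""
        for i in range(n_warm_start):
            for tr in rule_based_episode(success=(i % 2 == 0)):
                buffer.push(*tr)

    buffer = ReplayBuffer()
    spike_replay_buffer(buffer, n_warm_start=5)
    # The Q-learner then trains as usual; its first minibatches mix the
    # spiked experiences with transitions it collects itself.
    batch = buffer.sample(8)
    ```

    Note that the learner never copies the rule-based agent's action choices; it only sees the agent's transitions through ordinary replay sampling, which is why this is cheaper than a full imitation-learning setup.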

    Key properties:
    1. Performance does not strictly improve with the number of pre-filled dialogues.
    2. Replay buffer spiking is different from imitation learning.
    3. RBS works well even with a small number of warm-start dialogues, suggesting that it helps simply to communicate the very existence of a big reward.
