Attention and feedforward dropout
AudioLM and Text-Free Prosody-Aware Generative Spoken Language Modeling (PGSLM) both use a dropout of 0.1; PGSLM explicitly states this applies to both the attention and feedforward layers, while the AudioLM paper is less specific. This release also includes a minor fix to enable the `get_tokens` function added in [0.0.62](https://github.com/lucidrains/audiolm-pytorch/commit/dfd4b8030c64370a80972c36f6d2b8d1b9c263f4).
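
For reference, a minimal sketch of where these two dropouts sit in a pre-norm transformer block. The class and argument names below are illustrative, not audiolm-pytorch's actual API: attention dropout is applied to the attention weights, feedforward dropout inside the MLP.

```python
import torch
from torch import nn

class TransformerBlock(nn.Module):
    # Illustrative block, not the library's actual class.
    # attn_dropout / ff_dropout default to the 0.1 used by the papers above.
    def __init__(self, dim, heads = 8, ff_mult = 4, attn_dropout = 0.1, ff_dropout = 0.1):
        super().__init__()
        self.attn_norm = nn.LayerNorm(dim)
        # nn.MultiheadAttention applies its dropout to the attention weights
        self.attn = nn.MultiheadAttention(dim, heads, dropout = attn_dropout, batch_first = True)
        self.ff_norm = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, dim * ff_mult),
            nn.GELU(),
            nn.Dropout(ff_dropout),  # feedforward dropout
            nn.Linear(dim * ff_mult, dim)
        )

    def forward(self, x):
        normed = self.attn_norm(x)
        attn_out, _ = self.attn(normed, normed, normed)
        x = x + attn_out
        x = x + self.ff(self.ff_norm(x))
        return x

x = torch.randn(1, 128, 512)
out = TransformerBlock(512)(x)  # (1, 128, 512)
```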