Source separation, Vector quantization, Signal representation, Neural networks
Abstract
In this paper, we present a novel method to address the single-channel speech separation problem. We propose a two-step training procedure for speech separation in a discrete latent space. In the first step, we learn multiple vector-quantized codebooks, optimizing for reconstruction quality and entropy, together with functions that transform between discrete codes and waveforms. In the second step, we train multiple classifiers that select codes from the codebooks to synthesize the speech sources. The proposed method achieves comparable speech separation performance and is general enough to be applicable to other regression problems.
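To make the first step concrete, the following is a minimal sketch of the vector-quantization operation that underlies such a codebook: each continuous latent vector is mapped to its nearest codebook entry, and the discrete index is all that must be predicted downstream. The names, shapes, and the random codebook are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 8, 4                          # codebook size, code dimension (assumed)
codebook = rng.normal(size=(K, D))   # stand-in for a learned codebook

def quantize(z):
    """Map each latent vector in z (N, D) to its nearest codebook index."""
    dists = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)      # (N,) discrete codes

def decode(indices):
    """Look up discrete codes to recover the quantized latent vectors."""
    return codebook[indices]

z = rng.normal(size=(5, D))          # stand-in encoder output
codes = quantize(z)                  # one discrete code per latent frame
z_q = decode(codes)                  # quantized latents fed to the decoder
```

In the paper's second step, classifiers predict such indices directly, so source synthesis reduces to classification over codebook entries rather than regression on waveforms.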