How can I invert a MelSpectrogram with torchaudio and get an audio waveform?

I have a MelSpectrogram generated from:

    eval_seq_specgram = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_fft=256)(eval_audio_data).transpose(1, 2)

So eval_seq_specgram now has a size of torch.Size([1, 128, 499]), where 499 is the number of timesteps and 128 is n_mels.
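For completeness, eval_audio_data and sample_rate come from something like this (the file name here is just a placeholder for my actual data):

    import torchaudio

    # Placeholder path; torchaudio.load returns the waveform tensor
    # (channels x frames) and its sample rate.
    eval_audio_data, sample_rate = torchaudio.load("eval.wav")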
I'm trying to invert it, so I'm trying to use GriffinLim, but before doing that, I think I need to invert the mel scale, so I have:

    inverse_mel_pred = torchaudio.transforms.InverseMelScale(sample_rate=sample_rate, n_stft=256)(eval_seq_specgram)

inverse_mel_pred has a size of torch.Size([1, 256, 499]).
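(I picked n_stft=256 to match n_fft, though I'm not sure that's right: the docs seem to say n_stft is the number of STFT frequency bins, which for a one-sided STFT with n_fft=256 would be 256 // 2 + 1 = 129 rather than 256. If so, the inversion would look like this, with the same eval_seq_specgram as above:

    inverse_mel_pred = torchaudio.transforms.InverseMelScale(
        sample_rate=sample_rate,
        n_stft=256 // 2 + 1,  # 129 bins for n_fft=256, if I'm reading the docs right
    )(eval_seq_specgram)

But I haven't confirmed that this is the actual problem.)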
Then I'm trying to use GriffinLim:

    pred_audio = torchaudio.transforms.GriffinLim(n_fft=256)(inverse_mel_pred)
but I get an error:
    Traceback (most recent call last):
      File "evaluate_spect.py", line 63, in <module>
        main()
      File "evaluate_spect.py", line 51, in main
        pred_audio = torchaudio.transforms.GriffinLim(n_fft=256)(inverse_mel_pred)
      File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torchaudio/transforms.py", line 169, in forward
        return F.griffinlim(specgram, self.window, self.n_fft, self.hop_length, self.win_length, self.power,
      File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torchaudio/functional.py", line 179, in griffinlim
        inverse = torch.istft(specgram * angles,
    RuntimeError: The size of tensor a (256) must match the size of tensor b (129) at non-singleton dimension 1
Not sure what I'm doing wrong or how to resolve this.
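If it helps, the 129 in the error message seems to match the number of frequency bins in a one-sided STFT with n_fft=256, i.e. 256 // 2 + 1 = 129, which is the shape torch.istft (called inside griffinlim) works with, while my inverse_mel_pred has 256 bins. A quick sanity check with a dummy waveform:

    import torch

    # A one-sided STFT with n_fft=256 has n_fft // 2 + 1 = 129 frequency bins,
    # so GriffinLim/istft expect 129 bins along the frequency dimension.
    dummy = torch.randn(1, 16000)
    window = torch.hann_window(256)
    spec = torch.stft(dummy, n_fft=256, window=window, return_complex=True)
    print(spec.shape)  # torch.Size([1, 129, ...]) -> 129 frequency bins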