Why did you choose hierarchical variance feature prediction instead of parallel prediction, as in FastSpeech 2 (the paper version)?
Are there any performance advantages?
Hello 😌. I hope you're well and having a good day.
Sorry 😅, I'm not sure how that happened. I was trying to build my own model on my data for my local language and ran into issues; I wasn't able to follow what you described.
Can you please 🥺 tell me how I can use FastPitch to build my own model in Colab or another notebook?
I have issues with the base setup: Docker and the NGC container in Colab. How can I solve this?
Thank you, as always, for sharing your thoughtful code.
As we can see in the FastPitch code, you add the pitch embedding to the encoder output before passing it to the energy predictor.
DeepLearningExamples/PyTorch/SpeechSynthesis/FastPitch/fastpitch/model.py
Line 300 in da7e1a7
Why did you choose hierarchical variance feature prediction instead of parallel prediction, as in FastSpeech 2 (the paper version)?
Are there any performance advantages?
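To make the distinction concrete, here is a minimal PyTorch sketch of the two conditioning schemes being compared. This is not the actual FastPitch code: `VariancePredictor`, the tensor shapes, and the toy dimensions are all stand-ins chosen for illustration. The point is only the data flow: in the parallel scheme both predictors read the encoder output directly, while in the hierarchical scheme the energy predictor reads the encoder output with the pitch embedding already added, so energy prediction is conditioned on pitch.

```python
import torch
import torch.nn as nn

class VariancePredictor(nn.Module):
    """Tiny conv stack standing in for a FastPitch-style temporal predictor."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(dim, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):  # x: (batch, time, dim)
        # Conv1d expects (batch, channels, time), so transpose in and out.
        return self.net(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, 1)

dim = 8
enc_out = torch.randn(2, 10, dim)  # hypothetical encoder output
pitch_predictor = VariancePredictor(dim)
energy_predictor = VariancePredictor(dim)
# Projects a scalar pitch track back up to the model dimension.
pitch_embedding = nn.Conv1d(1, dim, kernel_size=3, padding=1)

# Parallel (FastSpeech 2, paper version): both predictors see enc_out.
pitch_parallel = pitch_predictor(enc_out)
energy_parallel = energy_predictor(enc_out)

# Hierarchical (the pattern asked about): energy is predicted from
# enc_out + pitch embedding, so it is conditioned on the pitch prediction.
pitch_hier = pitch_predictor(enc_out)
conditioned = enc_out + pitch_embedding(pitch_hier.transpose(1, 2)).transpose(1, 2)
energy_hier = energy_predictor(conditioned)
```

The intuition usually given for the hierarchical ordering is that energy correlates with pitch, so letting the energy predictor see the pitch embedding can make its target easier to model; whether that yields a measurable quality gain is exactly what the question above asks.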