When inputs contain commas and periods, the generated output often inserts too much interstitial space / silence between the words before and after those characters.
When ellipses are present in the input, instead of creating more interstitial space (silence), the model tends to hallucinate.
Could this stem from these characters having been stripped from the 100K-hour training set?
This issue has seemingly been resolved by finetuning the base model for 20 epochs on hand-crafted voice data that includes commas, multiple sentences, and ellipses (41 < len < 261, n=600, 0.2 train split, no LR decay, default training config). The model still struggles to produce correct / accurate prosody, even when the zero-shot cloning voice is in the 'train' segment of the dataset. We call this the "William Shatner" effect... 😸
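For anyone reproducing this, a minimal sketch of the dataset preparation described above (length filter 41 < len < 261 and a 0.2 split) might look like the following. The function name and structure are purely illustrative, not part of the actual training pipeline:

```python
import random

def prepare_dataset(transcripts, split=0.2, seed=0):
    """Keep transcripts whose character length is strictly between 41 and
    261, then carve off a `split` fraction as a held-out subset.
    Hypothetical helper; the real training config may differ."""
    filtered = [t for t in transcripts if 41 < len(t) < 261]
    rng = random.Random(seed)
    rng.shuffle(filtered)
    cut = int(len(filtered) * split)
    # Return (train, held-out); with n=600 and split=0.2, cut=120.
    return filtered[cut:], filtered[:cut]
```

With n=600 samples this yields 480 training and 120 held-out transcripts, matching the 0.2 split mentioned above.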
Thanks for the update @somewheresy & sorry for the delay in my response - I'll go ahead & close out the issue but feel free to reopen if there's anything else to raise.