A quick question: roughly how long does fine-tuning take? I tried adding the backward KL loss when fine-tuning my own model, but the KL loss increased instead and the generated audio quality is poor.
You can refer to the bert_vits_aishell3 branch. When adding the backward KL loss during fine-tuning, its weight should be tuned for the data you are using; for example, I used 0.05 on AISHELL3 and froze the PosteriorEncoder. https://github.com/PlayVoice/vits_chinese/blob/bert_vits_aishell3/train.py#L266 https://github.com/PlayVoice/vits_chinese/blob/bert_vits_aishell3/train.py#L123
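The KL terms here compare the prior and posterior, both diagonal Gaussians. A minimal dependency-free sketch of the weighting idea; the name `c_kl_r`, the toy statistics, and the argument order are illustrative assumptions, not the exact code in train.py:

```python
import math

def gaussian_kl(m_p, logs_p, m_q, logs_q):
    """KL(p || q) between two 1-D Gaussians parameterized by mean and
    log standard deviation (applied elementwise in the diagonal case)."""
    return (logs_q - logs_p
            + (math.exp(2 * logs_p) + (m_p - m_q) ** 2)
              / (2 * math.exp(2 * logs_q))
            - 0.5)

# Hypothetical combination: forward KL (posterior || prior) plus a small
# backward KL (prior || posterior); 0.05 is the AISHELL3 weight mentioned above.
c_kl_r = 0.05
m_q, logs_q = 0.3, -0.2                      # toy posterior statistics
kl_fwd = gaussian_kl(m_q, logs_q, 0.0, 0.0)  # vs. a standard-normal prior
kl_bwd = gaussian_kl(0.0, 0.0, m_q, logs_q)  # reversed argument order
loss_kl = kl_fwd + c_kl_r * kl_bwd
```

In the actual training code the PosteriorEncoder is also frozen (gradients disabled) during this phase, so only the prior side moves to close the gap.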
What is the insight behind tuning it? You mentioned earlier to first train with only the forward loss and add the backward loss afterwards for adjustment; is that to converge faster? What should the first stage achieve before adding the backward loss? Also, given this observation, would it be better to make the backward KL loss weight increase automatically?
This comes from Microsoft's NaturalSpeech. On high-quality corpora this loss indeed works fine; on other corpora it is unstable and can actually make training worse.
So before using this loss, train the model as far as it will go and save that checkpoint; you will need it as the starting point for multiple later attempts. Then pick a weight, add this loss, and continue training. If the results do not improve, reduce the weight; if there is still no improvement even at a very small weight, I suggest abandoning this loss.
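The trial procedure above (restart from the saved checkpoint each time, shrink the weight when quality does not improve, give up below some floor) can be sketched as follows; the halving step and the floor value are illustrative choices, not prescribed by the author:

```python
def next_backward_kl_weight(weight, improved, floor=1e-3):
    """One step of the manual tuning loop described above: keep the
    weight if this trial improved quality, otherwise halve it; return
    None once the weight falls below the floor, meaning the backward
    KL loss should be abandoned for this corpus."""
    if improved:
        return weight
    weight /= 2
    return weight if weight >= floor else None

# Usage: each trial restarts training from the same saved checkpoint.
w = 0.05                                   # e.g. the AISHELL3 setting
w = next_backward_kl_weight(w, improved=False)  # no gain -> try 0.025 next
```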
> Also, given this observation, would it be better to make the backward KL loss weight increase automatically?

This loss is not easy to tune.