
h100: Worse output & 20x slower inference? #89

Open
addytheyoung opened this issue Nov 26, 2023 · 12 comments
Labels
help wanted Extra attention is needed

Comments

@addytheyoung

addytheyoung commented Nov 26, 2023

We're testing finetuning on an h100 and 4090, here are the results:

4090: https://voca.ro/11mtxzLHzzih
h100: https://voca.ro/15QldVjuG7nu

Almost identical finetunes, but the h100's output is SIGNIFICANTLY worse. It isn't a config issue, and we've replicated it twice with LJSpeech as well.

The 4090 is also faster during training and considerably faster during inference, almost 20x faster than the h100:

[screenshot: 4090 inference timings]

h100:

[screenshot: h100 inference timings]

And during training, one epoch took the 4090 about 3 minutes, while the h100 took 4.12 minutes.

Does anyone know what could be going on here? We've never seen an issue like this on an h100 before with a diffusion-like model. Thanks

@addytheyoung addytheyoung changed the title Extremely weird h100 issue? h100: Worse output & slower inference? Nov 26, 2023
@addytheyoung addytheyoung changed the title h100: Worse output & slower inference? h100: Worse output & 20x slower inference? Nov 26, 2023
@AMEERAZAM08
Contributor

I'm running this on Kaggle with 2x T4 GPUs and it's working fine. Check your requirements.txt versions.

@yl4579
Owner

yl4579 commented Nov 27, 2023

Can you make sure they're on identically configured machines, with the same libraries and everything, except for the GPU?

@devidw
Contributor

devidw commented Nov 27, 2023

@addytheyoung and I are working together on this.

We are running this on 2 different machines: a Lambda Labs h100 and a RunPod RTX 4090.

The env is the same, with both running in a Python 3.10 virtualenv with the latest repo and dependencies.

In the screenshots shared earlier there was an issue with non-HTTP-200 responses leading to invalid timings.

However, rerunning the following simple web API benchmark on both machines, the 4090 still comes out 2x as fast, when it should be roughly 3x slower than the h100:

4090: ~12s @ 100 concurrent requests
h100: ~24s @ 100 concurrent requests

@yl4579
Owner

yl4579 commented Nov 27, 2023

@devidw Have you tried to match the CUDA driver version as well? Have you tried the docker version?

@devidw
Contributor

devidw commented Nov 27, 2023

> @devidw Have you tried to match the CUDA driver version as well? Have you tried the docker version?

Yeah, so both GPUs are using CUDA 12.x.

By docker version, you mean running the inference server within the same CUDA/PyTorch base docker image on both GPUs? (Currently the RunPod 4090 is using runpod/pytorch, and the h100 on Lambda doesn't require one, though we could test that.)
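When two machines are supposed to be identical, one quick check is to dump an environment fingerprint on each and diff the output. A hypothetical helper along those lines (only standard `torch` attributes are read, and the import is guarded so the script also runs where torch is absent; the TF32 matmul flag is included because precision-mode differences are a common source of cross-GPU numeric drift):

```python
import platform
import sys

def env_fingerprint():
    """Collect version info to diff between two machines (run on both, compare)."""
    info = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    try:
        import torch  # expected on the inference machines; guarded just in case
        info["torch"] = torch.__version__
        info["cuda"] = torch.version.cuda            # CUDA runtime torch was built against
        info["cudnn"] = torch.backends.cudnn.version()
        if torch.cuda.is_available():
            info["gpu"] = torch.cuda.get_device_name(0)
            info["capability"] = torch.cuda.get_device_capability(0)
            info["tf32_matmul"] = torch.backends.cuda.matmul.allow_tf32
    except Exception:
        info["torch"] = "not installed"
    return info

if __name__ == "__main__":
    for k, v in env_fingerprint().items():
        print(f"{k}: {v}")
```

Running this on both boxes and diffing the two outputs narrows the search to whatever line differs.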

@yl4579

@yl4579
Owner

yl4579 commented Nov 28, 2023

@devidw This is so weird. I've tested the code on A40, A100, and L40, all good with CUDA 11.8. I'm not sure whether the CUDA driver version causes this, and I don't have an H100 so I can't test it. Worst case, you compare the output of each module given the same input and see which module causes the issue.

@addytheyoung
Author

addytheyoung commented Nov 28, 2023

@yl4579 I believe we've done that already, oh well. If anyone else has or can test on an H100, we'd love to hear what's going on. This is a big deal for scaling inference: if the model just doesn't work on an H100 at all, that's not good, but hopefully it's on our end.

@yl4579
Owner

yl4579 commented Nov 29, 2023

Did you find any output difference, or time each line to find the module that causes the bottleneck?

@yl4579 yl4579 added the help wanted Extra attention is needed label Nov 29, 2023
@addytheyoung
Author

addytheyoung commented Nov 29, 2023

@yl4579 Yes, the output is considerably worse; I shared some examples in the main post. I don't think we have tested which module causes the bottleneck yet. cc @devidw

@yl4579
Owner

yl4579 commented Nov 29, 2023

@addytheyoung I think it's better to check the output from each module after setting a static random seed (and skip the diffusion model, since it's stochastic) to see which module produces different results, so we can debug from there.
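The recipe above (fix the seed, record each module's output on both machines, then diff) can be sketched with stdlib stand-ins. The stage names below are hypothetical; on the real model you would call `torch.manual_seed(seed)` on both machines, dump each submodule's output tensor (e.g. via forward hooks), and compare max absolute differences:

```python
import random

def run_pipeline(seed, stages):
    """Run named stages with a fixed seed and record each stage's output.

    stages: list of (name, fn) pairs; on the real model these would be
    the text encoder, style encoder, decoder, etc.
    """
    random.seed(seed)  # real model: torch.manual_seed(seed) on both machines
    outputs, x = {}, 1.0
    for name, fn in stages:
        x = fn(x)
        outputs[name] = x  # real model: save tensors to disk for cross-machine diff
    return outputs

def first_divergence(ref, other, tol=1e-5):
    """Return the first stage whose outputs differ beyond tol, else None."""
    for name in ref:
        if abs(ref[name] - other[name]) > tol:
            return name
    return None
```

Saving one machine's `outputs` dict (e.g. as JSON or `.pt` files) and loading it on the other pinpoints the first module whose output drifts, which is where the H100-specific debugging would start.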

@korakoe

korakoe commented Jan 13, 2024

Odd. I've been training a model on an h100 for a day or so and haven't noticed any detriment in speed or quality. Is your repo up to date?

Edit:
I retract this statement; the problem appears to emerge on larger datasets.

Akito-UzukiP added a commit to Akito-UzukiP/StyleTTS2 that referenced this issue Jan 13, 2024
@sch0ngut

sch0ngut commented Mar 26, 2024

Very interesting. I just found this issue by pure chance, and indeed I had the same problem a few weeks ago. I ran 6 fine-tunings in parallel for several days on a single H100, and once the training runs finished I noticed that all the models were absolutely useless.
I reran the same trainings on 8x A100 and the results were good! The only difference (besides the number and type of GPUs used) was the per-GPU batch size, but I didn't think that would have caused the issue. I was (and still am) extremely puzzled...

Projects
None yet
Development

No branches or pull requests

6 participants