Embedding fine-tuning questions #762

Open
ListenZen opened this issue May 10, 2024 · 3 comments

@ListenZen

  1. After fine-tuning, the loss never really comes down. The parameters are below; this is running on a local CPU, and after 5 epochs of training it looks roughly like this. What could be the cause, and what could be optimized?
    {"epoch": 4.18, "learning_rate": 1.6492693110647182e-06, "loss": 0.2706, "step": 2000}
    torchrun --nproc_per_node 1 -m FlagEmbedding.baai_general_embedding.finetune.run \
      --output_dir ./src/aiChatServer/fintune/model \
      --model_name_or_path ./src/aiChatServer/fintune/bge-m3 \
      --train_data ./src/aiChatServer/fintune/fintune_res.jsonl \
      --learning_rate 1e-5 --num_train_epochs 5 --per_device_train_batch_size 20 \
      --dataloader_drop_last False --normlized True --temperature 0.02 \
      --query_max_len 128 --passage_max_len 256 --train_group_size 3 \
      --negatives_cross_device False --logging_steps 10 --save_steps 1000 --use_cpu True

  2. After merging this fine-tuned model with the base model via LM_Cocktail, the dense scores generally drop, while some of the colbert scores actually go up. Is this because the fine-tuning went badly? (See the merge sketch after this list.)

  3. In my fine-tuning data the negatives were mined automatically with FlagEmbedding.baai_general_embedding.finetune.hn_mine. For some samples the pos and neg look very similar, with perhaps 60%-70%+ overlap and only a few words differing. Does this kind of data hurt fine-tuning? For example:
    {"query": "江苏省南京市的繁华地段是哪里", "pos": ["江苏省南京市的繁华地段是鼓楼"], "neg": ["江苏省南京市的最大医院在xxx"]}
    (Roughly: query "Which is the busiest area of Nanjing, Jiangsu?", pos "The busiest area of Nanjing, Jiangsu is Gulou", neg "The largest hospital in Nanjing, Jiangsu is at xxx".)
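
For reference, a minimal sketch of how a merge like the one in question 2 is typically done with LM_Cocktail's mix_models; the model paths, weights, and output directory below are placeholders, not the ones actually used in this issue:

```python
# Minimal LM_Cocktail merge sketch (paths and weights are placeholders).
from LM_Cocktail import mix_models

# model_type="encoder" is used for embedding (bi-encoder) models.
merged_model = mix_models(
    model_names_or_paths=["BAAI/bge-m3", "./src/aiChatServer/fintune/model"],
    model_type="encoder",
    weights=[0.5, 0.5],  # relative contribution of base vs. fine-tuned weights
    output_path="./merged-bge-m3",
)
```

Shifting weight toward the base model generally keeps scores on general-domain data closer to the original, at the cost of less influence from the fine-tuned checkpoint.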

@staoxiao
Collaborator

Thanks for your attention to our work!

  1. The loss is not high, and I think it falls within a normal range.
  2. If the negative samples are very challenging, the scores of positive and negative examples may all decrease. However, lower scores don't mean lower ranking accuracy. For downstream tasks, such as passage retrieval or semantic similarity, what matters is the relative order of the scores, not the absolute value. You should judge the success of fine-tuning based on the accuracy of downstream tasks.
  3. The example you showed, {"query": "江苏省南京市的繁华地段是哪里", "pos": ["江苏省南京市的繁华地段是鼓楼"], "neg": ["江苏省南京市的最大医院在xxx"]}, is a good training pair (the sketch after this list scores this exact example). Besides, you can change range_for_sampling in hn_mine to reduce the difficulty of the negatives.
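
To make point 2 concrete, here is a minimal sketch that scores the example pair above with a dense-only check via FlagEmbedding's FlagModel; the model path is a placeholder for the fine-tuned or merged checkpoint. What matters is that the pos score stays above the neg score, not the absolute values:

```python
# Minimal sketch: check the relative ranking of pos vs. neg (dense scores only).
# The model path is a placeholder for the fine-tuned or merged checkpoint.
from FlagEmbedding import FlagModel

model = FlagModel("./src/aiChatServer/fintune/model", use_fp16=False)

query = "江苏省南京市的繁华地段是哪里"
passages = [
    "江苏省南京市的繁华地段是鼓楼",  # pos
    "江苏省南京市的最大医院在xxx",    # neg
]

q_emb = model.encode_queries([query])  # shape (1, dim), normalized
p_emb = model.encode(passages)         # shape (2, dim), normalized
scores = q_emb @ p_emb.T               # cosine similarities

# Fine-tuning is doing its job if scores[0][0] (pos) > scores[0][1] (neg),
# even if both absolute values drop after merging.
print(scores)
```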

@mhdpr

mhdpr commented May 20, 2024

I fine-tuned a model on a dataset built in a similar way to the above; the final log entry was {'loss': 0.0484, 'learning_rate': 3.1645569620253168e-09, 'epoch': 5.0}.
On both the training set and the test set, the fine-tuned model's MRR and Recall are slightly lower than the base model's (bge-large-zh-v1.5). What might the problem be? Do I need to use LM_Cocktail to merge the base model with the fine-tuned model?

With the same dataset but bge-m3 as the base model, MRR and Recall improve slightly after fine-tuning. What could cause the results to get worse when bge-large-zh-v1.5 is used as the base model?
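
For reference, a minimal sketch of how an MRR@k / Recall@k comparison like the one described above is commonly computed; the data layout and names below are illustrative, not taken from this issue:

```python
# Illustrative MRR@k / Recall@k helpers for comparing base vs. fine-tuned models.
# `ranked_ids` holds, per query, the retrieved corpus ids sorted best-first;
# `relevant_ids` holds the gold positive ids per query. Both are hypothetical inputs.

def mrr_at_k(ranked_ids, relevant_ids, k=10):
    total = 0.0
    for ranked, relevant in zip(ranked_ids, relevant_ids):
        for rank, doc_id in enumerate(ranked[:k], start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_ids)

def recall_at_k(ranked_ids, relevant_ids, k=10):
    total = 0.0
    for ranked, relevant in zip(ranked_ids, relevant_ids):
        total += len(set(ranked[:k]) & set(relevant)) / len(relevant)
    return total / len(ranked_ids)
```

Running the same retrieval pipeline with each model and comparing these two numbers on a held-out set is usually the most direct way to decide whether the fine-tuned checkpoint, the base model, or a merged model should be deployed.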

@sevenandseven


After fine-tuning an embedding model, there should be no need to merge, right? Is merging only needed when fine-tuning a reranker model?
