
Compare Zero-Epoch Prediction with Fine-Tuned Prediction as well as Validation Score Comparison #695

Open
meganjkurka opened this issue May 3, 2024 · 1 comment
Labels
type/feature Feature request

Comments

@meganjkurka

🚀 Feature

In the Validation Prediction Insights, include the Zero-Epoch Response (i.e., how the original foundation model would respond) as well as the BLEU (or other validation score) on the Zero-Epoch Response. This would show the user what the original foundation model would say versus their fine-tuned model, as well as the validation performance improvement gained from fine-tuning.

Motivation

This will allow the user to quickly see the value of their fine-tuning exercise.

@meganjkurka meganjkurka added the type/feature Feature request label May 3, 2024
@psinger
Collaborator

psinger commented May 30, 2024

This can already be solved by running a separate experiment with epochs=0; one can then compare the outputs in separate windows, or download the CSVs and compare them outside the app.
One idea would be to save the output of each validation run and display them as separate columns in the insights, but that view is already quite hard to navigate.
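A minimal sketch of the "compare outside" workaround described above, assuming the downloaded validation prediction CSVs contain columns named `target_text` and `predicted_text` (hypothetical names; adjust to whatever the exported files actually contain):

```python
# Sketch: compare zero-epoch vs. fine-tuned validation predictions offline.
# Column names and file names are assumptions, not the actual export format.
import pandas as pd
import sacrebleu

zero_epoch = pd.read_csv("validation_predictions_epochs0.csv")
fine_tuned = pd.read_csv("validation_predictions_finetuned.csv")

# Both runs score against the same validation targets.
references = [zero_epoch["target_text"].tolist()]

bleu_zero = sacrebleu.corpus_bleu(zero_epoch["predicted_text"].tolist(), references)
bleu_tuned = sacrebleu.corpus_bleu(fine_tuned["predicted_text"].tolist(), references)

print(f"Zero-epoch BLEU : {bleu_zero.score:.2f}")
print(f"Fine-tuned BLEU : {bleu_tuned.score:.2f}")

# Side-by-side view of what the base model vs. the fine-tuned model would say.
comparison = pd.DataFrame({
    "target": zero_epoch["target_text"],
    "zero_epoch_prediction": zero_epoch["predicted_text"],
    "fine_tuned_prediction": fine_tuned["predicted_text"],
})
print(comparison.head())
```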

@pascal-pfeiffer pascal-pfeiffer changed the title from "Compare Zero-Epoch Prediction with Fine-Tuned Prediction as well as Validaiton Score Comparison" to "Compare Zero-Epoch Prediction with Fine-Tuned Prediction as well as Validation Score Comparison" Jun 5, 2024