
feat: Change parameter settings explainer #2903

Open
imtuyethan opened this issue May 14, 2024 · 0 comments
imtuyethan commented May 14, 2024

Problem

The current explainers for parameter settings are not clear enough, according to user feedback:

(Screenshot of the current parameter settings panel, 2024-05-14)

We need to guide users on how to set these parameters effectively (directly in the app, not just through docs).

Success Criteria

Explainers should be short, precise, and straight to the point.

Setting parameters explainer

Inference Parameters

| Parameter | Description |
| --- | --- |
| Temperature | Influences the randomness of the model's output. A higher value leads to more random and diverse responses, while a lower value produces more predictable outputs. |
| Top P | Sets a probability threshold for more relevant outputs. A lower value (e.g., 0.9) may be more suitable for focused, task-oriented applications, while a higher value (e.g., 0.95 or 0.97) may be better for more open-ended, creative tasks. |
| Stream | Enables real-time data processing, which is useful for applications needing immediate responses, like live interactions. It accelerates predictions by processing data as it becomes available. |
| Max Tokens | Sets the upper limit on the number of tokens the model can generate in a single output. A higher limit benefits detailed and complex responses, while a lower limit helps maintain conciseness. |
| Stop Sequences | Defines specific tokens or phrases that signal the model to stop producing further output, allowing you to control the length and coherence of the output. |
| Frequency Penalty | Modifies the likelihood of the model repeating the same words or phrases within a single output. Increasing it can help avoid repetition, which is useful for scenarios where you want more varied language, like creative writing or content generation. |
| Presence Penalty | Reduces the likelihood of repeating tokens, promoting novelty in the output. Use a higher value for tasks requiring diverse ideas. |
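For context, the inference parameters above map onto the request body of an OpenAI-compatible chat completions call, which local runtimes commonly expose. A minimal sketch (field names follow the OpenAI convention; the model id and message are placeholders, not Jan's actual defaults):

```python
# Sketch: the inference parameters above, as they might appear in a request
# payload to an OpenAI-compatible chat completions endpoint.
payload = {
    "model": "local-model",  # hypothetical model id
    "messages": [{"role": "user", "content": "Summarize this article."}],
    "temperature": 0.7,        # randomness of the output
    "top_p": 0.95,             # probability threshold (nucleus sampling)
    "stream": True,            # emit tokens as they are generated
    "max_tokens": 512,         # upper limit on generated tokens
    "stop": ["\n\n"],          # stop sequences that end generation
    "frequency_penalty": 0.5,  # discourage repeating the same words
    "presence_penalty": 0.5,   # encourage tokens not yet seen
}
```

Showing each explainer next to its request field like this may help users connect the in-app setting to its effect.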

Model Parameters

| Parameter | Description |
| --- | --- |
| Prompt Template | A predefined text or framework that guides the AI model's response generation. It includes placeholders or instructions for the model to fill in or expand upon. |
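To make "placeholders" concrete, a prompt template can be illustrated as a string with a slot the app fills in at run time. A minimal sketch (the template text itself is illustrative, not Jan's actual default for any model):

```python
# Sketch: a prompt template with a {prompt} placeholder that the app
# substitutes with the user's message before sending it to the model.
prompt_template = "### Instruction:\n{prompt}\n\n### Response:\n"

filled = prompt_template.format(prompt="Explain what a prompt template is.")
```

An in-app explainer could show a small before/after like this so users see what the template wraps around their input.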

Engine Parameters

| Parameter | Description |
| --- | --- |
| Context Length | Sets the maximum input the model can use to generate a response; it varies with the model used. A higher length is better for tasks needing extensive context, like summarizing long documents. A lower length can improve response time and reduce computing needs for simple queries. |
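The trade-off behind context length can be sketched as a truncation step: when the prompt exceeds the context window, older tokens must be dropped before inference. A simplified illustration (token counting here is a naive list length; real runtimes use the model's tokenizer, and the numbers are arbitrary):

```python
# Sketch: keep only the most recent tokens that fit the context window,
# reserving room for the model's own output.
def truncate_to_context(tokens, context_length, reserve_for_output):
    budget = context_length - reserve_for_output
    # Drop the oldest tokens when the input is too long.
    return tokens[-budget:] if len(tokens) > budget else tokens

tokens = "a very long transcript".split() * 1000  # pretend token list (4000 tokens)
kept = truncate_to_context(tokens, context_length=2048, reserve_for_output=512)
```

This is the behavior the explainer hints at: a larger context length keeps more of the input, at the cost of more compute.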
@imtuyethan added the `type: feature request` label on May 14, 2024
@imtuyethan changed the title from "feat: Change settings parameter explainer" to "feat: Change parameter settings explainer" on May 14, 2024