
How to surpass BERT through large models #442

Open
tianchiguaixia opened this issue Feb 17, 2024 · 1 comment
Labels
usage How to use `spacy-llm`

Comments

@tianchiguaixia commented Feb 17, 2024

The current disadvantage of doing NER with large models is that they cannot match the performance of a fine-tuned BERT. Is there any way to close this gap, for example through prompt engineering? If large models could reach that level, it would greatly reduce annotation labor costs.

rmitsch added the usage label on Feb 19, 2024
@rmitsch (Collaborator) commented Feb 19, 2024

Hi @tianchiguaixia, LLM performance on extractive tasks can be improved by (1) tuning your prompt, (2) providing few-shot examples, and (3) fine-tuning your LLM.
