h2oai/h2o-wizardlm


Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning

Automatically creates high-complexity instructions from an existing instruction-tuned LLM, for further fine-tuning. A step towards truly open ChatGPT clones: no Vicuna/ShareGPT ToS violations, and everything can be built on top of Apache 2.0 models and data.

  • Input: Instruction-tuned LLM and (optional) seed prompts (or document corpus, coming soon)
  • Output: Set of high-complexity instruction prompts (and responses)

Based on "WizardLM: Empowering Large Language Models to Follow Complex Instructions" (https://arxiv.org/abs/2304.12244)
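
The core idea from the paper (Evol-Instruct) is to ask an LLM to rewrite a seed instruction into a more complex one, then repeat. A minimal sketch of such a meta-prompt builder, using hypothetical evolution templates (the actual wording in wizardlm.py may differ):

```python
import random

# Hypothetical evolution strategies, loosely following the Evol-Instruct
# paper; the templates used by wizardlm.py may differ.
EVOLUTIONS = [
    "Add one more constraint or requirement to the instruction below.",
    "Rewrite the instruction below to require multi-step reasoning.",
    "Make the instruction below more specific by narrowing its topic.",
]

def build_evolve_prompt(seed: str, rng=random) -> str:
    """Wrap a seed instruction in a meta-prompt asking an LLM to evolve it."""
    strategy = rng.choice(EVOLUTIONS)
    return (
        f"{strategy}\n"
        "Respond with only the rewritten instruction.\n\n"
        f"Instruction: {seed}"
    )

prompt = build_evolve_prompt("What's trending in science & technology?")
```

Feeding the returned meta-prompt to the instruction-tuned model yields the next, more complex instruction; iterating this loop produces prompts like the auto-generated example above.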

Example

  • Starting (seed) prompt: "What's trending in science & technology?"
  • Auto-generated prompt: "As a researcher in the field of artificial intelligence (AI) and healthcare, you have been tasked with conducting a comprehensive study of the potential of AI in healthcare. Your research must be based on sources that are at least 10 years old, and you must use a minimum of 15 academic sources. Your study must be at least 30 pages long and include a detailed analysis of the current state of AI in healthcare, its potential future developments, and the challenges that need to be addressed to fully realize its potential. Additionally, you must provide a critical evaluation of the existing literature on AI in healthcare, identifying gaps and areas for further research. However, you have been given the additional task of creating a series of recommendations for healthcare organizations and policymakers to help them make informed decisions about the implementation of AI in healthcare. Your recommendations must be evidence-based and take into account the ethical implications of AI in healthcare."

Installation

Create a Python 3.10 environment and install the dependencies:

pip install -r requirements.txt

Create WizardLM dataset

Edit the base model and the desired number of rows in wizardlm.py, then run it:

python wizardlm.py

You will end up with a file called wizard_lm.<uuid>.json, where <uuid> is a random string. Example files are included in this folder.
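
The output is a JSON file of instruction/response records. A quick way to inspect one is sketched below; the field names `instruction` and `output` are assumptions here, so check them against your generated file:

```python
import json

# Hypothetical records mimicking the generated dataset's shape; inspect
# your own wizard_lm.<uuid>.json to confirm the actual field names.
records = [
    {"instruction": "Explain transformers.", "output": "Transformers are ..."},
    {"instruction": "Summarize AI in healthcare.", "output": ""},
]

path = "wizard_lm.example.json"
with open(path, "w") as f:
    json.dump(records, f, indent=2)

with open(path) as f:
    loaded = json.load(f)

empty = sum(1 for r in loaded if not r["output"].strip())
print(f"{len(loaded)} records, {empty} with empty responses")
```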

Known issues

  • Slow, even with pipeline and batching
  • Requires a reasonably good instruct-tuned LLM for the current prompting logic
  • Auto-generated prompts are good, but responses can be empty, at least for junelee/wizard-vicuna-13b
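
Until response generation improves, a simple workaround is to drop records with empty responses before fine-tuning. A sketch, again assuming hypothetical `instruction`/`output` field names (verify against your generated file):

```python
def drop_empty_responses(records):
    """Keep only records whose response is non-empty after stripping whitespace."""
    return [r for r in records if r.get("output", "").strip()]

# Hypothetical records mimicking the generated dataset's shape.
sample = [
    {"instruction": "A", "output": "a non-empty answer"},
    {"instruction": "B", "output": "   "},
]
clean = drop_empty_responses(sample)
```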

Roadmap

  • Speed up
  • Improve generated responses
  • Allow complexity control
  • Handle instruction/input vs just instruction, to enable summarization/code
  • Use in a complete Apache 2.0 loop: Open LLaMa + oasst1 -> wizardlm -> iterate
