
ChatGPT Plugin Functionality #1417

Closed · wants to merge 2 commits

Conversation


@Icemaster-Eric Icemaster-Eric commented Sep 14, 2023

Describe your changes

Added ChatGPT-style plugin functionality to the Python bindings for GPT4All. The existing codebase is largely untouched: the only change to gpt4all.py is a new plugins parameter on the GPT4All class, which takes an iterable of plugin URL strings, registers each plugin, and generates the final plugin instructions. The plugin functions live in a new plugins.py file, with tests added under the tests folder.

Issue ticket number and link

#1391

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.
  • I have added thorough documentation for my code.
  • I have tagged PR with relevant project labels. I acknowledge that a PR without labels may be dismissed.
  • If this PR addresses a bug, I have provided both a screenshot/video of the original bug and the working solution.

Demo

plugin-demo

Installation / Usage

Build this repo from source locally using the build instructions.

Usage is simple: add the model.plugin_instructions string to the prompt, then pass the model's output to model.get_plugin_response, which returns a string containing the plugin response details. Finally, feed the plugin response back to the model in a second prompt to get the final output.

Plugin URLs should usually end with /.well-known/ai-plugin.json, as that is the path that serves the plugin's metadata.
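For illustration, resolving a plugin URL to its well-known manifest path could be sketched as below. This is a hypothetical helper, not part of the PR's plugins.py; the name and behavior are assumptions.

```python
from urllib.parse import urlparse

def manifest_url(plugin_url: str) -> str:
    """Hypothetical sketch: map a plugin's base URL to its manifest URL.

    A bare host gets the conventional /.well-known/ai-plugin.json suffix;
    a URL that already points at a specific path is left alone.
    """
    base = plugin_url.rstrip("/")
    if urlparse(base).path:
        # Already points at a specific path (e.g. a custom manifest route)
        return base
    return base + "/.well-known/ai-plugin.json"
```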

Example Code

from gpt4all import GPT4All

model = GPT4All(
    "GPT4All-13B-snoozy.ggmlv3.q4_1.bin",
    plugins=("https://chatgpt-plugins.replit.app/openapi/weather-plugin",)
)

while True:
    prompt = input("\nPrompt: ")

    output = ""

    # First pass: let the model decide whether (and how) to call a plugin.
    for token in model.generate(f"""### System:
{model.plugin_instructions}

### Human:
{prompt}

### Assistant:
""", temp=0, max_tokens=256, streaming=True):
        print(token, end="", flush=True)
        output += token

    print()

    # Execute the plugin call the model produced and capture its response.
    plugin_response = model.get_plugin_response(output)

    # Second pass: give the plugin response back to the model for the final answer.
    for token in model.generate(f"""### System:
{plugin_response}

### Human:
{prompt}

### Assistant:
""", temp=0, max_tokens=128, streaming=True):
        print(token, end="", flush=True)

Notes

This does not work for ChatGPT-restricted plugins. Currently, the 13B GPT4All-snoozy model can generate plugin calls correctly, and most similarly sized models should be able to do the same. However, the complexity of the returned data has a large impact on the final response, so it is recommended to trim the information returned by plugins to the minimum required where possible.
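The trimming advice above could be sketched as a small helper; this is hypothetical (not part of the PR), and the name, default limit, and truncation marker are all assumptions.

```python
def trim_plugin_response(text: str, max_chars: int = 500) -> str:
    """Hypothetical sketch: cap plugin output length before feeding it back
    to the model, to avoid wasting context on verbose API responses."""
    text = text.strip()
    if len(text) <= max_chars:
        return text
    # Truncate and mark the cut so the model knows data was dropped
    return text[:max_chars].rstrip() + " [truncated]"
```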

@Icemaster-Eric Icemaster-Eric marked this pull request as draft September 15, 2023 04:24
@Icemaster-Eric Icemaster-Eric marked this pull request as ready for review September 15, 2023 04:26
@jacoobes jacoobes added the bindings gpt4all-binding issues label Sep 15, 2023

posix4e commented Sep 16, 2023

What's it going to take to get this reviewed? I need this...

@cosmic-snow
Collaborator

Sorry, but I don't think it's me who should be reviewing this.

@AndriyMulyar
Contributor

AndriyMulyar commented Sep 17, 2023

Taking a look @Icemaster-Eric. I'm really excited about supporting plugin calling against OpenAI's plugin spec, but need to think about edge cases:

  1. Prompt templates differ by model (you can access the right one in the attributes of the GPT4All object), but this code currently uses the System/Response template.
  2. Plugins inherently require internet access. It should be the case that no network calls are made when plugins are not being used.

I'll be giving this a deeper review this week and floating it around in the Discord for discussion.

@jacoobes jacoobes self-requested a review September 17, 2023 23:24
@Icemaster-Eric
Author

Thanks for the suggestions!

  1. Will update code accordingly, thanks.
  2. If no plugins are provided in the plugins parameter, no network calls are made. However, letting the model decide whether it needs a plugin currently relies on having the plugin spec in the prompt. I don't believe this should be an issue, but a feature that turns the plugin functionality on or off could be useful, so as not to waste context.
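The on/off switch floated above could look something like the sketch below. This is purely illustrative: the class and attribute names are assumptions, not the PR's actual API.

```python
class GPT4AllWithPlugins:
    """Hypothetical sketch of a plugin on/off toggle (not the PR's actual API)."""

    def __init__(self, plugins=()):
        self.plugins = tuple(plugins)
        # Enabled by default only when plugins were actually supplied
        self.plugins_enabled = bool(self.plugins)

    @property
    def plugin_instructions(self) -> str:
        # Empty when disabled, so no context is spent on plugin specs
        if not self.plugins_enabled:
            return ""
        return "\n".join(f"Plugin available at {url}" for url in self.plugins)
```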

response = requests.request(method, url, json=parameters)

# Check if the response is successful
if response.ok:
Collaborator


Maybe we can reverse this condition and clean up the nesting: if the response is an error, raise a ValueError; then

if response.text.strip():
    ...continue
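The guard-clause refactor suggested above could be sketched as follows. The function name is assumed, and the function takes any response-like object (e.g. a requests.Response) so the control flow is easy to see in isolation.

```python
from types import SimpleNamespace  # only used to build stub responses for testing

def handle_plugin_response(response) -> str:
    """Sketch of the reviewed snippet with the condition reversed.

    `response` is any object with .ok, .status_code and .text attributes,
    such as a requests.Response.
    """
    # Guard clause: fail fast on HTTP errors instead of nesting the happy path
    if not response.ok:
        raise ValueError(f"plugin request failed with status {response.status_code}")
    body = response.text.strip()
    if not body:
        return ""
    return body
```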


@christianjuth

Any updates here? Lack of browsing and plugin support is the main thing keeping me on ChatGPT for now. Adding this would be huge.

@cebtenzzre cebtenzzre added the python-bindings gpt4all-bindings Python specific issues label Mar 1, 2024
Collaborator

@manyoso manyoso left a comment


This needs to be updated to resolve conflicts, but we would certainly love this functionality. Currently out of date :(

@cosmic-snow cosmic-snow removed their request for review March 12, 2024 05:56
@jacoobes jacoobes mentioned this pull request Mar 17, 2024
@jacoobes jacoobes requested review from jacoobes and removed request for jacoobes March 18, 2024 06:27
@cebtenzzre
Member

It seems like this PR will never be finished. I prefer the approach in ggerganov/llama.cpp#5695, which relies on a model-specific template to include function calls in the middle of a conversation, rather than as the sole response of the model.

@cebtenzzre cebtenzzre closed this May 5, 2024