
[Add] Logging for each message of GPTAssistant #2677

Open · wants to merge 5 commits into base: main

Conversation

krishnashed (Collaborator)

@IANTHEREAL @ekzhu

Why are these changes needed?

It fixes the logging issue with GPTAssistantAgent sequential chats, where only the last message of each sequential step was being logged instead of every GPTAssistantAgent interaction.
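
For reference, a minimal sketch of how the gap can be observed, assuming pyautogen's `runtime_logging` API and the `chat_completions` table name documented for its SQLite logger (verify both against your installed version); the sequential-chat setup itself is elided:

```python
import sqlite3

import autogen

# Start autogen's runtime logging into a local SQLite file.
autogen.runtime_logging.start(config={"dbname": "logs.db"})

# ... build the GPTAssistantAgent(s) and run the sequential chats here,
# e.g. user_proxy.initiate_chats([...]) ...

autogen.runtime_logging.stop()

# Inspect what was actually logged. Before this PR only the last message of
# each sequential step shows up; the fix aims to record every
# GPTAssistantAgent interaction.
with sqlite3.connect("logs.db") as con:
    rows = con.execute("SELECT request, response FROM chat_completions").fetchall()
print(f"{len(rows)} chat completion(s) logged")
```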

Related issue number

Closes #2644


krishnashed (Collaborator, Author)

Added a fix to resolve #2697.

krishnashed (Collaborator, Author)

@ekzhu @WaelKarkoub the "ContribTests / RetrieveChatTest-Ubuntu (3.9) (pull_request)" check is failing. Any idea why, and how do we fix it?

@@ -213,6 +217,12 @@ def _invoke_assistant(
role=message["role"],
)

self.client.create(
Collaborator

I don't really understand the log-related implementation, but I would like to know: will it call the OpenAI completion function?

krishnashed (Collaborator, Author)

.create() calls log_chat_completion(), which is responsible only for logging data into the SQLite DB.

It's self._get_run_response() that makes the OpenAI chat completions call.
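
To illustrate the split described here (recording an exchange vs. making the API call), below is a generic sketch of the "persist an already-obtained exchange" pattern. The helper and table names are invented for illustration; this is not autogen's log_chat_completion() implementation, and no OpenAI call happens in it:

```python
import json
import sqlite3
import time
from typing import Any, Dict, List


def log_exchange(db_path: str, request_messages: List[Dict[str, Any]], response_text: str) -> None:
    """Write one request/response pair to a local SQLite table (illustrative only)."""
    with sqlite3.connect(db_path) as con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS assistant_log ("
            " id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " request TEXT, response TEXT, logged_at REAL)"
        )
        con.execute(
            "INSERT INTO assistant_log (request, response, logged_at) VALUES (?, ?, ?)",
            (json.dumps(request_messages), response_text, time.time()),
        )


# Example: record a reply that was already produced elsewhere (e.g. by a run).
log_exchange("logs.db", [{"role": "user", "content": "hi"}], "hello!")
```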

Collaborator

For the GPTAssistantAgent we use the run to get the responses from the assistant. I think calling the OpenAI completion doesn't add up to the actual assistant logs.
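
For context on "the run": GPTAssistantAgent obtains replies by executing a run against the OpenAI Assistants API rather than by calling chat completions. A rough sketch of that flow using the OpenAI Python SDK's beta threads/runs endpoints (the assistant ID is a placeholder; this is not the agent's actual code):

```python
import time

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder: an assistant created beforehand (e.g. via the dashboard).
ASSISTANT_ID = "asst_..."

# 1. Put the user message on a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(thread_id=thread.id, role="user", content="Hello!")

# 2. Execute a run and poll until it finishes -- this, not a
#    chat.completions.create() call, is what produces the assistant's reply.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=ASSISTANT_ID)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 3. Read the assistant's messages back off the thread.
messages = client.beta.threads.messages.list(thread_id=thread.id)
for m in messages.data:
    print(m.role, m.content[0].text.value)
```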

WaelKarkoub (Collaborator)

> @ekzhu @WaelKarkoub the "ContribTests / RetrieveChatTest-Ubuntu (3.9) (pull_request)" check is failing. Any idea why, and how do we fix it?

See if adding the --no-cache-dir flag to the pip install in .github/workflows/contrib-tests.yml at L110 helps.

krishnashed self-assigned this on May 21, 2024
krishnashed (Collaborator, Author)

@sonichi @ekzhu @WaelKarkoub I ran pre-commit run --all-files on my side, and all the checks pass. But the workflows on GitHub are failing; how do we fix them and make all checks pass? Are these checks failing for other PRs as well?

krishnashed (Collaborator, Author)

[Screenshots of the error messages from the failing checks]

These are the error messages from the failed checks. How do we fix them during the workflow runs?

Hk669

This comment was marked as outdated.

Hk669 dismissed their stale review on May 21, 2024, 18:03: "changes are required."


Successfully merging this pull request may close these issues.

[Bug]: Response data not available in Sequential chats, except for the first response