Inject message history through initiate_chat. #2671

Open
Tracked by #2358
ekzhu opened this issue May 13, 2024 · 2 comments
Labels
enhancement, help wanted, persistence

Comments

@ekzhu
Collaborator

ekzhu commented May 13, 2024

  1. In a typical scenario, the "end user" (human) would want the interface (single Agent, SoM, Group Chat, etc.) to retain their own context. Any system implementing this would most likely store the end user's query and the interface's response in some DB.
  2. To recreate this, the only details available would be the user's query and the response. These should be passed to the agent (user proxy or similar).

I would expect something like the following:

  1. ConversableAgent's initiate_chat should accept a previous history/context and be able to use it.

While something like this can be achieved using initiate_chat's message param, which can be a callable, a native implementation would be helpful: with the callable approach the agent sometimes ends up losing the context, because it also depends on the agent's system prompt to make use of it.

from typing import Dict, Union

from autogen import ConversableAgent

assistant = ConversableAgent(
    name="Assistant",
    system_message="You are a friendly Q&A Bot. "
    "You may use the previous messages to understand the context of the current query. "
    "Return 'TERMINATE' when the task is done.",
    llm_config=llm_config,  # your LLM config dict
    human_input_mode="NEVER",
    default_auto_reply="Reply with 'TERMINATE' if the task is done.",
    is_termination_msg=is_termination_msg,
)


def my_message(sender: ConversableAgent,
               recipient: ConversableAgent,
               context: dict) -> Union[str, Dict]:
    # Build the initial message from the current query plus the stored query/answer pairs.
    message = context.get("user_query", "")
    last_chats = context.get("last_chats", [])
    prelude = [f"Query: {x.get('query')}. Answer: {x.get('answer')}" for x in last_chats]
    curr_msg = f"CURRENT QUERY: {message}"
    return f"Previous Messages: {' '.join(prelude)}. {curr_msg}"


# Extra keyword arguments are passed to the message callable via `context`.
user_proxy.initiate_chat(
    assistant,
    message=my_message,
    user_query=query,
    last_chats=last_chats,
    verbose=True,
)

In another example from the blog mentioned here, people replay the chat in GroupChat and broadcast it to all the other agents. I believe this need could also be eliminated if just the user's query and the response were passed in when initiating the group chat (a rough sketch follows below).

Originally posted by @r4881t in #2437 (comment)
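
On the GroupChat point above, a minimal sketch of the idea, assuming user_proxy, assistant, llm_config, and query are defined as in the earlier snippet and that prior turns come from some external store (the last_chats shape and the role/name conventions are assumptions for illustration). Whether every agent actually consumes the seeded history still depends on how the manager broadcasts messages, which is part of what this issue asks to make native:

from autogen import GroupChat, GroupChatManager

# Prior turns pulled from an external store (shape assumed for illustration).
last_chats = [
    {"query": "What is AutoGen?", "answer": "A framework for multi-agent LLM applications."},
]

# Seed the group chat's message list with the stored query/answer pairs so the
# conversation starts with that context instead of replaying and broadcasting it.
seed_messages = []
for turn in last_chats:
    seed_messages.append({"role": "user", "name": "User", "content": turn["query"]})
    seed_messages.append({"role": "assistant", "name": "Assistant", "content": turn["answer"]})

groupchat = GroupChat(agents=[user_proxy, assistant], messages=seed_messages, max_round=10)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message=query)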

@ekzhu added the help wanted and persistence labels on May 13, 2024
@ekzhu added the enhancement label on May 13, 2024
@jtoy
Collaborator

jtoy commented May 17, 2024

In replicantlife, we implemented a log file as the database, but also allowed for an in-memory store, Redis, or Postgres.

  1. Is there an official default DB we support, such as sqlite?
  2. Do we want the history to be in a special format like JSON or newline-separated records?

I think this needs more fleshing out before implementing, love the idea!
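
For discussion, a minimal sketch of what a sqlite-backed store with one JSON record per message could look like; the table name, schema, and helper names here are made up, not a proposed official format:

import json
import sqlite3

conn = sqlite3.connect("chat_history.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS history (id INTEGER PRIMARY KEY, session TEXT, message TEXT)"
)

def save_message(session_id: str, message: dict) -> None:
    # Store each message as a JSON blob keyed by session.
    conn.execute(
        "INSERT INTO history (session, message) VALUES (?, ?)",
        (session_id, json.dumps(message)),
    )
    conn.commit()

def load_messages(session_id: str) -> list:
    # Return the messages for a session in insertion order.
    rows = conn.execute(
        "SELECT message FROM history WHERE session = ? ORDER BY id", (session_id,)
    )
    return [json.loads(row[0]) for row in rows]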

@r4881t
Collaborator

r4881t commented May 23, 2024

I tried some versions of this locally. In my experience, the LLM sometimes unnecessarily pulls past context into the current Q&A even when it is not really needed.

What really worked for us (more deterministically) was to add a tool that fetches the last conversations, so the LLM can decide whether it needs that context.
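
A rough sketch of that tool-based approach, assuming the assistant and user_proxy agents from the earlier snippet, and using a placeholder fetch_from_store helper for whatever persistence layer is in use:

from typing import Annotated, Dict, List

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Fetch the last N query/answer pairs for this user.")
def fetch_last_conversations(
    n: Annotated[int, "How many previous turns to fetch"] = 3,
) -> List[Dict[str, str]]:
    # The LLM calls this tool only when it decides prior context is needed.
    return fetch_from_store(limit=n)  # placeholder for the real DB/session lookup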
