Friday 5 May 2023

LangChain losing context and timing out when using memory with agent

I've got a function that initializes an agent and makes a call to get the generated reply. It uses a tool that creates structured output from the user's input. Without any memory context this works fine: a response is returned in the expected format in around 4 seconds.

The LangChain docs state that the agent type I'm using defaults to a BufferMemory, so I create a BufferMemory instance and assign it to the agent executor instance. This causes the request to time out, with responses taking well over a minute. LangChain logs the process by default, and I can see the correct output in the terminal, although it doesn't get returned. After this, the agent appears to lose the context of the question and finally outputs an answer in the wrong format. It only eventually returns output if I remove the timeout limit on my backend.
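Independent of LangChain, the backend timeout behaviour described above can be reproduced with a plain promise race. The sketch below is illustrative only; `withTimeout` is a hypothetical helper, not part of the original code or of LangChain:

```typescript
// Hypothetical helper: rejects if the wrapped promise does not settle
// within `ms` milliseconds, mimicking a backend request timeout.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms} ms`)),
      ms,
    );
    promise.then(
      (value) => {
        clearTimeout(timer);
        resolve(value);
      },
      (err) => {
        clearTimeout(timer);
        reject(err);
      },
    );
  });
}

// Usage (hypothetical): withTimeout(executor.call({ input }), 30_000)
// would surface the hang as a rejection instead of waiting over a minute.
```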

    // Imports assumed for this snippet (langchain ~0.0.x module paths):
    import { StructuredOutputParser } from 'langchain/output_parsers';
    import { PromptTemplate } from 'langchain/prompts';
    import { ChatOpenAI } from 'langchain/chat_models/openai';
    import { DynamicTool } from 'langchain/tools';
    import { initializeAgentExecutorWithOptions } from 'langchain/agents';
    import { BufferMemory, ChatMessageHistory } from 'langchain/memory';
    import { AIChatMessage } from 'langchain/schema';

    const activityTextParser = StructuredOutputParser.fromNamesAndDescriptions({
      name: "name of user's activity",
      duration: "time duration in seconds of user's activity",
    });
    const activityFormatInstructions =
      activityTextParser.getFormatInstructions();
    const activityPrompt = new PromptTemplate({
      template:
        "Structure the output based on user's input.\n{format_instructions}\n{instruction}",
      inputVariables: ['instruction'],
      partialVariables: { format_instructions: activityFormatInstructions },
    });
    const model = new ChatOpenAI({
      openAIApiKey: process.env.OPENAI_API_KEY,
      temperature: 0,
    });
    const activityTool = new DynamicTool({
      name: 'Create activity tool',
      description: 'Uses user input to create activity JSON object',
      func: async (text: string) => {
        const input = await activityPrompt.format({
          instruction: text,
        });
        return input;
      },
    });
    const tools = [activityTool];
    const executor = await initializeAgentExecutorWithOptions(tools, model, {
      agentType: 'chat-conversational-react-description',
      verbose: true,
    });
    const chatHistory = [
      new AIChatMessage(
        'Hi there! I am your productivity assistant, how can I help you today?',
      ),
    ];
    const memory = new BufferMemory({
      chatHistory: new ChatMessageHistory(chatHistory),
      memoryKey: 'chat_history',
      returnMessages: true,
    });
    executor.memory = memory;
    const res = await executor.call({
      input: 'Create an activity named Reading that lasts for 10 minutes',
    });
    console.log({ res });
    return res;

If I comment out the line `executor.memory = memory`, the output is returned as expected.

This is what the expected output is like:

    "output": {
        "name": "Reading",
        "duration": "600"
    }

When the agent loses context and then reminds itself of the question, the output is in the following format:

    {
        "output": "The activity named Reading lasts for 10 minutes."
    }
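The two result shapes can be told apart with a small type guard. This is just an illustration of the difference between the two outputs above; `isStructuredActivity` is a hypothetical helper, not part of LangChain:

```typescript
interface Activity {
  name: string;
  duration: string;
}

// Hypothetical helper: true only for the structured shape, i.e. an
// object with string `name` and `duration` fields.
function isStructuredActivity(output: unknown): output is Activity {
  return (
    typeof output === "object" &&
    output !== null &&
    typeof (output as Activity).name === "string" &&
    typeof (output as Activity).duration === "string"
  );
}
```

`{ name: "Reading", duration: "600" }` passes the check, while the plain-string reply produced after the agent loses context fails it.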

A minimal reproducible example is available here. Note that Node version 18 or higher is required.

The screenshot below shows what gets logged when using memory: the expected value appears to be generated, but after that it isn't returned and context is lost.

Any advice on how to correctly use conversation history in combination with tools for output parsing would be appreciated. TIA



from LangChain losing context and timing out when using memory with agent
