
Conversation errors out after switching the model from chatglm3-6b to qwen-1_8B #4032

Open
Ma-Chang-an opened this issue May 16, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@Ma-Chang-an

Problem Description
When I switch the LLM model from chatglm3-6b to qwen-1_8B, the conversation errors out. This only happens when running in Docker; running directly on a Linux server does not show the problem, and both setups use the same model files.
The error message is as follows:

2024-05-16 07:45:25,858 - utils.py[line:38] - ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
  File "/Langchain-Chatchat/server/utils.py", line 36, in wrap_done
    await fn
  File "/usr/local/lib/python3.11/dist-packages/langchain/chains/base.py", line 385, in acall
    raise e
  File "/usr/local/lib/python3.11/dist-packages/langchain/chains/base.py", line 379, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.11/dist-packages/langchain/chains/llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain/chains/llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_core/language_models/chat_models.py", line 554, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_core/language_models/chat_models.py", line 514, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.11/dist-packages/langchain_core/language_models/chat_models.py", line 617, in _agenerate_with_cache
    return await self._agenerate(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_community/chat_models/openai.py", line 522, in _agenerate
    return await agenerate_from_stream(stream_iter)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_core/language_models/chat_models.py", line 87, in agenerate_from_stream
    async for chunk in stream:
  File "/usr/local/lib/python3.11/dist-packages/langchain_community/chat_models/openai.py", line 493, in _astream
    if len(chunk["choices"]) == 0:
       ^^^^^^^^^^^^^^^^^^^^^
TypeError: object of type 'NoneType' has no len()
2024-05-16 07:45:25,858 - utils.py[line:40] - ERROR: TypeError: Caught exception: object of type 'NoneType' has no len()
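The traceback bottoms out at `len(chunk["choices"])` in `langchain_community`'s `_astream`: the OpenAI-compatible backend apparently returned a streaming chunk whose `choices` field is `None`. A minimal reproduction of that failure mode (the chunk dict here is hypothetical, not the actual server payload):

```python
# Simulate a streaming chunk where the backend returned "choices": null
chunk = {"choices": None}

try:
    # Mirrors the failing check in langchain_community/chat_models/openai.py
    if len(chunk["choices"]) == 0:
        pass
except TypeError as e:
    # len(None) raises: object of type 'NoneType' has no len()
    print(e)
```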

Here is my Dockerfile:

FROM alpine/git:2.36.2 as repos

RUN git config --global http.sslVerify false && \
    git clone https://github.com/chatchat-space/Langchain-Chatchat.git /code && \
    cd /code && \
    git checkout v0.2.10 && \
    rm -rf .git

FROM python:3.10.9-slim as models

RUN sed -i 's/http:\/\/deb.debian.org/http:\/\/mirrors.tuna.tsinghua.edu.cn/g' /etc/apt/sources.list && \
    sed -i 's/http:\/\/security.debian.org/http:\/\/mirrors.tuna.tsinghua.edu.cn/g' /etc/apt/sources.list

RUN apt update && apt install -y aria2 curl git git-lfs

ENV HF_ENDPOINT=https://hf-mirror.com

COPY --chmod=a+x ./hfd.sh /hfd.sh

RUN bash /hfd.sh BAAI/bge-large-zh-v1.5 --tool aria2c -x 8 --local-dir /bge-large-zh-v1.5
RUN bash /hfd.sh Qwen/Qwen-1_8B-Chat --tool aria2c -x 8 --local-dir /Qwen-1_8B-Chat

FROM python:3.11-slim as deps

RUN --mount=type=bind,from=repos,source=/code,target=/code \
    python3 -m venv /venv && \
    . /venv/bin/activate && \
    pip install --no-cache-dir \
        -r /code/requirements.txt \
        -i https://pypi.tuna.tsinghua.edu.cn/simple

# Base Image
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04 as out
# Labels
LABEL maintainer=chatchat
# Environment Variables
ENV HOME=/Langchain-Chatchat
ENV GROUP_ID=1003
ENV USER_ID=1003
ENV GROUP_NAME=paas
ENV USER_NAME=paas

RUN groupadd -g ${GROUP_ID} ${GROUP_NAME} && \
    useradd -m -u ${USER_ID} -g ${GROUP_ID} ${USER_NAME}
# Commands
WORKDIR /

RUN sed -i 's|http://.*archive.ubuntu.com/ubuntu/|https://mirrors.tuna.tsinghua.edu.cn/ubuntu/|g' /etc/apt/sources.list && \
    sed -i 's|http://.*security.ubuntu.com/ubuntu/|https://mirrors.tuna.tsinghua.edu.cn/ubuntu/|g' /etc/apt/sources.list

RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo "Asia/Shanghai" > /etc/timezone && \
    apt-get update -y && \
    apt-get install -y --no-install-recommends python3.11 python3-pip curl libgl1 libglib2.0-0 jq && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    rm -f /usr/bin/python3 && \
    ln -s /usr/bin/python3.11 /usr/bin/python3
# Copy the application files
COPY --from=repos --chown=${USER_ID}:${GROUP_ID} /code $HOME
COPY --from=models --chown=${USER_ID}:${GROUP_ID} /bge-large-zh-v1.5 /build-in/models/bge-large-zh-v1.5
COPY --from=models --chown=${USER_ID}:${GROUP_ID} /Qwen-1_8B-Chat /build-in/models/Qwen-1_8B-Chat
WORKDIR $HOME
# Install dependencies from requirements.txt
RUN pip3 install -r requirements.txt --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple && \
    python3 copy_config_example.py && \
    sed -i 's|MODEL_ROOT_PATH = ""|MODEL_ROOT_PATH = "/build-in/models"|' configs/model_config.py && \
    sed -i 's/8501/8000/g' configs/server_config.py

RUN pip3 install langchain-community==0.0.19 --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN chown ${USER_ID}:${GROUP_ID} /mnt
USER ${USER_ID}
# RUN python3 init_database.py --recreate-vs
RUN python3 init_database.py --create-tables
RUN sed -i 's|"chatglm3-6b", "zhipu-api", "openai-api"|"Qwen-1_8B-Chat", "zhipu-api"|' configs/model_config.py && \
    sed -i 's|MODEL_ROOT_PATH = "/build-in/models"|MODEL_ROOT_PATH = "/Langchain-Chatchat/models"|' configs/model_config.py

# RUN python3 init_database.py --create-tables

WORKDIR /
COPY --chown=${USER_ID}:${GROUP_ID} ./entrypoint.sh  /entrypoint.sh
RUN chmod 750 /entrypoint.sh
EXPOSE 7861 8000
ENTRYPOINT ["bash", "/entrypoint.sh"]
@Ma-Chang-an Ma-Chang-an added the bug Something isn't working label May 16, 2024
@Ma-Chang-an Ma-Chang-an changed the title [BUG] Concise description of the issue Conversation errors out after switching the model from chatglm3-6b to qwen-1_8B May 16, 2024

dosubot bot commented May 16, 2024

The error you're encountering, TypeError: object of type 'NoneType' has no len(), suggests an issue with the chunk["choices"] being None when the application expects it to have a length. This typically happens when the expected data structure isn't initialized correctly or when the model doesn't return the expected output format. Here's how to address this:

  1. Verify Model Configuration and Path: Ensure the qwen-1_8B model's configuration and path are correctly set in your Docker environment. The model files should be correctly placed and accessible.

  2. Model Initialization Parameters: Check if Qwen-1_8B-Chat requires specific initialization parameters different from chatglm3-6b and adjust your configuration accordingly.

  3. Environment Variables and Configuration: Docker environments can differ from server environments. Make sure all necessary environment variables and configurations for qwen-1_8B are correctly set in your Docker setup.

  4. Dependencies and Compatibility: Confirm that all dependencies required by qwen-1_8B are installed and compatible within the Docker container. This includes ensuring the correct versions of libraries and that the hardware configuration (e.g., CUDA for GPU acceleration) is supported.

  5. Enhance Error Handling and Logging: Improve error handling and logging to capture more detailed information about the failure. This can help identify the exact cause of the error.

  6. Memory and Resource Allocation: Ensure the Docker container has sufficient resources allocated for the qwen-1_8B model, as it may require more resources than chatglm3-6b.

  7. Testing and Validation: Thoroughly test the model switching functionality in a local Docker environment that closely mimics your production setup to identify any environment-specific issues.

By following these steps, you should be able to resolve the error and ensure smooth operation when switching models.
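Point 5 above (better error handling around the stream) can be sketched as a small guard that logs and skips malformed chunks instead of crashing; the function and the chunk shapes are our assumptions for illustration, not Langchain-Chatchat code:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("chat_stream")

async def consume_stream(stream):
    """Iterate over streamed chunks, skipping malformed ones with a warning.

    Assumes `stream` yields dicts shaped like OpenAI chat-completion chunks;
    a chunk that is not a dict or has a missing/None/empty "choices" field
    is logged and skipped rather than raising a TypeError.
    """
    async for chunk in stream:
        if not isinstance(chunk, dict) or not chunk.get("choices"):
            logger.warning("Skipping malformed chunk: %r", chunk)
            continue
        yield chunk["choices"][0]
```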


@danny-zhu

Based on the error line File "/usr/local/lib/python3.11/dist-packages/langchain_community/chat_models/openai.py", line 493, in _astream, I located line 493 of openai.py and changed the condition to if chunk is None or chunk["choices"] is None or len(chunk["choices"]) == 0: — after that it runs normally and answers questions from the knowledge base. I also switched between several different LLMs and they all answer correctly.
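That amended condition can be factored into a small predicate for testing; the function name is ours for illustration, and we use `.get("choices")` so a chunk missing the key entirely is also treated as empty:

```python
def is_empty_chunk(chunk):
    """Return True when a streamed chunk carries no choices.

    Mirrors the amended condition above: handles a None chunk, a
    missing or null "choices" field, and an empty choices list.
    """
    return (
        chunk is None
        or chunk.get("choices") is None
        or len(chunk["choices"]) == 0
    )
```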

@Ma-Chang-an
Author

Thanks, I'll give it a try.
