[Bug] InternVL2.5 78B stuck during inference

Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
Hello, I trained a LoRA using ms_swift and merged the weights into InternVL2.5 78B. Afterwards, I attempted inference using LMDeploy versions 0.8.0 and 0.7.3. I followed the syntax from this documentation: https://internvl.readthedocs.io/en/latest/internvl2.5/deployment.html
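For reference, the single-image usage pattern from that guide looks roughly like this (a minimal sketch; "example.jpg" is a placeholder path, and the config values mirror the repro script further below):

from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

pipe = pipeline(
    "OpenGVLab/InternVL2_5-78B-MPO",
    backend_config=TurbomindEngineConfig(session_len=4096, tp=8),
    chat_template_config=ChatTemplateConfig(model_name="internvl2_5"),
)
image = load_image("example.jpg")
resp = pipe(("Provide very detailed description of the image.", image))
print(resp.text)

Single requests and small batches did not trigger the freeze.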
I provided an image and text input, and experimented with various batch sizes. When I used batch sizes around 16 or higher, the model would randomly freeze and stop working — GPU usage would spike to 100%, while VRAM usage would drop to 0%. This occurred on a single node with 8×H100 GPUs. I observed the same behavior even with the base model without LoRA. On version 0.8.0, this happens roughly 1–2 times per 1000 samples, but on 0.7.3 it occurs almost every time.
To clarify: the model runs for a certain number of iterations with batch size equal to or greater than 16, and then at some random point, it freezes completely.
This happens with the TurboMind engine (TurbomindEngineConfig) backend.
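I have only tested the TurboMind backend so far. To help isolate whether the hang is TurboMind-specific, the same repro could be switched to the PyTorch engine backend; a hedged sketch (I have not verified this path with InternVL2.5 78B, and PytorchEngineConfig fields may differ across versions):

from lmdeploy import pipeline, PytorchEngineConfig, ChatTemplateConfig

pipe = pipeline(
    "OpenGVLab/InternVL2_5-78B-MPO",
    backend_config=PytorchEngineConfig(
        session_len=4096,
        tp=8,
        max_batch_size=16,  # same batch size that triggers the hang under TurboMind
    ),
    chat_template_config=ChatTemplateConfig(model_name="internvl2_5"),
)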
Reproduction
#!/usr/bin/env python3
"""
Repro MVP: InternVL-2.5 batched inference with LMDeploy.
Set BATCH_SIZE ≥ 16 to trigger the hang described above.
"""
import glob, os

from PIL import Image
from tqdm import tqdm

from lmdeploy import (
    pipeline, TurbomindEngineConfig,
    ChatTemplateConfig, GenerationConfig,
)

# ─── Adjust these three lines only ────────────────────────────────────────────
MODEL_PATH = "OpenGVLab/InternVL2_5-78B-MPO"  # or the merged-LoRA checkpoint path
IMAGE_DIR = "bench2"                          # folder with test images
BATCH_SIZE = 16                               # try 16+ to reproduce the issue
# ─────────────────────────────────────────────────────────────────────────────

PROMPT = "Provide very detailed description of the image."

# grab *any* images in the folder
IMAGE_EXTS = ("*.jpg", "*.jpeg", "*.png", "*.webp",
              "*.bmp", "*.tif", "*.tiff")
paths = [p for ext in IMAGE_EXTS
         for p in glob.glob(os.path.join(IMAGE_DIR, ext))]
if len(paths) < BATCH_SIZE:
    raise RuntimeError(f"Need ≥ {BATCH_SIZE} images in {IMAGE_DIR}")

# load the model once
pipe = pipeline(
    MODEL_PATH,
    backend_config=TurbomindEngineConfig(
        session_len=4096,
        tp=8,
        max_batch_size=BATCH_SIZE,  # critical for reproducing the bug
    ),
    chat_template_config=ChatTemplateConfig(model_name="internvl2_5"),
)
gen_cfg = GenerationConfig(max_new_tokens=512)

with tqdm(total=len(paths), desc="Describing", unit="img") as pbar:
    for i in range(0, len(paths), BATCH_SIZE):
        batch_paths = paths[i : i + BATCH_SIZE]
        batch_imgs = [Image.open(p).convert("RGB") for p in batch_paths]
        batch_prompts = [(PROMPT, img) for img in batch_imgs]
        try:
            batch_resps = pipe(batch_prompts, gen_config=gen_cfg)
        except Exception as err:
            print(f"[WARN] batch starting with {os.path.basename(batch_paths[0])}: {err}")
            batch_resps = [None] * len(batch_paths)
        pbar.update(len(batch_paths))
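To capture where the process is stuck once the hang occurs, a watchdog can be added near the top of the repro script. This is only a sketch (the 600-second timeout is an arbitrary choice); since the hang may be inside native NCCL/CUDA code, attaching py-spy or gdb to the worker processes may give a more complete picture.

import faulthandler, signal, sys

# Dump all threads' Python stacks every 600 s (and on demand via `kill -USR1 <pid>`),
# so a dump is available on stderr when the process hangs.
faulthandler.register(signal.SIGUSR1, all_threads=True)
faulthandler.dump_traceback_later(600, repeat=True, file=sys.stderr)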
Environment
sys.platform: linux
Python: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
CUDA_HOME: /usr/local/cuda-12.4
NVCC: Cuda compilation tools, release 12.4, V12.4.131
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.5.1+cu124
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX512
- CUDA Runtime 12.4
- NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
- CuDNN 90.1
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.4, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.5.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
TorchVision: 0.20.1+cu124
LMDeploy: 0.8.0+
transformers: 4.51.3
gradio: Not Found
fastapi: 0.115.12
pydantic: 2.11.4
triton: 3.1.0
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    NV18    NV18    NV18    NV18    NV18    NV18    0-63            0               N/A
GPU1    NV18     X      NV18    NV18    NV18    NV18    NV18    NV18    0-63            0               N/A
GPU2    NV18    NV18     X      NV18    NV18    NV18    NV18    NV18    0-63            0               N/A
GPU3    NV18    NV18    NV18     X      NV18    NV18    NV18    NV18    0-63            0               N/A
GPU4    NV18    NV18    NV18    NV18     X      NV18    NV18    NV18    64-127          1               N/A
GPU5    NV18    NV18    NV18    NV18    NV18     X      NV18    NV18    64-127          1               N/A
GPU6    NV18    NV18    NV18    NV18    NV18    NV18     X      NV18    64-127          1               N/A
GPU7    NV18    NV18    NV18    NV18    NV18    NV18    NV18     X      64-127          1               N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Error traceback