MiniCPM Technical Blog | OmniLMM Multimodal Model | CPM-C 100B Model Trial | Join our Discord and WeChat
MiniCPM is a series of end-side (on-device) large language models jointly open-sourced by ModelBest (面壁智能) and the Natural Language Processing Lab of Tsinghua University. The main language model, MiniCPM-2B, has only 2.4B non-embedding parameters.
We fully open-source the parameters of MiniCPM-2B for academic research and limited commercial use. In the future we will also release all checkpoints from the training process and most of the non-proprietary data to support research on model mechanisms. Specifically, the following models are currently available; see the Model Download section for links.
Language models
Multimodal models
HuggingFace | ModelScope | WiseModel |
---|---|---|
MiniCPM-V | MiniCPM-V | MiniCPM-V |
OmniLMM | OmniLMM | OmniLMM |
Install the vLLM version that supports MiniCPM:

```shell
pip install inference/vllm
```
Convert the HuggingFace-format checkpoint to the vllmcpm format, where `<hf_repo_path>` and `<vllmcpm_repo_path>` are both local paths:

```shell
python inference/convert_hf_to_vllmcpm.py --load <hf_repo_path> --save <vllmcpm_repo_path>
```

Run the inference example:

```shell
cd inference/vllm/examples/infer_cpm
python inference.py --model_path <vllmcpm_repo_path> --prompt_path prompts/prompt_demo.txt
```
<用户>: Which city is the capital of China?
<AI>:
The capital city of China is Beijing. Beijing is a major political, cultural, and economic center in China, and it is known for its rich history, beautiful architecture, and vibrant nightlife. It is also home to many of China's most important cultural and historical sites, including the Forbidden City, the Great Wall of China, and the Temple of Heaven. Beijing is a popular destination for tourists from around the world, and it is an important hub for international business and trade.
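The `<用户>`/`<AI>` markers above are MiniCPM's raw single-turn prompt template. For reference, here is a minimal sketch of assembling such a prompt string by hand; the helper name is ours, not part of the repo:

```python
def build_minicpm_prompt(query: str) -> str:
    # MiniCPM wraps the user turn in <用户>...<AI>;
    # the model generates the text that follows the <AI> marker.
    return f"<用户>{query}<AI>"

print(build_minicpm_prompt("Which city is the capital of China?"))
# -> <用户>Which city is the capital of China?<AI>
```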
After installing `transformers>=4.36.0` and `accelerate`, run the following code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(0)

path = 'openbmb/MiniCPM-2B-dpo-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
# trust_remote_code is required because MiniCPM ships its own modeling code.
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)

response, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.5, top_p=0.8, repetition_penalty=1.02)
print(response)
```
Expected output (the model replies in Chinese; translated here):

The highest mountain in Shandong Province is Mount Tai, with an elevation of 1,545 m.
Compared with Mount Huang (elevation 1,864 m), Mount Tai is lower, by a difference of about 319 m.
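`model.chat` also returns the accumulated `history`. Feeding it back in should give multi-turn dialogue; this is a minimal sketch assuming the `chat` helper accepts a `history` keyword, which you should verify against the model's remote code:

```python
# First turn: no prior history.
response, history = model.chat(tokenizer, "山东省最高的山是哪座山?", temperature=0.5, top_p=0.8)

# Follow-up turn: pass the history back so the model sees the prior exchange.
response, history = model.chat(tokenizer, "它比黄山高还是矮?", history=history, temperature=0.5, top_p=0.8)
print(response)
```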
To run the multimodal model MiniCPM-V on a single image:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
model.eval().cuda()

# Load the input image and pose a question about it.
image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': question}]

res, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res)
```
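Since `chat` returns an updated `context`, a follow-up question about the same image can plausibly reuse it. A sketch under that assumption (the exact multi-turn contract is defined by the model's remote code):

```python
# Append the assistant's reply and a follow-up question, then reuse the context.
msgs.append({'role': 'assistant', 'content': res})
msgs.append({'role': 'user', 'content': 'What colors stand out the most?'})

res, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=context,  # assumption: the returned context carries the dialogue state
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res)
```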
To reproduce the benchmark results below with UltraEval:

```shell
# Install UltraEval
git clone https://github.com/OpenBMB/UltraEval.git
cd UltraEval
pip install -e .

# Download and preprocess the evaluation data
wget -O RawData.zip "https://cloud.tsinghua.edu.cn/f/71b5232264ae4833a4d0/?dl=1"
unzip RawData.zip
python data_process.py

# Run the evaluation
bash run_eval.sh
```
Cross-level comparison (against larger models):
Model | Average | English Avg. | Chinese Avg. | C-Eval | CMMLU | MMLU | HumanEval | MBPP | GSM8K | MATH | BBH | ARC-E | ARC-C | HellaSwag |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Llama2-7B | 35.40 | 36.21 | 31.765 | 32.42 | 31.11 | 44.32 | 12.2 | 27.17 | 13.57 | 1.8 | 33.23 | 75.25 | 42.75 | 75.62* |
Qwen-7B | 49.46 | 47.19 | 59.655 | 58.96 | 60.35 | 57.65 | 17.07 | 42.15 | 41.24 | 5.34 | 37.75 | 83.42 | 64.76 | 75.32* |
Deepseek-7B | 39.96 | 39.15 | 43.635 | 42.82 | 44.45 | 47.82 | 20.12 | 41.45 | 15.85 | 1.53 | 33.38 | 74.58* | 42.15* | 75.45* |
Mistral-7B | 48.97 | 49.96 | 44.54 | 46.12 | 42.96 | 62.69 | 27.44 | 45.2 | 33.13 | 5.0 | 41.06 | 83.92 | 70.73 | 80.43* |
Llama2-13B | 41.48 | 42.44 | 37.19 | 37.32 | 37.06 | 54.71 | 17.07 | 32.55 | 21.15 | 2.25 | 37.92 | 78.87* | 58.19 | 79.23* |
MPT-30B | 38.17 | 39.82 | 30.715 | 29.34 | 32.09 | 46.56 | 21.95 | 35.36 | 10.31 | 1.56 | 38.22 | 78.66* | 46.08* | 79.72* |
Falcon-40B | 43.62 | 44.21 | 40.93 | 40.29 | 41.57 | 53.53 | 24.39 | 36.53 | 22.44 | 1.92 | 36.24 | 81.94* | 57.68 | 83.26* |
MiniCPM-2B | 52.33 | 52.6 | 51.1 | 51.13 | 51.07 | 53.46 | 50.00 | 47.31 | 53.83 | 10.24 | 36.87 | 85.44 | 68.00 | 68.25 |
Same-size comparison:
Model | Average | English Avg. | Chinese Avg. | C-Eval | CMMLU | MMLU | HumanEval | MBPP | GSM8K | MATH | BBH | ARC-E | ARC-C | HellaSwag |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TinyLlama-1.1B | 25.36 | 25.55 | 24.525 | 25.02 | 24.03 | 24.3 | 6.71 | 19.91 | 2.27 | 0.74 | 28.78 | 60.77* | 28.15* | 58.33* |
Qwen-1.8B | 34.72 | 31.87 | 47.565 | 49.81 | 45.32 | 43.37 | 7.93 | 17.8 | 19.26 | 2.42 | 29.07 | 63.97* | 43.69 | 59.28* |
Gemini Nano-3B | - | - | - | - | - | - | - | 27.2 (reported) | 22.8 (reported) | - | 42.4 (reported) | - | - | - |
StableLM-Zephyr-3B | 43.46 | 46.31 | 30.615 | 30.34 | 30.89 | 45.9 | 35.37 | 31.85 | 52.54 | 12.49 | 37.68 | 73.78 | 55.38 | 71.87* |
Phi-2-2B | 48.84 | 54.41 | 23.775 | 23.37 | 24.18 | 52.66 | 47.56 | 55.04 | 57.16 | 3.5 | 43.39 | 86.11 | 71.25 | 73.07* |
MiniCPM-2B | 52.33 | 52.6 | 51.1 | 51.13 | 51.07 | 53.46 | 50.00 | 47.31 | 53.83 | 10.24 | 36.87 | 85.44 | 68.00 | 68.25 |
Chat model comparison:
Model | Average | English Avg. | Chinese Avg. | C-Eval | CMMLU | MMLU | HumanEval | MBPP | GSM8K | MATH | BBH | ARC-E | ARC-C | HellaSwag |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ChatGLM2-6B | 37.98 | 35.17 | 50.63 | 52.05 | 49.21 | 45.77 | 10.37 | 9.38 | 22.74 | 5.96 | 32.6 | 74.45 | 56.82 | 58.48* |
Mistral-7B-Instruct-v0.1 | 44.36 | 45.89 | 37.51 | 38.06 | 36.96 | 53.56 | 29.27 | 39.34 | 28.73 | 3.48 | 39.52 | 81.61 | 63.99 | 73.47* |
Mistral-7B-Instruct-v0.2 | 50.91 | 52.83 | 42.235 | 42.55 | 41.92 | 60.51 | 36.59 | 48.95 | 40.49 | 4.95 | 39.81 | 86.28 | 73.38 | 84.55* |
Qwen-7B-Chat | 44.93 | 42.05 | 57.9 | 58.57 | 57.23 | 56.03 | 15.85 | 40.52 | 42.23 | 8.3 | 37.34 | 64.44* | 39.25* | 74.52* |
Yi-6B-Chat | 50.46 | 45.89 | 70.995 | 70.88 | 71.11 | 62.95 | 14.02 | 28.34 | 36.54 | 3.88 | 37.43 | 84.89 | 70.39 | 74.6* |
Baichuan2-7B-Chat | 44.68 | 42.74 | 53.39 | 53.28 | 53.5 | 53 | 21.34 | 32.32 | 25.25 | 6.32 | 37.46 | 79.63 | 60.15 | 69.23* |
Deepseek-7B-chat | 49.34 | 49.56 | 48.335 | 46.95 | 49.72 | 51.67 | 40.85 | 48.48 | 48.52 | 4.26 | 35.7 | 76.85 | 63.05 | 76.68* |
Llama2-7B-Chat | 38.16 | 39.17 | 33.59 | 34.54 | 32.64 | 47.64 | 14.02 | 27.4 | 21.15 | 2.08 | 35.54 | 74.28 | 54.78 | 75.65* |
MiniCPM-2B | 52.33 | 52.6 | 51.1 | 51.13 | 51.07 | 53.46 | 50.00 | 47.31 | 53.83 | 10.24 | 36.87 | 85.44 | 68.00 | 68.25 |
Comparison after DPO:
Model | MT-Bench |
---|---|
GPT-4-turbo | 9.32 |
GPT-3.5-turbo | 8.39 |
Mixtral-8x7B-Instruct-v0.1 | 8.30 |
Claude-2.1 | 8.18 |
Zephyr-7B-beta | 7.34 |
MiniCPM-2B | 7.25 |
Vicuna-33B | 7.12 |
Zephyr-7B-alpha | 6.88 |
LLaMA-2-70B-chat | 6.86 |
Mistral-7B-Instruct-v0.1 | 6.84 |
MPT-30B-chat | 6.39 |
Model | Size | MME | MMB dev (en) | MMB dev (zh) | MMMU val | CMMMU val |
---|---|---|---|---|---|---|
LLaVA-Phi | 3B | 1335 | 59.8 | - | - | - |
MobileVLM | 3B | 1289 | 59.6 | - | - | - |
Imp-v1 | 3B | 1434 | 66.5 | - | - | - |
Qwen-VL-Chat | 9.6B | 1487 | 60.6 | 56.7 | 35.9 | 30.7 |
CogVLM | 17.4B | 1438 | 63.7 | 53.8 | 32.1 | - |
MiniCPM-V(3B) | 3B | 1452 | 67.3 | 61.9 | 34.7 | 32.1 |
```shell
# generation powered by vllm
python demo/vllm_based_demo.py --model_path <vllmcpm_repo_path>

# generation powered by huggingface
python demo/hf_based_demo.py --model_path <hf_repo_path>
```
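For orientation, here is a minimal sketch of what a HuggingFace-backed chat demo can look like. This is our illustration, not the contents of `demo/hf_based_demo.py`, and it assumes `gradio` is installed and that `model.chat` behaves as in the example above:

```python
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = 'openbmb/MiniCPM-2B-dpo-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)

def respond(message, chat_history):
    # Single-turn call into MiniCPM's chat helper; wiring chat_history
    # back into the model is omitted for brevity.
    response, _ = model.chat(tokenizer, message, temperature=0.5, top_p=0.8)
    return response

# gr.ChatInterface builds the chat UI around the respond function.
gr.ChatInterface(respond).launch()
```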
Parameter-efficient fine-tuning
Full-parameter fine-tuning or continued training
To cite MiniCPM:

```bibtex
@inproceedings{minicpm2024,
  title={MiniCPM: Unveiling the Potential of End-side Large Language Models},
  booktitle={OpenBMB Blog},
  year={2024}
}
```