
One-click Deployment of DeepSeek-V2-Lite-Chat

This tutorial is a one-click deployment demo of DeepSeek-V2-Lite-Chat. You only need to clone and start the container, then copy the generated API address into your browser to experience model inference.

1. Introduction to the model

DeepSeek-V2 is a powerful Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It contains 236B total parameters, of which 21B are activated per token. Compared with DeepSeek 67B, DeepSeek-V2 achieves stronger performance while saving 42.5% of training costs, reducing the KV cache by 93.3%, and boosting the maximum generation throughput to 5.76 times.

2. Evaluation Results

| Benchmark | Domain | QWen1.5 72B Chat | Mixtral 8x22B | LLaMA3 70B Instruct | DeepSeek-V1 Chat (SFT) | DeepSeek-V2 Chat (SFT) | DeepSeek-V2 Chat (RL) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MMLU | English | 76.2 | 77.8 | 80.3 | 71.1 | 78.4 | 77.8 |
| BBH | English | 65.9 | 78.4 | 80.1 | 71.7 | 81.3 | 79.7 |
| C-Eval | Chinese | 82.2 | 60.0 | 67.9 | 65.2 | 80.9 | 78.0 |
| CMMLU | Chinese | 82.9 | 61.0 | 70.7 | 67.8 | 82.4 | 81.6 |
| HumanEval | Code | 68.9 | 75.0 | 76.2 | 73.8 | 76.8 | 81.1 |
| MBPP | Code | 52.2 | 64.4 | 69.8 | 61.4 | 70.4 | 72.0 |
| LiveCodeBench (0901-0401) | Code | 18.8 | 25.0 | 30.5 | 18.3 | 28.7 | 32.5 |
| GSM8K | Math | 81.9 | 87.9 | 93.2 | 84.1 | 90.8 | 92.2 |
| MATH | Math | 40.6 | 49.8 | 48.5 | 32.6 | 52.7 | 53.9 |

3. How to use

The model and environment have already been deployed in this tutorial, so you can use the large model for inference and dialogue directly by following the steps below:

Step 1: Clone and start the container

After the container has been cloned and started successfully, you will see this interface. Wait a dozen seconds or so for the model to load, then copy the API address on the right into your browser.
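If you prefer to confirm from the command line that the service is ready before opening it, a minimal sketch like the one below polls the API address until it responds. The address is a placeholder (an assumption), so replace it with the API address shown on the right side of your container page.

```python
import time

import requests

# Placeholder: replace with the API address generated for your container.
API_URL = "https://YOUR-API-ADDRESS"

# Poll the address; a 200 response usually means the web service (and model)
# has finished loading and the page can be opened in a browser.
for attempt in range(30):
    try:
        resp = requests.get(API_URL, timeout=5)
        if resp.status_code == 200:
            print("Service is up; open the API address in your browser.")
            break
    except requests.RequestException:
        pass  # Service not ready yet, retry after a short wait.
    time.sleep(2)
else:
    print("Service did not respond; check whether the container is still starting.")
```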

Step 2: Open the web page

After opening the page, you can chat with the large model directly (the relevant parameters have already been tuned and need no adjustment).
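The tutorial itself only requires the web page, but if you would rather call the model programmatically, the sketch below assumes the deployment also exposes an OpenAI-compatible /v1/chat/completions endpoint. The endpoint path, the model name, and the API address are all assumptions to adapt to your own deployment; the tutorial only guarantees the web interface.

```python
import requests

# Assumptions: substitute the API address copied from the container page,
# and adjust the endpoint path and model name if your deployment differs.
API_URL = "https://YOUR-API-ADDRESS/v1/chat/completions"

payload = {
    "model": "deepseek-ai/DeepSeek-V2-Lite-Chat",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Briefly introduce the Mixture-of-Experts architecture."}
    ],
    "temperature": 0.7,
    "max_tokens": 256,
}

# Send the chat request and print the model's reply.
resp = requests.post(API_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```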

Discussion and Exchange

🖌️ If you come across a high-quality project, please leave us a message to recommend it! We have also set up a tutorial exchange group; scan the QR code and add the note [Tutorial Exchange] to join, discuss technical issues, and share your results ↓