HyperAI

One-click Deployment of Mistral-Nemo-Instruct-2407

Tutorial Introduction

This tutorial provides a one-click deployment of Mistral-Nemo-Instruct-2407. The required environment and dependencies are pre-installed, so you only need to clone the workspace to try inference. Mistral-Nemo-Instruct-2407 is an instruction-fine-tuned version of Mistral-Nemo-Base-2407, jointly open-sourced by Mistral AI and NVIDIA, and it significantly outperforms existing models of smaller or similar size. Mistral NeMo has 12 billion (12B) parameters and a 128k context window; its reasoning, world knowledge, and coding accuracy are state of the art for its size class. Because Mistral NeMo relies on a standard architecture, it is easy to adopt and can serve as a drop-in replacement in any system that uses Mistral 7B.

Model Features

  • Trained with a 128k context window
  • Trained on a large amount of multilingual and code data
  • Drop-in replacement for Mistral 7B
  • The pre-trained base checkpoints and instruction fine-tuning checkpoints are released under the Apache 2.0 license, allowing commercial use.

How to run

1. Clone and start the container

2. When the container status shows "running", open the "API address" to access the inference demo

3. Enter a text prompt in the input box and click Submit

4. View the generated results
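If you prefer to call the deployment programmatically instead of through the web page, the sketch below shows one way to do it. The `/v1/chat/completions` route assumes the container exposes an OpenAI-compatible server; that path, the `API_URL` placeholder, and the helper names are assumptions for illustration, not documented endpoints of this tutorial's image.

```python
import json
import urllib.request

# Placeholder: replace with the "API address" shown for your running container.
API_URL = "https://<your-api-address>/v1/chat/completions"


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completion payload (assumed API shape)."""
    return {
        "model": "Mistral-Nemo-Instruct-2407",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.3,  # Mistral recommends lower temperatures for NeMo
    }


def query(prompt: str) -> str:
    """POST the payload to the deployed endpoint and return the reply text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

If the container instead serves a Gradio or other custom frontend, adapt the request shape accordingly; the web UI steps above remain the simplest way to test the model.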