Mistral-7B-Instruct-v0.3
Instruction-tuned 7.25 B-parameter transformer from Mistral AI. v0.3 expands
the vocabulary to 32,768 tokens, adds function-calling support, and ships the
v3 tokenizer. Mirrored from upstream on first request and refreshed every 12 h.
Tags: text-generation · conversational · transformers · safetensors · apache-2.0 · en · 7B
Note: file downloads below redirect to the upstream
huggingface.co resolver. Mirror nodes serve the bytes through
their nearest peer; latency depends on your route to FRA / AMS / WAW / STO.
For bulk transfers, use rsync (see below).
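For a one-off file, a plain HTTP fetch works as well; -L follows the redirect to the resolver. A minimal sketch, assuming the mirror exposes the same /resolve/<revision>/<path> URL scheme as huggingface.co (config.json is just an illustrative target):

    # fetch a single file through the mirror; -L follows the resolver redirect
    curl -LO https://hf-mirror.eu/mistralai/Mistral-7B-Instruct-v0.3/resolve/main/config.json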
Files in /
15 entries · checksums: SHA256SUMS · SHA256SUMS.asc (signed by noc@hf-mirror.eu)
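To verify a download, check the signature on the checksum list, then the checksums themselves. A minimal sketch, assuming SHA256SUMS.asc is a detached signature and the noc@hf-mirror.eu public key is already in your keyring:

    # verify the detached signature on the checksum list
    gpg --verify SHA256SUMS.asc SHA256SUMS
    # then verify the downloaded files against it
    sha256sum -c SHA256SUMS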
Bulk transfer
For full repository snapshots, prefer huggingface_hub with the mirror endpoint, or rsync from a peer:
    # rsync from Frankfurt peer (anonymous, read-only)
    rsync -aH --info=progress2 \
      rsync://fra1.hf-mirror.eu/models/mistralai/Mistral-7B-Instruct-v0.3/ \
      ./Mistral-7B-Instruct-v0.3/

    # or HTTP via huggingface_hub
    HF_ENDPOINT=https://hf-mirror.eu \
      huggingface-cli download mistralai/Mistral-7B-Instruct-v0.3 \
      --local-dir ./Mistral-7B-Instruct-v0.3 --local-dir-use-symlinks False
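If only part of the snapshot is needed, huggingface-cli download also accepts glob filters. A sketch, assuming a recent huggingface_hub with --include support; the pattern below matches the three sharded model-*.safetensors files while skipping the consolidated checkpoint:

    # weights, configs and tokenizer only
    HF_ENDPOINT=https://hf-mirror.eu \
      huggingface-cli download mistralai/Mistral-7B-Instruct-v0.3 \
      --include "model-*.safetensors" "*.json" "tokenizer.model.v3" \
      --local-dir ./Mistral-7B-Instruct-v0.3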
Model card (excerpt)
Mistral-7B-Instruct-v0.3 is an instruct fine-tuned version of the Mistral-7B-v0.3 base model. Compared with v0.2 it adds:
- Extended vocabulary of 32,768 tokens.
- Support for the v3 tokenizer (tokenizer.model.v3).
- Function-calling format (see upstream README for the schema).
Released under Apache-2.0. The full model card, evaluation numbers, and inference snippets are mirrored on the upstream page.
Specifications
- Architecture: MistralForCausalLM
- Parameters: 7.25 B
- Hidden size: 4096
- Layers: 32
- Heads (attn / kv): 32 / 8
- Vocab size: 32,768
- Context length: 32,768 tokens
- Format: safetensors (sharded ×3) + consolidated
- Quantizations available: see community/Mistral-7B-Instruct-v0.3-GGUF