Mistral-7B-Instruct-v0.3
Instruction-tuned 7.25 B-parameter transformer from Mistral AI. v0.3 extends the vocabulary to 32,768 tokens, adds function-calling support, and ships the v3 tokenizer. Mirrored from upstream on first request and refreshed every 12 h.
text-generation conversational transformers safetensors apache-2.0 en 7B
Mirror path: /mistralai/Mistral-7B-Instruct-v0.3/
Last sync: 2026-05-07 04:32 UTC
Cached size: 14.49 GB
Cache tier: hot · all peers
Note: the file downloads below redirect to the upstream huggingface.co resolver. Mirror nodes serve the bytes from your nearest peer; latency depends on your route to FRA / AMS / WAW / STO. For bulk transfers, use rsync (see below).

Files in /

name size last modified action
../
config.json 734 B 2024-05-22 14:18 download
generation_config.json 132 B 2024-05-22 14:18 download
special_tokens_map.json 414 B 2024-05-22 14:18 download
tokenizer.json 1.96 MB 2024-05-22 14:18 download
tokenizer.model.v3 587 KB 2024-05-22 14:18 download
tokenizer_config.json 141 KB 2024-05-22 14:18 download
model-00001-of-00003.safetensors 4.94 GB 2024-05-22 14:21 download
model-00002-of-00003.safetensors 5.00 GB 2024-05-22 14:23 download
model-00003-of-00003.safetensors 4.55 GB 2024-05-22 14:25 download
model.safetensors.index.json 23.9 KB 2024-05-22 14:21 download
consolidated.safetensors 14.5 GB 2024-05-22 14:30 download
params.json 214 B 2024-05-22 14:30 download
README.md 9.41 KB 2024-09-01 11:02 view
.gitattributes 1.52 KB 2024-05-22 14:18 view

15 entries · checksums: SHA256SUMS · SHA256SUMS.asc (signed by noc@hf-mirror.eu)
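The checksum list can be verified with `sha256sum -c SHA256SUMS` plus `gpg --verify SHA256SUMS.asc SHA256SUMS`. A Python equivalent of the hash check (a sketch; it assumes SHA256SUMS uses the usual `<hex>  <name>` line format with names relative to the repo root, and it does not replace the GPG signature check):

```python
import hashlib
from pathlib import Path

def verify_checksums(sums_path, root="."):
    """Check each '<hex>  <name>' line of a SHA256SUMS file against the
    files on disk; returns a list of (name, matched) pairs."""
    results = []
    for line in Path(sums_path).read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(None, 1)
        name = name.strip()
        h = hashlib.sha256()
        with open(Path(root) / name, "rb") as f:
            # Hash in 1 MiB chunks so multi-GB shards don't load into RAM.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        results.append((name, h.hexdigest() == expected))
    return results
```

Matching hashes confirm integrity only; authenticity still requires verifying the detached signature against the noc@hf-mirror.eu key.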

Bulk transfer

For full repository snapshots, prefer huggingface_hub with the mirror endpoint, or rsync from a peer:

# rsync from Frankfurt peer (anonymous, read-only)
rsync -aH --info=progress2 \
    rsync://fra1.hf-mirror.eu/models/mistralai/Mistral-7B-Instruct-v0.3/ \
    ./Mistral-7B-Instruct-v0.3/

# or HTTP via huggingface_hub
HF_ENDPOINT=https://hf-mirror.eu \
    huggingface-cli download mistralai/Mistral-7B-Instruct-v0.3 \
    --local-dir ./Mistral-7B-Instruct-v0.3 --local-dir-use-symlinks False

Model card (excerpt)

Mistral-7B-Instruct-v0.3 is an instruct fine-tuned version of the Mistral-7B-v0.3 base model. Compared with v0.2 it adds:

  • Extended vocabulary of 32,768 tokens.
  • Support for the v3 tokenizer (tokenizer.model.v3).
  • Function-calling format (see upstream README for the schema).

Released under Apache-2.0. The full model card, evaluation numbers, and inference snippets are available on the upstream page.
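The instruct models use Mistral's [INST] chat format; the authoritative template ships in tokenizer_config.json and should be applied via tokenizer.apply_chat_template. As an illustration only, a hand-rolled sketch of the wrapping (simplified; it ignores function-calling tokens and the whitespace details of the real v3 template):

```python
def build_prompt(messages):
    """Render user/assistant turns in the [INST] style (illustrative only;
    use tokenizer.apply_chat_template for the exact v3 template)."""
    out = "<s>"
    for m in messages:
        if m["role"] == "user":
            out += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            out += f" {m['content']}</s>"
    return out
```

For example, a two-turn history renders as `<s>[INST] Hi [/INST] Hello</s>[INST] Bye [/INST]`, with generation continuing after the final `[/INST]`.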

Specifications

Architecture: MistralForCausalLM
Parameters: 7.25 B
Hidden size: 4096
Layers: 32
Heads (attn / kv): 32 / 8
Vocab size: 32,768
Context length: 32,768
Format: safetensors (sharded ×3) + consolidated
Quantizations available: see community/Mistral-7B-Instruct-v0.3-GGUF