- meta/meta-llama-3-70b-instruct: A 70 billion parameter language model fine-tuned for chat completions.
- meta/meta-llama-3-8b: An 8 billion parameter base version of Llama 3.
- meta/meta-llama-3-8b-instruct: An 8 billion parameter language model fine-tuned for chat completions.
- mistralai/mixtral-8x7b-instruct-v0.1: A generative sparse mixture-of-experts model fine-tuned for instruction following.
- meta/llama-2-7b-chat: A 7 billion parameter language model fine-tuned for chat completions.
- meta/llama-2-70b-chat: A 70 billion parameter language model fine-tuned for chat completions.
- meta/llama-2-13b-chat: A 13 billion parameter language model fine-tuned for chat completions.
- mistralai/mistral-7b-instruct-v0.2: An improved instruction-tuned version of Mistral 7B.
- mistralai/mistral-7b-v0.1: A 7 billion parameter base language model from Mistral AI.
- mistralai/mistral-7b-instruct-v0.1: An instruction-tuned 7 billion parameter language model.
- replicate/dolly-v2-12b: An open-source instruction-tuned large language model by Databricks.
- meta/meta-llama-3-70b: A 70 billion parameter base version of Llama 3.
- 01-ai/yi-34b-chat: A 34 billion parameter chat model trained from scratch by 01.AI.
- replicate/vicuna-13b: A 13 billion parameter LLaMA model fine-tuned on user-shared ChatGPT conversations.
- 01-ai/yi-6b: A 6 billion parameter language model trained from scratch by 01.AI.
- replicate/flan-t5-xl: A language model by Google for tasks such as classification and summarization.
- stability-ai/stablelm-tuned-alpha-7b: A 7 billion parameter language model by Stability AI.
- replicate/llama-7b: A Transformers implementation of the LLaMA language model.
- google-deepmind/gemma-2b-it: A 2 billion parameter instruction-tuned version of Google's Gemma model.
- google-deepmind/gemma-7b-it: A 7 billion parameter instruction-tuned version of Google's Gemma model.
- nateraw/nous-hermes-2-solar-10.7b: The Nous Hermes 2 fine-tune of the SOLAR 10.7B model.
- replicate/oasst-sft-1-pythia-12b: An open-source instruction-tuned large language model by Open-Assistant.
- kcaverly/nous-hermes-2-yi-34b-gguf: A state-of-the-art Yi fine-tune trained on GPT-4-generated synthetic data.
- replicate/gpt-j-6b: A 6 billion parameter language model by EleutherAI.
- nateraw/nous-hermes-llama2-awq: An AWQ-quantized Nous Hermes Llama 2 model by TheBloke, served with vLLM.
- google-deepmind/gemma-7b: A 7 billion parameter base version of Google's Gemma model.
- 01-ai/yi-6b-chat: A 6 billion parameter chat model trained from scratch by 01.AI.
- lucataco/qwen1.5-72b: The 72 billion parameter Qwen1.5 model, a beta version of Qwen2.
- lucataco/phi-2: A 2.7 billion parameter language model by Microsoft.
- replit/replit-code-v1-3b: A code generation model by Replit.
- google-deepmind/gemma-2b: A 2 billion parameter base version of Google's Gemma model.
- lucataco/qwen1.5-14b: The 14 billion parameter Qwen1.5 model, a beta version of Qwen2.
- adirik/mamba-2.8b: A 2.8 billion parameter state space language model.
- lucataco/phixtral-2x2_8: A mixture-of-experts model built from two Microsoft phi-2 models.
- lucataco/qwen1.5-7b: The 7 billion parameter Qwen1.5 model, a beta version of Qwen2.
- adirik/mamba-130m: A 130 million parameter state space language model.
- lucataco/olmo-7b: A 7 billion parameter model from OLMo, an open language model series built for language model research.
- adirik/mamba-1.4b: A 1.4 billion parameter state space language model.
- adirik/mamba-2.8b-slimpj: A 2.8 billion parameter state space language model trained on the SlimPajama dataset.
- adirik/mamba-370m: A 370 million parameter state space language model.
- adirik/mamba-790m: A 790 million parameter state space language model.
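Each identifier above takes the `owner/name` form used by the Replicate API. A minimal sketch of invoking one of these models over the HTTP API, assuming the documented `POST /v1/models/{owner}/{name}/predictions` endpoint and a `REPLICATE_API_TOKEN` environment variable (the `build_request` helper and the choice of model are illustrative, not part of the catalog):

```python
import json
import os
import urllib.request

API_URL = "https://api.replicate.com/v1/models/{owner}/{name}/predictions"


def build_request(model_id: str, prompt: str):
    """Split an 'owner/name' identifier and build the endpoint URL and JSON payload."""
    owner, name = model_id.split("/", 1)
    url = API_URL.format(owner=owner, name=name)
    payload = {"input": {"prompt": prompt}}
    return url, payload


def run(model_id: str, prompt: str, token: str) -> dict:
    """Create a prediction for the given model and return the API response."""
    url, payload = build_request(model_id, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Only hit the network when a token is actually configured.
token = os.environ.get("REPLICATE_API_TOKEN")
if token:
    print(run("meta/meta-llama-3-8b-instruct", "Say hello.", token))
```

Official models accept different `input` keys (for example, chat models may also take `system_prompt` or `max_tokens`), so check each model's schema before relying on `prompt` alone.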