TweetSync

ReplicateAI

Setting Up the API Key for ReplicateAI:

  1. Create an account on ReplicateAI.
  2. Once logged in, go to your account settings.
  3. Look for the "API Key" option and generate a new key.
  4. Copy the generated API key.
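
Once the key is copied, the usual pattern is to expose it as the REPLICATE_API_TOKEN environment variable so the client picks it up automatically. Below is a minimal sketch assuming the official `replicate` Python package; the model identifier and prompt are placeholders.

```python
# Minimal sanity check for the API key.
# Assumes the official Python client: pip install replicate
import os
import replicate

# The client reads REPLICATE_API_TOKEN from the environment by default.
# Setting it in code is shown only for illustration -- prefer exporting it
# in your shell or a .env file instead of hard-coding it.
os.environ.setdefault("REPLICATE_API_TOKEN", "r8_your_key_here")  # placeholder

# Run a small instruct model once to confirm the key works.
output = replicate.run(
    "meta/meta-llama-3-8b-instruct",
    input={"prompt": "Reply with a single short greeting."},
)
print("".join(output))
```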

Recommended Models

  1. meta/meta-llama-3-70b-instruct

    • A 70 billion parameter language model fine-tuned for chat completions.
  2. meta/meta-llama-3-8b

    • An 8 billion parameter base version of Llama 3.
  3. meta/meta-llama-3-8b-instruct

    • An 8 billion parameter language model fine-tuned for chat completions.
  4. mistralai/mixtral-8x7b-instruct-v0.1

    • A generative sparse Mixture-of-Experts model instruction-tuned for chat assistance.
  5. meta/llama-2-7b-chat

    • A 7 billion parameter language model fine-tuned for chat completions.
  6. meta/llama-2-70b-chat

    • A 70 billion parameter language model fine-tuned for chat completions.
  7. meta/llama-2-13b-chat

    • A 13 billion parameter language model fine-tuned for chat completions.
  8. mistralai/mistral-7b-instruct-v0.2

    • An improved instruct fine-tuned version of Mistral-7B.
  9. mistralai/mistral-7b-v0.1

    • A 7 billion parameter base language model from Mistral AI.
  10. mistralai/mistral-7b-instruct-v0.1

    • An instruction-tuned 7 billion parameter language model.
  11. replicate/dolly-v2-12b

    • A 12 billion parameter open-source instruction-tuned language model by Databricks.
  12. meta/meta-llama-3-70b

    • A 70 billion parameter base version of Llama 3.
  13. 01-ai/yi-34b-chat

    • A 34 billion parameter chat model trained from scratch by 01.AI.
  14. replicate/vicuna-13b

    • A 13 billion parameter LLaMA model fine-tuned on user-shared ChatGPT conversations.
  15. 01-ai/yi-6b

    • A 6 billion parameter base model trained from scratch by 01.AI.
  16. replicate/flan-t5-xl

    • A language model by Google for tasks like classification and summarization.
  17. stability-ai/stablelm-tuned-alpha-7b

    • An instruction-tuned 7 billion parameter language model by Stability AI.
  18. replicate/llama-7b

    • A Transformers implementation of the LLaMA language model.
  19. google-deepmind/gemma-2b-it

    • A 2 billion parameter instruct version of Google’s Gemma model.
  20. google-deepmind/gemma-7b-it

    • A 7 billion parameter instruct version of Google’s Gemma model.
  21. nateraw/nous-hermes-2-solar-10.7b

    • A Nous Hermes 2 fine-tune of the SOLAR 10.7B base model.
  22. replicate/oasst-sft-1-pythia-12b

    • An open-source instruction-tuned large language model by Open-Assistant.
  23. kcaverly/nous-hermes-2-yi-34b-gguf

    • A Yi-34B fine-tune trained on GPT-4-generated synthetic data, packaged in GGUF format.
  24. replicate/gpt-j-6b

    • A 6 billion parameter language model by EleutherAI.
  25. nateraw/nous-hermes-llama2-awq

    • An AWQ-quantized Nous Hermes Llama 2 model by TheBloke, served with vLLM.
  26. google-deepmind/gemma-7b

    • A 7 billion parameter base version of Google’s Gemma model.
  27. 01-ai/yi-6b-chat

    • A 6 billion parameter chat model trained from scratch by 01.AI.
  28. lucataco/qwen1.5-72b

    • A 72 billion parameter release of Qwen1.5, the transformer-based beta of Qwen2.
  29. lucataco/phi-2

    • A 2.7 billion parameter small language model by Microsoft.
  30. replit/replit-code-v1-3b

    • A code generation model by Replit.
  31. google-deepmind/gemma-2b

    • A 2 billion parameter base version of Google’s Gemma model.
  32. lucataco/qwen1.5-14b

    • A 14 billion parameter release of Qwen1.5, the transformer-based beta of Qwen2.
  33. adirik/mamba-2.8b

    • A 2.8 billion parameter state space language model.
  34. lucataco/phixtral-2x2_8

    • A Mixture-of-Experts model built from two Microsoft phi-2 models.
  35. lucataco/qwen1.5-7b

    • A 7 billion parameter release of Qwen1.5, the transformer-based beta of Qwen2.
  36. adirik/mamba-130m

    • A 130 million parameter state space language model.
  37. lucataco/olmo-7b

    • A 7 billion parameter model from AI2's OLMo (Open Language Model) series, built to support the science of language models.
  38. adirik/mamba-1.4b

    • A 1.4 billion parameter state space language model.
  39. adirik/mamba-2.8b-slimpj

    • A 2.8 billion parameter state space language model trained on the SlimPajama dataset.
  40. adirik/mamba-370m

    • A 370 million parameter state space language model.
  41. adirik/mamba-790m

    • A 790 million parameter state space language model.
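
Any identifier from the list above can be passed straight to the API. The sketch below assumes the `replicate` Python client and a simple prompt-in/text-out interface; the exact input fields (for example a system prompt or token limit) vary per model, so check each model's schema on ReplicateAI.

```python
import replicate

def complete(model_id: str, prompt: str) -> str:
    """Run one of the recommended models and return its text output.

    `model_id` is any identifier from the list above. Only the `prompt`
    field is passed here because the other input fields differ per model.
    """
    output = replicate.run(model_id, input={"prompt": prompt})
    return "".join(output)

if __name__ == "__main__":
    print(complete(
        "mistralai/mixtral-8x7b-instruct-v0.1",
        "Summarize the benefits of scheduling tweets in one sentence.",
    ))
```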