Reviews of the best open source uncensored LLMs for 8GB VRAM in 2024, compiled from Reddit discussions and user experiences.
Last updated: September 4, 2024 at 07:45 PM
Best open source uncensored LLM for 8GB VRAM in 2024
LLM Models for 8GB VRAM:
- Dolphin 2.6 Mistral DPO:
  - Pros: "Every model I've tried to use as a general use case model hasn't come close."
  - Cons: Not suitable for all tasks.
- Mistral-7B:
  - Pros: "Worked well and gave good responses."
  - Cons: May struggle with certain topics.
- Yi-34B Capybara:
  - Pros: Good for general tasks.
  - Cons: Mixed results from users.
- Mixtral:
  - Pros: Fast and consistently good results.
  - Cons: May give poor results for some users.
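As a rough sanity check for which of the models above actually fit in 8GB of VRAM, a common rule of thumb is that a model's weights take about (parameters × bits per weight / 8) bytes, plus some overhead for the KV cache and activations. A minimal sketch of that arithmetic (the fixed 1.5 GB overhead figure is an assumption, not a measured value):

```python
def estimate_vram_gb(n_params_b: float, bits: int, overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate for a quantized model: weight memory at the
    given bit width, plus an assumed fixed overhead for KV cache and
    activations."""
    weight_gb = n_params_b * bits / 8  # 1B params at 8 bits/weight ~= 1 GB
    return round(weight_gb + overhead_gb, 1)

# A 7B model at 4-bit quantization fits comfortably in 8 GB:
print(estimate_vram_gb(7, 4))   # ~5.0 GB
# A 34B model at 4 bits does not:
print(estimate_vram_gb(34, 4))  # ~18.5 GB
```

By this estimate, 7B models at 4-bit quantization fit in 8 GB with room for context, while 34B models generally need more VRAM or CPU offloading, which matches the mixed reports above.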
Additional LLM Models:
- Nous-Hermes-2-Mixtral: Well balanced with excellent performance and results.
- Bartowski/dolphin-2.6-mistral-7b-dpo: Achieved good results for users.
- Nous-Hermes-2-SOLAR-10.7B: A balanced niche model that sits between 7B Mistral models and larger MoE models.
User Feedback:
- Users have had good experiences running Mistral-7B and NousResearch/Nous-Hermes-2-SOLAR-10.7B.
- Nous-Capybara-34b performed well under pressure testing, handling long contexts nearly perfectly.
- NeuralDaredevil Llama 3 8b Abliterated is praised for its high MMLU score.
Recommendations:
- Consider Mixtral as a set-and-forget model that performs well.
- Nous-Capybara-34b can be used for JSON summarization in production.
- Functionary is good for function calling tasks.
- c4ai-command-r-v01 is recommended for knowledge graph creation.
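Using a model for JSON summarization in production usually means wrapping the model call in a validation-and-retry loop, since even strong models occasionally emit malformed JSON. A minimal stdlib-only sketch, where `call_model` is a hard-coded stub standing in for a real local-model call, and the prompt wording, key names, and retry count are all assumptions:

```python
import json

def call_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g. Nous-Capybara-34b behind a
    local server); hard-coded here so the sketch is self-contained."""
    return '{"title": "Quarterly report", "key_points": ["revenue up", "costs flat"]}'

def summarize_to_json(text: str, retries: int = 2) -> dict:
    """Ask the model for a JSON summary, and reject/retry any response
    that is not valid JSON with the expected keys."""
    prompt = (
        "Summarize the following text as JSON with keys "
        f'"title" and "key_points" (a list of strings):\n\n{text}'
    )
    for _ in range(retries + 1):
        raw = call_model(prompt)
        try:
            summary = json.loads(raw)  # reject non-JSON output
        except json.JSONDecodeError:
            continue  # malformed JSON: ask again
        if {"title", "key_points"} <= summary.keys():
            return summary
    raise ValueError("model never returned valid JSON")

print(summarize_to_json("Q3 revenue rose while costs stayed flat."))
```

The validation step is what makes this production-safe: downstream code only ever sees parsed dictionaries with the expected keys, never raw model text.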
Other Insights:
- AnythingLLM is a stable and user-friendly tool with convenient features, though its UI and documentation could be improved.
- Streaming model output lets users verify responses as they are generated rather than after the fact.
- Langroid offers tools like ToolMessage for structuring data and APIs for complex tasks.
- Some models like CohereForAI's Command R Plus are recommended for specific tasks.