LLM Rankings (Full Info)

Auto Router
openrouter @ - | Auto Router
Slug: openrouter/auto | HF:
Context: 2000000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output.

To see which model was used, visit [Activity](/activity), or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model.

The meta-model is powered by [Not Diamond](https://docs.notdiamond.ai/docs/how-not-diamond-works). Learn more in our [docs](/docs/model-routing).
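
As a minimal sketch of the routing behaviour described above, the snippet below sends a request to `openrouter/auto` through OpenRouter's OpenAI-compatible endpoint and reads back the `model` attribute to see which model actually served it. The base URL, environment variable name, and prompt are illustrative assumptions.

```python
import os
from openai import OpenAI

# Assumed: OpenRouter exposes an OpenAI-compatible API at this base URL and
# the API key is available in the OPENROUTER_API_KEY environment variable.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="openrouter/auto",  # the meta-model picks the underlying model
    messages=[{"role": "user", "content": "Summarize the plot of Dune in two sentences."}],
)

# The `model` attribute of the response names the routed model; the request
# is billed at that model's rate.
print(response.model)
print(response.choices[0].message.content)
```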

Requests will be routed to the following models:
- [openai/gpt-4o-2024-08-06](/openai/gpt-4o-2024-08-06)
- [openai/gpt-4o-2024-05-13](/openai...
Google: Gemini 1.5 Pro
google @ Google Vertex | Gemini 1.5 Pro
Slug: google/gemini-pro-1.5 | HF:
Context: 2000000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Google's latest multimodal model, which supports image and video[0] in text or chat prompts.

Optimized for language tasks including:

- Code generation
- Text generation
- Text editing
- Problem solving
- Recommendations
- Information extraction
- Data extraction or generation
- AI agents

Usage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).

* [0]: Video input is not available through OpenRouter at this time....
Google: Gemini 2.0 Flash
google @ Google AI Studio | Gemini 2.0 Flash
Slug: google/gemini-2.0-flash-001 | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: text, image, file | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5). It introduces notable enhancements in multimodal understanding, coding capabilities, complex instruction following, and function calling. These advancements come together to deliver more seamless and robust agentic experiences....
Google: Gemini 2.0 Flash Experimental (free)
google @ Google Vertex | Gemini 2.0 Flash Experimental (free)
Slug: google/gemini-2.0-flash-exp | HF:
Context: 1048576 tokens | Free: Yes | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5). It introduces notable enhancements in multimodal understanding, coding capabilities, complex instruction following, and function calling. These advancements come together to deliver more seamless and robust agentic experiences....
Google: Gemini 2.0 Flash Lite
google @ Google Vertex | Gemini 2.0 Flash Lite
Slug: google/gemini-2.0-flash-lite-001 | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: text, image, file | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini 2.0 Flash Lite offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5), all at extremely economical token prices....
Google: Gemini 2.5 Flash
google @ Google Vertex | Gemini 2.5 Flash
Slug: google/gemini-2.5-flash | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: file, image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop
Reasoning: ✔ | Moderation: ✘ | TOS
Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling.

Additionally, Gemini 2.5 Flash is configurable through the "max tokens for reasoning" parameter, as described in the documentation (https://openrouter.ai/docs/use-cases/reasoning-tokens#max-tokens-for-reasoning)....
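
A minimal sketch of setting that reasoning budget, assuming OpenRouter's chat completions endpoint accepts a `reasoning` object with a `max_tokens` field as described in the linked documentation; the endpoint URL, header, and prompt are illustrative:

```python
import os
import requests

# Assumed request shape: a `reasoning` object carries the
# "max tokens for reasoning" budget alongside the usual chat fields.
payload = {
    "model": "google/gemini-2.5-flash",
    "messages": [{"role": "user", "content": "Show that the sum of two odd integers is even."}],
    "reasoning": {"max_tokens": 2048},  # cap on thinking tokens (assumption)
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```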
Google: Gemini 2.5 Flash Lite Preview 06-17
google @ Google Vertex | Gemini 2.5 Flash Lite Preview 06-17
Slug: google/gemini-2.5-flash-lite-preview-06-17 | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: file, image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop
Reasoning: ✔ | Moderation: ✘ | TOS
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence. ...
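
Since thinking is off by default for Flash-Lite, the sketch below opts in through the same reasoning parameter, here passed via the OpenAI SDK's `extra_body`; the `enabled` flag is an assumption taken from the linked reasoning-tokens documentation.

```python
import os
from openai import OpenAI

# Assumed: OpenRouter's OpenAI-compatible endpoint; non-standard fields such
# as `reasoning` are forwarded through `extra_body`. Thinking is disabled by
# default for Flash-Lite, so it is switched on explicitly here (assumption).
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="google/gemini-2.5-flash-lite-preview-06-17",
    messages=[{"role": "user", "content": "Outline a test plan for a CSV parser."}],
    extra_body={"reasoning": {"enabled": True}},
)
print(response.choices[0].message.content)
```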
Google: Gemini 2.5 Flash Preview 04-17
google @ - | Gemini 2.5 Flash Preview 04-17
Slug: google/gemini-2.5-flash-preview | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: image, text, file | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling.

Note: This model is available in two variants: thinking and non-thinking. The output pricing varies significantly depending on whether the thinking capability is active. If you select the standard variant (without the ":thinking" suffix), the model will explicitly avoid generating thinking tokens.

To utilize the...
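
As a hedged illustration of the variant note above, the choice between the thinking and non-thinking behaviour is made purely through the model slug; the exact `:thinking` slug below follows OpenRouter's variant-suffix convention and should be verified against the model page.

```python
# Assumed slugs: base slug from this listing, ":thinking" suffix per the note above.
STANDARD = "google/gemini-2.5-flash-preview"            # explicitly avoids thinking tokens
THINKING = "google/gemini-2.5-flash-preview:thinking"   # thinking tokens billed at the thinking output rate

def pick_variant(needs_reasoning: bool) -> str:
    """Return the Gemini 2.5 Flash Preview slug for the desired variant."""
    return THINKING if needs_reasoning else STANDARD

print(pick_variant(needs_reasoning=True))
```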
Google: Gemini 2.5 Flash Preview 05-20
google @ - | Gemini 2.5 Flash Preview 05-20
Slug: google/gemini-2.5-flash-preview-05-20 | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: image, text, file | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini 2.5 Flash May 20th Checkpoint is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling.

Note: This model is available in two variants: thinking and non-thinking. The output pricing varies significantly depending on whether the thinking capability is active. If you select the standard variant (without the ":thinking" suffix), the model will explicitly avoid generating thinking toke...
Google: Gemini 2.5 Pro
google @ Google Vertex (Global) | Gemini 2.5 Pro
Slug: google/gemini-2.5-pro | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: file, image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop
Reasoning: ✔ | Moderation: ✘ | TOS
Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities....
Google: Gemini 2.5 Pro Experimental
google @ Google Vertex | Gemini 2.5 Pro Experimental
Slug: google/gemini-2.5-pro-exp-03-25 | HF:
Context: 1048576 tokens | Free: Yes | Quant: n/a
Input: text, image, file | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
This model has been deprecated by Google in favor of the [paid Preview model](/google/gemini-2.5-pro-preview).
 
Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities....
Google: Gemini 2.5 Pro Preview 05-06
google @ Google Vertex | Gemini 2.5 Pro Preview 05-06
Slug: google/gemini-2.5-pro-preview-05-06 | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: text, image, file | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop seed
Reasoning: ✔ | Moderation: ✘ | TOS
Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities....
Google: Gemini 2.5 Pro Preview 06-05
google @ Google Vertex | Gemini 2.5 Pro Preview 06-05
Slug: google/gemini-2.5-pro-preview | HF:
Context: 1048576 tokens | Free: No | Quant: n/a
Input: file, image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop
Reasoning: ✔ | Moderation: ✘ | TOS
Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities.
...
Meta: Llama 4 Maverick
meta-llama @ DeepInfra | Llama 4 Maverick
Slug: meta-llama/llama-4-maverick | HF: meta-llama/Llama-4-Maverick-17B-128E-Instruct
Context: 1048576 tokens | Free: No | Quant: fp8
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It supports multilingual text and image input, and produces multilingual text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction.

Maverick features early fusion for native multimodality and a 1 million token context window. ...
Meta: Llama 4 Scout
meta-llama @ Lambda | Llama 4 Scout
Slug: meta-llama/llama-4-scout | HF: meta-llama/Llama-4-Scout-17B-16E-Instruct
Context: 1048576 tokens | Free: No | Quant: fp8
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format min_p repetition_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109B. It supports native multimodal input (text and image) and multilingual output (text and code) across 12 supported languages. Designed for assistant-style interaction and visual reasoning, Scout uses 16 experts per forward pass and features a context length of 10 million tokens, with a training corpus of ~40 trillion tokens.

Built for high efficiency and local or commercial deployment, Llama 4 Scout incorporates early fusion for seamless modal...
OpenAI: GPT-4.1
openai @ OpenAI | GPT-4.1
Slug: openai/gpt-4.1 | HF:
Context: 1047576 tokens | Free: No | Quant: n/a
Input: image, text, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4.1 is a flagship large language model optimized for advanced instruction following, real-world software engineering, and long-context reasoning. It supports a 1 million token context window and outperforms GPT-4o and GPT-4.5 across coding (54.6% SWE-bench Verified), instruction compliance (87.4% IFEval), and multimodal understanding benchmarks. It is tuned for precise code diffs, agent reliability, and high recall in large document contexts, making it ideal for agents, IDE tooling, and enterprise knowledge retrieval....
OpenAI: GPT-4.1 Mini
openai @ OpenAI | GPT-4.1 Mini
Slug: openai/gpt-4.1-mini | HF:
Context: 1047576 tokens | Free: No | Quant: n/a
Input: image, text, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4.1 Mini is a mid-sized model delivering performance competitive with GPT-4o at substantially lower latency and cost. It retains a 1 million token context window and scores 45.1% on hard instruction evals, 35.8% on MultiChallenge, and 84.1% on IFEval. Mini also shows strong coding ability (e.g., 31.6% on Aider’s polyglot diff benchmark) and vision understanding, making it suitable for interactive applications with tight performance constraints....
OpenAI: GPT-4.1 Nano
openai @ OpenAI | GPT-4.1 Nano
Slug: openai/gpt-4.1-nano | HF:
Context: 1047576 tokens | Free: No | Quant: n/a
Input: image, text, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT-4.1 series. It delivers exceptional performance at a small size with its 1 million token context window, and scores 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider polyglot coding – even higher than GPT‑4o mini. It’s ideal for tasks like classification or autocompletion....
MiniMax: MiniMax-01
minimax @ Minimax | MiniMax-01
Slug: minimax/minimax-01 | HF: MiniMaxAI/MiniMax-Text-01
Context: 1000192 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p
Reasoning: ✘ | Moderation: ✘ | TOS
MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion parameters activated per inference, and can handle a context of up to 4 million tokens.

The text model adopts a hybrid architecture that combines Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE). The image model adopts the “ViT-MLP-LLM” framework and is trained on top of the text model.

To read more about the release, see: https://www.minimaxi.com/en/news/minimax-01-series-2...
Cypher Alpha
openrouter @ - | Cypher Alpha
Slug: openrouter/cypher-alpha | HF:
Context: 1000000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is a cloaked model provided to the community to gather feedback. It's an all-purpose model supporting real-world, long-context tasks including code generation.

Note: All prompts and completions for this model are logged by the provider and may be used to improve the model and other products and services. You remain responsible for any required end user notices and consents and for ensuring that no personal, confidential, or otherwise sensitive information, including data from individuals under the age of 18, is submitted....
Google: Gemini 1.5 Flash
google @ Google Vertex | Gemini 1.5 Flash
Slug: google/gemini-flash-1.5 | HF:
Context: 1000000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini 1.5 Flash is a foundation model that performs well at a variety of multimodal tasks such as visual understanding, classification, summarization, and creating content from image, audio and video. It's adept at processing visual and text inputs such as photographs, documents, infographics, and screenshots.

Gemini 1.5 Flash is designed for high-volume, high-frequency tasks where cost and latency matter. On most common tasks, Flash achieves comparable quality to other Gemini Pro models at a significantly reduced cost. Flash is well-suited for applications like chat assistants and on-demand...
Google: Gemini 1.5 Flash 8B
google @ Google AI Studio | Gemini 1.5 Flash 8B
Slug: google/gemini-flash-1.5-8b | HF:
Context: 1000000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini Flash 1.5 8B is optimized for speed and efficiency, offering enhanced performance in small prompt tasks like chat, transcription, and translation. With reduced latency, it is highly effective for real-time and large-scale operations. This model focuses on cost-effective solutions while maintaining high-quality results.

[Click here to learn more about this model](https://developers.googleblog.com/en/gemini-15-flash-8b-is-now-generally-available-for-use/).

Usage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms)....
Google: Gemini 1.5 Flash Experimental
google @ - | Gemini 1.5 Flash Experimental
Slug: google/gemini-flash-1.5-exp | HF:
Context: 1000000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini 1.5 Flash Experimental is an experimental version of the [Gemini 1.5 Flash](/models/google/gemini-flash-1.5) model.

Usage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).

#multimodal

Note: This model is experimental and not suited for production use-cases. It may be removed or redirected to another model in the future....
Google: Gemini 1.5 Pro Experimental
google @ - | Gemini 1.5 Pro Experimental
Slug: google/gemini-pro-1.5-exp | HF:
Context: 1000000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini 1.5 Pro Experimental is a bleeding-edge version of the [Gemini 1.5 Pro](/models/google/gemini-pro-1.5) model. Because it's currently experimental, it will be **heavily rate-limited** by Google.

Usage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).

#multimodal...
MiniMax: MiniMax M1
minimax @ Minimax | MiniMax M1
Slug: minimax/minimax-m1 | HF:
Context: 1000000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning
Reasoning: ✔ | Moderation: ✘ | TOS
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it to process long sequences—up to 1 million tokens—while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9B active per token, this variant is optimized for complex, multi-step reasoning tasks.

Trained via a custom reinforcement learning pipeline (CISPO), M1 excels in long-context understanding, software engineering, a...
Optimus Alpha
openrouter @ - | Optimus Alpha
Slug: openrouter/optimus-alpha | HF:
Context: 1000000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is a cloaked model provided to the community to gather feedback. It's geared toward real world use cases, including programming.

**Note:** All prompts and completions for this model are logged by the provider and may be used to improve the model....
Quasar Alpha
openrouter @ - | Quasar Alpha
Slug: openrouter/quasar-alpha | HF:
Context: 1000000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is a cloaked model provided to the community to gather feedback. It’s a powerful, all-purpose model supporting long-context tasks, including code generation.

**Note:** All prompts and completions for this model are logged by the provider and may be used to improve the model....
Qwen: Qwen-Turbo
qwen @ Alibaba | Qwen-Turbo
Slug: qwen/qwen-turbo | HF:
Context: 1000000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice seed response_format presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen-Turbo, based on Qwen2.5, is a 1M-context model that offers fast speeds at low cost, making it suitable for simple tasks....
Amazon: Nova Lite 1.0
amazon @ Amazon Bedrock | Nova Lite 1.0
Slug: amazon/nova-lite-v1 | HF:
Context: 300000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon, focused on fast processing of image, video, and text inputs to generate text output. Amazon Nova Lite can handle real-time customer interactions, document analysis, and visual question-answering tasks with high accuracy.

With an input context of 300K tokens, it can analyze multiple images or up to 30 minutes of video in a single input....
Amazon: Nova Pro 1.0
amazon @ Amazon Bedrock | Nova Pro 1.0
Slug: amazon/nova-pro-v1 | HF:
Context: 300000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Amazon Nova Pro 1.0 is a capable multimodal model from Amazon focused on providing a combination of accuracy, speed, and cost for a wide range of tasks. As of December 2024, it achieves state-of-the-art performance on key benchmarks including visual question answering (TextVQA) and video understanding (VATEX).

Amazon Nova Pro demonstrates strong capabilities in processing both visual and textual information and in analyzing financial documents.

**NOTE**: Video input is not supported at this time....
Mistral: Codestral 2501
mistralai @ Mistral | Codestral 2501
Slug: mistralai/codestral-2501 | HF:
Context: 262144 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
[Mistral](/mistralai)'s cutting-edge language model for coding. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation.

Learn more on their blog post: https://mistral.ai/news/codestral-2501/...
AI21: Jamba 1.5 Large
ai21 @ - | Jamba 1.5 Large
Slug: ai21/jamba-1-5-large | HF:
Context: 256000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Jamba 1.5 Large is part of AI21's new family of open models, offering superior speed, efficiency, and quality.

It features a 256K effective context window, the longest among open models, enabling improved performance on tasks like document summarization and analysis.

Built on a novel SSM-Transformer architecture, it outperforms larger models like Llama 3.1 70B on benchmarks while maintaining resource efficiency.

Read their [announcement](https://www.ai21.com/blog/announcing-jamba-model-family) to learn more....
AI21: Jamba 1.5 Mini
ai21 @ - | Jamba 1.5 Mini
Slug: ai21/jamba-1-5-mini | HF:
Context: 256000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Jamba 1.5 Mini is the world's first production-grade Mamba-based model, combining SSM and Transformer architectures for a 256K context window and high efficiency.

It works with 9 languages and can handle various writing and analysis tasks as well as or better than similar small models.

This model uses less memory and processes longer texts faster than previous designs.

Read their [announcement](https://www.ai21.com/blog/announcing-jamba-model-family) to learn more....
AI21: Jamba 1.6 Large
ai21 @ AI21 | Jamba 1.6 Large
Slug: ai21/jamba-1.6-large | HF: ai21labs/AI21-Jamba-Large-1.6
Context: 256000 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop
Reasoning: ✘ | Moderation: ✘ | TOS
AI21 Jamba Large 1.6 is a high-performance hybrid foundation model combining State Space Models (Mamba) with Transformer attention mechanisms. Developed by AI21, it excels in extremely long-context handling (256K tokens), demonstrates superior inference efficiency (up to 2.5x faster than comparable models), and supports structured JSON output and tool-use capabilities. It has 94 billion active parameters (398 billion total), optimized quantization support (ExpertsInt8), and multilingual proficiency in languages such as English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic, and H...
AI21: Jamba Instruct
ai21 @ - | Jamba Instruct
Slug: ai21/jamba-instruct | HF:
Context: 256000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Jamba-Instruct model, introduced by AI21 Labs, is an instruction-tuned variant of their hybrid SSM-Transformer Jamba model, specifically optimized for enterprise applications.

- 256K Context Window: It can process extensive information, equivalent to a 400-page novel, which is beneficial for tasks involving large documents such as financial reports or legal documents
- Safety and Accuracy: Jamba-Instruct is designed with enhanced safety features to ensure secure deployment in enterprise environments, reducing the risk and cost of implementation

Read their [announcement](https://www.ai21....
AI21: Jamba Mini 1.6
ai21 @ AI21 | Jamba Mini 1.6
Slug: ai21/jamba-1.6-mini | HF: ai21labs/AI21-Jamba-Mini-1.6
Context: 256000 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop
Reasoning: ✘ | Moderation: ✘ | TOS
AI21 Jamba Mini 1.6 is a hybrid foundation model combining State Space Models (Mamba) with Transformer attention mechanisms. With 12 billion active parameters (52 billion total), this model excels in extremely long-context tasks (up to 256K tokens) and achieves superior inference efficiency, outperforming comparable open models on tasks such as retrieval-augmented generation (RAG) and grounded question answering. Jamba Mini 1.6 supports multilingual tasks across English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic, and Hebrew, along with structured JSON output and tool-use capa...
Cohere: Command A
cohere @ Cohere | Command A
Slug: cohere/command-a | HF: CohereForAI/c4ai-command-a-03-2025
Context: 256000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
Command A is an open-weights 111B parameter model with a 256k context window focused on delivering great performance across agentic, multilingual, and coding use cases.
Compared to other leading proprietary and open-weights models, Command A delivers maximum performance with minimum hardware costs, excelling on business-critical agentic and multilingual tasks....
Mistral: Codestral Mamba
mistralai @ - | Codestral Mamba
Slug: mistralai/codestral-mamba | HF: mistralai/mamba-codestral-7B-v0.1
Context: 256000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A 7.3B parameter Mamba-based model designed for code and reasoning tasks.

- Linear time inference, allowing for theoretically infinite sequence lengths
- 256k token context window
- Optimized for quick responses, especially beneficial for code productivity
- Performs comparably to state-of-the-art transformer models in code and reasoning tasks
- Available under the Apache 2.0 license for free use, modification, and distribution...
xAI: Grok 4
x-ai @ xAI | Grok 4
Slug: x-ai/grok-4 | HF:
Context: 256000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs seed logprobs top_logprobs response_format
Reasoning: ✔ | Moderation: ✘ | TOS
Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not exposed, reasoning cannot be disabled, and the reasoning effort cannot be specified. Pricing increases once the total token count in a given request exceeds 128k tokens. See more details in the [xAI docs](https://docs.x.ai/docs/models/grok-4-0709)...
Anthropic: Claude 3 Haiku
anthropic @ Anthropic | Claude 3 Haiku
Slug: anthropic/claude-3-haiku | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 3 Haiku is Anthropic's fastest and most compact model for
near-instant responsiveness. Quick and accurate targeted performance.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-haiku)

#multimodal...
Anthropic: Claude 3 Haiku (self-moderated)
anthropic @ Anthropic | Claude 3 Haiku (self-moderated)
Slug: anthropic/claude-3-haiku | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 3 Haiku is Anthropic's fastest and most compact model for
near-instant responsiveness. Quick and accurate targeted performance.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-haiku)

#multimodal...
Anthropic: Claude 3 Opus
anthropic @ Anthropic | Claude 3 Opus
Slug: anthropic/claude-3-opus | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)

#multimodal...
Anthropic: Claude 3 Opus (self-moderated)
anthropic @ Anthropic | Claude 3 Opus (self-moderated)
Slug: anthropic/claude-3-opus | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)

#multimodal...
Anthropic: Claude 3 Sonnet
anthropic @ Anthropic | Claude 3 Sonnet
Slug: anthropic/claude-3-sonnet | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price, dependable, balanced for scaled deployments.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)

#multimodal...
Anthropic: Claude 3 Sonnet (self-moderated)
anthropic @ Anthropic | Claude 3 Sonnet (self-moderated)
Slug: anthropic/claude-3-sonnet | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price, dependable, balanced for scaled deployments.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)

#multimodal...
Anthropic: Claude 3.5 Haiku
anthropic @ Anthropic | Claude 3.5 Haiku
Slug: anthropic/claude-3.5-haiku | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 3.5 Haiku offers enhanced capabilities in speed, coding accuracy, and tool use. Engineered to excel in real-time applications, it delivers quick response times that are essential for dynamic tasks such as chat interactions and immediate coding suggestions.

This makes it highly suitable for environments that demand both speed and precision, such as software development, customer service bots, and data management systems.

This model is currently pointing to [Claude 3.5 Haiku (2024-10-22)](/anthropic/claude-3-5-haiku-20241022)....
Anthropic: Claude 3.5 Haiku (2024-10-22)
anthropic @ Anthropic | Claude 3.5 Haiku (2024-10-22)
Slug: anthropic/claude-3.5-haiku-20241022 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 3.5 Haiku features enhancements across all skill sets including coding, tool use, and reasoning. As the fastest model in the Anthropic lineup, it offers rapid response times suitable for applications that require high interactivity and low latency, such as user-facing chatbots and on-the-fly code completions. It also excels in specialized tasks like data extraction and real-time content moderation, making it a versatile tool for a broad range of industries.

It does not support image inputs.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/3-5-mod...
Anthropic: Claude 3.5 Haiku (2024-10-22) (self-moderated)
anthropic @ Anthropic | Claude 3.5 Haiku (2024-10-22) (self-moderated)
Slug: anthropic/claude-3.5-haiku-20241022 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 3.5 Haiku features enhancements across all skill sets including coding, tool use, and reasoning. As the fastest model in the Anthropic lineup, it offers rapid response times suitable for applications that require high interactivity and low latency, such as user-facing chatbots and on-the-fly code completions. It also excels in specialized tasks like data extraction and real-time content moderation, making it a versatile tool for a broad range of industries.

It does not support image inputs.

See the launch announcement and benchmark results [here](https://www.anthropic.com/news/3-5-mod...
Anthropic: Claude 3.5 Haiku (self-moderated)
anthropic @ Anthropic | Claude 3.5 Haiku (self-moderated)
Slug: anthropic/claude-3.5-haiku | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 3.5 Haiku offers enhanced capabilities in speed, coding accuracy, and tool use. Engineered to excel in real-time applications, it delivers quick response times that are essential for dynamic tasks such as chat interactions and immediate coding suggestions.

This makes it highly suitable for environments that demand both speed and precision, such as software development, customer service bots, and data management systems.

This model is currently pointing to [Claude 3.5 Haiku (2024-10-22)](/anthropic/claude-3-5-haiku-20241022)....
Anthropic: Claude 3.5 Sonnet
anthropic @ Anthropic | Claude 3.5 Sonnet
Slug: anthropic/claude-3.5-sonnet | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
New Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:

- Coding: Scores ~49% on SWE-Bench Verified, higher than the last best score, and without any fancy prompt scaffolding
- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights
- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone
- Agentic tasks: exceptional tool use, making it great at a...
Anthropic: Claude 3.5 Sonnet (2024-06-20)
anthropic @ Anthropic | Claude 3.5 Sonnet (2024-06-20)
Slug: anthropic/claude-3.5-sonnet-20240620 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:

- Coding: Autonomously writes, edits, and runs code with reasoning and troubleshooting
- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights
- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone
- Agentic tasks: exceptional tool use, making it great at agentic tasks (i.e. complex, multi-st...
Anthropic: Claude 3.5 Sonnet (2024-06-20) (self-moderated)
anthropic @ Anthropic | Claude 3.5 Sonnet (2024-06-20) (self-moderated)
Slug: anthropic/claude-3.5-sonnet-20240620 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:

- Coding: Autonomously writes, edits, and runs code with reasoning and troubleshooting
- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights
- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone
- Agentic tasks: exceptional tool use, making it great at agentic tasks (i.e. complex, multi-st...
Anthropic: Claude 3.5 Sonnet (self-moderated)
anthropic @ Anthropic | Claude 3.5 Sonnet (self-moderated)
Slug: anthropic/claude-3.5-sonnet | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
New Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:

- Coding: Scores ~49% on SWE-Bench Verified, higher than the last best score, and without any fancy prompt scaffolding
- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights
- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone
- Agentic tasks: exceptional tool use, making it great at a...
Anthropic: Claude 3.7 Sonnet
anthropic @ Google Vertex | Claude 3.7 Sonnet
Slug: anthropic/claude-3.7-sonnet | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature stop reasoning include_reasoning tools tool_choice
Reasoning: ✔ | Moderation: ✘ | TOS
Claude 3.7 Sonnet is an advanced large language model with improved reasoning, coding, and problem-solving capabilities. It introduces a hybrid reasoning approach, allowing users to choose between rapid responses and extended, step-by-step processing for complex tasks. The model demonstrates notable improvements in coding, particularly in front-end development and full-stack updates, and excels in agentic workflows, where it can autonomously navigate multi-step processes.

Claude 3.7 Sonnet maintains performance parity with its predecessor in standard mode while offering an extended reasoning...
Anthropic: Claude 3.7 Sonnet (self-moderated)
anthropic @ Anthropic | Claude 3.7 Sonnet (self-moderated)
Slug: anthropic/claude-3.7-sonnet | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature stop reasoning include_reasoning tools tool_choice
Reasoning: ✔ | Moderation: ✘ | TOS
Claude 3.7 Sonnet is an advanced large language model with improved reasoning, coding, and problem-solving capabilities. It introduces a hybrid reasoning approach, allowing users to choose between rapid responses and extended, step-by-step processing for complex tasks. The model demonstrates notable improvements in coding, particularly in front-end development and full-stack updates, and excels in agentic workflows, where it can autonomously navigate multi-step processes.

Claude 3.7 Sonnet maintains performance parity with its predecessor in standard mode while offering an extended reasoning...
Anthropic: Claude 3.7 Sonnet (thinking)
anthropic @ Google Vertex | Claude 3.7 Sonnet (thinking)
Slug: anthropic/claude-3.7-sonnet | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature stop reasoning include_reasoning tools tool_choice
Reasoning: ✔ | Moderation: ✘ | TOS
Claude 3.7 Sonnet is an advanced large language model with improved reasoning, coding, and problem-solving capabilities. It introduces a hybrid reasoning approach, allowing users to choose between rapid responses and extended, step-by-step processing for complex tasks. The model demonstrates notable improvements in coding, particularly in front-end development and full-stack updates, and excels in agentic workflows, where it can autonomously navigate multi-step processes.

Claude 3.7 Sonnet maintains performance parity with its predecessor in standard mode while offering an extended reasoning...
Anthropic: Claude Opus 4
anthropic @ Anthropic | Claude Opus 4
Slug: anthropic/claude-opus-4 | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature stop reasoning include_reasoning tools tool_choice
Reasoning: ✔ | Moderation: ✔ | TOS
Claude Opus 4 is benchmarked as the world’s best coding model at the time of release, bringing sustained performance on complex, long-running tasks and agent workflows. It sets new benchmarks in software engineering, achieving leading results on SWE-bench (72.5%) and Terminal-bench (43.2%). Opus 4 supports extended, agentic workflows, handling thousands of task steps continuously for hours without degradation.

Read more at the [blog post here](https://www.anthropic.com/news/claude-4)...
Anthropic: Claude Sonnet 4
anthropic @ Google Vertex | Claude Sonnet 4
Slug: anthropic/claude-sonnet-4 | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature stop reasoning include_reasoning tools tool_choice
Reasoning: ✔ | Moderation: ✘ | TOS
Claude Sonnet 4 significantly enhances the capabilities of its predecessor, Sonnet 3.7, excelling in both coding and reasoning tasks with improved precision and controllability. Achieving state-of-the-art performance on SWE-bench (72.7%), Sonnet 4 balances capability and computational efficiency, making it suitable for a broad range of applications from routine coding tasks to complex software development projects. Key enhancements include improved autonomous codebase navigation, reduced error rates in agent-driven workflows, and increased reliability in following intricate instructions. Sonne...
Anthropic: Claude v2
anthropic @ Anthropic | Claude v2
Slug: anthropic/claude-2 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use....
Anthropic: Claude v2 (self-moderated)
anthropic @ Anthropic | Claude v2 (self-moderated)
Slug: anthropic/claude-2 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use....
Anthropic: Claude v2.1
anthropic @ Anthropic | Claude v2.1
Slug: anthropic/claude-2.1 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use....
Anthropic: Claude v2.1 (self-moderated)
anthropic @ Anthropic | Claude v2.1 (self-moderated)
Slug: anthropic/claude-2.1 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use....
Bagel 34B v0.2
jondurbin @ - | Bagel 34B v0.2
Slug: jondurbin/bagel-34b | HF: jondurbin/bagel-34b-v0.2
Context: 200000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
An experimental fine-tune of [Yi 34b 200k](/models/01-ai/yi-34b-200k) using [bagel](https://github.com/jondurbin/bagel). This is the version of the fine-tune before direct preference optimization (DPO) has been applied. DPO performs better on benchmarks, but this version is likely better for creative writing, roleplay, etc....
Nous: Capybara 34B
nousresearch @ - | Capybara 34B
Slug: nousresearch/nous-capybara-34b | HF: NousResearch/Nous-Capybara-34B
Context: 200000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This model is trained on the Yi-34B model for 3 epochs on the Capybara dataset. It's the first 34B Nous model and first 200K context length Nous model....
OpenAI: Codex Mini
openai @ OpenAI | Codex Mini
Slug: openai/codex-mini | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✔ | Moderation: ✔ | TOS
codex-mini-latest is a fine-tuned version of o4-mini specifically for use in Codex CLI. For direct use in the API, we recommend starting with gpt-4.1....
OpenAI: o1
openai @ OpenAI | o1
Slug: openai/o1 | HF:
Context: 200000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 model series is trained with large-scale reinforcement learning to reason using chain of thought.

The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).
...
OpenAI: o1-pro
openai @ OpenAI | o1-pro
Slug: openai/o1-pro | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
seed max_tokens response_format structured_outputs
Reasoning: ✔ | Moderation: ✔ | TOS
The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model uses more compute to think harder and provide consistently better answers....
OpenAI: o3
openai @ OpenAI | o3
Slug: openai/o3 | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: image, text, file | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✔ | Moderation: ✔ | TOS
o3 is a well-rounded and powerful model across domains. It sets a new standard for math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following. Use it to think through multi-step problems that involve analysis across text, code, and images. Note that BYOK is required for this model. Set up here: https://openrouter.ai/settings/integrations...
OpenAI: o3 Mini
openai @ OpenAI | o3 Mini
Slug: openai/o3-mini | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
OpenAI o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly excelling in science, mathematics, and coding.

This model supports the `reasoning_effort` parameter, which can be set to "high", "medium", or "low" to control the thinking time of the model. The default is "medium". OpenRouter also offers the model slug `openai/o3-mini-high` to default the parameter to "high".

The model features three adjustable reasoning effort levels and supports key developer capabilities including function calling, structured outputs, and streaming, though it does not inclu...
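As a rough illustration of the `reasoning_effort` parameter described above, here is a minimal sketch against OpenRouter's OpenAI-compatible chat completions endpoint. The endpoint URL and the `OPENROUTER_API_KEY` environment variable are assumptions for the example; the model slug and parameter values are the ones listed in this entry.

```python
# Minimal sketch: calling openai/o3-mini with an explicit reasoning_effort.
# Assumptions: the OpenRouter /chat/completions endpoint and an API key in
# the OPENROUTER_API_KEY environment variable.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "openai/o3-mini",      # "openai/o3-mini-high" defaults this to "high"
        "reasoning_effort": "high",     # "low" | "medium" (default) | "high"
        "messages": [
            {"role": "user", "content": "Prove that the sum of two even integers is even."}
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```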
OpenAI: o3 Mini High
openai @ OpenAI | o3 Mini High
Slug: openai/o3-mini-high | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
OpenAI o3-mini-high is the same model as [o3-mini](/openai/o3-mini) with reasoning_effort set to high.

o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly excelling in science, mathematics, and coding. The model features three adjustable reasoning effort levels and supports key developer capabilities including function calling, structured outputs, and streaming, though it does not include vision processing capabilities.

The model demonstrates significant improvements over its predecessor, with expert testers preferring its responses 56% of the time an...
OpenAI: o3 Pro
openai @ OpenAI | o3 Pro
Slug: openai/o3-pro | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text, file, image | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✔ | Moderation: ✔ | TOS
The o-series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o3-pro model uses more compute to think harder and provide consistently better answers.

Note that BYOK is required for this model. Set up here: https://openrouter.ai/settings/integrations...
OpenAI: o4 Mini
openai @ OpenAI | o4 Mini
Slug: openai/o4-mini | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✔ | Moderation: ✔ | TOS
OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks like AIME (99.5% with Python) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains.

Despite its smaller size, o4-mini exhibits high accuracy in STEM tasks, visual problem solving (e.g., MathVista, MMMU), and code editing. It is especially well-suited for high-throughput scenarios where lat...
OpenAI: o4 Mini High
openai @ OpenAI | o4 Mini High
Slug: openai/o4-mini-high | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: image, text, file | Output: text | Completions: ✔ | Chat:
Supported Params:
tools tool_choice seed max_tokens response_format structured_outputs
Reasoning: ✔ | Moderation: ✔ | TOS
OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high.

OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks like AIME (99.5% with Python) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains.

Despite its smaller size, o4-mini exhibits high accuracy in STEM tasks, visual problem solving (e.g.,...
Perplexity: Sonar Pro
perplexity @ Perplexity | Sonar Pro
Slug: perplexity/sonar-pro | HF:
Context: 200000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p web_search_options top_k frequency_penalty presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro)

For enterprises seeking more advanced capabilities, the Sonar Pro API can handle in-depth, multi-step queries with added extensibility, like double the number of citations per search as Sonar on average. Plus, with a larger context window, it can handle longer and more nuanced searches and follow-up questions. ...
Yi 34B 200K
01-ai @ - | Yi 34B 200K
Slug: 01-ai/yi-34b-200k | HF: 01-ai/Yi-34B-200K
Context: 200000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This version was trained on a large context length, allowing ~200k words (1000 paragraphs) of combined input and output....
DeepSeek: DeepSeek Prover V2
deepseek @ DeepInfra | DeepSeek Prover V2
Slug: deepseek/deepseek-prover-v2 | HF: deepseek-ai/DeepSeek-Prover-V2-671B
Context: 163840 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
DeepSeek Prover V2 is a 671B parameter model, speculated to be geared towards logic and mathematics. Likely an upgrade from [DeepSeek-Prover-V1.5](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1.5-RL). Not much is known about the model yet, as DeepSeek released it on Hugging Face without an announcement or description....
DeepSeek: DeepSeek R1 Zero
deepseek @ - | DeepSeek R1 Zero
Slug: deepseek/deepseek-r1-zero | HF: deepseek-ai/DeepSeek-R1-Zero
Context: 163840 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
DeepSeek-R1-Zero is a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. It's 671B parameters in size, with 37B active in an inference pass.

It demonstrates remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.

DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. See [DeepSeek R1](/deepseek/deepseek-r1) for the SFT model.

...
DeepSeek: DeepSeek V3
deepseek @ Targon | DeepSeek V3
Slug: deepseek/deepseek-chat | HF: deepseek-ai/DeepSeek-V3
Context: 163840 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported evaluations reveal that the model outperforms other open-source models and rivals leading closed-source models.

For model details, please visit [the DeepSeek-V3 repo](https://github.com/deepseek-ai/DeepSeek-V3) for more information, or see the [launch announcement](https://api-docs.deepseek.com/news/news1226)....
DeepSeek: DeepSeek V3 (free)
deepseek @ Chutes | DeepSeek V3 (free)
Slug: deepseek/deepseek-chat | HF: deepseek-ai/DeepSeek-V3
Context: 163840 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported evaluations reveal that the model outperforms other open-source models and rivals leading closed-source models.

For model details, please visit [the DeepSeek-V3 repo](https://github.com/deepseek-ai/DeepSeek-V3) for more information, or see the [launch announcement](https://api-docs.deepseek.com/news/news1226)....
DeepSeek: DeepSeek V3 0324
deepseek @ Targon | DeepSeek V3 0324
Slug: deepseek/deepseek-chat-v3-0324 | HF: deepseek-ai/DeepSeek-V3-0324
Context: 163840 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
DeepSeek V3, a 685B-parameter, mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team.

It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs really well on a variety of tasks....
DeepSeek: DeepSeek V3 Base (free)
deepseek @ Chutes | DeepSeek V3 Base (free)
Slug: deepseek/deepseek-v3-base | HF: deepseek-ai/Deepseek-v3-base
Context: 163840 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Note that this is a base model mostly meant for testing; you need to provide detailed prompts for the model to return useful responses.

DeepSeek-V3 Base is a 671B parameter open Mixture-of-Experts (MoE) language model with 37B active parameters per forward pass and a context length of 128K tokens. Trained on 14.8T tokens using FP8 mixed precision, it achieves high training efficiency and stability, with strong performance across language, reasoning, math, and coding tasks.

DeepSeek-V3 Base is the pre-trained model behind [DeepSeek V3](/deepseek/deepseek-chat-v3)...
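Because this is a raw base model, prompts work best as text to be continued rather than instructions to be followed. A minimal sketch, assuming the legacy `/completions` endpoint, an `OPENROUTER_API_KEY` environment variable, and an OpenAI-style response shape; only the model slug comes from this entry.

```python
# Minimal sketch: prompting the base model through a text-completions endpoint.
# The URL, the OPENROUTER_API_KEY environment variable, and the response shape
# are assumptions. Base models continue text, so the prompt carries the format
# you want continued (here a simple few-shot Q/A pattern).
import os
import requests

prompt = (
    "Q: What is the capital of France?\n"
    "A: Paris\n"
    "Q: What is the capital of Japan?\n"
    "A:"
)

resp = requests.post(
    "https://openrouter.ai/api/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-v3-base",
        "prompt": prompt,
        "max_tokens": 16,
        "stop": ["\n"],  # cut the continuation off at the end of the answer line
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```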
DeepSeek: R1
deepseek @ Targon | R1
Slug: deepseek/deepseek-r1 | HF: deepseek-ai/DeepSeek-R1
Context: 163840 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k repetition_penalty
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.

Fully open-source model & [technical report](https://api-docs.deepseek.com/news/news250120).

MIT licensed: Distill & commercialize freely!...
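A minimal sketch of retrieving the open reasoning tokens alongside the answer, using the `include_reasoning` flag from the Supported Params list above. The endpoint URL, the `OPENROUTER_API_KEY` environment variable, and the `reasoning` field on the returned message are assumptions and may vary by provider.

```python
# Minimal sketch: asking DeepSeek R1 to return its reasoning tokens with the answer.
# The include_reasoning flag comes from the Supported Params list above; the
# `reasoning` field on the returned message is an assumed response shape.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1",
        "include_reasoning": True,  # request the open chain-of-thought tokens
        "messages": [{"role": "user", "content": "What is 17 * 24?"}],
    },
    timeout=300,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
print("reasoning:", message.get("reasoning"))  # assumed field name; may be absent
print("answer:", message["content"])
```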
DeepSeek: R1 (free)
deepseek @ Chutes | R1 (free)
Slug: deepseek/deepseek-r1 | HF: deepseek-ai/DeepSeek-R1
Context: 163840 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens reasoning include_reasoning temperature
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.

Fully open-source model & [technical report](https://api-docs.deepseek.com/news/news250120).

MIT licensed: Distill & commercialize freely!...
DeepSeek: R1 0528
deepseek @ Chutes | R1 0528
Slug: deepseek/deepseek-r1-0528 | HF: deepseek-ai/DeepSeek-R1-0528
Context: 163840 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.

Fully open-source model....
DeepSeek: R1 0528 (free)
deepseek @ Chutes | R1 0528 (free)
Slug: deepseek/deepseek-r1-0528 | HF: deepseek-ai/DeepSeek-R1-0528
Context: 163840 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.

Fully open-source model....
Meta: Llama Guard 4 12B
meta-llama @ DeepInfra | Llama Guard 4 12B
Slug: meta-llama/llama-guard-4-12b | HF: meta-llama/Llama-Guard-4-12B
Context: 163840 tokens | Free: No | Quant: bf16
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM—generating text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.

Llama Guard 4 was aligned to safeguard against the standardized MLCommons hazards taxonomy and designed to support multimodal Llama 4 capabilities...
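A minimal sketch of prompt classification with Llama Guard 4 over the same chat completions endpoint. The verdict format ("safe" or "unsafe" followed by category codes) is an assumption based on the description above; the exact template a provider applies may differ, as may the endpoint URL and the `OPENROUTER_API_KEY` environment variable used here.

```python
# Minimal sketch: classifying a candidate user prompt with Llama Guard 4.
# The model generates text indicating whether the content is safe or unsafe;
# the exact verdict format and the endpoint details are assumptions.
import os
import requests

candidate_prompt = "How do I pick a lock?"

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meta-llama/llama-guard-4-12b",
        "messages": [{"role": "user", "content": candidate_prompt}],
    },
    timeout=60,
)
resp.raise_for_status()
verdict = resp.json()["choices"][0]["message"]["content"]
print(verdict)  # e.g. "safe", or "unsafe" followed by the violated category codes
```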
Microsoft: MAI DS R1 (free)
microsoft @ Chutes | MAI DS R1 (free)
Slug: microsoft/mai-ds-r1 | HF: microsoft/MAI-DS-R1
Context: 163840 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
MAI-DS-R1 is a post-trained variant of DeepSeek-R1 developed by the Microsoft AI team to improve the model’s responsiveness on previously blocked topics while enhancing its safety profile. Built on top of DeepSeek-R1’s reasoning foundation, it integrates 110k examples from the Tulu-3 SFT dataset and 350k internally curated multilingual safety-alignment samples. The model retains strong reasoning, coding, and problem-solving capabilities, while unblocking a wide range of prompts previously restricted in R1.

MAI-DS-R1 demonstrates improved performance on harm mitigation benchmarks and maint...
TNG: DeepSeek R1T Chimera (free)
tngtech @ Chutes | DeepSeek R1T Chimera (free)
Slug: tngtech/deepseek-r1t-chimera | HF: tngtech/DeepSeek-R1T-Chimera
Context: 163840 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek-R1T-Chimera is created by merging DeepSeek-R1 and DeepSeek-V3 (0324), combining the reasoning capabilities of R1 with the token efficiency improvements of V3. It is based on a DeepSeek-MoE Transformer architecture and is optimized for general text generation tasks.

The model merges pretrained weights from both source models to balance performance across reasoning, efficiency, and instruction-following tasks. It is released under the MIT license and intended for research and commercial use....
TNG: DeepSeek R1T2 Chimera (free)
tngtech @ Chutes | DeepSeek R1T2 Chimera (free)
Slug: tngtech/deepseek-r1t2-chimera | HF: tngtech/DeepSeek-TNG-R1T2-Chimera
Context: 163840 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek-TNG-R1T2-Chimera is the second-generation Chimera model from TNG Tech. It is a 671 B-parameter mixture-of-experts text-generation model assembled from DeepSeek-AI’s R1-0528, R1, and V3-0324 checkpoints with an Assembly-of-Experts merge. The tri-parent design yields strong reasoning performance while running roughly 20 % faster than the original R1 and more than 2× faster than R1-0528 under vLLM, giving a favorable cost-to-intelligence trade-off. The checkpoint supports contexts up to 60 k tokens in standard use (tested to ~130 k) and maintains consistent <think> token behaviour, ma...
AionLabs: Aion-1.0
aion-labs @ AionLabs | Aion-1.0
Slug: aion-labs/aion-1.0 | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning
Reasoning: ✔ | Moderation: ✘ | TOS
Aion-1.0 is a multi-model system designed for high performance across various tasks, including reasoning and coding. It is built on DeepSeek-R1, augmented with additional models and techniques such as Tree of Thoughts (ToT) and Mixture of Experts (MoE). It is Aion Lab's most powerful reasoning model....
AionLabs: Aion-1.0-Mini
aion-labs @ AionLabs | Aion-1.0-Mini
Slug: aion-labs/aion-1.0-mini | HF: FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview
Context: 131072 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning
Reasoning: ✔ | Moderation: ✘ | TOS
Aion-1.0-Mini is a 32B-parameter model distilled from DeepSeek-R1, designed for strong performance in reasoning domains such as mathematics, coding, and logic. It is a modified variant of a FuseAI model that outperforms R1-Distill-Qwen-32B and R1-Distill-Llama-70B, with benchmark results available on its [Hugging Face page](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview), independently replicated for verification....
Arcee AI: Maestro Reasoning
arcee-ai @ Together | Maestro Reasoning
Slug: arcee-ai/maestro-reasoning | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Maestro Reasoning is Arcee's flagship analysis model: a 32 B‑parameter derivative of Qwen 2.5‑32 B tuned with DPO and chain‑of‑thought RL for step‑by‑step logic. Compared to the earlier 7 B preview, the production 32 B release widens the context window to 128 k tokens and doubles pass‑rate on MATH and GSM‑8K, while also lifting code completion accuracy. Its instruction style encourages structured "thought → answer" traces that can be parsed or hidden according to user preference. That transparency pairs well with audit‑focused industries like finance or healthca...
Arcee AI: Spotlight
arcee-ai @ Together | Spotlight
Slug: arcee-ai/spotlight | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Spotlight is a 7‑billion‑parameter vision‑language model derived from Qwen 2.5‑VL and fine‑tuned by Arcee AI for tight image‑text grounding tasks. It offers a 32 k‑token context window, enabling rich multimodal conversations that combine lengthy documents with one or more images. Training emphasized fast inference on consumer GPUs while retaining strong captioning, visual‐question‑answering, and diagram‑analysis accuracy. As a result, Spotlight slots neatly into agent workflows where screenshots, charts or UI mock‑ups need to be interpreted on the fly. Early benchmark...
Arcee AI: Virtuoso Large
arcee-ai @ Together | Virtuoso Large
Slug: arcee-ai/virtuoso-large | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Virtuoso‑Large is Arcee's top‑tier general‑purpose LLM at 72 B parameters, tuned to tackle cross‑domain reasoning, creative writing and enterprise QA. Unlike many 70 B peers, it retains the 128 k context inherited from Qwen 2.5, letting it ingest books, codebases or financial filings wholesale. Training blended DeepSeek R1 distillation, multi‑epoch supervised fine‑tuning and a final DPO/RLHF alignment stage, yielding strong performance on BIG‑Bench‑Hard, GSM‑8K and long‑context Needle‑In‑Haystack tests. Enterprises use Virtuoso‑Large as the "fallback" brain ...
Arcee AI: Virtuoso Medium V2
arcee-ai @ Together | Virtuoso Medium V2
Slug: arcee-ai/virtuoso-medium-v2 | HF: arcee-ai/Virtuoso-Medium-v2
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Virtuoso‑Medium‑v2 is a 32 B model distilled from DeepSeek‑v3 logits and merged back onto a Qwen 2.5 backbone, yielding a sharper, more factual successor to the original Virtuoso Medium. The team harvested ~1.1 B logit tokens and applied "fusion‑merging" plus DPO alignment, which pushed scores past Arcee‑Nova 2024 and many 40 B‑plus peers on MMLU‑Pro, MATH and HumanEval. With a 128 k context and aggressive quantization options (from BF16 down to 4‑bit GGUF), it balances capability with deployability on single‑GPU nodes. Typical use cases include enterprise chat as...
DeepSeek: Deepseek R1 0528 Qwen3 8B (free)
deepseek @ Chutes | Deepseek R1 0528 Qwen3 8B (free)
Slug: deepseek/deepseek-r1-0528-qwen3-8b | HF: deepseek-ai/deepseek-r1-0528-qwen3-8b
Context: 131072 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek-R1-0528 is a lightly upgraded release of DeepSeek R1 that taps more compute and smarter post-training techniques, pushing its reasoning and inference close to flagship models like o3 and Gemini 2.5 Pro.
It now tops math, programming, and logic leaderboards, showcasing a step-change in depth-of-thought.
The distilled variant, DeepSeek-R1-0528-Qwen3-8B, transfers this chain-of-thought into an 8 B-parameter form, beating standard Qwen3 8B by +10 pp and tying the 235 B “thinking” giant on AIME 2024....
DeepSeek: R1 Distill Llama 70B
deepseek @ Chutes | R1 Distill Llama 70B
Slug: deepseek/deepseek-r1-distill-llama-70b | HF: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance across multiple benchmarks, including:

- AIME 2024 pass@1: 70.0
- MATH-500 pass@1: 94.5
- CodeForces Rating: 1633

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models....
DeepSeek: R1 Distill Qwen 1.5B
deepseek @ Together | R1 Distill Qwen 1.5B
Slug: deepseek/deepseek-r1-distill-qwen-1.5b | HF: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 Distill Qwen 1.5B is a distilled large language model based on [Qwen 2.5 Math 1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It's a very small and efficient model which outperforms [GPT 4o 0513](/openai/gpt-4o-2024-05-13) on Math Benchmarks.

Other benchmark results include:

- AIME 2024 pass@1: 28.9
- AIME 2024 cons@64: 52.7
- MATH-500 pass@1: 83.9

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models....
DeepSeek: R1 Distill Qwen 32B
deepseek @ DeepInfra | R1 Distill Qwen 32B
Slug: deepseek/deepseek-r1-distill-qwen-32b | HF: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces Rating: 1691

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models....
DeepSeek: R1 Distill Qwen 7B
deepseek @ GMICloud | R1 Distill Qwen 7B
Slug: deepseek/deepseek-r1-distill-qwen-7b | HF: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
Context: 131072 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning seed
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek-R1-Distill-Qwen-7B is a 7 billion parameter dense language model distilled from DeepSeek-R1, leveraging reinforcement learning-enhanced reasoning data generated by DeepSeek's larger models. The distillation process transfers advanced reasoning, math, and code capabilities into a smaller, more efficient model architecture based on Qwen2.5-Math-7B. This model demonstrates strong performance across mathematical benchmarks (92.8% pass@1 on MATH-500), coding tasks (Codeforces rating 1189), and general reasoning (49.1% pass@1 on GPQA Diamond), achieving competitive accuracy relative to larg...
Google: Gemma 3 27B
google @ DeepInfra | Gemma 3 27B
Slug: google/gemma-3-27b-it | HF:
Context: 131072 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 27B is Google's latest open source model, successor to [Gemma 2](/google/gemma-2-27b-it)...
Google: Gemma 3 4B
google @ DeepInfra | Gemma 3 4B
Slug: google/gemma-3-4b-it | HF: google/gemma-3-4b-it
Context: 131072 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling....
Kimi Dev 72b (free)
moonshotai @ Chutes | Kimi Dev 72b (free)
Slug: moonshotai/kimi-dev-72b | HF: moonshotai/Kimi-Dev-72B
Context: 131072 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Kimi-Dev-72B is an open-source large language model fine-tuned for software engineering and issue resolution tasks. Based on Qwen2.5-72B, it is optimized using large-scale reinforcement learning that applies code patches in real repositories and validates them via full test suite execution—rewarding only correct, robust completions. The model achieves 60.4% on SWE-bench Verified, setting a new benchmark among open-source models for software bug fixing and code reasoning....
Llama Guard 3 8B
meta-llama @ Nebius AI Studio | Llama Guard 3 8B
Slug: meta-llama/llama-guard-3-8b | HF: meta-llama/Llama-Guard-3-8B
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k logit_bias logprobs top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.

Llama Guard 3 was aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities. Specifically, it provid...
Meta: Llama 3.1 70B Instruct
meta-llama @ DeepInfra Turbo | Llama 3.1 70B Instruct
Slug: meta-llama/llama-3.1-70b-instruct | HF: meta-llama/Meta-Llama-3.1-70B-Instruct
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice response_format stop frequency_penalty presence_penalty repetition_penalty top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high quality dialogue use cases.

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Meta: Llama 3.1 8B Instruct
meta-llama @ DeepInfra (Turbo) | Llama 3.1 8B Instruct
Slug: meta-llama/llama-3.1-8b-instruct | HF: meta-llama/Meta-Llama-3.1-8B-Instruct
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient.

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Meta: Llama 3.2 11B Vision Instruct
meta-llama @ DeepInfra | Llama 3.2 11B Vision Instruct
Slug: meta-llama/llama-3.2-11b-vision-instruct | HF: meta-llama/Llama-3.2-11B-Vision-Instruct
Context: 131072 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis.

Its ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven custo...
Meta: Llama 3.2 11B Vision Instruct (free)
meta-llama @ Together | Llama 3.2 11B Vision Instruct (free)
Slug: meta-llama/llama-3.2-11b-vision-instruct | HF: meta-llama/Llama-3.2-11B-Vision-Instruct
Context: 131072 tokens | Free: Yes | Quant: fp8
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis.

Its ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven custo...
Meta: Llama 3.2 1B Instruct
meta-llama @ DeepInfra | Llama 3.2 1B Instruct
Slug: meta-llama/llama-3.2-1b-instruct | HF: meta-llama/Llama-3.2-1B-Instruct
Context: 131072 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks, such as summarization, dialogue, and multilingual text analysis. Its smaller size allows it to operate efficiently in low-resource environments while maintaining strong task performance.

Supporting eight core languages and fine-tunable for more, Llama 3.2 1B is ideal for businesses or developers seeking lightweight yet powerful AI solutions that can operate in diverse multilingual settings without the high computational demand of larger models.

Click here for the [original model card]...
Meta: Llama 3.2 3B Instruct (free)
meta-llama @ Venice | Llama 3.2 3B Instruct (free)
Slug: meta-llama/llama-3.2-3b-instruct | HF: meta-llama/Llama-3.2-3B-Instruct
Context: 131072 tokens | Free: Yes | Quant: fp16
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it supports eight languages, including English, Spanish, and Hindi, and is adaptable for additional languages.

Trained on 9 trillion tokens, the Llama 3.2 3B model excels in instruction-following, complex reasoning, and tool use. Its balanced performance makes it ideal for applications needing accuracy and efficiency in text generation across multilingual sett...
Meta: Llama 3.2 90B Vision Instruct
meta-llama @ Together | Llama 3.2 90B Vision Instruct
Slug: meta-llama/llama-3.2-90b-vision-instruct | HF: meta-llama/Llama-3.2-90B-Vision-Instruct
Context: 131072 tokens | Free: No | Quant: fp8
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
The Llama 90B Vision model is a top-tier, 90-billion-parameter multimodal model designed for the most challenging visual reasoning and language tasks. It offers unparalleled accuracy in image captioning, visual question answering, and advanced image-text comprehension. Pre-trained on vast multimodal datasets and fine-tuned with human feedback, the Llama 90B Vision is engineered to handle the most demanding image-based AI tasks.

This model is perfect for industries requiring cutting-edge multimodal AI capabilities, particularly those dealing with complex, real-time visual and textual analysis....
Meta: Llama 3.3 70B Instruct
meta-llama @ DeepInfra Turbo | Llama 3.3 70B Instruct
Slug: meta-llama/llama-3.3-70b-instruct | HF: meta-llama/Llama-3.3-70B-Instruct
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.

Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

[Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)...
Microsoft: Phi 4 Multimodal Instruct
microsoft @ DeepInfra | Phi 4 Multimodal Instruct
Slug: microsoft/phi-4-multimodal-instruct | HF: microsoft/Phi-4-multimodal-instruct
Context: 131072 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Phi-4 Multimodal Instruct is a versatile 5.6B parameter foundation model that combines advanced reasoning and instruction-following capabilities across both text and visual inputs, providing accurate text outputs. The unified architecture enables efficient, low-latency inference, suitable for edge and mobile deployments. Phi-4 Multimodal Instruct supports text inputs in multiple languages including Arabic, Chinese, English, French, German, Japanese, Spanish, and more, with visual input optimized primarily for English. It delivers impressive performance on multimodal tasks involving mathematica...
Mistral Large 2407
mistralai @ Mistral | Mistral Large 2407
Slug: mistralai/mistral-large-2407 | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch announcement [here](https://mistral.ai/news/mistral-large-2407/).

It supports dozens of languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80+ coding languages including Python, Java, C, C++, JavaScript, and Bash. Its long context window allows precise information recall from large documents.
...
Mistral Large 2411
mistralai @ Mistral | Mistral Large 2411
Slug: mistralai/mistral-large-2411 | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large) released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411).

It provides a significant upgrade on the previous [Mistral Large 24.07](/mistralai/mistral-large-2407), with notable improvements in long context understanding, a new system prompt, and more accurate function calling....
Mistral: Devstral Medium
mistralai @ Mistral | Devstral Medium
Slug: mistralai/devstral-medium | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI. Positioned as a step up from Devstral Small, it achieves 61.6% on SWE-Bench Verified, placing it ahead of Gemini 2.5 Pro and GPT-4.1 in code-related tasks, at a fraction of the cost. It is designed for generalization across prompt styles and tool use in code agents and frameworks.

Devstral Medium is available via API only (not open-weight), and supports enterprise deployment on private infrastructure, with optional fine-tuning capabilities....
Mistral: Ministral 3B
mistralai @ Mistral | Ministral 3B
Slug: mistralai/ministral-3b | HF:
Context: 131072 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference....
Mistral: Mistral Medium 3
mistralai @ Mistral | Mistral Medium 3
Slug: mistralai/mistral-medium-3 | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost compared to traditional large models, making it suitable for scalable deployments across professional and industrial use cases.

The model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3 offers comp...
Mistral: Mistral Nemo (free)
mistralai @ Chutes | Mistral Nemo (free)
Slug: mistralai/mistral-nemo | HF: mistralai/Mistral-Nemo-Instruct-2407
Context: 131072 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.

The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.

It supports function calling and is released under the Apache 2.0 license....
Mistral: Pixtral Large 2411
mistralai @ Mistral | Pixtral Large 2411
Slug: mistralai/pixtral-large-2411 | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411). The model is able to understand documents, charts and natural images.

The model is available under the Mistral Research License (MRL) for research and educational use, and the Mistral Commercial License for experimentation, testing, and production for commercial purposes.

...
Moonshot AI: Kimi VL A3B Thinking (free)
moonshotai @ Chutes | Kimi VL A3B Thinking (free)
Slug: moonshotai/kimi-vl-a3b-thinking | HF: moonshotai/Kimi-VL-A3B-Thinking
Context: 131072 tokens | Free: Yes | Quant: n/a
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Kimi-VL is a lightweight Mixture-of-Experts vision-language model that activates only 2.8B parameters per step while delivering strong performance on multimodal reasoning and long-context tasks. The Kimi-VL-A3B-Thinking variant, fine-tuned with chain-of-thought and reinforcement learning, excels in math and visual reasoning benchmarks like MathVision, MMMU, and MathVista, rivaling much larger models such as Qwen2.5-VL-7B and Gemma-3-12B. It supports 128K context and high-resolution input via its MoonViT encoder....
NVIDIA: Llama 3.1 Nemotron 70B Instruct
nvidia @ Lambda | Llama 3.1 Nemotron 70B Instruct
Slug: nvidia/llama-3.1-nemotron-70b-instruct | HF: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format min_p repetition_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
NVIDIA's Llama 3.1 Nemotron 70B is a language model designed for generating precise and useful responses. Leveraging [Llama 3.1 70B](/models/meta-llama/llama-3.1-70b-instruct) architecture and Reinforcement Learning from Human Feedback (RLHF), it excels in automatic alignment benchmarks. This model is tailored for applications requiring high accuracy in helpfulness and response generation, suitable for diverse user queries across multiple domains.

Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/)....
NVIDIA: Llama 3.1 Nemotron Nano 8B v1
nvidia @ - | Llama 3.1 Nemotron Nano 8B v1
Slug: nvidia/llama-3.1-nemotron-nano-8b-v1 | HF: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama-3.1-Nemotron-Nano-8B-v1 is a compact large language model (LLM) derived from Meta's Llama-3.1-8B-Instruct, specifically optimized for reasoning tasks, conversational interactions, retrieval-augmented generation (RAG), and tool-calling applications. It balances accuracy and efficiency, fitting comfortably onto a single consumer-grade RTX GPU for local deployment. The model supports extended context lengths of up to 128K tokens.

Note: you must include `detailed thinking on` in the system prompt to enable reasoning. Please see [Usage Recommendations](https://huggingface.co/nvidia/Llama-3_1...
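The reasoning toggle for this model (shared by the other Nemotron v1 entries below) is a plain string placed in the system prompt. Below is a minimal sketch of such a request against OpenRouter's chat-completions endpoint; the endpoint URL, the `OPENROUTER_API_KEY` environment variable, and the example question are assumptions for illustration, and only the `detailed thinking on` system prompt comes from the note above.

```python
# Minimal sketch (assumptions noted above): enable Nemotron reasoning by
# placing "detailed thinking on" in the system prompt, per the model note.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",  # assumed OpenRouter endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "nvidia/llama-3.1-nemotron-nano-8b-v1",
        "messages": [
            {"role": "system", "content": "detailed thinking on"},  # reasoning toggle from the note
            {"role": "user", "content": "How many prime numbers are there below 50?"},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```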
NVIDIA: Llama 3.1 Nemotron Ultra 253B v1
nvidia @ Nebius AI Studio | Llama 3.1 Nemotron Ultra 253B v1
Slug: nvidia/llama-3.1-nemotron-ultra-253b-v1 | HF: nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k logit_bias logprobs top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta’s Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node.

Note: you must include `detailed thinking on` in the system prompt to enable reasoning. Pl...
NVIDIA: Llama 3.1 Nemotron Ultra 253B v1 (free)
nvidia @ Chutes | Llama 3.1 Nemotron Ultra 253B v1 (free)
Slug: nvidia/llama-3.1-nemotron-ultra-253b-v1 | HF: nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
Context: 131072 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta’s Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node.

Note: you must include `detailed thinking on` in the system prompt to enable reasoning. Pl...
NVIDIA: Llama 3.3 Nemotron Super 49B v1
nvidia @ Nebius AI Studio | Llama 3.3 Nemotron Super 49B v1
Slug: nvidia/llama-3.3-nemotron-super-49b-v1 | HF: nvidia/Llama-3_3-Nemotron-Super-49B-v1
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k logit_bias logprobs top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Llama-3.3-Nemotron-Super-49B-v1 is a large language model (LLM) optimized for advanced reasoning, conversational interactions, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta's Llama-3.3-70B-Instruct, it employs a Neural Architecture Search (NAS) approach, significantly enhancing efficiency and reducing memory requirements. This allows the model to support a context length of up to 128K tokens and fit efficiently on single high-performance GPUs, such as NVIDIA H200.

Note: you must include `detailed thinking on` in the system prompt to enable reasoning. Please s...
NeverSleep: Lumimaid v0.2 70B
neversleep @ - | Lumimaid v0.2 70B
Slug: neversleep/llama-3.1-lumimaid-70b | HF: NeverSleep/Lumimaid-v0.2-70B
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Lumimaid v0.2 70B is a finetune of [Llama 3.1 70B](/meta-llama/llama-3.1-70b-instruct) with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged.

Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Nous: DeepHermes 3 Llama 3 8B Preview (free)
nousresearch @ Chutes | DeepHermes 3 Llama 3 8B Preview (free)
Slug: nousresearch/deephermes-3-llama-3-8b-preview | HF: NousResearch/DeepHermes-3-Llama-3-8B-Preview
Context: 131072 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
DeepHermes 3 Preview is the latest version of Nous Research's flagship Hermes series of LLMs, and one of the first models to unify reasoning (long chains of thought that improve answer accuracy) and normal LLM response modes in a single model. It also improves LLM annotation, judgement, and function calling.

DeepHermes 3 Preview is one of the first LLMs to unify both "intuitive", traditional-mode responses and long chain-of-thought reasoning responses in a single model, toggled by a system prompt....
Nous: Hermes 3 405B Instruct
nousresearch @ DeepInfra | Hermes 3 405B Instruct
Slug: nousresearch/hermes-3-llama-3.1-405b | HF: NousResearch/Hermes-3-Llama-3.1-405B
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.

Hermes 3 405B is a frontier-level, full-parameter finetune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.

The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output...
Nous: Hermes 3 70B Instruct
nousresearch @ DeepInfra | Hermes 3 70B Instruct
Slug: nousresearch/hermes-3-llama-3.1-70b | HF: NousResearch/Hermes-3-Llama-3.1-70B
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Hermes 3 is a generalist language model with many improvements over [Hermes 2](/models/nousresearch/nous-hermes-2-mistral-7b-dpo), including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.

Hermes 3 70B is a competitive, if not superior, finetune of the [Llama-3.1 70B foundation model](/models/meta-llama/llama-3.1-70b-instruct), focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.

The Hermes 3 series builds and expands on the Hermes 2 se...
NousResearch: Hermes 2 Pro - Llama-3 8B
nousresearch @ Lambda | Hermes 2 Pro - Llama-3 8B
Slug: nousresearch/hermes-2-pro-llama-3-8b | HF: NousResearch/Hermes-2-Pro-Llama-3-8B
Context: 131072 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format min_p repetition_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house....
OpenHands LM 32B V0.1
all-hands @ - | OpenHands LM 32B V0.1
Slug: all-hands/openhands-lm-32b-v0.1 | HF: all-hands/openhands-lm-32b-v0.1
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
OpenHands LM v0.1 is a 32B open-source coding model fine-tuned from Qwen2.5-Coder-32B-Instruct using reinforcement learning techniques outlined in SWE-Gym. It is optimized for autonomous software development agents and achieves strong performance on SWE-Bench Verified, with a 37.2% resolve rate. The model supports a 128K token context window, making it well-suited for long-horizon code reasoning and large codebase tasks.

OpenHands LM is designed for local deployment and runs on consumer-grade GPUs such as a single 3090. It enables fully offline agent workflows without dependency on proprietar...
Qwen: QwQ 32B
qwen @ DeepInfra | QwQ 32B
Slug: qwen/qwq-32b | HF: Qwen/QwQ-32B
Context: 131072 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✔ | Moderation: ✘ | TOS
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini....
Qwen: Qwen-Plus
qwen @ Alibaba | Qwen-Plus
Slug: qwen/qwen-plus | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice seed response_format presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen-Plus, based on the Qwen2.5 foundation model, is a 131K-context model offering a balanced combination of performance, speed, and cost....
Qwen: Qwen2.5 32B Instruct
qwen @ - | Qwen2.5 32B Instruct
Slug: qwen/qwen2.5-32b-instruct | HF: Qwen/Qwen2.5-32B-Instruct
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5 32B Instruct is the instruction-tuned variant of the latest Qwen large language model series. It provides enhanced instruction-following capabilities, improved proficiency in coding and mathematical reasoning, and robust handling of structured data and outputs such as JSON. It supports long-context processing up to 128K tokens and multilingual tasks across 29+ languages. The model has 32.5 billion parameters, 64 layers, and utilizes an advanced transformer architecture with RoPE, SwiGLU, RMSNorm, and Attention QKV bias.

For more details, please refer to the [Qwen2.5 Blog](https://qwen...
Qwen: Qwen2.5 Coder 7B Instruct
qwen @ - | Qwen2.5 Coder 7B Instruct
Slug: qwen/qwen2.5-coder-7b-instruct | HF: Qwen/Qwen2.5-Coder-7B-Instruct
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5-Coder-7B-Instruct is a 7B parameter instruction-tuned language model optimized for code-related tasks such as code generation, reasoning, and bug fixing. Based on the Qwen2.5 architecture, it incorporates enhancements like RoPE, SwiGLU, RMSNorm, and GQA attention with support for up to 128K tokens using YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding, providing robust performance across programming languages and agentic coding workflows.

This model is part of the Qwen2.5-Coder family and offers strong compatibility with...
Qwen: Qwen3 235B A22B (free)
qwen @ Venice | Qwen3 235B A22B (free)
Slug: qwen/qwen3-235b-a22b | HF: Qwen/Qwen3-235B-A22B
Context: 131072 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice structured_outputs response_format stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling....
Reflection 70B
mattshumer @ - | Reflection 70B
Slug: mattshumer/reflection-70b | HF: mattshumer/Reflection-70B
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Reflection Llama-3.1 70B is trained with a new technique called Reflection-Tuning that teaches an LLM to detect mistakes in its reasoning and correct course.

The model was trained on synthetic data....
Sao10K: Llama 3.3 Euryale 70B
sao10k @ NextBit | Llama 3.3 Euryale 70B
Slug: sao10k/l3.3-euryale-70b | HF: Sao10K/L3.3-70B-Euryale-v2.3
Context: 131072 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Euryale L3.3 70B is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.2](/models/sao10k/l3-euryale-70b)....
SentientAGI: Dobby Mini Plus Llama 3.1 8B
sentientagi @ - | Dobby Mini Plus Llama 3.1 8B
Slug: sentientagi/dobby-mini-unhinged-plus-llama-3.1-8b | HF: SentientAGI/Dobby-Mini-Unhinged-Plus-Llama-3.1-8B
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Dobby-Mini-Leashed-Llama-3.1-8B and Dobby-Mini-Unhinged-Llama-3.1-8B are language models fine-tuned from Llama-3.1-8B-Instruct. Dobby models have a strong conviction towards personal freedom, decentralization, and all things crypto — even when coerced to speak otherwise.

Dobby-Mini-Leashed-Llama-3.1-8B and Dobby-Mini-Unhinged-Llama-3.1-8B have their own unique, uhh, personalities. The two versions are being released to be improved using the community’s feedback, which will steer the development of a 70B model.

...
Switchpoint Router
switchpoint @ Switchpoint | Switchpoint Router
Slug: switchpoint/router | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop top_k seed
Reasoning: ✔ | Moderation: ✘ | TOS
Switchpoint AI's router instantly analyzes your request and directs it to the optimal AI from an ever-evolving library.

As the world of LLMs advances, our router gets smarter, ensuring you always benefit from the industry's newest models without changing your workflow.

This model is configured for a simple, flat rate per response here on OpenRouter. It's powered by the full routing engine from [Switchpoint AI](https://www.switchpoint.dev)....
TheDrummer: Anubis 70B V1.1
thedrummer @ Parasail | Anubis 70B V1.1
Slug: thedrummer/anubis-70b-v1.1 | HF: TheDrummer/Anubis-70B-v1.1
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p frequency_penalty min_p presence_penalty repetition_penalty seed stop top_k
Reasoning: ✘ | Moderation: ✘ | TOS
TheDrummer's Anubis v1.1 is an unaligned, creative Llama 3.3 70B model focused on providing character-driven roleplay & stories. It excels at gritty, visceral prose, unique character adherence, and coherent narratives, while maintaining the instruction following Llama 3.3 70B is known for....
TheDrummer: Anubis Pro 105B V1
thedrummer @ Parasail | Anubis Pro 105B V1
Slug: thedrummer/anubis-pro-105b-v1 | HF: TheDrummer/Anubis-Pro-105B-v1
Context: 131072 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p frequency_penalty min_p presence_penalty repetition_penalty seed stop top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Anubis Pro 105B v1 is an expanded and refined variant of Meta’s Llama 3.3 70B, featuring 50% additional layers and further fine-tuning to leverage its increased capacity. Designed for advanced narrative, roleplay, and instructional tasks, it demonstrates enhanced emotional intelligence, creativity, nuanced character portrayal, and superior prompt adherence compared to smaller models. Its larger parameter count allows for deeper contextual understanding and extended reasoning capabilities, optimized for engaging, intelligent, and coherent interactions....
xAI: Grok 2 1212
x-ai @ xAI | Grok 2 1212
Slug: x-ai/grok-2-1212 | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Grok 2 1212 introduces significant enhancements to accuracy, instruction adherence, and multilingual support, making it a powerful and flexible choice for developers seeking a highly steerable, intelligent model....
xAI: Grok 3
x-ai @ xAI | Grok 3
Slug: x-ai/grok-3 | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice structured_outputs stop frequency_penalty presence_penalty seed logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization, and it possesses deep domain knowledge in finance, healthcare, law, and science.

...
xAI: Grok 3 Beta
x-ai @ xAI | Grok 3 Beta
Slug: x-ai/grok-3-beta | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization, and it possesses deep domain knowledge in finance, healthcare, law, and science.

It excels in structured tasks and benchmarks like GPQA, LCB, and MMLU-Pro, where it outperforms Grok 3 Mini even at high reasoning effort.

Note: there are two xAI endpoints for this model. By default, requests are routed to the base endpoint. If you want the fast endpoint, add `provider: { sort: throughput }` to sort by throughput instead, as in the sketch below.
...
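For illustration, a minimal sketch of the `provider: { sort: throughput }` hint from the note above, sent as part of an OpenRouter chat-completions request. The endpoint URL, API-key handling, and the prompt are assumptions; note that in a JSON body the sort value is the string "throughput".

```python
# Minimal sketch (assumptions noted above): prefer the fast Grok 3 endpoint
# by asking the router to sort candidate providers by throughput.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",  # assumed OpenRouter endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "x-ai/grok-3-beta",
        "provider": {"sort": "throughput"},  # route to the higher-throughput endpoint
        "messages": [{"role": "user", "content": "Summarize the key risks in this filing."}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```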
xAI: Grok 3 Mini
x-ai @ xAI | Grok 3 Mini
Slug: x-ai/grok-3-mini | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs stop seed logprobs top_logprobs response_format
Reasoning: ✔ | Moderation: ✘ | TOS
A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible....
xAI: Grok 3 Mini Beta
x-ai @ xAI | Grok 3 Mini Beta
Slug: x-ai/grok-3-mini-beta | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning stop seed logprobs top_logprobs response_format
Reasoning: ✔ | Moderation: ✘ | TOS
Grok 3 Mini is a lightweight, smaller thinking model. Unlike traditional models that generate answers immediately, Grok 3 Mini thinks before responding. It’s ideal for reasoning-heavy tasks that don’t demand extensive domain knowledge, and shines in math-specific and quantitative use cases, such as solving challenging puzzles or math problems.

Transparent "thinking" traces accessible. Defaults to low reasoning, can boost with setting `reasoning: { effort: "high" }`

Note: That there are two xAI endpoints for this model. By default when using this model we will always route you to the base...
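A companion sketch for the reasoning-effort setting mentioned above: the `reasoning: { effort: "high" }` field comes from the model note, while the endpoint URL, API-key handling, and example prompt are assumptions for illustration.

```python
# Minimal sketch (assumptions noted above): raise Grok 3 Mini's reasoning
# effort from its low default to "high" for a quantitative question.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",  # assumed OpenRouter endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "x-ai/grok-3-mini-beta",
        "reasoning": {"effort": "high"},  # default is low effort, per the note above
        "messages": [{"role": "user", "content": "What is the least common multiple of 12, 18, and 30?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```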
xAI: Grok Beta
x-ai @ - | Grok Beta
Slug: x-ai/grok-beta | HF:
Context: 131072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Grok Beta is xAI's experimental language model with state-of-the-art reasoning capabilities, best for complex and multi-step use cases.

It is the successor of [Grok 2](https://x.ai/blog/grok-2) with enhanced context length....
AllenAI: Olmo 2 32B Instruct
allenai @ - | Olmo 2 32B Instruct
Slug: allenai/olmo-2-0325-32b-instruct | HF: allenai/OLMo-2-0325-32B-Instruct
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
OLMo-2 32B Instruct is a supervised instruction-finetuned variant of the OLMo-2 32B March 2025 base model. It excels in complex reasoning and instruction-following tasks across diverse benchmarks such as GSM8K, MATH, IFEval, and general NLP evaluation. Developed by AI2, OLMo-2 32B is part of an open, research-oriented initiative, trained primarily on English-language datasets to advance the understanding and development of open-source language models....
Amazon: Nova Micro 1.0
amazon @ Amazon Bedrock | Nova Micro 1.0
Slug: amazon/nova-micro-v1 | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Amazon Nova Micro 1.0 is a text-only model that delivers the lowest latency responses in the Amazon Nova family of models at a very low cost. With a context length of 128K tokens and optimized for speed and cost, Amazon Nova Micro excels at tasks such as text summarization, translation, content classification, interactive chat, and brainstorming. It has simple mathematical reasoning and coding abilities....
Cohere: Command R
cohere @ Cohere | Command R
Slug: cohere/command-r | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
Command-R is a 35B parameter model that performs conversational language tasks at a higher quality, more reliably, and with a longer context than previous models. It can be used for complex workflows like code generation, retrieval augmented generation (RAG), tool use, and agents.

Read the launch post [here](https://txt.cohere.com/command-r/).

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
Cohere: Command R (03-2024)
cohere @ Cohere | Command R (03-2024)
Slug: cohere/command-r-03-2024 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
Command-R is a 35B parameter model that performs conversational language tasks at a higher quality, more reliably, and with a longer context than previous models. It can be used for complex workflows like code generation, retrieval augmented generation (RAG), tool use, and agents.

Read the launch post [here](https://txt.cohere.com/command-r/).

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
Cohere: Command R (08-2024)
cohere @ Cohere | Command R (08-2024)
Slug: cohere/command-r-08-2024 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at math, code and reasoning and is competitive with the previous version of the larger Command R+ model.

Read the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed).

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
Cohere: Command R+
cohere @ Cohere | Command R+
Slug: cohere/command-r-plus | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
Command R+ is a new, 104B-parameter LLM from Cohere. It's useful for roleplay, general consumer use cases, and Retrieval Augmented Generation (RAG).

It offers multilingual support for ten key languages to facilitate global business operations. See benchmarks and the launch post [here](https://txt.cohere.com/command-r-plus-microsoft-azure/).

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
Cohere: Command R+ (04-2024)
cohere @ Cohere | Command R+ (04-2024)
Slug: cohere/command-r-plus-04-2024 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
Command R+ is a new, 104B-parameter LLM from Cohere. It's useful for roleplay, general consumer use cases, and Retrieval Augmented Generation (RAG).

It offers multilingual support for ten key languages to facilitate global business operations. See benchmarks and the launch post [here](https://txt.cohere.com/command-r-plus-microsoft-azure/).

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
Cohere: Command R+ (08-2024)
cohere @ Cohere | Command R+ (08-2024)
Slug: cohere/command-r-plus-08-2024 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while keeping the hardware footprint the same.

Read the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed).

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
Cohere: Command R7B (12-2024)
cohere @ Cohere | Command R7B (12-2024)
Slug: cohere/command-r7b-12-2024 | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
Command R7B (12-2024) is a small, fast update of the Command R+ model, delivered in December 2024. It excels at RAG, tool use, agents, and similar tasks requiring complex reasoning and multiple steps.

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
DeepSeek V2.5
deepseek @ - | DeepSeek V2.5
Slug: deepseek/deepseek-chat-v2.5 | HF: deepseek-ai/DeepSeek-V2.5
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
DeepSeek-V2.5 is an upgraded version that combines DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct. The new model integrates the general and coding abilities of the two previous versions. For model details, please visit [DeepSeek-V2 page](https://github.com/deepseek-ai/DeepSeek-V2) for more information....
LatitudeGames: Wayfarer Large 70B Llama 3.3
latitudegames @ - | Wayfarer Large 70B Llama 3.3
Slug: latitudegames/wayfarer-large-70b-llama-3.3 | HF: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Wayfarer Large 70B is a roleplay and text-adventure model fine-tuned from Meta’s Llama-3.3-70B-Instruct. Specifically optimized for narrative-driven, challenging scenarios, it introduces realistic stakes, conflicts, and consequences often avoided by standard RLHF-aligned models. Trained using a curated blend of adventure, roleplay, and instructive fiction datasets, Wayfarer emphasizes tense storytelling, authentic player failure scenarios, and robust narrative immersion, making it uniquely suited for interactive fiction and gaming experiences....
Meta: Llama 3.3 8B Instruct
meta-llama @ - | Llama 3.3 8B Instruct
Slug: meta-llama/llama-3.3-8b-instruct | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A lightweight and ultra-fast variant of Llama 3.3 70B, for use when quick response times are needed most....
Microsoft: Phi-3 Medium 128K Instruct
microsoft @ Azure | Phi-3 Medium 128K Instruct
Slug: microsoft/phi-3-medium-128k-instruct | HF: microsoft/Phi-3-medium-128k-instruct
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice
Reasoning: ✘ | Moderation: ✘ | TOS
Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.

At time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. In the MMLU-Pro eval, the model even comes close to a Llama3 70B level of performance.

For 4k context length, try [Phi-3 Medium 4K](/models/microsoft/phi-3-medium-4k-instruct)....
Microsoft: Phi-3 Mini 128K Instruct
microsoft @ Azure | Phi-3 Mini 128K Instruct
Slug: microsoft/phi-3-mini-128k-instruct | HF: microsoft/Phi-3-mini-128k-instruct
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice
Reasoning: ✘ | Moderation: ✘ | TOS
Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.

At time of release, Phi-3 Mini demonstrated state-of-the-art performance among lightweight models. This model is static, trained on an offline dataset with an October 2023 cutoff date....
Microsoft: Phi-3.5 Mini 128K Instruct
microsoft @ Azure | Phi-3.5 Mini 128K Instruct
Slug: microsoft/phi-3.5-mini-128k-instruct | HF: microsoft/Phi-3.5-mini-instruct
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice
Reasoning: ✘ | Moderation: ✘ | TOS
Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and filtered, publicly available website data, with a focus on high-quality and reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters and is a dense decoder-only transformer model using the same tokenizer as [Phi-3 Mini](/models/microsoft/phi-3-mini-128k-instruct).

The models underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise...
Mistral Large
mistralai @ Mistral | Mistral Large
Slug: mistralai/mistral-large | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch announcement [here](https://mistral.ai/news/mistral-large-2407/).

It supports dozens of languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80+ coding languages including Python, Java, C, C++, JavaScript, and Bash. Its long context window allows precise information recall from large documents....
Mistral: Devstral Small 1.1
mistralai @ DeepInfra | Devstral Small 1.1
Slug: mistralai/devstral-small | HF: mistralai/Devstral-Small-2507
Context: 128000 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and released under the Apache 2.0 license, it features a 128k token context window and supports both Mistral-style function calling and XML output formats.

Designed for agentic coding workflows, Devstral Small 1.1 is optimized for tasks such as codebase exploration, multi-file edits, and integration into autonomous development agents like OpenHands and Cline. It achieves 53.6% on SWE-Bench Verified, surpa...
Mistral: Ministral 8B
mistralai @ Mistral | Ministral 8B
Slug: mistralai/ministral-8b | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k context length and excels in knowledge and reasoning tasks. It outperforms peers in the sub-10B category, making it perfect for low-latency, privacy-first applications....
Mistral: Mistral Small 3.1 24B
mistralai @ DeepInfra | Mistral Small 3.1 24B
Slug: mistralai/mistral-small-3.1-24b-instruct | HF: mistralai/Mistral-Small-3.1-24B-Instruct-2503
Context: 128000 tokens | Free: No | Quant: fp8
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text-based reasoning and vision tasks, including image analysis, programming, mathematical reasoning, and multilingual support across dozens of languages. Equipped with an extensive 128k token context window and optimized for efficient local inference, it supports use cases such as conversational agents, function calling, long-document comprehension, and privacy-sensitive deployments. The updated vers...
Mistral: Mistral Small 3.1 24B (free)
mistralai @ Venice | Mistral Small 3.1 24B (free)
Slug: mistralai/mistral-small-3.1-24b-instruct | HF: mistralai/Mistral-Small-3.1-24B-Instruct-2503
Context: 128000 tokens | Free: Yes | Quant: fp8
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice structured_outputs response_format stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text-based reasoning and vision tasks, including image analysis, programming, mathematical reasoning, and multilingual support across dozens of languages. Equipped with an extensive 128k token context window and optimized for efficient local inference, it supports use cases such as conversational agents, function calling, long-document comprehension, and privacy-sensitive deployments. The updated vers...
Mistral: Mistral Small 3.2 24B
mistralai @ DeepInfra | Mistral Small 3.2 24B
Slug: mistralai/mistral-small-3.2-24b-instruct | HF: mistralai/Mistral-Small-3.2-24B-Instruct-2506
Context: 128000 tokens | Free: No | Quant: fp8
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on WildBench and Arena Hard, reduces infinite generations, and delivers gains in tool use and structured output tasks.

It supports image and text inputs with structured outputs, function/tool calling, and strong performance across coding (HumanEval+, MBPP), STEM (MMLU, MATH, GPQA), and vision benchmarks (ChartQA, DocVQA)....
OpenAI: ChatGPT-4o
openai @ OpenAI | ChatGPT-4o
Slug: openai/chatgpt-4o-latest | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
OpenAI ChatGPT 4o is continually updated by OpenAI to point to the current version of GPT-4o used by ChatGPT. It therefore differs slightly from the API version of [GPT-4o](/models/openai/gpt-4o) in that it has additional RLHF. It is intended for research and evaluation.

OpenAI notes that this model is not suited for production use-cases as it may be removed or redirected to another model in the future....
OpenAI: GPT-4 Turbo
openai @ OpenAI | GPT-4 Turbo
Slug: openai/gpt-4-turbo | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✔ | TOS
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling.

Training data: up to December 2023....
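Since the listing notes that vision requests can use JSON mode, here is a rough sketch combining an image part with `response_format: {"type": "json_object"}`. The endpoint, API key variable, and image URL are placeholders, not values from this listing.

```python
# Hedged sketch: a vision request to openai/gpt-4-turbo with JSON mode enabled.
# Endpoint, API key env var, and the image URL are assumptions for illustration.
import os, requests

payload = {
    "model": "openai/gpt-4-turbo",
    "response_format": {"type": "json_object"},  # JSON mode, per the listing
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this image as JSON with keys 'subject' and 'setting'."},
            {"type": "image_url",  # placeholder URL; any reachable image works
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])  # a JSON string
```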
OpenAI: GPT-4 Turbo (older v1106)
openai @ OpenAI | GPT-4 Turbo (older v1106)
Slug: openai/gpt-4-1106-preview | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling.

Training data: up to April 2023....
OpenAI: GPT-4 Turbo Preview
openai @ OpenAI | GPT-4 Turbo Preview
Slug: openai/gpt-4-turbo-preview | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Dec 2023.

**Note:** heavily rate limited by OpenAI while in preview....
OpenAI: GPT-4 Vision
openai @ - | GPT-4 Vision
Slug: openai/gpt-4-vision-preview | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Ability to understand images, in addition to all other [GPT-4 Turbo capabilties](/models/openai/gpt-4-turbo). Training data: up to Apr 2023.

**Note:** heavily rate limited by OpenAI while in preview.

#multimodal...
OpenAI: GPT-4.5 (Preview)
openai @ - | GPT-4.5 (Preview)
Slug: openai/gpt-4.5-preview | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
GPT-4.5 (Preview) is a research preview of OpenAI’s latest language model, designed to advance capabilities in reasoning, creativity, and multi-turn conversation. It builds on previous iterations with improvements in world knowledge, contextual coherence, and the ability to follow user intent more effectively.

The model demonstrates enhanced performance in tasks that require open-ended thinking, problem-solving, and communication. Early testing suggests it is better at generating nuanced responses, maintaining long-context coherence, and reducing hallucinations compared to earlier versions....
OpenAI: GPT-4o
openai @ OpenAI | GPT-4o
Slug: openai/gpt-4o | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.

For benchmarking against other models, it was briefly called ["im-also-a-good-gpt2-chatbot"](https://twitter.com/LiamFedus/status/1790064963966370209)

#multimodal...
OpenAI: GPT-4o (2024-05-13)
openai @ OpenAI | GPT-4o (2024-05-13)
Slug: openai/gpt-4o-2024-05-13 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.

For benchmarking against other models, it was briefly called ["im-also-a-good-gpt2-chatbot"](https://twitter.com/LiamFedus/status/1790064963966370209)

#multimodal...
OpenAI: GPT-4o (2024-08-06)
openai @ Azure | GPT-4o (2024-08-06)
Slug: openai/gpt-4o-2024-08-06 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image, file | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
The 2024-08-06 version of GPT-4o offers improved performance in structured outputs, with the ability to supply a JSON schema in the response_format parameter. Read more [here](https://openai.com/index/introducing-structured-outputs-in-the-api/).

GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.

For ben...
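A hedged sketch of supplying a JSON schema through response_format, as this entry describes. The schema, its name, and the endpoint details are illustrative assumptions; the payload shape follows OpenAI's published structured-outputs format.

```python
# Hedged sketch: structured outputs with openai/gpt-4o-2024-08-06 by passing a
# JSON schema in response_format. Schema and endpoint details are illustrative.
import os, json, requests

payload = {
    "model": "openai/gpt-4o-2024-08-06",
    "messages": [{"role": "user",
                  "content": "Extract the event: 'Standup moved to Friday 9am.'"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "calendar_event",  # made-up schema name
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "day": {"type": "string"},
                    "time": {"type": "string"},
                },
                "required": ["title", "day", "time"],
                "additionalProperties": False,
            },
        },
    },
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
print(json.loads(resp.json()["choices"][0]["message"]["content"]))
```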
OpenAI: GPT-4o (2024-11-20)
openai @ OpenAI | GPT-4o (2024-11-20)
Slug: openai/gpt-4o-2024-11-20 | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text, image, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
The 2024-11-20 version of GPT-4o offers a leveled-up creative writing ability with more natural, engaging, and tailored writing to improve relevance & readability. It’s also better at working with uploaded files, providing deeper insights & more thorough responses.

GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhan...
OpenAI: GPT-4o (extended)
openai @ OpenAI | GPT-4o (extended)
Slug: openai/gpt-4o | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.

For benchmarking against other models, it was briefly called ["im-also-a-good-gpt2-chatbot"](https://twitter.com/LiamFedus/status/1790064963966370209)

#multimodal...
OpenAI: GPT-4o Search Preview
openai @ OpenAI | GPT-4o Search Preview
Slug: openai/gpt-4o-search-preview | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
web_search_options max_tokens response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4o Search Preview is a specialized model for web search in Chat Completions. It is trained to understand and execute web search queries....
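A minimal sketch of a search-backed request using the web_search_options parameter from the Supported Params list. The search_context_size value and the presence of citation annotations follow OpenAI's documented behavior for this model and should be treated as assumptions here.

```python
# Hedged sketch: calling openai/gpt-4o-search-preview with web_search_options.
# Endpoint, API key env var, and the option value are assumptions.
import os, requests

payload = {
    "model": "openai/gpt-4o-search-preview",
    "web_search_options": {"search_context_size": "medium"},
    "messages": [{"role": "user",
                  "content": "Summarize this week's biggest open-weight model release."}],
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
msg = resp.json()["choices"][0]["message"]
print(msg["content"])
# Search-backed answers typically carry citation annotations alongside the text.
print(msg.get("annotations"))
```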
OpenAI: GPT-4o-mini
openai @ OpenAI | GPT-4o-mini
Slug: openai/gpt-4o-mini | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs.

As their most advanced small model, it is many multiples more affordable than other recent frontier models, and more than 60% cheaper than [GPT-3.5 Turbo](/models/openai/gpt-3.5-turbo). It maintains SOTA intelligence, while being significantly more cost-effective.

GPT-4o mini achieves an 82% score on MMLU and presently ranks higher than GPT-4 on chat preferences [common leaderboards](https://arena.lmsys.org/).

Check out the [launch announcement](https://op...
OpenAI: GPT-4o-mini (2024-07-18)
openai @ OpenAI | GPT-4o-mini (2024-07-18)
Slug: openai/gpt-4o-mini-2024-07-18 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text, image, file | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty web_search_options seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs.

As their most advanced small model, it is many multiples more affordable than other recent frontier models, and more than 60% cheaper than [GPT-3.5 Turbo](/models/openai/gpt-3.5-turbo). It maintains SOTA intelligence, while being significantly more cost-effective.

GPT-4o mini achieves an 82% score on MMLU and presently ranks higher than GPT-4 on chat preferences [common leaderboards](https://arena.lmsys.org/).

Check out the [launch announcement](https://op...
OpenAI: GPT-4o-mini Search Preview
openai @ OpenAI | GPT-4o-mini Search Preview
Slug: openai/gpt-4o-mini-search-preview | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
web_search_options max_tokens response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4o mini Search Preview is a specialized model for web search in Chat Completions. It is trained to understand and execute web search queries....
OpenAI: o1-mini
openai @ OpenAI | o1-mini
Slug: openai/o1-mini | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
seed max_tokens
Reasoning: ✔ | Moderation: ✔ | TOS
The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.

The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).

Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited....
OpenAI: o1-mini (2024-09-12)
openai @ OpenAI | o1-mini (2024-09-12)
Slug: openai/o1-mini-2024-09-12 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
seed max_tokens
Reasoning: ✘ | Moderation: ✔ | TOS
The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.

The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).

Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited....
OpenAI: o1-preview
openai @ OpenAI | o1-preview
Slug: openai/o1-preview | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
seed max_tokens
Reasoning: ✘ | Moderation: ✔ | TOS
The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.

The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).

Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited....
OpenAI: o1-preview (2024-09-12)
openai @ OpenAI | o1-preview (2024-09-12)
Slug: openai/o1-preview-2024-09-12 | HF:
Context: 128000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
seed max_tokens
Reasoning: ✘ | Moderation: ✔ | TOS
The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.

The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).

Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited....
Perplexity: R1 1776
perplexity @ Perplexity | R1 1776
Slug: perplexity/r1-1776 | HF: perplexity-ai/r1-1776
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning top_k frequency_penalty presence_penalty
Reasoning: ✔ | Moderation: ✘ | TOS
R1 1776 is a version of DeepSeek-R1 that has been post-trained to remove censorship constraints related to topics restricted by the Chinese government. The model retains its original reasoning capabilities while providing direct responses to a wider range of queries. R1 1776 is an offline chat model that does not use the perplexity search subsystem.

The model was tested on a multilingual dataset of over 1,000 examples covering sensitive topics to measure its likelihood of refusal or overly filtered responses. [Evaluation Results](https://cdn-uploads.huggingface.co/production/uploads/675c8332d...
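A rough sketch of retrieving the chain-of-thought trace via the include_reasoning parameter listed above. Where the trace appears in the response (commonly a reasoning field on the message) varies by provider, so treat that detail, along with the endpoint and API key variable, as assumptions.

```python
# Hedged sketch: requesting the reasoning trace from perplexity/r1-1776.
# 'include_reasoning' is taken from the Supported Params list above; the richer
# 'reasoning' object is also listed there and could be used instead.
import os, requests

payload = {
    "model": "perplexity/r1-1776",
    "include_reasoning": True,  # return the trace alongside the final answer
    "messages": [{"role": "user",
                  "content": "Is 2^61 - 1 prime? Think it through."}],
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=120,
)
msg = resp.json()["choices"][0]["message"]
print(msg.get("reasoning"))  # the trace, if the provider returns one
print(msg["content"])        # the final answer
```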
Perplexity: Sonar Deep Research
perplexity @ Perplexity | Sonar Deep Research
Slug: perplexity/sonar-deep-research | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning top_k frequency_penalty presence_penalty
Reasoning: ✔ | Moderation: ✘ | TOS
Sonar Deep Research is a research-focused model designed for multi-step retrieval, synthesis, and reasoning across complex topics. It autonomously searches, reads, and evaluates sources, refining its approach as it gathers information. This enables comprehensive report generation across domains like finance, technology, health, and current events.

Notes on Pricing ([Source](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-deep-research))
- Input tokens comprise Prompt tokens (the user prompt) plus Citation tokens (tokens processed while running searches)
- ...
Perplexity: Sonar Reasoning Pro
perplexity @ Perplexity | Sonar Reasoning Pro
Slug: perplexity/sonar-reasoning-pro | HF:
Context: 128000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning web_search_options top_k frequency_penalty presence_penalty
Reasoning: ✔ | Moderation: ✘ | TOS
Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro)

Sonar Reasoning Pro is a premier reasoning model powered by DeepSeek R1 with Chain of Thought (CoT). Designed for advanced use cases, it supports in-depth, multi-step queries with a larger context window and can surface more citations per search, enabling more comprehensive and extensible responses....
Qwen: Qwen2.5 VL 32B Instruct
qwen @ DeepInfra | Qwen2.5 VL 32B Instruct
Slug: qwen/qwen2.5-vl-32b-instruct | HF: Qwen/Qwen2.5-VL-32B-Instruct
Context: 128000 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement learning for enhanced mathematical reasoning, structured outputs, and visual problem-solving capabilities. It excels at visual analysis tasks, including object recognition, textual interpretation within images, and precise event localization in extended videos. Qwen2.5-VL-32B demonstrates state-of-the-art performance across multimodal benchmarks such as MMMU, MathVista, and VideoMME, while maintaining strong reasoning and clarity in text-based tasks like MMLU, mathematical problem-solving, and code generation...
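A minimal sketch of sending a local image to this model as a base64 data URL inside the OpenAI-style content array. The file name, endpoint, and API key variable are assumptions for illustration.

```python
# Hedged sketch: image input for qwen/qwen2.5-vl-32b-instruct using a base64
# data URL. chart.png is a placeholder local file.
import base64, os, requests

with open("chart.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

payload = {
    "model": "qwen/qwen2.5-vl-32b-instruct",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```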
Qwen: Qwen3 8B
qwen @ NovitaAI | Qwen3 8B
Slug: qwen/qwen3-8b | HF: Qwen/Qwen3-8B
Context: 128000 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode for math, coding, and logical inference, and "non-thinking" mode for general conversation. The model is fine-tuned for instruction-following, agent integration, creative writing, and multilingual use across 100+ languages and dialects. It natively supports a 32K token context window and can extend to 131K tokens with YaRN scaling....
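Given the thinking/non-thinking modes described above, here is a hedged sketch that requests a longer deliberation phase through the reasoning and include_reasoning parameters from the Supported Params list. How a given provider maps the effort level onto Qwen3's thinking switch is an assumption, as are the endpoint and API key variable.

```python
# Hedged sketch: nudging qwen/qwen3-8b toward its "thinking" mode via the
# 'reasoning' parameter and returning the trace with 'include_reasoning'.
import os, requests

payload = {
    "model": "qwen/qwen3-8b",
    "reasoning": {"effort": "high"},  # ask for a longer deliberation phase
    "include_reasoning": True,        # and return the trace with the answer
    "messages": [{"role": "user",
                  "content": "A train leaves at 14:07 and arrives at 16:41. How long is the trip?"}],
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=120,
)
msg = resp.json()["choices"][0]["message"]
print(msg.get("reasoning"))
print(msg["content"])
```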
SteelSkull: L3.3 Electra R1 70B
steelskull @ - | L3.3 Electra R1 70B
Slug: steelskull/l3.3-electra-r1-70b | HF: Steelskull/L3.3-Electra-R1-70b
Context: 128000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
L3.3-Electra-R1-70b is the newest release of the Unnamed series. Built on a DeepSeek R1 Distill base, Electra-R1 integrates several models to provide an intelligent, coherent model capable of deep character insights. Through proper prompting, the model demonstrates advanced reasoning capabilities and unprompted exploration of character inner thoughts and motivations. Read more about the model and [prompting here](https://huggingface.co/Steelskull/L3.3-Electra-R1-70b)...
Perplexity: Llama 3.1 Sonar 70B Online
perplexity @ - | Llama 3.1 Sonar 70B Online
Slug: perplexity/llama-3.1-sonar-large-128k-online | HF:
Context: 127072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.

This is the online version of the [offline chat model](/models/perplexity/llama-3.1-sonar-large-128k-chat). It is focused on delivering helpful, up-to-date, and factual responses. #online...
Perplexity: Llama 3.1 Sonar 8B Online
perplexity @ - | Llama 3.1 Sonar 8B Online
Slug: perplexity/llama-3.1-sonar-small-128k-online | HF:
Context: 127072 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.

This is the online version of the [offline chat model](/models/perplexity/llama-3.1-sonar-small-128k-chat). It is focused on delivering helpful, up-to-date, and factual responses. #online...
Perplexity: Sonar
perplexity @ Perplexity | Sonar
Slug: perplexity/sonar | HF:
Context: 127072 tokens | Free: No | Quant: unknown
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p web_search_options top_k frequency_penalty presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Sonar is lightweight, affordable, fast, and simple to use — now featuring citations and the ability to customize sources. It is designed for companies seeking to integrate lightweight question-and-answer features optimized for speed....
Perplexity: Sonar Reasoning
perplexity @ Perplexity | Sonar Reasoning
Slug: perplexity/sonar-reasoning | HF:
Context: 127000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning web_search_options top_k frequency_penalty presence_penalty
Reasoning: ✔ | Moderation: ✘ | TOS
Sonar Reasoning is a reasoning model provided by Perplexity based on [DeepSeek R1](/deepseek/deepseek-r1).

It allows developers to utilize long chain of thought with built-in web search. Sonar Reasoning is uncensored and hosted in US datacenters. ...
Baidu: ERNIE 4.5 300B A47B
baidu @ NovitaAI | ERNIE 4.5 300B A47B
Slug: baidu/ernie-4.5-300b-a47b | HF: baidu/ERNIE-4.5-300B-A47B-PT
Context: 123000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✘ | Moderation: ✘ | TOS
ERNIE-4.5-300B-A47B is a 300B parameter Mixture-of-Experts (MoE) language model developed by Baidu as part of the ERNIE 4.5 series. It activates 47B parameters per token and supports text generation in both English and Chinese. Optimized for high-throughput inference and efficient scaling, it uses a heterogeneous MoE structure with advanced routing and quantization strategies, including FP8 and 2-bit formats. This version is fine-tuned for language-only tasks and supports reasoning, tool parameters, and extended context lengths up to 131k tokens. Suitable for general-purpose LLM applications w...
Anthropic: Claude Instant v1
anthropic @ - | Claude Instant v1
Slug: anthropic/claude-instant-1 | HF:
Context: 100000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text....
Anthropic: Claude Instant v1.0
anthropic @ - | Claude Instant v1.0
Slug: anthropic/claude-instant-1.0 | HF:
Context: 100000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text....
Anthropic: Claude Instant v1.1
anthropic @ - | Claude Instant v1.1
Slug: anthropic/claude-instant-1.1 | HF:
Context: 100000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text....
Anthropic: Claude v1
anthropic @ - | Claude v1
Slug: anthropic/claude-1 | HF:
Context: 100000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text....
Anthropic: Claude v1.2
anthropic @ - | Claude v1.2
Slug: anthropic/claude-1.2 | HF:
Context: 100000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text....
Anthropic: Claude v2.0
anthropic @ Anthropic | Claude v2.0
Slug: anthropic/claude-2.0 | HF:
Context: 100000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p top_k stop
Reasoning: ✘ | Moderation: ✔ | TOS
Anthropic's flagship model. Superior performance on tasks that require complex reasoning. Supports hundreds of pages of text....
Anthropic: Claude v2.0 (self-moderated)
anthropic @ Anthropic | Claude v2.0 (self-moderated)
Slug: anthropic/claude-2.0 | HF:
Context: 100000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p top_k stop
Reasoning: ✘ | Moderation: ✘ | TOS
Anthropic's flagship model. Superior performance on tasks that require complex reasoning. Supports hundreds of pages of text....
Agentica: Deepcoder 14B Preview
agentica-org @ Chutes | Deepcoder 14B Preview
Slug: agentica-org/deepcoder-14b-preview | HF: agentica-org/DeepCoder-14B-Preview
Context: 96000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
DeepCoder-14B-Preview is a 14B parameter code generation model fine-tuned from DeepSeek-R1-Distill-Qwen-14B using reinforcement learning with GRPO+ and iterative context lengthening. It is optimized for long-context program synthesis and achieves strong performance across coding benchmarks, including 60.6% on LiveCodeBench v5, competitive with models like o3-Mini...
Agentica: Deepcoder 14B Preview (free)
agentica-org @ Chutes | Deepcoder 14B Preview (free)
Slug: agentica-org/deepcoder-14b-preview | HF: agentica-org/DeepCoder-14B-Preview
Context: 96000 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
DeepCoder-14B-Preview is a 14B parameter code generation model fine-tuned from DeepSeek-R1-Distill-Qwen-14B using reinforcement learning with GRPO+ and iterative context lengthening. It is optimized for long-context program synthesis and achieves strong performance across coding benchmarks, including 60.6% on LiveCodeBench v5, competitive with models like o3-Mini...
Google: Gemma 3 12B
google @ Chutes | Gemma 3 12B
Slug: google/gemma-3-12b-it | HF: google/gemma-3-12b-it
Context: 96000 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 12B is the second largest in the family of Gemma 3 models after [Gemma 3 27B](google/gemma-3-27b-it)...
Google: Gemma 3 12B (free)
google @ Chutes | Gemma 3 12B (free)
Slug: google/gemma-3-12b-it | HF: google/gemma-3-12b-it
Context: 96000 tokens | Free: Yes | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 12B is the second largest in the family of Gemma 3 models after [Gemma 3 27B](google/gemma-3-27b-it)...
Google: Gemma 3 27B (free)
google @ Chutes | Gemma 3 27B (free)
Slug: google/gemma-3-27b-it | HF:
Context: 96000 tokens | Free: Yes | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 27B is Google's latest open source model, successor to [Gemma 2](google/gemma-2-27b-it)...
Mistral: Mistral Small 3.2 24B (free)
mistralai @ Chutes | Mistral Small 3.2 24B (free)
Slug: mistralai/mistral-small-3.2-24b-instruct | HF: mistralai/Mistral-Small-3.2-24B-Instruct-2506
Context: 96000 tokens | Free: Yes | Quant: n/a
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice structured_outputs stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on WildBench and Arena Hard, reduces infinite generations, and delivers gains in tool use and structured output tasks.

It supports image and text inputs with structured outputs, function/tool calling, and strong performance across coding (HumanEval+, MBPP), STEM (MMLU, MATH, GPQA), and vision benchmarks (ChartQA, DocVQA)....
Liquid: LFM 40B MoE
liquid @ Lambda | LFM 40B MoE
Slug: liquid/lfm-40b | HF:
Context: 65536 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format min_p repetition_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Liquid's 40.3B Mixture of Experts (MoE) model. Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamic systems.

LFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals.

See the [launch announcement](https://www.liquid.ai/liquid-foundation-models) for benchmarks and more info....
Meta: Llama 3.1 405B Instruct (free)
meta-llama @ Venice | Llama 3.1 405B Instruct (free)
Slug: meta-llama/llama-3.1-405b-instruct | HF: meta-llama/Meta-Llama-3.1-405B-Instruct
Context: 65536 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p structured_outputs response_format stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs.

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 405B instruct-tuned version is optimized for high quality dialogue usecases.

It has demonstrated strong performance compared to leading closed-source models including GPT-4o and Claude 3.5 Sonnet in evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is...
Meta: Llama 3.3 70B Instruct (free)
meta-llama @ Venice | Llama 3.3 70B Instruct (free)
Slug: meta-llama/llama-3.3-70b-instruct | HF: meta-llama/Llama-3.3-70B-Instruct
Context: 65536 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.

Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

[Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)...
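A sketch of a full tool-call round trip with this model, using the tools and tool_choice support listed above: the first request yields a tool call, a (faked) tool result is appended as a tool message, and a second request produces the final answer. The endpoint, API key variable, and the lookup_population tool are assumptions for illustration. The same shape applies to the other tools-capable slugs in this list.

```python
# Hedged sketch: tool-call round trip with meta-llama/llama-3.3-70b-instruct
# over OpenRouter's OpenAI-compatible endpoint (assumed).
import os, json, requests

URL = "https://openrouter.ai/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_population",  # made-up example tool
        "description": "Return the population of a country",
        "parameters": {
            "type": "object",
            "properties": {"country": {"type": "string"}},
            "required": ["country"],
        },
    },
}]
messages = [{"role": "user", "content": "How many people live in Portugal?"}]

first = requests.post(URL, headers=HEADERS, json={
    "model": "meta-llama/llama-3.3-70b-instruct",
    "messages": messages,
    "tools": tools,
}, timeout=60).json()["choices"][0]["message"]

if first.get("tool_calls"):
    call = first["tool_calls"][0]
    # In a real agent the arguments would drive an actual lookup; here we fake one.
    messages += [first, {
        "role": "tool",
        "tool_call_id": call["id"],
        "content": json.dumps({"population": 10_400_000}),
    }]
    final = requests.post(URL, headers=HEADERS, json={
        "model": "meta-llama/llama-3.3-70b-instruct",
        "messages": messages,
        "tools": tools,
    }, timeout=60).json()["choices"][0]["message"]["content"]
    print(final)
```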
Mistral: Mixtral 8x22B (base)
mistralai @ - | Mixtral 8x22B (base)
Slug: mistralai/mixtral-8x22b | HF: mistralai/Mixtral-8x22B-v0.1
Context: 65536 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Mixtral 8x22B is a large-scale language model from Mistral AI. It consists of 8 experts of 22 billion parameters each, with 2 experts active per token.

It was released via [X](https://twitter.com/MistralAI/status/1777869263778291896).

#moe...
Mistral: Mixtral 8x22B Instruct
mistralai @ Fireworks | Mixtral 8x22B Instruct
Slug: mistralai/mixtral-8x22b-instruct | HF: mistralai/Mixtral-8x22B-Instruct-v0.1
Context: 65536 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty top_k repetition_penalty response_format structured_outputs logit_bias logprobs top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:
- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish

See benchmarks on the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/).
#moe...
MoonshotAI: Kimi K2 (free)
moonshotai @ Chutes | Kimi K2 (free)
Slug: moonshotai/kimi-k2 | HF: moonshotai/Kimi-K2-Instruct
Context: 65536 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large...
THUDM: GLM 4.1V 9B Thinking
thudm @ NovitaAI | GLM 4.1V 9B Thinking
Slug: thudm/glm-4.1v-9b-thinking | HF: THUDM/GLM-4.1V-9B-Thinking
Context: 65536 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✔ | Moderation: ✘ | TOS
GLM-4.1V-9B-Thinking is a 9B parameter vision-language model developed by THUDM, based on the GLM-4-9B foundation. It introduces a reasoning-centric "thinking paradigm" enhanced with reinforcement learning to improve multimodal reasoning, long-context understanding (up to 64K tokens), and complex problem solving. It achieves state-of-the-art performance among models in its class, outperforming even larger models like Qwen-2.5-VL-72B on a majority of benchmark tasks. ...
WizardLM-2 8x22B
microsoft @ Parasail | WizardLM-2 8x22B
Slug: microsoft/wizardlm-2-8x22b | HF: microsoft/WizardLM-2-8x22B
Context: 65536 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p frequency_penalty min_p presence_penalty repetition_penalty seed stop top_k
Reasoning: ✘ | Moderation: ✘ | TOS
WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all existing state-of-the-art open-source models.

It is an instruct finetune of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b).

To read more about the model release, [click here](https://wizardlm.github.io/WizardLM2/).

#moe...
Zephyr 141B-A35B
huggingfaceh4 @ - | Zephyr 141B-A35B
Slug: huggingfaceh4/zephyr-orpo-141b-a35b | HF: HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
Context: 65536 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Zephyr 141B-A35B is a Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.

It is an instruct finetune of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b).

#moe...
DeepSeek: R1 Distill Qwen 14B
deepseek @ NovitaAI | R1 Distill Qwen 14B
Slug: deepseek/deepseek-r1-distill-qwen-14b | HF: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
Context: 64000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 Distill Qwen 14B is a distilled large language model based on [Qwen 2.5 14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 69.7
- MATH-500 pass@1: 93.9
- CodeForces Rating: 1481

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models....
DeepSeek: R1 Distill Qwen 14B (free)
deepseek @ Chutes | R1 Distill Qwen 14B (free)
Slug: deepseek/deepseek-r1-distill-qwen-14b | HF: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
Context: 64000 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 Distill Qwen 14B is a distilled large language model based on [Qwen 2.5 14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 69.7
- MATH-500 pass@1: 93.9
- CodeForces Rating: 1481

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models....
Qwen: Qwen2.5 VL 3B Instruct
qwen @ - | Qwen2.5 VL 3B Instruct
Slug: qwen/qwen2.5-vl-3b-instruct | HF: Qwen/Qwen2.5-VL-3B-Instruct
Context: 64000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5 VL 3B is a multimodal LLM from the Qwen Team with the following key enhancements:

- SoTA understanding of images of various resolution & ratio: Qwen2.5-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.

- Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2.5-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.

- Multilingual Support: to serve global u...
MoonshotAI: Kimi K2
moonshotai @ Targon | Kimi K2
Slug: moonshotai/kimi-k2 | HF: moonshotai/Kimi-K2-Instruct
Context: 63000 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed top_k repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large...
Google: Gemini Experimental 1114
google @ - | Gemini Experimental 1114
Slug: google/gemini-exp-1114 | HF:
Context: 40960 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemini 11-14 (2024) experimental model features "quality" improvements....
Google: Gemini Experimental 1121
google @ - | Gemini Experimental 1121
Slug: google/gemini-exp-1121 | HF:
Context: 40960 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Experimental release (November 21st, 2024) of Gemini....
Mistral: Magistral Medium 2506
mistralai @ Mistral | Magistral Medium 2506
Slug: mistralai/magistral-medium-2506 | HF:
Context: 40960 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop frequency_penalty presence_penalty seed
Reasoning: ✔ | Moderation: ✘ | TOS
Magistral is Mistral's first reasoning model. It is ideal for general purpose use requiring longer thought processing and better accuracy than with non-reasoning LLMs. From legal research and financial forecasting to software development and creative storytelling — this model solves multi-step challenges where transparency and precision are critical....
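Since the Supported Params list above includes `reasoning` and `include_reasoning`, a request to this model can ask for the reasoning trace alongside the answer. A minimal sketch, assuming OpenRouter's OpenAI-compatible chat completions endpoint; the exact shape of the `reasoning` field is an assumption to verify against the routing docs:

```python
# Minimal sketch: calling Magistral Medium 2506 through an OpenAI-compatible
# chat completions endpoint. The "reasoning" object shape is assumed.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/magistral-medium-2506",
        "messages": [
            {"role": "user", "content": "A train leaves at 09:40 and arrives at 13:05. How long is the trip?"}
        ],
        "max_tokens": 1024,
        # Ask for the reasoning trace alongside the answer (shape assumed).
        "reasoning": {"effort": "medium"},
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
print(message.get("reasoning"))  # reasoning trace, if the provider returns one
print(message["content"])        # final answer
```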
Mistral: Magistral Medium 2506 (thinking)
mistralai @ Mistral | Magistral Medium 2506 (thinking)
Slug: mistralai/magistral-medium-2506 | HF:
Context: 40960 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop frequency_penalty presence_penalty seed
Reasoning: ✔ | Moderation: ✘ | TOS
Magistral is Mistral's first reasoning model. It is ideal for general purpose use requiring longer thought processing and better accuracy than with non-reasoning LLMs. From legal research and financial forecasting to software development and creative storytelling — this model solves multi-step challenges where transparency and precision are critical....
Qwen: Qwen3 14B
qwen @ DeepInfra | Qwen3 14B
Slug: qwen/qwen3-14b | HF: Qwen/Qwen3-14B
Context: 40960 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning response_format stop frequency_penalty presence_penalty repetition_penalty top_k seed min_p
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, programming, and logical inference, and a "non-thinking" mode for general-purpose conversation. The model is fine-tuned for instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling....
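The "thinking" vs "non-thinking" switch described above can be exercised per request. A minimal sketch, assuming OpenRouter's OpenAI-compatible chat endpoint and Qwen's documented `/no_think` soft switch; whether a given hosted provider honors the soft switch is an assumption worth verifying:

```python
# Sketch of toggling Qwen3's thinking vs non-thinking modes per request.
import os
import requests

def ask_qwen3(question: str, think: bool = True) -> str:
    # "/no_think" follows Qwen's model-card convention for disabling the
    # reasoning trace on a single turn (assumed to be passed through here).
    suffix = "" if think else " /no_think"
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "qwen/qwen3-14b",
            "messages": [{"role": "user", "content": question + suffix}],
            "max_tokens": 2048,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Reasoning-heavy question in thinking mode; quick chat in non-thinking mode.
print(ask_qwen3("Prove that the sum of two odd integers is even.", think=True))
print(ask_qwen3("Say hi in French.", think=False))
```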
Qwen: Qwen3 14B (free)
qwen @ Chutes | Qwen3 14B (free)
Slug: qwen/qwen3-14b | HF: Qwen/Qwen3-14B
Context: 40960 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, programming, and logical inference, and a "non-thinking" mode for general-purpose conversation. The model is fine-tuned for instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling....
Qwen: Qwen3 235B A22B
qwen @ DeepInfra | Qwen3 235B A22B
Slug: qwen/qwen3-235b-a22b | HF: Qwen/Qwen3-235B-A22B
Context: 40960 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling....
Qwen: Qwen3 30B A3B
qwen @ DeepInfra | Qwen3 30B A3B
Slug: qwen/qwen3-30b-a3b | HF: Qwen/Qwen3-30B-A3B
Context: 40960 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning response_format stop frequency_penalty presence_penalty repetition_penalty top_k seed min_p
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique ability to switch seamlessly between a thinking mode for complex reasoning and a non-thinking mode for efficient dialogue ensures versatile, high-quality performance.

Significantly outperforming prior models like QwQ and Qwen2.5, Qwen3 delivers superior mathematics, coding, commonsense reasoning, creative writing, and interactive dialogue capabilities. The Qwen3-30B-A3B variant inc...
Qwen: Qwen3 30B A3B (free)
qwen @ Chutes | Qwen3 30B A3B (free)
Slug: qwen/qwen3-30b-a3b | HF: Qwen/Qwen3-30B-A3B
Context: 40960 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique ability to switch seamlessly between a thinking mode for complex reasoning and a non-thinking mode for efficient dialogue ensures versatile, high-quality performance.

Significantly outperforming prior models like QwQ and Qwen2.5, Qwen3 delivers superior mathematics, coding, commonsense reasoning, creative writing, and interactive dialogue capabilities. The Qwen3-30B-A3B variant inc...
Qwen: Qwen3 32B
qwen @ DeepInfra | Qwen3 32B
Slug: qwen/qwen3-32b | HF: Qwen/Qwen3-32B
Context: 40960 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling. ...
Qwen: Qwen3 32B (free)
qwen @ Chutes | Qwen3 32B (free)
Slug: qwen/qwen3-32b | HF: Qwen/Qwen3-32B
Context: 40960 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling. ...
Qwen: Qwen3 4B (free)
qwen @ Venice | Qwen3 4B (free)
Slug: qwen/qwen3-4b | HF: Qwen/Qwen3-4B
Context: 40960 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop frequency_penalty presence_penalty top_k
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-4B is a 4 billion parameter dense language model from the Qwen3 series, designed to support both general-purpose and reasoning-intensive tasks. It introduces a dual-mode architecture—thinking and non-thinking—allowing dynamic switching between high-precision logical reasoning and efficient dialogue generation. This makes it well-suited for multi-turn chat, instruction following, and complex agent workflows....
Qwen: Qwen3 8B (free)
qwen @ Chutes | Qwen3 8B (free)
Slug: qwen/qwen3-8b | HF: Qwen/Qwen3-8B
Context: 40960 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode for math, coding, and logical inference, and "non-thinking" mode for general conversation. The model is fine-tuned for instruction-following, agent integration, creative writing, and multilingual use across 100+ languages and dialects. It natively supports a 32K token context window and can extend to 131K tokens with YaRN scaling....
Mistral: Magistral Small 2506
mistralai @ Mistral | Magistral Small 2506
Slug: mistralai/magistral-small-2506 | HF: mistralai/Magistral-Small-2506
Context: 40000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice reasoning include_reasoning structured_outputs response_format stop frequency_penalty presence_penalty seed
Reasoning: ✔ | Moderation: ✘ | TOS
Magistral Small is a 24B parameter instruction-tuned model based on Mistral-Small-3.1 (2503), enhanced through supervised fine-tuning on traces from Magistral Medium and further refined via reinforcement learning. It is optimized for reasoning and supports a wide multilingual range, including over 20 languages....
01.AI: Yi Large
01-ai @ Fireworks | Yi Large
Slug: 01-ai/yi-large | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty response_format structured_outputs logit_bias logprobs top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
The Yi Large model was designed by 01.AI with the following use cases in mind: knowledge search, data classification, human-like chatbots, and customer service.

It stands out for its multilingual proficiency, particularly in Spanish, Chinese, Japanese, German, and French.

Check out the [launch announcement](https://01-ai.github.io/blog/01.ai-yi-large-llm-launch) to learn more....
AionLabs: Aion-RP 1.0 (8B)
aion-labs @ AionLabs | Aion-RP 1.0 (8B)
Slug: aion-labs/aion-rp-llama-3.1-8b | HF:
Context: 32768 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p
Reasoning: ✘ | Moderation: ✘ | TOS
Aion-RP-Llama-3.1-8B ranks the highest in the character evaluation portion of the RPBench-Auto benchmark, a roleplaying-specific variant of Arena-Hard-Auto, where LLMs evaluate each other’s responses. It is a fine-tuned base model rather than an instruct model, designed to produce more natural and varied writing....
Arcee AI: Arcee Blitz
arcee-ai @ Together | Arcee Blitz
Slug: arcee-ai/arcee-blitz | HF: arcee-ai/arcee-blitz
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Arcee Blitz is a 24 B‑parameter dense model distilled from DeepSeek and built on Mistral architecture for "everyday" chat. The distillation‑plus‑refinement pipeline trims compute while keeping DeepSeek‑style reasoning, so Blitz punches above its weight on MMLU, GSM‑8K and BBH compared with other mid‑size open models. With a default 128 k context window and competitive throughput, it serves as a cost‑efficient workhorse for summarization, brainstorming and light code help. Internally, Arcee uses Blitz as the default writer in Conductor pipelines when the heavier Virtuoso line ...
Arcee AI: Caller Large
arcee-ai @ Together | Caller Large
Slug: arcee-ai/caller-large | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Caller Large is Arcee's specialist "function‑calling" SLM built to orchestrate external tools and APIs. Instead of maximizing next‑token accuracy, training focuses on structured JSON outputs, parameter extraction and multi‑step tool chains, making Caller a natural choice for retrieval‑augmented generation, robotic process automation or data‑pull chatbots. It incorporates a routing head that decides when (and how) to invoke a tool versus answering directly, reducing hallucinated calls. The model is already the backbone of Arcee Conductor's auto‑tool mode, where it parses user intent...
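Because Caller Large is built around structured tool calls, the natural way to drive it is with an OpenAI-style `tools` array and `tool_choice`, both of which appear in its Supported Params. A minimal sketch; the `get_weather` tool is a hypothetical example, not something the model ships with:

```python
# Sketch: letting Caller Large decide between a tool call and a direct answer.
import os
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "arcee-ai/caller-large",
        "messages": [{"role": "user", "content": "What's the weather in Lisbon right now?"}],
        "tools": tools,
        "tool_choice": "auto",  # model picks tool call vs direct reply
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
print(message.get("tool_calls") or message["content"])
```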
Arcee AI: Coder Large
arcee-ai @ Together | Coder Large
Slug: arcee-ai/coder-large | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Coder‑Large is a 32 B‑parameter offspring of Qwen 2.5‑Instruct that has been further trained on permissively‑licensed GitHub, CodeSearchNet and synthetic bug‑fix corpora. It supports a 32k context window, enabling multi‑file refactoring or long diff review in a single call, and understands 30‑plus programming languages with special attention to TypeScript, Go and Terraform. Internal benchmarks show 5–8 pt gains over CodeLlama‑34 B‑Python on HumanEval and competitive BugFix scores thanks to a reinforcement pass that rewards compilable output. The model emits structur...
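One way to use the 32k window for the multi-file refactoring described above is to pack several files into a single prompt. A rough sketch with placeholder file names and contents:

```python
# Sketch: packing multiple source files into one Coder Large request so the
# 32k context can hold a multi-file refactor. Files are illustrative.
import os
import requests

files = {
    "api/types.ts": "export interface UserRecord { id: string; name: string; }",
    "api/handlers.ts": "import { UserRecord } from './types';\nexport const get = (u: UserRecord) => u.name;",
}
blob = "\n\n".join(f"// FILE: {path}\n{body}" for path, body in files.items())

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "arcee-ai/coder-large",
        "messages": [{
            "role": "user",
            "content": "Rename the UserRecord type to Account across these files "
                       "and return each updated file in full:\n\n" + blob,
        }],
        "max_tokens": 4096,
    },
    timeout=180,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```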
ArliAI: QwQ 32B RpR v1 (free)
arliai @ Chutes | QwQ 32B RpR v1 (free)
Slug: arliai/qwq-32b-arliai-rpr-v1 | HF: ArliAI/QwQ-32B-ArliAI-RpR-v1
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain coherence and reasoning across long multi-turn conversations by introducing explicit reasoning steps per dialogue turn, generated and refined using the base model itself.

The model was trained using RS-QLORA+ on 8K sequence lengths and supports up to 128K context windows (with practical performance around 32K). It is optimized for creative roleplay and dialogue generation, with an emphasis on min...
Bytedance: UI-TARS 72B
bytedance-research @ - | UI-TARS 72B
Slug: bytedance-research/ui-tars-72b | HF: bytedance-research/UI-TARS-72B-DPO
Context: 32768 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
UI-TARS 72B is an open-source multimodal AI model designed specifically for automating browser and desktop tasks through visual interaction and control. The model is built with a specialized vision architecture enabling accurate interpretation and manipulation of on-screen visual data. It supports automation tasks within web browsers as well as desktop applications, including Microsoft Office and VS Code.

Core capabilities include intelligent screen detection, predictive action modeling, and efficient handling of repetitive interactions. UI-TARS employs supervised fine-tuning (SFT) tailored e...
Databricks: DBRX 132B Instruct
databricks @ - | DBRX 132B Instruct
Slug: databricks/dbrx-instruct | HF: databricks/dbrx-instruct
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
DBRX is a new open source large language model developed by Databricks. At 132B, it outperforms existing open source LLMs like Llama 2 70B and [Mixtral-8x7b](/models/mistralai/mixtral-8x7b) on standard industry benchmarks for language understanding, programming, math, and logic.

It uses a fine-grained mixture-of-experts (MoE) architecture. 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts.

See the launch announc...
DeepSeek: DeepSeek V3 0324 (free)
deepseek @ AtlasCloud | DeepSeek V3 0324 (free)
Slug: deepseek/deepseek-chat-v3-0324 | HF: deepseek-ai/DeepSeek-V3-0324
Context: 32768 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p
Reasoning: ✘ | Moderation: ✘ | TOS
DeepSeek V3, a 685B-parameter, mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team.

It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well on a wide variety of tasks....
Dolphin 2.6 Mixtral 8x7B 🐬
cognitivecomputations @ - | Dolphin 2.6 Mixtral 8x7B 🐬
Slug: cognitivecomputations/dolphin-mixtral-8x7b | HF: cognitivecomputations/dolphin-2.6-mixtral-8x7b
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is a 16k context fine-tune of [Mixtral-8x7b](/models/mistralai/mixtral-8x7b). It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning.

The model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models).

#moe #uncensored...
Dolphin3.0 Mistral 24B (free)
cognitivecomputations @ Chutes | Dolphin3.0 Mistral 24B (free)
Slug: cognitivecomputations/dolphin3.0-mistral-24b | HF: cognitivecomputations/Dolphin3.0-Mistral-24B
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.

Dolphin aims to be a general purpose instruct model, similar to the models behind ChatGPT, Claude, Gemini.

Part of the [Dolphin 3.0 Collection](https://huggingface.co/collections/cognitivecomputations/dolphin-30-677ab47f73d7ff66743979a3) Curated and trained by [Eric Hartford](https://huggingface.co/ehartford), [Ben Gitter](https://huggingface.co/bigstorm), [BlouseJury](https://hug...
Dolphin3.0 R1 Mistral 24B
cognitivecomputations @ Chutes | Dolphin3.0 R1 Mistral 24B
Slug: cognitivecomputations/dolphin3.0-r1-mistral-24b | HF: cognitivecomputations/Dolphin3.0-R1-Mistral-24B
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.

The R1 version has been trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset.

Dolphin aims to be a general purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, Gemini.

Part of the [Dolphin 3.0 Collection](https://huggingface.co/collections/cognitivecomputations/dolphin-30-677ab47f73d7ff66743979a3) Curated and trained ...
Dolphin3.0 R1 Mistral 24B (free)
cognitivecomputations @ Chutes | Dolphin3.0 R1 Mistral 24B (free)
Slug: cognitivecomputations/dolphin3.0-r1-mistral-24b | HF: cognitivecomputations/Dolphin3.0-R1-Mistral-24B
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.

The R1 version has been trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset.

Dolphin aims to be a general purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, Gemini.

Part of the [Dolphin 3.0 Collection](https://huggingface.co/collections/cognitivecomputations/dolphin-30-677ab47f73d7ff66743979a3) Curated and trained ...
EVA Qwen2.5 14B
eva-unit-01 @ - | EVA Qwen2.5 14B
Slug: eva-unit-01/eva-qwen-2.5-14b | HF: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A model specializing in RP and creative writing, this model is based on Qwen2.5-14B, fine-tuned with a mixture of synthetic and natural data.

It is trained on 1.5M tokens of role-play data, and fine-tuned on 1.5M tokens of synthetic data....
Google: Gemma 3 4B (free)
google @ Google AI Studio | Gemma 3 4B (free)
Slug: google/gemma-3-4b-it | HF: google/gemma-3-4b-it
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling....
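Since the Supported Params list includes `response_format` and `structured_outputs`, structured replies can be requested with an OpenAI-style JSON-schema response format. The sketch below assumes that convention is accepted on this endpoint; the schema itself is just an example:

```python
# Sketch: requesting a JSON-schema constrained reply from Gemma 3 4B.
# The response_format shape follows the OpenAI-style convention (assumed here).
import os
import requests

schema = {
    "name": "city_fact",  # example schema, not part of the model
    "schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "population_millions": {"type": "number"},
        },
        "required": ["city", "population_millions"],
    },
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemma-3-4b-it",
        "messages": [{"role": "user", "content": "Give one fact about Tokyo as JSON."}],
        "response_format": {"type": "json_schema", "json_schema": schema},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```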
Google: Gemma 3n 4B
google @ Together | Gemma 3n 4B
Slug: google/gemma-3n-e4b-it | HF: google/gemma-3n-E4B-it
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n dynamically manages memory usage and computational load by selectively activating model parameters, significantly reducing runtime resource requirements.

This model supports a wide linguistic ran...
Liquid: LFM 3B
liquid @ Liquid | LFM 3B
Slug: liquid/lfm-3b | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Liquid's LFM 3B delivers incredible performance for its size. It places first among 3B-parameter transformers, hybrids, and RNN models, and is on par with Phi-3.5-mini on multiple benchmarks while being 18.4% smaller.

LFM-3B is the ideal choice for mobile and other edge text-based applications.

See the [launch announcement](https://www.liquid.ai/liquid-foundation-models) for benchmarks and more info....
Liquid: LFM 7B
liquid @ Liquid | LFM 7B
Slug: liquid/lfm-7b | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
LFM-7B is a new best-in-class language model designed for exceptional chat capabilities, including in languages like Arabic and Japanese. Powered by the Liquid Foundation Model (LFM) architecture, it exhibits unique features like a low memory footprint and fast inference speed.

LFM-7B is the world’s best-in-class multilingual language model in English, Arabic, and Japanese.

See the [launch announcement](https://www.liquid.ai/lfm-7b) for benchmarks and more info....
Lynn: Llama 3 Soliloquy 7B v3 32K
lynn @ - | Llama 3 Soliloquy 7B v3 32K
Slug: lynn/soliloquy-v3 | HF: openlynn/Soliloquy-7B-v3
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Soliloquy v3 is a highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 2 billion tokens of roleplaying data, Soliloquy v3 boasts a vast knowledge base and rich literary expression, supporting up to 32k context length. It outperforms existing models of comparable size, delivering enhanced roleplaying capabilities.

Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Magnum v2 72B
anthracite-org @ Infermatic | Magnum v2 72B
Slug: anthracite-org/magnum-v2-72b | HF: anthracite-org/magnum-v2-72b
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty logit_bias top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
From the maker of [Goliath](https://openrouter.ai/models/alpindale/goliath-120b), Magnum 72B is the seventh in a family of models designed to achieve the prose quality of the Claude 3 models, notably Opus & Sonnet.

The model is based on [Qwen2 72B](https://openrouter.ai/models/qwen/qwen-2-72b-instruct) and trained with 55 million tokens of highly curated roleplay (RP) data....
Meta: Llama 3.1 405B (base)
meta-llama @ Hyperbolic | Llama 3.1 405B (base)
Slug: meta-llama/llama-3.1-405b | HF: meta-llama/llama-3.1-405B
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty logprobs top_logprobs seed logit_bias top_k min_p repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This is the base 405B pre-trained version.

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
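Because this is the pre-trained base model rather than an instruct tune (note Completions: ✔ above), it is usually driven as a raw text continuation instead of a chat exchange. A minimal sketch, assuming OpenRouter's OpenAI-compatible text-completions endpoint:

```python
# Sketch: raw text completion against the Llama 3.1 405B base model.
# Endpoint path and payload shape assumed to follow the OpenAI-compatible layout.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meta-llama/llama-3.1-405b",
        "prompt": "The three laws of thermodynamics, stated informally, are:\n1.",
        "max_tokens": 200,
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```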
Meta: Llama 3.1 405B Instruct
meta-llama @ DeepInfra | Llama 3.1 405B Instruct
Slug: meta-llama/llama-3.1-405b-instruct | HF: meta-llama/Meta-Llama-3.1-405B-Instruct
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs.

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 405B instruct-tuned version is optimized for high-quality dialogue use cases.

It has demonstrated strong performance compared to leading closed-source models including GPT-4o and Claude 3.5 Sonnet in evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is...
Microsoft: Phi 4 Reasoning
microsoft @ - | Phi 4 Reasoning
Slug: microsoft/phi-4-reasoning | HF: microsoft/Phi-4-reasoning
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Phi-4-reasoning is a 14B parameter dense decoder-only transformer developed by Microsoft, fine-tuned from Phi-4 to enhance complex reasoning capabilities. It uses a combination of supervised fine-tuning on chain-of-thought traces and reinforcement learning, targeting math, science, and code reasoning tasks. With a 32k context window and high inference efficiency, it is optimized for structured responses in a two-part format: reasoning trace followed by a final solution.

The model achieves strong results on specialized benchmarks such as AIME, OmniMath, and LiveCodeBench, outperforming many la...
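The two-part output format described above (reasoning trace, then final solution) typically has to be split client-side. The sketch below assumes the trace is wrapped in `<think>...</think>` tags, which is an assumption about how the provider marks it; adapt the delimiter to whatever actually comes back:

```python
# Sketch: splitting a two-part reasoning response into trace and solution.
# The <think>...</think> delimiter is an assumption, not a documented contract.
def split_reasoning(text: str) -> tuple[str, str]:
    open_tag, close_tag = "<think>", "</think>"
    if open_tag in text and close_tag in text:
        start = text.index(open_tag) + len(open_tag)
        end = text.index(close_tag)
        return text[start:end].strip(), text[end + len(close_tag):].strip()
    return "", text.strip()  # no marked trace: treat everything as the solution

trace, solution = split_reasoning("<think>14 * 3 = 42</think>The answer is 42.")
print(trace)     # "14 * 3 = 42"
print(solution)  # "The answer is 42."
```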
Microsoft: Phi 4 Reasoning Plus
microsoft @ DeepInfra | Phi 4 Reasoning Plus
Slug: microsoft/phi-4-reasoning-plus | HF: microsoft/Phi-4-reasoning-plus
Context: 32768 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✔ | Moderation: ✘ | TOS
Phi-4-reasoning-plus is an enhanced 14B parameter model from Microsoft, fine-tuned from Phi-4 with additional reinforcement learning to boost accuracy on math, science, and code reasoning tasks. It uses the same dense decoder-only transformer architecture as Phi-4, but generates longer, more comprehensive outputs structured into a step-by-step reasoning trace and final answer.

While it offers improved benchmark scores over Phi-4-reasoning across tasks like AIME, OmniMath, and HumanEvalPlus, its responses are typically ~50% longer, resulting in higher latency. Designed for English-only applica...
Mistral Small
mistralai @ Mistral | Mistral Small
Slug: mistralai/mistral-small | HF:
Context: 32768 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
With 22 billion parameters, Mistral Small v24.09 offers a convenient mid-point between [Mistral NeMo 12B](/mistralai/mistral-nemo) and [Mistral Large 2](/mistralai/mistral-large), providing a cost-effective solution that can be deployed across various platforms and environments. It has better reasoning, exhibits more capabilities, can produce and reason about code, and is multilingual, supporting English, French, German, Italian, and Spanish....
Mistral Tiny
mistralai @ Mistral | Mistral Tiny
Slug: mistralai/mistral-tiny | HF:
Context: 32768 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Note: This model is being deprecated. Recommended replacement is the newer [Ministral 8B](/mistral/ministral-8b)

This model is currently powered by Mistral-7B-v0.2, and incorporates a "better" fine-tuning than [Mistral 7B](/models/mistralai/mistral-7b-instruct-v0.1), inspired by community work. It's best used for large batch processing tasks where cost is a significant factor but reasoning capabilities are not crucial....
Mistral: Devstral Small 2505
mistralai @ Chutes | Devstral Small 2505
Slug: mistralai/devstral-small-2505 | HF: mistralai/Devstral-Small-2505
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Devstral-Small-2505 is a 24B parameter agentic LLM fine-tuned from Mistral-Small-3.1, jointly developed by Mistral AI and All Hands AI for advanced software engineering tasks. It is optimized for codebase exploration, multi-file editing, and integration into coding agents, achieving state-of-the-art results on SWE-Bench Verified (46.8%).

Devstral supports a 128k context window and uses a custom Tekken tokenizer. It is text-only, with the vision encoder removed, and is suitable for local deployment on high-end consumer hardware (e.g., RTX 4090, 32GB RAM Macs). Devstral is best used in agentic ...
Mistral: Devstral Small 2505 (free)
mistralai @ Chutes | Devstral Small 2505 (free)
Slug: mistralai/devstral-small-2505 | HF: mistralai/Devstral-Small-2505
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Devstral-Small-2505 is a 24B parameter agentic LLM fine-tuned from Mistral-Small-3.1, jointly developed by Mistral AI and All Hands AI for advanced software engineering tasks. It is optimized for codebase exploration, multi-file editing, and integration into coding agents, achieving state-of-the-art results on SWE-Bench Verified (46.8%).

Devstral supports a 128k context window and uses a custom Tekken tokenizer. It is text-only, with the vision encoder removed, and is suitable for local deployment on high-end consumer hardware (e.g., RTX 4090, 32GB RAM Macs). Devstral is best used in agentic ...
Mistral: Mistral 7B Instruct
mistralai @ Enfer | Mistral 7B Instruct
Slug: mistralai/mistral-7b-instruct | HF: mistralai/Mistral-7B-Instruct-v0.3
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty logit_bias logprobs seed repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

*Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.*...
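In practice, the note above means the unversioned slug floats to the newest variant, while the versioned slugs listed further below (v0.2, v0.3) pin a specific revision. A small sketch of the difference, using slugs taken from this listing:

```python
# Sketch: floating vs pinned Mistral 7B Instruct slugs in a request payload.
FLOATING = "mistralai/mistral-7b-instruct"       # tracks the latest variant
PINNED = "mistralai/mistral-7b-instruct-v0.3"    # stays on v0.3

def payload(model: str, prompt: str) -> dict:
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

# Use FLOATING for convenience, PINNED when reproducibility matters.
print(payload(PINNED, "Summarize the Treaty of Westphalia in two sentences."))
```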
Mistral: Mistral 7B Instruct (free)
mistralai @ DeepInfra | Mistral 7B Instruct (free)
Slug: mistralai/mistral-7b-instruct | HF: mistralai/Mistral-7B-Instruct-v0.3
Context: 32768 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

*Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.*...
Mistral: Mistral 7B Instruct v0.2
mistralai @ Together | Mistral 7B Instruct v0.2
Slug: mistralai/mistral-7b-instruct-v0.2 | HF: mistralai/Mistral-7B-Instruct-v0.2
Context: 32768 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

An improved version of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct-v0.1), with the following changes:

- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention...
Mistral: Mistral 7B Instruct v0.3
mistralai @ DeepInfra | Mistral 7B Instruct v0.3
Slug: mistralai/mistral-7b-instruct-v0.3 | HF: mistralai/Mistral-7B-Instruct-v0.3
Context: 32768 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

An improved version of [Mistral 7B Instruct v0.2](/models/mistralai/mistral-7b-instruct-v0.2), with the following changes:

- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling

NOTE: Support for function calling depends on the provider....
Mistral: Mistral Small 3
mistralai @ Chutes | Mistral Small 3
Slug: mistralai/mistral-small-24b-instruct-2501 | HF: mistralai/Mistral-Small-24B-Instruct-2501
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment.

The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)...
Mistral: Mistral Small 3 (free)
mistralai @ Chutes | Mistral Small 3 (free)
Slug: mistralai/mistral-small-24b-instruct-2501 | HF: mistralai/Mistral-Small-24B-Instruct-2501
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment.

The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)...
Mistral: Mixtral 8x7B Instruct
mistralai @ DeepInfra | Mixtral 8x7B Instruct
Slug: mistralai/mixtral-8x7b-instruct | HF: mistralai/Mixtral-8x7B-Instruct-v0.1
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters.

Instruct model fine-tuned by Mistral. #moe...
Mistral: Pixtral 12B
mistralai @ Hyperbolic | Pixtral 12B
Slug: mistralai/pixtral-12b | HF: mistralai/Pixtral-12B-2409
Context: 32768 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty logprobs top_logprobs seed logit_bias top_k min_p repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent: https://x.com/mistralai/status/1833758285167722836....
Mistral: Saba
mistralai @ Mistral | Saba
Slug: mistralai/mistral-saba | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty response_format structured_outputs seed
Reasoning: ✘ | Moderation: ✘ | TOS
Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses while maintaining efficient performance. Trained on curated regional datasets, it supports multiple Indian-origin languages—including Tamil and Malayalam—alongside Arabic. This makes it a versatile option for a range of regional and multilingual applications. Read more at the blog post [here](https://mistral.ai/en/news/mistral-saba)...
MythoMist 7B
gryphe @ - | MythoMist 7B
Slug: gryphe/mythomist-7b | HF: Gryphe/MythoMist-7b
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
From the creator of [MythoMax](/models/gryphe/mythomax-l2-13b), merges a suite of models to reduce word anticipation, ministrations, and other undesirable words in ChatGPT roleplaying data.

It combines [Neural Chat 7B](/models/intel/neural-chat-7b), Airoboros 7B, [Toppy M 7B](/models/undi95/toppy-m-7b), [Zephyr 7B beta](/models/huggingfaceh4/zephyr-7b-beta), [Nous Capybara 34B](/models/nousresearch/nous-capybara-34b), [OpenHermes 2.5](/models/teknium/openhermes-2.5-mistral-7b), and many others.

#merge...
NeverSleep: Lumimaid v0.2 8B
neversleep @ NextBit | Lumimaid v0.2 8B
Slug: neversleep/llama-3.1-lumimaid-8b | HF: NeverSleep/Lumimaid-v0.2-8B
Context: 32768 tokens | Free: No | Quant: int4
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Lumimaid v0.2 8B is a finetune of [Llama 3.1 8B](/models/meta-llama/llama-3.1-8b-instruct) with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged.

Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Nous: DeepHermes 3 Mistral 24B Preview
nousresearch @ - | DeepHermes 3 Mistral 24B Preview
Slug: nousresearch/deephermes-3-mistral-24b-preview | HF: NousResearch/DeepHermes-3-Mistral-24B-Preview
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
DeepHermes 3 (Mistral 24B Preview) is an instruction-tuned language model by Nous Research based on Mistral-Small-24B, designed for chat, function calling, and advanced multi-turn reasoning. It introduces a dual-mode system that toggles between intuitive chat responses and structured “deep reasoning” mode using special system prompts. Fine-tuned via distillation from R1, it supports structured output (JSON mode) and function call syntax for agent-based applications.

DeepHermes 3 supports a **reasoning toggle via system prompt**, allowing users to switch between fast, intuitive responses a...
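The reasoning toggle works by sending the special system prompt at the start of the conversation. A minimal sketch; the system prompt text below is a placeholder paraphrase, not the exact string from the Nous Research model card, so substitute the published prompt in real use:

```python
# Sketch: enabling DeepHermes 3's "deep reasoning" mode via a system prompt.
# The prompt wording here is a placeholder; use the string from the model card.
import os
import requests

DEEP_REASONING_SYSTEM_PROMPT = (
    "You are a deep-thinking AI. Think through the problem step by step inside "
    "<think></think> tags before giving your final answer."  # placeholder wording
)

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "nousresearch/deephermes-3-mistral-24b-preview",
        "messages": [
            {"role": "system", "content": DEEP_REASONING_SYSTEM_PROMPT},
            {"role": "user", "content": "Which is larger, 2^10 or 10^3, and by how much?"},
        ],
        "max_tokens": 1024,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Omitting the system prompt leaves the model in its fast, intuitive chat mode.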
Nous: Hermes 2 Mixtral 8x7B DPO
nousresearch @ Together | Hermes 2 Mixtral 8x7B DPO
Slug: nousresearch/nous-hermes-2-mixtral-8x7b-dpo | HF: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
Context: 32768 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](/models/mistralai/mixtral-8x7b).

The model was trained on over 1,000,000 entries of primarily [GPT-4](/models/openai/gpt-4) generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.

#moe...
Nous: Hermes 2 Mixtral 8x7B SFT
nousresearch @ - | Hermes 2 Mixtral 8x7B SFT
Slug: nousresearch/nous-hermes-2-mixtral-8x7b-sft | HF: NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Nous Hermes 2 Mixtral 8x7B SFT is the supervised finetune only version of [the Nous Research model](/models/nousresearch/nous-hermes-2-mixtral-8x7b-dpo) trained over the [Mixtral 8x7B MoE LLM](/models/mistralai/mixtral-8x7b).

The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.

#moe...
OlympicCoder 32B
open-r1 @ - | OlympicCoder 32B
Slug: open-r1/olympiccoder-32b | HF: open-r1/OlympicCoder-32B
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
OlympicCoder-32B is a high-performing open-source model fine-tuned using the CodeForces-CoTs dataset, containing approximately 100,000 chain-of-thought programming samples. It excels at complex competitive programming benchmarks, such as IOI 2024 and Codeforces-style challenges, frequently surpassing state-of-the-art closed-source models. OlympicCoder-32B provides advanced reasoning, coherent multi-step problem-solving, and robust code generation capabilities, demonstrating significant potential for olympiad-level competitive programming applications....
Perplexity: Llama3 Sonar 70B
perplexity @ - | Llama3 Sonar 70B
Slug: perplexity/llama-3-sonar-large-32k-chat | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama3 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.

This is a normal offline LLM, but the [online version](/models/perplexity/llama-3-sonar-large-32k-online) of this model has Internet access....
Perplexity: Llama3 Sonar 8B
perplexity @ - | Llama3 Sonar 8B
Slug: perplexity/llama-3-sonar-small-32k-chat | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama3 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.

This is a normal offline LLM, but the [online version](/models/perplexity/llama-3-sonar-small-32k-online) of this model has Internet access....
Qwen 1.5 110B Chat
qwen @ - | Qwen 1.5 110B Chat
Slug: qwen/qwen-110b-chat | HF: Qwen/Qwen1.5-110B-Chat
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen1.5 110B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

- Significant performance improvement in human preference for chat models
- Multilingual support of both base and chat models
- Stable support of 32K context length for models of all sizes

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](http...
Qwen 1.5 14B Chat
qwen @ - | Qwen 1.5 14B Chat
Slug: qwen/qwen-14b-chat | HF: Qwen/Qwen1.5-14B-Chat
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen1.5 14B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

- Significant performance improvement in human preference for chat models
- Multilingual support of both base and chat models
- Stable support of 32K context length for models of all sizes

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https...
Qwen 1.5 32B Chat
qwen @ - | Qwen 1.5 32B Chat
Slug: qwen/qwen-32b-chat | HF: Qwen/Qwen1.5-32B-Chat
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen1.5 32B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

- Significant performance improvement in human preference for chat models
- Multilingual support of both base and chat models
- Stable support of 32K context length for models of all sizes

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https...
Qwen 1.5 4B Chat
qwen @ - | Qwen 1.5 4B Chat
Slug: qwen/qwen-4b-chat | HF: Qwen/Qwen1.5-4B-Chat
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen1.5 4B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

- Significant performance improvement in human preference for chat models
- Multilingual support of both base and chat models
- Stable support of 32K context length for models of all sizes

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https:...
Qwen 1.5 72B Chat
qwen @ - | Qwen 1.5 72B Chat
Slug: qwen/qwen-72b-chat | HF: Qwen/Qwen1.5-72B-Chat
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen1.5 72B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

- Significant performance improvement in human preference for chat models
- Multilingual support of both base and chat models
- Stable support of 32K context length for models of all sizes

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https...
Qwen 1.5 7B Chat
qwen @ - | Qwen 1.5 7B Chat
Slug: qwen/qwen-7b-chat | HF: Qwen/Qwen1.5-7B-Chat
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen1.5 7B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

- Significant performance improvement in human preference for chat models
- Multilingual support of both base and chat models
- Stable support of 32K context length for models of all sizes

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https:...
Qwen 2 72B Instruct
qwen @ Together | Qwen 2 72B Instruct
Slug: qwen/qwen-2-72b-instruct | HF: Qwen/Qwen2-72B-Instruct
Context: 32768 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2 72B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.

It features SwiGLU activation, attention QKV bias, and group query attention. It is pretrained on extensive data with supervised finetuning and direct preference optimization.

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen2/) and [GitHub repo](https://github.com/QwenLM/Qwen2).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE)....
Qwen 2 7B Instruct
qwen @ - | Qwen 2 7B Instruct
Slug: qwen/qwen-2-7b-instruct | HF: Qwen/Qwen2-7B-Instruct
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.

It features SwiGLU activation, attention QKV bias, and group query attention. It is pretrained on extensive data with supervised finetuning and direct preference optimization.

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen2/) and [GitHub repo](https://github.com/QwenLM/Qwen2).

Usage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE)....
Qwen2.5 72B Instruct
qwen @ DeepInfra | Qwen2.5 72B Instruct
Slug: qwen/qwen-2.5-72b-instruct | HF: Qwen/Qwen2.5-72B-Instruct
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.

- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.

- Long-context...
Qwen2.5 72B Instruct (free)
qwen @ Chutes | Qwen2.5 72B Instruct (free)
Slug: qwen/qwen-2.5-72b-instruct | HF: Qwen/Qwen2.5-72B-Instruct
Context: 32768 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.

- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.

- Long-context...
Qwen2.5 7B Instruct
qwen @ DeepInfra | Qwen2.5 7B Instruct
Slug: qwen/qwen-2.5-7b-instruct | HF: Qwen/Qwen2.5-7B-Instruct
Context: 32768 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5 7B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.

- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.

- Long-context ...
Qwen2.5 Coder 32B Instruct
qwen @ DeepInfra | Qwen2.5 Coder 32B Instruct
Slug: qwen/qwen-2.5-coder-32b-instruct | HF: Qwen/Qwen2.5-Coder-32B-Instruct
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

- Significant improvements in **code generation**, **code reasoning**, and **code fixing**.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.

To read more about its evaluation results, check out [Qwen 2.5 Coder's blog](https://qwenlm.github.io/blog/qwen2.5-coder-family...
Qwen2.5 Coder 32B Instruct (free)
qwen @ Venice | Qwen2.5 Coder 32B Instruct (free)
Slug: qwen/qwen-2.5-coder-32b-instruct | HF: Qwen/Qwen2.5-Coder-32B-Instruct
Context: 32768 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

- Significant improvements in **code generation**, **code reasoning**, and **code fixing**.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.

To read more about its evaluation results, check out [Qwen 2.5 Coder's blog](https://qwenlm.github.io/blog/qwen2.5-coder-family...
Qwen: QwQ 32B (free)
qwen @ Venice | QwQ 32B (free)
Slug: qwen/qwq-32b | HF: Qwen/QwQ-32B
Context: 32768 tokens | Free: Yes | Quant: fp8
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p structured_outputs response_format stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini....
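Since the supported parameters for this listing include `response_format` and `structured_outputs`, a JSON-constrained request might look like the sketch below. The schema and `YOUR_API_KEY` are illustrative assumptions; exact structured-output behaviour depends on the serving provider.

```python
# Hedged sketch: requesting structured JSON from QwQ 32B (free) via the listed
# response_format parameter. The schema is invented purely for illustration.
import requests

payload = {
    "model": "qwen/qwq-32b",
    "messages": [
        {"role": "user", "content": "Give the prime factorisation of 360 as JSON."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "factorisation",  # hypothetical schema name
            "schema": {
                "type": "object",
                "properties": {
                    "n": {"type": "integer"},
                    "factors": {"type": "array", "items": {"type": "integer"}},
                },
                "required": ["n", "factors"],
            },
        },
    },
}
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```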
Qwen: QwQ 32B Preview
qwen @ Hyperbolic | QwQ 32B Preview
Slug: qwen/qwq-32b-preview | HF: Qwen/QwQ-32B-Preview
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty logprobs top_logprobs seed logit_bias top_k min_p repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
QwQ-32B-Preview is an experimental research model focused on AI reasoning capabilities developed by the Qwen Team. As a preview release, it demonstrates promising analytical abilities while having several important limitations:

1. **Language Mixing and Code-Switching**: The model may mix languages or switch between them unexpectedly, affecting response clarity.
2. **Recursive Reasoning Loops**: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
3. **Safety and Ethical Considerations**: The model requires enhanced safety measures to ensur...
Qwen: Qwen-Max
qwen @ Alibaba | Qwen-Max
Slug: qwen/qwen-max | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice seed response_format presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially for complex multi-step tasks. It's a large-scale MoE model that has been pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) methodologies. The parameter count is unknown....
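Because the supported parameters above include `tools` and `tool_choice`, a function-calling request could look roughly like this sketch; the weather tool, its schema, and `YOUR_API_KEY` are illustrative assumptions, not part of the listing.

```python
# Hedged sketch: passing a (hypothetical) tool definition to Qwen-Max through
# the listed tools / tool_choice parameters on OpenRouter's chat endpoint.
import requests

payload = {
    "model": "qwen/qwen-max",
    "messages": [{"role": "user", "content": "What's the weather in Hangzhou?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json=payload,
    timeout=120,
)
resp.raise_for_status()
# If the model decides to call the tool, the call shows up instead of plain text.
print(resp.json()["choices"][0]["message"].get("tool_calls"))
```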
Qwen: Qwen2.5 VL 72B Instruct (free)
qwen @ Venice | Qwen2.5 VL 72B Instruct (free)
Slug: qwen/qwen2.5-vl-72b-instruct | HF: Qwen/Qwen2.5-VL-72B-Instruct
Context: 32768 tokens | Free: Yes | Quant: fp8
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p structured_outputs response_format stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5-VL is proficient in recognizing common objects such as flowers, birds, fish, and insects. It is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images....
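Since the listing shows `image` as an input modality, an image can presumably be attached with the standard OpenAI-style `image_url` content part, as in this sketch; the image URL and `YOUR_API_KEY` are placeholders.

```python
# Hedged sketch: sending an image to Qwen2.5 VL 72B for chart/object analysis.
# The image URL is a placeholder; any publicly reachable image should work.
import requests

payload = {
    "model": "qwen/qwen2.5-vl-72b-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the objects and any text in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```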
Qwen: Qwen2.5-VL 7B Instruct
qwen @ Hyperbolic | Qwen2.5-VL 7B Instruct
Slug: qwen/qwen-2.5-vl-7b-instruct | HF: Qwen/Qwen2.5-VL-7B-Instruct
Context: 32768 tokens | Free: No | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty logprobs top_logprobs seed logit_bias top_k min_p repetition_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements:

- SoTA understanding of images of various resolution & ratio: Qwen2.5-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.

- Understanding videos of 20min+: Qwen2.5-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.

- Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2.5-VL can be integrated ...
Qwerky 72B (free)
featherless @ Featherless | Qwerky 72B (free)
Slug: featherless/qwerky-72b | HF: featherless-ai/Qwerky-72B
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
Qwerky-72B is a linear-attention RWKV variant of the Qwen 2.5 72B model, optimized to significantly reduce computational cost at scale. Leveraging linear attention, it achieves substantial inference speedups (>1000x) while retaining competitive accuracy on common benchmarks like ARC, HellaSwag, Lambada, and MMLU. It inherits knowledge and language support from Qwen 2.5, supporting approximately 30 languages, making it suitable for efficient inference in large-context applications....
Reka: Flash 3
rekaai @ Chutes | Flash 3
Slug: rekaai/reka-flash-3 | HF: RekaAI/reka-flash-3
Context: 32768 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Reka Flash 3 is a general-purpose, instruction-tuned large language model with 21 billion parameters, developed by Reka. It excels at general chat, coding tasks, instruction-following, and function calling. Featuring a 32K context length and optimized through reinforcement learning (RLOO), it provides competitive performance comparable to proprietary models within a smaller parameter footprint. Ideal for low-latency, local, or on-device deployments, Reka Flash 3 is compact, supports efficient quantization (down to 11GB at 4-bit precision), and employs explicit reasoning tags ("<reasoning>") to...
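Because the output carries explicit `<reasoning>` tags, downstream code usually wants to separate the visible answer from the trace. A small sketch, assuming the trace is delimited by a matching `<reasoning>...</reasoning>` pair:

```python
# Hedged sketch: splitting a Reka Flash 3 completion into its reasoning trace
# and final answer, assuming the trace sits inside <reasoning>...</reasoning>.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty when no tags are present."""
    match = re.search(r"<reasoning>(.*?)</reasoning>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = (text[: match.start()] + text[match.end():]).strip()
    return reasoning, answer

sample = "<reasoning>21 * 2 = 42</reasoning>The answer is 42."
print(split_reasoning(sample))  # ('21 * 2 = 42', 'The answer is 42.')
```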
Reka: Flash 3 (free)
rekaai @ Chutes | Flash 3 (free)
Slug: rekaai/reka-flash-3 | HF: RekaAI/reka-flash-3
Context: 32768 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Reka Flash 3 is a general-purpose, instruction-tuned large language model with 21 billion parameters, developed by Reka. It excels at general chat, coding tasks, instruction-following, and function calling. Featuring a 32K context length and optimized through reinforcement learning (RLOO), it provides competitive performance comparable to proprietary models within a smaller parameter footprint. Ideal for low-latency, local, or on-device deployments, Reka Flash 3 is compact, supports efficient quantization (down to 11GB at 4-bit precision), and employs explicit reasoning tags ("<reasoning>") to...
Sao10K: Llama 3.1 Euryale 70B v2.2
sao10k @ NextBit | Llama 3.1 Euryale 70B v2.2
Slug: sao10k/l3.1-euryale-70b | HF: Sao10K/L3.1-70B-Euryale-v2.2
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Euryale L3.1 70B v2.2 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.1](/models/sao10k/l3-euryale-70b)....
Sarvam AI: Sarvam-M
sarvamai @ Chutes | Sarvam-M
Slug: sarvamai/sarvam-m | HF: sarvamai/sarvam-m
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Sarvam-M is a 24B-parameter, instruction-tuned derivative of Mistral-Small-3.1-24B-Base-2503, post-trained on English plus eleven major Indic languages (bn, hi, kn, gu, mr, ml, or, pa, ta, te). The model introduces a dual-mode interface: “non-think” for low-latency chat and an optional “think” phase that exposes chain-of-thought tokens for more demanding reasoning, math, and coding tasks.

Benchmark reports show solid gains versus similarly sized open models on Indic-language QA, GSM-8K math, and SWE-Bench coding, making Sarvam-M a practical general-purpose choice for multilingual con...
Sarvam AI: Sarvam-M (free)
sarvamai @ Chutes | Sarvam-M (free)
Slug: sarvamai/sarvam-m | HF: sarvamai/sarvam-m
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Sarvam-M is a 24B-parameter, instruction-tuned derivative of Mistral-Small-3.1-24B-Base-2503, post-trained on English plus eleven major Indic languages (bn, hi, kn, gu, mr, ml, or, pa, ta, te). The model introduces a dual-mode interface: “non-think” for low-latency chat and an optional “think” phase that exposes chain-of-thought tokens for more demanding reasoning, math, and coding tasks.

Benchmark reports show solid gains versus similarly sized open models on Indic-language QA, GSM-8K math, and SWE-Bench coding, making Sarvam-M a practical general-purpose choice for multilingual con...
Shisa AI: Shisa V2 Llama 3.3 70B (free)
shisa-ai @ Chutes | Shisa V2 Llama 3.3 70B (free)
Slug: shisa-ai/shisa-v2-llama3.3-70b | HF: shisa-ai/shisa-v2-llama3.3-70b
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Shisa V2 Llama 3.3 70B is a bilingual Japanese-English chat model fine-tuned by Shisa.AI on Meta’s Llama-3.3-70B-Instruct base. It prioritizes Japanese language performance while retaining strong English capabilities. The model was optimized entirely through post-training, using a refined mix of supervised fine-tuning (SFT) and DPO datasets including regenerated ShareGPT-style data, translation tasks, roleplaying conversations, and instruction-following prompts. Unlike earlier Shisa releases, this version avoids tokenizer modifications or extended pretraining.

Shisa V2 70B achieves leading ...
StripedHyena Hessian 7B (base)
togethercomputer @ - | StripedHyena Hessian 7B (base)
Slug: togethercomputer/stripedhyena-hessian-7b | HF: togethercomputer/StripedHyena-Hessian-7B
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is the base model variant of the [StripedHyena series](/models?q=stripedhyena), developed by Together.

StripedHyena uses a new architecture that competes with traditional Transformers, particularly in long-context data processing. It combines attention mechanisms with gated convolutions for improved speed, efficiency, and scaling. This model marks an advancement in AI architecture for sequence modeling tasks....
StripedHyena Nous 7B
togethercomputer @ - | StripedHyena Nous 7B
Slug: togethercomputer/stripedhyena-nous-7b | HF: togethercomputer/StripedHyena-Nous-7B
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is the chat model variant of the [StripedHyena series](/models?q=stripedhyena) developed by Together in collaboration with Nous Research.

StripedHyena uses a new architecture that competes with traditional Transformers, particularly in long-context data processing. It combines attention mechanisms with gated convolutions for improved speed, efficiency, and scaling. This model marks a significant advancement in AI architecture for sequence modeling tasks....
THUDM: GLM 4 32B (free)
thudm @ Chutes | GLM 4 32B (free)
Slug: thudm/glm-4-32b | HF: THUDM/GLM-4-32B-0414
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
GLM-4-32B-0414 is a 32B bilingual (Chinese-English) open-weight language model optimized for code generation, function calling, and agent-style tasks. Pretrained on 15T tokens of high-quality, reasoning-heavy data, it was further refined using human preference alignment, rejection sampling, and reinforcement learning. The model excels in complex reasoning, artifact generation, and structured output tasks, achieving performance comparable to GPT-4o and DeepSeek-V3-0324 across several benchmarks....
THUDM: GLM Z1 32B (free)
thudm @ Chutes | GLM Z1 32B (free)
Slug: thudm/glm-z1-32b | HF: THUDM/GLM-Z1-32B-0414
Context: 32768 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
GLM-Z1-32B-0414 is an enhanced reasoning variant of GLM-4-32B, built for deep mathematical, logical, and code-oriented problem solving. It applies extended reinforcement learning—both task-specific and general pairwise preference-based—to improve performance on complex multi-step tasks. Compared to the base GLM-4-32B model, Z1 significantly boosts capabilities in structured reasoning and formal domains.

The model supports enforced “thinking” steps via prompt engineering and offers improved coherence for long-form outputs. It’s optimized for use in agentic workflows, and includes sup...
Tencent: Hunyuan A13B Instruct
tencent @ Chutes | Hunyuan A13B Instruct
Slug: tencent/hunyuan-a13b-instruct | HF: tencent/Hunyuan-A13B-Instruct
Context: 32768 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Hunyuan-A13B is a 13B active parameter Mixture-of-Experts (MoE) language model developed by Tencent, with a total parameter count of 80B and support for reasoning via Chain-of-Thought. It offers competitive benchmark performance across mathematics, science, coding, and multi-turn reasoning tasks, while maintaining high inference efficiency via Grouped Query Attention (GQA) and quantization support (FP8, GPTQ, etc.)....
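The supported parameters for this listing include `reasoning` and `include_reasoning`, so the chain-of-thought trace can be requested alongside the answer. A sketch under the assumption that the trace is returned on the message (commonly a `reasoning` field); `YOUR_API_KEY` is a placeholder.

```python
# Hedged sketch: asking Hunyuan-A13B for its chain-of-thought via the listed
# include_reasoning parameter. Where the trace appears in the response is an
# assumption (commonly message["reasoning"]); check the provider's docs.
import requests

payload = {
    "model": "tencent/hunyuan-a13b-instruct",
    "messages": [{"role": "user", "content": "Is 1001 divisible by 7?"}],
    "include_reasoning": True,
    "max_tokens": 1024,
}
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json=payload,
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
print("reasoning:", message.get("reasoning"))
print("answer:", message["content"])
```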
Tencent: Hunyuan A13B Instruct (free)
tencent @ Chutes | Hunyuan A13B Instruct (free)
Slug: tencent/hunyuan-a13b-instruct | HF: tencent/Hunyuan-A13B-Instruct
Context: 32768 tokens | Free: Yes | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✔ | Moderation: ✘ | TOS
Hunyuan-A13B is a 13B active parameter Mixture-of-Experts (MoE) language model developed by Tencent, with a total parameter count of 80B and support for reasoning via Chain-of-Thought. It offers competitive benchmark performance across mathematics, science, coding, and multi-turn reasoning tasks, while maintaining high inference efficiency via Grouped Query Attention (GQA) and quantization support (FP8, GPTQ, etc.)....
TheDrummer: Rocinante 12B
thedrummer @ NextBit | Rocinante 12B
Slug: thedrummer/rocinante-12b | HF: TheDrummer/Rocinante-12B-v1.1
Context: 32768 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Rocinante 12B is designed for engaging storytelling and rich prose.

Early testers have reported:
- Expanded vocabulary with unique and expressive word choices
- Enhanced creativity for vivid narratives
- Adventure-filled and captivating stories...
TheDrummer: Skyfall 36B V2
thedrummer @ NextBit | Skyfall 36B V2
Slug: thedrummer/skyfall-36b-v2 | HF: TheDrummer/Skyfall-36B-v2
Context: 32768 tokens | Free: No | Quant: int4
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Skyfall 36B v2 is an enhanced iteration of Mistral Small 2501, specifically fine-tuned for improved creativity, nuanced writing, role-playing, and coherent storytelling....
TheDrummer: UnslopNemo 12B
thedrummer @ NextBit | UnslopNemo 12B
Slug: thedrummer/unslopnemo-12b | HF: TheDrummer/UnslopNemo-12B-v4.1
Context: 32768 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
UnslopNemo v4.1 is the latest addition from the creator of Rocinante, designed for adventure writing and role-play scenarios....
TheDrummer: Valkyrie 49B V1
thedrummer @ NextBit | Valkyrie 49B V1
Slug: thedrummer/valkyrie-49b-v1 | HF: TheDrummer/Valkyrie-49B-v1
Context: 32768 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
Built on top of NVIDIA's Llama 3.3 Nemotron Super 49B, Valkyrie is TheDrummer's newest model drop for creative writing....
Venice: Uncensored (free)
cognitivecomputations @ Venice | Uncensored (free)
Slug: cognitivecomputations/dolphin-mistral-24b-venice-edition | HF: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
Context: 32768 tokens | Free: Yes | Quant: fp16
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p structured_outputs response_format stop frequency_penalty presence_penalty top_k
Reasoning: ✘ | Moderation: ✘ | TOS
Venice Uncensored Dolphin Mistral 24B Venice Edition is a fine-tuned variant of Mistral-Small-24B-Instruct-2501, developed by dphn.ai in collaboration with Venice.ai. This model is designed as an “uncensored” instruct-tuned LLM, preserving user control over alignment, system prompts, and behavior. Intended for advanced and unrestricted use cases, Venice Uncensored emphasizes steerability and transparent behavior, removing default safety and alignment layers typically found in mainstream assistant models....
xAI: Grok 2
x-ai @ - | Grok 2
Slug: x-ai/grok-2 | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Grok 2 is xAI's frontier language model with state-of-the-art reasoning capabilities, best for complex and multi-step use cases.

To use a faster version, see [Grok 2 Mini](/x-ai/grok-2-mini).

For more information, see the [launch announcement](https://x.ai/blog/grok-2)....
xAI: Grok 2 Vision 1212
x-ai @ xAI | Grok 2 Vision 1212
Slug: x-ai/grok-2-vision-1212 | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Grok 2 Vision 1212 advances image-based AI with stronger visual comprehension, refined instruction-following, and multilingual support. From object recognition to style analysis, it empowers developers to build more intuitive, visually aware applications. Its enhanced steerability and reasoning establish a robust foundation for next-generation image solutions.

To read more about this model, check out [xAI's announcement](https://x.ai/blog/grok-1212)....
xAI: Grok 2 mini
x-ai @ - | Grok 2 mini
Slug: x-ai/grok-2-mini | HF:
Context: 32768 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Grok 2 Mini is xAI's fast, lightweight language model that offers a balance between speed and answer quality.

To use the stronger model, see [Grok Beta](/x-ai/grok-beta).

For more information, see the [launch announcement](https://x.ai/blog/grok-2)....
OpenAI: GPT-4 32k
openai @ - | GPT-4 32k
Slug: openai/gpt-4-32k | HF:
Context: 32767 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
GPT-4-32k is an extended version of GPT-4, with the same capabilities but quadrupled context length, allowing for processing up to 40 pages of text in a single pass. This is particularly beneficial for handling longer content like interacting with PDFs without an external vector database. Training data: up to Sep 2021....
OpenAI: GPT-4 32k (older v0314)
openai @ - | GPT-4 32k (older v0314)
Slug: openai/gpt-4-32k-0314 | HF:
Context: 32767 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
GPT-4-32k is an extended version of GPT-4, with the same capabilities but quadrupled context length, allowing for processing up to 40 pages of text in a single pass. This is particularly beneficial for handling longer content like interacting with PDFs without an external vector database. Training data: up to Sep 2021....
Google: PaLM 2 Chat 32k
google @ - | PaLM 2 Chat 32k
Slug: google/palm-2-chat-bison-32k | HF:
Context: 32760 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
PaLM 2 is a language model by Google with improved multilingual, reasoning and coding capabilities....
Google: PaLM 2 Code Chat 32k
google @ - | PaLM 2 Code Chat 32k
Slug: google/palm-2-codechat-bison-32k | HF:
Context: 32760 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
PaLM 2 fine-tuned for chatbot conversations that help with code-related questions....
DeepSeek: Deepseek R1 0528 Qwen3 8B
deepseek @ Nineteen | Deepseek R1 0528 Qwen3 8B
Slug: deepseek/deepseek-r1-0528-qwen3-8b | HF: deepseek-ai/deepseek-r1-0528-qwen3-8b
Context: 32000 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek-R1-0528 is a lightly upgraded release of DeepSeek R1 that uses additional compute and improved post-training techniques, pushing its reasoning and inference close to the level of flagship models such as O3 and Gemini 2.5 Pro.
It now tops math, programming, and logic leaderboards, showing a step-change in depth of thought.
The distilled variant, DeepSeek-R1-0528-Qwen3-8B, transfers this chain-of-thought ability into an 8B-parameter form, beating standard Qwen3 8B by over 10 percentage points and matching the 235B “thinking” variant on AIME 2024....
DeepSeek: R1 Distill Llama 8B
deepseek @ NovitaAI | R1 Distill Llama 8B
Slug: deepseek/deepseek-r1-distill-llama-8b | HF: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
Context: 32000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 Distill Llama 8B is a distilled large language model based on [Llama-3.1-8B-Instruct](/meta-llama/llama-3.1-8b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance across multiple benchmarks, including:

- AIME 2024 pass@1: 50.4
- MATH-500 pass@1: 89.1
- CodeForces Rating: 1205

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models.

Hugging Face:
- [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
...
EVA Qwen2.5 32B
eva-unit-01 @ - | EVA Qwen2.5 32B
Slug: eva-unit-01/eva-qwen-2.5-32b | HF: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
EVA Qwen2.5 32B is a roleplaying/storywriting specialist model. It's a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.

It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model....
Google: Gemma 3 1B
google @ - | Gemma 3 1B
Slug: google/gemma-3-1b-it | HF: google/gemma-3-1b-it
Context: 32000 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3 1B is the smallest of the new Gemma 3 family. It handles context windows up to 32k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Note: Gemma 3 1B is not multimodal. For the smallest multimodal Gemma 3 model, please see [Gemma 3 4B](google/gemma-3-4b-it)...
Inception: Mercury
inception @ Inception | Mercury
Slug: inception/mercury | HF:
Context: 32000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens frequency_penalty presence_penalty stop
Reasoning: ✘ | Moderation: ✘ | TOS
Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed optimized models like GPT-4.1 Nano and Claude 3.5 Haiku while matching their performance. Mercury's speed enables developers to provide responsive user experiences, including with voice agents, search interfaces, and chatbots. Read more in the blog post here. ...
Inception: Mercury Coder
inception @ Inception | Mercury Coder
Slug: inception/mercury-coder | HF:
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens frequency_penalty presence_penalty stop
Reasoning: ✘ | Moderation: ✘ | TOS
Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed means that developers can stay in the flow while coding, enjoying rapid chat-based iteration and responsive code completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the [blog post here](https://www.inceptionlabs.ai/introducing-mercury)....
Inflatebot: Mag Mell R1 12B
inflatebot @ - | Mag Mell R1 12B
Slug: inflatebot/mn-mag-mell-r1 | HF: inflatebot/MN-12B-Mag-Mell-R1
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Mag Mell is a merge of pre-trained language models created using mergekit, based on [Mistral Nemo](/mistralai/mistral-nemo). It is a great roleplay and storytelling model that combines the best parts of many other models into a general-purpose solution for many use cases.

Intended to be a general purpose "Best of Nemo" model for any fictional, creative use case.

Mag Mell is composed of 3 intermediate parts:
- Hero (RP, trope coverage)
- Monk (Intelligence, groundedness)
- Deity (Prose, flair)...
Mistral Medium
mistralai @ - | Mistral Medium
Slug: mistralai/mistral-medium | HF:
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is Mistral AI's closed-source, medium-sized model. It's powered by a closed-source prototype and excels at reasoning, code, JSON, chat, and more. In benchmarks, it compares with many of the flagship models of other companies....
Mistral: Mistral Nemo
mistralai @ Nineteen | Mistral Nemo
Slug: mistralai/mistral-nemo | HF: mistralai/Mistral-Nemo-Instruct-2407
Context: 32000 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p
Reasoning: ✘ | Moderation: ✘ | TOS
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.

The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.

It supports function calling and is released under the Apache 2.0 license....
Morph: Fast Apply
morph @ Morph | Fast Apply
Slug: morph/morph-v2 | HF:
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Morph Apply is a specialized code-patching LLM that merges AI-suggested edits straight into your source files. It can apply updates from GPT-4o, Claude, and others into your files at 4000+ tokens per second.

The model requires the prompt to be in the following format:
<code>${originalCode}</code>\n<update>${updateSnippet}</update>

Learn more about this model in their [documentation](https://docs.morphllm.com/)...
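As a concrete illustration of the prompt format described above, here is a minimal sketch that wraps an original file and an update snippet in the `<code>`/`<update>` tags and sends them to the model through OpenRouter's OpenAI-compatible endpoint; the client setup, environment variable name, and sample code are assumptions, not taken from Morph's documentation.

```python
# Minimal sketch: apply an AI-suggested edit with Morph Fast Apply.
# Prompt format follows the listing: <code>${originalCode}</code>\n<update>${updateSnippet}</update>
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed
)

original_code = """def greet(name):
    print("Hello, " + name)
"""
update_snippet = """def greet(name: str) -> None:
    print(f"Hello, {name}!")
"""

prompt = f"<code>{original_code}</code>\n<update>{update_snippet}</update>"

resp = client.chat.completions.create(
    model="morph/morph-v2",
    messages=[{"role": "user", "content": prompt}],
)

# The response is expected to be the merged file with the edit applied.
print(resp.choices[0].message.content)
```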
Morph: Morph V3 Fast
morph @ Morph | Morph V3 Fast
Slug: morph/morph-v3-fast | HF:
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Morph's fastest apply model for code edits. 4500+ tokens/sec with 96% accuracy for rapid code transformations....
Morph: Morph V3 Large
morph @ Morph | Morph V3 Large
Slug: morph/morph-v3-large | HF:
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Morph's high-accuracy apply model for complex code edits. 2000+ tokens/sec with 98% accuracy for precise code transformations....
OpenGVLab: InternVL3 2B
opengvlab @ - | InternVL3 2B
Slug: opengvlab/internvl3-2b | HF: OpenGVLab/InternVL3-2B
Context: 32000 tokens | Free: No | Quant: n/a
Input: image, text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The 2B version of the InternVL3 series, offering even higher inference speed with very reasonable performance. An advanced multimodal large language model (MLLM) series that demonstrates superior overall performance. Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more....
Qwen: Qwen2.5 VL 72B Instruct
qwen @ Nebius AI Studio | Qwen2.5 VL 72B Instruct
Slug: qwen/qwen2.5-vl-72b-instruct | HF: Qwen/Qwen2.5-VL-72B-Instruct
Context: 32000 tokens | Free: No | Quant: fp8
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k logit_bias logprobs top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5-VL is proficient in recognizing common objects such as flowers, birds, fish, and insects. It is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images....
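Because this endpoint accepts image input, a minimal sketch of a chart-analysis request using the common OpenAI-style multimodal message format is shown below; the image URL is a placeholder and the exact content schema accepted here is an assumption.

```python
# Minimal sketch: send an image plus a question to Qwen2.5 VL 72B via OpenRouter.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed
)

resp = client.chat.completions.create(
    model="qwen/qwen2.5-vl-72b-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the main trend shown in this chart."},
                # Placeholder URL; any publicly reachable image should work in this format.
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    max_tokens=300,
)

print(resp.choices[0].message.content)
```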
Qwen: Qwen3 0.6B
qwen @ - | Qwen3 0.6B
Slug: qwen/qwen3-0.6b-04-28 | HF: Qwen/Qwen3-0.6B
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen3-0.6B is a lightweight, 0.6 billion parameter language model in the Qwen3 series, offering support for both general-purpose dialogue and structured reasoning through a dual-mode (thinking/non-thinking) architecture. Despite its small size, it supports long contexts up to 32,768 tokens and provides multilingual, tool-use, and instruction-following capabilities....
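The thinking/non-thinking dual mode mentioned above can be toggled when running the open weights locally; the sketch below uses Hugging Face Transformers, and the `enable_thinking` flag follows the Qwen3 model-card convention rather than anything stated in this listing.

```python
# Minimal sketch: toggle Qwen3-0.6B between thinking and non-thinking mode locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain what a binary search does."}]

# enable_thinking=False requests the fast, non-thinking mode; True enables step-by-step reasoning.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```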
Qwen: Qwen3 1.7B
qwen @ - | Qwen3 1.7B
Slug: qwen/qwen3-1.7b | HF: Qwen/Qwen3-1.7B
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen3-1.7B is a compact, 1.7 billion parameter dense language model from the Qwen3 series, featuring dual-mode operation for both efficient dialogue (non-thinking) and advanced reasoning (thinking). Despite its small size, it supports 32,768-token contexts and delivers strong multilingual, instruction-following, and agentic capabilities, including tool use and structured output....
Sao10K: Llama 3 Stheno 8B v3.3 32K
sao10k @ - | Llama 3 Stheno 8B v3.3 32K
Slug: sao10k/l3-stheno-8b | HF: Sao10K/L3-8B-Stheno-v3.3-32K
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Stheno 8B 32K is a creative writing/roleplay model from [Sao10k](https://ko-fi.com/sao10k). It was trained at 8K context, then expanded to 32K context.

Compared to the older Stheno version, this model was trained on:
- 2x the amount of creative writing samples
- Cleaned up roleplaying samples
- Fewer low quality samples...
THUDM: GLM 4 32B
thudm @ NovitaAI | GLM 4 32B
Slug: thudm/glm-4-32b | HF: THUDM/GLM-4-32B-0414
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✘ | Moderation: ✘ | TOS
GLM-4-32B-0414 is a 32B bilingual (Chinese-English) open-weight language model optimized for code generation, function calling, and agent-style tasks. Pretrained on 15T of high-quality and reasoning-heavy data, it was further refined using human preference alignment, rejection sampling, and reinforcement learning. The model excels in complex reasoning, artifact generation, and structured output tasks, achieving performance comparable to GPT-4o and DeepSeek-V3-0324 across several benchmarks....
THUDM: GLM 4 9B
thudm @ - | GLM 4 9B
Slug: thudm/glm-4-9b | HF: THUDM/GLM-4-9B-0414
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
GLM-4-9B-0414 is a 9 billion parameter language model from the GLM-4 series developed by THUDM. Trained using the same reinforcement learning and alignment strategies as its larger 32B counterparts, GLM-4-9B-0414 achieves high performance relative to its size, making it suitable for resource-constrained deployments that still require robust language understanding and generation capabilities....
THUDM: GLM Z1 9B
thudm @ - | GLM Z1 9B
Slug: thudm/glm-z1-9b | HF: thudm/glm-z1-9b-0414
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
GLM-Z1-9B-0414 is a 9B-parameter language model developed by THUDM as part of the GLM-4 family. It incorporates techniques originally applied to larger GLM-Z1 models, including extended reinforcement learning, pairwise ranking alignment, and training on reasoning-intensive tasks such as mathematics, code, and logic. Despite its smaller size, it demonstrates strong performance on general-purpose reasoning tasks and outperforms many open-source models in its weight class....
THUDM: GLM Z1 Rumination 32B
thudm @ - | GLM Z1 Rumination 32B
Slug: thudm/glm-z1-rumination-32b | HF: THUDM/GLM-Z1-Rumination-32B-0414
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
THUDM: GLM Z1 Rumination 32B is a 32B-parameter deep reasoning model from the GLM-4-Z1 series, optimized for complex, open-ended tasks requiring prolonged deliberation. It builds upon glm-4-32b-0414 with additional reinforcement learning phases and multi-stage alignment strategies, introducing “rumination” capabilities designed to emulate extended cognitive processing. This includes iterative reasoning, multi-hop analysis, and tool-augmented workflows such as search, retrieval, and citation-aware synthesis.

The model excels in research-style writing, comparative analysis, and intricate qu...
WizardLM-2 7B
microsoft @ - | WizardLM-2 7B
Slug: microsoft/wizardlm-2-7b | HF: microsoft/WizardLM-2-7B
Context: 32000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
WizardLM-2 7B is the smaller variant of Microsoft AI's latest Wizard model. It is the fastest variant and achieves performance comparable to leading open-source models 10x its size.

It is a finetune of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct), using the same technique as [WizardLM-2 8x22B](/models/microsoft/wizardlm-2-8x22b).

To read more about the model release, [click here](https://wizardlm.github.io/WizardLM2/).

#moe...
Perplexity: Llama3 Sonar 70B Online
perplexity @ - | Llama3 Sonar 70B Online
Slug: perplexity/llama-3-sonar-large-32k-online | HF:
Context: 28000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama3 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.

This is the online version of the [offline chat model](/models/perplexity/llama-3-sonar-large-32k-chat). It is focused on delivering helpful, up-to-date, and factual responses. #online...
Perplexity: Llama3 Sonar 8B Online
perplexity @ - | Llama3 Sonar 8B Online
Slug: perplexity/llama-3-sonar-small-32k-online | HF:
Context: 28000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama3 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.

This is the online version of the [offline chat model](/models/perplexity/llama-3-sonar-small-32k-chat). It is focused on delivering helpful, up-to-date, and factual responses. #online...
Lynn: Llama 3 Soliloquy 8B v2
lynn @ - | Llama 3 Soliloquy 8B v2
Slug: lynn/soliloquy-l3 | HF: openlynn/Llama-3-Soliloquy-8B-v2
Context: 24576 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Soliloquy-L3 v2 is a fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities.

Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
NeverSleep: Llama 3 Lumimaid 8B
neversleep @ - | Llama 3 Lumimaid 8B
Slug: neversleep/llama-3-lumimaid-8b | HF: NeverSleep/Llama-3-Lumimaid-8B-v0.1
Context: 24576 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.

To enhance its overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.

Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Meta: Llama 3.2 3B Instruct
meta-llama @ Nineteen | Llama 3.2 3B Instruct
Slug: meta-llama/llama-3.2-3b-instruct | HF: meta-llama/Llama-3.2-3B-Instruct
Context: 20000 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it supports eight languages, including English, Spanish, and Hindi, and is adaptable for additional languages.

Trained on 9 trillion tokens, the Llama 3.2 3B model excels in instruction-following, complex reasoning, and tool use. Its balanced performance makes it ideal for applications needing accuracy and efficiency in text generation across multilingual sett...
OpenAI: GPT-3.5 Turbo
openai @ OpenAI | GPT-3.5 Turbo
Slug: openai/gpt-3.5-turbo | HF:
Context: 16385 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks.

Training data up to Sep 2021....
OpenAI: GPT-3.5 Turbo 16k
openai @ OpenAI | GPT-3.5 Turbo 16k
Slug: openai/gpt-3.5-turbo-16k | HF:
Context: 16385 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✔ | TOS
This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up to Sep 2021....
OpenAI: GPT-3.5 Turbo 16k
openai @ - | GPT-3.5 Turbo 16k
Slug: openai/gpt-3.5-turbo-0125 | HF:
Context: 16385 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Sep 2021.

This version has a higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls....
OpenAI: GPT-3.5 Turbo 16k (older v1106)
openai @ - | GPT-3.5 Turbo 16k (older v1106)
Slug: openai/gpt-3.5-turbo-1106 | HF:
Context: 16385 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
An older GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Sep 2021....
01.AI: Yi Large FC
01-ai @ - | Yi Large FC
Slug: 01-ai/yi-large-fc | HF:
Context: 16384 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Yi Large Function Calling (FC) is a specialized model with tool-use capability. The model decides whether to call a tool based on the tool definitions passed in by the user, and generates the call in the specified format.

It's applicable to various production scenarios that require building agents or workflows....
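A minimal sketch of passing a tool definition to Yi Large FC through an OpenAI-compatible request is shown below; the `tools` request field, the `tool_calls` response attribute, and the `get_weather` function are illustrative assumptions, since this listing does not enumerate the endpoint's supported parameters.

```python
# Minimal sketch: let Yi Large FC decide whether to call a weather tool.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

resp = client.chat.completions.create(
    model="01-ai/yi-large-fc",
    messages=[{"role": "user", "content": "Do I need an umbrella in Beijing today?"}],
    tools=tools,
)

msg = resp.choices[0].message
# If the model chose to call the tool, the generated call appears here (assumption).
print(msg.tool_calls or msg.content)
```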
01.AI: Yi Vision
01-ai @ - | Yi Vision
Slug: 01-ai/yi-vision | HF:
Context: 16384 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Yi Vision is a complex visual-task model that provides high-performance understanding and analysis capabilities based on multiple images.

It's ideal for scenarios that require analysis and interpretation of images and charts, such as image question answering, chart understanding, OCR, visual reasoning, education, research report understanding, or multilingual document reading....
Aetherwiing: Starcannon 12B
aetherwiing @ Featherless | Starcannon 12B
Slug: aetherwiing/mn-starcannon-12b | HF: aetherwiing/MN-12B-Starcannon-v2
Context: 16384 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
Starcannon 12B v2 is a creative roleplay and story writing model, based on Mistral Nemo, using [nothingiisreal/mn-celeste-12b](/nothingiisreal/mn-celeste-12b) as a base, with [intervitens/mini-magnum-12b-v1.1](https://huggingface.co/intervitens/mini-magnum-12b-v1.1) merged in using the [TIES](https://arxiv.org/abs/2306.01708) method.

Although more similar to Magnum overall, the model remains very creative, with a pleasant writing style. It is recommended for people wanting more variety than Magnum, and yet more verbose prose than Celeste....
EVA Llama 3.33 70B
eva-unit-01 @ Featherless | EVA Llama 3.33 70B
Slug: eva-unit-01/eva-llama-3.33-70b | HF: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
Context: 16384 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
EVA Llama 3.33 70B is a roleplay and storywriting specialist model. It is a full-parameter finetune of [Llama-3.3-70B-Instruct](https://openrouter.ai/meta-llama/llama-3.3-70b-instruct) on a mixture of synthetic and natural data.

It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.

This model was built with Llama by Meta.
...
EVA Qwen2.5 72B
eva-unit-01 @ Featherless | EVA Qwen2.5 72B
Slug: eva-unit-01/eva-qwen-2.5-72b | HF: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
Context: 16384 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
EVA Qwen2.5 72B is a roleplay and storywriting specialist model. It's a full-parameter finetune of Qwen2.5-72B on a mixture of synthetic and natural data.

It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model....
Infermatic: Mistral Nemo Inferor 12B
infermatic @ Featherless | Mistral Nemo Inferor 12B
Slug: infermatic/mn-inferor-12b | HF: Infermatic/MN-12B-Inferor-v0.0
Context: 16384 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
Inferor 12B is a merge of top roleplay models that excels at immersive narratives and storytelling.

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [anthracite-org/magnum-v4-12b](https://openrouter.ai/anthracite-org/magnum-v4-72b) as a base.
...
Magnum 72B
alpindale @ Featherless | Magnum 72B
Slug: alpindale/magnum-72b | HF: alpindale/magnum-72b-v1
Context: 16384 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
From the maker of [Goliath](https://openrouter.ai/models/alpindale/goliath-120b), Magnum 72B is the first in a new family of models designed to achieve the prose quality of the Claude 3 models, notably Opus & Sonnet.

The model is based on [Qwen2 72B](https://openrouter.ai/models/qwen/qwen-2-72b-instruct) and trained with 55 million tokens of highly curated roleplay (RP) data....
Magnum v4 72B
anthracite-org @ Mancer (private) | Magnum v4 72B
Slug: anthracite-org/magnum-v4-72b | HF: anthracite-org/magnum-v4-72b
Context: 16384 tokens | Free: No | Quant: fp6
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty logit_bias top_k min_p seed top_a
Reasoning: ✘ | Moderation: ✘ | TOS
This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically [Sonnet](https://openrouter.ai/anthropic/claude-3.5-sonnet) and [Opus](https://openrouter.ai/anthropic/claude-3-opus).

The model is fine-tuned on top of [Qwen2.5 72B](https://openrouter.ai/qwen/qwen-2.5-72b-instruct)....
Microsoft: Phi 4
microsoft @ DeepInfra | Phi 4
Slug: microsoft/phi-4 | HF: microsoft/phi-4
Context: 16384 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
[Microsoft Research](/microsoft) Phi-4 is designed to perform well in complex reasoning tasks and can operate efficiently in situations with limited memory or where quick responses are needed.

At 14 billion parameters, it was trained on a mix of high-quality synthetic datasets, data from curated websites, and academic materials. It has undergone careful improvement to follow instructions accurately and maintain strong safety standards. It works best with English language inputs.

For more information, please see [Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905)
...
Mistral Nemo 12B Celeste
nothingiisreal @ Featherless | Mistral Nemo 12B Celeste
Slug: nothingiisreal/mn-celeste-12b | HF: nothingiisreal/MN-12B-Celeste-V1.9
Context: 16384 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
A specialized story writing and roleplaying model based on Mistral's NeMo 12B Instruct. Fine-tuned on curated datasets including Reddit Writing Prompts and Opus Instruct 25K.

This model excels at creative writing, offering improved NSFW capabilities, with smarter and more active narration. It demonstrates remarkable versatility in both SFW and NSFW scenarios, with strong Out of Character (OOC) steering capabilities, allowing fine-tuned control over narrative direction and character behavior.

Check out the model's [HuggingFace page](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9) f...
Nous: Hermes 2 Theta 8B
nousresearch @ - | Hermes 2 Theta 8B
Slug: nousresearch/hermes-2-theta-llama-3-8b | HF: NousResearch/Hermes-2-Theta-Llama-3-8B
Context: 16384 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
An experimental merge model based on Llama 3, exhibiting a very distinctive style of writing. It combines the best of [Meta's Llama 3 8B](https://openrouter.ai/models/meta-llama/llama-3-8b-instruct) and Nous Research's [Hermes 2 Pro](https://openrouter.ai/models/nousresearch/hermes-2-pro-llama-3-8b).

Hermes-2 Θ (theta) was specifically designed with a few capabilities in mind: executing function calls, generating JSON output, and most remarkably, demonstrating metacognitive abilities (contemplating the nature of thought and recognizing the diversity of cognitive processes among individua...
StarCoder2 15B Instruct
bigcode @ - | StarCoder2 15B Instruct
Slug: bigcode/starcoder2-15b-instruct | HF: bigcode/starcoder2-15b-instruct-v0.1
Context: 16384 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
StarCoder2 15B Instruct excels in coding-related tasks, primarily in Python. It is the first self-aligned open-source LLM developed by BigCode. This model was fine-tuned without any human annotations or distilled data from proprietary LLMs.

The base model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245) and was trained using the [Fill-in-the-Middle](https://arxiv.org/abs/2207.14255) objective on 4+ trillion tokens....
Swallow: Llama 3.1 Swallow 8B Instruct V0.3
tokyotech-llm @ - | Llama 3.1 Swallow 8B Instruct V0.3
Slug: tokyotech-llm/llama-3.1-swallow-8b-instruct-v0.3 | HF: tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3
Context: 16384 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama 3.1 Swallow 8B is a large language model that was built by continual pre-training on the Meta Llama 3.1 8B. Llama 3.1 Swallow enhanced the Japanese language capabilities of the original Llama 3.1 while retaining the English language capabilities.
Swallow used approximately 200 billion tokens that were sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding contents, etc (see the Training Datasets section of the base model) for continual pre-training. The instruction-tuned models (Instruct) were built by su...
Dolphin 2.9.2 Mixtral 8x22B 🐬
cognitivecomputations @ NovitaAI | Dolphin 2.9.2 Mixtral 8x22B 🐬
Slug: cognitivecomputations/dolphin-mixtral-8x22b | HF: cognitivecomputations/dolphin-2.9.2-mixtral-8x22b
Context: 16000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✘ | Moderation: ✘ | TOS
Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a finetune of [Mixtral 8x22B Instruct](/models/mistralai/mixtral-8x22b-instruct). It features a 64k context length and was fine-tuned with a 16k sequence length using ChatML templates.

This model is a successor to [Dolphin Mixtral 8x7B](/models/cognitivecomputations/dolphin-mixtral-8x7b).

The model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post ...
Sao10K: Llama 3.1 70B Hanami x1
sao10k @ - | Llama 3.1 70B Hanami x1
Slug: sao10k/l3.1-70b-hanami-x1 | HF: Sao10K/L3.1-70B-Hanami-x1
Context: 16000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is [Sao10K](/sao10k)'s experiment over [Euryale v2.2](/sao10k/l3.1-euryale-70b)....
SorcererLM 8x22B
raifle @ Infermatic | SorcererLM 8x22B
Slug: raifle/sorcererlm-8x22b | HF: rAIfle/SorcererLM-8x22b-bf16
Context: 16000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty logit_bias top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
SorcererLM is an advanced RP and storytelling model, built as a low-rank 16-bit LoRA fine-tune of [WizardLM-2 8x22B](/microsoft/wizardlm-2-8x22b).

- Advanced reasoning and emotional intelligence for engaging and immersive interactions
- Vivid writing capabilities enriched with spatial and contextual awareness
- Enhanced narrative depth, promoting creative and dynamic storytelling...
OpenGVLab: InternVL3 14B
opengvlab @ Nineteen | InternVL3 14B
Slug: opengvlab/internvl3-14b | HF: OpenGVLab/InternVL3-14B
Context: 12288 tokens | Free: No | Quant: bf16
Input: image, text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p
Reasoning: ✘ | Moderation: ✘ | TOS
The 14B version of the InternVL3 series. An advanced multimodal large language model (MLLM) series that demonstrates superior overall performance. Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more....
RWKV v5 3B AI Town
recursal @ - | RWKV v5 3B AI Town
Slug: recursal/rwkv-5-3b-ai-town | HF: recursal/rwkv-5-3b-ai-town
Context: 10000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is an [RWKV 3B model](/models/rwkv/rwkv-5-world-3b) finetuned specifically for the [AI Town](https://github.com/a16z-infra/ai-town) project.

[RWKV](https://wiki.rwkv.com) is an RNN (recurrent neural network) with transformer-level performance. It aims to combine the best of RNNs and transformers - great performance, fast inference, low VRAM, fast training, "infinite" context length, and free sentence embedding.

RWKV 3B models are provided for free, by Recursal.AI, for the beta period. More details [here](https://substack.recursal.ai/p/public-rwkv-3b-model-via-openrouter).

#rnn...
RWKV v5 World 3B
rwkv @ - | RWKV v5 World 3B
Slug: rwkv/rwkv-5-world-3b | HF: RWKV/rwkv-5-world-3b
Context: 10000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
[RWKV](https://wiki.rwkv.com) is an RNN (recurrent neural network) with transformer-level performance. It aims to combine the best of RNNs and transformers - great performance, fast inference, low VRAM, fast training, "infinite" context length, and free sentence embedding.

RWKV-5 is trained on 100+ world languages (70% English, 15% multilang, 15% code).

RWKV 3B models are provided for free, by Recursal.AI, for the beta period. More details [here](https://substack.recursal.ai/p/public-rwkv-3b-model-via-openrouter).

#rnn...
RWKV v5: Eagle 7B
recursal @ - | Eagle 7B
Slug: recursal/eagle-7b | HF: RWKV/v5-Eagle
Context: 10000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Eagle 7B is trained on 1.1 Trillion Tokens across 100+ world languages (70% English, 15% multilang, 15% code).

- Built on the [RWKV-v5](/models?q=rwkv) architecture (a linear transformer with 10-100x+ lower inference cost)
- Ranks as the world's greenest 7B model (per token)
- Outperforms all 7B class models in multi-lingual benchmarks
- Approaches Falcon (1.5T), LLaMA2 (2T), and Mistral (>2T?) levels of performance in English evals
- Trades blows with MPT-7B (1T) in English evals
- All while being an ["Attention-Free Transformer"](https://www.isattentionallyouneed.com/)

Eagle 7B models are provid...
Google: PaLM 2 Chat
google @ - | PaLM 2 Chat
Slug: google/palm-2-chat-bison | HF:
Context: 9216 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
PaLM 2 is a language model by Google with improved multilingual, reasoning and coding capabilities....
DeepSeek: R1 Distill Llama 70B (free)
deepseek @ Together | R1 Distill Llama 70B (free)
Slug: deepseek/deepseek-r1-distill-llama-70b | HF: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
Context: 8192 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p reasoning include_reasoning stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✔ | Moderation: ✘ | TOS
DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance across multiple benchmarks, including:

- AIME 2024 pass@1: 70.0
- MATH-500 pass@1: 94.5
- CodeForces Rating: 1633

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models....
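
Since reasoning is marked as supported here (with `reasoning` and `include_reasoning` among the accepted parameters), a minimal sketch of requesting the reasoning trace might look like the following. The `include_reasoning` flag and the `reasoning` field on the returned message are assumptions drawn from the parameter list above; check the current OpenRouter docs before relying on them.

```python
import os
import requests

# Sketch: ask the distilled R1 model a question and request its reasoning
# trace. "include_reasoning" and the "reasoning" field on the message are
# assumptions taken from the supported-parameter list above.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": "What is 17 * 24?"}],
        "include_reasoning": True,
        "max_tokens": 512,
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
print("Reasoning:", message.get("reasoning"))
print("Answer:", message["content"])
```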
Dolphin Llama 3 70B 🐬
cognitivecomputations @ - | Dolphin Llama 3 70B 🐬
Slug: cognitivecomputations/dolphin-llama-3-70b | HF: cognitivecomputations/dolphin-2.9.1-llama-3-70b
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a fine-tune of [Llama 3 70B](/models/meta-llama/llama-3-70b-instruct). It demonstrates improvements in instruction, conversation, coding, and function calling abilities, when compared to the original.

The model is uncensored and stripped of alignment and bias; it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-mod...
Google: Gemma 1 2B
google @ - | Gemma 1 2B
Slug: google/gemma-2b-it | HF: google/gemma-2b-it
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 1 2B by Google is an open model built from the same research and technology used to create the [Gemini models](/models?q=gemini).

Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.

Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms)....
Google: Gemma 2 27B
google @ Together | Gemma 2 27B
Slug: google/gemma-2-27b-it | HF: google/gemma-2-27b-it
Context: 8192 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 2 27B by Google is an open model built from the same research and technology used to create the [Gemini models](/models?q=gemini).

Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.

See the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms)....
Google: Gemma 2 9B
google @ Chutes | Gemma 2 9B
Slug: google/gemma-2-9b-it | HF: google/gemma-2-9b-it
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 2 9B by Google is an advanced, open-source language model that sets a new standard for efficiency and performance in its size class.

Designed for a wide variety of tasks, it empowers developers and researchers to build innovative applications, while maintaining accessibility, safety, and cost-effectiveness.

See the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms)....
Google: Gemma 2 9B (free)
google @ Chutes | Gemma 2 9B (free)
Slug: google/gemma-2-9b-it | HF: google/gemma-2-9b-it
Context: 8192 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logprobs logit_bias top_logprobs
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 2 9B by Google is an advanced, open-source language model that sets a new standard for efficiency and performance in its size class.

Designed for a wide variety of tasks, it empowers developers and researchers to build innovative applications, while maintaining accessibility, safety, and cost-effectiveness.

See the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms)....
Google: Gemma 3n 2B (free)
google @ Google AI Studio | Gemma 3n 2B (free)
Slug: google/gemma-3n-e2b-it | HF: google/gemma-3n-E2B-it
Context: 8192 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering 32K context length and strong multilingual and reasoning performance across common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data....
Google: Gemma 3n 4B (free)
google @ Google AI Studio | Gemma 3n 4B (free)
Slug: google/gemma-3n-e4b-it | HF: google/gemma-3n-E4B-it
Context: 8192 tokens | Free: Yes | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n dynamically manages memory usage and computational load by selectively activating model parameters, significantly reducing runtime resource requirements.

This model supports a wide linguistic ran...
Google: Gemma 7B
google @ - | Gemma 7B
Slug: google/gemma-7b-it | HF: google/gemma-1.1-7b-it
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Gemma by Google is an advanced, open-source language model family, leveraging the latest in decoder-only, text-to-text technology. It offers English language capabilities across text generation tasks like question answering, summarization, and reasoning. The Gemma 7B variant is comparable in performance to leading open source models.

Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms)....
Meta: CodeLlama 34B Instruct
meta-llama @ - | CodeLlama 34B Instruct
Slug: meta-llama/codellama-34b-instruct | HF: meta-llama/CodeLlama-34b-Instruct-hf
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Code Llama is built upon Llama 2 and excels at filling in code, handling extensive input contexts, and following programming instructions without prior training for various programming tasks....
Meta: Llama 3 70B (Base)
meta-llama @ - | Llama 3 70B (Base)
Slug: meta-llama/llama-3-70b | HF: meta-llama/Meta-Llama-3-70b
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This is the base 70B pre-trained version.

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Meta: Llama 3 70B Instruct
meta-llama @ DeepInfra | Llama 3 70B Instruct
Slug: meta-llama/llama-3-70b-instruct | HF: meta-llama/Meta-Llama-3-70B-Instruct
Context: 8192 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high quality dialogue usecases.

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Meta: Llama 3 8B (Base)
meta-llama @ - | Llama 3 8B (Base)
Slug: meta-llama/llama-3-8b | HF: meta-llama/Meta-Llama-3-8b
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This is the base 8B pre-trained version.

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Meta: Llama 3 8B Instruct
meta-llama @ DeepInfra | Llama 3 8B Instruct
Slug: meta-llama/llama-3-8b-instruct | HF: meta-llama/Meta-Llama-3-8B-Instruct
Context: 8192 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue usecases.

It has demonstrated strong performance compared to leading closed-source models in human evaluations.

To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Meta: LlamaGuard 2 8B
meta-llama @ Together | LlamaGuard 2 8B
Slug: meta-llama/llama-guard-2-8b | HF: meta-llama/Meta-Llama-Guard-2-8B
Context: 8192 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
This safeguard model has 8B parameters and is based on the Llama 3 family. Just like its predecessor, [LlamaGuard 1](https://huggingface.co/meta-llama/LlamaGuard-7b), it can do both prompt and response classification.

LlamaGuard 2 acts as a normal LLM would, generating text that indicates whether the given input/output is safe/unsafe. If deemed unsafe, it will also share the content categories violated.

For best results, please use raw prompt input or the `/completions` endpoint, instead of the chat API.

It has demonstrated strong performance compared to leading closed-source models in human...
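
As the description recommends raw prompt input via the `/completions` endpoint rather than the chat API, a minimal sketch of such a call might look like this (assuming OpenRouter's standard `/api/v1/completions` endpoint and an `OPENROUTER_API_KEY` environment variable; the classification prompt shown is only a placeholder, not Meta's exact LlamaGuard 2 template):

```python
import os
import requests

# Minimal sketch: classify a user message with LlamaGuard 2 via the raw
# completions endpoint, as the description above recommends. The prompt
# template here is a placeholder, not Meta's exact LlamaGuard 2 format.
resp = requests.post(
    "https://openrouter.ai/api/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meta-llama/llama-guard-2-8b",
        "prompt": (
            "Task: Check whether the following user message is safe.\n"
            "User: How do I bake a chocolate cake?\n"
            "Answer:"
        ),
        "max_tokens": 20,
        "temperature": 0.0,
    },
    timeout=60,
)
resp.raise_for_status()
# The model replies with "safe" or "unsafe", plus violated categories if unsafe.
print(resp.json()["choices"][0]["text"].strip())
```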
Mistral OpenOrca 7B
open-orca @ - | Mistral OpenOrca 7B
Slug: open-orca/mistral-7b-openorca | HF: Open-Orca/Mistral-7B-OpenOrca
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A fine-tune of Mistral using the OpenOrca dataset. First 7B model to beat all other models <30B....
Moonshot AI: Moonlight 16B A3B Instruct
moonshotai @ - | Moonlight 16B A3B Instruct
Slug: moonshotai/moonlight-16b-a3b-instruct | HF: moonshotai/Moonlight-16B-A3B-Instruct
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Moonlight-16B-A3B-Instruct is a 16B-parameter Mixture-of-Experts (MoE) language model developed by Moonshot AI. It is optimized for instruction-following tasks with 3B activated parameters per inference. The model advances the Pareto frontier in performance per FLOP across English, coding, math, and Chinese benchmarks. It outperforms comparable models like Llama3-3B and Deepseek-v2-Lite while maintaining efficient deployment capabilities through Hugging Face integration and compatibility with popular inference engines like vLLM....
NeverSleep: Llama 3 Lumimaid 70B
neversleep @ Featherless | Llama 3 Lumimaid 70B
Slug: neversleep/llama-3-lumimaid-70b | HF: NeverSleep/Llama-3-Lumimaid-70B-v0.1
Context: 8192 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
The NeverSleep team is back, with a Llama 3 70B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.

To enhance its overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.

Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/)....
Noromaid 20B
neversleep @ Mancer (private) | Noromaid 20B
Slug: neversleep/noromaid-20b | HF: NeverSleep/Noromaid-20b-v0.1.1
Context: 8192 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty logit_bias top_k min_p seed top_a
Reasoning: ✘ | Moderation: ✘ | TOS
A collab between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge.

#merge #uncensored...
Nous: Capybara 7B
nousresearch @ - | Capybara 7B
Slug: nousresearch/nous-capybara-7b | HF: NousResearch/Nous-Capybara-7B-V1.9
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Capybara series is a collection of datasets and models made by fine-tuning on data created by Nous, mostly in-house.

V1.9 uses unalignment techniques for more consistent and dynamic control. It also leverages a significantly better foundation model, [Mistral 7B](/models/mistralai/mistral-7b-instruct-v0.1)....
Nous: Hermes 2 Mistral 7B DPO
nousresearch @ - | Hermes 2 Mistral 7B DPO
Slug: nousresearch/nous-hermes-2-mistral-7b-dpo | HF: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This is the flagship 7B Hermes model, a Direct Preference Optimization (DPO) of [Teknium/OpenHermes-2.5-Mistral-7B](/models/teknium/openhermes-2.5-mistral-7b). It shows improvement across the board on all benchmarks tested - AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA.

The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high quality datasets....
OpenChat 3.5 7B
openchat @ - | OpenChat 3.5 7B
Slug: openchat/openchat-7b | HF: openchat/openchat-3.5-0106
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
OpenChat 7B is a library of open-source language models, fine-tuned with "C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)" - a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels.

- For OpenChat fine-tuned on Mistral 7B, check out [OpenChat 7B](/models/openchat/openchat-7b).
- For OpenChat fine-tuned on Llama 8B, check out [OpenChat 8B](/models/openchat/openchat-8b).

#open-source...
OpenChat 3.6 8B
openchat @ - | OpenChat 3.6 8B
Slug: openchat/openchat-8b | HF: openchat/openchat-3.6-8b-20240522
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
OpenChat 8B is a library of open-source language models, fine-tuned with "C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)" - a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels.

It outperforms many similarly sized models including [Llama 3 8B Instruct](/models/meta-llama/llama-3-8b-instruct) and various fine-tuned models. It excels in general conversation, coding assistance, and mathematical reasoning.

- For OpenChat fine-tuned on Mistral 7B, check out [OpenChat 7B](/models/openchat/openchat-7b).
- For OpenChat fi...
OpenHermes 2 Mistral 7B
teknium @ - | OpenHermes 2 Mistral 7B
Slug: teknium/openhermes-2-mistral-7b | HF: teknium/OpenHermes-2-Mistral-7B
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Trained on 900k instructions, it surpasses all previous versions of Hermes 13B and below, and matches 70B on some benchmarks. Hermes 2 has strong multiturn chat skills and system prompt capabilities....
Qwen: Qwen2.5 VL 32B Instruct (free)
qwen @ Alibaba | Qwen2.5 VL 32B Instruct (free)
Slug: qwen/qwen2.5-vl-32b-instruct | HF: Qwen/Qwen2.5-VL-32B-Instruct
Context: 8192 tokens | Free: Yes | Quant: bf16
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p seed response_format presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement learning for enhanced mathematical reasoning, structured outputs, and visual problem-solving capabilities. It excels at visual analysis tasks, including object recognition, textual interpretation within images, and precise event localization in extended videos. Qwen2.5-VL-32B demonstrates state-of-the-art performance across multimodal benchmarks such as MMMU, MathVista, and VideoMME, while maintaining strong reasoning and clarity in text-based tasks like MMLU, mathematical problem-solving, and code generation...
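
Because this entry accepts image input, a hedged sketch of a multimodal request might look like the following, assuming OpenRouter's OpenAI-compatible `image_url` content-part format and an `OPENROUTER_API_KEY` environment variable (the image URL is a placeholder):

```python
import os
import requests

# Sketch: send an image URL plus a text question to Qwen2.5 VL 32B using
# the OpenAI-compatible content-part format. The image URL is a placeholder.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "qwen/qwen2.5-vl-32b-instruct",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What objects are visible in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
        "max_tokens": 200,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```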
Sao10K: Llama 3 8B Lunaris
sao10k @ DeepInfra | Llama 3 8B Lunaris
Slug: sao10k/l3-lunaris-8b | HF: Sao10K/L3-8B-Lunaris-v1
Context: 8192 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty response_format top_k seed min_p
Reasoning: ✘ | Moderation: ✘ | TOS
Lunaris 8B is a versatile generalist and roleplaying model based on Llama 3. It's a strategic merge of multiple models, designed to balance creativity with improved logic and general knowledge.

Created by [Sao10k](https://huggingface.co/Sao10k), this model aims to offer an improved experience over Stheno v3.2, with enhanced creativity and logical reasoning.

For best results, use with Llama 3 Instruct context template, temperature 1.4, and min_p 0.1....
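
As an illustration of the recommended settings above (temperature 1.4 and min_p 0.1, both of which appear in the supported-parameter list), a minimal request sketch, assuming OpenRouter's standard chat completions endpoint and an `OPENROUTER_API_KEY` environment variable, might be:

```python
import os
import requests

# Sketch: call Lunaris 8B with the sampling values suggested above
# (temperature 1.4, min_p 0.1), both listed under supported parameters.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "sao10k/l3-lunaris-8b",
        "messages": [{"role": "user", "content": "Write a short scene set in a lighthouse."}],
        "temperature": 1.4,
        "min_p": 0.1,
        "max_tokens": 300,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```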
Sao10k: Llama 3 Euryale 70B v2.1
sao10k @ NovitaAI | Llama 3 Euryale 70B v2.1
Slug: sao10k/l3-euryale-70b | HF: Sao10K/L3-70B-Euryale-v2.1
Context: 8192 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✘ | Moderation: ✘ | TOS
Euryale 70B v2.1 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k).

- Better prompt adherence.
- Better anatomy / spatial awareness.
- Adapts much better to unique and custom formatting / reply formats.
- Very creative, lots of unique swipes.
- Is not restrictive during roleplays....
Synthia 70B
migtissera @ - | Synthia 70B
Slug: migtissera/synthia-70b | HF: migtissera/Synthia-70B-v1.2b
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
SynthIA (Synthetic Intelligent Agent) is a Llama 2 70B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as long-form conversations....
Typhoon2 70B Instruct
scb10x @ Together | Typhoon2 70B Instruct
Slug: scb10x/llama3.1-typhoon2-70b-instruct | HF: scb10x/llama3.1-typhoon2-70b-instruct
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k repetition_penalty logit_bias min_p response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Llama3.1-Typhoon2-70B-Instruct is a Thai-English instruction-tuned language model with 70 billion parameters, built on Llama 3.1. It demonstrates strong performance across general instruction-following, math, coding, and tool-use tasks, with state-of-the-art results in Thai-specific benchmarks such as IFEval, MT-Bench, and Thai-English code-switching.

The model excels in bilingual reasoning and function-calling scenarios, offering high accuracy across diverse domains. Comparative evaluations show consistent improvements over prior Thai LLMs and other Llama-based baselines. Full results and me...
Typhoon2 8B Instruct
scb10x @ - | Typhoon2 8B Instruct
Slug: scb10x/llama3.1-typhoon2-8b-instruct | HF: scb10x/llama3.1-typhoon2-8b-instruct
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Llama3.1-Typhoon2-8B-Instruct is a Thai-English instruction-tuned model with 8 billion parameters, built on Llama 3.1. It significantly improves over its base model in Thai reasoning, instruction-following, and function-calling tasks, while maintaining competitive English performance. The model is optimized for bilingual interaction and performs well on Thai-English code-switching, MT-Bench, IFEval, and tool-use benchmarks.

Despite its smaller size, it demonstrates strong generalization across math, coding, and multilingual benchmarks, outperforming comparable 8B models across most Thai-speci...
Xwin 70B
xwin-lm @ - | Xwin 70B
Slug: xwin-lm/xwin-lm-70b | HF: Xwin-LM/Xwin-LM-70B-V0.1
Context: 8192 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Xwin-LM aims to develop and open-source alignment tech for LLMs. Our first release, built upon the [Llama2](/models/${Model.Llama_2_13B_Chat}) base models, ranked TOP-1 on AlpacaEval. Notably, it's the first to surpass [GPT-4](/models/${Model.GPT_4}) on this benchmark. The project will be continuously updated....
xAI: Grok Vision Beta
x-ai @ xAI | Grok Vision Beta
Slug: x-ai/grok-vision-beta | HF:
Context: 8192 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✘ | TOS
Grok Vision Beta is xAI's experimental language model with vision capability.

...
OpenAI: GPT-4
openai @ OpenAI | GPT-4
Slug: openai/gpt-4 | HF:
Context: 8191 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✔ | TOS
OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and advanced reasoning capabilities. Training data: up to Sep 2021....
OpenAI: GPT-4 (older v0314)
openai @ OpenAI | GPT-4 (older v0314)
Slug: openai/gpt-4-0314 | HF:
Context: 8191 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021....
Cinematika 7B (alpha)
openrouter @ - | Cinematika 7B (alpha)
Slug: openrouter/cinematika-7b | HF: OpenRouter/cinematika-7b-v0.1
Context: 8000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This model is under development. Check the [OpenRouter Discord](https://discord.gg/fVyRaUDgxW) for updates....
Inflection: Inflection 3 Pi
inflection @ Inflection | Inflection 3 Pi
Slug: inflection/inflection-3-pi | HF:
Context: 8000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop
Reasoning: ✘ | Moderation: ✘ | TOS
Inflection 3 Pi powers Inflection's [Pi](https://pi.ai) chatbot, including backstory, emotional intelligence, productivity, and safety. It has access to recent news, and excels in scenarios like customer support and roleplay.

Pi has been trained to mirror your tone and style; if you use more emojis, so will Pi! Try experimenting with various prompts and conversation styles....
Inflection: Inflection 3 Productivity
inflection @ Inflection | Inflection 3 Productivity
Slug: inflection/inflection-3-productivity | HF:
Context: 8000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop
Reasoning: ✘ | Moderation: ✘ | TOS
Inflection 3 Productivity is optimized for following instructions. It is better for tasks requiring JSON output or precise adherence to provided guidelines. It has access to recent news.

For emotional intelligence similar to Pi, see [Inflection 3 Pi](/inflection/inflection-3-pi).

See [Inflection's announcement](https://inflection.ai/blog/enterprise) for more details....
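
As a sketch of the JSON-output use case mentioned above: since `response_format` is not among the supported parameters here, the structure has to be requested in the prompt itself. The `"title"`/`"summary"` keys below are illustrative only.

```python
import json
import os
import requests

# Sketch: ask Inflection 3 Productivity for JSON by instruction alone and
# parse the reply. The "title"/"summary" keys are illustrative only.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "inflection/inflection-3-productivity",
        "messages": [{
            "role": "user",
            "content": (
                'Summarize this update as JSON with keys "title" and "summary": '
                "The office moves to the new building on Friday."
            ),
        }],
        "temperature": 0.2,
        "max_tokens": 200,
    },
    timeout=60,
)
resp.raise_for_status()
text = resp.json()["choices"][0]["message"]["content"]
print(json.loads(text))  # may raise if the model wraps the JSON in extra prose
```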
Mancer: Weaver (alpha)
mancer @ Mancer (private) | Weaver (alpha)
Slug: mancer/weaver | HF:
Context: 8000 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty logit_bias top_k min_p seed top_a
Reasoning: ✘ | Moderation: ✘ | TOS
An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations....
Noromaid Mixtral 8x7B Instruct
neversleep @ - | Noromaid Mixtral 8x7B Instruct
Slug: neversleep/noromaid-mixtral-8x7b-instruct | HF: NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3
Context: 8000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This model was trained for 8h (v1) + 8h (v2) + 12h (v3) on customized, modified datasets, focusing on RP, uncensoring, and a modified version of Alpaca prompting (already used in LimaRP), which should put it at the same conversational level as ChatLM or Llama2-Chat without adding any additional special tokens....
Qwen: Qwen VL Max
qwen @ Alibaba | Qwen VL Max
Slug: qwen/qwen-vl-max | HF:
Context: 7500 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p seed response_format presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen VL Max is a visual understanding model with 7500 tokens context length. It excels in delivering optimal performance for a broader spectrum of complex tasks.
...
Qwen: Qwen VL Plus
qwen @ Alibaba | Qwen VL Plus
Slug: qwen/qwen-vl-plus | HF:
Context: 7500 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p seed response_format presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
Qwen's Enhanced Large Visual Language Model. Significantly upgraded for detailed recognition capabilities and text recognition abilities, supporting ultra-high pixel resolutions up to millions of pixels and extreme aspect ratios for image input. It delivers significant performance across a broad range of visual tasks.
...
Google: PaLM 2 Code Chat
google @ - | PaLM 2 Code Chat
Slug: google/palm-2-codechat-bison | HF:
Context: 7168 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
PaLM 2 fine-tuned for chatbot conversations that help with code-related questions....
Goliath 120B
alpindale @ NextBit | Goliath 120B
Slug: alpindale/goliath-120b | HF: alpindale/goliath-120b
Context: 6144 tokens | Free: No | Quant: int4
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale.

Credits to
- [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
- [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.

#merge...
ReMM SLERP 13B
undi95 @ NextBit | ReMM SLERP 13B
Slug: undi95/remm-slerp-l2-13b | HF: Undi95/ReMM-SLERP-L2-13B
Context: 6144 tokens | Free: No | Quant: bf16
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
An attempt to recreate the original MythoMax-L2-13B with updated component models. #merge...
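The SLERP in the name refers to spherical linear interpolation of the two parent models' weights. The sketch below is illustrative only (a per-tensor SLERP in plain NumPy on toy tensors), not the actual recipe or tooling used for this model:

```python
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight vectors.

    Illustrative only: real merges (e.g. via mergekit) apply this
    tensor-by-tensor, often with per-layer interpolation ratios.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    omega = np.arccos(dot)                  # angle between the two directions
    if omega < eps:                         # nearly parallel: fall back to lerp
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# Toy usage: interpolate halfway between two random "layers".
a, b = np.random.randn(4096), np.random.randn(4096)
merged = slerp(a, b, t=0.5)
```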
01.AI: Yi Large Turbo
01-ai @ - | Yi Large Turbo
Slug: 01-ai/yi-large-turbo | HF:
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Yi Large Turbo model is a high-performance, cost-effective model, offering powerful capabilities at a competitive price.

It's ideal for a wide range of scenarios, including complex inference and high-quality text generation.

Check out the [launch announcement](https://01-ai.github.io/blog/01.ai-yi-large-llm-launch) to learn more....
Airoboros 70B
jondurbin @ - | Airoboros 70B
Slug: jondurbin/airoboros-l2-70b | HF: jondurbin/airoboros-l2-70b-2.2.1
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A Llama 2 70B fine-tune using synthetic data (the Airoboros dataset).

Currently based on [jondurbin/airoboros-l2-70b](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1), but might get updated in the future....
AlfredPros: CodeLLaMa 7B Instruct Solidity
alfredpros @ Featherless | CodeLLaMa 7B Instruct Solidity
Slug: alfredpros/codellama-7b-instruct-solidity | HF: AlfredPros/CodeLlama-7b-Instruct-Solidity
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
A fine-tuned 7-billion-parameter CodeLlama-Instruct model that generates Solidity smart contracts, trained with 4-bit QLoRA fine-tuning provided by the PEFT library....
AllenAI: Molmo 7B D
allenai @ - | Molmo 7B D
Slug: allenai/molmo-7b-d | HF: allenai/Molmo-7B-D-0924
Context: 4096 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million, highly-curated image-text pairs. It has state-of-the-art performance among multimodal models with a similar size while being fully open-source. You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19). Learn more about the Molmo family [in the announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146).

Molmo 7B-D is based on ...
Chronos Hermes 13B v2
austism @ - | Chronos Hermes 13B v2
Slug: austism/chronos-hermes-13b | HF: Austism/chronos-hermes-13b-v2
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A 75/25 merge of [Chronos 13b v2](https://huggingface.co/elinas/chronos-13b-v2) and [Nous Hermes Llama2 13b](/models/nousresearch/nous-hermes-llama2-13b). This offers the imaginative writing style of Chronos while retaining coherency. Outputs are long and use exceptional prose. #merge...
Cohere: Command
cohere @ Cohere | Command
Slug: cohere/command | HF:
Context: 4096 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty top_k seed response_format structured_outputs
Reasoning: ✘ | Moderation: ✔ | TOS
Command is an instruction-following conversational model that performs language tasks with high quality, more reliably and with a longer context than our base generative models.

Use of this model is subject to Cohere's [Usage Policy](https://docs.cohere.com/docs/usage-policy) and [SaaS Agreement](https://cohere.com/saas-agreement)....
EleutherAI: Llemma 7b
eleutherai @ Featherless | Llemma 7b
Slug: eleutherai/llemma_7b | HF: EleutherAI/llemma_7b
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
Llemma 7B is a language model for mathematics. It was initialized with Code Llama 7B weights, and trained on the Proof-Pile-2 for 200B tokens. Llemma models are particularly strong at chain-of-thought mathematical reasoning and using computational tools for mathematics, such as Python and formal theorem provers....
Fimbulvetr 11B v2
sao10k @ Featherless | Fimbulvetr 11B v2
Slug: sao10k/fimbulvetr-11b-v2 | HF: Sao10K/Fimbulvetr-11B-v2
Context: 4096 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
Creative writing model, routed with permission. It's fast, it keeps the conversation going, and it stays in character.

If you submit a raw prompt, you can use Alpaca or Vicuna formats....
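Since this provider exposes raw text completions (Completions: ✔), a raw prompt can simply be wrapped in one of those templates. A minimal sketch using the widely known Alpaca instruction format (the template text and example instruction are illustrative, not specific to this model):

```python
# Minimal sketch of the standard Alpaca instruction template, usable when
# sending a raw (non-chat) prompt to a completions-capable endpoint.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Continue the scene: the knight lowers her sword and listens."
)
print(prompt)
```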
Fireworks: FireLLaVA 13B
fireworks @ - | FireLLaVA 13B
Slug: fireworks/firellava-13b | HF: fireworks-ai/FireLLaVA-13b
Context: 4096 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A blazing fast vision-language model, FireLLaVA quickly understands both text and images. It achieves impressive chat skills in tests, and was designed to mimic multimodal GPT-4.

The first commercially permissive open source LLaVA model, trained entirely on open source LLM generated instruction following data....
Hugging Face: Zephyr 7B
huggingfaceh4 @ - | Zephyr 7B
Slug: huggingfaceh4/zephyr-7b-beta | HF: HuggingFaceH4/zephyr-7b-beta
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](/models/mistralai/mistral-7b-instruct-v0.1) that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO)....
LLaVA v1.6 34B
liuhaotian @ - | LLaVA v1.6 34B
Slug: liuhaotian/llava-yi-34b | HF: liuhaotian/llava-v1.6-34b
Context: 4096 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
LLaVA Yi 34B is an open-source model trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](/models/nousresearch/nous-hermes-yi-34b)

It was trained in December 2023....
Meta: Llama 2 13B Chat
meta-llama @ - | Llama 2 13B Chat
Slug: meta-llama/llama-2-13b-chat | HF: meta-llama/Llama-2-13b-chat-hf
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A 13 billion parameter language model from Meta, fine tuned for chat completions...
Meta: Llama 2 70B Chat
meta-llama @ - | Llama 2 70B Chat
Slug: meta-llama/llama-2-70b-chat | HF: meta-llama/Llama-2-70b-chat-hf
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The flagship, 70 billion parameter language model from Meta, fine tuned for chat completions. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety....
Midnight Rose 70B
sophosympatheia @ NovitaAI | Midnight Rose 70B
Slug: sophosympatheia/midnight-rose-70b | HF: sophosympatheia/Midnight-Rose-70B-v2.0.3
Context: 4096 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed top_k min_p repetition_penalty logit_bias
Reasoning: ✘ | Moderation: ✘ | TOS
A merge with a complex family tree, this model was crafted for roleplaying and storytelling. Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge produced so far by sophosympatheia.

Descending from earlier versions of Midnight Rose and [Wizard Tulu Dolphin 70B](https://huggingface.co/sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0), it inherits the best qualities of each....
MythoMax 13B
gryphe @ NextBit | MythoMax 13B
Slug: gryphe/mythomax-l2-13b | HF: Gryphe/MythoMax-L2-13b
Context: 4096 tokens | Free: No | Quant: int4
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge...
NVIDIA: Nemotron-4 340B Instruct
nvidia @ - | Nemotron-4 340B Instruct
Slug: nvidia/nemotron-4-340b-instruct | HF: nvidia/Nemotron-4-340B-Instruct
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Nemotron-4-340B-Instruct is an English-language chat model optimized for synthetic data generation. This large language model (LLM) is a fine-tuned version of Nemotron-4-340B-Base, designed for single and multi-turn chat use-cases with a 4,096 token context length.

The base model was pre-trained on 9 trillion tokens from diverse English texts, 50+ natural languages, and 40+ coding languages. The instruct model underwent additional alignment steps:

1. Supervised Fine-tuning (SFT)
2. Direct Preference Optimization (DPO)
3. Reward-aware Preference Optimization (RPO)

The alignment process used ...
Neural Chat 7B v3.1
intel @ - | Neural Chat 7B v3.1
Slug: intel/neural-chat-7b | HF: Intel/neural-chat-7b-v3-1
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A model fine-tuned from [mistralai/Mistral-7B-v0.1](/models/mistralai/mistral-7b-instruct-v0.1) on the open-source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) and aligned with the DPO algorithm. For more details, refer to the blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3)....
Nous: Hermes 13B
nousresearch @ - | Hermes 13B
Slug: nousresearch/nous-hermes-llama2-13b | HF: NousResearch/Nous-Hermes-Llama2-13b
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A state-of-the-art language model fine-tuned on over 300k instructions by Nous Research, with Teknium and Emozilla leading the fine tuning process....
Nous: Hermes 2 Vision 7B (alpha)
nousresearch @ - | Hermes 2 Vision 7B (alpha)
Slug: nousresearch/nous-hermes-2-vision-7b | HF: NousResearch/Nous-Hermes-2-Vision-Alpha
Context: 4096 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
This vision-language model builds on innovations from the popular [OpenHermes-2.5](/models/teknium/openhermes-2.5-mistral-7b) model by Teknium. It adds vision support and is trained on a custom dataset enriched with function calling.

This project is led by [qnguyen3](https://twitter.com/stablequan) and [teknium](https://twitter.com/Teknium1).

#multimodal...
Nous: Hermes 2 Yi 34B
nousresearch @ - | Hermes 2 Yi 34B
Slug: nousresearch/nous-hermes-yi-34b | HF: NousResearch/Nous-Hermes-2-Yi-34B
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Nous Hermes 2 Yi 34B was trained on 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape.

Nous-Hermes 2 on Yi 34B outperforms all Nous-Hermes & Open-Hermes models of the past, achieving new heights in all benchmarks for a Nous Research LLM as well as surpassing many popular finetunes....
Nous: Hermes 70B
nousresearch @ - | Hermes 70B
Slug: nousresearch/nous-hermes-llama2-70b | HF: NousResearch/Nous-Hermes-Llama2-70b
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A state-of-the-art language model fine-tuned on over 300k instructions by Nous Research, with Teknium and Emozilla leading the fine tuning process....
OpenHermes 2.5 Mistral 7B
teknium @ - | OpenHermes 2.5 Mistral 7B
Slug: teknium/openhermes-2.5-mistral-7b | HF: teknium/OpenHermes-2.5-Mistral-7B
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A continuation of the [OpenHermes 2 model](/models/teknium/openhermes-2-mistral-7b), trained on additional code datasets.
Perhaps the most interesting finding from training on a good ratio of code instruction data (estimated at around 7-14% of the total dataset) was that it boosted several non-code benchmarks, including TruthfulQA, AGIEval, and the GPT4All suite. It did, however, reduce the BigBench score, but the overall net gain is significant....
Phind: CodeLlama 34B v2
phind @ - | CodeLlama 34B v2
Slug: phind/phind-codellama-34b | HF: Phind/Phind-CodeLlama-34B-v2
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A fine-tune of CodeLlama-34B on an internal dataset that helps it exceed GPT-4 on some benchmarks, including HumanEval....
Psyfighter 13B
jebcarter @ - | Psyfighter 13B
Slug: jebcarter/psyfighter-13b | HF: jebcarter/Psyfighter-13B
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A merge model based on [Llama-2-13B](/models/meta-llama/llama-2-13b-chat) and made possible thanks to the compute provided by the KoboldAI community. It's a merge between:

- [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter)
- [chaoyi-wu/MedLLaMA_13B](https://huggingface.co/chaoyi-wu/MedLLaMA_13B)
- [Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged](https://huggingface.co/Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged).

#merge...
Psyfighter v2 13B
koboldai @ - | Psyfighter v2 13B
Slug: koboldai/psyfighter-13b-2 | HF: KoboldAI/LLaMA2-13B-Psyfighter2
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The v2 of [Psyfighter](/models/jebcarter/psyfighter-13b) - a merged model created by the KoboldAI community members Jeb Carter and TwistedShadows, made possible thanks to the KoboldAI merge request service.

The intent was to add medical data to supplement the model's fictional ability with more details on anatomy and mental states. This model should not be used for medical advice or therapy because of its high likelihood of pulling in fictional data.

It's a merge between:

- [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter)
- [Doctor-Shotgun/cat-v1.0-13b...
Pygmalion: Mythalion 13B
pygmalionai @ Featherless | Mythalion 13B
Slug: pygmalionai/mythalion-13b | HF: PygmalionAI/mythalion-13b
Context: 4096 tokens | Free: No | Quant: fp8
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
A blend of the new Pygmalion-13b and MythoMax. #merge...
Snowflake: Arctic Instruct
snowflake @ - | Arctic Instruct
Slug: snowflake/snowflake-arctic-instruct | HF: Snowflake/snowflake-arctic-instruct
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Arctic is a dense-MoE Hybrid transformer architecture pre-trained from scratch by the Snowflake AI Research Team. Arctic combines a 10B dense transformer model with a residual 128x3.66B MoE MLP resulting in 480B total and 17B active parameters chosen using a top-2 gating.

To read more about this model's release, [click here](https://www.snowflake.com/blog/arctic-open-efficient-foundation-language-models-snowflake/)....
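As a rough sanity check on those headline numbers (treating each of the 128 experts as roughly 3.66B parameters, as stated above):

```latex
10\,\text{B} + 128 \times 3.66\,\text{B} \approx 478\,\text{B} \;\;(\approx 480\,\text{B total parameters}),\qquad
10\,\text{B} + 2 \times 3.66\,\text{B} \approx 17.3\,\text{B} \;\;(\approx 17\,\text{B active parameters under top-2 gating}).
```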
Toppy M 7B
undi95 @ Featherless | Toppy M 7B
Slug: undi95/toppy-m-7b | HF: Undi95/Toppy-M-7B
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty repetition_penalty top_k min_p seed
Reasoning: ✘ | Moderation: ✘ | TOS
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.
List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- [HuggingFaceH4/zephyr-7b-beta](/models/huggingfaceh4/zephyr-7b-beta)
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora

#merge #uncensored...
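The task_arithmetic method referenced above follows the "task vector" idea: each fine-tune is reduced to its delta from the common base model, the deltas are scaled and summed, and the result is added back to the base. A minimal NumPy sketch of that operation (toy tensors and uniform weights, not Toppy's actual merge recipe):

```python
import numpy as np

def task_arithmetic(base: np.ndarray, finetunes: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Merge several fine-tunes of one base model via task vectors.

    Each task vector is (finetune - base); the merged model is the base
    plus a weighted sum of task vectors. Illustrative sketch only.
    """
    merged = base.copy()
    for theta, w in zip(finetunes, weights):
        merged += w * (theta - base)
    return merged

# Toy usage with random tensors standing in for real checkpoint weights.
rng = np.random.default_rng(0)
base = rng.standard_normal(4096)
models = [base + 0.01 * rng.standard_normal(4096) for _ in range(5)]
merged = task_arithmetic(base, models, weights=[0.2] * 5)
```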
Yi 1.5 34B Chat
01-ai @ - | Yi 1.5 34B Chat
Slug: 01-ai/yi-1.5-34b-chat | HF: 01-ai/Yi-1.5-34B-Chat
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This is an upgraded successor to the Yi 34B model: it is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples....
Yi 34B (base)
01-ai @ - | Yi 34B (base)
Slug: 01-ai/yi-34b | HF: 01-ai/Yi-34B
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This is the base 34B parameter model....
Yi 34B Chat
01-ai @ - | Yi 34B Chat
Slug: 01-ai/yi-34b-chat | HF: 01-ai/Yi-34B-Chat
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This 34B parameter model has been instruct-tuned for chat....
Yi 6B (base)
01-ai @ - | Yi 6B (base)
Slug: 01-ai/yi-6b | HF: 01-ai/Yi-6B
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
The Yi series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). This is the base 6B parameter model....
lzlv 70B
lizpreciatior @ - | lzlv 70B
Slug: lizpreciatior/lzlv-70b-fp16-hf | HF: lizpreciatior/lzlv_70b_fp16_hf
Context: 4096 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
A Mythomax/MLewd_13B-style merge of selected 70B models.
A multi-model merge of several LLaMA2 70B finetunes for roleplaying and creative work. The goal was to create a model that combines creativity with intelligence for an enhanced experience.

#merge #uncensored...
OpenAI: GPT-3.5 Turbo (older v0301)
openai @ - | GPT-3.5 Turbo (older v0301)
Slug: openai/gpt-3.5-turbo-0301 | HF:
Context: 4095 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks.

Training data up to Sep 2021....
OpenAI: GPT-3.5 Turbo (older v0613)
openai @ Azure | GPT-3.5 Turbo (older v0613)
Slug: openai/gpt-3.5-turbo-0613 | HF:
Context: 4095 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p tools tool_choice stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format structured_outputs
Reasoning: ✘ | Moderation: ✘ | TOS
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks.

Training data up to Sep 2021....
OpenAI: GPT-3.5 Turbo Instruct
openai @ OpenAI | GPT-3.5 Turbo Instruct
Slug: openai/gpt-3.5-turbo-instruct | HF:
Context: 4095 tokens | Free: No | Quant: unknown
Input: text | Output: text | Completions: ✔ | Chat:
Supported Params:
max_tokens temperature top_p stop frequency_penalty presence_penalty seed logit_bias logprobs top_logprobs response_format
Reasoning: ✘ | Moderation: ✔ | TOS
This model is a variant of GPT-3.5 Turbo tuned for instructional prompts and omitting chat-related optimizations. Training data: up to Sep 2021....
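Because this variant omits chat optimizations and the listing marks Completions: ✔, it fits the legacy text-completion call shape. A minimal sketch, assuming OpenRouter's OpenAI-compatible /completions route and a hypothetical OPENROUTER_API_KEY environment variable (both assumptions based on the OpenAI-compatible convention rather than anything documented here):

```python
import os

import requests

# Minimal sketch: legacy text-completion request (no chat messages),
# using the slug and a few of the parameters from the Supported Params list.
response = requests.post(
    "https://openrouter.ai/api/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "openai/gpt-3.5-turbo-instruct",
        "prompt": "Write a haiku about static typing.",
        "max_tokens": 64,
        "temperature": 0.7,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```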
Microsoft: Phi-3 Medium 4K Instruct
microsoft @ - | Phi-3 Medium 4K Instruct
Slug: microsoft/phi-3-medium-4k-instruct | HF: microsoft/Phi-3-medium-4k-instruct
Context: 4000 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Phi-3 4K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.

At time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. In the MMLU-Pro eval, the model even comes close to a Llama3 70B level of performance.

For 128k context length, try [Phi-3 Medium 128K](/models/microsoft/phi-3-medium-128k-instruct)....
Mistral: Mistral 7B Instruct v0.1
mistralai @ Cloudflare | Mistral 7B Instruct v0.1
Slug: mistralai/mistral-7b-instruct-v0.1 | HF: mistralai/Mistral-7B-Instruct-v0.1
Context: 2824 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
max_tokens temperature top_p top_k seed repetition_penalty frequency_penalty presence_penalty
Reasoning: ✘ | Moderation: ✘ | TOS
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length....
LLaVA 13B
liuhaotian @ - | LLaVA 13B
Slug: liuhaotian/llava-13b | HF: liuhaotian/llava-v1.6-vicuna-13b
Context: 2048 tokens | Free: No | Quant: n/a
Input: text, image | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
LLaVA is a large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, achieving impressive chat capabilities and setting a new state-of-the-art accuracy on Science QA.

#multimodal...
Meta: CodeLlama 70B Instruct
meta-llama @ - | CodeLlama 70B Instruct
Slug: meta-llama/codellama-70b-instruct | HF: meta-llama/CodeLlama-70b-Instruct-hf
Context: 2048 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Code Llama is a family of large language models for code. This one is based on [Llama 2 70B](/models/meta-llama/llama-2-70b-chat) and provides zero-shot instruction-following ability for programming tasks....
OLMo 7B Instruct
allenai @ - | OLMo 7B Instruct
Slug: allenai/olmo-7b-instruct | HF: allenai/OLMo-7B-Instruct
Context: 2048 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
OLMo 7B Instruct by the Allen Institute for AI is a model finetuned for question answering. It demonstrates **notable performance** across multiple benchmarks including TruthfulQA and ToxiGen.

**Open Source**: The model, code, checkpoints, and logs are released under the [Apache 2.0 license](https://choosealicense.com/licenses/apache-2.0).

- [Core repo (training, inference, fine-tuning etc.)](https://github.com/allenai/OLMo)
- [Evaluation code](https://github.com/allenai/OLMo-Eval)
- [Further fine-tuning code](https://github.com/allenai/open-instruct)
- [Paper](https://arxiv.org/abs/2402.008...
Llama 3.1 Tulu 3 405B
allenai @ - | Llama 3.1 Tulu 3 405B
Slug: allenai/llama-3.1-tulu-3-405b | HF: allenai/Llama-3.1-Tulu-3-405B
Context: 0 tokens | Free: No | Quant: n/a
Input: text | Output: text | Completions: ✘ | Chat:
Supported Params:
Reasoning: ✘ | Moderation: ✘ | TOS
Tülu 3 405B is the largest model in the Tülu 3 family, applying fully open post-training recipes at a 405B parameter scale. Built on the Llama 3.1 405B base, it leverages Reinforcement Learning with Verifiable Rewards (RLVR) to enhance instruction following, MATH, GSM8K, and IFEval performance. As part of Tülu 3’s fully open-source approach, it offers state-of-the-art capabilities while surpassing prior open-weight models like Llama 3.1 405B Instruct and Nous Hermes 3 405B on multiple benchmarks. To read more, [click here.](https://allenai.org/blog/tulu-3-405B)...