2026-04-17





Source: https://huggingface.co
Exploring google/gemma-3-27b-it on Hugging Face
Simulating 35B parameters at FP32 (9.38MB)
Simulating 27B parameters at FP32 (5.70MB)
Simulating 9B parameters at FP32 (1.90MB)
Simulating 4B parameters at FP32 (0.84MB)
Simulating 2B parameters at FP32 (0.41MB)
Simulating 0.8B parameters at FP32 (0.16MB)
Simulating 35B parameters at FP32 (9.38MB)
Simulating 35B parameters using Q8_0 (8-bit) quantization (2.49MB)
Simulating 35B parameters using Q4_0 (4-bit) quantization (1.32MB)
Simulating 35B parameters using Q2 (2-bit) quantization (0.73MB)
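The simulated file sizes above are scaled-down stand-ins; the real weight footprint is just parameter count times bits per weight. A minimal sketch of that arithmetic (quantization block overhead is ignored, so actual GGUF files come out slightly larger):

```python
# Back-of-the-envelope weight sizes for a model at different precisions.
BITS_PER_WEIGHT = {"FP32": 32, "FP16": 16, "Q8_0": 8, "Q4_0": 4, "Q2": 2}

def weight_gb(params_billion: float, fmt: str) -> float:
    """Estimated size of the weights alone, in decimal gigabytes."""
    # params_billion * 1e9 weights * (bits / 8) bytes, divided by 1e9 bytes/GB
    return params_billion * BITS_PER_WEIGHT[fmt] / 8

for fmt in ("FP32", "Q8_0", "Q4_0", "Q2"):
    print(f"27B @ {fmt}: {weight_gb(27, fmt):.1f} GB")
```

This is why a 27B model needs roughly 108 GB at FP32 but only about 13.5 GB at 4-bit, bringing it within reach of a single consumer machine.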
Exploring unsloth/gemma-3-27b-it-GGUF and mlx-community/gemma-3-27b-it-qat-4bit on Hugging Face
Browsing, downloading, and running Gemma3-27B on LM Studio
Vision Processing using local Gemma3-27B
A local Qwen TTS model working alongside Gemma3-27B
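LM Studio exposes an OpenAI-compatible server (default `http://localhost:1234/v1`), so the vision demo above can be driven from a few lines of Python. A sketch under assumptions: the model name (`gemma-3-27b-it`), the port, and the PNG image type all come from a default LM Studio setup and should be checked against your instance.

```python
import base64
import json
from urllib import request

def vision_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat request with an inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def ask(payload: dict) -> str:
    """POST the payload to a local LM Studio server and return the reply text."""
    req = request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `ask(vision_payload("gemma-3-27b-it", "Describe this image.", open("photo.png", "rb").read()))` requires LM Studio running with the model loaded; the payload builder itself works offline.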
Exploring Qwen/Qwen3.5-35B-A3B-GGUF on Hugging Face
Running OpenCode with Qwen3.5-35B-A3B local model

| Benchmark | Qwen3.5-27B | GPT-5-mini | GPT-OSS-120B |
|---|---|---|---|
| MMLU-Pro | 86.1% | 83.7% | 80.8% |
| GPQA Diamond | 85.5% | 82.8% | 80.1% |
| SWE-bench Verified | 72.4% | 72.0% | 62.0% |
| LiveCodeBench v6 | 80.7% | 80.5% | 82.7% |

| Benchmark | Gemma 4 31B | Gemini 2.5 Pro | Claude 4 Opus |
|---|---|---|---|
| MMLU-Pro | 85.2% | — | — |
| GPQA Diamond | 84.3% | 86.4% | 79.6% |
| LiveCodeBench v6 | 80.0% | 72.5% | 48.9% |
| AIME 2026 | 89.2% | — | — |

| Model | SWE-bench Verified | Open? |
|---|---|---|
| Claude Opus 4.5 | 77.8% | No |
| Qwen3.5-27B | 72.4% | Yes |
| Claude Sonnet 4 | 70.4% | No |
| Qwen3-Coder (480B) | 69.6% | Yes |
| GPT-OSS-120B | 62.0% | Partially |