Model Benchmarks
Benchmark results for various AI models across different evaluation categories.
Model | MMAE 2024 | GPQA | SWE Bench | MATH 500 | BFCL | Aide Peugeot |
---|---|---|---|---|---|---|
GPT-4o | 51.4% | 66.1% | 31% | 60.3% | 72.09% | 27% |
Claude 3.5 Sonnet | 18% | 65% | 49% | 76% | 65.46% | 55.6% |
Claude 3.7 Sonnet | 23.3% | 68% | 62.3% | 82.2% | 68.3% | 60.4% |
GPT-4.5 | 36.7% | 74.1% | 39% | n/a | 69.94% | 44.6% |
DeepSeek V3 G324 | 58.4% | 64.9% | 38.6% | 81% | 68.05% | 58.1% |
Claude 3.7 Sonnet [P] | 61.3% | 78.2% | 73.3% | 86.5% | 83.3% | 64.9% |
OpenAI o1-mini | 63.6% | 60% | n/a | 90% | 62.2% | 52.8% |
OpenAI o1 | 79.2% | 79.7% | 46.6% | 95.4% | 67.8% | 67% |
OpenAI o1-mini-2 | 87.3% | 79.7% | 61% | 97.6% | 65.12% | 60.4% |
Gemini 2.0 Pro | 52% | 84% | 63.6% | n/a | n/a | 72.5% |
Gemini 3 (Beta) | 93.3% | 84.8% | n/a | n/a | n/a | n/a |
Llama 4 Behemoth | n/a | 73.7% | n/a | 95% | n/a | n/a |
Llama 4 Scout | n/a | 87.2% | n/a | n/a | n/a | n/a |
Llama 4 Maverick | n/a | 88.8% | n/a | n/a | n/a | 58.6% |
Gemini 3 Pro | n/a | 42.4% | 52.2% | 69% | n/a | 4.8% |
Qwen 2.5-VL-32B | n/a | 46% | 18.8% | 82.2% | n/a | 69.84% |
Gemini 2.0 Flash | n/a | 62.1% | 51.8% | 83.7% | 60.42% | 22.2% |
Llama 3.1 70b | n/a | 50.5% | n/a | 77% | 77.5% | 61.45% |
Nous Pro | n/a | 46.8% | n/a | 76.6% | 68.4% | 61.38% |
Claude 3.5 Haiku | n/a | 49.8% | 40.5% | 68.4% | 64.31% | 28% |
Llama 3.1 405b | n/a | 49% | n/a | 73.8% | 81.1% | n/a |
GPT-4o-mini | n/a | 40.2% | n/a | 70.2% | 64.1% | 3.6% |
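The table mixes percentages with "n/a" entries, so any cross-model comparison has to skip missing cells. Below is a minimal Python sketch of one way to do that; the `SCORES` subset (four rows copied from the table above) and the `best_per_benchmark` helper are illustrative choices, not part of the source.

```python
# Minimal sketch (not from the source): rank models per benchmark using a
# subset of the rows from the table above. "n/a" cells are stored as None.
BENCHMARKS = ["MMAE 2024", "GPQA", "SWE Bench", "MATH 500", "BFCL", "Aide Peugeot"]

SCORES = {
    "GPT-4o":                [51.4, 66.1, 31.0, 60.3, 72.09, 27.0],
    "Claude 3.7 Sonnet [P]": [61.3, 78.2, 73.3, 86.5, 83.3, 64.9],
    "OpenAI o1":             [79.2, 79.7, 46.6, 95.4, 67.8, 67.0],
    "Gemini 3 (Beta)":       [93.3, 84.8, None, None, None, None],
}

def best_per_benchmark(scores):
    """Return {benchmark: (model, score)} for the top score in each column,
    skipping models with no result on that benchmark."""
    best = {}
    for i, bench in enumerate(BENCHMARKS):
        ranked = [(model, row[i]) for model, row in scores.items() if row[i] is not None]
        if ranked:
            best[bench] = max(ranked, key=lambda pair: pair[1])
    return best

if __name__ == "__main__":
    for bench, (model, score) in best_per_benchmark(SCORES).items():
        print(f"{bench}: {model} ({score}%)")
```

Storing missing results as `None` rather than 0 keeps an absent score from dragging a model to the bottom of a ranking.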
Model Details
GPT-4o (OpenAI)
Multimodal model with solid performance on academic and mathematical evaluations (MMAE 2024 51.4%, MATH 500 60.3%). Strong at function calling (BFCL 72.09%) but comparatively weak on automotive domain tasks (Aide Peugeot 27%).
Claude 3.5 Sonnet (Anthropic)
Balanced model with strong coding capabilities (SWE Bench 49%) and mathematical proficiency (MATH 500 76%). Improved automotive domain understanding (Aide Peugeot 55.6%) compared to previous versions.
Claude 3.7 Sonnet [P] (Anthropic)
Enhanced version showing dramatic improvements across all benchmarks, particularly in software engineering (SWE Bench 73.3%) and function calling (BFCL 83.3%). Potentially a proprietary variant.
DeepSeek V3 G324
Strong all-rounder with a solid MMAE 2024 score (58.4%) and excellent mathematical capabilities (MATH 500 81%). Maintains consistent performance across domains.
OpenAI o1 Series
Next-gen models showing breakthrough performance in mathematics (o1-mini-2: MATH 500 97.6%). The o1-mini-2 variant posts the second-highest MMAE 2024 score in the table (87.3%, behind Gemini 3 Beta) while maintaining strong coding abilities.
Gemini 3 Series (Google)
The Beta version posts the highest MMAE 2024 score in the table (93.3%) but has incomplete benchmark coverage. The Pro variant shows mixed results, with notably weak GPQA (42.4%) and automotive (Aide Peugeot 4.8%) scores.
Llama 4 Series (Meta)
Specialized variants, with the Maverick edition leading in GPQA (88.8%). The Behemoth model shows strong mathematical reasoning (MATH 500 95%), while the Scout variant's only reported result is a strong GPQA score (87.2%).
Qwen 2.5-VL-32B
Vision-language model with surprisingly strong automotive domain performance (Aide Peugeot 69.84%). Shows decent mathematical capabilities (MATH 500 82.2%) despite a weak software engineering score (SWE Bench 18.8%).
Nous Pro
General-purpose model with consistent mid-range performance across benchmarks. Shows particular strength in automotive applications (Aide Peugeot 61.38%) compared to similarly sized models.
Llama 3.1 Series
Large-scale models showing strong function-calling capabilities (405b: BFCL 81.1%). The 70b variant maintains balanced performance across multiple domains.