Model Benchmarks

Models

| Model | MMAE 2024 | GPQA | SWE Bench | MATH 500 | BFCL | Aide Peugeot |
|---|---|---|---|---|---|---|
| GPT-4o | 51.4% | 66.1% | 31% | 60.3% | 72.09% | 27% |
| Claude 3.5 Sonnet | 18% | 65% | 49% | 76% | 65.46% | 55.6% |
| Claude 3.7 Sonnet | 23.3% | 68% | 62.3% | 82.2% | 68.3% | 60.4% |
| GPT-4.5 | 36.7% | 74.1% | 39% | n/a | 69.94% | 44.6% |
| DeepSeek V3 0324 | 58.4% | 64.9% | 38.6% | 81% | 68.05% | 58.1% |
| Claude 3.7 Sonnet [P] | 61.3% | 78.2% | 73.3% | 86.5% | 83.3% | 64.9% |
| OpenAI o1-mini | 63.6% | 60% | n/a | 90% | 62.2% | 52.8% |
| OpenAI o1 | 79.2% | 79.7% | 46.6% | 95.4% | 67.8% | 67% |
| OpenAI o1-mini-2 | 87.3% | 79.7% | 61% | 97.6% | 65.12% | 60.4% |
| Gemini 2.0 Pro | 52% | 84% | 63.6% | n/a | n/a | 72.5% |
| Gemini 3 (Beta) | 93.3% | 84.8% | n/a | n/a | n/a | n/a |
| Llama 4 Behemoth | n/a | 73.7% | n/a | 95% | n/a | n/a |
| Llama 4 Scout | n/a | 87.2% | n/a | n/a | n/a | n/a |
| Llama 4 Maverick | n/a | 88.8% | n/a | n/a | n/a | 58.6% |
| Gemini 3 Pro | n/a | 42.4% | 52.2% | 69% | n/a | 4.8% |
| Qwen 2.5-VL-32B | n/a | 46% | 18.8% | 82.2% | n/a | 69.84% |
| Gemini 2.0 Flash | n/a | 62.1% | 51.8% | 83.7% | 60.42% | 22.2% |
| Llama 3.1 70b | n/a | 50.5% | n/a | 77% | 77.5% | 61.45% |
| Nous Pro | n/a | 46.8% | n/a | 76.6% | 68.4% | 61.38% |
| Claude 3.5 Haiku | n/a | 49.8% | 40.5% | 68.4% | 64.31% | 28% |
| Llama 3.1 405b | n/a | 49% | n/a | 73.8% | 81.1% | n/a |
| GPT-4o-mini | n/a | 40.2% | n/a | 70.2% | 64.1% | 3.6% |
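For readers who want to slice these scores programmatically, here is a minimal sketch in Python. The `SCORES` subset is hand-copied from the table above (with `None` standing in for n/a), and the `leader` helper is purely illustrative; neither comes from any published evaluation harness.

```python
# Illustrative only: a subset of the table above, hand-copied.
# None marks an "n/a" cell; extend to the full table the same way.
SCORES = {
    "GPT-4o":                {"MMAE 2024": 51.4, "GPQA": 66.1, "SWE Bench": 31.0,
                              "MATH 500": 60.3, "BFCL": 72.09, "Aide Peugeot": 27.0},
    "Claude 3.7 Sonnet [P]": {"MMAE 2024": 61.3, "GPQA": 78.2, "SWE Bench": 73.3,
                              "MATH 500": 86.5, "BFCL": 83.3, "Aide Peugeot": 64.9},
    "OpenAI o1-mini-2":      {"MMAE 2024": 87.3, "GPQA": 79.7, "SWE Bench": 61.0,
                              "MATH 500": 97.6, "BFCL": 65.12, "Aide Peugeot": 60.4},
    "Gemini 3 (Beta)":       {"MMAE 2024": 93.3, "GPQA": 84.8, "SWE Bench": None,
                              "MATH 500": None, "BFCL": None, "Aide Peugeot": None},
}

def leader(benchmark: str) -> tuple[str, float]:
    """Return (model, score) with the highest score on `benchmark`, skipping n/a."""
    scored = [(m, s[benchmark]) for m, s in SCORES.items() if s[benchmark] is not None]
    return max(scored, key=lambda pair: pair[1])

if __name__ == "__main__":
    for bench in ["MMAE 2024", "GPQA", "SWE Bench", "MATH 500", "BFCL", "Aide Peugeot"]:
        model, score = leader(bench)
        print(f"{bench}: {model} ({score}%)")
```

The only real subtlety here is the n/a handling: models missing a score are excluded from that benchmark's ranking rather than treated as zero.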

Model Details

GPT-4o (OpenAI)

Multimodal model with mid-pack academic and mathematical scores (MMAE 2024 51.4%, MATH 500 60.3%). Strongest in logical reasoning (BFCL 72.09%) but comparatively weak in automotive domain tasks (Aide Peugeot 27%).

Claude 3.5 Sonnet (Anthropic)

Strong coding capabilities (SWE Bench 49%) and mathematical proficiency (MATH 500 76%) for its generation, though its MMAE 2024 score (18%) is the lowest reported in the table. Improved automotive domain understanding (Aide Peugeot 55.6%) compared to previous versions.

Claude 3.7 Sonnet [P] (Anthropic)

Enhanced version showing dramatic improvements across all benchmarks, particularly in software engineering (SWE Bench 73.3%) and foundational logic (BFCL 83.3%). Potential proprietary variant.

DeepSeek V3 0324

Strong all-rounder with a high MMAE 2024 score for a general-purpose model (58.4%) and excellent mathematical capabilities (MATH 500 81%). Maintains consistent performance across domains.

OpenAI o1 Series

Next-gen models showing breakthrough performance in mathematics (o1-mini-2: MATH 500 97.6%). The o1-mini-2 variant posts the second-highest MMAE 2024 score in the table (87.3%, behind only Gemini 3 Beta) while maintaining strong coding abilities (SWE Bench 61%).

Gemini 3 Series (Google)

The beta version posts the highest MMAE 2024 score in the table (93.3%) but its benchmarking is incomplete. The Pro variant's results are mixed: a respectable SWE Bench score (52.2%) alongside weak GPQA (42.4%) and near-zero automotive performance (Aide Peugeot 4.8%).

Llama 4 Series (Meta)

Specialized variants, with the Maverick edition leading the table in GPQA (88.8%). The Behemoth model shows strong mathematical reasoning (MATH 500 95%), while the Scout variant also scores highly on GPQA (87.2%).

Qwen 2.5-VL-32B

Vision-language model with surprisingly strong automotive domain performance (Aide Peugeot 69.84%). Shows solid mathematical capabilities (MATH 500 82.2%) despite weak programming scores (SWE Bench 18.8%).

Nous Pro

General-purpose model with consistent mid-range performance across benchmarks. Shows particular strength in automotive applications (Aide Peugeot 61.38%) compared to similar-sized models.

Llama 3.1 Series

Large-scale models showing strong foundational logic capabilities (405b: BFCL 81.1%). The 70b variant maintains balanced performance across multiple domains.