LLM API Comparison

| Model | Context / Output | Knowledge Cutoff | Input $/1M | Output $/1M |
|---|---|---|---|---|
| gpt-5.5-pro | 1M / 128k | 2025-12 | 30 | 180 |
| gpt-5.5 | 1M / 128k | 2025-12 | 5 | 30 |
| gpt-5.4-pro | 1M / 128k | 2025-08 | 30 | 180 |
| gpt-5.4 | 1M / 128k | 2025-08 | 2.5 | 15 |
| gpt-5.4-mini | 400k / 128k | 2025-08 | 0.75 | 4.5 |
| gpt-5.4-nano | 400k / 128k | 2025-08 | 0.2 | 1.25 |
| gpt-4.1 | 1M / 32k | 2024-06 | 2 | 8 |
| claude-opus-4-7 | 1M / 128k | 2026-01 | 5 | 25 |
| claude-sonnet-4-6 | 1M / 64k | 2025-08 | 3 | 15 |
| claude-haiku-4-5 | 200k / 64k | 2025-02 | 1 | 5 |
| gemini-3.1-pro-preview | 1M / 64k | 2025-01 | 2–4 | 12–18 |
| gemini-3-pro-preview | 1M / 64k | 2025-01 | 2–4 | 12–18 |
| gemini-3-flash-preview | 1M / 64k | 2025-01 | (0.5) | (3) |
| gemini-2.5-pro | 1M / 64k | 2025-01 | (1.25–2.50) | (10–15) |
| gemini-2.5-flash | 1M / 64k | 2025-01 | (0.3) | (2.5) |
| gemini-2.5-flash-lite | 1M / 64k | 2025-01 | (0.1) | (0.4) |
| grok-4.3 | 1M | ? | 1.25 | 2.5 |
| plamo-2.2-prime | 32k / 4k | | ¥60 | ¥250 |

Prices are USD per 1M tokens unless noted; plamo-2.2-prime is priced in JPY (円).
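To see how these per-1M-token rates translate into the cost of a single request, here is a minimal sketch. The prices come from the table above; the token counts in the example are made-up, and the `request_cost` helper is illustrative, not part of any provider SDK:

```python
# Per-1M-token prices (USD), copied from the table above.
PRICES = {
    "gpt-5.4":           {"input": 2.5, "output": 15.0},
    "claude-sonnet-4-6": {"input": 3.0, "output": 15.0},
    "gpt-5.4-nano":      {"input": 0.2, "output": 1.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10k-token prompt with a 1k-token completion on gpt-5.4.
print(f"${request_cost('gpt-5.4', 10_000, 1_000):.4f}")  # → $0.0400
```

Note that output tokens dominate quickly: at these rates, the 1k-token completion costs more than half as much as the 10k-token prompt.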