LLM API Comparison

| Model | Context / Max Output | Knowledge Cutoff | Input $ / 1M tokens | Output $ / 1M tokens |
|---|---|---|---|---|
| gpt-5.2-pro | 400k / 128k | 2025-08 | 21 | 168 |
| gpt-5.2 | 400k / 128k | 2025-08 | 1.75 | 14 |
| gpt-5.1 | 400k / 128k | 2024-09 | 1.25 | 10 |
| gpt-5.1-codex | 400k / 128k | 2024-09 | 1.25 | 10 |
| gpt-5.1-codex-max | 400k / 128k | 2024-09 | 1.25 | 10 |
| gpt-5.1-codex-mini | 400k / 128k | 2024-09 | 0.25 | 2 |
| gpt-5-pro | 400k / 272k | 2024-09 | 15 | 120 |
| gpt-5 | 400k / 128k | 2024-09 | 1.25 | 10 |
| gpt-5-codex | 400k / 128k | 2024-09 | 1.25 | 10 |
| gpt-5-mini | 400k / 128k | 2024-05 | 0.25 | 2 |
| gpt-5-nano | 400k / 128k | 2024-05 | 0.05 | 0.4 |
| gpt-4.1 | 1M / 32k | 2024-06 | 2 | 8 |
| claude-opus-4-5 | 200k / 64k | 2025-08 | 5 | 25 |
| claude-sonnet-4-5 | 200k / 64k | 2025-07 | 3 | 15 |
| claude-haiku-4-5 | 200k / 64k | 2025-07 | 1 | 5 |
| gemini-3-pro-preview | 1M / 64k | 2025-01 | (2–4) | (12–18) |
| gemini-2.5-pro | 1M / 64k | 2025-01 | (1.25–2.50) | (10–15) |
| gemini-2.5-flash | 1M / 64k | 2025-01 | (0.3) | (2.5) |
| gemini-2.5-flash-lite | 1M / 64k | 2025-01 | (0.1) | (0.4) |
| grok-4-1-fast-reasoning | 2M | ? | 0.2 | 0.5 |
| grok-4-1-fast-non-reasoning | 2M | ? | 0.2 | 0.5 |
| grok-code-fast-1 | 256k | ? | 0.2 | 1.5 |
| plamo-2.1-prime | 32k | | 60 JPY | 250 JPY |
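
To make the per-1M-token rates concrete, here is a minimal cost-estimation sketch. The rates are copied from a few rows of the table above; the token counts in the example are arbitrary, and tiered pricing (the parenthesized Gemini ranges), cached-input discounts, batch discounts, and the JPY-priced plamo row are not modeled.

```python
# Minimal per-request cost estimator based on the table above.
# Rates are USD per 1M tokens; tiered/cached/batch pricing is ignored.

PRICES_USD_PER_1M = {
    # model: (input rate, output rate)
    "gpt-5.1": (1.25, 10),
    "gpt-5-mini": (0.25, 2),
    "claude-sonnet-4-5": (3, 15),
    "grok-code-fast-1": (0.2, 1.5),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request for the given token counts."""
    in_rate, out_rate = PRICES_USD_PER_1M[model]
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

if __name__ == "__main__":
    # Example: a 20k-token prompt producing a 2k-token completion.
    for model in PRICES_USD_PER_1M:
        cost = estimate_cost(model, input_tokens=20_000, output_tokens=2_000)
        print(f"{model:>20}: ${cost:.4f}")
```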