LLM API Comparison

| Model | Context/Output | Knowledge Cutoff | $Input/1M | $Output/1M |
|---|---|---|---|---|
| gpt-4.5-preview | 128k/16k | 2023-10 | 75 | 150 |
| o3-mini-2025-01-31 | 200k/100k | 2023-10 | 1.1 | 4.4 |
| o1-2024-12-17 | 200k/100k | 2023-10 | 15 | 60 |
| o1-mini-2024-09-12 | 128k/64k | 2023-10 | 3 | 12 |
| gpt-4o-2024-11-20 | 128k/16k | 2023-10 | 2.5 | 10 |
| gpt-4o-mini-2024-07-18 | 128k/16k | 2023-10 | 0.15 | 0.60 |
| claude-3-7-sonnet-20250219 | 200k/8k (thinking: 64k (128k)) | 2024-10 | 3 | 15 |
| claude-3-5-sonnet-20241022 | 200k/8k | 2024-04 | 3 | 15 |
| claude-3-5-haiku-20241022 | 200k/8k | 2024-07 | 1 | 5 |
| claude-3-opus-20240229 | 200k/4k | 2023-08 | 15 | 75 |
| claude-3-sonnet-20240229 | 200k/4k | 2023-08 | 3 | 15 |
| claude-3-haiku-20240307 | 200k/4k | 2023-08 | 0.25 | 1.25 |
| gemini-2.0-flash | 1M/8k | 2024-08 | 0 (0.1) | 0 (0.4) |
| gemini-2.0-flash-lite-preview-02-05 | 1M/8k | 2024-08 | 0 (0.075) | 0 (0.3) |
| grok-2-1212 | 128k | 2024-07 | 2 | 10 |
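
As a quick way to use the $Input/1M and $Output/1M columns, here is a minimal Python sketch that estimates the cost of a single request from per-million-token prices. The `PRICES` dict and the `request_cost` helper are illustrative names, and the entries simply copy a few rows from the table above (for Gemini, the paid-tier prices in parentheses are used); adjust them as providers update pricing.

```python
# Estimate one request's cost (USD) from per-1M-token prices.
# Prices copied from the table above: (input $/1M, output $/1M).
PRICES = {
    "gpt-4o-2024-11-20": (2.5, 10.0),
    "claude-3-7-sonnet-20250219": (3.0, 15.0),
    "gemini-2.0-flash": (0.1, 0.4),  # paid-tier prices from the table
}


def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request for the given model."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000


# Example: 20k input tokens + 1k output tokens on gpt-4o-2024-11-20
# -> 0.02 * 2.5 + 0.001 * 10 = 0.05 + 0.01 = 0.06 USD
print(f"{request_cost('gpt-4o-2024-11-20', 20_000, 1_000):.4f} USD")
```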