A natively multimodal, instruction-tuned model built on a mixture-of-experts (MoE) architecture with 17B active parameters (128 experts, ~400B total parameters). It accepts text and image inputs and produces text output.

Llama 4 Maverick 17B 128E Instruct
Provider: Azure AI
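Since the model takes text plus image inputs, a request typically pairs a text prompt with an inline image in a chat-completions style payload. The sketch below builds such a payload; the model name, field names, and schema follow the widely used OpenAI-compatible chat-completions format and are assumptions here, so verify them against the provider's documentation before use.

```python
import base64
import json


def build_multimodal_request(
    prompt: str,
    image_bytes: bytes,
    model: str = "Llama-4-Maverick-17B-128E-Instruct",  # assumed deployment name
) -> dict:
    """Build a chat-completions style payload pairing text with an inline image.

    The payload shape (messages with mixed "text" and "image_url" content
    parts) follows the common OpenAI-compatible schema; confirm it matches
    your provider's API reference.
    """
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        # Inline the image as a base64 data URL.
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 512,
    }


# Example: construct (but do not send) a request payload.
payload = build_multimodal_request("Describe this image.", b"\x89PNG-placeholder")
print(json.dumps(payload["messages"][0]["content"][0], indent=2))
```

The returned dict can then be POSTed as JSON to the provider's chat-completions endpoint with any HTTP client.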