Compact yet powerful open-source model with 24B parameters and a 128K-token context window. Offers multimodal understanding and strong instruction following at edge-level efficiency; a minimal usage sketch follows below.
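
As a rough illustration of how a text-generation model of this kind is typically run, the sketch below loads it through the Hugging Face `transformers` API. The repository id `org/model-24b-instruct`, the prompt, and the generation settings are placeholders and assumptions, not the model's actual name or recommended configuration.

```python
# Minimal inference sketch, assuming a chat-tuned checkpoint published on the
# Hugging Face Hub under a hypothetical id "org/model-24b-instruct".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-24b-instruct"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread weights across available GPUs / CPU
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Build a chat-formatted prompt and generate a short instruction-following reply.
messages = [{"role": "user", "content": "Summarize the key points of this report."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```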