Meta
Llama 4 Maverick (17Bx128E)
meta-llama/llama-4-maverick
The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models use a mixture-of-experts (MoE) architecture to offer industry-leading performance in text and image understanding, and they mark the beginning of a new era for the Llama ecosystem. The series launches with two efficient models: Llama 4 Scout, a model with 17 billion active parameters and 16 experts, and Llama 4 Maverick, a model with 17 billion active parameters and 128 experts.
Capability: Vision Support
Context Window: 10,000,000 tokens
Max Output Tokens: 328,000
Using Llama 4 Maverick (17Bx128E) with the OpenAI-compatible API (Python)
import openai

client = openai.Client(
    api_key="{your_api_key}",
    base_url="https://api.model.box/v1",
)

response = client.chat.completions.create(
    model="meta-llama/llama-4-maverick",
    messages=[
        {
            "role": "user",
            "content": "Introduce yourself",
        },
    ],
)

# Print the full response object; use response.choices[0].message.content for just the reply text.
print(response)
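Because the model lists Vision Support, an image can be passed alongside text in a chat message. The sketch below assumes the endpoint accepts the OpenAI-style multimodal content format (a list of text and image_url parts); the image URL and prompt are placeholders, not part of the official documentation.

import openai

client = openai.Client(
    api_key="{your_api_key}",
    base_url="https://api.model.box/v1",
)

# Assumes OpenAI-style multimodal messages are supported by the endpoint.
response = client.chat.completions.create(
    model="meta-llama/llama-4-maverick",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    # Placeholder URL; replace with a publicly reachable image.
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        },
    ],
)

print(response.choices[0].message.content)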