Meta Llama Action
Generate responses using Llama models.
chat_completion (6 parameters required)
Llama model to use (e.g. llama-3.3-70b, llama-3.2-3b).
Conversation messages.
System instructions.
Maximum tokens to generate.
Sampling temperature (0-2).
Nucleus sampling probability (top-p).
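The six required inputs can be assembled into a single request payload. A minimal sketch, assuming parameter names (`model`, `messages`, `system`, `max_tokens`, `temperature`, `top_p`) inferred from the descriptions above; the actual field names used by the action may differ:

```python
def build_chat_completion_request(model, messages, system=None,
                                  max_tokens=512, temperature=0.7, top_p=0.9):
    """Assemble the six documented inputs into one request dict.

    Field names are assumptions based on the parameter descriptions;
    check the action's schema for the exact keys.
    """
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    payload = {
        "model": model,              # e.g. "llama-3.3-70b"
        "messages": messages,        # conversation messages
        "max_tokens": max_tokens,    # maximum tokens to generate
        "temperature": temperature,  # sampling temperature (0-2)
        "top_p": top_p,              # nucleus sampling probability
    }
    if system is not None:
        payload["system"] = system   # system instructions
    return payload

request = build_chat_completion_request(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Explain nucleus sampling briefly."}],
    system="You are a concise assistant.",
)
```

Defaults shown (512 tokens, temperature 0.7, top-p 0.9) are illustrative placeholders, not documented values.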
4 parameters returned
Generated response.
Tokens in prompt.
Tokens generated.
Why generation stopped.
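The four returned parameters can be read back from the action's output. A hypothetical response shape, with field names (`response`, `prompt_tokens`, `completion_tokens`, `stop_reason`) assumed from the descriptions above:

```python
# Hypothetical output of the chat_completion action; field names and
# values are illustrative, not taken from a real response.
result = {
    "response": "Nucleus sampling restricts generation to the top-p tokens.",
    "prompt_tokens": 24,       # tokens in prompt
    "completion_tokens": 12,   # tokens generated
    "stop_reason": "stop",     # why generation stopped
}

# Total token usage is the sum of the two counters.
total_tokens = result["prompt_tokens"] + result["completion_tokens"]
```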