API Reference
Evaluate Prompt
Operations for evaluate_prompts
POST /evaluate_prompt/predict
Evaluate an AI Prompt
Request Body (application/json)
messages[role] (array<string>, required)
messages[content] (array<string>, required)
max_tokens (integer, format: int32, default: 300)
Maximum number of output tokens; maximum 400.
temperature (number, format: float)
How creative the response should be, between 0 and 2; lower values are less creative.
system (string)
For Anthropic, the system prompt to use.
model_kind (string, default: "openai")
Which model provider should be used. One of "openai" or "anthropic".
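Taken together, these fields can be modeled as a single request type. Below is a minimal TypeScript sketch; the type name is invented here, and optionality simply mirrors the required markers above.

export interface EvaluatePromptRequest {
  // Required message fields; the keys use the documented form-style names.
  "messages[role]": string[];
  "messages[content]": string[];
  // Optional tuning parameters with the defaults listed above.
  max_tokens?: number;                 // int32, default 300, maximum 400
  temperature?: number;                // float, 0 to 2; lower is less creative
  system?: string;                     // Anthropic system prompt
  model_kind?: "openai" | "anthropic"; // default "openai"
}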
curl -X POST "https://app.gitbutler.com/api/evaluate_prompt/predict" \
  -H "Content-Type: application/json" \
  -d '{
    "messages[role]": [
      "string"
    ],
    "messages[content]": [
      "string"
    ],
    "max_tokens": 300,
    "temperature": 0.1,
    "system": "string",
    "model_kind": "openai"
  }'
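The same call can be made from TypeScript with the Fetch API. This is a sketch only: the example role and content values are placeholders, and the response schema is not documented on this page, so the result is just parsed as JSON.

const response = await fetch(
  "https://app.gitbutler.com/api/evaluate_prompt/predict",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      "messages[role]": ["user"], // example value; accepted roles are not documented on this page
      "messages[content]": ["Summarize this change in one sentence."],
      max_tokens: 300,
      temperature: 0.1,
      model_kind: "openai",
    }),
  },
);

if (!response.ok) {
  throw new Error(`evaluate_prompt request failed with status ${response.status}`);
}

// The response schema is not documented on this page; inspect the parsed payload.
const prediction = await response.json();
console.log(prediction);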