API Documentation

42 Ai provides an OpenAI-compatible API. Use any OpenAI SDK or HTTP client.

Authentication

All requests require a JWT task token in the Authorization header. Obtain one by exchanging your API key:

curl -X POST https://the-42-lab.com/api/token \
  -H "Authorization: Bearer sk-42ai-YOUR_KEY" \
  -H "Content-Type: application/json"

# Response: { "task_token": "eyJ..." }

Task tokens expire after 60 seconds. Pass the task token as a Bearer token in the Authorization header of every subsequent API call.
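
Because task tokens are this short-lived, long-running clients should cache and refresh them. A minimal caching sketch in TypeScript — the helper names, the 5-second safety margin, and the error handling are illustrative assumptions, not part of the API:

```typescript
// Exchange the API key for a short-lived task token and cache it,
// refreshing shortly before the documented 60-second expiry.
const TOKEN_TTL_MS = 60_000;    // documented task-token lifetime
const SAFETY_MARGIN_MS = 5_000; // assumed safety margin, not an API value

let cachedToken: string | null = null;
let fetchedAtMs = 0;

// Pure helper: is a token fetched at `fetchedAt` too old to reuse at `now`?
function isStale(fetchedAt: number, now: number): boolean {
  return now - fetchedAt >= TOKEN_TTL_MS - SAFETY_MARGIN_MS;
}

async function getTaskToken(apiKey: string): Promise<string> {
  const now = Date.now();
  if (cachedToken !== null && !isStale(fetchedAtMs, now)) return cachedToken;
  const res = await fetch("https://the-42-lab.com/api/token", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
  });
  if (!res.ok) throw new Error(`token exchange failed: ${res.status}`);
  const body = (await res.json()) as { task_token: string };
  cachedToken = body.task_token;
  fetchedAtMs = now;
  return cachedToken;
}
```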

Base URL

https://42ai.the-42-lab.com

Models

Model ID     Description                 Tiers
42-think     Deep reasoning (70B)        pro, internal
42-vision    Image understanding (72B)   pro, internal
42-logic     Fast logic (27B)            free, pro, internal
42-coder     Code generation             pro, internal
42-general   General purpose             free, pro, internal
42-embed     Text embeddings             free, pro, internal

Endpoints

POST /v1/chat/completions

Generate chat completions (OpenAI-compatible)

Request Body

{
  "model": "42-logic",
  "messages": [
    { "role": "user", "content": "Hello!" }
  ]
}

Response

{
  "id": "chatcmpl-...",
  "choices": [{
    "message": { "role": "assistant", "content": "..." },
    "finish_reason": "stop"
  }],
  "usage": { "prompt_tokens": 5, "completion_tokens": 42, "total_tokens": 47 }
}
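
Since the endpoint is OpenAI-compatible, it can be called over plain HTTP without any SDK. A sketch using `fetch` — the `chat` and `firstMessage` helpers are illustrative, not part of the API; `taskToken` is assumed to come from the token-exchange endpoint above:

```typescript
// Response shape mirrored from the example above.
interface ChatResponse {
  id: string;
  choices: {
    message: { role: string; content: string };
    finish_reason: string;
  }[];
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Pure helper: pull the first assistant message out of a response.
function firstMessage(res: ChatResponse): string {
  return res.choices[0]?.message.content ?? "";
}

async function chat(taskToken: string, content: string): Promise<string> {
  const res = await fetch("https://42ai.the-42-lab.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${taskToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "42-logic",
      messages: [{ role: "user", content }],
    }),
  });
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  return firstMessage((await res.json()) as ChatResponse);
}
```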

POST /v1/embeddings

Generate text embeddings for RAG and search

Request Body

{
  "model": "42-embed",
  "input": "Hello world"
}

Response

{
  "data": [{ "embedding": [0.1, -0.2, ...], "index": 0 }],
  "usage": { "prompt_tokens": 2, "total_tokens": 2 }
}
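
For RAG and search, the returned vectors are typically compared with cosine similarity to rank documents against a query embedding. This helper is a generic sketch, not part of the 42 Ai API or SDK:

```typescript
// Cosine similarity between two embedding vectors of equal dimension:
// dot(a, b) / (|a| * |b|). Returns 1 for identical directions, 0 for
// orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```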

POST /v1/batch

Submit a batch of tasks for asynchronous processing

Request Body

{
  "model": "42-logic",
  "tasks": [
    { "id": "t1", "messages": [{ "role": "user", "content": "Task 1" }] },
    { "id": "t2", "messages": [{ "role": "user", "content": "Task 2" }] }
  ]
}

Response

{
  "batch_id": "batch_abc123",
  "status": "pending",
  "total": 2
}

GET /v1/batch/{batch_id}

Check batch job status and retrieve results

Response

{
  "batch_id": "batch_abc123",
  "status": "completed",
  "total": 2,
  "completed": 2,
  "results": { "t1": { "content": "..." }, "t2": { "content": "..." } }
}
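
Batch jobs are typically polled until they finish. A polling sketch — the `waitForBatch` helper and the 2-second poll interval are illustrative assumptions, not part of the API:

```typescript
// Status shape mirrored from the example responses above; `results` is
// only present once work has been done.
interface BatchStatus {
  batch_id: string;
  status: string; // e.g. "pending" or "completed"
  total: number;
  completed: number;
  results?: Record<string, { content: string }>;
}

// Pure helper: has the batch reached its terminal state?
function isDone(s: BatchStatus): boolean {
  return s.status === "completed";
}

async function waitForBatch(taskToken: string, batchId: string): Promise<BatchStatus> {
  for (;;) {
    const res = await fetch(`https://42ai.the-42-lab.com/v1/batch/${batchId}`, {
      headers: { Authorization: `Bearer ${taskToken}` },
    });
    if (!res.ok) throw new Error(`batch status request failed: ${res.status}`);
    const status = (await res.json()) as BatchStatus;
    if (isDone(status)) return status;
    await new Promise((r) => setTimeout(r, 2_000)); // assumed poll interval
  }
}
```

Remember that task tokens expire after 60 seconds, so a long poll loop may need to re-fetch its token between requests.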

POST /v1/pipeline

Sequential model chaining with reference resolution

Request Body

{
  "template": "design-analysis",
  "input": { "image_url": "https://example.com/design.png" }
}

Response

{
  "status": "completed",
  "results": [
    { "id": "scan", "model": "42-vision", "status": "completed", "content": "..." },
    { "id": "plan", "model": "42-think", "status": "completed", "content": "..." }
  ],
  "final_output": "..."
}
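
Each pipeline step reports its own status and content, so callers can inspect intermediate outputs as well as `final_output`. A small extraction sketch — the types and the `stepContent` helper mirror the example response above but are illustrative, not part of the API:

```typescript
// Pipeline response shape mirrored from the example above.
interface PipelineStep {
  id: string;
  model: string;
  status: string;
  content: string;
}

interface PipelineResponse {
  status: string;
  results: PipelineStep[];
  final_output: string;
}

// Pure helper: look up one step's output by its id (e.g. "scan" or "plan").
function stepContent(res: PipelineResponse, id: string): string | undefined {
  return res.results.find((s) => s.id === id)?.content;
}
```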

TypeScript SDK

import { FortyTwoAI } from "@42-lab/sdk";

const ai = new FortyTwoAI({ apiKey: "sk-42ai-YOUR_KEY" });

// Chat
const res = await ai.chat("42-logic", [
  { role: "user", content: "Hello!" }
]);

// Embeddings
const emb = await ai.embed("Hello world");

// Batch
const batch = await ai.batch("42-logic", [
  { id: "t1", messages: [{ role: "user", content: "Task 1" }] },
  { id: "t2", messages: [{ role: "user", content: "Task 2" }] },
]);

// Pipeline
const pipe = await ai.pipeline("design-analysis", {
  image_url: "https://example.com/design.png"
});

Rate Limits

Tier       Concurrent   Models
free       1            42-logic, 42-general, 42-embed
pro        3            All models
internal   Unlimited    All models

Error Codes

Code   Meaning
401    Invalid or expired token
429    Concurrency limit reached (Retry-After: 5s)
503    Insufficient GPU memory (Retry-After: 30s)
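
Since 429 and 503 both carry a Retry-After header, clients can retry those responses after the indicated delay. A retry sketch — the `fetchWithRetry` wrapper and the 3-attempt cap are illustrative assumptions; only the status codes and the 5s/30s defaults come from the table above:

```typescript
// Pure helper: how long to wait before retrying, or null if the
// response should not be retried. Falls back to the documented
// defaults (5s for 429, 30s for 503) when the header is absent.
function retryDelayMs(status: number, retryAfter: string | null): number | null {
  if (status !== 429 && status !== 503) return null;
  const seconds = retryAfter === null ? NaN : Number(retryAfter);
  if (Number.isFinite(seconds)) return seconds * 1000;
  return status === 429 ? 5_000 : 30_000;
}

async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 3, // assumed cap, not an API requirement
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    const delay = retryDelayMs(res.status, res.headers.get("Retry-After"));
    if (delay === null || attempt >= maxRetries) return res;
    await new Promise((r) => setTimeout(r, delay));
  }
}
```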