Introduction

API·GO is a commercialized model service aggregation platform dedicated to providing AIGC application innovation teams with commercial model API services featuring lower costs, greater diversity, and higher stability. It aims to reduce model integration costs for these teams and make AIGC application development more efficient.

All API requests should be made over HTTPS to the endpoint: https://api.apigo.ai

Responses are returned in JSON format.

Quick Start

Get started with the API·GO API in minutes

The API·GO API provides OpenAI-compatible endpoints that give you access to advanced language models. Get started with just a few lines of code using your preferred SDK or framework.

Looking for your API key? Get it from the API·GO Dashboard.

1. Authentication

Most endpoints require authentication via an API key. Include your API key in the request header:

Authorization: Bearer YOUR_API_KEY

2. Make a Request

Here's a minimal chat completion request:

curl https://api.apigo.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "vendor: API_VENDOR" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'

3. Handle the Response

The API will respond with a JSON object containing the requested data or an error message.

Guides

Detailed guides on how to use specific features of the API.

Chat Management

Learn how to create chat completions and messages

  • Create Chat Completion
  • Create Message

Rate Limiting

API·GO operates on a prepaid balance model and does not impose any hard limits on request rate or concurrency. As long as your account has a sufficient balance, you can continue to send requests.

This flexible approach allows you to scale your usage according to your needs without being constrained by platform-level restrictions.

  • While API·GO does not limit your request rate, the underlying AI providers (e.g., OpenAI, Anthropic, Google) may enforce their own rate or concurrency limits on a per-API-key basis.
  • If you are sending a high volume of concurrent requests, you must still adhere to the usage policies of the upstream provider you are routing to. We recommend implementing robust retry and error-handling logic in your application to manage potential rate limit errors from providers.
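As a sketch of such retry logic (the helper names and the set of retryable status codes below are illustrative assumptions, not part of the API·GO API):

```python
import time
import requests

def backoff_delays(max_retries, base_delay=1.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base_delay * (2 ** i) for i in range(max_retries)]

def post_with_retry(url, headers, payload, max_retries=5):
    """POST, retrying with exponential backoff on 429/5xx responses."""
    resp = None
    for delay in backoff_delays(max_retries):
        resp = requests.post(url, headers=headers, json=payload, timeout=30)
        if resp.status_code not in (429, 500, 502, 503):
            return resp
        time.sleep(delay)  # e.g. 1s, 2s, 4s, 8s, 16s
    return resp
```

Adding random jitter to each delay further reduces the chance that many clients retry in lockstep.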
POST /v1/chat/completions

Create Chat Completion

Generate a completion for chat-based conversations

Language Examples

cURL
Python (requests)
Node.js
Go
Java
C#
curl https://api.apigo.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "vendor: API_VENDOR" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
import requests
import json

API_URL = "https://api.apigo.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"
VENDOR = "API_VENDOR"

payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
}

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "vendor": VENDOR
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=30)
print("Status:", resp.status_code)
# Try to parse as JSON; otherwise print the raw text
try:
    print(json.dumps(resp.json(), indent=2, ensure_ascii=False))
except Exception:
    print(resp.text)
// Node 18+ built-in fetch
const API_URL = "https://api.apigo.ai/v1/chat/completions";
const API_KEY = "YOUR_API_KEY";
const VENDOR = "API_VENDOR";

const body = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }]
};

(async () => {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
      "vendor": VENDOR
    },
    body: JSON.stringify(body),
    // signal: abortController.signal // optional: supports cancellation
  });

  console.log("Status:", res.status);
  const text = await res.text();
  try {
    console.log(JSON.stringify(JSON.parse(text), null, 2));
  } catch (e) {
    console.log(text);
  }
})();
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	apiURL := "https://api.apigo.ai/v1/chat/completions"
	apiKey := "YOUR_API_KEY"
	vendor := "API_VENDOR"

	payload := map[string]interface{}{
		"model": "gpt-4o",
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	}
	b, _ := json.Marshal(payload)
	req, _ := http.NewRequest("POST", apiURL, bytes.NewReader(b))
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("vendor", vendor)

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("Request error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("Status:", resp.StatusCode)
	body, _ := io.ReadAll(resp.Body)
	// Attempt to parse the body as JSON
	var obj interface{}
	if err := json.Unmarshal(body, &obj); err == nil {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	} else {
		fmt.Println(string(body))
	}
}

import java.net.URI;
import java.net.http.*;
import java.time.Duration;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class ApigoExample {
  public static void main(String[] args) throws Exception {
    String apiUrl = "https://api.apigo.ai/v1/chat/completions";
    String apiKey = "YOUR_API_KEY";
    String vendor = "API_VENDOR";

    Map<String, Object> payload = Map.of(
      "model", "gpt-4o",
      "messages", new Object[] { Map.of("role","user","content","Hello!") }
    );

    ObjectMapper mapper = new ObjectMapper();
    String body = mapper.writeValueAsString(payload);

    HttpRequest req = HttpRequest.newBuilder()
      .uri(URI.create(apiUrl))
      .timeout(Duration.ofSeconds(30))
      .header("Authorization", "Bearer " + apiKey)
      .header("Content-Type", "application/json")
      .header("vendor", vendor)
      .POST(HttpRequest.BodyPublishers.ofString(body))
      .build();

    HttpClient client = HttpClient.newHttpClient();
    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
    System.out.println("Status: " + resp.statusCode());
    System.out.println(resp.body());
  }
}


using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class Program {
  static async Task Main() {
    var apiUrl = "https://api.apigo.ai/v1/chat/completions";
    var apiKey = "YOUR_API_KEY";
    var vendor = "API_VENDOR";

    var payload = new {
      model = "gpt-4o",
      messages = new[] { new { role = "user", content = "Hello!" } }
    };

    var json = JsonSerializer.Serialize(payload);
    using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
    var req = new HttpRequestMessage(HttpMethod.Post, apiUrl);
    req.Headers.Add("Authorization", $"Bearer {apiKey}");
    req.Headers.Add("vendor", vendor);
    req.Content = new StringContent(json, Encoding.UTF8, "application/json");

    var resp = await client.SendAsync(req);
    Console.WriteLine("Status: " + (int)resp.StatusCode);
    var body = await resp.Content.ReadAsStringAsync();
    Console.WriteLine(body);
  }
}

Header

Name Type Required Description
Authorization string required Include your API key in the Authorization header of your requests
Content-Type string required application/json
vendor string required API_VENDOR

Body Parameters

Name Type Required Description
model string required The model ID to use for the completion.
messages array required Array of chat messages describing the conversation so far.
stream boolean default:"false" If true, results are returned as server-sent events.
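When stream is true, the body arrives as server-sent events rather than a single JSON object. A minimal sketch of consuming such a stream with requests, assuming the usual OpenAI-style "data: {...}" / "data: [DONE]" framing (the framing is an assumption about OpenAI-compatible behavior, not stated above):

```python
import json
import requests

def parse_sse_line(raw):
    """Return the content delta carried by one SSE line, or None."""
    if not raw or not raw.startswith("data: "):
        return None
    data = raw[len("data: "):]
    if data == "[DONE]":
        return None
    chunk = json.loads(data)
    return chunk["choices"][0].get("delta", {}).get("content")

def stream_chat(url, headers, payload):
    """Yield content deltas as they arrive from the stream."""
    body = dict(payload, stream=True)
    with requests.post(url, headers=headers, json=body, stream=True, timeout=60) as resp:
        for raw in resp.iter_lines(decode_unicode=True):
            content = parse_sse_line(raw)
            if content is not None:
                yield content
```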

Response Parameters

Name Type Description Example
id string unique identifier for the request "8096df8781e547f38cc2c24ed0de4a28"
object string Object type, typically chat.completion "chat.completion"
created integer Timestamp when the request was created (Unix timestamp) 1741570283
model string Name of the model used "gemini-2.5-flash"
choices object[] List of generated completion choices
 index integer Index of the choice, starting from 0 0
 message object The generated message
  message.role enum Message role; "assistant" indicates an assistant message. Available options: assistant "assistant"
  message.content string Generated text content "hello!"
  message.reasoning_content string Content of the model's reasoning process when generating the text ""
 finish_reason enum "stop" indicates the model stopped naturally or hit a stop sequence; "length" indicates generation stopped at the maximum token limit; null means no stop reason was recorded. Available options: stop, length
usage object Token usage for the current request
 usage.prompt_tokens integer Token count of the input message 1117
 usage.completion_tokens integer Token count of the generated text 46
 usage.total_tokens integer Total token count for the current request (input + output) 1163
 usage.prompt_tokens_details object Detailed token information of the input message
  usage.prompt_tokens_details.audio_tokens integer Token count of the input audio 0
  usage.prompt_tokens_details.cached_tokens integer Token count of cache hits 0
 usage.completion_tokens_details object Detailed token information of the generated text
  usage.completion_tokens_details.audio_tokens integer Token count of the output audio 0
  usage.completion_tokens_details.reasoning_tokens integer Token count of the output reasoning process 0

Response Codes

Code Description
200

chat completion response

Example Response:
{
  "id": "8096df8781e547f38cc2c24ed0de4a28",
  "object": "chat.completion",
  "created": 1741570283,
  "model": "gemini-2.5-flash",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "hello!",
        "reasoning_content": ""
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 1117,
    "completion_tokens": 46,
    "total_tokens": 1163,
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "audio_tokens": 0,
      "reasoning_tokens": 0
    }
  }
}
400 Bad Request - Invalid query parameters.
500 Internal Server Error.
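To pull the assistant's reply and the token usage out of a 200 response like the one above:

```python
import json

# The example 200 body from above, abridged to the fields read below
body = json.loads('''{
  "id": "8096df8781e547f38cc2c24ed0de4a28",
  "object": "chat.completion",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "hello!", "reasoning_content": ""},
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 1117, "completion_tokens": 46, "total_tokens": 1163}
}''')

reply = body["choices"][0]["message"]["content"]
total = body["usage"]["total_tokens"]
print(reply, total)  # hello! 1163
```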
POST /v1/messages

Create Message

Interact with Anthropic’s Messages API

Language Examples

cURL
curl https://api.apigo.ai/v1/messages \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2025-05-14" \
  -H "vendor: ClaudeCode" \
  -d '{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "Hello!"}]}'
Python (requests)
import requests

url = "https://api.apigo.ai/v1/messages"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "anthropic-version": "2025-05-14",
    "vendor": "ClaudeCode"
}
payload = {"model": "claude-sonnet-4-20250514",
           "messages": [{"role": "user", "content": "Hello!"}]}

response = requests.post(url, headers=headers, json=payload, timeout=30)
print(response.json())

Header

Name Type Required Description
Authorization string required Include your API key in the Authorization header of your requests
Content-Type string required application/json
anthropic-version string required Anthropic API version
vendor string required API_VENDOR

Body Parameters

Name Type Required Description
model string required The model ID to use for the completion.
messages array required Array of chat messages describing the conversation so far.
stream boolean default:"false" If true, results are returned as server-sent events.

Response Parameters

Name Type Description Example
id string unique identifier for the request "8096df8781e547f38cc2c24ed0de4a28"
type string Object type, typically message message
stop_reason string Reason the model stopped generating end_turn
stop_sequence string The stop sequence matched, if any null
model string Name of the model used claude-sonnet-4-20250514
vendor string Name of the vendor ClaudeCode
role string Message role assistant
usage object Token usage for the current request
 usage.input_tokens integer Token count of the input message 1117
 usage.output_tokens integer Token count of the output message 1117
 usage.cache_creation_input_tokens integer Token count written to the prompt cache 46
 usage.cache_read_input_tokens integer Token count read from the prompt cache 1163
 usage.service_tier string Service tier used for the request standard
 usage.cache_creation object Breakdown of cache-creation tokens by TTL
  usage.cache_creation.ephemeral_5m_input_tokens integer 0
  usage.cache_creation.ephemeral_1h_input_tokens integer 0

Response Codes

Code Description
200

chat completion response

Example Response:
{
"model":"claude-sonnet-4-20250514",
"id":"your-39fe8dd36d044b6d5d157bcd88ce2607",
"type":"message",
"role":"assistant",
"content":
	[
		{
		"type":"text",
		"text":"Hello! I'm Claude, Anthropic's AI assistant. I'm here to help you with a wide variety of tasks - from answering questions and helping with analysis, to writing, coding, math, creative projects, and much more.\n\nWhat can I help you with today?"
		}
	],
"stop_reason":"end_turn",
"stop_sequence":null,
"usage":
	{
		"input_tokens":23,
		"cache_creation_input_tokens":0,
		"cache_read_input_tokens":0,
		"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},
		"output_tokens":61,
		"service_tier":"standard"
	},
"vendor":"ClaudeCode"
}
400 Bad Request - Invalid query parameters.
500 Internal Server Error.
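Unlike chat completions, the Messages API returns content as a list of typed blocks; the assistant text is the concatenation of the text blocks. A small helper (the function name is ours):

```python
def message_text(response):
    """Join the text blocks of a Messages API response body."""
    return "".join(
        block["text"]
        for block in response.get("content", [])
        if block.get("type") == "text"
    )

# Abridged version of the example response above
sample = {
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Hello! I'm Claude."}],
    "stop_reason": "end_turn",
}
print(message_text(sample))  # Hello! I'm Claude.
```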
POST /v1/videos

Create Video

Call OpenAI's video generation API (supporting models like Sora) to generate videos.

Language Examples

cURL
curl https://api.apigo.ai/v1/videos \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=sora-2" \
  -F "prompt=A calico cat playing a piano on stage"
Python (requests)
import requests

url = "https://api.apigo.ai/v1/videos"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
data = {"model": "sora-2", "prompt": "A calico cat playing a piano on stage"}

response = requests.post(url, headers=headers, data=data)
print(response.json())

Header

Name Type Required Description
Authorization string required Include your API key in the Authorization header of your requests
Content-Type string required multipart/form-data

Body Parameters

Name Type Required Description
prompt string required The text prompt describing the video to be generated.
model string required Video generation model, defaulting to sora-2.
seconds string Video duration (in seconds), defaulting to 10 seconds.
size string Output resolution, formatted as widthxheight, defaulting to 720x1280.
input_reference file Optional image reference, used to guide generation.

Response Parameters

Name Type Description Example
id string Video task ID video_123
object string Object type, fixed as "video" video
model string Name of the model used sora-2
status string Task status: queued, processing, completed, or failed queued
progress integer Processing progress (0-100) 0
created_at integer Creation timestamp 1712697600
size string Video resolution 1024x1808
seconds string Video duration (in seconds) 10
quality string Video quality standard

Response Codes

Code Description
200

video completion response

Example Response:
{
  "id": "video_123",
  "object": "video",
  "model": "sora-2",
  "status": "queued",
  "progress": 0,
  "created_at": 1712697600,
  "size": "1024x1808",
  "seconds": "8",
  "quality": "standard"
}
400 Bad Request - Invalid query parameters.
500 Internal Server Error.
GET /v1/videos/{video_id}

Retrieve Video

Query the status and result of a video generation task using the task ID.

Language Examples

cURL
curl 'https://api.apigo.ai/v1/videos/video_123' \
  -H "Authorization: Bearer YOUR_API_KEY"
  
Python (requests)
import requests

url = "https://api.apigo.ai/v1/videos/video_123"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

response = requests.get(url, headers=headers)
print(response.json())

Path Parameter

Name Type Required Description
video_id string required Video task ID

Response Parameters

Name Type Description Example
id string Video task ID video_123
object string Object type, fixed as "video" video
model string Name of the model used sora-2
status string Task status: queued, processing, completed, or failed queued
progress integer Processing progress (0-100) 0
created_at integer Creation timestamp 1712697600
size string Video resolution 1024x1808
seconds string Video duration (in seconds) 10
quality string Video quality standard
url string Video URL https://example.com/video.mp4

Response Codes

Code Description
200

video completion response

Example Response:
{
  "id": "video_123",
  "object": "video",
  "model": "sora-2",
  "status": "completed",
  "progress": 100,
  "created_at": 1712697600,
  "size": "1024x1808",
  "seconds": "8",
  "quality": "standard",
  "url": "https://example.com/video.mp4"
}
400 Bad Request - Invalid query parameters.
500 Internal Server Error.
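Because video generation is asynchronous, a client typically creates the task and then polls this endpoint until the status becomes terminal. A sketch (helper names are ours; queued and processing are the in-flight states, completed and failed the terminal ones):

```python
import time
import requests

def is_terminal(status):
    """completed and failed are terminal; queued/processing are in flight."""
    return status in ("completed", "failed")

def wait_for_video(video_id, api_key, interval=5, timeout=600):
    """Poll GET /v1/videos/{video_id} until the task finishes."""
    url = f"https://api.apigo.ai/v1/videos/{video_id}"
    headers = {"Authorization": f"Bearer {api_key}"}
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = requests.get(url, headers=headers, timeout=30).json()
        if is_terminal(task["status"]):
            return task
        time.sleep(interval)
    raise TimeoutError(f"video {video_id} not finished after {timeout}s")
```

On completion the returned task carries the url field shown in the example above.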
POST /v1/images/generations

Create Image

Call OpenAI's image generation API (supporting models like DALL·E 3) to generate images.

Language Examples

cURL
curl https://api.apigo.ai/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A cute baby sea otter",
    "n": 1,
    "size": "1024x1024"
  }'
  
Python (requests)
import requests

url = "https://api.apigo.ai/v1/images/generations"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
payload = {
    "model": "dall-e-3",
    "prompt": "A cute baby sea otter",
    "n": 1,
    "size": "1024x1024"
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())

Header

Name Type Required Description
Authorization string required Include your API key in the Authorization header of your requests
Content-Type string required application/json

Body Parameters

Name Type Required Description
prompt string required The text prompt describing the image to be generated.
model string required Image generation model, e.g. dall-e-3.
n integer Number of images to generate.
size string Output resolution, formatted as widthxheight, e.g. 1024x1024.

Response Parameters

Name Type Description Example
created integer Creation timestamp 1589478378
data object[] List of generated images
 data.url string URL of the generated image "https://..."
 data.b64_json string Base64-encoded image data, when requested
 data.revised_prompt string Prompt as revised by the model

Response Codes

Code Description
200

image completion response

Example Response:
{
  "created": 1589478378,
  "data": [
    {
      "url": "https://...",
      "revised_prompt": "A cute baby sea otter playing in the water, with round eyes and fluffy fur"
    }
  ]
}
400 Bad Request - Invalid query parameters.
500 Internal Server Error.
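When the response carries a URL, as in the example above, the image itself still has to be fetched separately. A sketch (helper names are ours; the url-bearing shape assumes the default URL response format rather than b64_json):

```python
import requests

def first_image_url(body):
    """Return the URL of the first generated image in a 200 body."""
    return body["data"][0]["url"]

def save_first_image(body, path="image.png"):
    """Download the first generated image to a local file."""
    resp = requests.get(first_image_url(body), timeout=60)
    resp.raise_for_status()
    with open(path, "wb") as f:
        f.write(resp.content)
    return path
```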

FAQ

What is API·GO?

API·GO is a unified AI API gateway that allows you to access multiple leading AI models (like ChatGPT, Claude, Gemini, etc.) through a single endpoint, simplifying your development workflow with intelligent load balancing and reliable API call experience.

Which AI models does API·GO support?

We support models from major AI providers including OpenAI (GPT-3.5, GPT-4 series), Anthropic (Claude series), Google (Gemini series), Azure OpenAI, AWS Bedrock, DeepSeek, Mistral, and more. We continuously add support for new models.

How do I get started with API·GO?

Simply sign up for an account, get your API key, and replace your existing AI API endpoints with API·GO's unified endpoint. We provide comprehensive documentation and SDKs supporting multiple programming languages.

What are the advantages of using API·GO?

Key advantages include: unified API interface reducing integration complexity, intelligent load balancing for higher availability, automatic failover ensuring service stability, cost optimization with usage analytics, and global CDN acceleration.

How does API·GO pricing work?

We use a pay-as-you-go model with transparent pricing structure. Our basic plan includes free credits suitable for development and testing. Enterprise users enjoy volume discounts and dedicated support services.

How is data security and privacy protected?

We implement enterprise-grade security standards including end-to-end encryption, no data retention policy, SOC2 compliance certification, and more. All API calls are encrypted in transit with full protection of user data privacy.

Errors

The API uses standard HTTP status codes to indicate the success or failure of a request.

Client Errors (4xx)

Code Description
400 Bad Request - The request was unacceptable, often due to missing a required parameter.
401 Unauthorized - No valid API key provided.
403 Forbidden - The API key doesn't have permissions to perform the request.
404 Not Found - The requested resource doesn't exist.
409 Conflict - The request conflicts with another request (perhaps due to using the same idempotent key).
429 Too Many Requests - Too many requests hit the API too quickly. We recommend an exponential backoff.

Server Errors (5xx)

Code Description
500 Internal Server Error - We had a problem with our server. Try again later.
503 Service Unavailable - The server is temporarily unavailable (e.g., for maintenance). Try again later.