Quickstart
Prerequisites
Before you begin, ensure you have:
A valid 1min.ai API key.
An active 1minrelay subscription.
Access to one of the following:
An OpenAI-compatible client (ChatKit, ChatWise, etc.).
A custom application that can configure:
Base URL (or endpoint).
API key / Authorization header.
Step 1: Purchase and Retrieve Your 1minrelay Key
Navigate to the 1minrelay purchase page:
https://kokodev.cc/1minrelay
Complete the checkout process.
After purchase, you will receive an email containing your 1minrelay API key.
If you do not see the email:
Check your spam or junk folder.
Confirm that you used the correct email address at checkout.
At this point you will have:
Your 1min.ai API key.
Your 1minrelay API key (from the email).
These keys must be used in different places, as described below.
Step 2: Core Configuration Pattern
All integrations follow the same basic pattern:
Base URL (provider level)
Use this when your client lets you specify a provider with a root OpenAI endpoint:
https://1minrelay.kokodev.cc/<1MINRELAY_KEY>/v1
Base URL (model/endpoint level)
Use this when your client wants a specific endpoint such as chat/completions:
https://1minrelay.kokodev.cc/<1MINRELAY_KEY>/v1/chat/completions
API key field in the client
Always set this to your 1min.ai API key.
Important:
The 1minrelay key goes in the URL path (between the domain and /v1).
The 1min.ai key goes in the API key / Authorization field.
Never swap their roles.
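For reference, here is the same pattern expressed in code: a minimal sketch using the official openai Python package. The package choice, the model name (gpt-4o-mini), and the placeholder key values are illustrative assumptions; use any 1min.ai-supported model identifier.

```python
# Minimal sketch of the provider-level pattern with the OpenAI Python SDK.
# Placeholder keys and model name are illustrative; substitute your own values.
from openai import OpenAI

ONEMINRELAY_KEY = "<YOUR_1MINRELAY_KEY>"   # goes in the URL path
ONEMIN_AI_KEY = "<YOUR_1MIN_AI_KEY>"       # goes in the API key field

client = OpenAI(
    base_url=f"https://1minrelay.kokodev.cc/{ONEMINRELAY_KEY}/v1",  # relay key in the path
    api_key=ONEMIN_AI_KEY,                                          # 1min.ai key as the bearer token
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any 1min.ai-supported model identifier
    messages=[{"role": "user", "content": "Hello from 1minrelay!"}],
)
print(response.choices[0].message.content)
```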
Step 3: Configuring OpenAI-Compatible GUI Clients
Many graphical clients support multiple providers and/or custom models. Use one of the following approaches based on your client’s capabilities.
Recommended: Configure as a Provider
If your client supports the concept of “providers” or “backends”:
Create a new provider (for example: 1minrelay).
Set the provider's Base URL to:
https://1minrelay.kokodev.cc/<1MINRELAY_KEY>/v1
In the API key (or token) field, enter your 1min.ai API key.
Save the configuration.
When initiating a chat or completion within the client, select this provider and use any 1min.ai-supported model identifier as instructed by the client.
Alternative: Configure as a Model
If your client does not support providers but does allow custom model endpoints:
Create a new model entry (for example: 1minrelay-gpt).
Set the Base URL or endpoint to:
https://1minrelay.kokodev.cc/<1MINRELAY_KEY>/v1/chat/completions
Set the API key field to your 1min.ai API key.
Save the model configuration.
Select this model whenever you send a request.
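Under the hood, a model-level configuration simply means the client sends standard Chat Completions requests to that endpoint. Below is a rough sketch of such a request in Python using the requests library; the keys and model name are placeholders, not values from this guide.

```python
# Rough illustration of the request a client makes when configured at the
# endpoint level. Placeholder keys and model name are assumptions.
import requests

ONEMINRELAY_KEY = "<YOUR_1MINRELAY_KEY>"
ONEMIN_AI_KEY = "<YOUR_1MIN_AI_KEY>"

url = f"https://1minrelay.kokodev.cc/{ONEMINRELAY_KEY}/v1/chat/completions"

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ONEMIN_AI_KEY}"},  # 1min.ai key goes here
    json={
        "model": "gpt-4o-mini",  # any 1min.ai-supported model identifier
        "messages": [{"role": "user", "content": "Ping"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```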
Configuration Reference
Use this table to quickly verify that your configuration is correct.
Base URL (provider style): https://1minrelay.kokodev.cc/<1MINRELAY_KEY>/v1
Base URL (model style): https://1minrelay.kokodev.cc/<1MINRELAY_KEY>/v1/chat/completions
API key field: your 1min.ai API key
Key embedded in the URL path: your 1minrelay API key
Protocol: OpenAI-compatible Chat Completions interface
Common Mistakes and How to Avoid Them
Putting the 1minrelay key in the API key field
Symptom: Authentication errors even though the URL looks correct.
Fix: Ensure the API key field contains the 1min.ai key only. The 1minrelay key must remain in the URL path.
Forgetting to include the 1minrelay key in the URL
Symptom: Errors indicating an invalid or missing route.
Fix: Confirm that your URL looks like:
https://1minrelay.kokodev.cc/<YOUR_1MINRELAY_KEY>/v1[...].
Using the wrong endpoint level
Some clients expect a root /v1 endpoint; others require /v1/chat/completions.
Fix: If your client fails with /v1, try /v1/chat/completions, and vice versa, according to its documentation.
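If you are unsure which of these mistakes you are hitting, a small diagnostic request can help narrow it down. This is only a sketch: the keys and model name are placeholders, and the status-code interpretations are general HTTP conventions rather than documented 1minrelay behavior.

```python
# Quick self-check: an HTTP 401/403 usually points at the API key field
# (wrong key there), while a 404 usually means the 1minrelay key is missing
# from the URL path. These are general HTTP conventions, not guarantees.
import requests

ONEMINRELAY_KEY = "<YOUR_1MINRELAY_KEY>"
ONEMIN_AI_KEY = "<YOUR_1MIN_AI_KEY>"

url = f"https://1minrelay.kokodev.cc/{ONEMINRELAY_KEY}/v1/chat/completions"
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ONEMIN_AI_KEY}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model identifier
        "messages": [{"role": "user", "content": "connectivity check"}],
    },
    timeout=60,
)

if resp.ok:
    print("Configuration looks good.")
else:
    # Print the status and body so you can match them against the symptoms above.
    print(f"Request failed with HTTP {resp.status_code}: {resp.text}")
```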
Next Steps
Once your client is configured:
Use it exactly as you would with an OpenAI backend.
Create chats or completions, choose the appropriate model, and 1minrelay will transparently relay your requests to 1min.ai and return OpenAI-compatible responses.
For language-specific examples (Python, Node.js, curl, and more), see the SDK Usage page.