Your first QuantenRam request in just a few minutes
The fastest way to get started with QuantenRam is not to try as many models as possible right away, but to establish one clean baseline path first. Once registration, API key, and the first request work cleanly, you can then expand models, tiers, and hosting options deliberately without having to restructure your integration later.
Before you begin, you only need two things: an account on quantenram.net and an active API key. These two steps appear early on purpose because QuantenRam does more than return model responses; it also ties together usage, tiers, and visibility in the dashboard. That is exactly why a good quickstart begins not with code, but with a clear account and key foundation.
Step 1: Create an account on quantenram.net
Registration with later usage in mind
Go to quantenram.net or directly to /accounts/signup/ and create your account. This is more than a formal entry point, because your account later becomes the place where tier assignment, dashboard activity, and API key management come together. If you want to use QuantenRam in a team or company, it is worth starting here with a clear project or company identity instead of random test accounts.
Step 2: Get your API key from the dashboard
Why the key is the real starting point for integrations
After logging in, switch to the dashboard and open /keys/. There you create your API key. This key does not just identify your requests; it also determines which models and tiers are enabled for you. If you later run into missing models or unexpected responses, the first thing to check is almost always the key you are using, not your application code.
export QUANTENRAM_API_KEY="your-api-key"
It is a good idea to store the key immediately as an environment variable. That keeps local tests, scripts, and later deployments consistent. At the same time, it helps you avoid leaking credentials into source code or screenshots by accident.
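If you drive QuantenRam from scripts, a fail-fast check for that variable saves debugging time later. Here is a minimal Python sketch; the function name is illustrative and not part of any QuantenRam SDK:

```python
import os

def get_api_key() -> str:
    # Read the key from the environment and fail fast with a clear
    # message instead of sending an unauthenticated request.
    key = os.environ.get("QUANTENRAM_API_KEY")
    if not key:
        raise RuntimeError("QUANTENRAM_API_KEY is not set; export it first.")
    return key
```

Calling this once at startup means a missing key fails loudly and locally, rather than as a confusing 401 deep inside your application.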
Step 3: Send your first API request
Your first request should be as small and unambiguous as possible. The goal is not to build a complex agent right away, but first to verify that authentication, base URL, and model selection work correctly together. A simple curl request is ideal for exactly that reason, because it makes every layer of the communication visible.
curl https://quantenram.net/v1/chat/completions \
  -H "Authorization: Bearer $QUANTENRAM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "quantenram-start/deepseek-chat",
    "messages": [
      {
        "role": "user",
        "content": "Explain in two sentences why an API gateway for LLMs is useful."
      }
    ]
  }'
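The same request can be built from Python using only the standard library. This is a sketch assuming the OpenAI-compatible endpoint shown above; the helper name is made up for illustration:

```python
import json
import os
import urllib.request

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    # Build a chat completion request mirroring the curl example:
    # same URL, same headers, same JSON body shape.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://quantenram.net/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['QUANTENRAM_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it would then be:
#   with urllib.request.urlopen(build_chat_request(...)) as resp:
#       body = resp.read()
```

Keeping the model ID as a plain argument is deliberate: it is the only thing that changes when you later switch aliases or tiers.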
If the request succeeds, you will receive a structured response in the OpenAI-compatible format. That is precisely the moment when QuantenRam shows its strength: you are talking to the same familiar API shape while the QuantenRam product logic is already embedded behind the model ID.
{
  "id": "chatcmpl_...",
  "object": "chat.completion",
  "created": 1712345678,
  "model": "quantenram-start/deepseek-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "An API gateway makes switching models easier and reduces integration effort. Teams can then choose models by quality, privacy, and cost without rewiring their application every time."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 30,
    "completion_tokens": 38,
    "total_tokens": 68
  }
}
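To work with such a response in code, you usually only need the first choice and the usage block. A small Python sketch for parsing the shape shown above:

```python
import json

def extract_answer(raw: str) -> tuple[str, int]:
    # Parse an OpenAI-compatible chat completion and return the
    # assistant text plus the total token count from the usage block.
    data = json.loads(raw)
    content = data["choices"][0]["message"]["content"]
    return content, data["usage"]["total_tokens"]
```

Tracking `total_tokens` from day one makes later tier and cost comparisons much easier, because you already have consistent usage numbers per request.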
Step 4: Use the web chat interface
If you prefer to test interactively first, you can use the web interface in the dashboard after logging in, for example via /simple-chat/. That is especially useful when you want to try prompts, model selection, and response quality quickly before building an integration into code or product logic. The big advantage is that you are using the same platform that you will later use through the API. Chat and API therefore do not sit next to each other as two separate worlds, but form a shared entry point into the same model landscape.
How to choose the right tier
Tier selection is not just a pricing question. It determines how you mentally position QuantenRam and which kinds of tasks you use it for. The Free tier is a good fit when you first want to understand the connection path, response format, and general behavior. For many developers, the Start tier is the real beginning of productive usage because it offers a fast path into serious API work and is conceptually close to opencode-go. Zenmaster is the right tier once you move beyond isolated test prompts and need access to stronger premium models, more demanding review tasks, or team-ready standards; in that role, it corresponds more closely to opencode-zen.
Free
Free is ideal for the first contact with the platform, the API contract, and the dashboard. If your main goal is to verify that your application is wired up correctly, start here and only scale upward afterward.
Start
For many users, Start is the best first productive path. The tier makes sense when you already want to build real workflows while still keeping a clear cost profile and a straightforward entry point.
Zenmaster
Zenmaster becomes interesting as soon as model quality, review workflows, and broader model coverage matter more than the entry price alone. Anyone not just testing AI, but integrating it into decisions and production, often ends up here.
curl https://quantenram.net/v1/models -H "Authorization: Bearer $QUANTENRAM_API_KEY"
This small request becomes extremely helpful later on. It shows you which alias models are visible for your current access and saves a lot of guesswork when a model name does not work or a tier change does not seem to be active yet.
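If you want to check programmatically whether an alias is visible to your key, filtering the list response is enough. This sketch assumes the standard OpenAI-compatible list shape of `{"object": "list", "data": [{"id": ...}, ...]}`:

```python
def visible_model_ids(models_response: dict) -> list[str]:
    # Collect the alias IDs from an OpenAI-compatible /v1/models
    # response; assumes each entry carries its ID in the "id" field.
    return sorted(item["id"] for item in models_response.get("data", []))
```

Running this after a tier change is a quick way to confirm that newly unlocked models are actually visible to the key you are using.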
Typical first steps after a successful hello world
After the first successful response, do not jump straight into complex multi-agent workflows. It is more useful to harden the integration step by step. One good next step is to switch the alias model deliberately once, so you can see how flexibly the platform behaves in your code. Another sensible step is to compare the API and the web chat to get a feel for model character and response style. Finally, it is worth agreeing early on a shared model family for your team or project, so later cost and quality comparisons can be made cleanly.
curl https://quantenram.net/v1/chat/completions \
  -H "Authorization: Bearer $QUANTENRAM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "quantenram-zenmaster/gpt-5.4",
    "messages": [
      {
        "role": "user",
        "content": "Check this text for technical inaccuracies and write an improved version."
      }
    ]
  }'
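In code, such a model switch should be a single argument rather than a second integration path. A Python sketch; the helper is illustrative, and the model IDs are the ones from the examples above:

```python
import json

def chat_payload(model: str, prompt: str) -> str:
    # Serialize the same request body for any alias model, so switching
    # models never touches the rest of the integration.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# Switching from a Start alias to a Zenmaster alias is one argument:
baseline = chat_payload("quantenram-start/deepseek-chat", "Summarize this text.")
review = chat_payload("quantenram-zenmaster/gpt-5.4", "Summarize this text.")
```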
This is where you quickly notice that QuantenRam is not designed around a single model path. The real value appears when the same integration can carry multiple model types without your product being rewired for every new use case.