
Network and Proxy for Stable API Access

QuantenRam works out of the box on the open internet, but in enterprise networks it only becomes truly reliable once proxy, certificates, and VPN are configured deliberately. This page describes the minimal but dependable network path you need for productive API use.

The public contract is deliberately simple: clients speak HTTPS to https://quantenram.net/v1. In practice, though, problems often arise not in the payload but on the way there. Corporate proxies implement CONNECT rules differently, TLS inspection introduces its own root certificates, VPN clients change DNS or latency profiles, and some runtime environments ignore proxy settings even though the shell looks correct. Making this layer explicit saves a lot of incident time later.

Only outbound HTTPS is required

For normal API use, your client needs no inbound connections. Outbound traffic on port 443 to quantenram.net plus working DNS resolution inside your enterprise network is typically sufficient.

Proxy rules should be explicit

Don't rely on accidental system defaults. If a proxy is required, document HTTPS_PROXY, the exceptions in NO_PROXY, and any PAC or gateway rules per runtime.

TLS problems are often trust store problems

When an enterprise proxy intercepts HTTPS, the corresponding enterprise certificate must be present in the trust store of each runtime. Without this chain of trust, errors quickly look like API outages when in reality only certificate verification is failing.

Firewall and outbound authorizations

Normally, QuantenRam needs outbound HTTPS connections only. In practice this means: your firewall or web gateway must allow requests to quantenram.net on port 443 and must not block DNS resolution for that hostname. For API use, no inbound rules, webhooks, or special ports are required on the client side.
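A minimal reachability check along these lines can be run from any Linux shell; even an HTTP error status proves that DNS, routing, and TLS are working, since the request reached the API at all:

```shell
# Step 1: DNS resolution for the API hostname (Linux).
getent hosts quantenram.net

# Step 2: outbound HTTPS on port 443. Any HTTP status code means the
# network path works; an auth error here is a key problem, not a network one.
curl -sS -o /dev/null -w "HTTP %{http_code}\n" https://quantenram.net/v1/models
```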

In more restrictive environments, the allow rule should explicitly cover the runtime that will actually send requests later, not just browsers. This sounds trivial but often fails in practice: the browser works while curl, Python, or an IDE plugin goes through a different proxy path or a different certificate store. The browser is therefore only a visibility test, never the final proof of working API connectivity.
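One way to make this concrete is to probe the same endpoint from the two non-browser paths that most often diverge, curl and Python, in one sketch (the Python part uses only the standard library):

```shell
# Path 1: curl, which honors HTTPS_PROXY from the environment.
curl -sS -o /dev/null -w "curl: HTTP %{http_code}\n" https://quantenram.net/v1/models

# Path 2: the Python runtime, which has its own proxy and trust store handling.
python3 - <<'EOF'
import urllib.request
try:
    urllib.request.urlopen("https://quantenram.net/v1/models", timeout=10)
    print("python: reachable")
except Exception as e:
    # A TLS or proxy error here, while curl succeeds, points at a
    # runtime-specific configuration gap, not at the API.
    print(f"python: {type(e).__name__}: {e}")
EOF
```

If the two results differ, you have found exactly the kind of split proxy or certificate path this section warns about.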

Set proxy configuration explicitly per shell and runtime

If your company requires an outbound proxy, set it explicitly in every relevant environment. This is especially important for CI, containers, WSL, background services, and IDE-integrated tools. What works in an interactive terminal does not automatically apply to every other process.

export HTTPS_PROXY="http://proxy.company.intern:8080"
export HTTP_PROXY="http://proxy.company.intern:8080"
export NO_PROXY="localhost,127.0.0.1,.company.intern"

curl https://quantenram.net/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
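Containers are the classic case where this breaks: variables exported in the host shell never reach the container unless you pass them in. A sketch using Docker and the public curlimages/curl image (the image's entrypoint is curl, so only arguments follow):

```shell
# Pass proxy settings explicitly into the container at run time.
docker run --rm \
  -e HTTPS_PROXY="http://proxy.company.intern:8080" \
  -e NO_PROXY="localhost,127.0.0.1,.company.intern" \
  curlimages/curl:latest \
  -sS https://quantenram.net/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

The same principle applies to CI runners and systemd services: each has its own environment mechanism, and each needs the proxy settings stated there, not inherited by hope.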

On Windows and in PowerShell, the same logic should be just as explicit. There in particular, some tools rely on WinHTTP, others on environment variables, and others on their own GUI settings. If you want reproducible results, first document which shell variant you are using and test exactly there.

$env:HTTPS_PROXY = "http://proxy.company.intern:8080"
$env:HTTP_PROXY = "http://proxy.company.intern:8080"
$env:NO_PROXY = "localhost,127.0.0.1,.company.intern"

curl.exe https://quantenram.net/v1/models `
  -H "Authorization: Bearer YOUR_API_KEY"
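To see whether the WinHTTP path and the environment variables actually agree, Windows ships a built-in inspection command; this sketch assumes an elevated shell for the import step:

```powershell
# Show the proxy WinHTTP-based tools will use, independent of $env: variables.
netsh winhttp show proxy

# Optionally align WinHTTP with the Internet Options proxy settings
# (requires administrator rights):
# netsh winhttp import proxy source=ie
```

If `netsh winhttp show proxy` reports "Direct access" while your environment variables point at a proxy, the two tool families are on different paths, which explains many "works in one tool, not the other" reports.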

If your proxy requires authentication, distribute the credentials only via the intended secure mechanisms. Avoid local helper scripts that leave proxy passwords and API keys in plaintext in log files or shell histories. The API itself runs more stably when the proxy path is clean and sparsely configured.
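A pattern that keeps credentials out of scripts is to reference environment variables that are populated by your secret mechanism; PROXY_USER and PROXY_PASS below are hypothetical names for illustration:

```shell
# Credentials come from a secret store into the environment beforehand;
# the proxy URL itself then contains only variable references.
export HTTPS_PROXY="http://${PROXY_USER}:${PROXY_PASS}@proxy.company.intern:8080"

# Alternatively, hand curl the credentials per request instead of
# embedding them in the proxy URL:
curl --proxy "http://proxy.company.intern:8080" \
     --proxy-user "${PROXY_USER}:${PROXY_PASS}" \
     https://quantenram.net/v1/models \
     -H "Authorization: Bearer YOUR_API_KEY"
```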

Handle SSL and TLS certificates correctly

A common mistake in enterprise environments is papering over TLS problems with insecure workarounds. Flags like --insecure can help for a one-off visibility test but are not a reliable operational solution. As soon as a corporate proxy inspects HTTPS, the enterprise certificate must be part of the runtime's trust chain. Only then do certificate errors stay distinguishable from real server problems.

sudo cp company-root-ca.crt /usr/local/share/ca-certificates/company-root-ca.crt
sudo update-ca-certificates

export SSL_CERT_FILE="/etc/ssl/certs/ca-certificates.crt"

The pattern above fits Debian, Ubuntu, and WSL-based setups. On Windows, the system certificate store may be the right place instead, while some Python runtimes or containers prefer an explicit file path. What matters is not the operating system, but that every runtime uses the same trust store, or that deviations are documented deliberately.
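For the runtimes that want an explicit file path rather than the system store, the common knobs look like this; which of them your stack actually needs depends on the tools in use:

```shell
# Python (requests library): explicit CA bundle path.
export REQUESTS_CA_BUNDLE="/etc/ssl/certs/ca-certificates.crt"

# Node.js: additional CA appended to the built-in roots.
export NODE_EXTRA_CA_CERTS="/usr/local/share/ca-certificates/company-root-ca.crt"

# curl: override the default CA bundle.
export CURL_CA_BUNDLE="/etc/ssl/certs/ca-certificates.crt"
```

Keeping these in one documented place per runtime is what makes certificate errors diagnosable later, instead of looking like random API outages.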

VPNs change more than just the route

A VPN can make the connection to QuantenRam more stable when company policies only apply on the VPN path. But it can also add complexity, for example through split-tunnel rules, changed DNS resolvers, or extra latency. If you see sporadic timeouts or API problems that only occur in the office, the questions about active VPN, tunnel mode, and regional exit usually matter more than the next prompt change.

A comparison test in three states has proven useful in practice: without VPN, with VPN, and with VPN plus proxy. If only one of these three variants works stably, the cause is almost never the model itself. Pay special attention to streaming requests; they react more sensitively to proxy and VPN timeouts than short non-streaming test requests.
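To make the three-state comparison measurable rather than anecdotal, run the identical request in each state and record curl's built-in timing variables:

```shell
# Run once without VPN, once with VPN, once with VPN plus proxy,
# and compare the timing breakdown across the three states.
curl -sS -o /dev/null \
  -w "dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n" \
  https://quantenram.net/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

A jump in `connect` or `tls` between states points at the proxy or tunnel; a jump only in `total` points at latency on the path rather than at the API.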

Test connectivity and TLS specifically

A good network test should not just report that "it doesn't work" but say where it hangs. curl -v shows DNS resolution, proxy usage, TLS handshake, and the final HTTP status in one pass, so you can cleanly distinguish whether the disruption happens before the request, during TLS setup, or only after the API was already reachable.

curl -v https://quantenram.net/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

openssl s_client -connect quantenram.net:443 -servername quantenram.net

The curl test is usually the most important in everyday use because it exercises the complete application path. openssl s_client helps when the open question is mainly about certificates or the trust store. In both cases: use the same machine, the same proxy, and ideally the same shell as the real production path later. Otherwise you are only testing a nice-looking parallel world.

Don't lump timeouts and streaming together

Many teams set a single global timeout and then wonder about unstable long responses. In practice it is better to treat connection setup and the read phase separately. A short connect timeout makes sense so that blocked proxies become visible quickly; a more generous overall timeout is far more realistic for longer responses or streaming.
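With curl, the two phases map directly onto two separate flags; the values below are example choices, not recommendations for every workload:

```shell
# --connect-timeout covers only DNS, TCP, and TLS setup: fail fast if the
# path is blocked. --max-time bounds the entire request: generous enough
# for long responses.
curl --connect-timeout 5 --max-time 300 \
  https://quantenram.net/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

Most HTTP client libraries offer the same split (often as connect timeout versus read timeout), so the pattern carries over beyond curl.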

If non-streaming works stably but streaming frequently breaks, that is a strong indication of network, proxy, or VPN effects. Don't switch models right away; first measure the transport path under realistic runtimes. That is exactly why network tests belong in everyday operations and not just in initial setup.
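A rough A/B sketch for this: first a short request, then a long-lived streaming one. The `/v1/chat/completions` path, the `stream` flag, and `MODEL_NAME` below are illustrative placeholders; substitute the streaming endpoint and payload of your real workload:

```shell
# Short, non-streaming baseline.
curl -sS https://quantenram.net/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

# Streaming variant: -N disables curl's output buffering so proxy or VPN
# idle timeouts surface the same way they would in production.
curl -N -sS https://quantenram.net/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "MODEL_NAME", "stream": true, "messages": [{"role": "user", "content": "ping"}]}'
```

If the first call succeeds reliably while the second drops mid-stream, the transport path, not the model, is the place to investigate.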

The reliable minimal formula for enterprise networks is: outbound HTTPS on port 443, explicit proxy instead of implicit defaults, cleanly installed corporate CA, and a real curl test in exactly the runtime that will later send productive requests.