Private, secure DeepSeek-R1 deployments
DeepSeek models are taking the AI world by storm, and we're thrilled to offer DeepSeek-R1 (and DeepSeek-V3) on dedicated deployments: private, secure, compliant inference that never shares your prompts or data with anyone.
Serving DeepSeek-R1 in production requires 8xH200 or 16xH100 GPUs, and the model serves as a replacement for OpenAI o1 in high-volume use cases.
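To give a feel for how a dedicated deployment is used, here is a minimal sketch of calling one, assuming it exposes an OpenAI-compatible endpoint; the base URL, API key variable, and model name below are placeholders, not a specific provider's values:

```python
# Minimal sketch of calling a dedicated DeepSeek-R1 deployment.
# Assumes an OpenAI-compatible endpoint; the base URL, API key env var,
# and model identifier are placeholders for your own deployment's values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://your-deployment.example.com/v1",  # placeholder endpoint
    api_key=os.environ["DEPLOYMENT_API_KEY"],            # placeholder key name
)

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model identifier
    messages=[
        {"role": "user", "content": "Explain the Monty Hall problem step by step."}
    ],
)
print(response.choices[0].message.content)
```

Because the endpoint is dedicated, requests and responses stay within your deployment rather than passing through a shared, multi-tenant API.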
For testing and experimentation, we also recommend the distilled R1 models, which can be up to 32x cheaper (a quick local-inference sketch follows the list):
DeepSeek-R1 Qwen 7B
DeepSeek-R1 Qwen 32B
DeepSeek-R1 Llama 70B
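The distilled models are small enough to try out with standard open-source tooling. Here is a minimal local-inference sketch using Hugging Face transformers; the model ID shown is an assumption based on the published DeepSeek-R1 distill checkpoints, so swap in whichever distill you actually deploy:

```python
# Minimal sketch: run a distilled R1 model locally with Hugging Face transformers.
# The model ID is an assumed distill checkpoint name; adjust to the one you use.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    device_map="auto",  # place weights on available GPU(s), falling back to CPU
)

# A plain prompt keeps the sketch simple; in production you would apply the
# model's chat template to format the conversation.
prompt = "What is 17 * 24? Think step by step."
output = generate(prompt, max_new_tokens=512)
print(output[0]["generated_text"])
```

This is a quick way to evaluate output quality and reasoning style before committing to a full 8xH200 or 16xH100 deployment of the original R1 model.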