See our latest feature releases, product improvements, and bug fixes.

Flexible instance types per model deployment
Apr 14, 2025
Model deployments now support changing instance types, enabling you to experiment with different hardware configurations and use specific hardware for staging, development, and production...
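For context, the hardware a deployment runs on is declared in the resources section of its Truss config.yaml. A minimal sketch with illustrative values; you might point staging at a smaller instance type and production at a larger one:

```yaml
# Truss config.yaml excerpt: hardware for this deployment.
# Values are illustrative; choose the instance type that fits each environment.
resources:
  accelerator: A10G   # e.g., a smaller GPU for staging, a larger one for prod
  use_gpu: true
  cpu: "4"
  memory: 16Gi
```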
Apr 10, 2025
For users who love working in the terminal, we're excited to announce truss push --tail, which streams Baseten logs directly to your command line. You no longer need to switch context between your...
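Usage is exactly the command named above, run (typically) from your Truss project directory:

```sh
# Deploy the model and keep streaming its Baseten logs to the terminal.
truss push --tail
```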
Apr 7, 2025
We’ve overhauled the Baseten docs to make them more readable, structured, and easier to navigate for both new and returning users. Some highlights:
- New homepage to help new users get started
- All-new...
Mar 21, 2025
The OpenAI SDK has become the de facto standard for interacting with AI models, which makes compatibility with it essential in the inference space. We’re happy to announce official OpenAI-compatible APIs for both chat...
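In practice, OpenAI compatibility means the standard OpenAI client can talk to a Baseten deployment just by swapping the base URL. A sketch; the base_url and model values below are placeholders, so use the ones shown for your deployment:

```python
from openai import OpenAI

# Placeholder base_url and model name: substitute the values from your
# Baseten deployment page.
client = OpenAI(
    api_key="YOUR_BASETEN_API_KEY",
    base_url="https://model-xxxxxxxx.api.baseten.co/v1",  # placeholder
)

response = client.chat.completions.create(
    model="my-model",  # placeholder
    messages=[{"role": "user", "content": "Hello from the OpenAI SDK!"}],
)
print(response.choices[0].message.content)
```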
Feb 10, 2025
Now with improved performance, robustness, and an even more delightful DevEx since our beta launch, we’re thrilled to announce the general availability of Baseten Chains for production compound AI!...
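If you’re new to Chains, here’s a minimal two-Chainlet sketch using the truss_chains Python API (names follow the Chains docs; verify against the current documentation). A Chain like this is deployed by pointing truss chains push at the file:

```python
import truss_chains as chains


class Shouter(chains.ChainletBase):
    """A small worker Chainlet; each Chainlet can get its own resources."""

    def run_remote(self, text: str) -> str:
        return text.upper()


@chains.mark_entrypoint
class Greeter(chains.ChainletBase):
    """Entrypoint Chainlet that calls the worker over the Chains runtime."""

    def __init__(self, shouter: Shouter = chains.depends(Shouter)) -> None:
        self._shouter = shouter

    def run_remote(self, name: str) -> str:
        return self._shouter.run_remote(f"hello, {name}!")
```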
Jan 30, 2025
We run health checks on your deployments to ensure they’re able to run inference. Now, you can customize these checks to monitor anything, from tracking 500 errors to detecting CUDA issues and more....
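The exact interface for custom checks lives in the docs; as one illustrative sketch (the is_healthy hook name here is an assumption, not a confirmed API), a Truss model could report CUDA health like this:

```python
# Illustrative only: the `is_healthy` hook name is an assumption for this
# sketch; consult the Baseten docs for the actual custom health check API.
import torch


class Model:
    def load(self):
        # Load model weights here (onto the GPU, if one is attached).
        ...

    def is_healthy(self) -> bool:  # hypothetical hook name
        # Fail the health check if CUDA has become unusable, so the
        # platform stops routing inference traffic to this replica.
        try:
            return torch.cuda.is_available()
        except Exception:
            return False

    def predict(self, inputs):
        ...
```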
Jan 21, 2025
We've expanded our metrics support to include GPU memory usage and utilization for MIG (Multi-Instance GPU) instance types. These metrics were previously unavailable for MIG configurations. This...
Dec 20, 2024
We’ve revamped our metrics dashboard to make monitoring and debugging easier! Here’s what’s new:
- Unified view: All metrics are now displayed on a single page, with no more clicking between tabs. This...
Dec 19, 2024
Our new speculative decoding integration is part of our streamlined TensorRT-LLM Engine Builder flow. Just modify the new speculator configuration in the Engine...
Dec 13, 2024
We’ve added several new endpoints to our REST API, giving you even more control over your deployments, environments, and resources. Here’s what’s new:
- Deletion Endpoints: Delete a model:...
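As a sketch of calling one of the deletion endpoints from Python: the path below follows the /v1/models resource shape but should be treated as an assumption and checked against the API reference, while the Api-Key authorization scheme is Baseten’s standard one.

```python
import requests

API_KEY = "YOUR_BASETEN_API_KEY"
MODEL_ID = "abcd1234"  # placeholder model ID

# Delete a model via the REST API. The endpoint path is an assumption
# based on the /v1/models resource shape; confirm it in the API reference.
resp = requests.delete(
    f"https://api.baseten.co/v1/models/{MODEL_ID}",
    headers={"Authorization": f"Api-Key {API_KEY}"},
)
resp.raise_for_status()
print(f"Deleted model {MODEL_ID}")
```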