Customer stories

We're creating a platform for progressive AI companies to build their products on the fastest, most performant infrastructure available.

Trusted by top engineering and machine learning teams

DJ Zappegos, Engineering Manager


Wispr Flow creates effortless voice dictation with Llama on Baseten

Wispr Flow runs fine-tuned Llama models with Baseten and AWS to provide seamless dictation across every application.

Read case study


Rime serves speech synthesis API with stellar uptime using Baseten

Rime AI chose Baseten to serve its custom speech synthesis generative AI model and achieved state-of-the-art p99 latencies with 100% uptime in 2024.

Read case study


Bland AI breaks latency barriers with record-setting speed using Baseten

Bland AI leveraged Baseten’s state-of-the-art ML infrastructure to achieve real-time, seamless voice interactions at scale.

Read case study


Custom medical and financial LLMs from Writer see 60% higher tokens per second with Baseten

Writer, the leading full-stack generative AI platform, launched new industry-specific LLMs for medicine and finance. Using TensorRT-LLM on Baseten, they increased their tokens per second by 60%.

Read case study


Patreon saves nearly $600k/year in ML resources with Baseten

With Baseten, Patreon deployed and scaled the open-source foundation model Whisper at record speed without hiring an in-house ML infra team.

Read case study
