Baseten Blog | Page 10
Deploy open-source models in a couple of clicks from Baseten’s model library
An explanation of how Baseten's model library works for deploying and serving popular open-source models.
Getting started with foundation models
A summary of foundation models, focusing on data type, scale, in-context learning, and fine-tuning, illustrated with Meta's LLaMA model family.
New in May 2023
Explore new text generation and text-to-speech models and their GPU requirements, and join the community around open-source models.
Understanding NVIDIA’s Datacenter GPU line
This guide helps you navigate NVIDIA’s datacenter GPU lineup and map it to your model serving needs.
Comparing GPUs across architectures and tiers
So what are reliable metrics for comparing GPUs across architectures and tiers? We’ll consider core count, FLOPS, VRAM, and TDP.
New in April 2023
LLMs go OSS, AI community thrives, Baseten offers free credits to start deploying models
Comparing NVIDIA GPUs for AI: T4 vs A10
Comparing NVIDIA T4 vs. A10 GPUs for AI: we analyze price and specs to determine the best GPU for ML.
If You Build It, Devs Will Come: How to Host an AI Meetup
Want to host an AI community meetup but aren’t sure where to start? Julien shares his top 10 tips for successfully hosting an AI meetup.
New in March 2023
An open-source ChatGPT, fine-tuning FLAN-T5, AI projects, and exciting compliance updates