How to build function calling and JSON mode for LLMs 

We recently introduced built-in support for function calling and structured output in LLM deployments created with our TensorRT-LLM Engine Builder. In this webinar, we’ll dive into how we built it!

What you'll learn:

  • Understanding structured output and function calling: Learn how these features guarantee schema-compliant model outputs and enable LLMs to select and execute specific tools (see the tool-definition sketch after this list).

  • Building JSON mode and tool use: Dive into the implementation details, including defining schemas and tools, building a state machine, and using logit biasing to force valid output (a toy version is sketched after this list).

  • Hands-on demos: See these features in action live, including a comparison with alternative approaches.
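For a taste of what’s involved, here’s a minimal sketch of the kind of tool definition and output schema these features work with, assuming an OpenAI-compatible chat interface. The `get_weather` function, its parameters, and the `weather_report` schema name are illustrative placeholders, not the exact API covered in the webinar.

```python
# A hypothetical tool the model may choose to call. The JSON Schema under
# "parameters" is what structured output enforces: the arguments the model
# emits must validate against it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative name, not from the webinar
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    }
]

# JSON mode works the same way without tools: you supply a schema and the
# deployment constrains the completion to parse and validate against it.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "weather_report",  # placeholder schema name
        "schema": tools[0]["function"]["parameters"],
    },
}
```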
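And here is a toy illustration of the state-machine-plus-logit-biasing idea from the second bullet. This is not the production implementation: real systems compile an automaton from the full JSON schema over the model’s actual token vocabulary, but the masking step is the same in spirit.

```python
import math
import random

# Toy character-level "vocabulary"; real systems work over the model's
# token vocabulary, which makes the automaton far more involved.
VOCAB = list('{}": abcdefghijklmnopqrstuvwxyz0123456789')

# A hand-written state machine for outputs shaped like {"answer": <digits>}.
# Production implementations compile this automaton from the JSON schema.
TEMPLATE = '{"answer": '

def allowed_chars(out: str) -> set[str]:
    """Given everything generated so far, return the set of characters the
    state machine permits next (an empty set means generation is complete)."""
    if len(out) < len(TEMPLATE):   # still emitting the fixed prefix
        return {TEMPLATE[len(out)]}
    if out.endswith("}"):          # closing brace reached: done
        return set()
    digits = set("0123456789")
    return (digits | {"}"}) if out[-1].isdigit() else digits

def mask_logits(logits: list[float], allowed: set[str]) -> list[float]:
    """Logit biasing: push every disallowed token to -inf so it can never
    be sampled. This is what guarantees schema-valid output."""
    return [l if VOCAB[i] in allowed else -math.inf for i, l in enumerate(logits)]

def greedy_decode() -> str:
    out = ""
    while True:
        allowed = allowed_chars(out)
        if not allowed:
            return out
        logits = [random.gauss(0, 1) for _ in VOCAB]  # stand-in for model logits
        masked = mask_logits(logits, allowed)
        out += VOCAB[max(range(len(VOCAB)), key=masked.__getitem__)]

print(greedy_decode())  # e.g. {"answer": 42}
```

The key property: at every step, probability mass is restricted to the tokens the automaton allows, so every sampled continuation stays on a path to valid JSON.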

Why attend?

  • Direct insights from the engineers who built it.

  • Learn how to deploy more reliable and efficient LLM-based systems.

  • Get your questions answered live.

Don’t miss this opportunity to pick the brains of the engineers behind these features and make your AI products more reliable. Register now to secure your spot!

Host 

Rachel Rapp

Developer Advocate

Speakers 

Bryce Dubayah

Engineering

Philip Kiely

Lead Developer Advocate

Trusted by top engineering and machine learning teams
