Laurel ships ML models 9+ months faster using Baseten

$200k+
in ML savings
9+
months faster deployment

Background

Professional services firms need to know precisely how time is spent, and so do you. Laurel is a Series B company providing an AI-powered timekeeping solution that automatically captures work-related activity throughout the day, across your application stack, and generates comprehensive billing entries to send directly to clients.

Laurel’s desktop app tracks users’ digital footprint during the workday, with a built-in privacy mechanism. To automatically categorize hundreds of thousands of time entries every day, Laurel relies on sophisticated ML models and Baseten’s product suite.

The Problem

Andrew Ward, VP of Machine Learning at Laurel, was tasked with building machine learning infrastructure from scratch in under four months. He was well staffed on the data science side, and the team understood its user data better than anyone.

However, that infrastructure would be slow and difficult to build, and largely undifferentiated. From an infrastructure perspective, operationalizing models seamlessly and reliably is a flat-out headache.

Andrew first brought in a specialized AWS ML infrastructure consultant, who wanted to leave SageMaker behind: operating it would have required a dedicated infrastructure engineer or two. That conclusion led Andrew to conversations with Baseten, with the central aim of accelerating the product’s time-to-market.

The Solution

After their initial meeting, the Baseten and Laurel teams executed a test implementation and set everything up “extremely fast”, then quickly ran into bugs. Yet Andrew emphasizes that the hurdle, and the troubleshooting that followed, were green flags.

For an ethical ML company, honesty and collaboration are critical to a technical partnership, especially when that partnership powers the core of your competitive value proposition.

“There is value in having Truss open source. Being able to look at pieces of code and understand the thinking behind how the product is structured is actively useful to us.”

Andrew Ward, VP of Machine Learning at Laurel

Our teams cracked the issues within hours and had a complex proof-of-concept model serving in less than a day. Since the partnership officially began, the Laurel team has found that Baseten deeply understands the problem it is solving and stays focused on it, something Andrew calls rare in the ML ecosystem.

He also highlights Baseten’s “incredible customer responsiveness,” as the team is always willing to custom-build solutions. For instance, Laurel’s engineering team uses an asynchronous, queue-driven architecture that Baseten’s RESTful API endpoints did not support. Within a day, the Baseten team shipped a product feature that worked natively with Laurel’s system architecture.
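To make the architectural mismatch concrete, here is a minimal sketch of a queue-driven inference worker. It is purely illustrative: the queue, payload shape, and MODEL_ENDPOINT URL are assumptions, not Laurel’s pipeline or Baseten’s API. The worker drains time-entry payloads from a queue and forwards each one to a synchronous model-serving HTTP endpoint, rather than having clients call the endpoint directly.

```python
# Minimal, hypothetical sketch of a queue-driven inference worker.
# Not Laurel's pipeline or Baseten's API; names and URLs are placeholders.
import json
import queue
import urllib.request

# Placeholder model endpoint; a real deployment would supply its own URL and auth.
MODEL_ENDPOINT = "https://example.com/model/predict"

entry_queue: "queue.Queue[dict | None]" = queue.Queue()


def categorize(entry: dict) -> dict:
    """POST a single time entry to the model endpoint and return its prediction."""
    request = urllib.request.Request(
        MODEL_ENDPOINT,
        data=json.dumps(entry).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


def worker() -> None:
    """Drain the queue, categorizing entries as they arrive."""
    while True:
        entry = entry_queue.get()
        if entry is None:  # sentinel: stop the worker
            break
        print(categorize(entry))
```

In a setup like this, inference is driven by a consumer draining the queue rather than by clients waiting on a synchronous request, which is the kind of gap the custom-built feature closed.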

The Result

Baseten has saved the Laurel team a tremendous amount of time and funds.

Without Baseten, Andrew would have had to hire at least one engineer to build and maintain comparable features internally. A senior engineer costs upwards of $200,000 annually plus stock options, a significant hit to a startup’s budget.

Finding someone to hire, onboarding them, and having them execute this complex project would have taken up to a year.

“You need an engineer who's been there and done a project like this before. There's actually not that many of those people in the world.”

Andrew Ward, VP of Machine Learning at Laurel

Andrew compares that to Baseten: “With you guys, we were up and running in just three days.”

Ultimately, Baseten enabled Laurel to bring their product to market with killer features far faster than they could have managed independently.

The Future

Laurel’s product is undeniably ML-intensive. With Baseten’s help, Andrew envisions scaling to automatically categorize the time of hundreds of thousands of professionals every day.

Beyond this, the Laurel team looks forward to experimenting with novel models for temporally aware categorization, which brings a new set of technical challenges. It’s unlike how most practitioners currently use ML, and it points to where the field is headed.

With this, Laurel is excited to keep learning best practices from Baseten to deliver best-in-class AI-powered products to their customers.

“The early stages of our relationship have made me feel confident that as we grow and change and have more models and weirder stuff happen, we'll be able to work with Baseten to leverage their strengths and solve our problems.”

Andrew Ward, VP of Machine Learning at Laurel

Explore Baseten today

We love partnering with companies that are developing innovative AI products, and we do it by providing the most customizable model deployment with the lowest latency.