Tuesday November 12, 2024 12:55pm - 1:20pm MST
LLMs are resource-intensive to deploy and manage. Have you ever wondered if there's a way to pinpoint exactly which parts of your code are draining resources, causing latency, and hurting performance? In a world where efficiency is crucial, dynamically inspecting application behavior and performance at runtime can be transformative. Join this session to explore how to leverage OpenTelemetry’s profiling feature to optimize LLM code at a much deeper level.

We'll cover how to:
1. Identify specific pieces of code that consume excessive CPU and memory, or cause memory leaks and OOM errors.
2. Improve LLM performance by understanding model behavior, reducing latency, and meeting SLAs and SLOs.
3. Achieve efficient deployments on Kubernetes, ensuring optimal resource utilization and cost savings.
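As a stand-in for the OpenTelemetry profiling pipeline the session covers (which is agent-based and language-agnostic), here is a minimal sketch of the kind of CPU and memory hotspot data such profiling surfaces, using only Python's built-in `cProfile` and `tracemalloc`. The `hot_function` workload is hypothetical, standing in for an LLM post-processing step:

```python
import cProfile
import io
import pstats
import tracemalloc

def hot_function(n):
    # Hypothetical CPU- and memory-heavy work, standing in for
    # an LLM post-processing step (e.g. detokenization).
    return [str(i) * 10 for i in range(n)]

# CPU profiling: record which functions consume the most time.
profiler = cProfile.Profile()
profiler.enable()
data = hot_function(50_000)
profiler.disable()
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)

# Memory profiling: track allocations and peak usage to spot
# candidates for leaks or OOM risk.
tracemalloc.start()
data = hot_function(50_000)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"peak allocation: {peak / 1e6:.1f} MB")
```

A continuous profiler (as in the OpenTelemetry profiling signal) gathers comparable stack-level CPU and allocation samples in production with low overhead, rather than requiring this kind of manual instrumentation.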
Speakers

Seema Saharan

Site Reliability Engineer, CNCF Ambassador, Autodesk
Meet Seema, the tech whiz at Autodesk. She's not just about fixing things; she loves sharing what she knows! Whether speaking at events like GitLab Commit and GitHub Universe or breaking down tech on her YouTube channel, Seema makes complicated topics easy and fun. Join…

Aditya Soni

CNCF Ambassador, DevOps Engineer II, Forrester
Aditya Soni is a DevOps/SRE professional who has worked with product and service companies including Red Hat and Searce, and is currently a DevOps Engineer II at Forrester Research. He holds AWS, GCP, Azure, Red Hat, and Kubernetes certifications. He is a CNCF Ambassador…
Salt Palace | Level 2 | 255 B