Customers are looking for a turnkey solution to integrate LLMs with their existing applications. This session will provide an overview of the operational considerations, architectural patterns, and governance controls needed to operate LLMs at scale.
In this session we will see how to improve observability of container workloads, focusing on the three pillars: monitoring, logging, and tracing. On the operational side, we will discuss how to detect behaviours that deviate from normal operating patterns.
Customers in regulated industries must satisfy multiple guardrails when running workloads on managed compute provided by AWS. This talk will focus on setting up guardrails, and on deploying and monitoring ML services using Service Catalog Tools.
Learn for free, or join the best tech learning community for the price of a pumpkin latte.
Event notifications, weekly newsletter
Delayed access to all content
Immediate access to Keynotes & Panels
Access to Circle community platform
Immediate access to all content
Courses, quizzes & certificates
Community chats