Conf42 Machine Learning 2025 - Online

- premiere 5PM GMT

Hallucination by Design: How Embedding Models Misunderstand Language

Abstract

Join me to discover how embedding models misunderstand human language. I’ll share test results where models score “take medication” and “don’t take medication” as nearly identical. Learn the patterns and techniques that make GenAI systems more reliable. If you’re using LLMs, you can’t afford to miss this.
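The negation problem the abstract describes can be illustrated with a toy sketch (my own example, not the speaker's actual test setup): when an embedding is dominated by token overlap, adding “don't” barely moves the vector, so the two sentences come out nearly identical under cosine similarity. Real sentence-embedding models exhibit a related effect for the same underlying reason.

```python
import math
from collections import Counter

def bow_vector(text):
    # Toy bag-of-words "embedding": token counts stand in for averaged
    # word vectors. This is a deliberate simplification for illustration.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

s1 = "take your medication every morning"
s2 = "don't take your medication every morning"

# The single negation token changes only 1 of 6 dimensions,
# so similarity stays above 0.9 despite the opposite meaning.
print(round(cosine(bow_vector(s1), bow_vector(s2)), 2))
```

Five of the six tokens are shared, so the similarity is about 0.91; a system retrieving by embedding similarity would treat these opposite instructions as near-duplicates.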

...

Ritesh Modi

Principal AI Engineer @ Microsoft

