Ofer Mendelevitch: Why do LLMs hallucinate?

Ofer Mendelevitch is a data scientist and machine learning engineer. He holds a B.Sc. in computer science from the Technion and an M.Sc. in EE from Tel Aviv University. At Vectara, Ofer leads developer relations and advocacy. Prior to Vectara, he built and led data science teams at Syntegra, Helix, LendUp, and Yahoo!

Since ChatGPT took the industry by storm, everyone has been working to implement LLM-powered applications and trying to understand how best to apply this incredible innovation within their own business and for the benefit of their customers. One of the key issues with LLMs is hallucination: the tendency of large language models to make up responses that may be inaccurate or even completely incorrect. In this talk I will discuss why hallucination occurs and some of the ways to address it, including retrieval augmented generation (also known as grounded generation). Finally, I'll show a demo of an LLM-powered app for asking questions about recent news that uses grounded generation to mitigate hallucination.
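The core idea behind retrieval augmented generation can be sketched in a few lines: retrieve facts relevant to the user's question, then instruct the LLM to answer only from those facts. The toy word-overlap retriever, the corpus, and the prompt template below are illustrative assumptions for this sketch, not Vectara's actual API or the method demoed in the talk:

```python
def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by word overlap with the query.
    A real system would use a neural/semantic retriever instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query, corpus):
    """Build a prompt that grounds the LLM's answer in retrieved facts."""
    facts = retrieve(query, corpus)
    context = "\n".join(f"- {f}" for f in facts)
    return (
        "Answer using ONLY the facts below. "
        "If the facts are insufficient, say you don't know.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {query}"
    )

# Hypothetical mini-corpus standing in for a news index.
corpus = [
    "Vectara provides a platform for grounded generation.",
    "Hallucination is when an LLM fabricates unsupported claims.",
    "Retrieval augmented generation adds relevant facts to the prompt.",
]
prompt = grounded_prompt("What is hallucination in an LLM?", corpus)
print(prompt)
```

The prompt would then be sent to the LLM in place of the raw question; because the instruction restricts the model to the supplied facts, answers stay grounded in retrieved content rather than the model's parametric memory.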
