Dr. Amr Awadallah at The AI Quality Conference 2024

Vectara Inc

In this video, Vectara CEO Dr. Amr Awadallah shares insights from The AI Quality Conference 2024 in San Francisco. Amr presents Vectara's vision as well as the thought process behind effectively reducing hallucinations in LLMs through Retrieval Augmented Generation. AI quality is key, and in this talk you will learn how to maximize the quality of your LLM.

**Introduction to Vectara**

Vectara is a platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.

**Try Vectara**

Create your account at vectara.com, or visit:

https://console.vectara.com/signup

**Deep Dive into Mockingbird**
https://vectara.com/blog/mockingbird-a-rag-and-structured-output-focused-llm/

**Hughes Hallucination Evaluation Model (HHEM)**

(Released 3/26/2024!) Vectara now includes a Factual Consistency Score (FCS) to evaluate hallucinations:

https://vectara.com/blog/automating-hallucination-detection-introducing-vectara-factual-consistency-score/

Technical Deep Dive of the HHEM:
https://vectara.com/blog/cut-the-bull-detecting-hallucinations-in-large-language-models/

Measuring Hallucinations in RAG Systems (layperson HHEM intro):
https://vectara.com/blog/measuring-hallucinations-in-rag-systems/

Get the HHEM open-source code on GitHub:
https://github.com/vectara/hallucination-leaderboard

Setting up Your Own HHEM Leaderboard:
https://huggingface.co/blog/leaderboards-on-the-hub-vectara

**Try HHEM**

Vectara focuses not only on retrieval but also on mitigating hallucinations in LLMs, the leading risk factor preventing companies from adopting this monumental technology. To that end, Vectara has developed the Hughes Hallucination Evaluation Model (HHEM), an open-source model for evaluating hallucinations in LLMs, along with a complementary leaderboard that ranks the top LLMs by their proclivity for hallucinating. HHEM is available as an open-source model on Hugging Face:

https://huggingface.co/vectara/hallucination_evaluation_model
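As a rough illustration, the model linked above can be pulled down with the `transformers` library and used to score (source, generated) pairs. This is a minimal sketch, not Vectara's documented usage: the `trust_remote_code`/`predict` pattern follows the common Hugging Face custom-model convention and is an assumption, and the 0.5 threshold is an arbitrary illustrative cutoff, not a Vectara recommendation.

```python
# Hypothetical sketch of scoring factual consistency with Vectara's
# open-source HHEM model. Only the model id comes from the link above;
# the loading/prediction calls are assumptions based on the usual
# Hugging Face custom-code pattern.

def load_hhem():
    """Download and return the HHEM model (requires network access)."""
    from transformers import AutoModelForSequenceClassification  # pip install transformers
    return AutoModelForSequenceClassification.from_pretrained(
        "vectara/hallucination_evaluation_model", trust_remote_code=True
    )

def interpret_score(score, threshold=0.5):
    """Map a factual-consistency score in [0, 1] to a coarse label.

    Higher scores mean the generated text is better supported by its
    source; the 0.5 cutoff here is an illustrative choice only.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("factual consistency scores lie in [0, 1]")
    return "consistent" if score >= threshold else "possible hallucination"

def evaluate(model, pairs):
    """Score (source_text, generated_text) pairs and label each one."""
    scores = model.predict(pairs)  # one score per pair (assumed interface)
    return [(float(s), interpret_score(float(s))) for s in scores]
```

Usage would look like `evaluate(load_hhem(), [("The sky is blue.", "The sky is green.")])`, yielding a low score and a "possible hallucination" label for an unsupported claim.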

See the HHEM Leaderboard:
https://huggingface.co/spaces/vectara/leaderboard

**Vectara Use-Cases**
https://docs.vectara.com/docs/use-case-exploration

**Vectara x Incorta Nexus**
https://vectara.com/blog/power-precision-enhancing-operational-genai-with-retrieval-augmented-generation-from-vectara/

Check out this article for more insight into integrating Vectara into LangChain!
https://blog.langchain.dev/langchain-vectara-better-together/

**Developer Resources**

Join our Discord
https://discord.gg/GFb8gMz6UH

Read our API documentation
https://docs.vectara.com/

Join the Vectara Developer Community
https://discuss.vectara.com/

**Social Channels**

https://twitter.com/vectara
https://www.linkedin.com/company/vectara/mycompany/
https://www.youtube.com/@vectara