Comments:
This is a really good video, thank you so much! Out of curiosity, why do you use iTerm2 as a terminal, and how did you set it up to look that cool? 😍
Useful, Nice, Thank You 🤩🤩🤩
How did you rip the AWS documentation?
Huge class!!
Great video, but where have you added the OpenAI API keys?
Easily one of the best explained walk-throughs of LangChain RAG I’ve watched. Keep up the great content!
Wow! Thanks
This is Amazing 🙌
Hi bro, I am creating a chatbot that takes data from a third-party API, which means there is less data, but it is dynamic data on every call. Should I use the RAG approach? If not, please suggest a better approach.
Excellent coding! Working wonderfully! Appreciated. One question, please: what difference does it make if I change from .md to .pdf?
Hi, your video is so good. I just want to know: if I want to automatically update my documents in the production environment, keeping the query service running and always using the latest documents as the sources, how can I do this by changing the code? ❤
Thank you for this very instructive video. I am looking at embedding some research documents from sources such as PubMed or Google Scholar. Is there a way for the embedding to use website data instead of locally stored text files?
As others have asked: "Could you show how to do it with an open-source LLM?" Also, instead of Markdown (.md), can you show how to use PDFs? Thanks.
Absolutely epic video. I was able to follow along with no problems by watching the video and following the code. Really tremendous job, thank you so much! Definitely subscribing!
This is great, but can we use our custom LLM as a server? I don't want to use OpenAI. Is there any way?
Ответитьany one face tesseract error in windows,,it works well at linux
?
Could you show how to do it with an open source LLM?
I never comment on videos, but this was such an in-depth and easy to understand walkthrough! Keep it up!
First, thank you very much. Now please also show how to apply memory of various kinds.
Great video! This was my first exposure to ChromaDB (worked flawlessly on a fairly large corpus of material). Looking forward to experimenting with other language models as well. This is a great stepping stone towards knowledge based expansions for LLMs. Nice work!
How about an entire codebase?
Very nice video. What kind of theme do you use to make VS Code look like this? Thanks.
I have a question: what if the user asks the RAG app, "Can you help me summarize the content of page 5?" Can the RAG app identify which page the user is asking to summarize?
I too am quite impressed with your videos (this is my 2nd one). I have now subscribed, and I bet you'll be growing fast.
Thanks dude!
Can I use HuggingFaceEmbeddings instead of OpenAIEmbeddings?
I was just thinking about this, great work.
Hypothetically, what if your data sucks? What models can I use to create the documentation? (lol)
Amazing video, directly subscribed to your channel ;-) Can you also provide an example of using your own LLM instead of OpenAI?
Can we use other LLMs besides OpenAI?
Thank you for this. Looking forward to tutorials on using the Assistants API.