Chat with Multiple PDFs | LangChain App Tutorial in Python (Free LLMs and Embeddings)

Alejandro AO - Software & Ai

1 year ago

431,472 views

Comments:

@simonbarbeaux6401 - 30.11.2023 20:23

Is there a way to contact you by email?

@subhasish79 - 30.11.2023 03:43

A Udemy Killer video indeed. Thank you very much!

@mango-strawberry - 30.11.2023 00:10

Hi Alejandro, can you tell me what the prerequisites are for following this video? I know Python and machine learning basics.

@federiconobili6038 - 28.11.2023 22:42

Extremely high quality tutorial! Congratulations! It was extremely helpful. A further step forward would be to store the PDFs' embeddings in a database, so that every time you close your application you don't have to upload your PDFs again. Any suggestion? Thanks. I'm a new subscriber of your channel.
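LangChain's FAISS vectorstore can be persisted with `save_local()` and reloaded with `load_local()`, which addresses the question above. The general cache-or-build pattern looks like this; a stdlib sketch with a plain dict standing in for a real vectorstore, and all names illustrative rather than from the tutorial:

```python
import os
import pickle

def load_or_build(cache_path, build_fn):
    """Return the cached object if it exists on disk, otherwise build and cache it."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    obj = build_fn()
    with open(cache_path, "wb") as f:
        pickle.dump(obj, f)
    return obj

# First call builds and writes the cache; the second call never runs its build_fn.
store = load_or_build("store.pkl", lambda: {"chunk-0": [0.1, 0.2]})
reloaded = load_or_build("store.pkl", lambda: {"never": "called"})
print(reloaded)  # → {'chunk-0': [0.1, 0.2]}
```

With a real FAISS store you would replace the pickle calls with `vectorstore.save_local(path)` and `FAISS.load_local(path, embeddings)`.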

@woojay - 28.11.2023 06:28

Thank you so much. This was super helpful for my own project that I was building.

@marcogiovannini7456 - 27.11.2023 20:43

How can I change the temperature?

@gurjeet333 - 27.11.2023 03:08

Hi, thanks for the great video. Can you also create a short video on how to deploy these to Hugging Face Spaces? I am asking because when we showcase this project it should be in production, not in a development environment, for anyone else to access. Please consider doing a short video around that.

@dsx0164 - 26.11.2023 17:26

Huggingface:

ValueError: Error raised by inference API: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 2048. Given: 2516 `inputs` tokens and 0 `max_new_tokens`
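This error means the retrieved context plus the prompt exceeded the model's 2048-token window. Typical fixes are a smaller `chunk_size` when splitting, retrieving fewer chunks, or passing `model_kwargs={"max_new_tokens": ...}` to the Hugging Face LLM wrapper. A minimal sketch of trimming retrieved chunks to a token budget, using a naive whitespace count as a stand-in for a real tokenizer such as tiktoken:

```python
def trim_to_budget(chunks, max_tokens):
    """Keep whole chunks, in order, until a naive token budget is exhausted.
    A real app should count with the model's own tokenizer, not len(split())."""
    kept, used = [], 0
    for chunk in chunks:
        n = len(chunk.split())  # crude stand-in for a token count
        if used + n > max_tokens:
            break
        kept.append(chunk)
        used += n
    return kept

chunks = ["one two three", "four five", "six seven eight nine"]
print(trim_to_budget(chunks, 5))  # → ['one two three', 'four five']
```

Applied before the chunks are stuffed into the prompt, this keeps `inputs` under the model's limit at the cost of dropping the lowest-ranked context.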

@dsx0164 - 26.11.2023 15:47

Did not find huggingfacehub_api_token, please add an environment variable `HUGGINGFACEHUB_API_TOKEN`
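This error means the token is not visible in the process environment. The usual fix is `pip install python-dotenv`, put the token in a `.env` file, and call `load_dotenv()` before constructing the model. For illustration, a minimal stdlib stand-in for what `load_dotenv()` does (simplified; the real library also handles quoting, interpolation, and more):

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv(): parse KEY=VALUE lines
    and export them, without overriding variables that are already set."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# After writing HUGGINGFACEHUB_API_TOKEN=hf_xxx into .env, call this at startup:
load_env_file()
```

The key detail is ordering: the variable must be exported before LangChain's Hugging Face wrapper is instantiated, or it will raise the error above.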

@rezaulmasum205 - 25.11.2023 23:09

Could you share any model where I can get results from a JSON file? For example, I have thousands of products, and I would like to retrieve some products based on a prompt, such as 'price more than 50' or 'find the best cross-sell products.'

@badreddinebengaidhassine441 - 25.11.2023 19:52

Thanks for this amazing video. I just wanted to know how we can evaluate the model built in this tutorial, e.g. accuracy, loss, precision?

@federiconobili6038 - 24.11.2023 20:07

I continue to face problems with the tiktoken package. Anyone else?

@Belerez - 23.11.2023 11:24

Can you remake the video but with Llama2 instead?

@waytojava1928 - 22.11.2023 14:12

This is great work, congratulations, and I will support you. A couple of questions: 1) Will I have to upload my PDFs every time I start the project, or can we store the details in some files? 2) Can it point to the pages it draws its conclusions from? 3) Can you move the chat text box to the bottom rather than the top, just like ChatGPT, and always focus on the end of the page after a response?
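On question 2 above: the tutorial concatenates all pages into one string before chunking, so page numbers are lost. To cite pages, chunk each page separately and keep the page number as metadata, which is what LangChain's `Document.metadata` is designed for. A hypothetical stdlib sketch of the idea:

```python
def chunks_with_pages(pages, chunk_size=200):
    """Split each page's text into fixed-size chunks, tagging every chunk with
    its 1-based page number so answers can cite their sources."""
    chunks = []
    for page_num, text in enumerate(pages, start=1):
        for start in range(0, len(text), chunk_size):
            chunks.append({"text": text[start:start + chunk_size],
                           "page": page_num})
    return chunks

docs = chunks_with_pages(["first page text", "second page text"], chunk_size=10)
print(docs[0])  # → {'text': 'first page', 'page': 1}
```

After retrieval, the `page` field of the matched chunks gives the citation; the sizes and dict shape here are illustrative only.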

@amedyasar9468 - 22.11.2023 09:49

How can we change the model to allow more tokens in the answer? I have the 16k API from OpenAI, but I still hit a token problem due to the 4k token limit.

@simaobonvalot9141 - 21.11.2023 19:01

I'm trying to create the embeddings locally, without using the OpenAI API, but my process just dies when I try to create the vectorstore...

Wondering if it is my PC problem:

load INSTRUCTOR_Transformer
Killed

@suyeounlee5325 - 21.11.2023 15:05

Wow!!! you are very very very handsome!!!

@TheMrGoodkind - 19.11.2023 19:25

This is great! If I want to embed this chatbot on my personal website, how would I do that?

@sfisothecreative99 - 19.11.2023 01:16

I just had to subscribe. Great quality content!

@abhinavraj8868 - 18.11.2023 15:26

Did anyone else face this issue? "Did not find openai_api_key, please add an environment variable OPENAI_API_KEY which contains it, or pass openai_api_key as a named parameter." Although I have OPENAI_API_KEY in my .env file.
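This usually means `load_dotenv()` was not called before the model object was created, or the `.env` file is not in the directory the app is run from. A small hypothetical guard (the function name is illustrative) that fails with a clearer message than LangChain's generic one:

```python
import os

def check_api_key(name="OPENAI_API_KEY"):
    """Fail fast with a useful message instead of a generic missing-key error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"{name} is not visible to this process. "
            "Call load_dotenv() before creating the model, and make sure "
            ".env sits in the directory you run the app from."
        )
    return value
```

Calling `check_api_key()` right after `load_dotenv()` at the top of the app pinpoints whether the key ever made it into the environment.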

@hamzahassan6726 - 18.11.2023 12:42

legend

@pdp_29 - 18.11.2023 08:34

is OpenAI paid?

@SreejithKSGupta - 17.11.2023 21:45

Can you explain how I can use a local model for the same?😅

@laurentlemaire - 16.11.2023 18:51

Excellent video! Thanks for describing it so clearly and with the helpful git repo.

@kingfunny4821 - 16.11.2023 11:27

Can I use this offline?

@spadron04 - 16.11.2023 01:36

Thank you so much, Alejandro. If you chat with your own local PDFs, is there no sharing involved when you have private PDF files to chat with?

@marcelodelta - 15.11.2023 23:56

Hello, very good content, congratulations. Is it possible to handle more complex tasks with this model? For example, is it possible to send several JSON files with product values and other information, and ask for the values of all products in the JSON files to be calculated and added together?

@valeriociotti7904 - 15.11.2023 14:28

Hi, is there a reason why, when I run the code with the local LLM like in the example and ask a question related to the document, I get the error: ValueError: Error raised by inference API: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 2048. Given: 3017 `inputs` tokens and 0 `max_new_tokens`?

@ResearchTutorials-hx4xm - 15.11.2023 13:40

Thanks, I also had the token limit issue. Could you please advise? I have a Plus account with OpenAI; would I need an enterprise account?

@SashaBaych - 13.11.2023 02:04

Man, god bless you!

@AdegbengaAgoroCrenet - 12.11.2023 21:52

I rarely comment on YT videos, and I must say your sequencing and delivery of this content is really good. It's informative, clear, concise, and straight to the point. No fluff or hype, just good quality content with exceptional delivery. I couldn't help but subscribe to your channel and smash the like button. I have seen a lot of videos about this and they don't deliver the kind of value you have.

@lalitaawasthi8201 - 10.11.2023 12:43

Thank you so much for such amazing content. It was really helpful.

@user-yh8nq6jd1d - 09.11.2023 04:18

Great content! I keep getting an error when completing the vector store. There's a whole bunch of ^^^^ under FAISS.from_texts(texts=text_chunks, embedding=embeddings).

@Ricocase - 08.11.2023 16:27

Can't see the screen. Point of video?

@SebastienBERTRAND84 - 07.11.2023 11:32

Excellent video, but you could have taken a better example than Joe Rogan for your embedding price example. Who with a brain really listens to this guy?

@soccerchannel1413 - 07.11.2023 11:28

Hey man, thank you for the information. However, I am using a Windows machine and am facing this error: "Could not build wheels for faiss-cpu, tiktoken, which is required to install pyproject.toml-based projects". Can someone help me solve this error?

@ruddysimonpour6281 - 07.11.2023 02:58

Can you create a video implementing Langchain and Vectara for PDF files and text files too?

@kamaravichow - 05.11.2023 21:02

Didn't you leave OpenAIEmbeddings without passing text?

@akshayjadhav8429 - 05.11.2023 18:55

How do I solve a CUDA out-of-memory error?

@milans967 - 05.11.2023 11:45

Hi, can't we use this app on iOS?

@aldotanca9430 - 04.11.2023 18:13

Thanks, I particularly appreciated the detailed explanation of the process. Very clear.
I am planning on an application which will use a large corpus of text and it is likely to be unfunded, so I am finding it hard to decide on what approaches to follow, given new stuff seems to come up every week.
But I think I will give this approach a go, as a proof of concept at least, and move from there.

@deviprasanna5906 - 03.11.2023 13:55

This is a really great explanation, but Instructor_transformer is taking too long to process the text. Please let me know the solution for this.

@user-up3kj2xh9m - 03.11.2023 09:59

In this process, what are the input tokens when using the OpenAI API key? Are they reduced compared to the complete PDF?

@nicolasmontanaro9629 - 03.11.2023 01:14

Can you exceed the 200 MB limit? Is that possible?
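Yes: the 200 MB cap is Streamlit's default for its file uploader, controlled by the `server.maxUploadSize` setting (in megabytes), which can be raised in `.streamlit/config.toml`:

```toml
# .streamlit/config.toml
[server]
maxUploadSize = 500  # in MB; Streamlit's default is 200
```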

@coderwarehouse - 02.11.2023 13:14

What if I want to save the model and then run inference?

@rainbowtrout8331 - 01.11.2023 22:28

The way you explain each step is so helpful! Thank you

@techandprogramming4688 - 26.10.2023 04:35

Great content! Thanks for sharing all the knowledge so beautifully and smartly, without getting things complicated.
Also, I would like to ask: please make more and more COMPLEX projects for us, an LLM as a product or a complete software product, and also some things on LLMOps.

@carlmoller807 - 25.10.2023 19:01

Awesome! In your pic it looks like docs are handled async, is that correct? :)

@ricardolm6161 - 25.10.2023 17:00

I was having an issue with FAISS, the import looked more like "from langchain.vectorstores.faiss import FAISS"
