Comments:
Is there any way to contact you by email?
A Udemy Killer video indeed. Thank you very much!
Hi Alejandro, can you tell me what the prerequisites are for following this video? I know Python and machine learning basics.
Extremely high-quality tutorial! Congratulations! It was extremely helpful. A further step forward would be to store the PDFs' embeddings in a database so that you don't have to upload your PDFs again every time you close the application. Any suggestions? Thanks. I'm a new subscriber to your channel.
Thank you so much. This was super helpful for my own project that I was building.
How can I change the temperature?
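Assuming the ChatOpenAI model from the tutorial, temperature is just a constructor argument; a minimal sketch:

```python
from langchain.chat_models import ChatOpenAI

# Lower temperature -> more deterministic answers; higher -> more varied ones.
llm = ChatOpenAI(temperature=0.2)
```

For the Hugging Face alternative shown in the video, the equivalent knob goes into model_kwargs, e.g. model_kwargs={"temperature": 0.5}.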
Hi, thanks for the great video. Can you also create a short video on how to deploy this to Hugging Face Spaces? I'm asking because when we showcase this project it should be in production, not in a development environment, so that anyone else can access it. Please consider doing a short video on that.
Huggingface:
ValueError: Error raised by inference API: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 2048. Given: 2516 `inputs` tokens and 0 `max_new_tokens`
Did not find huggingfacehub_api_token, please add an environment variable `HUGGINGFACEHUB_API_TOKEN`
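Both messages have likely fixes, sketched below under the assumption that the HuggingFaceHub flan-t5 setup from the video is being used (the chunk sizes and retriever k value are illustrative, not from the video): export the API token before the LLM is created, and keep the prompt under the model's 2048-token context by using smaller chunks and retrieving fewer of them.

```python
import os
from langchain.llms import HuggingFaceHub
from langchain.text_splitter import CharacterTextSplitter

# Fix for the missing-token error: the variable must be set before the LLM is built
# (a .env file loaded with python-dotenv works just as well).
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."  # your Hugging Face token

# Fix for the 2048-token error: smaller chunks and fewer retrieved documents
# keep the assembled prompt within the model's context window.
text_splitter = CharacterTextSplitter(
    separator="\n", chunk_size=500, chunk_overlap=100, length_function=len
)
# retriever = vectorstore.as_retriever(search_kwargs={"k": 2})  # vectorstore built as in the video

llm = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",
    model_kwargs={"temperature": 0.5, "max_length": 512},
)
```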
Could you share any model where I can get results from a JSON file? For example, I have thousands of products, and I would like to retrieve some products based on a prompt, such as 'price more than 50' or 'find the best cross-sell products.'
Thanks for this amazing video. I just wanted to know how we can evaluate the model built in this tutorial, e.g. accuracy, loss, precision...?
I continue to face problems with the tiktoken package, anyone else?
Can you remake the video but with Llama2 instead?
This is great work. Congratulations, and I will support you. A couple of questions: 1) Will I have to upload my PDFs every time I start the project, or can we fix that by storing the details in some files? 2) Can it point out the pages it draws its conclusions from? 3) Can you move the chat text box to the bottom rather than the top, just like ChatGPT, and always focus on the end of the page after a response?
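For question 2, a minimal sketch (assuming a ConversationalRetrievalChain like the one in the video; page numbers only appear if the PDF loader stored them in the chunk metadata, which plain PyPDF2 text extraction does not do by itself):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(),
    retriever=vectorstore.as_retriever(),  # vectorstore built as in the video
    return_source_documents=True,          # include the chunks the answer was drawn from
)
result = chain({"question": "What does the introduction cover?", "chat_history": []})
print(result["answer"])
for doc in result["source_documents"]:
    print(doc.metadata.get("page"), doc.page_content[:80])
```

Question 1 is the same persistence issue addressed by the save_local/load_local sketch above.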
How can we change the model so it has more tokens to answer with? I have 16k API access from OpenAI; however, I got a token problem due to the 4k token limit...
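If the account has access to the 16k model, switching is a one-line change on the ChatOpenAI side; a minimal sketch (the model name is the 16k variant available at the time of the video):

```python
from langchain.chat_models import ChatOpenAI

# gpt-3.5-turbo-16k accepts roughly 16k tokens per request instead of ~4k.
llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0)
```

Note that ChatGPT Plus and API usage are billed separately; the 4k/16k limits come from the API model you select, not from the subscription tier.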
I'm trying to create the embeddings locally, without using the OpenAI API, but my process just dies when I try to create the vectorstore...
Wondering if it is a problem with my PC:
load INSTRUCTOR_Transformer
Killed
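That "Killed" line usually means the operating system stopped the process because the instructor model did not fit in RAM. A minimal sketch of a lighter-weight local alternative (the MiniLM model name is an illustration, not what the video uses):

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# all-MiniLM-L6-v2 is a small sentence-transformers model (~80 MB) that runs on CPU
# with far less memory than instructor-xl, at some cost in embedding quality.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
# text_chunks comes from the splitting step in the tutorial.
vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
```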
Wow!!! you are very very very handsome!!!
This is great! If I want to embed this chatbot on my personal website, how would I do that?
I just had to subscribe. Great quality content!
Did anyone else face this issue? "Did not find openai_api_key please add an environment variable OPENAI_API_KEY which contains it or pass openai_api_key as a named parameter." Although I have OPENAI_API_KEY in my .env file.
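A common cause is that the .env file is never actually loaded into the environment; python-dotenv has to run before any OpenAI class is instantiated. A minimal sketch:

```python
from dotenv import load_dotenv
from langchain.embeddings.openai import OpenAIEmbeddings

load_dotenv()  # reads .env from the working directory and exports OPENAI_API_KEY

embeddings = OpenAIEmbeddings()  # now finds the key in the environment
```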
Legend
Is OpenAI paid?
Can you explain how I can use a local model for the same?😅
Excellent video! Thanks for describing it so clearly and with the helpful git repo.
Can I use this offline?
Thank you so much Alejandro. If you chat with your own local PDFs, is there no sharing involved when you have private PDF files to chat with?
Hello, very good content, congratulations. Is it possible to handle more complex tasks with this model? For example, is it possible to send several JSON files with product values and other information, and ask for the values of all products in the JSON files to be calculated and added together?
Hi, is there a reason why, when I run the code with the local LLM as in the example and ask a question related to the document, I get the error: ValueError: Error raised by inference API: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 2048. Given: 3017 `inputs` tokens and 0 `max_new_tokens`?
Thanks, I also had the token limit issue; could you please advise? I have a Plus account with OpenAI; would I need an enterprise account?
Man, god bless you!
I rarely comment on YT videos, and I must say your sequencing and delivery of this content is really good. It's informative, clear, concise, and straight to the point. No fluff or hype, just good quality content with exceptional delivery. I couldn't help but subscribe to your channel and smash the like button. I have seen a lot of videos about this and they don't deliver the kind of value you have.
Thank you so much for such amazing content. It was really helpful.
Great content! I keep getting an error when creating the vector store. There's a whole bunch of ^^^^ under FAISS.from_texts(texts=text_chunks, embedding=embeddings).
Can't see the screen. Point of video?
Excellent video, but you could have picked a better example than Joe Rogan for your embedding price example. Who with a brain really listens to this guy?
Hey man, thank you for the information. However, I am using a Windows machine and am facing this error: "Could not build wheels for faiss-cpu, tiktoken, which is required to install pyproject.toml-based projects". Can someone help me solve this error?
Can you create a video implementing Langchain and Vectara for PDF files and text files too?
Didn't you leave OpenAIEmbedding without passing text?
How do I solve a CUDA out of memory error?
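One option, assuming the HuggingFaceInstructEmbeddings step is what runs out of GPU memory, is to force the embedding model onto the CPU (slower, but it avoids the CUDA allocation entirely); a minimal sketch:

```python
from langchain.embeddings import HuggingFaceInstructEmbeddings

embeddings = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-xl",  # swap in whichever model you loaded
    model_kwargs={"device": "cpu"},     # keep the model off the GPU
)
```

Using a smaller instructor variant (e.g. instructor-large or instructor-base) also reduces memory pressure.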
Hi, can't we use this app on iOS?
Thanks, I particularly appreciated the detailed explanation of the process. Very clear.
I am planning an application which will use a large corpus of text, and it is likely to be unfunded, so I am finding it hard to decide which approach to follow, given that new stuff seems to come up every week.
But I think I will give this approach a go, as a proof of concept at least, and move from there.
This is really great explanation. But Instructor_transformer is taking too long to process the text. Please let me know the solution for this
In this process, what are the input tokens when using the OpenAI API key? Are they reduced compared to the complete PDF?
Can you exceed the 200 MB limit? Is that possible?
What if I want to save the model and then run inference on it later?
The way you explain each step is so helpful! Thank you.
Great content! Thanks for sharing all the knowledge so beautifully and smartly, without getting things complicated.
Also, I would like to ask: please make more and more COMPLEX projects for us, LLM as a product or a complete software product, and also some things on LLMOps.
Awesome! In your pic it looks like docs are handled async, is that correct? :)
I was having an issue with FAISS; the import looked more like "from langchain.vectorstores.faiss import FAISS".
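Both import paths resolve to the same class in the LangChain versions from around the time of the video, so either form should work; a quick sketch:

```python
# Either import works, depending on the installed LangChain version:
from langchain.vectorstores import FAISS
# from langchain.vectorstores.faiss import FAISS
```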