Comments:
Thank you for the video, it was fantastic!
Such a great article! I learned a lot from this video, such as how complicated systems can be put together using a stack of models, as illustrated by RAG, to name one example. Jeremy, you are such a kind person to share this with the world.
So comprehensive. Perhaps the best introduction I have ever seen to the topic. Thanks so much.
This is a real gem. Reminds me of the authentic, high quality training material from Andrej Karpathy. Looking forward to future similar tutorials if you decide to make them! Thank you!
I really enjoyed this talk. Thank you so much.
Thank you so much for this
Thank you very much, sir.
Great course! Hello from Almaty Google developers community!
RLHF is one of the most regressive "people pleasing at the expense of utility" things I've ever seen. I genuinely think that it (and all the praise heaped on it affecting the way a lot of people learn this stuff) has set us back about 3/4 as far as GPT-4 and Llama models have brought us ahead.
This video shows AI is dumb. It's just programming. It doesn't have "reason" like a human being. And there are a lot of videos saying it's going to control us, lol.
ОтветитьJeremy, Congrats on the 100k subscribers.
Well deserved and hopefully a catalyst to get your invaluable content more exposure.
Thank you for creating this amazing talk around all the basics and applications with language models, this is really helpful!
Thanks so much Jeremy. The actual method to use and make function calling on LLMs was breaking my brain, and I didn't understand the JSON schema part of it. It would be wonderful if you could do a full course on the major LLM topics: fine-tuning, RAG, and agents of course. Would be wonderful if it used mostly open source models. I haven't found a model yet that will respond reliably with function calling / agent-based execution. *other than GPT-4, that is.
The circles in the glasses at the beginning give me a Detroit: Become Human vibe
wow
The godfather dropping some knowledge. Thank you for keeping AI for everyone in the most responsible way.
Thanks... great summary... now I know the relationship between neural network parameters and vector DBs
The coffee and thimble question is interesting because there really isn't a right answer. The thimble could be in the coffee mug in the sense that it's in the liquid-holding space, or it could be embedded in the coffee cup. I think it fails to ask the question needed to really discern what's going on.
Happy birthday Jeremy! Just got to the section where your bday is revealed and it is today! Thank you for all the great work :)
It would be nice to see your full custom instructions!
Is there a way, using a CNN, that I can do sentiment analysis of an image?
Trying to execute this on Google Colab and getting this error: ImportError: libcudart.so.12: cannot open shared object file: No such file or directory
Came here to learn about LLMs finetuned for hacking.
Was not disappointed 😅
The Tibetan language never made it to SEO qualification. Any suggestions?
This is a golden summary of the state of LLMs. Thank you!
By far the most useful practical guide to LLMs, by a length. Thank you Jeremy!
Hands down one of the best videos on LLMs on the internet.
So it's more like a software developer's guide
Thanks!!!
I found this video really helpful! Can you also share your system prompt for GPT-4 so it avoids writing summary in the end and stops lecturing us on ethics?
Thanks Jeremy! This was really helpful!
She's cute❤
Thank you Jeremy for all of your work and for sharing such quality videos. ❤
Thanks for saving our careers yet again Jeremy
Jeremy, I'm impressed by the video quality of your camera. What camera are you using? (If you can say, of course)
Thank you Jeremy for this introduction. It just answered many of my questions and affirmed some of my doubts about how many of the applications that use LLMs work today.
Hey Jeremy, loved your video. Have you tried vLLM for inference? It is even faster than GPTQ but uses much more RAM.
I am a total beginner, but you made me understand language models way better than anyone else. You are such a great teacher. I pray the Lord Guru blesses you with more insight and vision, such a humble and good soul. 😊😊
If you want to use it to solve your markdown issue, you need to break it into two steps.
The first is to describe the grammar of the subset of markdown you need to parse. As long as you can describe a closed grammar and the elements you want are in it, you can be brief.
Next, ask it to create a finite state machine to parse that grammar.
The way GPT tries to parse markdown, and most other languages, is a common mistake made by most human developers, which is probably why it is weak in this area.
If you follow the better approach, it does an amazing job in my experience.
I wanted to do text to SPARQL, but I couldn't get the training data.
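A minimal sketch of the two-step approach this comment describes: first fix a closed grammar for a tiny markdown subset, then parse it with a hand-written finite state machine. The subset chosen here (heading lines plus inline **bold** spans) is an illustrative assumption, not something specified in the comment:

```python
# Closed grammar (the assumed subset):
#   doc     := line*
#   line    := "# " inline   (heading)  |  inline  (paragraph)
#   inline  := (text | "**" text "**")*
# The FSM below parses the inline level with four states.

TEXT, STAR, BOLD, BOLD_STAR = range(4)

def parse_inline(line):
    """Split a line into ('text', s) and ('bold', s) tokens via an FSM."""
    state, buf, tokens = TEXT, [], []

    def flush(kind):
        if buf:
            tokens.append((kind, "".join(buf)))
            buf.clear()

    for ch in line:
        if state == TEXT:
            if ch == "*":
                state = STAR                  # maybe the start of "**"
            else:
                buf.append(ch)
        elif state == STAR:                   # saw one '*' in plain text
            if ch == "*":
                flush("text"); state = BOLD   # "**" opens a bold span
            else:
                buf += ["*", ch]; state = TEXT  # lone '*' is literal
        elif state == BOLD:
            if ch == "*":
                state = BOLD_STAR             # maybe the closing "**"
            else:
                buf.append(ch)
        elif state == BOLD_STAR:              # saw one '*' inside bold
            if ch == "*":
                flush("bold"); state = TEXT   # "**" closes the span
            else:
                buf += ["*", ch]; state = BOLD
    flush("text" if state in (TEXT, STAR) else "bold")
    return tokens

def parse(doc):
    """Line-level grammar: '# ' starts a heading, anything else is text."""
    out = []
    for line in doc.splitlines():
        if line.startswith("# "):
            out.append(("heading", parse_inline(line[2:])))
        elif line:
            out.append(("para", parse_inline(line)))
    return out
```

Because the grammar is closed and small, every state transition is explicit, which is exactly what makes this kind of parser easy for a model (or a person) to generate and verify.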
How far out until 90% of software devs aren't needed anymore?