Comments:
Thanks!
This is the most detailed video I've seen on building a Transformer model from scratch, from the code implementation to data processing to visualization. The creator really broke everything down into digestible pieces. Thanks!
I coded along with this video and am training the model now. However, at epoch 9 it still only predicts endless repetitions of one random word; it never feels like a sentence. Has anyone else experienced that?
Thanks for this video, super cool. I have one question though: what determines whether a module should have dropout or not? InputEmbedding has no dropout, but something as simple as ResidualConnection has dropout, and LayerNorm has none. I don't see what the pattern is there.
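One way to read the pattern behind the dropout question (a sketch in standard PyTorch, not the video's exact code; the module name here is illustrative): the paper applies dropout to the *output activations* of each sub-layer before they rejoin the residual stream, and to the sum of embeddings and positional encodings, while deterministic steps like LayerNorm or the embedding lookup itself get no dropout of their own.

```python
import torch
import torch.nn as nn

class PositionalDropout(nn.Module):
    """Illustrative module: dropout sits on the summed activations, not on
    the deterministic lookup or normalization steps."""

    def __init__(self, d_model: int, dropout: float) -> None:
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        # Stand-in positional-encoding table (random values for the sketch).
        self.register_buffer("pe", torch.randn(1, 64, d_model))

    def forward(self, x):
        # Add positional information, then regularize the combined output.
        x = x + self.pe[:, : x.shape[1], :]
        return self.dropout(x)

emb = torch.randn(2, 10, 16)          # (batch, seq_len, d_model)
out = PositionalDropout(16, 0.1)(emb)
print(out.shape)                      # shape is unchanged by dropout
```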
Wow, your explanation is amazing.
Great video, you are insanely talented btw.
Hello, first I want to thank you for your great tutorials, which are really helpful and motivating when trying to program AI!!
But I also have a question about a comment in your source code, specifically the comments (1, seq_len) and (1, seq_len, seq_len) on the return values of the dataset class. Would it not be (1, 1, seq_len) instead of (1, seq_len), since unsqueeze(0) is called twice?
Please enlighten me on my possible misunderstanding.
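The shape question above is easy to check in isolation (plain PyTorch, independent of the dataset code): two unsqueeze(0) calls on a 1-D tensor do indeed produce (1, 1, seq_len). Broadcasting against the (batch, heads, seq_len, seq_len) attention scores works either way, which is why an off-by-one dimension in a comment goes unnoticed at runtime.

```python
import torch

seq_len = 6
mask = (torch.arange(seq_len) != 0).int()   # any 1-D mask, shape (seq_len,)
once = mask.unsqueeze(0)                    # (1, seq_len)
twice = mask.unsqueeze(0).unsqueeze(0)      # (1, 1, seq_len)

print(once.shape)   # torch.Size([1, 6])
print(twice.shape)  # torch.Size([1, 1, 6])
```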
Thank you admin. Your video is great. It helps me understand. Thank you very much.
This is such great work. I don't really know how to thank you, but this is an amazing explanation of an advanced topic like the Transformer.
What is the point of defining the attention method as static?
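For context on the static-method question, a minimal sketch (simplified scaled dot-product attention, not the video's exact code): a @staticmethod reads no instance state, so the computation can be called and tested directly on the class, and the returned attention weights can be grabbed for visualization without constructing the full module.

```python
import math
import torch

class MultiHeadAttention:
    @staticmethod
    def attention(query, key, value, mask=None):
        # Scaled dot-product attention. Static because it touches no instance
        # state, so MultiHeadAttention.attention(...) works without an object.
        d_k = query.shape[-1]
        scores = (query @ key.transpose(-2, -1)) / math.sqrt(d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = scores.softmax(dim=-1)
        return weights @ value, weights

q = k = v = torch.randn(1, 2, 4, 8)   # (batch, heads, seq_len, d_k)
out, attn = MultiHeadAttention.attention(q, k, v)
print(out.shape, attn.shape)
```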
I'm working on speech-to-text conversion using Transformers. This was very helpful, but how can I change the code to suit my task?
What to say... just WOW! Thank you so much!!
This video is great! But can you explain how you convert the formula of the positional embeddings into log form?
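The log-form question above comes down to the identity a^x = exp(x * ln a): the denominator 10000^(2i/d_model) is computed as exp(2i * (-ln 10000) / d_model) purely for numerical stability, and both forms are mathematically identical, as a quick check shows (standard PyTorch; the values are illustrative):

```python
import math
import torch

d_model = 8
two_i = torch.arange(0, d_model, 2).float()  # the exponents 2i = 0, 2, 4, 6

# Direct form from the paper: 1 / 10000^(2i / d_model)
direct = 1.0 / (10000.0 ** (two_i / d_model))

# Log form used in many implementations: exp(2i * (-ln 10000) / d_model)
log_form = torch.exp(two_i * (-math.log(10000.0) / d_model))

print(torch.allclose(direct, log_form))  # True: the two forms agree
```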
Greetings from China! I am a PhD student focused on AI. Your video really helped me a lot. Thank you so much, and I hope you enjoy your life in China.
Thank you Umar for your extraordinary, excellent work! The best Transformer tutorial I have ever seen!
First of all, thank you. This is a great video. I have one question though: during inference, how do I handle unknown tokens?
This feels really fantastic, watching someone write a program from the bottom up.
WOW WOW WOW. Though it was a bit tough for me, I was able to understand around 80% of the code. Beautiful. Thank you so much.
Sincere congratulations for this fine and very useful tutorial! Much appreciated 👏🏻
There seems to be a very disturbing background bass sound at certain parts of your video, especially while you are typing. Could you please sort it out for future videos? Thanks
This is amazing, thank you 🙏
Mate, you are a beast!
I love your videos. Thank you for sharing your knowledge; I can't wait to learn more.
Dear Umar, thank you so much for the video! I don't have much experience in deep learning, but your explanations are so clear and detailed that I understood almost everything 😄. It will be a great help for me at my work. Wish you all the best! ❤
Very good video. Tysm for making this, you are making a difference.
Great job. Amazing. Thanks a lot. I really appreciate you. It is so much effort.
Thanks so much for such a great video, I really liked it a lot. I have a small query: for ResidualConnection, the paper gives the equation "LayerNorm(x + Sublayer(x))", but in the code we have x + self.dropout(sublayer(self.norm(x))). Why is it not self.norm(self.dropout(x + sublayer(x)))?
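On the residual-connection question above: the paper describes the post-norm arrangement, while the code in the video uses the pre-norm variant, a widely used alternative that tends to train more stably in deep stacks. A side-by-side sketch (names are illustrative; dropout is set to 0 so both expressions are deterministic here):

```python
import torch
import torch.nn as nn

d = 8
norm = nn.LayerNorm(d)
drop = nn.Dropout(0.0)        # 0 for a deterministic comparison
sublayer = nn.Linear(d, d)    # stand-in for attention or feed-forward

x = torch.randn(2, 3, d)

# Post-norm, as written in the paper: LayerNorm(x + Sublayer(x))
post_norm = norm(x + sublayer(x))

# Pre-norm, as used in the video's code: x + Dropout(Sublayer(LayerNorm(x)))
pre_norm = x + drop(sublayer(norm(x)))

print(post_norm.shape, pre_norm.shape)  # same shapes, different math
```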
Awesome! Highly appreciated. Super great! Thank you so much.
The code is really well written: very easy to follow and nicely organized.
Thanks for making the video. The only thing is I wish the text was bigger; it was hard to see.
The file 'train.py' defines the loss_fn. Why is the ignore_index taken from tokenizer_src rather than tokenizer_tgt?
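On the ignore_index question above: since the loss is computed over target-language labels, the padding id should conceptually come from tokenizer_tgt. Taking it from tokenizer_src only works when both tokenizers assign '[PAD]' the same id, which holds when they are built with the same special-token list (an assumption about the video's setup, not something shown here). A sketch of the mechanism itself:

```python
import torch
import torch.nn as nn

pad_id = 1  # assumed: both tokenizers map '[PAD]' to this id
loss_fn = nn.CrossEntropyLoss(ignore_index=pad_id)

logits = torch.randn(5, 10)                       # (num_tokens, vocab_size)
labels = torch.tensor([3, 7, 2, pad_id, pad_id])  # last two are padding
masked = loss_fn(logits, labels)                  # padding rows are skipped

# Equivalent check: average only over the three non-pad positions.
manual = nn.CrossEntropyLoss()(logits[:3], labels[:3])
print(torch.allclose(masked, manual))
```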
Perfect video!! Thank you so much. I always wondered about the details of the code and their explanation, and now I understand almost all of it. Thanks :) You are the best for me!
Just to repeat what everyone else is saying here: many thanks for an amazing explanation! Looking forward to more of your videos.
Thanks for making it so easy to understand. I definitely learned a lot and gained much more confidence from this!
Thanks!
Thank God it's not one of those 'ML in 5 lines of Python code' or 'learn AI in 5 minutes' videos. Thank you. I cannot imagine how much time you must have spent on making this tutorial. Thank you so much. I have watched it three times already and wrote the code while watching the second time (with a lot of typos :D).
Thanks!
Thanks, bro. With your explanation, I am able to build the Transformer model for my application. Your explanation was awesome. Please keep doing what you are doing.
How can we force it to use the GPU instead of the CPU? It's taking around 100 minutes for 20 epochs. I have a GeForce 4080 and an i9-13900K with 64 GB. This was my Docker command: "docker run --gpus all -p 9999:9999 -v D:\dc:/tf -it job_image:latest". I included your requirements.txt in mine and rebuilt the Docker image.
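A first diagnostic for the GPU question above (standard PyTorch; nothing here is specific to the video's code): if torch.cuda.is_available() prints False inside the container, the image most likely ships a CPU-only torch wheel or the NVIDIA container toolkit is not set up on the host, and "--gpus all" alone won't help. The training script also has to move both the model and every batch onto the device explicitly:

```python
import torch

# Step 1: confirm the runtime can see a GPU at all.
print(torch.cuda.is_available())

# Step 2: pick the device and move tensors (and the model) onto it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 3).to(device)
# In a real training loop, both model.to(device) and batch.to(device)
# are required; a tensor left on the CPU forces CPU computation.
print(x.device.type)
```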
I'm not sure if it's because I have studied this content 1000000 times, but this is the first time I have understood the code and feel confident about it. Thanks!
You are a great professional, thanks a ton for this.
Subscribed because you have a cat named 奥利奥.
Great video, a shame I don't understand English. I'm Brazilian and had several doubts, but I will definitely learn English and then come back here!! Congratulations on the video.
Keep doing what you are doing. I really appreciate you taking out so much time to spread such knowledge for free. I have been studying Transformers for a long time, but never have I understood them so well. The theoretical explanation in the other video combined with this practical implementation: just splendid. I will be going through your other tutorials as well. I know how time-consuming it is to produce such high-level content, and all I can really say is that I am really grateful for what you are doing and hope that you continue. Wish you a great day!
OMG. And you also note matrix shapes in comments! Beautiful. I actually know the shapes without having to trace some variable backwards.