Comments:
What tests should I add to future coding tests for LLMs?
With this man every coding assistant model is the best coding assistant model 😂😂
Thanks, great video! I found Llama great to code with, and I'm integrating Llama 2 into our own multi-application platform.
CRAZY!!!
How well does it compare on languages other than Python?
Yes, please show us how to install it locally! They'll be charging through the nose soon.
Yes, please make a video on the setup
Great video, how does it compare with WizardML?
Hi Matthew, amazing video! Thanks!
Could you tell me what your graphics card is?
Hey Matthew - would be great for you to do a deep dive into the Text Generation UI and how to use the whole thing. Also, covering GGUF and GPTQ (and other formats too) would be helpful...
Can you test Falcon LLM, and is it better than Llama or ChatGPT-4?
Would be interesting to ask CodeLlama to generate game theory simulations, just to see how much math and other non-developer domains it can bring into code.
I've done it with GPT-4, and it's really cool how much game theory you can learn just by running Python examples.
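For what it's worth, a "game theory simulation" prompt tends to yield something like the sketch below: a minimal iterated prisoner's dilemma. The strategy names and payoff matrix are the textbook defaults, not anything from the video.

```python
# Iterated prisoner's dilemma: a tiny game-theory simulation of the
# kind a code model can generate. C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run the iterated game and return cumulative (score_a, score_b)."""
    seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b
```

Running `play(tit_for_tat, always_defect)` shows tit-for-tat losing only the first round before matching the defector, while `play(tit_for_tat, tit_for_tat)` cooperates throughout.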
which model would you suggest for three.js or babylon.js?
I'm struggling to figure out the workflow for iterative conversations with CodeLlama. The examples are all single prompt-response pairs. I want guidance on prolonged, iterative back-and-forth dialogues where I can ask, re-ask, and follow up over many iterations.
A tutorial showing how to incrementally build something complex through 200+ iterative prompt-response exchanges would be extremely helpful. Rather than one-off prompts, walk through prompting conversationally over hours to build up a website piece by piece. I want to 'chew the bone' iteratively with CodeLlama like this.
Will the 34B run on a 4090?
I was able to coax ChatGPT into writing a working snake game. I used iterative prompting. At one point I ran the program and received an error; I pasted that error in and ChatGPT resolved it correctly. Ultimately it correctly implemented snake with one random fruit.
Install it now and it'll be out of date in a few months, with some other LLM beating it. Good vid, but I'm sticking with ChatGPT for now.
Would have been good to see how it does on other languages such as HTML, CSS, SCSS, JS, TS, PHP, Node, etc.
Man, you turned my world around
Thanks for your content!
What is with these shocked faces on thumbnails?
You're biased, man. You saw the results first. You're trying very hard to convince yourself that Llama is the best, but it's not. You're only testing cases where GPT fails 🤣. Man, I build software with GPT and I'm already selling it.
That was impressive. I like to ask, "build a calculator that adds, subtracts, divides and multiplies any two integers. Write the code in HTML, CSS, and JavaScript"
Nice video. For some reason the snake game I got was not as good as the one you got. What I got was shorter, and had at least one syntax error. It's strange because, as far as I can tell, I did everything the same way, same prompt, same settings, etc. Anyone else have trouble?
An excellent video, but IMO it doesn't reflect the reality for most people. When you're coding, you usually don't spend an extensive amount of time crafting a 'perfect' prompt; otherwise, you might as well write the code yourself. Typically, prompts are more 'casual' and resemble brainstorming phrases.
One significant issue I've observed with current LLM models (particularly with GPT rather than Llama) is their tendency to prioritize your prompt over providing the 'correct' answer. If your prompt is misleading or partially incorrect, the model often generates a response that's also flawed, akin to a 'political correctness mistake.' Rarely do the models suggest, 'I think you're mistaken; here's a better way to do it.'
Conducting a more scientific test on this aspect could be quite interesting.
Yes please, a Full tutorial on how to get it installed on a gaming laptop would be epic! Thank you!
Wow, now let's bankrupt those AAA game companies 😂
Nobody is going to mention how he tuned parameters in Llama but didn't even optimize ChatGPT-4 with plugins.
This is laughable. All independent sources put everything behind ChatGPT-4, and Llama behind ChatGPT-3.5. Nice try though!
LLaMA is trash.
Not even close.
This just sounds like an overfitting test. Those challenges are very likely in the training sets of those models.
Hi! Did you see that in the example where ChatGPT "failed", an undefined situation was checked? The function all_equal should return whether all items in the list are equal. But then it was checked with an empty list, "all_equal([])", and was expected to return "True". However, the question did not define what should happen when the function is called with an empty list. Why should it return "True"? Are all items equal if there are no items in the list? I.e., are all the items in an empty list equal? 😉
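For what it's worth, Python's own conventions side with "True" here: `all()` over an empty iterable is vacuously true, because no element violates the condition. A minimal sketch (this implementation of `all_equal` is my guess at the intended function, not the one from the video):

```python
def all_equal(items):
    """Return True if every item in the list equals the first one.

    For an empty list this is vacuously true: the generator below
    yields nothing, and all() over an empty iterable returns True.
    """
    return all(x == items[0] for x in items)

print(all_equal([]))         # True  (vacuously: nothing to compare)
print(all_equal([3, 3, 3]))  # True
print(all_equal([3, 4]))     # False
```

Note that `items[0]` is never evaluated for an empty list, since the generator produces no elements, so there is no IndexError.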
How about giving it much harder problems to solve? These are cookie-cutter problems.
Ask it to generate test data for radar software, then tell it to apply a Kalman filter to improve the radar's predictions.
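Not a real radar stack, but a minimal sketch of what that prompt is asking for: simulated noisy range returns plus a 1-D Kalman filter. The motion model, noise variances, and function names here are my own illustrative choices.

```python
import random

def kalman_1d(measurements, q=0.01, r=4.0):
    """Minimal 1-D Kalman filter with a constant-position model.

    q: process-noise variance, r: measurement-noise variance.
    Returns the list of filtered state estimates.
    """
    x, p = measurements[0], 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                   # predict: uncertainty grows by process noise
        k = p / (p + r)          # Kalman gain: how much to trust the measurement
        x += k * (z - x)         # update: blend prediction with measurement
        p *= 1 - k               # shrink uncertainty after the update
        estimates.append(x)
    return estimates

# Simulated "radar" test data: a target at a constant 100 m range,
# with Gaussian noise on each return.
random.seed(0)
truth = 100.0
returns = [truth + random.gauss(0, 2.0) for _ in range(200)]
filtered = kalman_1d(returns)
```

Once the filter settles, the filtered track's squared error should sit well below that of the raw returns.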
This is why I subscribed to this channel. Connecting the viewer to the actual project
Yup, there it is
ChatGPT 3.5 Turbo?
"Better than GPT 4!!"
Meh.
I've got a calculator in my cell phone. It's better than GPT 4.
So what?
If you install this Llama model it will be free, but what machine will run it? You need 32 GB of RAM - does quantization help you run this model on 16 GB?
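Roughly, yes: weight memory scales with bits per parameter. A back-of-the-envelope sketch (weights only; activations, KV cache, and runtime overhead are ignored, so real usage runs somewhat higher):

```python
def model_memory_gb(params_billion, bits_per_weight):
    """Approximate weight memory in GB: parameters x bits / 8.

    One billion parameters at 8 bits each is ~1 GB, so the formula
    reduces to params_billion * bits_per_weight / 8.
    """
    return params_billion * bits_per_weight / 8

print(model_memory_gb(34, 16))  # fp16 34B:  68.0 GB - needs a big machine
print(model_memory_gb(34, 4))   # 4-bit 34B: 17.0 GB - just over 16 GB
print(model_memory_gb(13, 4))   # 4-bit 13B:  6.5 GB - fits comfortably
```

So a 4-bit quantized 34B sits right at the edge of a 16 GB machine, while a quantized 13B fits easily.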
Thing is, the corpora are large; all these examples are intentionally part of the training data. The trick is not to ask for examples and snake games, but to ask for real code, because I don't think you build snake games as real use cases.
+1 on the Code Llama installation video.
What specs do you need to run the 34B parameter version?
For some reason I don't get the code you got. I've used all the same settings and prompts, and even reinstalled Oobabooga from scratch. I've also tried the 32g version, which is supposed to be more accurate. I've got a few versions running too, though none of them work as expected. I was also impressed by the communication while debugging: the AI suggested, for example, adding some print instructions to get more information, and then tried making fixes based on my feedback.
Yes plz
No
It did as well on those benchmarks (based on the data they're sharing), but that's not the version they released.
Guys, any good tutorials on how to install this 34B code version and run it on CPU on Windows or Linux?