Comments:
You become a billionaire, then some government agent makes you "fly" off a building randomly :)
Nowadays it's dangerous to be a billionaire. Just survive and be an average "asset".
AI = Artificial Idiot
Very good in-depth study
I think I need the Torque Test Channel for these.
Yeah, the marketing was more than over the top in this one, but this is Nvidia for you. The L40S is a low-end device made to get around regulations limiting capabilities sold to China. Nevertheless, $300K for a few low-end GPUs is unreal; I can't see the price gouging continuing for much longer now that competition is rolling in.
Mhm. Irrelevant for me for a number of reasons, but have fun.
If you are considering upgrading your models from an L40 to an L40S, it's important to note that the tensor cores exhibit similar speeds on both cards at INT4, so you don't gain any concrete advantage there. Additionally, if you currently use or own a few RTX 6000 GPUs, upgrading to the L40S may not yield significant performance improvements; the differences are marginal, except in certain specific aspects. However, if you are making a new investment, opting for the L40S could be the more favorable choice. It is advisable to carefully plan the amortization of this investment over an 18-month period.
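The 18-month amortization the commenter suggests can be sketched with simple straight-line math; all figures below are hypothetical placeholders, not actual L40S pricing:

```python
# Straight-line amortization of a GPU purchase over 18 months.
# All prices here are hypothetical examples, not real L40S quotes.

def monthly_amortization(purchase_price: float, months: int = 18,
                         residual_value: float = 0.0) -> float:
    """Spread (purchase price - expected resale value) evenly over the term."""
    return (purchase_price - residual_value) / months

# Example: a hypothetical $10,000 card expected to resell for $3,000.
print(round(monthly_amortization(10_000, months=18, residual_value=3_000), 2))
# → 388.89 per month
```

The point of running the numbers is that whether the L40S beats a cheaper card depends on the monthly cost you can actually absorb, not the sticker price alone.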
Need more RAM.
Also, just use 4090s: 10x cheaper for the same performance. But less RAM :'(
The prohibitive prices we are seeing here are why I gave up on VRAM products for inferencing.
APUs with much cheaper but slower DDR5 memory are the way to go. The Apple M1 Ultra showed that this is possible.
We just need Intel, AMD, and Qualcomm to make this kind of product with a more open and repairable ecosystem.
I won't say no to affordable VRAM products, but at the end of the day they will only supplement the APU.
Soooooo is there a plan to turn them on and compute something with them for a different video? Or did you just get them for footage, haha? Sorry for the criticism, this was a good video topic, but c'mon -__-
Unique coverage, fabulous 👍
Thank you STH
Looks like a nice HEDT workstation to play games with.
Love the videos, love the content. Following data center tech has always been fascinating for me, even though it's been 10 years since I've had to set foot inside one.
I never thought I'd have to say this, considering I'm known for being an incredibly fast talker, but this was the first video I've ever had to slow down to 75% in order to not feel burned out.
Wow, so they found a way to ship more consumer GPUs into the professional market and have fully abandoned the consumer market. Expect 4090 prices to go wild, and the card will probably go away completely. How long before they start doing it with the AD103 die too? Fuck Nvidia for this.
One big difference between the L40 and L40S that you conveniently "forgot" to mention is that the L40S is 1 gram heavier than the L40. How much did Big GPU pay you to leave this detail out of the video? This is exactly why I always do my own research.
What's up with Supermicro's dual-socket H13 boards lacking PCIe slots? Are they relying on "ribbons" to optimize? Asking for a Broadwell friend
Just procured a bunch of L40S GPUs for our inference tasks, and they rip. We also have a bunch of H100s for the initial training of our LLMs. You touched on it during your talk, but power and cooling requirements are the biggest hurdle when it comes to the H100s/A100s. The L40S GPUs are a lot more reasonable, and the price and lead times are nice bonuses.
It's a bit too much toxic marketing for me. You can't build multi-GPU servers with cheap gaming cards anymore because Nvidia killed 2-slot 3090s/4090s to prevent exactly this. Their coolers are so heavy that they rip the PCIe slot out under their own weight, and so hilariously large that they don't fit in even normal PC cases.
The L40S is identical to the RTX 4090 except for the extra 24GB of memory chips, which cost $100 at retail, yet they increase the price by an extra $9,000.
And NVLink on the A100/H100 is kept locked to their proprietary CUDA language, so you can't use it with portable OpenCL code.
But can it play Crysis? 😂
As someone who works on these things, the only billionaire these GPUs will be making is Jensen Huang
"Look at the baby future president(s)!"
Chonky models will manage countries better than officials do.
About to build a 40-node cluster with one of these in each machine. They're really remarkably flexible cards.
As someone who was disappointed by switching from a dual Titan RTX setup to a dual 3090 setup, I just want to add that an important metric Nvidia buries in the spec sheet, and one that matters for DL performance, is FP16 with FP32 accumulate, which Nvidia half-rates on the GeForce cards to stop them from beating the pro cards.
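The half-rating described above can be illustrated with a toy calculation; the peak TFLOPS figure here is an invented placeholder, not a spec-sheet number:

```python
# Toy illustration of FP16-with-FP32-accumulate rate limiting.
# The peak TFLOPS value is a made-up placeholder; check Nvidia's
# spec sheets for real per-card FP16 accumulate figures.

def effective_tflops(peak_fp16_tflops: float, accumulate_rate: float) -> float:
    """Apply the FP32-accumulate rate multiplier to peak FP16 throughput."""
    return peak_fp16_tflops * accumulate_rate

pro_card = effective_tflops(100.0, 1.0)  # full rate on pro cards
geforce = effective_tflops(100.0, 0.5)   # half rate on GeForce
print(pro_card, geforce)
# → 100.0 50.0
```

This is why two cards with identical headline FP16 numbers can train at very different speeds once FP32 accumulation is enabled.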
For me, memory matters. The H100 is 80GB, and the RTX 6000 Ada, RTX 5880 Ada, L40S, and L40 are all 48GB. The RTX 4090 is just 24GB, but so is the RTX 3090 Ti, which is much cheaper.
There is a water-cooled RTX 6000 Ada, but is there a water-cooled L40S?
Sadly, it's still not a good solution. Intel has nothing for workstations, and AMD is missing the software. At this point... I am hoping for Qualcomm to sell the Cloud AI100 Ultra with 128GB of LPDDR5X for workstations
What about Plex transcoding? /s
By buying these GPUs, I know someone who is definitely going to become a billionaire. But the more you buy, the more you save.
Hi Patrick, thanks for the specs table. Makes it much easier to visualise. Nvidia publishes specs in a different format for different GPUs, so it's tedious to compare, especially when the naming is different. How does the A800 stack up to these?
This is where your 4090 shortage stems from....
The Supermicro estore link doesn't work…
This is proof we are still in the stone age as far as AI goes - real AI doesn't hit the SME/SMB market for 5 years
I love how relatable you are in your vlog 👍
I know it's off topic, but yo P... that's a nice watch
Nice! Thanks Patrick
But can it run Crysis?
I can't wait to see videos in the coming years of people using these once they're like $500 on the used market. 😂😂😂 LTT will be all over that.
Is this shot at the new studio or the old one? Can't quite tell, idk
The sharesale Supermicro store link is broken, FYI
Can't wait to buy an H100 for $100 in 8-10 years
What are the closest AMD GPUs to the H100 and L40S?
Thanks for the info, Patrick. Great learning about things I'll be able to afford in 20 years when they hit eBay.
Isn't vGPU/SR-IOV all software sauce? I really dislike Nvidia price-gating features that should be everywhere
Crazy amount of GPU computing power in one box.
Since I work in research, the L40S just isn't a good choice. We need the VRAM. But for "smaller" or quantized LLM inference, they can be fine.
Unfortunately, my data centers are not set up for high-density servers, so currently we are using 4x setups.
The more you buy the more you save!!
Wow