Comments:
Solution: start programming applications and games the right way, optimizing how they use the hardware we already have.
While the processor is the office building for the workers, RAM is the CPU's parking lot, and quoting crazy process-node numbers means nothing if you end up keeping more and more of your cache out in RAM.
We're still captive to the centralizing principle instead of parallel processing. It might be easier to write code for centralized systems, but it'll hit a wall sooner or later. And all these efforts by AMD, Apple, Intel and Nvidia will grind to a halt when they can't expand further using the chiplet philosophy.
Edit: On another note, when you look at carbon, you'll realize it's a semiconductor. So by switching from silicon to carbon, you could reduce the volume of the chip while keeping the same number of atoms.
I think the worst part is the current tier system; a lot of the current CPUs shouldn't exist, and only exist to create a false pricing ladder.
Oh well, I guess innovation has come to a halt and progress will just stop here...
Why can't the same transistors used in the ALUs, LUTs and controller be used to build flip-flop cache memory?
Hard drives were an increasing bottleneck because of their lack of speed. Then along came SSDs; now we all have them. Back in the day they used magnetic-core memory for RAM. The advantage of that was that if a computer lost power, when power came back on the CPU would pick up where it left off without skipping a beat. There are many potential technologies in the labs, and eventually one of them is going to come along and revolutionize memory.
If a tech company wants to sell a new product, it has to beat its own record in the process. In the past they intentionally kept each improvement small, just enough to beat the predecessor. But components are so powerful now that beating the latest generation is becoming nearly impossible. They've shrunk things so much that maybe the other choice is to make components bigger again, so there's room for further improvement. In the meantime, I just want to know when a tablet PC with 1 TB of RAM, a 64-core/128-thread CPU and 1 PB of NVMe storage will become a common thing 🤔
I think you overestimate the cost reduction from using older manufacturing nodes for SRAM. In the past, the massive reduction in chip cost (for the same performance) has come almost exclusively from scaling (shrinking microstructures), not from using mature manufacturing nodes!
That's a whole lot of bullshit.
It all comes down to heat management.
I may sound rude, but I call bullshit on this kind of video and its urge to alarm the population.
Since the dawn of computers they have been facing these kinds of problems, and there has always been a solution. I remember one of the most popular problems was that a CPU couldn't pass 4 GHz due to physical limitations, or not without shortening its lifespan. What did they do? Simple: add more cores to the CPU! The same happened with USB during the USB2-to-USB3 transition: USB3 has more wires. The solution is always simple: scale up in a different direction. Scaling in a different direction whenever you hit a dead end has always been the way computers evolve.
Now, about your video: who is it targeted at? Processor manufacturers? Yeah, like they don't know their limitations in the long run (and we're talking decades ahead). Us, plain viewers? What can we do? Nothing! "Oh wow! There's a huge problem ahead and we can't overcome it! Computers are doomed!" Yeah, sure. Or are you just trying to create alarm for the sake of nothing? Or for money? Good old fear monetizes well, right? You sound like a kid who just realized something new and wants to do something about it, but hell, you're probably older than me. We've been around computers our whole lives. We've seen computers grow from nearly zero. We know where the save-button icon comes from. Come on!
So we get dies 10-20% larger and gain huge performance for roughly 10% higher cost.
I wonder how difficult it would be to do 3D V-Cache but with dies from different nodes.
AMD wins the chiplet design challenge.
Smartphones don't have the space for ever-increasing CPU sizes.
SRAM could keep scaling if it didn't keep getting larger.
I listened to an AMD engineer talk about this somewhere, about 3D V-Cache and why they built it. The thing is, denser SRAM means a larger cache in the same die area, which means more signal lines and longer runs relative to the power they can carry. That's why AMD does 3D-stacked cache: to shorten the signal lines and enable a larger cache on the chip. Chiplets may help with yield and scale, but they add latency to the SRAM and require more power.
Does Apple’s M1 Ultra not count as a chiplet design?
HBM is going to make a comeback.
Heat with chiplets will require lower speeds. That's not such a problem for L3 with predictive loading, but it's critical for internal stacks, registers and L2, in that order.
Sounds like 3D cache is the solution.
The industry hasn't had time for research because it has been fighting all these decades: low prices, bigger production, market crashes... It should have stopped that madness a long time ago, raised prices and done proper product development.
CPU<->RAM latency and throughput these days are remarkably low and high, respectively. Relative improvements of 5%, 10% or even 20% are still possible, but they are asymptotically getting very close to saturation...
The next improvements are on the massively multi-core frontier, and after that, quantum computing in our gadgets...
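A rough sketch of how you can glimpse that memory-latency wall yourself, in plain Python (standard library only; the array sizes are arbitrary illustrations, not a real benchmark). A pointer chase through a random cycle makes every load depend on the previous one, so the CPU can't overlap them; once the working set outgrows the caches, per-step time is dominated by memory latency. CPython's interpreter overhead blunts the effect; a compiled language shows it far more starkly.

```python
import random
import time

def ns_per_dependent_load(n: int) -> float:
    """Chase pointers through a random cycle of length n.

    Each lookup depends on the previous result, so the average step
    time approximates the latency of whichever level of the memory
    hierarchy currently holds the array.
    """
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b  # link the shuffled indices into one big cycle
    i = 0
    start = time.perf_counter()
    for _ in range(n):
        i = nxt[i]
    return (time.perf_counter() - start) / n * 1e9

# Small arrays fit in cache; larger ones spill toward DRAM and slow down.
for n in (1 << 12, 1 << 16, 1 << 20):
    print(f"{n:>8} elements: {ns_per_dependent_load(n):6.1f} ns/load")
```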
The explanation of cache is wrong: it's not the most important data that is stored in the cache, it's the most recently and most frequently used data.
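To illustrate that idea, here's a toy LRU (least-recently-used) sketch in Python. This is not how any real CPU implements its cache (hardware typically uses cheap approximations of LRU), just the recency principle the comment describes:

```python
from collections import OrderedDict

class ToyLRUCache:
    """Keeps the most recently used entries, not the 'most important' ones."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                       # miss: would fall through to RAM
        self.entries.move_to_end(key)         # touch: now most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used

cache = ToyLRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")              # "a" is now the most recently used
cache.put("c", 3)           # evicts "b", the least recently used
print(list(cache.entries))  # ['a', 'c']
```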
Amazing info. Thanks 🙏
A big light switched on in my head. Thank you very much, sir!
Just put it on top.
If SRAM is not an option, just use Shimano or Campagnolo.
In the last day or so I was reading old articles about the Pentium Pro and Pentium II, where the entire L2 cache was a separate on-package die. So we are just repeating history, to a degree.
Intel doesn't use chiplets but tiles. They are not the same thing.
Tiles are actually better but require a bit more effort.
Oh no, if this goes on, programmers will have to resort to drastic measures like optimizing their programs. Anyway.
Components can only get so small and so fast; beyond that, they're gonna need new magic.
Lol, why not move the cache into DRAM? Since RAM is relatively cheap, it wouldn't impact performance. In the server world, CPU cache can be virtual; you can even assign cache from other machines if necessary. After all, 1 Tb fiber-optic connections are not alien to servers.
Chiplets don't seem to have the performance output of single monolithic chips. Time will tell.
Make the CPU bigger. Let's say 4x, to future-proof. The "Condor Galaxy" is showing us the future.
How long until Apple is forced to go with a chiplet design, in your opinion?
We are researching this with storage-class memory and new emerging memories. To replace SRAM, maybe SOT-MRAM is a solution.
Loving your content.
I can't read.
They will figure it out.
So we should be satisfied with what we already have?
Do programmers finally have to make their code efficient?
This answers why chiplet designs are becoming so popular lately. Thanks a lot for the well-conveyed and duly researched video.
Easy answer to the SRAM cache debacle: switch to 1T-SRAM, a hybrid SRAM/DRAM embedded memory. It's an embedded version of PSRAM or PSDRAM (pseudo-static RAM, or pseudo-static dynamic RAM) and the embodiment of the best of both worlds put into practice: performance comparable to SRAM with memory densities comparable to eDRAM. And since its transistors pack about six times more densely into the same area, this type of memory, used as a cache, would offer more cache memory in a smaller area while also having lower latency than a standard SRAM cache.
1T-SRAM-based caches are the way to go from here on, especially when combined with vertical stacking and MCMs.
SRAM no longer scaling is far from the death of SRAM. We have been getting closer and closer to the natural limits where further scaling of ICs becomes physically impossible, and I wouldn't even consider it that bad: it means we finally have to get started on better-written software. Also, new process nodes are not like those we had earlier. Back then, manufacturers could tell you how much the components shrank; now they just introduce things that are supposedly "better".
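For a taste of what "better-written software" means in practice, here's a minimal sketch (plain Python; the matrix size N is an arbitrary illustration). It sums the same flat buffer twice: the row-major walk touches consecutive addresses and rides the cache, while the strided column-major walk keeps missing. In CPython the gap is muted by interpreter overhead; in compiled code it is dramatic.

```python
import array
import time

N = 2048
# One flat buffer standing in for an N x N matrix stored row-major.
buf = array.array("d", [0.0] * (N * N))

def sum_row_major() -> float:
    total = 0.0
    for i in range(N):
        row = i * N
        for j in range(N):
            total += buf[row + j]    # consecutive addresses: cache friendly
    return total

def sum_col_major() -> float:
    total = 0.0
    for j in range(N):
        for i in range(N):
            total += buf[i * N + j]  # N-element stride: cache hostile
    return total

for fn in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")
```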
I was told one of the reasons SRAM doesn't scale well anymore is that SRAM used to use much higher-density rules than regular logic. SRAM usually has separate DRC rules in a process, where it made use of all the fancy regular-pattern techniques and so on to get that higher density. This made the density achievable in SRAM much higher than in logic. When you scaled the process, you mostly scaled the minimum sizes of general structures, so by reusing the same "hacks" from the last generation, you could scale the SRAM too.
But now the logic itself is also starting to use these "physics hacks" to scale, with things like fixed metal-line patterns plus cut masks for the low-metal wiring to reach higher density on newer nodes. So in a sense, it's not that SRAM is no longer scaling; it's that, to get logic to scale, they are using the same tricks cache has used for a long time already. As a result, the "tools" used to make high-density SRAM no longer scale, because the fundamentals are not changing.
I think AMD would have emphatically beaten Nvidia in price-to-performance if the chiplet design of RDNA3 were truly more cost-effective. Now it seems they are going back to monolithic GPUs next generation, because the savings weren't enough to justify the added complexity of the chiplet design. They can't even drop the price of the 7900 XTX that much, because it was so expensive to make. Maybe another generation.
The only thing that will replace SRAM is ReRAM.
If transistor sizes are still scaling down, why not SRAM, which is made of transistors?
I would be interested in an update on this now that we have heard TSMC have been able to increase N2 SRAM density by 11%, widening the gap to Intel.