Next-Gen CPUs/GPUs have a HUGE problem!

High Yield

2 years ago

201,736 views


Comments:

@scaffale13 - 30.12.2022 22:37

Solution: start programming applications and games the right way, so they make optimal use of the hardware technology we already have.

@okman9684 - 30.12.2022 23:25

If the processor is the office building for the employees, then RAM is the CPU's parking lot. Quoting crazy process-node numbers means nothing if you end up keeping more of the cache out in RAM.

@NorseGraphic - 31.12.2022 10:48

We're still captive to the centralizing principle instead of parallel processing. It might be easier to write code for centralized systems, but it will hit a wall sooner or later. And all these efforts by AMD, Apple, Intel and nVidia will grind to a halt when they can't expand further using the chiplet philosophy.
Edit: On another note, when you look at Carbon, you'll realize it's a semiconductor. So by switching from Silicon to Carbon, you could reduce the volume of the chip with the same number of atoms.

@B5152g - 31.12.2022 14:18

I think the worst part is the current tier system: a lot of the current CPUs shouldn't exist, and only exist to create a false pricing ladder.

@arydant - 31.12.2022 18:39

Oh well, I guess innovation has come to a halt and progress will just stop here.

@rayraycthree5784 - 31.12.2022 20:17

Why can't the same transistors used in the ALUs, LUTs and control logic be used to build flip-flop cache memory?

@TropicalCoder - 01.01.2023 08:29

Hard drives were an increasing bottleneck because of their lack of speed. Then along came SSDs, and now we all have them. Back in the day they used magnetic-core memory for RAM. The advantage of that was that if a computer lost power, the CPU would pick up where it left off when power came back, without skipping a beat. There are many potential technologies in the labs; eventually one of them is going to come along and revolutionize memory.

@HikaruAkitsuki - 01.01.2023 16:54

If a tech company wants to sell a new product, it has to beat its own previous record. In the past they intentionally made each improvement small, just enough to beat the predecessor. But components are so powerful now that beating the latest generation is becoming nearly impossible. They've shrunk things so much that maybe the other choice is to make components bigger again, to provide room for further improvement. In the meantime, I just want to know when a tablet PC with 1 TB of RAM, 64 cores / 128 threads and a 1 PB NVMe drive becomes a common thing 🤔

@cube2fox - 01.01.2023 20:09

I think you overestimate the cost reduction from using older manufacturing nodes for SRAM. In the past, the massive reduction in chip cost (for the same performance) has come nearly exclusively through scaling (shrinking microstructures), not through using mature manufacturing nodes!
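A rough sketch of that cost argument, using made-up wafer prices and SRAM cell areas purely for illustration (none of these numbers come from the video or from any foundry):

```python
# Cost per bit ~ wafer cost / bits per wafer, ignoring yield, periphery,
# packaging and design cost. All numbers are illustrative placeholders.
def cost_per_mbit(wafer_cost_usd, cell_area_um2, usable_area_mm2=60_000):
    cells_per_mm2 = 1e6 / cell_area_um2              # 1 mm^2 = 1e6 um^2
    mbits_per_wafer = cells_per_mm2 * usable_area_mm2 / 1e6
    return wafer_cost_usd / mbits_per_wafer

old = cost_per_mbit(wafer_cost_usd=3_000,  cell_area_um2=0.30)   # mature node
new = cost_per_mbit(wafer_cost_usd=10_000, cell_area_um2=0.03)   # leading edge
print(f"mature node: ${old:.4f}/Mbit, new node: ${new:.4f}/Mbit")
# With a 10x denser cell, even a ~3x pricier wafer ends up ~3x cheaper per bit --
# the commenter's point that shrinking, not mature nodes, drove cost down.
```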

@hans-joachimbierwirth4727 - 02.01.2023 09:20

That's a whole lot of bullshit.

@ElectricityTaster - 04.01.2023 18:18

It all comes down to heat management.

@zazuradia - 05.01.2023 20:01

I may sound rude, but I call bullshit on this kind of video and its urge to alarm the population.
Since the dawn of computers they have been facing these kinds of problems, and there has always been a solution. I remember one of the most popular problems was that a CPU couldn't pass 4 GHz due to physical limitations, or not without lowering its lifespan. What did they do? Simple: add more cores to the CPU! The same happened with USB during the USB2-to-USB3 transition: USB3 has more wires. The solution is always simple: upscale in a different direction. Upscaling in a different direction whenever you hit a dead end has always been the way to evolve when computers are what we are talking about.
Now, about your video: who is it targeted at? Processor manufacturers? Yeah, like they don't know their limitations in the long run (and we are talking decades ahead). Us, plain viewers? What can we do? Nothing! "Oh wooow! There's a huge problem ahead and we can't overcome it! Computers are doomed!" Yeah, sure. Or are you just trying to create alarmism for the sake of nothing? Or for money? Good old fear monetizes well, right? You sound like a kid who just realized something new and wants to do something about it, but hell, you're probably older than me. We've been around computers our whole lives. We've seen computers grow from nearly zero. We know where the save-button icon comes from. Come on!

@dorion9111 - 06.01.2023 21:35

So we get dies that are 10-20% larger, gaining huge performance at roughly 10% higher cost?

@nagi603 - 07.01.2023 15:41

I wonder how difficult it would be to do the 3D V-Cache approach, but with the stacked dies coming from different nodes.

@earll.hinsonjr.6736 - 17.01.2023 06:35

AMD wins the chiplet design challenge.

@bikemmm6167 - 19.01.2023 18:00

Smartphones don't have the space for ever-increasing CPU sizes.

@sfalpha - 21.01.2023 17:56

SRAM could scale if it weren't getting larger.

I heard an AMD engineer talk about this somewhere, about 3D V-Cache and why they did it. The thing is, denser SRAM means a larger cache in the same amount of die area, which means more signal lines and longer runs relative to the power they can drive. That's why AMD is doing 3D-stacked cache: to shorten the signal lines and enable a larger cache on the chip. Chiplets may help with yield and scaling, but they introduce more latency to the SRAM and require more power.
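A back-of-envelope way to see that wire-length argument; this is only a trend sketch under idealized assumptions (a square array, a worst-case wire spanning it, no banking or H-trees), not real design data:

```python
import math

# Toy model of the comment's argument: at fixed bit density, a bigger cache
# means a bigger array, and the worst-case signal run grows with the array's
# side length. Stacking over more layers shrinks the planar footprint.
def wire_scaling(capacity_scale, layers=1):
    planar_area = capacity_scale / layers     # relative to the baseline cache
    length = math.sqrt(planar_area)           # worst-case wire run ~ array side
    return {
        "wire_length": length,
        "delay_unrepeated": length ** 2,      # RC delay of a plain wire ~ length^2
        "delay_with_repeaters": length,       # ~ linear once repeaters are inserted
    }

print(wire_scaling(1.0))              # baseline cache
print(wire_scaling(4.0))              # 4x the cache on one die: 2x longer wires
print(wire_scaling(4.0, layers=2))    # same 4x cache stacked over 2 layers
```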

@johnadams6249 - 16.02.2023 23:53

Does Apple’s M1 Ultra not count as a chiplet design?

@MaxIronsThird - 28.02.2023 17:34

HBM is going to make a comeback

@tsclly2377 - 10.03.2023 13:23

Heat with chiplets will require lower speeds. That's not such a problem for L3 and predictive loading, but it's critical for internal stacks, registers and L2, in that order.

@nielsdaemen - 20.03.2023 12:56

Sounds like 3D cache is the solution

@simplemechanics246 - 22.03.2023 15:41

That industry never had time for research, because it has been fighting all these decades: low prices, bigger production, market crashes... They should have stopped that madness a long time ago, raised prices and done proper product development.

@stefanbanev - 03.04.2023 19:06

CPU<->RAM latency and throughput these days are already remarkably low and high, respectively; relative improvements of 5%, 10% or even 20% are still possible, but we are asymptotically getting very close to saturation...
The next improvement is the massively multi-core frontier, and after that, quantum computing coming to gadgets...

@karlemiltlbll4305 - 10.04.2023 14:23

The explanation of cache is wrong: it's not the most important data that is stored in the cache, it's the most recently and most frequently used data.
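For readers who want to see the recency half of that policy as code, here is a minimal least-recently-used (LRU) cache sketch in Python; real hardware caches are set-associative and only approximate LRU, so this is a model of the idea, not of any actual CPU:

```python
from collections import OrderedDict

# Minimal LRU cache: a toy model of "the most recently used data stays in
# cache". On overflow, the least recently used entry is evicted.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                      # cache miss
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")           # "a" is now the most recently used
cache.put("c", 3)        # evicts "b", not "a"
print(list(cache.data))  # ['a', 'c']
```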

@yujaeha - 13.04.2023 03:48

Amazing info. Thanks 🙏

@samghost13 - 18.04.2023 23:48

A big light switched ON in my head. Thank you very much, sir!

@blueforce_aero_yt - 15.05.2023 19:12

just put it on top

@jondonnelly3 - 19.05.2023 20:38

If SRAM is not an option, just use Shimano or Campagnolo.

@rnssr71 - 01.06.2023 07:42

In the last day or so I was reading old articles about the Pentium Pro and Pentium II, where the entire L2 cache was a separate die on the package. So we are just repeating history, to a degree.

@Capeau - 17.06.2023 02:55

Intel doesn't use chiplets but tiles. They are not the same thing.
Tiles are actually better, but require a bit more effort.

@yuricopperhooves - 08.07.2023 13:32

Oh no, if this goes on, programmers will have to resort to drastic measures like optimizing their programs. Anyways.

@onion69420 - 10.07.2023 20:12

Components can only get so small and so fast; after that they are gonna need new magic.

@adriancoanda9227 - 15.08.2023 16:15

Lol, you could move the cache into the DRAM; since RAM is relatively cheap, it would not impact performance much. In the server world CPU cache is virtual; you can even assign cache from other machines if necessary. After all, 1 Tb fiber-optic connections are not alien to servers.

@MrDecessus - 17.08.2023 14:51

Chiplets don’t seem to have the performance output of single monolithic chips. Time will tell.

@Conservative_Indiana - 17.08.2023 22:26

Make the CPU bigger, let's say 4x, to future-proof it. The "Condor Galaxy" is showing us the future.

@mylittlepimo736 - 22.08.2023 08:39

How long until Apple is forced to go with a chiplet design, in your opinion?

@Analisede_Tudo - 01.09.2023 16:29

We are researching this, with storage-class memory and new emerging memories. To replace SRAM, maybe SOT-MRAM is a solution.

@johntupper1369 - 03.10.2023 00:58

Loving your content

@ababababaababbba - 13.10.2023 09:15

I can't read.

@Phil-D83 - 20.10.2023 06:29

They will figure it out

@Enkaptaton - 03.02.2024 19:52

So we should be satisfied with what we already have?
Do programmers finally have to make their code efficient?

@chibby0ne - 28.04.2024 22:55

This answers why chiplet design is becoming so popular lately. Thanks a lot for the well conveyed and duly researched video.

@xeschire706 - 22.05.2024 15:29

Easy answer to the SRAM cache debacle: switch to 1T-SRAM, a hybrid SRAM/DRAM embedded memory. It's an embedded version of PSRAM/PSDRAM (pseudo-static RAM / pseudo-static dynamic RAM) and the best of both worlds put into practice: performance comparable to SRAM with memory densities comparable to eDRAM. And since the cells pack roughly six times more densely into the same area, this type of memory, used as a cache, would offer more cache memory in a smaller area, while also having lower latency than a standard SRAM cache.

1T-SRAM-based caches are the way forward, especially when combined with vertical stacking and MCM.
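A very rough per-bit density sketch of that argument; the relative cell sizes below are illustrative assumptions, not process data, and they say nothing about latency:

```python
# Per-bit area in rough "transistor-equivalents": a 6T SRAM bit uses six
# transistors, a 1T-SRAM/eDRAM-style bit uses one transistor plus a capacitor.
# These ratios are assumptions for illustration only.
cell_area = {
    "6T SRAM": 6.0,
    "1T-SRAM": 1.5,   # assume the capacitor costs ~0.5 transistor-equivalents
}

area_budget = 600.0   # arbitrary fixed area to spend on cache bits

for name, area in cell_area.items():
    print(f"{name}: ~{area_budget / area:.0f} bits in the same footprint")
# Under these assumptions the 1T-style cell fits ~4x more bits into the same
# area, which is the density half of the comment's argument.
```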

@FrancisFjordCupola - 06.07.2024 14:14

SRAM no longer scaling is far from the death of SRAM. We have been getting closer and closer to the natural limits where further scaling of ICs becomes physically impossible. I wouldn't even consider it that bad: it means we finally have to get started on better-written software. Also, new process nodes... are not like those we had earlier. Back then manufacturers could tell you how much the components shrank; now they just introduce things that are supposedly "better".

@JorenVaes - 13.07.2024 10:19

I was told one of the reasons SRAM doesn't scale well anymore is that SRAM used to be laid out at much higher density rules than regular logic. SRAM usually has separate DRC rules in a process, where it makes use of all the fancy regular-pattern techniques and so on to get that higher density. This made the density you could get in SRAM much higher than in logic. When you scaled the process, you mostly scaled the minimum sizes of general structures, so by reusing the same 'hacks' from the last gen, you could scale the SRAM too.
But now the logic itself is also starting to use these 'physics hacks' to scale, with things like fixed metal-line patterns and cut masks for the lower-metal wiring to get higher density in newer nodes. So in a sense it is not that SRAM is no longer scaling; it is that, to get logic to scale, they are using the same tricks cache has used for a long time already. As a result, the 'tools' used to make high-density SRAM are no longer scaling, because the fundamentals are not changing.

@ronchum5178 - 08.10.2024 08:09

I think AMD would have emphatically beaten Nvidia in price-to-performance if the chiplet design of RDNA3 were truly more cost-effective. Now it seems they are going back to monolithic GPUs next generation, because the savings weren't enough to justify the added complexity of the chiplet design. They can't even drop the price of the 7900XTX that much, because it was so expensive to make. Maybe another generation.

@quentinlabbe6474 - 01.11.2024 16:52

The only thing that will replace SRAM is ReRAM.

@SureshSharma-eq1vz - 02.11.2024 12:36

If transistor size is still scaling down, then why isn't SRAM, which is made of transistors?

@milesyounghamilton - 18.12.2024 13:55

I would be interested in an update on this now that we have heard TSMC have been able to increase N2 SRAM density by 11%, widening the gap to Intel.
