It feels like things are so powerful and complex that failure rates of all these devices is much higher now.
You're just short of needing a personal-sized nuclear reactor to power these damn things, so the logic follows that the failure rate is going to climb.
I don’t have any stats to back this up, but I wouldn’t be surprised if failure rates were higher back in the 90s and 2000s.
We have much more sophisticated validation technology now, plus the benefit of industry, process, and operational maturity.
Would be interesting to actually analyze the real-world dynamics around this.
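Purely as a sketch of how that comparison could be framed (the function and every number below are hypothetical placeholders, not real return data), an annualized failure rate per device-year is one way to put the two eras on the same footing:

```python
# Hypothetical sketch: annualized failure rate (AFR) from return data.
# All figures are made-up placeholders for illustration only.

def annualized_failure_rate(failures: int, units_shipped: int, avg_months_in_service: float) -> float:
    """AFR = failures per device-year of service, as a percentage."""
    device_years = units_shipped * (avg_months_in_service / 12.0)
    return 100.0 * failures / device_years

# e.g. 2,000 RMAs out of 100,000 cards averaging 18 months in the field
print(f"{annualized_failure_rate(2_000, 100_000, 18):.2f}% AFR")
```

You'd need actual RMA counts and install-base estimates from both eras to say anything real, which is exactly the data nobody publishes.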
Not very many people had a dedicated GPU in the 90s and 2000s. And there's no way the failure rate was higher; not even LimeWire could melt down the family PC back then. It sure gave it the college try, but it was usually fixable. The biggest failures, bar none, were hard drives or media drives.
Dedicated GPUs were pretty common in the 2000s; they were required for most games, unlike the 90s, which were an unstandardized wild west. The failure rate had to be higher: I know I had three cards die with less than two years of use on each in the 2000s. Cases back then had terrible airflow, and graphics demands jumped quickly.
We all did; they used to cost like 60 bucks.