Will we see people being paid to host small single-GPU servers in their homes? I guess that would require redesigning the training system, since the data transfer would be much slower and higher-latency. Maybe that's not even compatible with LLM training?
128 to 210 WEEKS for power transformers. You can fab more GPUs in 18 months, but the electrical infrastructure to run them takes 3-4 years, and there are maybe a handful of manufacturers. Feels like the stories from the telecom fiber overbuild of 2000, except this time the supply chain can't even overbuild if it wanted to.
Don't forget sugar cane: https://aisupplychain.vercel.app/
I miss the good ol' pre-LLM days so much