Was looking through my office window at the data closet and (due to angle, objects, field of view) could only see one server light cluster out of the 6 full racks. And thought it would be nice to scale everything down to 2U. Then day-dreamed about a future where a warehouse data center was reduced to a single hypercube sitting alone in the vast darkness.
Isn't the main limiting factor signal integrity? Like, we could do a CPU the size of a room now, but it's pointless as the stuff at one end wouldn't even be able to talk to the stuff in the middle, since the signal just gets fucked up on the way?
IIRC, light speed delay (or technically, electricity speed delay) is also a factor, but I can't remember how much of a factor.
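Rough back-of-envelope sketch to put a number on it, assuming a ~10 m "room-sized" chip span, a 3 GHz clock, and signals moving at roughly 0.6c on copper (all of these numbers are just illustrative assumptions):

    # Back-of-envelope only -- assumed numbers, not measurements.
    C = 3.0e8                 # speed of light in vacuum, m/s
    signal_speed = 0.6 * C    # assumed propagation speed on copper traces (~0.6c)

    span_m = 10.0             # assumed end-to-end span of a room-sized chip, metres
    clock_hz = 3.0e9          # assumed 3 GHz clock

    one_way_s = span_m / signal_speed   # one-way propagation delay
    cycles = one_way_s * clock_hz       # that delay expressed in clock cycles

    print(f"one-way delay: {one_way_s * 1e9:.1f} ns "
          f"(~{cycles:.0f} clock cycles at {clock_hz / 1e9:.0f} GHz)")

With those assumptions you get roughly 56 ns one way, i.e. on the order of a couple hundred clock cycles just to cross the chip, before doing any actual work.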
Signal integrity will probably be fine, you can always go with optical signalling for the long routes. What would be more of an issue is absurd complexity, latency from one end to the other, that kind of stuff. At some point, just breaking it down into a lot of semi-autonomous nodes in a cluster makes more sense. We kind of already started this with multi-core CPUs (and GPUs are essentially a lot of pretty dumb cores). The biggest CPUs today all have a lot of cores, for a reason.