I’m hopeful, but I’ll get excited once it’s in actual builds and has been benchmarked.
Yeah, for sure. I do think it’s only a matter of time before people figure out a new substrate. It’s really just a matter of allocating time and resources to the task, and that’s where state-level planning comes in.
Probably yet another overblown headline.
Does anyone have access to the full text of the paper?
https://doi.org/10.1126/science.adv7434
Abstract
Large-scale generative artificial intelligence (AI) is facing a severe computing power shortage. Although photonic computing achieves excellence in decision tasks, its application in generative tasks remains formidable because of limited integration scale, time-consuming dimension conversions, and ground-truth-dependent training algorithms. We produced an all-optical chip for large-scale intelligent vision generation, named LightGen. By integrating millions of photonic neurons on a chip, varying network dimension through proposed optical latent space, and Bayes-based training algorithms, LightGen experimentally implemented high-resolution semantic image generation, denoising, style transfer, three-dimensional generation, and manipulation. Its measured end-to-end computing speed and energy efficiency were each more than two orders of magnitude greater than those of state-of-the-art electronic chips, paving the way for acceleration of large visual generative models.
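For anyone unfamiliar with why photonics is attractive here: the usual selling point of photonic accelerators is that a passive optical element performs a matrix-vector multiply essentially "for free" as light propagates through it, with photodetection supplying a built-in nonlinearity. The sketch below is a toy NumPy simulation of that general idea only, not the paper's LightGen architecture or its Bayes-based training (I don't have the full text either).

```python
import numpy as np

# Toy illustration of a generic coherent photonic layer (NOT the paper's method):
# as light traverses a fixed passive element with complex transmission W,
# interference computes W @ x; photodetectors then measure intensity |W @ x|^2,
# which acts as a nonlinearity. The multiply-accumulates cost no switching energy.

rng = np.random.default_rng(0)

def photonic_layer(x, W):
    """Simulate one optical layer: complex input field in, detected intensity out."""
    field_out = W @ x              # propagation/interference = matrix-vector multiply
    return np.abs(field_out) ** 2  # intensity detection (squared magnitude)

n_in, n_out = 64, 64
# Hypothetical fixed complex transmission matrix of the optical element
W = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(n_in)
x = rng.normal(size=n_in) + 1j * rng.normal(size=n_in)  # encoded input field

y = photonic_layer(x, W)
print(y.shape)  # (64,) detected intensities
```

The claimed two-orders-of-magnitude speed and efficiency gains presumably come from doing exactly this kind of linear algebra at the speed of light, with energy spent mainly on modulation and detection rather than on the multiplications themselves.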
