Luminous Computing, which is developing a light-based AI accelerator chip, raises $105M

According to some experts, the growth in the compute power necessary to develop future AI systems might run up against a wall with mainstream chip technologies. While startups like Cerebras claim to be developing hardware that can handle next-generation systems efficiently, at least from a power consumption perspective, researchers worry that highly sophisticated AI systems — which are already expensive to train and deploy — might become the sole domain of corporations and governments with the necessary resources.

One solution that’s been proposed is photonic chips, which use light to send signals as opposed to the electricity that conventional processors use. Photonic chips could, in theory, lead to higher performance because light produces less heat than electricity, can travel faster, and is less susceptible to changes in temperature and electromagnetic fields.

Lightmatter, LightOn, Celestial AI, Intel, and Japan-based NTT are among the companies developing photonics technologies. So is Luminous Computing, which today announced that it raised $105 million in a series A round with participation from investors including Microsoft cofounder Bill Gates, Gigafund, 8090 Partners, Neo, Third Kind Venture Capital, Alumni Ventures Group, Strawberry Creek Ventures, Horsley Bridge, and Modern Venture Partners. (Post-money, Luminous’ valuation stands at between $200 million and $300 million.)

“It’s an incredible time to be a part of the AI industry,” Luminous CEO and cofounder Marcus Gomez said in a statement. “AI has become superhuman. We can interact with computers in natural language and ask them to write a piece of code or even an essay, and the output will be better than most humans could provide. What’s frustrating is that we have the software to address monumental, revolutionary problems that humans can’t even begin to solve. We just don’t have the hardware that can run those algorithms.”

Light-based chips

Luminous was founded in 2018 by Michael Gao, CEO Marcus Gomez, and Mitchell Nahmias. Nahmias’ research at Princeton became the cornerstone of Luminous’ hardware. Gomez, who previously founded a fashion tech startup called Swan, was formerly a research scientist at Tinder and spent time working on machine intelligence and research software at Google. As for Gao, he’s the CEO at AlphaSheets, a data analytics platform aimed at enterprise customers.

“Over the past decade, the demand for AI compute has increased by a factor of nearly 10,000. Ten years ago, the biggest models were 10 million parameters and could be trained in 1 to 2 hours on a single GPU; today the largest models are over 10 trillion parameters and can take up to a year to train across tens of thousands of machines,” Gomez told VentureBeat via email. “Unfortunately, we’ve come to an impasse: hardware simply hasn’t kept up. Existing big AI models today are notoriously difficult and expensive to train, as the underlying hardware just isn’t fast enough. Training big AI models is mostly relegated to [big tech companies], as most companies can’t even afford to rent the necessary hardware. Even worse, even for [big tech companies], hardware growth is slowing so much that increasing model size much further is nearing intractable. Stagnation in AI progress is incoming rapidly.”

In traditional hardware, transistors control the flow of electrons through a semiconductor, performing operations by reducing information to a series of ones and zeros. By contrast, Luminous hardware calculates by splitting and mixing beams of light within nanometer-wide channels. Photonic chips’ calculations are analog as opposed to digital, meaning that they’re inherently less precise. But photonic chips can perform these calculations — including the calculations involved in training AI models — quickly and in parallel, moving data and multiplying large arrays of numbers nearly instantaneously.
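To see why analog imprecision matters less for AI than it might elsewhere, consider the multiply-accumulate (MAC) operation that dominates neural network workloads. The sketch below is purely illustrative — it is not Luminous’ design — and models analog error as additive Gaussian noise, an assumption for demonstration only:

```python
import random

def analog_mac(weights, inputs, noise_std=0.01):
    """Multiply-accumulate (the core neural-network operation),
    with Gaussian noise standing in for the imprecision of an
    analog computation such as one performed with light."""
    exact = sum(w * x for w, x in zip(weights, inputs))
    return exact + random.gauss(0, noise_std)

# A digital chip computes the exact dot product; an analog one
# returns a slightly perturbed value, which neural networks
# typically tolerate well.
weights = [0.5, -1.2, 0.3]
inputs = [1.0, 0.5, 2.0]
print(analog_mac(weights, inputs, noise_std=0.0))  # exact result: 0.5
```

The appeal of photonics is that many such MACs can, in principle, happen simultaneously as light propagates, rather than sequentially through transistor switching.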

“Using … proprietary silicon photonics technology, [we’ve] designed a novel computer architecture that can scale drastically more efficiently, allowing users to train models that are 100 times to 1,000 times larger in tractable amounts of time, at substantially reduced costs, and with a drastically simpler programming model,” Gomez said. “In other words, [we’ve] designed a computer that makes training AI algorithms faster, cheaper, and easier.”

According to Gomez, Luminous aims to make a single computer chip as powerful as 3,000 boards equipped with Google’s third-generation tensor processing units (TPUs). (TPUs are custom chips developed specifically to accelerate AI development, powering products like Google Search, Google Photos, Google Translate, Google Assistant, Gmail, and Google Cloud AI APIs.) For reference, it took over 4,000 third-generation TPUs to train the language model used in the 2021 MLPerf machine learning hardware performance benchmark.

While Luminous is keeping a tight lid on the exact technical specifications of its hardware, Nahmias published a scientific article in January 2020 that compared the performance of photonic and electronic hardware in AI systems using what the paper called “multiply-accumulate” operations. Nahmias and the other coauthors found photonic hardware — presumably based on Luminous’ — was significantly better than electronic hardware in terms of energy, speed, and compute density.

“If you look at where modern AI computers get bottlenecked, it’s first and foremost on communication, at every scale – between chips, between boards, and between racks in the datacenter. If you fail to solve the communication bottleneck, you do indeed have to live on these terrible tradeoff curves,” Gomez added. “Luminous uses its … silicon photonics technology to directly solve the communication bottleneck at every scale of the hierarchy, and when [we] say solve, [we] mean solve: [we’re] increasing the bandwidth by 10 times to 100 times at every distance scale.”

Future plans

Photonic chips have drawbacks that must be addressed if the technology is to reach the mainstream. They’re physically larger than their electronic counterparts and difficult to mass-produce, for one, owing to the immaturity of photonic chip fabrication plants. Moreover, photonic architectures still largely rely on electronic control circuits, which can create bottlenecks.

“For large applications, including AI and machine learning and large-scale analytics, power dissipation across many components is expected to be high — an order of magnitude higher than current systems,” writes The Next Platform’s Nicole Hemsoth in a January 2021 analysis of photonics technologies. “We likely are at least five years to a decade away from silicon photonics-based computing.”

But pre-revenue Luminous — which has over 90 employees — claims to have produced working prototypes of its chips, and the company aims to ship development kits to its customers within the next few months. The funding from the latest round brings Luminous’ total capital raised to $115 million and will primarily go toward doubling the size of the engineering team, building out Luminous’ chips and software, and gearing up for “commercial-scale” production, Gomez says.

“Luminous’ initial target customers are hyperscalers that build their own datacenters to drive their own machine learning algorithms,” Gomez continued. “Luminous’ computer has the memory, compute, and bandwidth necessary to train these super large algorithms, and it’s designed from the ground up with the AI user in mind … For users that use big AI models to drive their core revenue, we completely unblock them from growing their models, and we eliminate thousands of hours otherwise sunk into programming complexity and engineering overhead.”

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
