News

January 14, 2022


This article looks at a technology from Lightmatter for AI inference processing that uses light rather than electrons, combined with traditional CMOS including SRAM memory. It is based on an interview with Lightmatter CEO Nick Harris. The company sees this product being useful for data center inference and perhaps eventually in some AI-computation-intensive industrial and consumer applications, such as autonomous vehicles.

There are widely cited forecasts projecting that information and communications technology (ICT) energy consumption will keep accelerating through the 2020s, with a 2018 Nature article estimating that, if current trends continue, ICT will consume more than 20% of electricity demand by 2030. At several industry events I have heard talks arguing that the amount of energy consumed will be one of the important limits on data center performance. NVIDIA’s latest GPU solutions use 400+ W processors, and this energy consumption could more than double in future AI processor chips. Solutions that accelerate important compute functions while consuming less energy will be important for more sustainable and economical data centers.

Lightmatter’s Envise chip is a general-purpose machine learning accelerator that combines a photonic integrated circuit (PIC) and CMOS transistor-based devices (an ASIC) into a single compact module. The device uses silicon photonics for high performance AI inference tasks and consumes much less energy than CMOS-only solutions, thus helping to reduce the projected power load from data centers.

The CMOS and photonics chips are combined into the Envise compute module. 500 MB of SRAM is used to store the weights from a trained machine learning (ML) model.
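To put that on-chip capacity in perspective, here is a quick back-of-the-envelope calculation of how many model weights 500 MB can hold. The storage formats are my assumption for illustration, not a Lightmatter specification:

```python
# Rough capacity arithmetic for 500 MB of on-module SRAM.
# The weight formats below are assumptions for illustration only.
sram_bytes = 500 * 1024**2

print("8-bit weights: ", sram_bytes // 1)  # ~524 million
print("16-bit weights:", sram_bytes // 2)  # ~262 million
print("32-bit weights:", sram_bytes // 4)  # ~131 million
```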

The Envise optical chip performs analog processing and thus does not offer the precision of the floating-point calculations used in conventional computing. Envise processors are therefore better suited to applications where this reduced precision isn’t an issue, such as AI inference. Envise thus provides a specialized computer that excels at certain types of problems. With the slowing of traditional CPU scaling, specialized computing devices such as Envise will play important roles in application-specific compute.
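To illustrate why inference tolerates reduced precision, here is a minimal Python sketch (my own illustration, not Lightmatter code) that quantizes the weights of a toy linear layer to 8-bit values and compares its output against full 32-bit floating point:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" layer: a weight matrix and one input activation vector.
weights = rng.normal(size=(64, 128)).astype(np.float32)
x = rng.normal(size=128).astype(np.float32)

def quantize_int8(w):
    """Symmetric 8-bit quantization: scale values into [-127, 127] and round."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

qw, scale = quantize_int8(weights)

# Full-precision result vs. the low-precision (dequantized) result.
y_fp32 = weights @ x
y_int8 = (qw.astype(np.float32) * scale) @ x

print("max abs difference:", np.abs(y_fp32 - y_int8).max())
print("same argmax:", y_fp32.argmax() == y_int8.argmax())
```

The two outputs differ slightly in value, but for a classification-style readout the ranking of outputs is typically unchanged, which is what inference cares about.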

Envise runs much like Google’s tensor processing units (TPUs) for general-purpose AI applications, except that it uses an optical AI processor engine. Any application built on linear algebra can run on Envise modules, including AI inference, natural language processing, financial modelling, and ray tracing.
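The common thread among these workloads is dense linear algebra. As a hedged illustration (my own example, not Lightmatter’s software stack), the same multiply-accumulate primitive that drives a neural-network layer also computes a portfolio-risk estimate in financial modelling:

```python
import numpy as np

rng = np.random.default_rng(1)

# AI inference: one neural-network layer is a matrix-vector product.
weights = rng.normal(size=(64, 128))
activations = rng.normal(size=128)
layer_output = weights @ activations

# Financial modelling: portfolio variance is the quadratic form h' C h,
# built from the same multiply-accumulate primitive.
returns = rng.normal(size=(250, 5))    # hypothetical daily return series
cov = np.cov(returns, rowvar=False)    # 5x5 covariance matrix
holdings = np.array([0.2, 0.3, 0.1, 0.25, 0.15])
portfolio_variance = holdings @ cov @ holdings

print(layer_output.shape, float(portfolio_variance))
```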

Lightmatter will offer its Envise processors in an Envise server that combines 16 Envise modules with AMD EPYC processors, SSD storage, and DDR4 DRAM.

Lightmatter has a roadmap for even faster processing that uses more colors of light as parallel processing channels, with each color acting as a separate virtual computer.
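Conceptually, throughput scales with the number of wavelengths, since each color carries an independent computation through the same photonic hardware. The sketch below is a back-of-the-envelope model with assumed numbers, not Lightmatter specifications:

```python
# Toy throughput model for wavelength parallelism.
# The single-wavelength rate below is an assumed placeholder value.
base_ops_per_sec = 1.0e12

for num_colors in (1, 2, 4, 8):
    # Each wavelength acts as an independent "virtual computer",
    # so ideal throughput grows linearly with the color count.
    print(f"{num_colors} wavelength(s): ~{base_ops_per_sec * num_colors:.1e} ops/sec")
```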

Nick said that, in addition to data center applications for Envise, he could see the technology enabling autonomous electric vehicles, which require high-performance AI but are constrained by battery power; lower inference power makes it easier to provide compelling range per charge. In addition to the Envise module, Lightmatter also offers an optical interconnect technology that it calls Passage.

Lightmatter is making optical AI processors that can provide fast results with less power consumption than conventional CMOS products. Its compute module combines CMOS logic and memory with optical analog processing units useful for AI inference, natural language processing, financial modelling, and ray tracing.

If you like this article, consider subscribing to our bi-monthly newsletter to get information about our portfolio, solutions, and insights delivered to your inbox.