The profile is currently unclaimed by the seller. All information is provided by CB Insights.

lightmatter.ai

Founded Year

2017

Stage

Series B | Alive

Total Raised

$113.18M

Last Raised

$80M | 1 yr ago

About Lightmatter

Lightmatter builds chips for artificial intelligence computing. Its architecture leverages unique properties of light to enable fast and efficient inference and training engines.

Lightmatter Headquarter Location

60 State Street

Boston, Massachusetts, 02109,
United States

Predict your next investment

The CB Insights tech market intelligence platform analyzes millions of data points on venture capital, startups, patents, partnerships and news mentions to help you see tomorrow's opportunities, today.

Research containing Lightmatter

Get data-driven expert analysis from the CB Insights Intelligence Unit.

CB Insights Intelligence Analysts have mentioned Lightmatter in 3 CB Insights research briefs, most recently on Aug 20, 2020.

Expert Collections containing Lightmatter

Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.

Lightmatter is included in 3 Expert Collections, including Artificial Intelligence.

Artificial Intelligence

8,718 items

This collection includes startups selling AI SaaS, using AI algorithms to develop their core products, and those developing hardware to support AI workloads.

Game Changers 2018

36 items

Our selected startups are high-momentum companies pioneering technology with the potential to transform society and economies for the better.

AI 100

200 items

The winners of the 4th annual CB Insights AI 100.

Lightmatter Patents

Lightmatter has filed 43 patents.

The 3 most popular patent topics include:

  • Artificial neural networks
  • Machine learning
  • Artificial intelligence

Application Date: 4/30/2019

Grant Date: 6/21/2022

Title:

Related Topics: Computer memory, Integrated circuits, Computer buses, SDRAM, Ethernet

Status: Grant

Latest Lightmatter News

“PACMAN” Hack Can Break Apple M1’s Last Line of Defense

Jun 10, 2022

Apple's M1 processor is a powerful and high-efficiency chip, though perhaps not as impervious in its defenses as its initial safety record might suggest.
Apple’s M1 processor made a big splash on its November 2020 release, noteworthy for its eye-popping performance and miserly power consumption. But the value of its security may not be as obvious at first blush. A lack of serious attacks since its launch nearly two years ago indicates that its security systems, among them a last line of defense called pointer authentication codes, are working well. But its honeymoon period may be coming to an end.

At the International Symposium on Computer Architecture later this month, researchers led by MIT’s Mengjia Yan will present a mode of attack that so weakens the pointer authentication code (PAC) defense that the core of a computer’s operating system is made vulnerable. And because PACs may be incorporated in future processors built on the 64-bit Arm architecture, the vulnerability could become more widespread. It’s possible that other processors are already using PACs, but the M1 was the only one available to Yan’s lab. “What we found is actually quite fundamental,” says Yan. “It’s a class of attack. Not one bug.”

The vulnerability, called PACMAN, assumes that there is already a software bug in operation on the computer that can read and write to different memory addresses. It then exploits a detail of the M1 hardware architecture to give the bug the power to execute code and possibly take over the operating system. “We assume the bug is there and we make it into a more serious bug,” says Joseph Ravichandran, a student of Yan’s who worked on the exploit with fellow students Weon Taek Na and Jay Lang.

To understand how the attack works, you have to get a handle on what pointer authentication is and how a detail of processor architecture called speculative execution works. Pointer authentication is a way to guard against software attacks that try to corrupt data that holds memory addresses, or pointers.
For example, malicious code might execute a buffer overflow attack, writing more data than expected into a part of memory, with the excess spilling over into a pointer’s address and overwriting it. That might then mean that instead of the computer’s software executing code stored at the original address, it is diverted to malware stored at the new one.

Pointer authentication appends a cryptographic signature to the end of the pointer. If there’s any malicious manipulation of the pointer, the signature will no longer match up with it. PACs are used to guard the core of the system’s operating system, the kernel. If an attacker got so far as to manipulate a kernel pointer, the mismatch between the pointer and its authentication code would produce what’s called an “exception,” and the system would crash, ending the malware’s attack. Malware would have to be extremely lucky to guess the right code; the odds are about 1 in 65,000.

PACMAN finds a way for malware to keep guessing over and over without any wrong guesses triggering a crash. How it does this goes to the heart of modern computing. For decades now, computers have been speeding up processing using what’s called speculative execution. In a typical program, which instruction should come next often depends on the outcome of the previous instruction (think if/then). Rather than wait around for the answer, modern CPUs will speculate—make an educated guess—about what comes next and start executing instructions along those lines. If the CPU guessed right, this speculative execution has saved a bunch of clock cycles. If it turns out to have guessed wrong, all the work is thrown out, and the processor begins along the correct sequence of instructions. Importantly, the mistakenly computed values are never visible to the software. There is no program you could write that would simply output the results of speculative execution.
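The pointer-authentication check described above can be sketched in a few lines of Python. This is a toy model, not Apple’s actual mechanism: the key, the 16-bit code width, and the bit layout are all illustrative assumptions.

```python
import hmac
import hashlib

# Toy illustration of pointer authentication (NOT Apple's real scheme):
# a short MAC over the pointer is packed into its unused high bits, and
# any tampering with the pointer invalidates the code.

KEY = b"hypothetical-per-boot-key"  # assumed secret signing key
PAC_BITS = 16                       # ~16 bits gives ~1-in-65,000 guess odds

def sign_pointer(ptr: int) -> int:
    """Return the pointer with a truncated MAC packed into its high bits."""
    mac = hmac.new(KEY, ptr.to_bytes(8, "little"), hashlib.sha256).digest()
    pac = int.from_bytes(mac[:2], "little") % (1 << PAC_BITS)
    return (pac << 48) | ptr  # stash the PAC in otherwise-unused high bits

def authenticate(signed_ptr: int) -> int:
    """Strip and verify the PAC; raise (i.e. crash) on mismatch."""
    ptr = signed_ptr & ((1 << 48) - 1)
    expected = sign_pointer(ptr) >> 48
    if (signed_ptr >> 48) != expected:
        raise RuntimeError("PAC mismatch: exception, system crashes")
    return ptr

p = sign_pointer(0x7FFF12345678)
assert authenticate(p) == 0x7FFF12345678  # legitimate use passes
try:
    authenticate(p ^ 0x1)  # attacker flips one pointer bit
except RuntimeError:
    pass  # the mismatch triggers an exception, as the article describes
```

A blind attacker must guess the 16-bit code, and in a real kernel each wrong guess crashes the machine; PACMAN’s contribution is checking guesses speculatively so that wrong ones never raise the exception.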
However, over the past several years researchers have discovered ways to exploit speculative execution to do things like sneak data out of CPUs. These are called side-channel attacks, because they acquire data by observing indirect signals, such as how much time it takes to access data. Spectre and Meltdown are perhaps the best known of these side-channel attacks.

Yan’s group came up with a way to trick the CPU into guessing pointer authentication codes in speculation so an exception never arises, and the OS doesn’t crash. Of course, the answer is still invisible to software. But a side-channel trick involving stuffing a particular buffer with data and using timing to uncover which part the successful speculation replaces provides the answer. [A similar concept is explained in more detail in “How the Spectre and Meltdown Hacks Really Worked.”]

With regard to PACMAN, Apple’s product team provided this response to Yan’s group: “We want to thank the researchers for their collaboration as this proof-of-concept advances our understanding of these techniques. Based on our analysis, as well as the details shared with us by the researchers, we have concluded this issue does not pose an immediate risk to our users and is insufficient to bypass device protections on its own.”

Other researchers familiar with PACMAN say that how dangerous it really is remains to be seen. However, PACMAN “increases the number of things we have to worry about when designing new security solutions,” says Nael Abu-Ghazaleh, chair of computer engineering at the University of California, Riverside, and an expert in architecture security, including speculative-execution attacks. Processor makers have been adding new security solutions to their designs besides pointer authentication in recent years. He suspects that now that PACMAN has been revealed, other research will begin to find speculative attacks against these new solutions.
Yan’s group explored some naïve solutions to PACMAN, but they tended to increase the processor’s overall vulnerability. “It’s always an arms race,” says Keith Rebello, the former program manager of DARPA’s System Security Integrated Through Hardware and Firmware (SSITH) program and currently a senior technical fellow at The Boeing Company. PACs are there “to make it much harder to exploit a system, and they have made it a lot harder. But is it the complete solution? No.” He’s hopeful that tools developed through SSITH, such as rapid re-encryption, could help.

Abu-Ghazaleh credits Yan’s group with opening a door to a new aspect of processor security. “People used to think software attacks were standalone and separate from hardware attacks,” says Yan. “We are trying to look at the intersection between the two threat models. Many other mitigation mechanisms exist that are not well studied under this new compounding threat model, so we consider the PACMAN attack as a starting point.”

[Image: This computer rendering depicts the pattern on a photonic chip that the author and his colleagues have devised for performing neural-network calculations using light. Credit: Alexander Sludds]

Think of the many tasks to which computers are being applied that in the not-so-distant past required human intuition. Computers routinely identify objects in images, transcribe speech, translate between languages, diagnose medical conditions, play complex games, and drive cars. The technique that has empowered these stunning developments is called deep learning, a term that refers to mathematical models known as artificial neural networks. Deep learning is a subfield of machine learning, a branch of computer science based on fitting complex models to data. While machine learning has been around a long time, deep learning has taken on a life of its own lately.
The reason for that has mostly to do with the increasing amounts of computing power that have become widely available—along with the burgeoning quantities of data that can be easily harvested and used to train neural networks. The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphics processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem—using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.
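The weighted-sum-plus-activation behavior of a neuron, and the grouping of neurons into a layer, can be sketched with a few lines of NumPy. This is a minimal illustration; the weights, biases, and sizes are made up.

```python
import numpy as np

def relu(z):
    # A common nonlinear activation function: clip negatives to zero.
    return np.maximum(z, 0.0)

def dense_layer(inputs, weights, biases):
    """One layer of artificial neurons.

    Each neuron computes the weighted sum of its inputs plus a bias,
    then applies the nonlinear activation function.
      inputs:  vector of length n_in
      weights: matrix of shape (n_out, n_in), one row per neuron
      biases:  vector of length n_out
    """
    return relu(weights @ inputs + biases)

# Three inputs feeding a layer of two neurons (made-up numbers).
x = np.array([1.0, 2.0, 3.0])
W = np.array([[0.5, 1.0, 0.25],
              [1.0, 0.0, -0.5]])
b = np.array([0.1, -0.2])
y = dense_layer(x, W, b)  # this output becomes the next layer's input
```

The matrix-vector product `weights @ inputs` is exactly the linear-algebra trick the layered arrangement enables: all the neurons’ weighted sums are computed in one matrix operation.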
While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers—spreadsheets if you will, minus the descriptive column headers you might find in a typical Excel file. This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.

[Image: Multiplying With Light. Two beams whose electric fields are proportional to the numbers to be multiplied, x and y, impinge on a beam splitter (blue square). The beams leaving the beam splitter shine on photodetectors (ovals), which provide electrical signals proportional to these electric fields squared. Inverting one photodetector signal and adding it to the other then results in a signal proportional to the product of the two inputs. Credit: David Schneider]

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals.
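The claim that matrix calculations boil down to multiply-and-accumulate operations is easy to make concrete. The sketch below spells out a matrix product as explicit multiply-accumulates; it is purely illustrative, since real libraries use heavily optimized routines for the same arithmetic.

```python
import numpy as np

def matmul_mac(A, B):
    """Matrix product written as explicit multiply-and-accumulate steps."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):          # each row of A...
        for j in range(m):      # ...combined with each column of B
            acc = 0.0
            for p in range(k):
                acc += A[i, p] * B[p, j]  # one multiply-and-accumulate
            C[i, j] = acc
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.allclose(matmul_mac(A, B), A @ B)  # matches the optimized routine
```

Every entry of the result is nothing more than a running sum of pairwise products, which is exactly the operation the optical hardware described below is designed to perform.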
But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images. Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam. The usual solution is simply to throw more computing resources—along with time, money, and energy—at the problem. As a result, training today's large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn't mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.

It has long been known that optical fibers can support much higher data rates than electrical wires. That's why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages. But there is a big difference between communicating data and computing with it.
And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements—meaning that their outputs aren't just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell's equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.

The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra. To illustrate how that can be done, I'll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together—the multiply-and-accumulate operations I described earlier. My MIT colleagues and I published a paper about how this could be done in 2019. We're working now to build such an optical matrix multiplier.

The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.
Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.

To use this device for matrix multiplication, you generate two light beams with electric-field intensities that are proportional to the two numbers you want to multiply. Let's call these field intensities x and y. Shine those two beams into the beam splitter, which will combine these two beams. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x − y)/√2.

In addition to the beam splitter, this analog multiplier requires two simple electronic components—photodetectors—to measure the two output beams. They don't measure the electric field intensity of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field intensity. Why is that relation important? To understand that requires some algebra—but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x − y)/√2, you get (x² − 2xy + y²)/2. Subtracting the latter from the former gives 2xy.

Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
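That algebra is easy to verify numerically. The sketch below simulates the ideal beam splitter and photodetectors described above; it is a noiseless toy model, whereas real hardware would add noise and calibration error.

```python
import math

def optical_multiply(x, y):
    """Simulate the beam-splitter multiplier described in the text.

    Input fields x and y produce output fields (x + y)/sqrt(2) and
    (x - y)/sqrt(2). Photodetectors measure power (the field squared);
    subtracting one detector signal from the other yields 2*x*y, so we
    halve it here to recover the product itself.
    """
    out_plus = (x + y) / math.sqrt(2)
    out_minus = (x - y) / math.sqrt(2)
    power_plus = out_plus ** 2    # a photodetector reads power, not field
    power_minus = out_minus ** 2
    return (power_plus - power_minus) / 2

assert abs(optical_multiply(3.0, 4.0) - 12.0) < 1e-9
```

Note that the squaring done by the photodetectors, an intrinsically nonlinear physical effect, is what lets an otherwise linear optical system produce a product.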
[Image: Simulations of the integrated Mach-Zehnder interferometer found in Lightmatter's neural-network accelerator show three different conditions whereby light traveling in the two branches of the interferometer undergoes different relative phase shifts (0 degrees in a, 45 degrees in b, and 90 degrees in c). Credit: Lightmatter]

My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation. Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence.

The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulse—you can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.

Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple times—consuming energy each time—it can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.
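The pulsed accumulate-on-a-capacitor scheme amounts to computing a dot product with a single read-out at the end. A toy simulation, under the same ideal noiseless assumptions as before:

```python
import math

def mac_sequence(xs, ys):
    """Simulate N pulsed optical multiplications accumulated on a capacitor.

    Each pulse pair multiplies one x with one y via the beam splitter; the
    photodetector difference signal adds charge to a capacitor, so after N
    pulses the stored charge is proportional to the dot product of xs and ys.
    Only the final read-out needs the costly analog-to-digital conversion.
    """
    charge = 0.0
    for x, y in zip(xs, ys):
        plus = ((x + y) / math.sqrt(2)) ** 2
        minus = ((x - y) / math.sqrt(2)) ** 2
        charge += (plus - minus) / 2  # one multiply-and-accumulate per pulse
    return charge                     # a single ADC read at the end

# Dot product of [1, 2, 3] and [4, 5, 6] is 4 + 10 + 18 = 32.
assert abs(mac_sequence([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]) - 32.0) < 1e-9
```

Because the expensive analog-to-digital conversion happens once per sequence rather than once per pulse, the energy cost per multiply-and-accumulate shrinks as N grows.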
Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.

I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.

Another startup using optics for computing is Optalysys, which hopes to revive a rather old concept. One of the first uses of optical computing back in the 1960s was for the processing of synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysys hopes to bring this approach up to date and apply it more widely.
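Numerically, the operation a lens performs optically corresponds to a 2-D Fourier transform. Here is a small NumPy sketch transforming a toy aperture image, the kind of step synthetic-aperture processing relies on; the aperture shape and sizes are made up for illustration.

```python
import numpy as np

# A lens optically produces the 2-D Fourier transform of the field in its
# focal plane; numerically, the same operation is a 2-D FFT. Transform a
# toy "aperture": a small square opening in an otherwise dark field.
aperture = np.zeros((64, 64))
aperture[28:36, 28:36] = 1.0  # 8-by-8 square opening

spectrum = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(spectrum) ** 2  # what a detector in the focal plane sees

# The diffraction pattern of a square aperture peaks at the center (the DC
# term, equal to the sum over the aperture) after fftshift moves it there.
assert intensity.argmax() == np.ravel_multi_index((32, 32), intensity.shape)
```

A digital computer spends O(N log N) operations on this FFT; the lens does the equivalent transform at the speed of light, which is what made optical processing attractive for radar data in the 1960s.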
There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approaches—spiking and optics—is quite exciting.

There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.

There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.

There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What's clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.
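The 8-to-10-bit precision limit can be felt by quantizing values to a given bit depth. The sketch below is a rough stand-in for the limited resolution of analog optics and their data converters, not a model of any particular device.

```python
import numpy as np

def quantize(x, bits=8):
    """Round values in [-1, 1] to 2**bits levels, mimicking the limited
    precision of analog optical hardware and its DACs/ADCs."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
for bits in (4, 8, 10):
    err = np.abs(quantize(x, bits) - x).max()
    # Worst-case rounding error is half a quantization step, i.e. 1 / 2**bits.
    assert err <= 1.0 / 2 ** bits + 1e-12
```

At 8 bits the worst-case representation error is about 0.4 percent of full scale, tolerable for many inference workloads but, as the article notes, generally considered too coarse for training.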
Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed. Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks , were first demonstrated in the 1970s . But this approach didn't catch on. Will this time be different? Possibly, for three reasons. First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can't rely on Moore's Law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this time—and the future of such computations may indeed be photonic. From Your Site Articles

  • When was Lightmatter founded?

    Lightmatter was founded in 2017.

  • Where is Lightmatter's headquarters?

    Lightmatter's headquarters is located at 60 State Street, Boston.

  • What is Lightmatter's latest funding round?

    Lightmatter's latest funding round is Series B.

  • How much did Lightmatter raise?

    Lightmatter raised a total of $113.18M.

  • Who are the investors of Lightmatter?

    Investors of Lightmatter include Spark Capital, Matrix Partners, Google Ventures, Hewlett Packard Enterprise, SIP Global Partners and 6 more.

