Breaking the scalability limit of analog computing


Credit: Pixabay/CC0 Public Domain
As machine learning models become larger and more complex, they require faster and more energy-efficient hardware to perform calculations. Conventional digital computers are struggling to keep up.
An optical neural network can perform the same tasks as a digital one, such as image classification or speech recognition, but because the computations are carried out using light instead of electrical signals, optical neural networks can run many times faster while consuming less power.
However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network with many connected components, errors can quickly accumulate.
Even with error-correction techniques, some amount of error is unavoidable due to fundamental properties of the devices that make up an optical neural network. A network large enough to be implemented in the real world would be far too imprecise to be effective.
MIT researchers have overcome this hurdle and found a way to effectively scale up an optical neural network. By adding a tiny hardware component to the optical switches that form the network's architecture, they can reduce even the uncorrectable errors that would otherwise accumulate in the device.
Their work could enable a super-fast, energy-efficient analog neural network that operates with the same accuracy as a digital one. With their technique, as an optical circuit becomes larger, the amount of error in its computations actually decreases.
“This is remarkable, as it runs counter to the intuition of analog systems, where larger circuits are supposed to have higher errors, so that errors set a limit on scalability. This paper allows us to address the scalability question of these systems with an unambiguous ‘yes,’” said lead author Ryan Hamerly, a visiting scientist in the MIT Research Laboratory for Electronics (RLE) and the Quantum Photonics Laboratory, and a senior scientist at NTT Research.
Hamerly’s co-authors are graduate student Saumil Bandyopadhyay and senior author Dirk Englund, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), head of the Quantum Photonics Laboratory, and a member of the RLE. The research is published in Nature Communications.
Multiplying with light
An optical neural network consists of many connected components that function like reprogrammable, tunable mirrors. These tunable mirrors are known as Mach-Zehnder interferometers (MZIs). Neural network data are encoded into light, which is fed into the optical neural network from a laser.
A typical MZI contains two mirrors and two beamsplitters. Light enters the top of the MZI, where it is split into two parts that interfere with each other before being recombined by the second beamsplitter and then reflected out the bottom to the next MZI in the array. Researchers can take advantage of the interference of these optical signals to perform complex linear algebra operations, known as matrix multiplication, which is how neural networks process data.
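For readers who want to see the idea in code, here is a minimal numerical sketch (not taken from the paper) of how a single, ideal MZI acts on light: two 50:50 beamsplitters and two phase shifters form a tunable 2x2 building block, and a mesh of such blocks multiplies the vector of input light amplitudes by a programmable matrix. The function names and parameter values below are purely illustrative.

```python
import numpy as np

def beamsplitter(theta=np.pi / 4):
    """2x2 transfer matrix of a beamsplitter; theta = pi/4 is a 50:50 split."""
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the upper arm only."""
    return np.diag([np.exp(1j * phi), 1.0])

def mzi(internal_phase, external_phase):
    """Ideal MZI: splitter, internal phase, splitter, then an output phase.
    Together the two phases program the 2x2 splitting ratio and relative phase."""
    return (phase_shifter(external_phase) @ beamsplitter()
            @ phase_shifter(internal_phase) @ beamsplitter())

# The cross-port power sweeps the full 0..1 range as the internal phase is tuned,
# which is what lets a mesh of such MZIs implement an arbitrary matrix
# multiplication on a vector of optical amplitudes.
light_in = np.array([1.0, 0.0])              # all light enters the top port
for phase in (0.0, np.pi / 2, np.pi):
    light_out = mzi(phase, 0.0) @ light_in
    print(phase, np.round(np.abs(light_out) ** 2, 3))  # power in (top, bottom) ports
```

Sweeping the internal phase in this toy model moves the light between the bar state (all light stays in the top port) and the cross state (all light exits the bottom port), which is the knob a mesh of MZIs uses to build up a larger matrix.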
But errors that can occur in each MZI quickly accumulate as light moves from one device to another. Some errors can be avoided by identifying them in advance and tuning the MZIs so that earlier errors are cancelled out by later devices in the array.
“It’s a very simple algorithm if you know what the errors are. But these errors are hard to pinpoint because you only have access to the inputs and outputs of your chip,” Hamerly said. “This prompted us to look at whether we could make corrections without calibration.”
Hamerly and his collaborators previously demonstrated a mathematical technique that went a step further. They could successfully infer the errors and correctly tune the MZIs accordingly, but even this did not remove all of the error.
Due to the fundamental nature of the MZI, there are cases where it is impossible to tune the device so that all of the light flows out of the bottom port to the next MZI. If each device loses a fraction of its light at every step and the array is very large, only a tiny amount of power will be left by the end.
“Even with error correction, there is a fundamental limit to how good a chip can be. MZIs physically cannot realize certain settings they need to be configured to,” he said.
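To illustrate why such settings become unreachable, here is a rough sketch under a simple, assumed error model (splitter angles off from 50:50 by a few percent, which is not necessarily the error model used in the paper): no internal-phase setting then sends all of the light out of the cross port, and the small leakage compounds over a long chain of devices.

```python
import numpy as np

def splitter(theta):
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

def cross_power(phi, err1, err2):
    """Power reaching the bottom (cross) port of an MZI whose two splitters
    deviate from the ideal 50:50 angle by err1 and err2 (in radians)."""
    u = (splitter(np.pi / 4 + err2) @ np.diag([np.exp(1j * phi), 1.0])
         @ splitter(np.pi / 4 + err1))
    return np.abs(u[1, 0]) ** 2              # light in at the top, out at the bottom

err = 0.05                                    # assumed ~5% splitter imperfection
phases = np.linspace(0, 2 * np.pi, 2001)
best = max(cross_power(p, err, err) for p in phases)
print(f"best achievable cross-port power: {best:.5f}")   # falls short of 1.0

# The shortfall compounds: after n imperfect stages at most best**n of the power
# can be routed where it needs to go.
for n in (10, 100, 1000):
    print(n, f"{best ** n:.2e}")
```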
So the team developed a new type of MZI. The researchers added an additional beamsplitter to the end of the device, calling it a 3-MZI because it has three beamsplitters instead of two. Because of the way this additional beamsplitter mixes the light, it becomes much easier for an MZI to reach the setting it needs to send all of the light out through its bottom port.
Importantly, the additional beamsplitter is only a few micrometers in size and is a passive component, so it does not require any extra wiring. Adding beamsplitters does not significantly change the size of the chip.
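For intuition only, the following sketch uses a simplified three-splitter model of my own (three nominally 50:50 splitters with two tunable phases between them; the actual 3-MZI layout in the paper may differ) to show that, with the extra splitter, a setting that recovers a perfect cross state exists even when every splitter is imperfect.

```python
import numpy as np
from itertools import product

def splitter(theta):
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

def cross_power_3mzi(phi1, phi2, errs):
    """Bottom-port output power of an assumed three-splitter device with two
    tunable phases; each splitter angle is pi/4 plus a fabrication error."""
    e1, e2, e3 = errs
    u = (splitter(np.pi / 4 + e3) @ np.diag([np.exp(1j * phi2), 1.0])
         @ splitter(np.pi / 4 + e2) @ np.diag([np.exp(1j * phi1), 1.0])
         @ splitter(np.pi / 4 + e1))
    return np.abs(u[1, 0]) ** 2

rng = np.random.default_rng(0)
errs = rng.normal(0.0, 0.05, size=3)          # random splitter errors of a few percent
phases = np.linspace(0, 2 * np.pi, 201)
best = max(cross_power_3mzi(p1, p2, errs) for p1, p2 in product(phases, phases))
print(f"best cross-port power with three imperfect splitters: {best:.5f}")  # ~1.0
```

In this toy model, the two phases together can steer the imperfect device all the way to the cross state, which the two-splitter MZI above cannot reach once its splitters deviate from 50:50.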
Bigger chips, fewer errors
When the researchers ran simulations to test their architecture, they found that it could eliminate much of the uncorrectable error that hampers accuracy. And as the optical neural network becomes larger, the amount of error in the device actually decreases, the opposite of what happens in a device with standard MZIs.
Using 3-MZIs, they could create a device large enough for commercial use with error reduced by a factor of 20, Hamerly says.
The researchers also developed a variant of the MZI design specifically for correlated errors. These arise from manufacturing imperfections: if the thickness of a chip is slightly off, all of the MZIs may be biased by roughly the same amount, so the errors are all about the same. They found a way to change the configuration of an MZI to make it robust against these types of errors. This technique also increases the bandwidth of the optical neural network so that it can run three times faster.
Now that they have demonstrated these techniques using simulations, Hamerly and his collaborators plan to test the methods on physical hardware and continue working toward an optical neural network they can effectively deploy in the real world.
Ryan Hamerly et al, “Asymptotically fault-tolerant programmable photonics,” Nature Communications (2022).
Provided by
Massachusetts Institute of Technology
Citation: Breaking the scaling limit of analog computing (2022, November 29) retrieved 29 November 2022 from https://techxplore.com/news/2022-11-scaling-limits-analog.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.