When making a purchase in a shopping app, we can quickly browse a list of recommendations while sensing that the machine knows something about us, or at least is learning to. As an effective emerging technology, machine learning (ML) has become popular across a wide range of applications, from everyday apps to supercomputers.
As a result, specialized ML computers are being built at various scales, but their productivity is limited: the workload and development costs are largely concentrated in their software stacks, which must be developed or reworked ad hoc to support models at each scale.
To address this problem, researchers from the Chinese Academy of Sciences (CAS) proposed a fractal parallel computing model and published their research in Intelligent Computing on September 5.
“To solve the productivity problem, we propose ML computers with a fractal von Neumann architecture (FvNA),” said Yongwei Zhao, a researcher at the State Key Laboratory of Processors, Institute of Computing Technology, CAS.
“Fractality” is a concept borrowed from geometry that describes self-similar patterns repeated at every scale. According to the researchers, if a system is “fractal,” it always runs the same program regardless of its size.
FvNA, a parallel, multi-tiered von Neumann architecture, is not only fractal but also isomorphic between its layers: literally, “sameness between layered structures.”
That is, in stark contrast to the usual asymmetric ML computer architectures, FvNA applies the same instruction set architecture (ISA) to every layer. “The lower layers are completely controlled by the higher layers, so only the top layer is visible to the programmer as a monolithic processor. As a result, ML computers built with FvNA are programmable through a sequential, uniform, and scale-invariant view,” the researchers explain.
Although FvNA has been shown to be applicable to the ML domain, with the potential to ease the programming productivity problem while performing as efficiently as its ad hoc counterparts, several issues remain to be solved. The study addressed the following three questions:
- How can FvNA remain quite efficient with such a strict architectural constraint?
- Can FvNA also apply to workloads from other domains?
- If so, what are the exact prerequisites?
To answer these questions, the researchers began by modeling a fractal parallel machine (FPM), an abstract parallel computer derived from FvNA. FPM is built on Valiant’s multi-BSP, a homogeneous multi-layer parallelism model, with only minor extensions.
An instance of FPM is a tree structure of nested components; each component contains memory, a processor, and child components. Components execute fracops, the units of workload on fractal parallel computing systems: for example, reading input from external memory, performing computation on the processor, and then writing output data back to external memory.
Compared with Valiant’s multi-BSP, FPM reduces the number of parameters for a simpler abstraction, the researchers say. “More importantly, FPM places explicit restrictions on programming by exposing a single processor at the programming interface. The processor knows only about its parent component and its child components, not the global system specification.” In other words, a program never knows where it sits in the tree; by definition, FPM cannot be programmed in a scale-dependent way.
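This restricted view can be illustrated with a small sketch. The class and method names below are hypothetical, not the authors' code: each component holds references only to its own processor state and its direct children, so the control logic never consults the machine's global size or depth, and the same routine runs unchanged on machines of different scale.

```python
# Hypothetical sketch of an FPM-style component tree (not the authors' code).
# A component sees only its local processor and its direct children; nothing
# below refers to the machine's total size or depth.

class Fracop:
    """An example fractal operation (a reduction): split, leaf compute, merge."""
    def split(self, data, k):
        n = (len(data) + k - 1) // k          # partition into <= k chunks
        return [data[i:i + n] for i in range(0, len(data), n)]
    def leaf(self, data):
        return sum(data)                       # base case on one processor
    def merge(self, results):
        return sum(results)                    # combine child results

class Component:
    def __init__(self, children=()):
        self.children = list(children)         # empty list => leaf processor

    def execute(self, op, data):
        # "Read" input, compute locally or delegate one layer down, "write" output.
        if not self.children:
            return op.leaf(data)
        parts = op.split(data, len(self.children))
        return op.merge(c.execute(op, p)       # same program, one layer down
                        for c, p in zip(self.children, parts))

# Two machines of different scale run the identical program:
small = Component([Component(), Component()])
large = Component([Component([Component(), Component()]) for _ in range(4)])
```

Because `execute` is written against only the parent-child interface, neither `small` nor `large` requires any change to the fracop itself, which is the scale-invariance the quoted restriction enforces.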
Meanwhile, the researchers proposed two ML-targeted FvNA architectures, the specialized Cambricon-F and the general-purpose Cambricon-FR, and illustrated FPM’s fractal programming style by running several general-purpose sample programs. The patterns include dynamic programming, divide-and-conquer, and embarrassingly parallel algorithms, all of which proved to be effectively programmable.
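From the programmer's side, the divide-and-conquer pattern in a fractal style can be sketched as three scale-free hooks that a runtime applies recursively. This is an illustrative sketch with hypothetical function names, not the Cambricon-F/FR programming interface:

```python
# Hypothetical sketch of fractal-style divide-and-conquer (not the authors' API).
# The programmer writes only split/leaf/merge; the same trio fits any machine
# shape, simulated here sequentially.

def split(xs, k):
    """Partition the input into at most k roughly equal chunks."""
    n = (len(xs) + k - 1) // k
    return [xs[i:i + n] for i in range(0, len(xs), n)]

def leaf(xs):
    """Base case executed on a single processor."""
    return sum(xs)

def merge(partials):
    """Combine child results one layer up."""
    return sum(partials)

def run(xs, fanout=4, depth=2):
    """Sequentially simulate a fractal machine of the given fanout and depth."""
    if depth == 0 or len(xs) <= 1:
        return leaf(xs)
    return merge(run(part, fanout, depth - 1) for part in split(xs, fanout))
```

Note that `run` produces the same answer for any `fanout` and `depth`, which is the sense in which the program is independent of the machine's scale.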
“We have made it clear that, although originally developed for the field of ML, fractal parallel computing is in fact generally applicable,” the researchers concluded from the preliminary results of the study. They say that FPM, general purpose and cost-optimized, is as powerful as many fundamental parallel computing models such as BSP and alternating Turing machines. They also believe that a full implementation of FPM could be useful in scenarios ranging from the entire world wide web down to in vivo devices at the micrometer scale.
The researchers also made a remarkable observation in this study: FPM limits the entropy of programming by applying constraints to the control pattern of parallel computing systems. “Currently, fractal machines such as the Cambricon-F/FR only take advantage of such entropy reduction to simplify the software development process. Whether energy reduction can also be achieved by introducing fractal control into conventional parallel machines is an interesting open question.”
Yongwei Zhao et al., “Fractal Parallel Computing,” Intelligent Computing (2022). DOI: 10.34133/2022/9797623
Provided by Intelligent Computing
Citation: Fractal Parallel Computing, a geometry-inspired productivity booster (2022, December 5), retrieved December 5, 2022 from https://techxplore.com/news/2022-12-fractal-parallel-geometry-spired-productivity-booster.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.