
Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search

Mind Mappings is a novel framework that enables first-order optimization with gradient descent for mapping space search, a core challenge in deploying efficient programmable accelerators.
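A minimal sketch of the idea, assuming a smooth surrogate stands in for the accelerator's true cost function (the surrogate_cost, numerical_grad, and search names below are illustrative, not the framework's API):

    import numpy as np

    def surrogate_cost(mapping):
        # Hypothetical smooth proxy for the cost of a mapping, e.g. over
        # two continuous-relaxed tile-size parameters.
        target = np.array([5.0, 3.0])   # assumed optimum, for illustration only
        return np.sum((mapping - target) ** 2)

    def numerical_grad(f, x, eps=1e-4):
        # Central-difference gradient of the surrogate cost.
        g = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (f(x + d) - f(x - d)) / (2 * eps)
        return g

    def search(steps=200, lr=0.1):
        mapping = np.array([1.0, 1.0])  # continuous relaxation of the mapping
        for _ in range(steps):
            mapping -= lr * numerical_grad(surrogate_cost, mapping)
        # Project back to a legal (integer) mapping before returning it.
        return np.round(mapping).astype(int)

    print(search())   # converges near the assumed optimum [5, 3]

Because the surrogate is differentiable, gradient steps steer the search directly, instead of sampling the combinatorial mapping space blindly as black-box methods do.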

ExTensor: An Accelerator for Sparse Tensor Algebra

ExTensor is an accelerator for sparse tensor algebra, a key class of workloads that powers crucial areas such as deep learning. The key insight behind the design is to hierarchically eliminate the ineffectual work that arises from sparsity, which yields significant speed-ups.
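A minimal sketch of hierarchical elimination of ineffectual work, using a toy block-compressed dictionary format rather than ExTensor's actual on-chip representation:

    def dot_hierarchical(a_blocks, b_blocks):
        # a_blocks/b_blocks: {block_id: {coord: value}} holding nonzeros only.
        total = 0.0
        # Coarse level: only blocks with nonzeros in BOTH operands do any work.
        for blk in a_blocks.keys() & b_blocks.keys():
            a, b = a_blocks[blk], b_blocks[blk]
            # Fine level: only coordinates nonzero in both operands are multiplied.
            for coord in a.keys() & b.keys():
                total += a[coord] * b[coord]
        return total

    a = {0: {1: 2.0, 7: 3.0}, 4: {33: 1.5}}
    b = {0: {7: 4.0}, 2: {16: 9.0}}
    print(dot_hierarchical(a, b))   # 12.0; blocks 2 and 4 are skipped outright

Intersecting coordinate metadata at the coarse level discards whole regions before any fine-grained work is scheduled, which is where the savings come from.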

Buffets: An Efficient and Composable Storage Idiom for Explicit Decoupled Data Orchestration

A key issue in hardware accelerator design is the reusability of designs across different accelerators. With Buffets, we present a reusable, composable, and efficient storage idiom for programmable hardware accelerators.
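A minimal software model of the idiom, assuming the fill/read/update/shrink interface; a real buffet enforces the fill-before-read ordering in hardware rather than by raising errors:

    from collections import deque

    class Buffet:
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = deque()             # values currently resident

        def fill(self, value):
            # Producer pushes data in order; a full buffet stalls the
            # producer (modeled here as an error).
            if len(self.data) >= self.capacity:
                raise RuntimeError("fill would overflow; producer must wait")
            self.data.append(value)

        def read(self, index):
            # Consumer reads by offset from the head; hardware would stall
            # until the slot has actually been filled.
            return self.data[index]

        def update(self, index, value):
            # In-place update, e.g. for accumulation buffers.
            self.data[index] = value

        def shrink(self, n):
            # Consumer retires the oldest n entries, freeing space for fills.
            for _ in range(n):
                self.data.popleft()

    buf = Buffet(capacity=4)
    for v in [10, 20, 30]:
        buf.fill(v)
    print(buf.read(1))    # 20
    buf.shrink(2)         # retire 10 and 20; two slots free for the next fill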

Morph: Flexible Acceleration for 3D CNN-Based Video Understanding

Morph is a flexible hardware accelerator for 3D CNNs, a key workload in video understanding. The key insight behind the design is to provide enough hardware flexibility to support many different mappings of a 3D CNN, so that the mapping that maximizes performance can be chosen for each layer.
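A minimal sketch of what per-layer mapping flexibility means, with tile sizes as the tunable mapping parameters (the tiling below is illustrative and ignores channels, batching, and Morph's actual datapath):

    import numpy as np

    def conv3d_tiled(inp, wgt, tile_t=4, tile_y=8, tile_x=8):
        # Single-channel 3D convolution with a configurable output tiling.
        Kt, Ky, Kx = wgt.shape
        T, Y, X = inp.shape
        out = np.zeros((T - Kt + 1, Y - Ky + 1, X - Kx + 1))
        Ot, Oy, Ox = out.shape
        # The (tile_t, tile_y, tile_x) choice is the per-layer mapping a
        # flexible accelerator would tune to fit its buffers and PEs.
        for t0 in range(0, Ot, tile_t):
            for y0 in range(0, Oy, tile_y):
                for x0 in range(0, Ox, tile_x):
                    for t in range(t0, min(t0 + tile_t, Ot)):
                        for y in range(y0, min(y0 + tile_y, Oy)):
                            for x in range(x0, min(x0 + tile_x, Ox)):
                                out[t, y, x] = np.sum(
                                    inp[t:t+Kt, y:y+Ky, x:x+Kx] * wgt)
        return out

    inp = np.random.rand(8, 16, 16)
    wgt = np.random.rand(3, 3, 3)
    print(conv3d_tiled(inp, wgt, tile_t=2, tile_y=4, tile_x=4).shape)  # (6, 14, 14)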

UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition

Reducing precision in deep neural networks increases the repetition of parameter values in DNN models. UCNN is a hardware-software co-design approach that uses algebraic reassociation to exploit this weight repetition, significantly improving the efficiency of DNN inference.
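A minimal sketch of the reassociation idea: activations that share a (repeated) weight value are summed first, so each unique weight is multiplied only once (the quantized values below are illustrative, not UCNN's datapath):

    from collections import defaultdict

    def dot_reassociated(weights, acts):
        # Group activations by weight value; this pass uses additions only.
        groups = defaultdict(float)
        for w, x in zip(weights, acts):
            groups[w] += x
        # One multiply per unique weight value instead of one per element.
        return sum(w * s for w, s in groups.items())

    weights = [0.5, -1.0, 0.5, 0.5, -1.0]   # low precision -> few unique values
    acts    = [2.0,  3.0, 1.0, 4.0,  5.0]
    print(dot_reassociated(weights, acts))  # -4.5, same as the naive dot product

In this toy example five multiplies collapse to two, and the savings grow as precision drops and weight values repeat more often.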