
ADA researchers present at Design Automation Conference

The two papers present work on improving developer efficiency when targeting hardware accelerators and on improving the training performance of deep recommendation systems.

Five researchers affiliated with the University of Michigan’s Applications Driving Architectures Lab were selected to present their work at this year’s Design Automation Conference (DAC) taking place July 10-14 in San Francisco. DAC is the oldest and largest conference devoted to the design and automation of electronic systems, embedded systems and software, and intellectual property.

Read more about the projects:

PriMax: Maximizing DSL Application Performance With Selective Primitive Acceleration

Authors: Nicholas Wendt, Ph.D. Student (University of Michigan), Todd Austin, ADA Principal Investigator (University of Michigan), Valeria Bertacco, ADA Center Director (University of Michigan)

Abstract: Domain-specific languages (DSLs) improve developers’ productivity by abstracting away low-level details of an algorithm’s implementation. These languages often provide powerful primitives for describing complex operations, which can give the compiler flexibility in targeting hardware accelerators.

This work proposes PriMax, a general methodology for effectively mapping DSL applications to hardware accelerators. Using benchmark results, it constructs a decision tree that selects between multiple accelerated primitive implementations to maximize a target performance metric. In our graph analytics case study with two accelerators, PriMax produces a geomean speedup of 1.57x over CPU, higher than either target accelerator alone and close to the ideal 1.58x speedup.
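To make the selection idea concrete, here is a minimal sketch of benchmark-driven primitive dispatch, not the authors' implementation: a decision tree trained on benchmark results picks, per call, which backend implementation of a DSL primitive to use. The backend names, input features (vertex count, average degree), and the scikit-learn classifier are all illustrative assumptions.

```python
# Hypothetical sketch of selective primitive acceleration (names and
# features are assumptions, not PriMax's actual design).
from sklearn.tree import DecisionTreeClassifier

# Benchmark results: input features -> fastest backend for that input.
# Assumed features: (num_vertices, avg_degree) of the input graph.
bench_features = [
    [1_000, 4], [1_000, 64],
    [100_000, 4], [100_000, 64],
    [5_000_000, 4], [5_000_000, 64],
]
bench_fastest = ["cpu", "accel_a", "accel_a", "accel_b", "accel_b", "accel_b"]

# Train a decision tree that selects a backend from input characteristics.
selector = DecisionTreeClassifier(max_depth=3).fit(bench_features, bench_fastest)

# Stand-in backend implementations of one DSL primitive (e.g., BFS).
BACKENDS = {
    "cpu":     lambda g: f"bfs on CPU ({g})",
    "accel_a": lambda g: f"bfs on accelerator A ({g})",
    "accel_b": lambda g: f"bfs on accelerator B ({g})",
}

def bfs(graph, num_vertices, avg_degree):
    """Dispatch the primitive to whichever backend the tree predicts is fastest."""
    backend = selector.predict([[num_vertices, avg_degree]])[0]
    return BACKENDS[backend](graph)

print(bfs("toy-graph", num_vertices=2_000_000, avg_degree=32))
```

Because the tree can route different inputs to different accelerators, such a selector can in principle beat any single fixed backend, which is the effect the paper reports in its graph analytics case study.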

A Joint Management Middleware to Improve Training Performance of Deep Recommendation Systems with SSDs

Authors: Chun-Feng Wu, Post-doctoral Fellow (Harvard University), Carole-Jean Wu (Meta AI), Gu-Yeon Wei, ADA Principal Investigator (Harvard University), David Brooks, ADA Principal Investigator (Harvard University)

Abstract: As the size and variety of training data continue to grow, data preprocessing is becoming a performance bottleneck for training deep recommendation systems. This challenge is more serious when training data is stored on Solid-State Drives (SSDs). Because of the gap between the access behavior of recommendation systems and that of SSDs, unused training data may be read from storage only to be filtered out during preprocessing. We advocate a joint management middleware that avoids reading unused data by bridging this access behavior gap. Our evaluation shows that the middleware effectively improves the performance of the data preprocessing phase and thereby boosts overall training performance.
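To illustrate the underlying idea, here is a minimal sketch, not the paper's middleware and with a made-up record format: instead of scanning a whole file and filtering afterwards, a middleware keeps an index from sample ID to byte range and reads only the records the preprocessing step will actually keep, so unused data never has to be read from the SSD.

```python
# Hypothetical sketch of index-guided preprocessing (all names, formats,
# and APIs here are illustrative assumptions, not the paper's design).
import os
import struct

RECORD = struct.Struct("<qd")  # assumed fixed-size record: (sample_id, value)

def write_dataset(path, samples):
    """Write fixed-size records; return an index mapping sample_id -> offset."""
    index = {}
    with open(path, "wb") as f:
        for sid, value in samples:
            index[sid] = f.tell()
            f.write(RECORD.pack(sid, value))
    return index

def naive_preprocess(path, wanted_ids):
    """Baseline: read every record, then filter (wasted SSD reads)."""
    kept = []
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            sid, value = RECORD.unpack(chunk)
            if sid in wanted_ids:
                kept.append((sid, value))
    return kept

def middleware_preprocess(path, index, wanted_ids):
    """Index-guided: seek directly to the records the trainer needs."""
    kept = []
    with open(path, "rb") as f:
        for sid in wanted_ids:
            f.seek(index[sid])
            kept.append(RECORD.unpack(f.read(RECORD.size)))
    return kept

index = write_dataset("train.bin", [(i, i * 0.5) for i in range(10_000)])
wanted = {7, 42, 9_999}  # samples the next training batch will actually use
print(middleware_preprocess("train.bin", index, sorted(wanted)))
os.remove("train.bin")
```

Sorting the requested IDs before seeking keeps the reads in file order, which tends to be friendlier to SSD access patterns; the real system must also manage this coordination jointly with the recommendation pipeline's sampling and filtering logic.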