On-demand webinar

Harvard University: Effective SW/HW Co-Design of Specialized ML Accelerators Using Catapult HLS

Estimated Watching Time: 66 minutes


Harvard sheds light on its agile algorithm-hardware co-design and co-verification methodology powered by HLS, which yielded an order-of-magnitude reduction in design effort across three generations of edge AI accelerator SoCs.

The slowdown of Moore’s law, coupled with the surging democratization of machine learning, has spurred the rise of application-driven architectures, as CMOS scaling alone is no longer sufficient to achieve desired performance and power targets. To keep delivering energy-efficiency gains, specialized SoCs are exhibiting skyrocketing design complexity and growing development effort. In this webinar, we will shed light on our agile algorithm-hardware co-design and co-verification methodology powered by High-Level Synthesis (HLS), which enabled us to reduce front-end VLSI design effort by orders of magnitude during the tapeout of three generations of edge AI many-accelerator SoCs. With a particular focus on accelerator design for Natural Language Processing (NLP), we will share proven practices and overall learnings from a high-productivity digital VLSI flow that leverages Catapult HLS to efficiently close the loop between the application’s software model and the hardware implementation. Finally, we will discuss some of the HLS challenges we encountered, offer recommendations drawn from our learnings, and highlight internal and external efforts to further improve the HLS user experience and ASIC design productivity.
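To give a flavor of the co-design loop described above: in an HLS flow such as Catapult, a single C++ function can act both as the algorithm's software model (compiled and tested natively) and as the input to hardware synthesis. The sketch below is purely illustrative and not from the webinar; the kernel, names, and bit widths are assumptions chosen to resemble a small fixed-point matrix-vector building block one might find in an NLP accelerator.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Illustrative sketch (not the speakers' design): a fixed-point dot product
// written in synthesizable-style C++. The same code can be unit-tested in
// software and handed to an HLS tool for RTL generation, which is what
// "closing the loop" between software model and hardware means in practice.
constexpr int N = 8;

int32_t dot_product(const std::array<int8_t, N>& w,
                    const std::array<int8_t, N>& x) {
    int32_t acc = 0;
    // In an HLS tool, this loop would typically be pipelined or unrolled
    // via tool directives rather than by rewriting the source.
    for (int i = 0; i < N; ++i) {
        acc += static_cast<int32_t>(w[i]) * x[i];
    }
    return acc;
}
```

Because the function is plain C++, the algorithm team and the hardware team can verify against the exact same source, which is one mechanism behind the design-effort reductions the webinar discusses.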

What you will learn: 

  • Proven practices enabling quick and correct-by-construction ML accelerator design via HLS.
  • Approaches for using HLS in closing algorithm-hardware verification loops.
  • HLS challenges we encountered and learnings.
  • Ongoing and future efforts to address current HLS limitations.

Who should attend:

  • Anyone interested in building ASICs using HLS
  • RTL design and verification engineers 


Meet the speaker

Harvard University

Thierry Tambe

Researcher at the John A. Paulson School of Engineering and Applied Sciences

Thierry Tambe is an Electrical Engineering PhD candidate at Harvard University advised by Prof. Gu-Yeon Wei and Prof. David Brooks. His current research focuses on designing energy-efficient and high-performance algorithms, hardware accelerators, and systems-on-chip for machine learning, and natural language processing in particular. He also bears a keen interest in agile SoC design methodologies. Prior to beginning his doctoral studies, Thierry was an engineer at Intel in Hillsboro, Oregon, where he designed various analog/mixed-signal architectures for high-bandwidth memory and peripheral interfaces on Xeon and Xeon Phi HPC SoCs. Thierry is a 2021 NVIDIA PhD Graduate Fellow.
