On-Demand Webinar

Harvard University: Effective SW/HW Co-Design of Specialized ML Accelerators Using Catapult HLS

Estimated duration: 66 minutes

Harvard researchers shed light on their agile algorithm-hardware co-design and co-verification methodology powered by HLS, which delivered an order-of-magnitude reduction in design effort across three generations of edge AI accelerator SoCs.

The slowdown of Moore’s law, coupled with the surging democratization of machine learning, has spurred the rise of application-driven architectures, as CMOS scaling alone is no longer sufficient to achieve desired performance and power targets. To keep delivering energy-efficiency gains, specialized SoCs are exhibiting skyrocketing design complexity and correspondingly growing development effort. In this webinar, we will shed light on our agile algorithm-hardware co-design and co-verification methodology powered by High-Level Synthesis (HLS), which enabled us to reduce front-end VLSI design effort by orders of magnitude during the tapeout of three generations of edge AI many-accelerator SoCs. With a particular focus on accelerator design for Natural Language Processing (NLP), we will share details on proven practices and overall learnings from a high-productivity digital VLSI flow that leverages Catapult HLS to efficiently close the loop between the application’s software modeling and the hardware implementation. Finally, we will mention some of the HLS challenges we encountered, offer recommendations drawn from our experience, and highlight internal and external efforts to further improve the HLS user experience and ASIC design productivity.
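
For readers unfamiliar with HLS-based design, the sketch below gives a minimal flavor of the kind of synthesizable C++ a tool such as Catapult consumes. It is an illustrative example only, not code from the Harvard flow; it assumes the open-source AC fixed-point datatypes (ac_fixed.h) distributed alongside Catapult, and the data_t/acc_t precision choices, array sizes, and kernel name matvec are hypothetical.

```cpp
// Minimal HLS-style kernel sketch (illustrative, not the presenters' design).
// Assumes the AC Datatypes header ac_fixed.h is on the include path.
#include <ac_fixed.h>

// Hypothetical precision choices: 16-bit signed fixed point with 6 integer
// bits for data, plus a wider accumulator to avoid overflow during reduction.
typedef ac_fixed<16, 6, true, AC_RND, AC_SAT> data_t;
typedef ac_fixed<32, 12, true> acc_t;

const int ROWS = 64;
const int COLS = 64;

// y = W * x: the matrix-vector product at the heart of many NLP workloads.
// An HLS tool maps these loops to hardware; pipelining and unrolling are
// controlled by tool directives that are not shown here.
void matvec(const data_t W[ROWS][COLS], const data_t x[COLS], data_t y[ROWS]) {
  ROW: for (int r = 0; r < ROWS; ++r) {
    acc_t acc = 0;
    COL: for (int c = 0; c < COLS; ++c) {
      acc += W[r][c] * x[c];
    }
    y[r] = acc;  // rounds and saturates back to data_t per the type's modes
  }
}
```

The same C++ serves as both the algorithmic model exercised from software and the description handed to synthesis, which is what makes it possible to close the loop between software modeling and hardware implementation from a single source.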

What you will learn: 

  • Proven practices enabling quick and correct-by-construction ML accelerator design via HLS.
  • Approaches for using HLS to close algorithm-hardware verification loops (see the testbench sketch after this list).
  • HLS challenges we encountered and the lessons we learned.
  • Ongoing and future efforts to address current HLS limitations.
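
The sketch below illustrates one common way such a verification loop is closed at the C++ level: the same stimulus drives a double-precision software reference and the fixed-point HLS kernel, and the outputs are compared within a quantization tolerance. It is a hypothetical example that reuses the matvec kernel and data_t type from the earlier sketch (here assumed to live in a header named matvec.h); it is not the presenters' testbench, and in a Catapult-based flow a testbench of this kind is typically reused to co-simulate the generated RTL as well.

```cpp
// Illustrative algorithm-vs-hardware checking testbench (not from the webinar).
#include <cmath>
#include <cstdio>
#include <cstdlib>

#include "matvec.h"  // hypothetical header providing data_t, ROWS, COLS, matvec()

int main() {
  static data_t W[ROWS][COLS], x[COLS], y_hw[ROWS];
  static double Wd[ROWS][COLS], xd[COLS];

  // Random stimulus in [-1, 1), kept in both double and fixed-point form.
  for (int r = 0; r < ROWS; ++r)
    for (int c = 0; c < COLS; ++c) {
      Wd[r][c] = 2.0 * std::rand() / RAND_MAX - 1.0;
      W[r][c] = Wd[r][c];
    }
  for (int c = 0; c < COLS; ++c) {
    xd[c] = 2.0 * std::rand() / RAND_MAX - 1.0;
    x[c] = xd[c];
  }

  matvec(W, x, y_hw);  // design under test: the synthesizable C++ model

  // Software golden model, then comparison within a fixed-point tolerance.
  int errors = 0;
  for (int r = 0; r < ROWS; ++r) {
    double y_ref = 0.0;
    for (int c = 0; c < COLS; ++c) y_ref += Wd[r][c] * xd[c];
    if (std::fabs(y_ref - y_hw[r].to_double()) > 0.05) ++errors;
  }
  std::printf(errors ? "FAIL: %d mismatches\n" : "PASS\n", errors);
  return errors ? 1 : 0;
}
```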

Who should attend:

  • Anyone interested in building ASICs using HLS
  • RTL design and verification engineers 

About the Presenter

Harvard University

Thierry Tambe

Researcher at the John A. Paulson School of Engineering and Applied Sciences

Thierry Tambe is an Electrical Engineering PhD candidate at Harvard University advised by Prof. Gu-Yeon Wei and Prof. David Brooks. His current research focuses on designing energy-efficient, high-performance algorithms, hardware accelerators, and systems-on-chip for machine learning, with a particular emphasis on natural language processing. He also has a keen interest in agile SoC design methodologies. Before beginning his doctoral studies, Thierry was an engineer at Intel in Hillsboro, Oregon, where he designed various analog/mixed-signal architectures for high-bandwidth memory and peripheral interfaces on Xeon and Xeon Phi HPC SoCs. Thierry is a 2021 NVIDIA PhD Graduate Fellow.
