On-demand webinar

Part 2: Adapting software algorithms to hardware architectures for high performance and low-power

Estimated viewing time: 50 minutes


GPUs and DSPs offer very high parallelism and impressive memory bandwidth within the scope of a fully programmable platform. However, they need to fetch and decode every instruction and must keep a relatively fixed architecture, both of which waste energy. The Single-Instruction-Multiple-Data (SIMD) architecture of most high-performance GPUs also reduces performance and energy efficiency when threads can take different execution paths (the so-called "divergence" problem).
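
To make the divergence problem concrete, here is a minimal C++ sketch (illustrative only, not taken from the webinar material); the parameter tid stands in for a GPU thread index, and the function and variable names are assumptions. When neighboring threads in the same SIMD warp take different sides of the branch, the GPU executes both paths back to back, whereas an FPGA can instantiate both paths in hardware and select the result with a multiplexer.

    // Minimal sketch of a data-dependent branch that diverges on SIMD GPUs.
    // "tid" stands in for a GPU thread index; all names are illustrative.
    #include <cmath>

    void kernel_body(const float* in, float* out, int tid) {
        if (in[tid] < 0.0f) {
            // Expensive path: on a GPU, every warp containing at least one
            // negative input pays for this branch in addition to the other.
            out[tid] = std::sqrt(-in[tid]);
        } else {
            // Cheap path.
            out[tid] = in[tid] * 0.5f;
        }
    }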

FPGAs, on the other hand, provide a fully customizable architecture. For example, the precision of each computation can be tailored specifically to the application at hand. Moreover, control is fully application-specific and hardwired. Finally, their memory architecture can be specialized as much as needed, well beyond the DRAM/SRAM/register hierarchy that DSPs and GPUs provide.
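
As a hedged example of precision tailoring, the C++ sketch below uses the open-source Algorithmic C (AC) fixed-point datatypes distributed with Catapult; the 12-bit and 16-bit widths, the dot4 function, and the rounding/saturation modes are purely illustrative assumptions, not material from the webinar.

    // Sketch: custom-precision multiply-accumulate written for HLS,
    // assuming the AC datatype header (ac_fixed.h) is on the include path.
    #include <ac_fixed.h>

    // 12 bits total, 2 integer bits, signed, with rounding and saturation.
    // On an FPGA these become exactly-sized datapaths rather than fixed
    // 32- or 64-bit ALUs, saving both area and energy.
    typedef ac_fixed<12, 2, true, AC_RND, AC_SAT> coeff_t;
    typedef ac_fixed<16, 4, true, AC_RND, AC_SAT> acc_t;

    acc_t dot4(const coeff_t a[4], const coeff_t b[4]) {
        acc_t acc = 0;
        for (int i = 0; i < 4; ++i) {
            acc += a[i] * b[i];  // multiplier and adder widths follow the types
        }
        return acc;
    }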

Design costs stemming from low-level RTL design, and the difficulty of reusing a significant portion of past designs, have long been an adoption hurdle for FPGAs in rapidly evolving application domains. This has recently changed thanks to the advent of high-level synthesis, which allows a design team to quickly explore one or several highly optimized architectures from essentially the same software model, written e.g. in CUDA or OpenCL, that has been used to implement the same algorithm on a CPU or GPU. A very broad set of highly optimized low-level libraries written in these languages (e.g. cuBLAS, cuDNN, ...) is available to ease the task of accelerating machine learning, computer vision, image recognition, database search, and other applications on FPGAs.
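
The snippet below is a small, hypothetical example of what such a shared software model can look like: an ordinary C++ loop that compiles and runs on a CPU as is, and that an HLS tool can unroll or pipeline into an FPGA datapath. The function name saxpy and the comment about directives are assumptions for illustration; exact pragma spellings are tool-specific.

    // One source, two targets: this loop runs unchanged on a CPU and can be
    // synthesized to hardware by an HLS tool. Loop unrolling and pipelining
    // are typically requested through tool-specific directives or pragmas
    // (spelling varies by tool), trading FPGA area for throughput.
    void saxpy(const float* x, const float* y, float* out, float a, int n) {
        for (int i = 0; i < n; ++i) {
            out[i] = a * x[i] + y[i];
        }
    }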

Who should attend:

  • Programmers interested in learning how to efficiently implement highly
    parallel applications on FPGAs

What you will learn:

  • Code optimization strategies to efficiently map OpenCL and C++ code
    on FPGAs
  • Difference between memory architectures on GPUs and FPGAs
  • How to select the best platform for the application at hand
  • Migration path to ASICs when the algorithms have stabilized and
    manufacturing costs must be reduced

About the presenter

Politecnico di Torino

Luciano Lavagno

Professor

Luciano Lavagno received his Ph.D. in EECS from U.C. Berkeley in 1992. He has co-authored four books and over 200 scientific papers. He was the architect of the POLIS HW/SW co-design tool and one of the architects of the Cadence CtoSilicon High-Level Synthesis tool. He is a professor at Politecnico di Torino, Italy, and a consultant for the Catapult High-Level Synthesis group of Siemens EDA. His research interests include high-level synthesis, HW/SW co-design, and design tools for wireless sensor networks.

Related resources

Catapult LP Provides a Power-Optimized ESL Hardware Implementation Flow
White Paper

This paper gives an overview of the Catapult flow for exploring low-power architectures and discusses in detail the low-power optimization results achieved with the Catapult LP design flow.

STMicroelectronics Uses High-Level Synthesis to Quickly Bring an Automotive Image Signal Processing Solution to Market
White Paper

For the automotive market, the flow complies with the ISO 26262 standard, which ensures reliability. This paper describes how the team used an HLS flow to design and verify an image signal processing (ISP) device and bring it to market as quickly as possible.