On-demand webinar

Part 2: Adapting software algorithms to hardware architectures for high performance and low-power

Estimated viewing time: 50 minutes

GPUs and DSPs offer very high parallelism and impressive memory bandwidth within the scope of a fully programmable platform. However, they must fetch and decode every instruction and are built around a relatively fixed architecture, both of which waste energy. The Single-Instruction-Multiple-Data (SIMD) architecture of most high-performance GPUs also loses performance and energy efficiency when threads take different execution paths (the so-called "divergence" problem).
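The divergence penalty can be pictured with a toy lock-step model (an illustrative sketch, not real GPU code or material from the webinar; all names are invented):

```cpp
#include <array>
#include <cassert>

// Toy lock-step model of a 4-lane SIMD group hitting a divergent
// branch: the hardware issues BOTH sides of the branch for every
// lane and masks out the lanes whose predicate does not match, so
// half the issued work is discarded.
constexpr int LANES = 4;

int divergent_issue_slots(const std::array<int, LANES>& data,
                          std::array<int, LANES>& out) {
    int issued = 0;
    // "then" side: issued for all lanes, masked where data[i] <= 0
    for (int lane = 0; lane < LANES; ++lane) {
        ++issued;
        if (data[lane] > 0) out[lane] = data[lane] * 2;
    }
    // "else" side: issued again for all lanes, masked where data[i] > 0
    for (int lane = 0; lane < LANES; ++lane) {
        ++issued;
        if (data[lane] <= 0) out[lane] = -data[lane];
    }
    return issued;  // 2 * LANES slots for LANES useful results: 50% utilization
}
```

A hardwired FPGA datapath has no such lock-step constraint: each branch can become its own dedicated logic.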

FPGAs, on the other hand, provide a fully customizable architecture. For example, the precision of each computation can be tailored specifically for the application at hand. Moreover, control is fully application-specific and hardwired. Finally, their memory architecture can be specialized as much as needed, well beyond the DRAM/SRAM/register hierarchy that DSPs and GPUs provide.
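HLS tools expose this precision tailoring through arbitrary-precision types (e.g. `ap_fixed` in AMD/Xilinx Vitis HLS or `ac_fixed` in Catapult). As a rough stand-in, here is a minimal plain-C++ sketch of a W-bit fixed-point type with F fractional bits, assuming simple wrap-around overflow semantics:

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-in for an HLS arbitrary-precision fixed-point type:
// a value held in W bits, F of them fractional. On an FPGA, each such
// operand costs exactly W bits of datapath, sized at synthesis time,
// instead of a full 32-bit ISA-mandated unit.
template <int W, int F>
struct Fixed {
    int32_t raw;  // sign-extended value that fits in W bits

    // Keep only the low W bits, sign-extended (wrap-around on overflow).
    static int32_t wrap(int64_t v) {
        int64_t m = int64_t(1) << (W - 1);
        return static_cast<int32_t>(((v + m) & ((int64_t(1) << W) - 1)) - m);
    }
    static Fixed from_double(double v) {
        return Fixed{wrap(static_cast<int64_t>(v * (1 << F)))};
    }
    double to_double() const {
        return static_cast<double>(raw) / (1 << F);
    }
};

// Multiply in custom precision: the hardware multiplier needs only
// W x W bits, not whatever width the processor's ALU happens to have.
template <int W, int F>
Fixed<W, F> mul(Fixed<W, F> a, Fixed<W, F> b) {
    return Fixed<W, F>{Fixed<W, F>::wrap((static_cast<int64_t>(a.raw) * b.raw) >> F)};
}
```

For example, `Fixed<8, 4>` models an 8-bit datapath with 4 fractional bits; narrowing W directly shrinks the synthesized logic.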

Design costs stemming from low-level RTL design, together with the difficulty of reusing a significant portion of past designs, have long been an adoption hurdle for FPGAs in rapidly evolving application domains. This has recently changed thanks to the advent of high-level synthesis, which allows a design team to quickly explore one or several highly optimized architectures from essentially the same software model, written e.g. in CUDA or OpenCL, that was used to implement the same algorithm on a CPU or GPU. A very broad set of highly optimized low-level libraries written in these languages (e.g. cuBLAS, cuDNN, ...) is available to ease the task of accelerating machine learning, computer vision, image recognition, database search and other applications on FPGAs.
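To give a flavor of what such a software model looks like, here is a minimal HLS-style C++ sketch (not taken from the webinar; the pragma names follow the AMD/Xilinx Vitis HLS convention and are ignored, beyond a warning, by an ordinary host compiler, so the same source also runs on a CPU):

```cpp
#include <cassert>

// The pragmas ask the HLS tool for a pipelined datapath that accepts
// one new element per clock cycle, with the loop body replicated
// four times; a host compiler simply runs the plain C++ loop.
constexpr int N = 16;

void saxpy(float a, const float x[N], const float y[N], float out[N]) {
    for (int i = 0; i < N; ++i) {
#pragma HLS PIPELINE II=1
#pragma HLS UNROLL factor=4
        out[i] = a * x[i] + y[i];
    }
}
```

The same loop nest is thus both the CPU reference implementation and the input from which the FPGA architecture is explored.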

Who should attend:

  • Programmers interested in learning how to efficiently implement highly
    parallel applications on FPGAs

What you will learn:

  • Code optimization strategies to efficiently map OpenCL and C++ code
    on FPGAs
  • Differences between memory architectures on GPUs and FPGAs
  • How to select the best platform for the application at hand
  • Migration path to ASICs when the algorithms are consolidated and
    manufacturing costs must be reduced
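One concrete instance of the first two bullets, sketched here as an illustrative example rather than material from the webinar, is explicit on-chip buffering, which has no direct analogue in a GPU's fixed cache hierarchy:

```cpp
#include <cassert>

constexpr int TILE = 8;

// Sum `n` values (n a multiple of TILE) by staging each tile into a
// local buffer first: after synthesis, the copy loop becomes a burst
// read that amortizes DRAM latency, and `buf` maps to on-chip BRAM
// with single-cycle access for the compute loop.
float tiled_sum(const float* in, int n) {
    float acc = 0.0f;
    for (int t = 0; t < n; t += TILE) {
        float buf[TILE];                     // on-chip buffer after synthesis
        for (int i = 0; i < TILE; ++i)       // burst copy from external DRAM
            buf[i] = in[t + i];
        for (int i = 0; i < TILE; ++i)       // compute from local memory
            acc += buf[i];
    }
    return acc;
}
```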

About the speaker

Politecnico di Torino

Luciano Lavagno

Professor

Luciano Lavagno received his Ph.D. in EECS from UC Berkeley in 1992. He has co-authored four books and over 200 scientific papers. He was the architect of the POLIS HW/SW co-design tool and one of the architects of the Cadence CtoSilicon High-Level Synthesis tool. He is a professor at Politecnico di Torino, Italy, and a consultant for the Catapult High-Level Synthesis group of Siemens EDA. His research interests include High-Level Synthesis, HW/SW co-design, and design tools for wireless sensor networks.

Related content

Simulation process and data management for ship design
Webinar

Learn how to create a seamless ship design workflow with simulation and data sharing. Ensure effective collaboration and provide the right information.

Harness the power of an integrated CAE workflow to design high-speed vessels efficiently
Webinar

Learn how to build a propulsion system through system simulation and use it in computational fluid dynamics (CFD) self-propulsion simulations to evaluate top speed.

Accelerate the ship design process with simulation tools
White Paper

Break free from the inefficient design spiral in ship design. This white paper introduces a simulation-driven ship design process using Siemens solutions and shows how to take full advantage of the latest digital technologies.