On-demand webinar

Part 1: Why Are High-Performance Low-Energy Applications Moving from GPUs and DSPs to FPGAs and ASICs?

Estimated duration: 55 minutes

Transistor counts and performance of integrated circuits are approaching their peak. Artificial intelligence is emerging as the next "big thing" in areas such as automated driving, security, and language recognition and translation. Most of its algorithms are embarrassingly parallel, which eases the creation of new services and the growth of existing ones without requiring faster clock speeds.
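
To make "embarrassingly parallel" concrete, the minimal OpenCL C kernel below converts one pixel per work-item; the kernel and argument names are illustrative, not taken from the webinar. Because every output element depends only on its own input, millions of work-items can run in any order and in parallel, with no synchronization.

    // Hypothetical kernel: convert one RGB pixel to grayscale per work-item.
    __kernel void rgb_to_gray(__global const uchar4 *rgb,
                              __global uchar *gray,
                              const int num_pixels)
    {
        int i = get_global_id(0);          // unique index of this work-item
        if (i < num_pixels) {
            uchar4 p = rgb[i];
            // Integer approximation of the ITU-R BT.601 luma weights.
            gray[i] = (uchar)((77 * p.x + 150 * p.y + 29 * p.z) >> 8);
        }
    }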

Memory access bandwidth and energy per computation are becoming the new performance indices. GPUs (e.g. the Nvidia Drive PX 2) and DSPs (e.g. Mobileye's Vision Computing Engines and Vector Microcode Processors) offer very high parallelism within a fully programmable platform. However, they must fetch and decode every instruction and are built around a relatively fixed architecture, both of which waste energy.

FPGAs, on the other hand, also exploit the latest technology generations, but provide a fully customizable architecture, in particular with respect to the memory hierarchy. At the same time, they remain fully programmable and can thus be quickly adapted to new algorithms and emerging applications.
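
As a hedged illustration of what customizing the memory hierarchy can mean, the single-work-item OpenCL C kernel below (a coding style commonly used with FPGA OpenCL compilers) keeps a tiny application-specific window buffer. The assumption here, not a claim from the webinar, is that the tool maps this small array to on-chip registers, giving storage tailored to this one algorithm instead of the fixed cache hierarchy a GPU would use.

    // Sketch of a single-work-item kernel for an FPGA OpenCL flow.
    __kernel void moving_average(__global const float *in,
                                 __global float *out,
                                 const int n)
    {
        // Small window that an FPGA compiler is expected (assumption) to map
        // to on-chip registers: a custom, algorithm-specific memory hierarchy.
        float window[4] = {0.0f, 0.0f, 0.0f, 0.0f};
        for (int i = 0; i < n; ++i) {
            // Shift in the new sample; the oldest value falls out.
            window[3] = window[2];
            window[2] = window[1];
            window[1] = window[0];
            window[0] = in[i];
            out[i] = (window[0] + window[1] + window[2] + window[3]) * 0.25f;
        }
    }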

This webinar will cover high-parallelism applications from the domains listed above, and will discuss why the quest for the lowest energy consumption, in order to reduce packaging and operational costs, is driving implementation platforms to include FPGAs for tasks that were traditionally the domain of GPUs and DSPs. This webinar is the first in a two-part series. The second webinar in the series is Part 2: Adapting Software Algorithms to Hardware Architectures for High Performance and Low-Power.

Who should attend:

Programmers and managers interested in learning the trade-offs between CPU/GPU/DSP-based platforms and those that combine CPUs with FPGAs

What you will learn:

  • Key architectural characteristics and differences between GPUs and FPGAs
  • Memory architectures and access mechanisms on GPUs and FPGAs
  • Programming models based on high-level languages such as OpenCL (a host-code sketch follows this list)
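
As a rough sketch of the programming model behind the last bullet, the host-side C fragment below uses standard OpenCL API calls to pick a device and build a kernel. The assumptions are that the FPGA board enumerates as CL_DEVICE_TYPE_ACCELERATOR and that a real FPGA flow would load a precompiled binary (clCreateProgramWithBinary) rather than build from source; error handling and buffer setup are omitted for brevity.

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);

        /* FPGA boards usually enumerate as accelerators; switching this to
         * CL_DEVICE_TYPE_GPU targets a GPU with the same host code. */
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

        /* GPUs typically compile kernel source at run time; FPGA flows load a
         * binary built offline. Source is used here only to keep this short. */
        const char *src =
            "__kernel void copy(__global const float *in, __global float *out)"
            "{ int i = get_global_id(0); out[i] = in[i]; }";
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel copy_kernel = clCreateKernel(prog, "copy", NULL);

        printf("device selected and kernel built: %p\n", (void *)copy_kernel);
        return 0;
    }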

About the presenter

Politecnico di Torino

Luciano Lavagno

Professor

Luciano Lavagno received his Ph.D. in EECS from U.C. Berkeley in 1992. He has co-authored four books and over 200 scientific papers. He was the architect of the POLIS HW/SW co-design tool and one of the architects of the Cadence CtoSilicon high-level synthesis tool. He is a professor at Politecnico di Torino, Italy, and a consultant for the Catapult High-Level Synthesis group of Siemens EDA. His research interests include high-level synthesis, HW/SW co-design, and design tools for wireless sensor networks.

Related resources

Balancing cost, sustainability, quality, and speed in new product introduction
E-book

The NPI process can be time-consuming and complex. Learn in this e-book how companies can design and manufacture new products faster and more efficiently.

Accelerating new product introduction with the right digital tools
E-book

Learn more about the trends and challenges that machine component and equipment manufacturers face today, and see how to accelerate the introduction of new, high-quality products more cost-effectively. Download the e-book.

Launch new products by achieving quality excellence
E-book

Integrate quality management processes from design through manufacturing. Learn in this e-book how implementing quality improvement processes can get new product launches underway.