On-Demand Webinar

Neural Network Quantization for Low-Power

Estimated duration: 62 minutes


This webinar describes how to use QKeras and High-Level Synthesis to produce a bespoke quantized CNN accelerator, and compares the accuracy, power, performance, and area of different quantizations.

Inferencing for Convolutional Neural Networks (CNNs) is notoriously compute intensive. This makes CNNs ideal candidates for hardware acceleration, which is faster and more power efficient than running software on general-purpose CPUs. Training and inferencing are typically done using floating-point representations of the features, weights, and biases. Using a fixed-point representation reduces the size and power of the operators in the accelerator, and with a purpose-built accelerator the fixed-point operators can be any width; they are not limited to 8 or 16 bits. QKeras, or quantized Keras, is a library built on TensorFlow that allows developers to specify quantized fixed-point operations for each layer, enabling training and inferencing with reduced-precision representations.
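
To show what per-layer quantizers look like in practice, here is a minimal QKeras sketch. The layer sizes, the 28x28x1 input, and the particular bit widths (6-bit weights and biases, 4-bit activations) are illustrative assumptions, not values taken from the webinar.

```python
# A minimal QKeras sketch: each layer gets its own fixed-point quantizer.
# The bit widths (6-bit weights/biases, 4-bit activations) and the 28x28x1
# input shape are illustrative assumptions, not webinar values.
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.models import Model
from qkeras import QConv2D, QDense, QActivation, quantized_bits, quantized_relu

inputs = Input(shape=(28, 28, 1))
x = QConv2D(16, (3, 3),
            kernel_quantizer=quantized_bits(6, 0, alpha=1),
            bias_quantizer=quantized_bits(6, 0, alpha=1))(inputs)
x = QActivation(quantized_relu(4, 0))(x)   # 4-bit activations
x = Flatten()(x)
outputs = QDense(10,
                 kernel_quantizer=quantized_bits(6, 0, alpha=1),
                 bias_quantizer=quantized_bits(6, 0, alpha=1),
                 activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Because the quantizers are ordinary layer arguments, the same model definition can be retrained at different widths to trade accuracy against hardware cost.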

What You Will Learn

  • How to determine the optimal operand sizing for a hardware accelerator deploying a neural network using QKeras (see the sketch after this list)
  • How to determine the area, performance, and energy of a neural network accelerator
  • How to compare software performance against hardware-accelerated performance, and make informed trade-off decisions
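
As a rough illustration of the first bullet, the sketch below sweeps a few candidate weight bit widths and records test accuracy at each setting. The tiny single-layer model, the MNIST stand-in dataset, and the bit-width list are hypothetical choices, not material from the webinar.

```python
# Hypothetical bit-width sweep to explore operand sizing versus accuracy.
# build_model() is a placeholder modeled on the sketch above; MNIST is used
# purely as a stand-in dataset.
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.models import Model
from qkeras import QDense, quantized_bits

def build_model(bits):
    inputs = Input(shape=(28, 28))
    x = Flatten()(inputs)
    outputs = QDense(10,
                     kernel_quantizer=quantized_bits(bits, 0, alpha=1),
                     bias_quantizer=quantized_bits(bits, 0, alpha=1),
                     activation="softmax")(x)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

for bits in (2, 4, 6, 8):
    model = build_model(bits)
    model.fit(x_train, y_train, epochs=1, verbose=0)
    _, accuracy = model.evaluate(x_test, y_test, verbose=0)
    print(f"{bits}-bit weights: test accuracy = {accuracy:.4f}")
```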

Who Should Attend

  • Developers of neural networks that will be deployed on the edge or in other contexts where low power and efficiency are required in addition to high performance.

Meet the Presenters

Siemens EDA

Russell Klein

HLS Program Director

Russell Klein is a Program Director in Siemens EDA's (formerly Mentor Graphics) High-Level Synthesis Division, focused on processor platforms. He is currently working on algorithm acceleration by offloading complex algorithms running as software on embedded CPUs into hardware accelerators using High-Level Synthesis. He has been with Mentor for over 25 years, holding a variety of engineering, marketing, and management positions, primarily focused on the boundary between hardware and software. He holds six patents in the area of hardware/software verification and optimization. Prior to joining Mentor, he worked for Synopsys, Logic Modeling, and Fairchild Semiconductor.

Siemens EDA

Ajay Mishra

Senior Product Deployment Manager

Ajay Mishra is a Senior Product Deployment Manager in Siemens EDA's High-Level Synthesis Division, currently working on the physical realization of algorithm acceleration using High-Level Synthesis. He developed a methodology for determining power estimates based on physical realization. He has 22 years of experience, including semiconductor design work at STMicroelectronics, Intel, and Philips Semiconductors, and 14 years of EDA experience at Mentor Graphics (now Siemens EDA), where he has served as a Product Deployment Engineer facilitating the proliferation and deployment of C-to-GDSII products, flows, and methodologies. Growing IC design complexity is driving IC architects to look for smart, efficient electronic design automation solutions that improve IC performance and time to market while avoiding technological barriers between engineering domains.

Related Resources

Video: Simplify the recipe development process in the consumer products industry with an enterprise recipe management solution

Convert and scale up recipes in minutes with Siemens Enterprise Recipe Management. Streamline production and shorten trial times. Watch the Siemens webinar video and transform your recipe development process.

Drive innovation in the consumer products industry with recipe management software and digital transformation
White Paper

Recipe management software can raise your R&D potential to create complex new products. Learn more.

Optimizing formulation development in CPG manufacturing
Webinar

Learn how to improve innovation efficiency in CPG manufacturing in this webinar introducing formulation development optimization.