on-demand webinar

Rapid Algorithm to HW: Using HLS for Computer Vision and Deep Learning Seminar

Complete Seminar Recording, Slides, and Q&A

Estimated viewing time: 200 minutes


Rapid Algorithm to HW: Using HLS for Computer Vision and Deep Learning

Recently there has been an explosion of advances in computer vision and deep learning for image processing and recognition. These technologies have applications across multiple industries such as medical, industrial, energy, defense and, of course, automotive with the drive toward autonomous cars. In many cases, these algorithms require a huge amount of parallel compute performance at low power, such that an FPGA or ASIC is the only practical solution. However, the development cycle of RTL is often prohibitive in this market, and it does not adapt to rapid change.

In this five-part recorded seminar, you will get an introduction and the technical details to understand how HLS can help project teams rapidly and accurately explore the power and performance of their algorithms, quickly get to FPGA implementations to create demonstrators and prototypes, and then use the same source to deliver high-performance RTL IP for ASIC implementation. We recommend watching the videos in order.

Keynote: Computer Vision, Machine Learning and HLS – Some History, Trends and Highlights

Image processing for computers has been around for a long time, as have AI (Artificial Intelligence), learning machines and High-Level Synthesis (HLS); decades, in fact. So why all of the buzz now? Why is it so much more applicable now? This keynote presents many of the market trends and future growth areas around computer vision and machine learning, along with the current capabilities of HLS, that are naturally bringing these technologies together to rapidly accelerate the delivery of high-performance, low-power systems from rapidly changing algorithms.

Computer Vision: The Basics of How HLS Can Be Used to Accelerate from Algorithm to FPGA/ASIC for High-Performance Vision Processing

High-Level Synthesis (HLS) has been used by multiple companies, projects and design teams targeting vision processing for autonomous cars. HLS is the fastest way to turn complex algorithms into efficient HW implementations, and it creates a methodology that enables design teams to react rapidly to changes in the algorithm or functional specification while still meeting demanding schedules. This session steps through the basics of how HLS works and why HLS is such a good fit for image processing and vision applications, using a practical example vision algorithm (HOG: Histogram of Oriented Gradients).
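
To make the HOG example concrete, here is a minimal sketch of one HOG building block: binning per-pixel gradient orientations over a small image tile. This is plain illustrative C++, not code from the seminar; an HLS flow would typically replace the floating-point math with fixed-point types and pipeline or unroll the loops. The image size, bin count and function name are assumptions for illustration.

```cpp
#include <array>
#include <cmath>

// Minimal HOG cell sketch: central-difference gradients, unsigned
// orientation in [0, 180) split into BINS histogram bins.
// Sizes and names are illustrative, not from the seminar material.
constexpr int W = 8, H = 8, BINS = 9;
constexpr double PI = 3.14159265358979323846;

std::array<int, BINS> hogCell(const unsigned char img[H][W]) {
    std::array<int, BINS> hist{};
    for (int y = 1; y < H - 1; ++y) {
        for (int x = 1; x < W - 1; ++x) {
            // Central-difference gradients in x and y.
            int gx = img[y][x + 1] - img[y][x - 1];
            int gy = img[y + 1][x] - img[y - 1][x];
            // Map the orientation into [0, 180) and pick a bin.
            double ang = std::atan2((double)gy, (double)gx) * 180.0 / PI;
            if (ang < 0) ang += 180.0;
            if (ang >= 180.0) ang -= 180.0;
            int bin = (int)(ang / (180.0 / BINS));
            if (bin >= BINS) bin = BINS - 1;
            hist[bin] += 1;  // unweighted vote; real HOG weights by magnitude
        }
    }
    return hist;
}
```

In an HLS methodology this same C++ is both the algorithmic model and the synthesis source, which is what allows the algorithm and hardware teams to share one description.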

High-Level Verification: Verification Methodology and Flows When Using C++ and HLS

When designers look to move up in abstraction from RTL to C/SystemC, one of the first questions they ask is: “what does my verification methodology look like?” In RTL, methodologies for verification are known and proven, but when using High-Level Synthesis (HLS), the same ecosystem is not established across the industry. This session highlights proven tools and methodology that help HLS designers check and verify their designs, measure and close coverage, and compare the C to the RTL implementation. The result is verification at the C++ level with all of the same methodologies deployed at RTL.
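
The core pattern behind C-level verification can be sketched as a self-checking testbench: one untimed golden reference model, one synthesizable-style DUT model, identical stimulus, and an automatic compare. The functions below are illustrative stand-ins, not code from the seminar; real flows add coverage measurement and C-vs-RTL comparison on top of this same structure.

```cpp
// Minimal self-checking C++ testbench sketch (illustrative example).
// Synthesizable-style DUT: absolute difference of two pixel values.
int dut_abs_diff(int a, int b) {
    int d = a - b;
    return d < 0 ? -d : d;
}

// Untimed golden reference model for the same operation.
int ref_abs_diff(int a, int b) { return (a > b) ? a - b : b - a; }

// Drive both models with the same stimulus and count mismatches.
int run_testbench() {
    int mismatches = 0;
    for (int a = 0; a < 256; a += 17)
        for (int b = 0; b < 256; b += 13)
            if (dut_abs_diff(a, b) != ref_abs_diff(a, b))
                ++mismatches;      // any difference is a DUT bug
    return mismatches;             // 0 means DUT matches the reference
}
```

Because this testbench runs at C++ speed rather than RTL-simulation speed, the same checks can be run orders of magnitude more often before synthesis.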

Machine Learning: How HLS Can Be Used to Quickly Create FPGA/ASIC HW for a Neural Network Inference Solution

HLS (High-Level Synthesis) has the unique ability to go from complex algorithms written in C to RTL, enabling accurate power and performance profiles for an algorithm's implementation without having to write the RTL by hand. Neural networks are typically developed and trained in a high-performance compute environment, but in many cases the inference solution can be reduced, and HW accelerators are then the only way to meet power and real-time requirements. This session reviews the considerations around fast HW prototyping for validating acceleration of neural network inferencing versus the highest-performance implementation, and the tradeoffs between them.
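
The kind of kernel such an accelerator implements can be sketched in a few lines: a 3x3 convolution with a ReLU activation, the inner compute of a convolutional layer. This is an assumed illustrative example, not seminar code; in an HLS flow the two innermost loops are the natural unroll/pipeline target, and the int arithmetic would typically become quantized fixed-point.

```cpp
#include <array>

// Illustrative CNN inference kernel: single-channel 3x3 convolution
// plus ReLU. Sizes and names are assumptions for this sketch.
constexpr int N = 6, K = 3, OUT = N - K + 1;

std::array<std::array<int, OUT>, OUT>
conv3x3_relu(const int in[N][N], const int w[K][K]) {
    std::array<std::array<int, OUT>, OUT> out{};
    for (int r = 0; r < OUT; ++r) {
        for (int c = 0; c < OUT; ++c) {
            int acc = 0;
            for (int i = 0; i < K; ++i)       // these two loops are the
                for (int j = 0; j < K; ++j)   // natural unroll target in HLS
                    acc += in[r + i][c + j] * w[i][j];
            out[r][c] = acc > 0 ? acc : 0;    // ReLU activation
        }
    }
    return out;
}
```

Exploring how far to unroll these loops, and at what word width, is exactly the power/performance tradeoff space the session discusses.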

DRS360 Autonomous Driving Platform: Why HLS Just Makes so Much Sense

The DRS360 Autonomous Driving Platform uniquely fuses raw data from radar, LIDAR, vision and other sources in real time; it is designed to deliver high-resolution data for central decision making targeted at Level 5 autonomous driving. It incorporates multiple computer vision and neural network algorithms that must deliver real-time performance on large data sets at the lowest possible power. This session reviews why HLS is a good fit for the requirements of bringing these complex algorithms to realization: platform independence, rapid FPGA prototyping, high performance with low power, and the design demands of neural network inferencing.

Who Should View

  • RTL designers or project managers interested in moving up to HLS
  • Architects or algorithm developers in the fields of image processing, computer vision, and machine and deep learning who are interested in rapid and accurate exploration of power/performance metrics
  • New project teams with only a few designers and multiple SW experts wanting to rapidly create high-performance FPGA or ASIC IP for the Computer Vision or Deep Learning markets

What You Will Learn

  • How HLS can be used to implement an example computer vision algorithm in either an FPGA or ASIC technology, and the trade-offs for power and performance. You will walk away with completed examples, building blocks and other material that can be referenced.
  • How to achieve faster but complete verification signoff in an HLS flow, measuring quality and coverage and saving days to weeks in verification costs
  • How HLS can be applied in multiple ways to implement acceleration for deep learning, and in particular convolutional neural networks. You will walk away with completed examples, building blocks and other material that can be referenced.

Meet the speakers

Siemens EDA

Ellie Burns

Former Director of Marketing

Ms. Burns has over 30 years of experience in the chip design and EDA industries in various engineering, applications engineering, technical marketing and product management roles. She was formerly the Director of Marketing for the Calypto Systems Division at Siemens EDA, responsible for low-power RTL solutions with PowerPro and HLS solutions with Catapult. Prior to Siemens and Mentor, Ms. Burns held engineering and marketing positions at CoWare, Cadence, Synopsys, Viewlogic, Computervision and Intel. She holds a BSCpE from Oregon State University.

Siemens EDA

Michael Fingeroff

HLS Technologist

Michael Fingeroff has worked as an HLS Technologist for the Catapult High-Level Synthesis Platform at Siemens Digital Industries Software since 2002. His areas of interest include Machine Learning, DSP, and high-performance video hardware. Prior to working for Siemens Digital Industries Software, he worked as a hardware design engineer developing real-time broadband video systems. Mike Fingeroff received both his bachelor's and master's degrees in electrical engineering from Temple University in 1990 and 1995 respectively.

Siemens EDA

Nizar Sallem

Principal Engineer Sensor Fusion and Algorithms

Nizar Sallem joined Siemens EDA in January 2017 to lead the sensor fusion algorithm development for the DRS360 platform. His areas of interest include computer vision, artificial intelligence, machine learning and embedded programming. Prior to working for Siemens, he worked as a senior computer vision/software engineer developing cutting-edge algorithms for cameras and LiDARs at both research institutes and startups focused on real-time constraints. Nizar received his PhD in Robotics and Embedded Systems from the University of Toulouse in France in 2014, and his Master's degree in Signal Processing and Control Theory and his Engineering diploma in Computer Science from the National School of Engineers of Tunis in 2006 and 2007 respectively.
