On-demand webinar

Application-Specific Neural Network Inference

Estimated viewing time: 24 minutes



A wide range of solutions for implementing the inference stage of convolutional neural networks is available on the market. Almost all of them follow a generic accelerator approach, which introduces overhead and implementation penalties for any specific network configuration. High-Level Synthesis (HLS) leverages application- and network-specific optimizations to further improve power, performance, and area (PPA) for specific neural networks or classes of networks. This webinar introduces the design flow, starting from AI/ML frameworks such as TensorFlow down to FPGA/ASIC, along with the relevant optimization techniques.
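As a rough illustration of the application-specific idea, consider baking a pre-trained network's weights into the hardware description as compile-time constants. The sketch below is not from the webinar; the kernel size, weight values, and function names are assumptions. The point is that when weights are constants, an HLS tool can prune multiplications by zero and reduce constant multiplies to shifts and adds, which a generic accelerator with runtime-loaded weights cannot do.

```cpp
#include <cstdint>

// Hypothetical fixed 3x3 weights from a trained network (here: a Sobel-like
// kernel chosen for illustration). Because they are constexpr, an HLS tool
// can specialize the datapath: zero-weight terms vanish entirely.
constexpr int K = 3;
constexpr int8_t kWeights[K][K] = {
    { 1, 0, -1 },
    { 2, 0, -2 },
    { 1, 0, -1 },
};

// One output value of a "valid" 3x3 convolution over an 8-bit image window.
// In an HLS flow these loops would typically be fully unrolled and pipelined.
template <int W>
int32_t conv3x3_at(const uint8_t img[][W], int row, int col) {
    int32_t acc = 0;
    for (int i = 0; i < K; ++i)
        for (int j = 0; j < K; ++j)
            acc += static_cast<int32_t>(kWeights[i][j]) * img[row + i][col + j];
    return acc;
}
```

In a real flow, weights exported from a framework such as TensorFlow would be quantized to a fixed-point format and emitted as such constant tables, one per layer.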

This webinar is part 6 of the HLS for Vision and Deep Learning Hardware Accelerators webinar series.

What you will learn:

  • How HLS is used to implement a computer vision algorithm in either
    an FPGA or ASIC technology and the trade-offs for power and
    performance.
  • How HLS is employed to explore unique architectures for a highly
    energy-efficient inference solution, such as a CNN (Convolutional
    Neural Network) built from a pre-trained network.
  • How to integrate the design created in HLS into a larger system,
    including peripherals, processor, and software.
  • How to verify the design in the context of the larger system and how
    to deploy it into an FPGA prototype board.

Meet the speaker

Siemens EDA

Herbert Taucher

Head of Research Group

Herbert is responsible for industrial research in electronics at Siemens. His team works on computing architectures and design flows for secure, safe, real-time-capable industrial edge computing, with a special focus on AI/ML both as a compute workload and as a tool within the design flow itself. Herbert has more than 20 years of experience in SoC/ASIC/FPGA design.