On-Demand Webinar

Application-Specific Neural Network Inference

A wide range of solutions for implementing the inference stage of convolutional neural networks is available on the market. Almost all of them follow a generic accelerator approach, which introduces overhead and implementation penalties for any specific network configuration. High-Level Synthesis (HLS) instead leverages application- and network-specific optimizations to further improve power, performance, and area (PPA) for individual neural networks or classes of networks. This webinar introduces the design flow, starting from AI/ML frameworks such as TensorFlow down to FPGA/ASIC implementation, along with the relevant optimization techniques.
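
As a rough illustration of what "application-specific" means here, the sketch below shows a convolution layer written in synthesizable C++ of the kind an HLS tool consumes. The function name, feature-map sizes, and weight values are illustrative assumptions, not material from the webinar; a real flow would typically use tool-specific fixed-point datatypes and synthesis directives. The point is that pre-trained weights and loop bounds known at synthesis time can be folded into the generated datapath instead of being fetched from memory, which is one source of the PPA gains a generic accelerator cannot exploit.

```cpp
// Hypothetical sketch: a 3x3 convolution layer in synthesizable C++.
// All names and sizes are illustrative assumptions.
#include <cstdint>

// Feature-map geometry fixed at synthesis time for one specific network.
constexpr int IN_H = 32, IN_W = 32;   // input height/width
constexpr int K    = 3;               // kernel size

// Pre-trained weights baked in as constants: an application-specific
// accelerator can fold these into the datapath rather than streaming
// them from memory.
constexpr int8_t WEIGHTS[K][K] = {
    { 1, 0, -1 },
    { 2, 0, -2 },
    { 1, 0, -1 },
};

// Plain nested loops; an HLS tool can pipeline and fully unroll the
// inner loops because every bound is a compile-time constant.
void conv3x3(const int8_t in[IN_H][IN_W],
             int32_t out[IN_H - K + 1][IN_W - K + 1]) {
    for (int y = 0; y <= IN_H - K; ++y) {
        for (int x = 0; x <= IN_W - K; ++x) {
            int32_t acc = 0;
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    acc += WEIGHTS[ky][kx] * in[y + ky][x + kx];
            out[y][x] = acc;
        }
    }
}
```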

This webinar is part 6 of the webinar series "HLS for Vision and Deep Learning Hardware Accelerators".

What you will learn:

  • How HLS is used to implement a computer vision algorithm in either
    FPGA or ASIC technology, and the associated power and performance
    trade-offs.
  • How HLS is employed to analyze unique architectures for a highly
    energy-efficient CNN (Convolutional Neural Network) inference
    solution, starting from a pre-trained network.
  • How to integrate the design created in HLS into a larger system,
    including peripherals, processor, and software.
  • How to verify the design in the context of the larger system and how
    to deploy it onto an FPGA prototype board (a minimal testbench
    sketch follows this list).
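
One reason HLS-based flows verify well is that the same C++ source used for synthesis can first be exercised as ordinary software against a golden reference before system-level simulation and FPGA prototyping. The sketch below assumes the illustrative conv3x3(), WEIGHTS, and size constants from the earlier example are in scope; in a real flow the reference output would come from the ML framework (e.g., the corresponding TensorFlow layer run on the same stimulus) rather than being recomputed inline.

```cpp
// Hypothetical software testbench for the conv3x3() sketch above.
// Assumes conv3x3, WEIGHTS, IN_H, IN_W, and K are already declared.
#include <cassert>
#include <cstdio>
#include <cstdlib>

int main() {
    static int8_t  in[IN_H][IN_W];
    static int32_t out[IN_H - K + 1][IN_W - K + 1];

    // Deterministic pseudo-random stimulus.
    std::srand(42);
    for (int y = 0; y < IN_H; ++y)
        for (int x = 0; x < IN_W; ++x)
            in[y][x] = static_cast<int8_t>(std::rand() % 256 - 128);

    conv3x3(in, out);

    // Golden reference computed inline for this sketch; a real flow would
    // compare against the pre-trained framework model instead.
    for (int y = 0; y <= IN_H - K; ++y)
        for (int x = 0; x <= IN_W - K; ++x) {
            int32_t ref = 0;
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    ref += WEIGHTS[ky][kx] * in[y + ky][x + kx];
            assert(out[y][x] == ref);
        }
    std::puts("conv3x3 matches the golden reference");
    return 0;
}
```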
