Algorithm developers usually work with double-precision data types so they can focus on the mathematical functionality of the algorithm. When the algorithm is implemented as a hardware module, however, the data accuracy must be reduced to the minimum number of bits that still fulfills the system performance requirements. The process of converting a floating-point algorithm into a bit-level optimized model is complicated and requires special knowledge. This webinar introduces a simple and robust quantization methodology based on value range analysis.
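To illustrate the general idea (not the specific methodology presented in the webinar), the sketch below shows one simple way value range analysis can guide quantization: the observed minimum and maximum of a signal determine the integer bits, and a chosen fractional bit count sets the accuracy. The helper names, the sample data, and the 12 fractional bits are all assumptions made for this example.

```cpp
// Illustrative sketch only: derive a fixed-point format from the value range
// observed in simulation, then quantize double-precision samples to it.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Conservative estimate of the signed integer bits needed to cover the range.
int integerBitsFor(double minVal, double maxVal) {
    double mag = std::max(std::fabs(minVal), std::fabs(maxVal));
    return static_cast<int>(std::ceil(std::log2(mag + 1.0))) + 1; // +1 sign bit
}

// Quantize a value to a fixed-point grid with the given number of fraction bits.
double quantize(double x, int fracBits) {
    double scale = std::pow(2.0, fracBits);
    return std::round(x * scale) / scale;
}

int main() {
    // Hypothetical simulation data standing in for an algorithm's signal.
    std::vector<double> samples = {-3.75, 0.125, 2.6, 1.0471975};

    auto [lo, hi] = std::minmax_element(samples.begin(), samples.end());
    int intBits   = integerBitsFor(*lo, *hi);
    int fracBits  = 12;  // assumed accuracy target for this example
    int totalBits = intBits + fracBits;

    std::cout << "Suggested format: " << totalBits << " bits ("
              << intBits << " integer, " << fracBits << " fractional)\n";
    for (double s : samples)
        std::cout << s << " -> " << quantize(s, fracBits) << '\n';
}
```

In a real design flow the fractional bit count would be chosen from the accuracy requirements of the system rather than fixed up front, and the range analysis would be run over representative stimulus for every signal in the datapath.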
What You Will Learn
Who Should Attend
Senior Application Engineer
Petri Solanti is a senior application engineer at Siemens, focusing on HLS and low-power tools. He is a designer and application engineer with over 25 years of experience in Electronic System-Level design tools and methodologies. His areas of interest include design methodologies from algorithm to RTL, system analysis, and HW/SW co-design. Prior to joining Mentor, Mr. Solanti held application engineer positions at Cadence, CoWare, Synopsys, and MathWorks. He received his MScEE degree from Tampere University of Technology, Finland.