Bhushan has optimized and deployed thousands of AI models on-device across the iOS and Android ecosystems.
Currently, he is building AI Hub at Qualcomm to make the on-device journey on Android and Snapdragon platforms as seamless as possible.
Previously, he worked at Apple on the Core ML framework and helped deploy various system and developer use cases. He also worked at Nvidia on the GPU compiler, focusing on optimizing code generation for CUDA and graphics workloads (e.g., Nintendo and Nvidia Shield).
Bhushan Sonawane
How to Optimize, Validate and Deploy ML Models On Device (Part II)
We'll walk through the steps to bring your ML model on device. In this hands-on section of the workshop, we'll demonstrate the end-to-end workflow for a sample use case, using Qualcomm AI Hub to optimize a model and deploy it on device.
We'll then help you get set up and walk through various examples of how to use Qualcomm AI Hub. The Qualcomm AI Hub team will be there to teach you the ins and outs, enabling you to use the platform and bring your ML use case on device quickly and easily.
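To give a feel for the kind of workflow covered in the hands-on portion, here is a minimal sketch using the qai_hub Python client. The model, device name, and input shape are illustrative placeholders, and the exact API surface may vary by client version; treat this as an assumption-laden example rather than the workshop's canonical code.

```python
import torch
import torchvision
import qai_hub as hub

# Use a stock torchvision model as a stand-in for your own model.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

# Device name is a placeholder; available devices can be listed with hub.get_devices().
device = hub.Device("Samsung Galaxy S24 (Family)")

# Compile the traced model for the target Snapdragon device.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),
)

# Profile the compiled model on a real hosted device to check on-device latency.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
```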
Talk Title
How to Optimize, Validate and Deploy ML Models On Device
In this workshop we address the common challenges developers face when migrating AI workloads from the cloud to edge devices. Qualcomm aims to democratize AI at the edge, easing that transition by supporting familiar frameworks and data types.
We'll talk through why ML is best done on device, and how to select a model for your use case, train (or fine-tune) it, and compile it for the device of your choice.
We'll walk through how to get started, iterate on your model, and meet the performance requirements to deploy on device! We'll show examples of how to optimize models and bundle them into your application.
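Continuing the earlier sketch (and reusing its hypothetical compile_job and device), the snippet below illustrates validating the compiled model on a hosted device and downloading the artifact so it can be bundled into an app. The input values and output filename are placeholders, and the calls shown reflect the qai_hub client as I understand it rather than a definitive recipe.

```python
import numpy as np
import qai_hub as hub

# compile_job and device come from the previous sketch.
target_model = compile_job.get_target_model()

# Validate numerics: run inference on a hosted device with a sample input,
# then compare the on-device output against your reference implementation.
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=dict(image=[np.random.rand(1, 3, 224, 224).astype(np.float32)]),
)
on_device_output = inference_job.download_output_data()

# Download the compiled artifact so it can be bundled into your application
# (for example, as an asset in an Android project).
target_model.download("mobilenet_v2.tflite")
```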