Review Committee

  • Marwane Hariat, Wave Computing
  • Khalid Tahboub, Qualcomm
  • Smriti Ojha, AMD
  • Shuai Zhang, Qualcomm
  • Ali Shafiee, University of Utah
  • Partha Maji, Arm ML Research
  • Karl Taht, University of Utah
  • Urmish Thakker, Arm
  • Vinayak Gokhale, Google
  • Claudionor Coelho, Google
  • Aliasger Zaidy, FWDNXT Inc.
  • Zheng Wang, Amazon
  • Pradeep Ramani, NVIDIA
  • Amrit Panda, Microsoft
  • Tijmen Blankevoort, Qualcomm
  • Sikandar Mashayak, Apple
  • Jain Karishma, Intel
  • He Xiao, Cadence
  • Wei Wang, Amazon
  • Tushar Krishna, Georgia Institute of Technology
  • Nader Sehatbakhsh, Georgia Institute of Technology
  • Balajee Vamanan, University of Illinois at Chicago
  • Xia Lu, Amazon
  • Rahmani Mariam, Intel
  • Keshari Rishabh, Intel
  • Kushal Datta, Nvidia
  • Andre Xian Ming Chang, Purdue University
  • Ananya Pareek, Apple
  • Nitin Garegrat, Microsoft
  • Sylvain Flamant, Wave Computing
  • Rosario Cammarota, Intel
  • Chen Ding, University of Rochester
  • Andy Glew, NVIDIA
  • Sreepathi Pai, University of Rochester
  • Raj Jain, Washington University
  • Smruti Sarangi, IIT Delhi
  • Shaoshan Liu, PerceptIn
  • Ali Shafiee, Samsung
  • Danian Gong, Cadence

Organizing Committee

Raj Parihar

d-Matrix Corp.

Dr. Raj Parihar is currently the Chief AI Architect at d-Matrix Corporation, working on the company’s flagship product. In the past, Raj was part of the Microsoft Silicon Engineering and Solutions (SES) group, where he worked on future generations of Brainwave systems. Prior to that, while at Cadence Tensilica, he was involved in architectural exploration and performance analysis of the DNA 100 neural network AI processor. He also contributed to microarchitectural enhancements (next-generation branch predictors and cache prefetchers) of the P-series Warrior cores at MIPS/ImgTech. His work on cache rationing won the best paper award at ISMM’16. Dr. Parihar received his doctorate and master’s degrees from the University of Rochester, NY, and his bachelor’s degree from Birla Institute of Technology & Science, Pilani, India.

Michael Goldfarb


Michael Goldfarb is a Staff Engineer at Waymo, working on HW/SW architecture for machine learning. Previously, he was at Qualcomm Research, where he contributed to many important features of the AI 100 accelerator. For a short time, Michael was at NVIDIA, working on various projects in the Compute Architecture group for high-performance training, specifically compilers, kernels, and architecture for deep learning. Before that, he was at Qualcomm Research, where he optimized AI/ML applications for Snapdragon-powered devices and developed new accelerator architectures for low-power on-device inference. His research interests are in machine learning, compilers, parallel programming, and accelerator architecture. Michael received his master’s and bachelor’s degrees from Purdue University (West Lafayette, IN).

Satyam Srivastava

d-Matrix Corp.

Dr. Satyam Srivastava is the Chief AI Software Architect at d-Matrix Corporation, where he works on building the software stack for new AI accelerators. In his prior role at Intel, he worked on enabling machine learning and media systems on Intel compute architectures. His interests include machine learning, visual computing, and compute accelerators. Dr. Srivastava obtained his doctorate degree from Purdue University (West Lafayette, IN) and his bachelor’s degree from Birla Institute of Technology and Science, Pilani, India.

Tao (Terry) Sheng


Dr. Tao Sheng is an Applied Scientist Manager at Amazon. He has worked on multiple cutting-edge projects in computer vision and machine learning for more than 10 years. Prior to Amazon, he worked at Qualcomm and Intel. He has strong interests in deep learning for mobile vision, edge AI, and related areas. He holds ten US and international patents and has published eight papers. Most recently, he led the team that won first prize at the IEEE International Low-Power Image Recognition Challenge (LPIRC-I) at CVPR 2018 and two top prizes at LPIRC-II 2018, among a wide variety of global competitors.

Kushal Datta


Dr. Kushal Datta is a Senior Product Manager of communications and IO libraries for AI, data analytics, and HPC at Nvidia. Previously, at the Intel Data Center Group, he invented new techniques to expedite deep learning model training and inference on Xeon architectures. His interests include creating new tools and methods to improve the overall wall-clock time of complex AI and scientific applications on large-scale systems. He has published over twenty academic papers, as well as several white papers and blogs, and holds five granted U.S. patents. He received his Ph.D. in ECE from the University of North Carolina at Charlotte and his bachelor’s degree in Computer Science from Jadavpur University, India.

Sikandar Mashayak

Wave Computing

Sikandar Y. Mashayak is a Staff Research and Development Engineer on the AI Core Technologies team at Wave Computing, Inc., Campbell, CA. At Wave, he develops fast, efficient, and accurate neural network models and co-designs mixed-precision neural network training and inference algorithms for AI accelerator hardware. His research interests are in machine learning, NN model compression (pruning and quantization), AI algorithm-hardware co-design, numerical modeling, optimization, and heterogeneous parallel computing. He received his Ph.D. from the University of Illinois at Urbana-Champaign, his M.S. degree from Purdue University, West Lafayette, IN, and his B.E. (Hons.) degree from Birla Institute of Technology and Science, Pilani, India.

Ananya Pareek

Apple Inc.

Ananya Pareek is a System Architect at Apple, currently working on optimizing hardware platforms from a performance-power tradeoff perspective. Previously, he worked at Samsung on CPU core pipelines and ISA extensions for enabling faster ML computation. His interests are in developing hardware platforms for ML/deep learning, HW/SW co-design, and modeling systems for optimization. He received his M.S. degree from the University of Rochester, NY, and his B.Tech. from the Indian Institute of Technology Kanpur, India.

Fanny Nina Paravecino


Dr. Nina-Paravecino is currently a Senior Researcher in the AI and Advanced Architectures group at Microsoft, where she leads efforts to improve the performance of deep learning workloads. Previously, Dr. Nina-Paravecino was a Research Scientist at Intel Corporation, advancing Intel’s ground-breaking volumetric reconstruction technology using deep learning. In the past, her work has contributed to the efficient exploitation of GPU architectures and enabled the identification of bottlenecks in a myriad of applications, including image processing and video analytics. Dr. Nina-Paravecino received her Ph.D. in Computer Engineering from Northeastern University, her M.Sc. in Computer Engineering from the University of Puerto Rico at Mayaguez, and her B.S. in System and Informatics Engineering from the University of San Antonio Abad of Cusco, Peru. She has served as a PC member or reviewer for journals, conferences, and workshops including IEEE Transactions on Image Processing 2017, JPDC 2017, CF 2018, PPoPP 2018, SC 2018, GPGPU 2018, PARCO 2018, IA^3 2019, SC 2019, DAC 2020, ICCD 2020, and HPCA 2021. Most recently, Dr. Nina-Paravecino was co-chair of the Video Analytics mini-track at HICSS 2020.

Prerana Maslekar


Prerana Maslekar is a Silicon Verification Engineer in the Silicon Engineering Group at Microsoft, where she works on ensuring design correctness for hardware deployed in the Azure cloud. Prior to this, she worked on verification of sensors used in Microsoft devices. Prerana holds a master’s degree from The University of Texas at Austin, specializing in Architecture, Computer Systems, and Embedded Systems. Her interests lie in accelerator architecture, ML accelerator hardware, security attacks exploiting architectural flaws, and data center architectures.