Workshop on Energy Efficient Machine Learning and Cognitive Computing

Important Announcements

How To Attend Virtual EMC2 2020: We will be using Microsoft Teams to coordinate keynote and invited talks, panel discussions, and paper presentations. All events will be streamed live on YouTube.

Upcoming Workshops

Program for the EMC2 Virtual Workshop 2020

08:30 - 08:40
Welcome and Opening Remarks
Michael Goldfarb, Qualcomm Inc.
09:40 - 10:15
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
Wojciech Samek, Fraunhofer Heinrich Hertz Institute
10:55 - 11:30
Designing Nanosecond Inference Engines for the Particle Collider
Claudionor N. Coelho Jr. (Palo Alto Networks), Thea Aarrestad (CERN), Vladimir Loncar (CERN), Maurizio Pierini (CERN), Adrian Alan Pol (CERN), Sioni Summers (CERN), Jennifer Ngadiuba (Caltech)
13:10 - 14:10
Energy Efficient Machine Learning on Encrypted Data: Hardware to the Rescue
Farinaz Koushanfar, University of California, San Diego
15:20 - 15:55
Efficient Deep Learning At Scale
Hai Li, Duke University
16:35 - 17:10
Efficient Machine Learning via Data Summarization
Baharan Mirzasoleiman, Computer Science, University of California, Los Angeles
18:10 - 18:15
Closing Remarks
Satyam Srivastava, Intel Corporation
View the complete program for the 6th Edition of EMC2 here.

Workshop Objective

As artificial intelligence and other forms of cognitive computing continue to proliferate into new domains, many forums for dialogue and knowledge sharing have emerged. This workshop focuses on energy efficient techniques and architectures for cognitive computing and machine learning, particularly for applications and systems running at the edge. In such resource constrained environments, performance alone is never sufficient: system designers must carefully balance performance with power, energy, and area (the overall PPA metric).

The goal of this workshop is to provide a forum for researchers who are exploring novel ideas in the field of energy efficient machine learning and artificial intelligence for a variety of applications. We also hope to provide a solid platform for forging relationships and exchanging ideas between industry and academia through discussions and active collaborations.

Topics for the Workshop

  • Architectures for the edge: IoT, automotive, and mobile
  • Approximation, quantization, and reduced-precision computing
  • Hardware/software techniques for sparsity
  • Neural network architectures for resource constrained devices
  • Neural network pruning, tuning, and automatic architecture search
  • Novel memory architectures for machine learning
  • Communication/computation scheduling for improved performance and energy efficiency
  • Load balancing and efficient task distribution techniques
  • Exploring the interplay between precision, performance, power and energy
  • Exploration of new and efficient applications for machine learning
  • Characterization of machine learning benchmarks and workloads
  • Performance profiling and synthesis of workloads
  • Simulation and emulation techniques, frameworks and platforms for machine learning
  • Power, performance and area (PPA) based comparison of neural networks
  • Verification, validation and determinism in neural networks
  • Efficient on-device learning techniques
  • Security, safety and privacy challenges and building secure AI systems

Recent Editions

View the complete list of previous editions.