EMC2 - Energy Efficient Machine Learning and Cognitive Computing

Upcoming Workshops

Program for EMC2 at HPCA 2026

View the complete program for the 11th edition of EMC2 here.

Workshop Objective

In the eleventh edition of the EMC2 workshop, we plan to facilitate conversation about the sustainability of the large-scale AI computing systems being developed to meet the ever-increasing demands of generative AI. This involves discussions spanning multiple interrelated areas. First, we continue to serve as a leading forum for discussing the energy efficiency of GenAI workloads, which directly impacts the overall viability and economic value of AI technology. Second, we reassess the scaling laws of AI given the prevalence of agentic, multi-modal, and reasoning-based models, in conjunction with novel techniques such as highly sparse expert architectures and disaggregated computation. Finally, we discuss sustainable and high-performance computing paradigms for efficient datacenters and hybrid computing models that can cater to the exponential growth in model sizes, application areas, and user base. This would allow us to explore ideas for building the hardware, software, systems, and scaling infrastructure, as well as the model architectures, that make AI technology even more prevalent and accessible.

Topics for the Workshop

  • Neural network architectures for resource-constrained applications.
  • Efficient hardware designs to implement neural networks including sparsity, locality, and systolic designs.
  • Power and performance efficient memory architectures suited for neural networks.
  • Network reduction techniques – approximation, quantization, reduced precision, pruning, distillation, and reconfiguration.
  • Exploring interplay of precision, performance, power, and energy through benchmarks, workloads, and characterization.
  • Performance potential, limit studies, bottleneck analysis, profiling, and synthesis of workloads.
  • Explorations and architectures aimed to promote sustainable computing.
  • Simulation and emulation techniques, frameworks, tools, and platforms for machine learning.
  • Optimizations to improve performance of training techniques including on-device and large-scale learning.
  • Load balancing and efficient task distribution, communication and computation overlapping for optimal performance.
  • Verification, validation, determinism, robustness, bias, safety, and privacy challenges in AI systems.
  • Efficient deployment strategies for edge and distributed environments.
  • Model compression and optimization techniques that preserve reasoning and problem-solving capabilities.
  • Architectures and frameworks for multi-agent systems and retrieval-augmented generation (RAG) pipelines.
  • Systems-level approaches for scaling future foundation models (e.g., Llama 4, GPT-5 and beyond).
  • We will follow the same formatting guidelines and duplicate-submission policies as HPCA.

Recent Editions

View the complete list of previous editions.