The 11th EMC² - Energy Efficient Machine Learning and Cognitive Computing

Co-located with the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2026)

Sunday, March 22, 2026
Pittsburgh, PA
Room:


Workshop Objective

In the eleventh edition of the EMC² workshop, we plan to facilitate conversation about the sustainability of the large-scale AI computing systems being developed to meet the ever-increasing demands of generative AI. This involves discussions spanning multiple interrelated areas. First, we continue to serve as a leading forum for discussing the energy-efficiency aspects of GenAI workloads, which directly impact the overall viability and economic value of AI technology. Second, we reassess the scaling laws of AI given the prevalence of agentic, multi-modal, and reasoning-based models, in conjunction with novel techniques such as highly sparse expert architectures and disaggregated computation. Finally, we discuss sustainable, high-performance computing paradigms toward efficient datacenters and hybrid computing models that can cater to the exponential growth in model sizes, application areas, and user base. This allows us to explore ideas for building the hardware, software, systems, and scaling infrastructure, as well as the model architectures, that make AI technology even more prevalent and accessible.

Call for Papers

The goal of this workshop is to provide a forum for researchers and industry experts who are exploring novel ideas, tools, and techniques to improve the energy efficiency of MLLMs as they are practiced today and as they will evolve over the next decade. We envision that only through close collaboration between industry and academia will we be able to address the difficult challenges and opportunities of reducing the carbon footprint of AI and its uses. Our forum facilitates an active exchange of ideas through:

  • Keynotes, invited talks and discussion panels by leading researchers from industry and academia
  • Peer-reviewed papers on latest solutions including works-in-progress to seek directed feedback from experts
  • Independent publication of proceedings through IEEE CPS

We invite full-length papers describing original, cutting-edge, and even work-in-progress research projects about efficient machine learning. Suggested topics for papers include, but are not limited to the ones listed below:

Topics for the Workshop

  • Neural network architectures for resource constrained applications.
  • Efficient hardware designs to implement neural networks including sparsity, locality, and systolic designs.
  • Power and performance efficient memory architectures suited for neural networks.
  • Network reduction techniques – approximation, quantization, reduced precision, pruning, distillation, and reconfiguration.
  • Exploring interplay of precision, performance, power, and energy through benchmarks, workloads, and characterization.
  • Performance potential, limit studies, bottleneck analysis, profiling, and synthesis of workloads.
  • Explorations and architectures aimed at promoting sustainable computing.
  • Simulation and emulation techniques, frameworks, tools, and platforms for machine learning.
  • Optimizations to improve performance of training techniques including on-device and large-scale learning.
  • Load balancing and efficient task distribution, communication and computation overlapping for optimal performance.
  • Verification, validation, determinism, robustness, bias, safety, and privacy challenges in AI systems.
  • Efficient deployment strategies for edge and distributed environments.
  • Model compression and optimization techniques that preserve reasoning and problem-solving capabilities.
  • Architectures and frameworks for multi-agent systems and retrieval-augmented generation (RAG) pipelines.
  • Systems-level approaches for scaling future foundation models (e.g., Llama 4, GPT-5 and beyond).

We will follow the same formatting guidelines and duplicate-submission policies as ASPLOS.

EMC² Competition: AI Infrastructure Demos

Sponsored by Runara.ai

Overview

The EMC² Competition invites researchers, practitioners, and students to present production-ready AI infrastructure systems demonstrating measurable improvements in efficiency, observability, and operational robustness. The competition is designed to surface systems that operate under real-world constraints, including scale, reliability, cost, and sustainability.

The primary goal is to move beyond toy benchmarks, isolated micro-optimizations, and paper-only abstractions. We want to see working demos, toolkits, and integrated systems that can be realistically deployed in modern AI infrastructure environments. Submissions should emphasize live execution, actionable telemetry, and end-to-end impact rather than synthetic evaluations.

Competition Tracks

Track 1: Efficient Inference

Systems improving cost, performance, and energy efficiency of inference workloads:

  • Runtime inference optimization
  • Scheduling, batching, parallelism
  • Hardware-aware execution
  • Cost/energy-aware pipelines

Track 2: Infrastructure Observability

Visibility, monitoring, and diagnostics across the AI stack:

  • Live telemetry and visualization
  • Utilization and bottleneck analysis
  • Cross-layer observability
  • Actionable operator insights

Hardware and Model Flexibility

Participants are free to choose any hardware platform and any inference model, including GPUs or custom accelerators, cloud-based, on-premise, or edge environments, and open-source or proprietary models. There are no restrictions on vendors, architectures, or model families, provided the submission demonstrates real execution, live metrics, and measurable impact.

Submission Requirements

  • Team size: Up to 2 participants
  • Format: Single-page proposal (PDF) describing the problem, system architecture, hardware/model used, live metrics captured, and how efficiency or observability improvements are demonstrated
  • Originality: The idea does not need to be novel, but implementation must be original. Prior work may be extended with clear attribution.
  • Demo requirement (mandatory): Submissions must include a working demo that tracks live metrics, demonstrates real execution, and clearly shows improvements in cost, performance, or sustainability. Simulation-only or slide-only submissions will not be considered.

Submission deadline: March 15

Selection & Presentation

Accepted teams will display their demos during the workshop lunch break and engage directly with judges during in-person evaluations. Based on judging scores, the top 3 teams from each track will be selected for oral presentations.

Final Presentations: 4:00 - 5:00 PM; 10 minutes per team; live demos encouraged

At the conclusion of the session, one winning team per track will be selected.

Awards

Each winning team receives:

$500 cash prize + Certificate of Recognition

Winners are eligible for summer internship opportunities at Runara.ai

Evaluation Criteria

Since this is an open-scope, applied competition, proposals will be evaluated based on their real-world application potential instead of rigid benchmark metrics. Judging prioritizes: production readiness and robustness, clarity and usefulness of metrics, real measurable end-to-end impact, clean system design, and practical relevance to modern AI infrastructure.

For any questions, please contact raj@runara.ai.

Invited Talk

Energy Consumption in AI Datacenters: Can We Address this Challenge?

Josep Torrellas, UIUC

The global electricity demand from data centers, AI, and cryptocurrencies is expected to reach around 800 TWh by the end of 2026. Companies such as Oracle are already building 500 MW AI campuses. Training a single model draws 100+ MW over many months on clusters of about 100K GPUs, and AI cluster sizes are projected to grow severalfold in the next few years. Clearly, power will be the primary limiter to the growth of AI. Given this scenario, what can we, as researchers, do? In this talk, I will discuss the problem and suggest some techniques we can use to mitigate it. Some of the ideas are being developed in the context of the ACE Center for Evolvable Computing, a center funded by SRC and DARPA for efficient distributed computing.
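To get a feel for the scale of the figures in this abstract, a rough back-of-envelope calculation can be sketched; the 90-day training duration below is an illustrative assumption, not a number from the talk:

```python
# Back-of-envelope scale check for the figures in the abstract.
# The 90-day run length is an assumed value for illustration only.
cluster_power_mw = 100          # power draw of one training cluster (MW)
training_days = 90              # assumed duration of a training run
datacenter_demand_twh = 800     # projected global demand by end of 2026

# Energy = power x time: MW x hours -> MWh, then /1000 -> GWh
training_energy_gwh = cluster_power_mw * 24 * training_days / 1000

# Fraction of projected global datacenter/AI/crypto demand (1 TWh = 1000 GWh)
share = training_energy_gwh / (datacenter_demand_twh * 1000)

print(f"{training_energy_gwh:.0f} GWh per run")    # 216 GWh
print(f"{share:.2%} of projected demand")          # 0.03%
```

Under these assumptions, a single 100 MW training run consumes on the order of hundreds of GWh, which makes clear why severalfold cluster growth quickly becomes a power-delivery problem.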

Keynote

Future of energy-efficient cognitive computing: a six-word story on sentience, systems, and sustainability

Parthasarathy Ranganathan, Google

Invited Talk

Private Neural Recommendation with Homomorphic Encryption and Orion

Brandon Reagen, NYU

AI is great. It powers many of our favorite services and drives industries. However, today’s solutions pose a tradeoff between utility and privacy where receiving the highest quality service often requires disclosing private information. In this talk I will show how things don’t have to be this way. Fully Homomorphic Encryption (FHE) is a cryptographic method that enables computation directly on encrypted data, never disclosing sensitive inputs while still enabling access to high-quality services. I will then cover the challenges of using this technology, which include both extreme performance overhead and programming difficulty, and how our Orion framework addresses them. Finally, I will highlight our most recent work that demonstrates how FHE can be applied to neural recommendation. Time permitting, a demonstration will be given.
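The core idea of computing on encrypted data can be illustrated with a scheme far simpler than the FHE this talk covers: textbook Paillier encryption, which is only additively homomorphic. The sketch below uses tiny, insecure demo parameters and is unrelated to the Orion framework; it only shows that operating on ciphertexts can produce a meaningful operation on the hidden plaintexts:

```python
# Toy additively homomorphic encryption (textbook Paillier).
# Tiny fixed primes -- completely insecure, for illustration only.
import math
import random

p, q = 293, 433            # small demo primes (never use in practice)
n = p * q
n2 = n * n
g = n + 1                  # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

# Precompute mu = (L(g^lam mod n^2))^-1 mod n for decryption
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(3), encrypt(4)
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))      # 7, computed without ever decrypting c1 or c2
```

Fully homomorphic schemes extend this to both addition and multiplication on ciphertexts, which is what makes encrypted neural-network inference possible and also what introduces the performance and programmability challenges the talk addresses.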

Invited Talk

Making the Accuracy-Efficiency Trade-Off in Agentic Systems

Esha Choukse, Microsoft

For decades, systems researchers have balanced competing objectives: latency versus throughput, performance versus power, and speed versus cost. Traditionally, accuracy was a fixed requirement—non-negotiable and external to the systems equation. But as modern computing increasingly hosts AI-driven workloads, accuracy itself has become a tunable system variable. In this talk, I’ll argue that the next frontier in systems design lies in treating accuracy as a first-class performance knob, to be traded for efficiency in a principled way. Drawing from our recent work, I will show how software–hardware co-design can explicitly expose, quantify, and manage the accuracy-efficiency tradeoff.

Invited Talk

Efficient and Scalable Agentic AI With Heterogeneous Systems

Zain Asgar, Gimlet AI

Invited Talk

Architecting Responsible AI: Efficient and Carbon-Aware Caching for AI and Its Security Implications

Sihang Liu, University of Waterloo

While generative AI offers transformative potential, its rapid scaling has introduced significant challenges in sustainability and responsible deployment. This talk explores the role of caching in AI systems, including both performance and its broader implications for sustainability and security. First, I will present our recent work on improving the efficiency of video generation systems through caching techniques. I will then discuss our context caching system that explores the trade-off between operational carbon savings and the increased embodied carbon associated with storage. Finally, I will highlight our recent findings on novel security vulnerabilities arising from caching mechanisms used in image generation services.