According to some pundits, growth in the demand for AI training and inference so vastly exceeds Moore’s Law that it can only be met by a rapidly growing population of ever more power-hungry data centers, which could collectively consume a significant fraction of the world’s electricity. Our job, as the visionaries and engineers of the infrastructure enabling AI, is to ensure that this disproportionate surge in energy demand doesn’t happen, i.e., we must deliver the benefits of AI in ways that are sustainable and economically viable. This presentation will begin with a realistic discussion of the “demand” side of the equation. It will then identify an offsetting set of opportunities for innovation on the “supply” side — opportunities to improve the effectiveness of AI infrastructure so dramatically that, in aggregate, they can offset the growth on the demand side. The supply-side discussion will highlight examples across the infrastructure “stack”: in the models, the algorithms, the software, the individual compute nodes, and in the scale-out mechanisms used at the data center level.
Bio:
David is passionate about research and innovation and has a track record of embracing high-risk initiatives, such as software-defined networking, software radio, IoT, and data-intensive computing. He has worked in academia, as a faculty member at MIT; in government, at DARPA and NSF; in industry, at Intel, Amazon/A9.com, Microsoft, and VMware; and as a partner in a venture capital firm. Dr. Tennenhouse has championed research related to a wide range of technologies, including networking, distributed computing, blockchain, computer architecture, storage, machine learning, robotics, and nano/biotechnology. David holds a BASc and an MASc in Electrical Engineering from the University of Toronto and obtained his Ph.D. at the University of Cambridge.