Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science
Ameer Haj Ali (UC Berkeley)
Wednesday, 12.05.2021, 11:30
The end of Moore's law is driving the search for new techniques to improve system performance as applications evolve rapidly and demand for computing power continues to rise. One promising direction is to build more intelligent compilers.
Compilers map high-level programs to lower-level primitives that run on hardware. During this process, compilers perform many complex optimizations to boost the performance of the generated code. These optimizations often require solving NP-hard problems and searching an enormous space of possibilities. To cope, compilers currently rely on hand-engineered heuristics that achieve good but often far-from-optimal performance. Alternatively, software engineers resort to manually writing optimizations for every section of the code, a burdensome process that requires prior experience and significantly increases development time.
This work explores novel approaches for automatically handling complex compiler optimization tasks, proposing end-to-end solutions based on deep reinforcement learning and other machine learning algorithms. These solutions dramatically reduce search time while capturing the code structure, instructions, dependencies, and data structures, enabling a learned model that better predicts the actual performance cost and selects superior compiler optimizations. The proposed techniques outperform existing state-of-the-art solutions while requiring shorter search times, and, unlike existing solutions, the deep reinforcement learning approaches are shown to generalize well to real benchmarks.
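To give a flavor of how reinforcement learning can frame compiler phase ordering, here is a minimal tabular Q-learning sketch over an invented toy cost model. The pass names and the effects in `apply_pass` are hypothetical stand-ins for illustration only; they are not the actual environment, reward, or deep-network policy used in AutoPhase or the other systems mentioned above.

```python
import random
from collections import defaultdict

# Hypothetical pass set and cost model (not real compiler semantics):
# order matters, mimicking the phase-ordering problem.
PASSES = ["inline", "unroll", "vectorize"]

def apply_pass(cost, p):
    """Invented effect of each pass on a synthetic program cost."""
    if p == "inline":
        return cost * 0.9
    if p == "unroll":
        return cost - 5.0 if cost > 50 else cost + 2.0
    return cost * 0.8 if cost < 80 else cost * 0.95  # vectorize

def train(episodes=5000, steps=3, eps=0.2, alpha=0.5, gamma=1.0, seed=0):
    """Learn a pass ordering that minimizes final cost.

    State = tuple of passes applied so far; reward = per-step cost
    reduction, so maximizing return minimizes the final cost.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        state, cost = (), 100.0
        for _ in range(steps):
            # Epsilon-greedy action selection over passes.
            if rng.random() < eps:
                a = rng.choice(PASSES)
            else:
                a = max(PASSES, key=lambda p: Q[(state, p)])
            new_cost = apply_pass(cost, a)
            reward = cost - new_cost
            nxt = state + (a,)
            future = (max(Q[(nxt, p)] for p in PASSES)
                      if len(nxt) < steps else 0.0)
            # Standard Q-learning update.
            Q[(state, a)] += alpha * (reward + gamma * future - Q[(state, a)])
            state, cost = nxt, new_cost
    # Greedy rollout of the learned policy.
    state, cost, seq = (), 100.0, []
    for _ in range(steps):
        a = max(PASSES, key=lambda p: Q[(state, p)])
        cost = apply_pass(cost, a)
        state, seq = state + (a,), seq + [a]
    return seq, cost
```

A deep RL agent like the one described above would replace the lookup table with a neural network over program features, allowing the policy to generalize across programs rather than memorizing one search.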
Ameer Haj-Ali completed his Ph.D. in Electrical Engineering and Computer Science at UC Berkeley in two years, advised by Professors Ion Stoica (RISE Lab) and Krste Asanovic (ADEPT Lab). At UC Berkeley, Ameer helped launch and lead many projects spanning machine learning for compiler optimization and hardware-software codesign, including Gemmini, AutoPhase, NeuroVectorizer, ProTuner, Ansor, AutoCkt, and RLDRM (which received a best paper award). Before attending UC Berkeley, Ameer completed his M.Sc. (summa cum laude, as valedictorian) at the Technion in 2018, where he worked with Professor Shahar Kvatinsky on using emerging memory technologies to enhance the performance of modern computer systems.