A Study of Synchronization Optimizations for Parallel Dynamic Languages and Transactional Processing

Arie Tal, Ph.D. Thesis Seminar
Tuesday, 21.5.2019, 13:30
Taub 601
Prof. E. Petrank

Dynamically-typed programming languages such as Python, Ruby, and JavaScript are widely used, and much effort is spent on making them efficient. One substantial research effort in this direction is enabling parallel code execution. As a first step towards efficiently synchronizing such languages, we look at concurrent data structures that may modify their memory layout dynamically (e.g., grow in size or change their shape). In this talk we propose a novel Layout Lock that incurs a negligible overhead for reads and a small overhead for updates of the data structure. We then demonstrate the benefits of layout changes, and the advantages of the Layout Lock as the supporting synchronization mechanism, for two new data structures.

Next, we look at built-in collections in dynamic programming languages. Such collections are an important ingredient of dynamic environments, but they are difficult to make safe, efficient, and scalable. We propose an approach for efficient concurrent collections, and show that it has no overhead on single-threaded benchmarks, scales linearly for Array and Hash accesses, and achieves scalability similar to Fortran and Java on classic parallel algorithms.

Finally, we look into the cost and complexity of monitoring local versus shared objects in statically-typed programming languages such as C and C++ that make use of pointers. We show how the collected information can be used to implement an optimization in a Software Transactional Memory (STM) implementation, reducing the cost of synchronizing memory accesses to addresses that are determined to be reachable by only a single thread.
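To make the distinction between operation classes concrete, here is a deliberately simplified, hypothetical sketch (not the thesis's implementation) of a lock that separates ordinary reads and writes from rare layout changes. The real Layout Lock makes the read path nearly free by avoiding contended shared state; this sketch funnels everything through one condition variable and only illustrates the core idea that infrequent layout changes can pay the full exclusion cost so that common operations stay cheap.

```python
import threading

class LayoutLock:
    """Simplified, illustrative sketch of a lock with two operation
    classes: ordinary operations (reads/writes), which may run
    concurrently with one another, and layout changes, which must
    exclude everything else."""

    def __init__(self):
        self._cond = threading.Condition()
        self._ops = 0                 # ordinary operations in flight
        self._layout_change = False   # a layout change is running

    def start_op(self):
        # Ordinary read or write: wait only if a layout change is active.
        with self._cond:
            while self._layout_change:
                self._cond.wait()
            self._ops += 1

    def finish_op(self):
        with self._cond:
            self._ops -= 1
            self._cond.notify_all()

    def start_layout_change(self):
        # Layout change: wait until no other operation is in flight.
        with self._cond:
            while self._layout_change or self._ops > 0:
                self._cond.wait()
            self._layout_change = True

    def finish_layout_change(self):
        with self._cond:
            self._layout_change = False
            self._cond.notify_all()
```

A data structure protected by such a lock would bracket lookups and updates with `start_op`/`finish_op`, and bracket a resize or reshape with `start_layout_change`/`finish_layout_change`, so concurrent readers never observe a half-changed layout.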
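The thread-locality optimization can likewise be illustrated with a hypothetical sketch: if an object is known to be reachable only by the thread that allocated it, a transactional write can skip the usual STM bookkeeping. The `TLObject` and `stm_write` names below are illustrative inventions, not the thesis's API, and the escape tracking is reduced to a single flag.

```python
import threading

class TLObject:
    """Illustrative object that records its allocating thread and
    whether it has escaped to other threads (hypothetical design)."""

    def __init__(self, value):
        self.owner = threading.get_ident()  # allocating thread's id
        self.shared = False                 # set once another thread can reach it
        self.value = value

def stm_write(obj, value, write_log):
    """Transactional write barrier: log the old value for rollback
    only when the object may be visible to other threads."""
    if not obj.shared and obj.owner == threading.get_ident():
        # Still thread-local: no other thread can observe this object,
        # so write in place with no synchronization or logging.
        obj.value = value
    else:
        # Potentially shared: record the old value so the transaction
        # can be rolled back, then perform the write.
        write_log.append((obj, obj.value))
        obj.value = value
```

In a full STM, publishing an object (storing it into a shared location) would set `shared`, after which every transactional access pays the normal instrumentation cost; the point of the optimization is that unshared objects never do.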
