Events

Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science

Boaz Ophir (Ph.D. Thesis Seminar)
Wednesday, 09.03.2016, 14:30
Taub 601
Advisor: Prof. M. Elad
The main topic of our research is creating a multi-scale dictionary learning paradigm for sparse and redundant signal and image representations. The appeal of a multi-scale dictionary is obvious: in many cases data naturally comes at different scales. To date, popular dictionary-based approaches are limited to a single scale and to small signal/patch sizes. Multi-scale approaches, on the other hand, are typically analytic, with little to no adaptation to the data. A multi-scale dictionary should combine the advantages of generic multi-scale representations (such as Wavelets) with the power of learned dictionaries at capturing the intrinsic characteristics of a family of signals. Such a dictionary would allow representing the data in a more efficient, i.e. sparse, manner, allowing applications to take a more global look at the signal. In our work we aim to achieve this goal without incurring the prohibitive costs of an explicit dictionary with large atoms.

We present two approaches to this problem. The first is based on learning the dictionary, in parts, in the analysis domain of an orthogonal multi-scale operator (namely, orthogonal Wavelets). While our analysis-domain atoms are small, applying the inverse Wavelet transform as part of the synthesis process yields "effective" atoms that can be very large. Using this approach we obtained promising results, achieving sparser representations of images and serving as a sparsifying dictionary for compressed sensing. We show how, by combining this approach with standard single-scale denoising, we can achieve state-of-the-art image denoising results.

In the second approach, we plug a multi-scale dictionary into the base/fixed part of a double-sparsity model. The multi-scale dictionary we use is Cropped Wavelets, a Wavelet-based dictionary specially adapted to representing finite-support signals/images without significant border effects.
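To give a feel for the "effective atom" idea, the sketch below (not the speaker's implementation; all sizes and coefficient values are hypothetical) shows how a dictionary atom with only a few nonzero coefficients in the analysis domain of a 1D orthogonal Haar transform expands, after the inverse transform, into an atom supported on the entire signal:

```python
import numpy as np

def inverse_haar_1d(coeffs, levels):
    """Inverse orthogonal Haar transform. Coefficients are laid out as
    [approx | coarsest detail | ... | finest detail]."""
    x = coeffs.astype(float).copy()
    n = len(x) // (2 ** levels)  # length of the coarsest approximation band
    for _ in range(levels):
        a, d = x[:n], x[n:2 * n]
        rec = np.empty(2 * n)
        rec[0::2] = (a + d) / np.sqrt(2)  # even samples
        rec[1::2] = (a - d) / np.sqrt(2)  # odd samples
        x[:2 * n] = rec
        n *= 2
    return x

# A "small" analysis-domain atom: 4 nonzero coefficients, all in the
# coarsest approximation band of a 4-level transform on a length-64 signal.
signal_len, levels = 64, 4
atom = np.zeros(signal_len)
atom[:4] = [0.5, 1.0, 1.0, 0.5]  # hypothetical learned values

effective_atom = inverse_haar_1d(atom, levels)
support = np.count_nonzero(np.abs(effective_atom) > 1e-12)
print(support)  # prints 64: the 4-coefficient atom covers all 64 samples
```

Each coarse-scale coefficient spreads over a block of 16 samples here, so the synthesized atom is as large as the signal even though only 4 values were learned.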
Using this model we can extend patch-based image processing tasks to handle large image patches (e.g., 64x64 pixels). To train a dictionary for such large signals under this model, we introduce an online dictionary learning algorithm that uses stochastic gradient techniques in place of the standard batch approach. This algorithm allows training a dictionary on millions of examples. We demonstrate the potential of such dictionaries for tasks such as image restoration and compression, as well as for training general dictionaries for natural images.
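A minimal sketch of online dictionary learning with stochastic gradient updates, in the spirit described above (this is a generic illustration, not the algorithm presented in the talk; the sparse-coding step, step size, and toy data are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, x, k):
    """Crude sparse coding: keep the k atoms most correlated with x,
    then least-squares fit on that support (a cheap proxy for OMP)."""
    idx = np.argsort(-np.abs(D.T @ x))[:k]
    a = np.zeros(D.shape[1])
    a[idx] = np.linalg.lstsq(D[:, idx], x, rcond=None)[0]
    return a

def sgd_dict_update(D, x, a, lr):
    """One stochastic gradient step on 0.5 * ||x - D a||^2 w.r.t. D,
    followed by renormalizing the atoms to unit norm."""
    residual = D @ a - x
    D -= lr * np.outer(residual, a)
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    return D

# Toy streaming run: examples drawn one at a time from a planted dictionary,
# so no batch of training data is ever held in memory.
dim, n_atoms, k = 16, 32, 3
D_true = rng.standard_normal((dim, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

for _ in range(2000):
    a_true = np.zeros(n_atoms)
    a_true[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
    x = D_true @ a_true               # one streamed example
    a = sparse_code(D, x, k)          # sparse-code against current dictionary
    D = sgd_dict_update(D, x, a, lr=0.1)
```

Because each iteration touches only a single example, the same loop scales to millions of training signals, which is the point of replacing the batch update with a stochastic one.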