
Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science

Improving Realistic Results with Data-Based Models
Nir Diamant (M.Sc. Thesis Seminar)
Tuesday, 23.11.2021, 10:30
Zoom Lecture: 93062236880
Advisor: Prof. A. Bronstein
Understanding and controlling generative models' latent space is a complex task. In this work, we propose a novel method to learn the behavior of any specific attribute in an existing GAN's latent space and to edit real data samples accordingly. We perform Sim2Real learning, relying on only three synthetic samples from two classes per attribute, allowing an unlimited number of different precise edits. We present an AutoEncoder-based model that learns both the essence of a difference (delta) of attributes in the latent space and how to apply this delta to an existing sample to generate a correspondingly shifted one. While previous methods rely on a known structure of the latent space, e.g., a linear connection between some properties in StyleGAN, our method inherently requires no structural constraints and learns the latent-space behavior by itself. We demonstrate our method in the face-image domain, editing different expression, pose, and lighting attributes. We analyze our results qualitatively and show that they outperform previous work.

An energy-saving LIDAR camera for short distances estimates an object's distance using temporally intensity-coded laser light pulses, calculating the maximum correlation with the back-scattered pulse. At low power, however, the back-scattered pulse is noisy and unstable, which leads to inaccurate and unreliable depth estimation. To address this problem, we use GANs (Generative Adversarial Networks), two neural networks that can learn complicated class distributions through an adversarial process. We learn the LIDAR camera's hidden properties and behavior, creating a novel, fully unsupervised forward model that simulates the camera. Then, we use the model's differentiability to explore the camera's parameter space and optimize those parameters in terms of depth accuracy and stability.
To achieve this goal, we also propose a new custom loss function tailored to the weaknesses of the back-scattered code's distribution and to its circular behavior. The results are demonstrated on both synthetic and real data.
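The "circular behavior" mentioned above suggests a wrap-around distance on the cyclic pulse code. The abstract does not specify the actual loss, so the following is only an illustrative sketch of such a circular error term; the function name and the period value are hypothetical:

```python
import numpy as np

def circular_l1(pred, target, period):
    """Wrap-around absolute error: the distance between two positions on
    a circle of the given period, which never exceeds period / 2."""
    diff = np.abs(pred - target) % period
    return np.minimum(diff, period - diff)

# Example: positions 1 and 15 on a 16-long cyclic code are only 2 apart,
# because the code wraps around rather than spanning the full 14 steps.
err = circular_l1(np.array([1.0]), np.array([15.0]), period=16)
```

In a differentiable pipeline this term would be averaged over a batch and minimized alongside the depth objective, but those details are outside what the abstract states.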
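The latent-delta idea from the first part of the talk can also be illustrated with a toy sketch. The actual work learns the delta and how to apply it with an AutoEncoder; the version below substitutes a simple mean-difference baseline in a synthetic latent space, purely for intuition, and all names and array shapes are hypothetical:

```python
import numpy as np

def learn_delta(latents_class_a, latents_class_b):
    """Estimate an attribute direction as the mean difference between the
    latent codes of two classes (e.g., 'neutral' vs. 'smiling')."""
    return np.mean(latents_class_b, axis=0) - np.mean(latents_class_a, axis=0)

def edit(latent, delta, strength=1.0):
    """Shift a sample's latent code along the learned attribute direction."""
    return latent + strength * delta

# Toy example: three samples per class in a 512-dimensional latent space,
# mirroring the 'three synthetic samples from two classes' setup.
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 512))
b = a + 2.0                      # class B is class A shifted by a constant
delta = learn_delta(a, b)
z_edited = edit(rng.normal(size=512), delta, strength=0.5)
```

Feeding `z_edited` back through the GAN's generator would then yield the attribute-shifted image; the strength parameter controls how far along the direction the sample moves.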