Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science
Ido Ben-Tov (M.Sc. Thesis Seminar)
Wednesday, 25.11.2009, 14:00
The well-known learning models in Computational Learning Theory are either adversarial, meaning that the examples are selected arbitrarily by the teacher, or i.i.d., meaning that the teacher draws the examples independently and identically from some fixed distribution. However, it is also natural to study learning models in which the teacher generates the examples according to a stochastic process.
A particularly simple and natural time-driven process is the random walk. We consider exact learning models based on random walks, which in effect have a more restricted teacher than both the adversarial and the uniform exact learning models. We investigate the learnability of common concept classes via random walks, and give positive and negative separation results on whether exact learning in the random walk models is easier than in the less restricted models.
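To make the teacher concrete, the following minimal sketch (not taken from the talk) shows example generation via a random walk on the Boolean hypercube {0,1}^n: at each step one uniformly chosen coordinate is flipped and the resulting point is labeled by the target concept. The function names and the simple parity target are hypothetical choices for illustration; the exact walk variant used in the thesis may differ.

```python
import random

def random_walk_examples(n, target, steps, seed=None):
    """Yield labeled examples (x, target(x)) along a random walk on {0,1}^n.

    Starting from a uniformly random point, each step flips one uniformly
    chosen coordinate, so consecutive examples differ in exactly one bit.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]  # uniform starting point
    for _ in range(steps):
        i = rng.randrange(n)         # pick a coordinate uniformly at random
        x[i] ^= 1                    # flip it
        yield tuple(x), target(x)    # the teacher labels the current point

# Hypothetical target concept: parity of the first three coordinates (a 3-junta).
target = lambda x: x[0] ^ x[1] ^ x[2]

for point, label in random_walk_examples(n=8, target=target, steps=5, seed=0):
    print(point, label)
```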
This talk will discuss the online learning model, random walks on the hypercube, the learnability of O(log n)-juntas, and the learnability of read-once DNF and general DNF formulas.
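For readers unfamiliar with the concept classes listed above, the sketch below gives hypothetical Python renderings of two kinds of targets: a junta (a function over n bits that depends on only a few of them; an O(log n)-junta depends on O(log n) coordinates) and a read-once DNF (a DNF formula in which each variable appears at most once). These are standard definitions, not code from the talk.

```python
# A 3-junta over n >= 8 bits: the value depends only on coordinates 2, 5 and 7.
def junta_example(x):
    return (x[2] & x[5]) | x[7]

# A read-once DNF over x0..x5: each variable occurs in at most one term.
def read_once_dnf_example(x):
    return (x[0] & x[1]) | (x[2] & (1 - x[3])) | (x[4] & x[5])
```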