
Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Margarita Osadchy (University of Haifa)
Tuesday, 01.12.2015, 11:30
Room 1061, Meyer Building, Faculty of Electrical Engineering
In this work we consider non-linear classifiers that positively classify a point when it resides in the intersection of $k$ hyperplanes. We learn these classifiers by minimizing the minimax risk of the negative training examples and the sum of hinge losses of the positive training examples. These classifiers fit typical real-life datasets that consist of a small number of positive data points and a large number of negative data points. Such an approach is computationally appealing since the majority of training examples (those belonging to the negative class) are represented by the statistics of their distribution, which enter a single constraint on the minimax risk; in SVMs, by contrast, the number of variables equals the size of the training set. We also provide empirical risk bounds and show that they are independent of the dimension and decay as $1/\sqrt{m}$ for $m$ samples.
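As a concrete illustration, here is a minimal NumPy sketch of the decision rule and the two training-objective ingredients described above. The Chebyshev-style form of the minimax constraint (using only the negatives' mean and covariance, in the spirit of minimax probability machines) and all function names are assumptions made for illustration, not the exact formulation of the talk.

    import numpy as np

    def predict(X, W, b):
        # A point is positive iff it falls on the positive side of all k
        # hyperplanes, i.e. in the intersection of the k halfspaces.
        # X: (n, d) points, W: (k, d) hyperplane normals, b: (k,) offsets.
        return np.all(X @ W.T + b > 0, axis=1)

    def positive_hinge_loss(X_pos, W, b):
        # Sum of hinge losses over the positive examples: each positive
        # point is penalized for every hyperplane whose margin it violates.
        margins = X_pos @ W.T + b                     # (n_pos, k)
        return np.maximum(0.0, 1.0 - margins).sum()

    def negative_minimax_constraint(mu, Sigma, w, b, gamma):
        # Single distribution-level constraint on the negatives, using only
        # their mean mu and covariance Sigma (a Chebyshev-style bound, as in
        # minimax probability machines -- an assumption here): pushing the
        # mean at least gamma standard deviations onto the negative side
        # caps the worst-case probability of misclassifying a negative.
        return w @ mu + b + gamma * np.sqrt(w @ Sigma @ w) <= 0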

We propose an efficient algorithm for training an intersection of a finite number of hyperplanes, and we demonstrate the effectiveness of our classifiers on real data, including letter and scene recognition. We show that these classifiers are significantly faster than kernel methods, as they compute only $k$ inner products per test point. In contrast, kernel SVMs produce a large number of support vectors, rendering classification impractical.
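The speed claim can be made concrete with a back-of-the-envelope comparison. The sizes below are arbitrary illustrations, not figures from the talk, and the kernel SVM is the standard RBF form rather than any specific trained model.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n_sv = 100, 5, 2000              # illustrative sizes only
    x = rng.normal(size=d)                 # a single test point

    # Intersection classifier: exactly k inner products per test point.
    W, b = rng.normal(size=(k, d)), rng.normal(size=k)
    pred_intersection = np.all(W @ x + b > 0)

    # Kernel SVM with an RBF kernel: one kernel evaluation (itself an O(d)
    # computation) per support vector, of which there may be thousands.
    SV = rng.normal(size=(n_sv, d))        # hypothetical support vectors
    alpha = rng.normal(size=n_sv)          # hypothetical dual coefficients
    kernel_vals = np.exp(-0.5 * np.sum((SV - x) ** 2, axis=1))
    pred_svm = alpha @ kernel_vals > 0     # bias term omitted for brevity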