Description
We study the problem of learning from multiple experts under budget constraints. A model for each expert is trained on the annotations that expert provided in previous labeling rounds. Experts vary in reliability and experience. Our goal is to reduce the number of active experts to meet a stated budget while still achieving high performance. We introduce a supervised machine learning approach that jointly optimizes for classification performance and budget. Two models are presented: a simple and a full expertise model. The simple model selects a subset of experts that remains fixed at test time, whereas the full expertise model selects a new subset of experts for each data point at test time. Finally, we evaluate the robustness of the full expertise model against undirected label-flip attacks.
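The sketch below is a minimal illustration of the setup, not the project's implementation: one classifier is trained per expert on that expert's (noisy) past annotations, and a budget-sized subset of experts is then chosen by validation accuracy, mirroring the simple expertise model where the subset stays fixed at test time. All names, the synthetic data, the noise model, and the greedy selection criterion are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the project's code): per-expert classifiers
# plus a budget-constrained selection of experts that stays fixed at test time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data standing in for the pooled annotation history.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, y_train = X[:400], y[:400]
X_val, y_val = X[400:500], y[400:500]
X_test = X[500:]

n_experts, budget = 5, 2
expert_models = []
for e in range(n_experts):
    # Each expert has labeled a random slice of the data with expert-specific
    # label noise, mimicking varying reliability and experience.
    idx = rng.choice(len(X_train), size=200, replace=False)
    flip = rng.random(len(idx)) < 0.1 * (e + 1)      # later experts are noisier
    y_expert = np.where(flip, 1 - y_train[idx], y_train[idx])
    expert_models.append(LogisticRegression(max_iter=1000).fit(X_train[idx], y_expert))

# Simple expertise model (as an assumption here: selection by validation
# accuracy): keep the `budget` best experts, fixed for all test points.
val_acc = [m.score(X_val, y_val) for m in expert_models]
selected = np.argsort(val_acc)[::-1][:budget]
print("selected experts:", selected, "val accuracies:", np.round(val_acc, 3))

# Majority vote of the selected experts on the test data.
votes = np.stack([expert_models[e].predict(X_test) for e in selected])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
```

The full expertise model would instead re-select the expert subset per test point (e.g. conditioned on the input features), which this fixed-subset sketch does not cover.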