Description
Classical isolated learning methods suffer from Catastrophic Forgetting (CF) when trained on multiple datasets in a streaming fashion. Lifelong Learning (LLL) is a machine learning paradigm that learns multiple datasets sequentially over time while mitigating the risk of CF. It is a well-established notion that machine learning models trained on multiple datasets tend to be more robust. We empirically validated this notion for LLL methods in Natural Language Processing (NLP). Moreover, we incorporated known defense strategies against adversarial attacks into LLL and evaluated their impact on model robustness. To the best of our knowledge, our work is the first to evaluate and compare the robustness of LLL methods and isolated learning methods in NLP.