Title: Weakly-supervised deep learning for domain invariant sentiment classification
Authors: Kayal, Pratik; Singh, Mayank; Goyal, Pawan
Type: Conference Paper (Conference Proceeding)
Date: 5 January 2020
Pages: 239-243
ISBN: 9781450377386
DOI: 10.1145/3371158.3371194
Scopus ID: 2-s2.0-85078495453
Handle: https://d8.irins.org/handle/IITG2025/24258
Preprint: https://arxiv.org/pdf/1910.13425
Keywords: Domain Transfer | Sentiment Analysis | Weakly labeled datasets

Abstract: Learning a sentiment classification model that adapts well to a target domain different from the source domain is a challenging problem. The majority of existing approaches focus on learning a common representation by leveraging both source and target data during training. In this paper, we introduce a two-stage training procedure that leverages weakly supervised datasets to develop simple lift-and-shift predictive models without exposure to the target domain during the training phase. Experimental results show that transfer with weak supervision from a source domain to various target domains yields performance very close to that obtained via supervised training on the target domain itself.