Title: Weakly-Supervised Deep Learning for Domain Invariant Sentiment Classification
Authors: Kayal, Pratik; Singh, Mayank; Goyal, Pawan
Date issued: 2019-10-01
Date accessioned: 2025-08-28
Date available: 2025-08-28
URI: https://arxiv.org/abs/1910.13425
URI: https://d8.irins.org/handle/IITG2025/19787
Language: en-US
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL); Information Retrieval (cs.IR); Machine Learning (stat.ML)
Type: e-Print
Handle: 123456789/435

Abstract: Learning a sentiment classification model that adapts well to a target domain different from the source domain is a challenging problem. The majority of existing approaches focus on learning a common representation by leveraging both source and target data during training. In this paper, we introduce a two-stage training procedure that leverages weakly supervised datasets to develop simple lift-and-shift-based predictive models without exposure to the target domain during the training phase. Experimental results show that transfer with weak supervision from a source domain to various target domains yields performance very close to that obtained via supervised training on the target domain itself.