Title: MIPE: A Metric Independent Pipeline for Effective Code-Mixed NLG Evaluation
Authors: Garg, Ayush; Kagi, Sammed S.; Srivastava, Vivek; Singh, Mayank
Date accessioned: 2025-08-31
Date available: 2025-08-31
Date issued: 2021-01-01
ISBN: 9781954085886
DOI: 10.26615/978-954-452-056-4_013 (https://doi.org/10.26615/978-954-452-056-4_013)
Scopus ID: 2-s2.0-85132052022
URI: https://d8.irins.org/handle/IITG2025/26388
Type: Conference Paper (Conference Proceeding)
Pages: 123-132
Peer reviewed: true

Abstract: Code-mixing is the phenomenon of mixing words and phrases from two or more languages within a single utterance of speech or text. Owing to its high linguistic diversity, code-mixing presents several challenges for evaluating standard natural language generation (NLG) tasks, and many widely used metrics perform poorly on code-mixed NLG output. To address this challenge, we present MIPE, a metric-independent evaluation pipeline that significantly improves the correlation between evaluation metrics and human judgments on generated code-mixed text. As a use case, we demonstrate the performance of MIPE on machine-generated Hinglish (code-mixed Hindi and English) sentences from the HinGE corpus. The proposed evaluation strategy can be extended to other code-mixed language pairs, NLG tasks, and evaluation metrics with minimal to no effort.