Chhipa, Prakash Chandra; Vashishtha, Gautam; Sai Jithamanyu, Settur Anantha; Saini, Rajkumar; Shah, Mubarak; Liwicki, Marcus
2025-08-31; 2025-08-31; 2025-01-01
ISBN: 9798331320850
Scopus: 2-s2.0-105010233560
https://d8.irins.org/handle/IITG2025/28403

Existing self-supervised adversarial training (self-AT) methods rely on handcrafted adversarial attack strategies for PGD attacks, which fail to adapt to the evolving learning dynamics of the model and do not account for instance-specific characteristics of images. This results in sub-optimal adversarial robustness and limits the alignment between clean and adversarial data distributions. To address this, we propose ASTrA (Adversarial Self-supervised Training with Adaptive-Attacks), a novel framework introducing a learnable, self-supervised attack strategy network that autonomously discovers optimal attack parameters through exploration-exploitation in a single training episode. ASTrA leverages a reward mechanism based on contrastive loss, optimized with REINFORCE, enabling adaptive attack strategies without labeled data or additional hyperparameters. We further introduce a mixed contrastive objective to align the distributions of clean and adversarial examples in representation space. ASTrA achieves state-of-the-art results on CIFAR10, CIFAR100, and STL10 while integrating seamlessly as a plug-and-play module for other self-AT methods. ASTrA scales to larger datasets, demonstrates strong semi-supervised performance, and is resilient to robust overfitting, backed by explainability analysis of the learned attack strategies. The project page with source code and further details is at https://prakashchhipa.github.io/projects/ASTrA.

ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks
Conference Paper, pp. 100735-100757, 2025 (Conference Proceeding)
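The abstract describes a strategy network that picks per-instance PGD attack parameters and is trained with REINFORCE, using the contrastive loss induced by the resulting adversarial views as its reward. The sketch below illustrates that idea in generic PyTorch; it is a minimal illustration, not the authors' implementation. The encoder, the discretized epsilon options, the single-step placeholder perturbation, and all module names and dimensions are assumptions made here for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttackStrategyNet(nn.Module):
    """Hypothetical strategy network: outputs a categorical distribution over
    discretized attack parameters (here, epsilon values) given an image embedding."""
    def __init__(self, embed_dim=128, n_options=5):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, n_options))

    def forward(self, z):
        return torch.distributions.Categorical(logits=self.head(z))

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))                       # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy components standing in for the real encoder and PGD attack.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
strategy = AttackStrategyNet()
opt = torch.optim.Adam(strategy.parameters(), lr=1e-3)
eps_options = torch.tensor([2, 4, 6, 8, 10]) / 255.0        # assumed discretization

images = torch.rand(16, 3, 32, 32)
with torch.no_grad():
    z_clean = encoder(images)

# Sample a per-instance attack parameter from the strategy network.
dist = strategy(z_clean)
action = dist.sample()
eps = eps_options[action].view(-1, 1, 1, 1)

# Placeholder one-step perturbation; the actual method would run a PGD loop here.
adv_images = (images + eps * torch.randn_like(images).sign()).clamp(0, 1)

# Reward: contrastive loss between clean and adversarial embeddings
# (a larger loss is treated as a harder, more informative attack).
with torch.no_grad():
    z_adv = encoder(adv_images)
    reward = nt_xent(z_clean, z_adv)

# REINFORCE update of the strategy network.
policy_loss = -(dist.log_prob(action) * reward).mean()
opt.zero_grad()
policy_loss.backward()
opt.step()

In this toy form the reward is a single batch-level scalar; a per-instance reward or a baseline for variance reduction would be natural refinements, but those details are not specified in the abstract.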