Authors: Joshi, Sharad; Saxena, Suraj; Khanna, Nitin
Date accessioned: 2025-08-31
Date available: 2025-08-31
Date issued: 2019-10-01
DOI: 10.1016/j.image.2019.05.020
Scopus ID: 2-s2.0-85067189615
URI: https://d8.irins.org/handle/IITG2025/23170
Title: First steps toward CNN based source classification of document images shared over messaging app
Type: Journal Article
Keywords: Camera identification | Convolutional neural networks (CNN) | Document forensics | Image forensics | Intrinsic signatures | WhatsApp
Full text: https://arxiv.org/pdf/1808.05941
Pages: 32-41
Publication date: October 2019

Abstract: Knowledge of the source smartphone corresponding to a document image can be helpful in a variety of applications, including copyright infringement, ownership attribution, leak identification, and usage restriction. In this work, we investigate a convolutional neural network-based approach to the problem of source smartphone identification for printed text documents that have been captured by smartphone cameras and shared over a messaging platform. The proposed method comprises a fusion technique that allows a single network to learn a model directly from two-channel images fused from native letter images and their denoised versions. In the absence of any publicly available dataset addressing this problem, we introduce a new image dataset consisting of 770 images of documents printed in three different fonts, captured using 22 smartphones, and shared over WhatsApp. A series of experiments is conducted on the newly captured dataset, including an experiment in the presence of an active adversary who might re-scale the native images before sharing them over WhatsApp. In all the experiments, for classification of WhatsApp-processed document images, the proposed method outperforms the baseline methods.
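
The abstract describes feeding a single CNN with two-channel inputs built by fusing each native letter image with its denoised version. The following is a minimal sketch of that fusion idea, not the authors' implementation: the helper names (make_two_channel, TwoChannelCNN), the median-filter denoiser, and the network layout are illustrative assumptions; only the two-channel fusion and the 22-class (22 smartphones) setting come from the record.

    # Sketch only: fuse a grayscale letter crop with a denoised version into a
    # 2-channel tensor and classify it with a small CNN. Denoiser and architecture
    # are placeholders, not the method from the paper.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.ndimage import median_filter

    def make_two_channel(letter_img: np.ndarray) -> torch.Tensor:
        """Stack a native letter image with its denoised version (2 x H x W)."""
        img = letter_img.astype(np.float32) / 255.0
        denoised = median_filter(img, size=3)        # stand-in denoising filter
        fused = np.stack([img, denoised], axis=0)    # channel 0: native, channel 1: denoised
        return torch.from_numpy(fused)

    class TwoChannelCNN(nn.Module):
        """Toy CNN taking the fused 2-channel input and predicting the source phone."""
        def __init__(self, num_classes: int = 22):   # 22 smartphones in the dataset
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = self.features(x)
            return self.classifier(h.flatten(1))

    # Example usage with a dummy 64x64 letter crop:
    # fused = make_two_channel(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
    # logits = TwoChannelCNN()(fused.unsqueeze(0))   # shape: (1, 22)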