
Depthwise Spatio-Temporal STFT Convolutional Neural Networks for Human Action Recognition

Source
IEEE Transactions on Pattern Analysis and Machine Intelligence
ISSN
0162-8828
Date Issued
2022-09-01
Author(s)
Kumawat, Sudhakar
Verma, Manisha
Nakashima, Yuta
Raman, Shanmuganathan
DOI
10.1109/TPAMI.2021.3076522
Volume
44
Issue
9
Abstract
Conventional 3D convolutional neural networks (CNNs) are computationally expensive, memory intensive, prone to overfitting, and, most importantly, in need of better feature learning capabilities. To address these issues, we propose spatio-temporal short-term Fourier transform (STFT) blocks, a new class of convolutional blocks that can serve as an alternative to the 3D convolutional layer and its variants in 3D CNNs. An STFT block consists of non-trainable convolution layers that capture spatially and/or temporally local Fourier information using an STFT kernel at multiple low-frequency points, followed by a set of trainable linear weights for learning channel correlations. The STFT blocks significantly reduce the space-time complexity of 3D CNNs. In general, they use 3.5 to 4.5 times fewer parameters and incur 1.5 to 1.8 times lower computational cost than state-of-the-art methods. Furthermore, their feature learning capabilities are significantly better than those of the conventional 3D convolutional layer and its variants. Our extensive evaluation on seven action recognition datasets, including Something-Something v1 and v2, Jester, Diving-48, Kinetics-400, UCF101, and HMDB51, demonstrates that STFT-block-based 3D CNNs achieve on-par or even better performance compared to state-of-the-art methods.
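The block structure the abstract describes can be sketched in code. The following is a minimal PyTorch illustration, not the authors' implementation: it assumes a frozen depthwise 3D convolution whose kernels are the real and imaginary parts of STFT basis functions sampled at a few low-frequency points, followed by a trainable 1×1×1 convolution that learns channel correlations. The class name, the choice of frequency points, and the kernel size are all hypothetical.

```python
import math
import torch
import torch.nn as nn

class STFTBlock3d(nn.Module):
    """Illustrative sketch of an STFT block (assumed structure, not the
    paper's exact code): a non-trainable depthwise 3D convolution with
    STFT kernels at low-frequency points, then a trainable pointwise
    convolution for channel correlations."""

    def __init__(self, in_ch, out_ch, kernel_size=3,
                 freqs=((0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 1, 1))):
        super().__init__()
        k = kernel_size
        n_f = len(freqs)
        # Local window coordinates (t, y, x) for the k*k*k neighborhood.
        coords = torch.stack(torch.meshgrid(
            *(torch.arange(k, dtype=torch.float32) for _ in range(3)),
            indexing="ij"), dim=-1)  # shape (k, k, k, 3)
        kernels = []
        for f in freqs:
            # STFT basis exp(-j*2*pi*(u*t + v*y + w*x)/k) at frequency (u,v,w);
            # keep real and imaginary parts as two separate filters.
            phase = -2 * math.pi * (
                coords * torch.tensor(f, dtype=torch.float32)).sum(-1) / k
            kernels.append(torch.cos(phase))
            kernels.append(torch.sin(phase))
        weight = torch.stack(kernels)  # (2*n_f, k, k, k)
        # Depthwise: every input channel is filtered by the same 2*n_f
        # fixed Fourier kernels (no learned parameters here).
        self.stft = nn.Conv3d(in_ch, in_ch * 2 * n_f, k, padding=k // 2,
                              groups=in_ch, bias=False)
        self.stft.weight.data.copy_(
            weight.repeat(in_ch, 1, 1, 1).unsqueeze(1))
        self.stft.weight.requires_grad = False  # non-trainable layer
        # Trainable 1x1x1 conv mixes Fourier responses across channels.
        self.pointwise = nn.Conv3d(in_ch * 2 * n_f, out_ch, 1, bias=False)

    def forward(self, x):  # x: (batch, in_ch, T, H, W)
        return self.pointwise(self.stft(x))
```

Because only the pointwise layer is trainable, the block's learned-parameter count is `out_ch * in_ch * 2 * n_f`, which is where the parameter savings over a full 3D convolution come from.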
Publication link
http://export.arxiv.org/pdf/2007.11365
URI
https://d8.irins.org/handle/IITG2025/25109
Subjects
3D convolutional networks | human action recognition | short-term Fourier transform
IITGN Knowledge Repository Developed and Managed by Library

Built with DSpace-CRIS software - Extension maintained and optimized by 4Science
